Michaela Iorga: Welcome to the Open Security Controls Assessment Language workshop, the tenth in the series. I'm your host today, Michaela Iorga, and I also serve as the OSCAL Strategic Director. Our guests today come from the Google Cloud CISO office: Vikram Khare, Director of Continuous Assurance Engineering, and Valentin Mihai, technical lead. They're going to share with us OSCAL adoption for continuous assurance and beyond. So with that, help me welcome Vikram and Valentin. The microphone is yours.

Vikram Khare: All right, everyone. The topic we're going to cover today is OSCAL adoption for continuous assurance and beyond. On the agenda: how we're moving toward continuous assurance; some of the adoption challenges, specifically around data alignment; a process and systems overview; and a bit more on our future plans, what we aspire to do if all of this works out. In terms of introductions, as Michaela mentioned, I'm Vikram Khare. I work for Google Cloud in the CISO office, and I'm the Director of Continuous Assurance Engineering. You'll also be hearing from Val Mihai, who's our technical lead for continuous assurance.

Why do we care about OSCAL, really?
The case for OSCAL, for us, is not that we want to go off and adopt an emerging standard; it's that we really want to enable continuous assurance. For us, OSCAL is a very robust way of exchanging risk and controls data between different systems, and potentially with our customers and even regulators and partners. We find it's a very comprehensive taxonomy for GRC systems and for continuous controls monitoring platforms. We're also interested in the tooling ecosystem being built around OSCAL; we think this could really help drive interoperability and standardization, so the fact that we've seen vulnerability management systems and GRC systems come out in support of OSCAL is a very positive sign for us. And most importantly, it's a standardized way of conducting security and compliance audits. If you operate a regulated environment, you'll be dealing with third-party auditors, regulatory agencies, and potentially even your customers in shared audits, and anything that standardizes how you present data to those auditors, to the people requesting the audits, lets you begin automation. If you've ever worked on an engineering project, you know that you have to have a good set of requirements and a good idea of what can be automated before you can really do anything like that.
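To make the "machine-readable exchange" idea concrete, here is a minimal sketch of assembling an OSCAL-style catalog fragment. The field names mirror the public NIST OSCAL catalog model, but the group and control content ("mp", "av-1") is invented for illustration; this is not the speakers' production tooling.

```python
import json
import uuid

# Minimal OSCAL-style catalog fragment (illustrative only).
# Field names follow the OSCAL catalog model; the control itself
# ("av-1") is a made-up example, not a real catalog entry.
catalog = {
    "catalog": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "Example Malware Protection Catalog",
            "version": "0.1",
            "oscal-version": "1.1.2",
        },
        "groups": [
            {
                "id": "mp",
                "title": "Malware Protection",
                "controls": [
                    {
                        "id": "av-1",
                        "title": "The CSP has an antivirus program",
                        "parts": [
                            {
                                "name": "statement",
                                "prose": "AV scans run on all production systems.",
                            }
                        ],
                    }
                ],
            }
        ],
    }
}

print(json.dumps(catalog, indent=2))
```

Because the format is plain JSON, the same document can be emitted by a GRC system, validated against the OSCAL schemas, and consumed by an auditor's tooling without bespoke parsing, which is what makes the exchange between systems, customers, and regulators tractable.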
It's also important for us to talk about what we mean by continuous assurance. To us, continuous assurance is an umbrella term for a lot of industry buzzwords. If you hear about things like compliance, privacy, or security by design, what those things are doing is helping you build continuous assurance. If you hear terms like policy as code: again, a well-constructed policy is aligned to a set of controls that are routinely tested in the organization, and ideally those controls are running in an automated manner. A more tangible idea of what all these buzzwords mean to us is that for every control, you have a set of assurance activities that are fully automated, and you also have real-time monitoring of the controls based on objectively defined metrics. The last thing we want to talk about is that we want to treat compliance failures like system downtime. A lot of times, in regulated environments, there's a level of subjectivity: do you meet a requirement or not? A failure in a compliance obligation, while important, may not be treated with the same urgency as a security breach or a server going down.
We want to shift the mindset such that any time we have a compliance violation, it's treated with the same sense of urgency that a server outage would be. So what challenges have we run into with the adoption of OSCAL and moving toward continuous assurance? Among the minor challenges is the timeline: there are public sector requirements to support OSCAL-style assessments, and that requires engineering work that we're doing. It also requires us to rethink how we catalog controls, so the onboarding of controls has to be rethought, and we need to think through how we do data change management on our controls. The other minor challenges are that OSCAL is an emerging standard, so it's still in the process of being developed and finalized, and we also have to standardize a lot of our taxonomy and rethink some of the data models we have internally. What major challenges have we run into? Just because of the size and scale we operate at, there are third-party software challenges: even though there's an ecosystem of tools being developed around OSCAL, it's not necessarily the case that we would adopt them. And organization of the data is very challenging for us; we'll get into what we're doing there.
And lastly, any sort of technical debt needs to be done away with. If you're doing continuous monitoring, you have to deal with gaps as soon as you find them. Typically a control might be tested on a biannual or annual basis; if you move toward continuous monitoring, you're really increasing the frequency. Ideally it would be real time, but most people in the industry would say a control is continuously monitored when it's tested at least monthly. Once you do that, you'll find gaps more quickly and you'll have to get them remediated more quickly.

So how are we moving toward continuous assurance? The first phase of all this was completing a proof of concept: we're basically using templates and scripts to see if we can generate OSCAL-style assessments. We're now moving into what we call the OSCAL builder phase of an MVP, where it'll be more UI- and web-driven. We're also going through a very extensive data prepopulation exercise in our GRC systems, making sure that the asset inventory and the vulnerability management information is all centralized correctly, and we're thinking through usability enhancements. Finally, phase four will be maturing the whole process. Really, at phase three, a lot of the data collection will be done in a somewhat manual manner, some of it automated and some of it manual. With phase four, we want a two-way integration with our GRC system, where most of the data collection that needs to be done is fully automated, including how we externalize the data to auditors.

In terms of aligning the data, we have to think through the entire data lifecycle, and that begins with how we capture our requirements. Later in the slides, you'll see Val walk through the overall end-to-end process. We'll talk about how we get internal compliance requirements and external compliance requirements. The external compliance requirements are all about regulatory decomposition, and with OSCAL you have a way of generating machine-readable formats for different regulations, not just FedRAMP. So we have to start thinking about how we're managing our security and compliance requirements at the intake level, and that begins with the regulatory decomposition. We needed to come up with a more granular structure for defining controls.
We need a way to draw a straight line from the regulatory requirements we have, to how the controls are actually implemented, and then to the specific metrics for measuring control performance. You'll see how we think through control metrics and the aggregation process we go through. Another big part of this is refining the asset model: with the adoption of OSCAL, we have to get much more granular and detailed about what we're doing, so establishing an asset taxonomy that takes into account tooling, automation, and everything else has become very important for us. And then the last piece is that we're aggregating data from disparate sources. The OSCAL definition of a control requires a higher level of granularity, so internally we've rewritten our data models, and we're also looking at data we've been keeping in external resources that we need to consolidate into the GRC system. Probably one of the most important lessons we've learned around OSCAL is that when you think about its adoption, you really have to think about getting it adopted into your GRC system first. So with that said, how does this actually look?
When we get a specific control, for example a control that simply states that a cloud service provider has an AV program, an antivirus program, we'll look at it, and we may have supporting controls defined in our GRC system: that AV scans run daily in prod, that all AV findings are resolved in 24 hours, and that all incoming data into a data center is scanned on an annual basis. Now, this is a hypothetical; this isn't how we actually run things. But let's say that breaks down into two different data centers within the US. We would then look at the control implementations: as you shift to the right on the slide, you'll see the blue boxes with the control implementation, with details like, in North America, the data center CSP is running scans using certain software on all production systems, and where the AV scan results are aggregated. If you go down, you'll see another set of blue boxes around cloud-CA, and basically what we're capturing there is that we could be running a different AV. So what we have here is a breakdown of the actual requirement we have to meet and how we've aligned it internally according to OSCAL.
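The hypothetical AV breakdown above could be modeled with a structure like the following sketch. The class names, regions, and control statements are ours, mirroring the slide's example; they are not actual Google systems or controls.

```python
from dataclasses import dataclass, field

# Hypothetical data model mirroring the slide: one top-level
# requirement, decomposed into supporting controls, each realized
# by per-data-center implementations (possibly different AV software).
@dataclass
class Implementation:
    data_center: str
    description: str

@dataclass
class SupportingControl:
    statement: str
    implementations: list[Implementation] = field(default_factory=list)

@dataclass
class Requirement:
    statement: str
    supporting_controls: list[SupportingControl] = field(default_factory=list)

av_requirement = Requirement(
    statement="The CSP has an antivirus program",
    supporting_controls=[
        SupportingControl(
            statement="AV scans run daily in prod",
            implementations=[
                Implementation("us-east", "Vendor X scans on all prod systems"),
                Implementation("cloud-CA", "A different AV on all prod systems"),
            ],
        ),
        SupportingControl(statement="All AV findings are resolved in 24 hours"),
    ],
)

# The "straight line" from requirement to implementation is a tree walk.
for sc in av_requirement.supporting_controls:
    for impl in sc.implementations:
        print(av_requirement.statement, "->", sc.statement, "->", impl.data_center)
```

The point of this shape is exactly the granularity argument in the talk: each implementation is addressable on its own, so automation and monitoring can target the right AV system in the right data center.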
One of the values of all this is that with this level of granularity, we can actually do the automation. We know, and have it clearly documented, that there are different antivirus systems in different data centers, and that helps us make sure we're doing the automation correctly and looking at the continuous controls monitoring correctly. In terms of how we can measure continuous assurance: on the left is that initial requirement, the CSP has an AV program, and it gets an aggregate measurement of red, yellow, or green. Red means the control is failing, yellow means it's in a warning state, and green means it's working. How is that red, yellow, or green determined? It's an aggregate of what you see on the right, the different CCM metrics. These could be the percentage of scans being completed, the time it takes to complete all the vulnerability remediations, and whether all incoming data is scanned. We can assign weights to them and figure out exactly what state the control is functioning in.
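A weighted red/yellow/green roll-up like the one described could be sketched as follows. The metric names, weights, and thresholds here are invented for illustration, not the actual values used.

```python
# Each CCM metric reports a score in [0, 1] (1.0 = fully passing)
# and carries a weight. The weighted average maps to red/yellow/green
# via thresholds; weights and thresholds are assumed example values.
def aggregate_status(metrics, green_at=0.95, yellow_at=0.80):
    total_weight = sum(weight for _, weight in metrics.values())
    score = sum(value * weight for value, weight in metrics.values()) / total_weight
    if score >= green_at:
        return "green", score
    if score >= yellow_at:
        return "yellow", score
    return "red", score

ccm_metrics = {
    "pct_scans_completed":    (0.98, 0.5),  # (score, weight)
    "remediation_within_sla": (0.90, 0.3),
    "incoming_data_scanned":  (1.00, 0.2),
}

status, score = aggregate_status(ccm_metrics)
print(status, round(score, 2))
```

A single failing metric with a large weight can drag the aggregate below a threshold, which is how the per-metric signals on the right of the slide roll up into one control-level status on the left.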
And with that, I'm going to hand this over to Val to talk about our high-level process and take you through the rest of the presentation.

Val Mihai: Thanks, Vikram. When we look at our whole process, we've got it broken down into three key areas: the intake process, the cataloging and onboarding process, and the continuous assurance phase. Fundamentally, during the intake process the focus is on consuming various data sources, such as external compliance reviews, internal compliance reviews, our risk assessment methodology, external regulations, best practices, external standards, and so on, as well as our own control maturity and control lifecycle process. All of those things contribute to something called a control gap, meaning that those requirements fundamentally don't align 100% with the control catalog and the current capabilities that we're measuring and assessing. As part of that control gap definition, we then pivot into the creation of a control, so we go through the whole control definition process. As Vikram highlighted on the previous slides, we have internal best practices for the level of granularity and how those controls need to be defined.
There's this concept that the control needs to be granular enough that, from the control objective all the way through to the actual metrics, the assessments, the POA&Ms, and the remediations, it's all directly linear: you can see the connection between them. A lot of this is fundamentally driven by scope. Essentially, the scope defines the control implementations, impacts the control definitions, and hands that over to the control creation and modification process. Once the controls have been identified, we look at a couple of different options. One: is this a control with direct tooling already in place? In that case, we can go directly to onboarding the control and start to observe and measure it. In other scenarios, we have two diverging paths: either we don't have definitions for those measurements, or there's an additional requirement, because it's a manual process, that assurance automation be inserted; in other words, making the process observable in order to generate the metrics that drive that automation.
Once we've gone through this whole phase of intake, identifying the gaps, doing the control definition, going through the creation process, onboarding the metrics, and getting into observability, we now have a control that's ready to be consumed. That consumption diverges into two different areas. One: we can start to externalize it to create audit artifacts, for example the OSCAL package, because as part of control onboarding we've had to set the control objectives, identify the systems and the scope, and understand the implementing actions, which are essentially the fundamental building blocks required by OSCAL. Two: all of that data, and the metadata Vikram was highlighting earlier, enables us to turn on continuous monitoring, because we have source systems, target systems, in-scope assets, all of these elements feeding into it. Now we can simply apply thresholds for things like alerting and notifications, identify responsible parties, and really move toward what we call control reliability engineering: the same type of concept, where we look at control failures.
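The threshold-and-alerting step just described could look something like this sketch. The control IDs, owners, and threshold values are assumptions for illustration, not real routing rules.

```python
# Treat a control like a monitored service: compare each control's
# current metric value against its threshold and notify the
# responsible party on breach. All names and numbers are invented.
controls = [
    {"id": "av-scan-daily", "owner": "infra-oncall", "value": 0.97, "threshold": 0.95},
    {"id": "findings-24h",  "owner": "secops",       "value": 0.88, "threshold": 0.95},
]

def evaluate(controls):
    alerts = []
    for c in controls:
        if c["value"] < c["threshold"]:
            # In a real system this would open an incident with the
            # same urgency as a service outage, per the talk's framing.
            alerts.append((c["id"], c["owner"]))
    return alerts

for control_id, owner in evaluate(controls):
    print(f"ALERT: control {control_id} below threshold; notify {owner}")
```

This is the "control reliability engineering" framing in miniature: a breached threshold is handled through the same alert-and-own pipeline as any service outage.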
We can start to look at them the same way we would any other sort of failure, outage, or event that happens within our services. In order to build the back-end infrastructure, we really focused on categorizing the systems and understanding how these different elements need to come together. When we look at it, we have five buckets, with four core ones that are almost unique to the core process. Obviously, we have a whole lot of inputs: various control signals, policies, risk data, third-party services, customer commitments, contracts, external regulators, industry best practices, and our own internal practices, best practices, and policies. All of those things can be considered inputs. All of that gets aggregated into our systems of record, which act as the primary aggregation points so that we can establish the right relationship model between all of these various inputs, which can carry different types of information, and correlate and align them to the OSCAL model. Some of these could be control implementations, some could be requirements that we then map to controls, some could be related to components and systems, and others could actually be the metrics themselves.
So we gather all this information into these systems of record, which are fundamentally the heart and soul of the way we perform continuous controls monitoring, and from there the data can either be consumed in data analytics or piped directly out to the customer. On the analytics side, we're really focused on things like risk assessments, being able to do our own risk assessments and risk quantification, any sort of upstream alerting and monitoring, and control-to-threat mapping and modeling; all of these things are part of that data analytics ecosystem. Basically: how are we joining this data, how are we contextualizing it, and so on. Other data, taken directly from the systems of record or from these analytics, is then used for end-user outputs. Those end-user outputs could be things like generating an SSP, a dashboard, an audit report, or shared assessment plans, as well as outputs for our own internal consumption: our risk quantification, compliance assessments, those types of things. And because all of this data has a significant compliance and legal angle to it, we do have the concept of an evidence repository. This is partially to meet data retention requirements.
But it's also to help us manage derivative works, the idea being that we want to get to a place where anything that's shared externally can be traced back to the bits and bytes internally that helped us make those assessments and assertions. So this is the vision of the ecosystem that we're in the process of building, developing, and putting together. As we've been working through this and developing this ecosystem of tools, a couple of areas of opportunity have come up. The immediate one is audit toil reduction. We know FedRAMP is very interested in OSCAL, and we're actively exploring it, but we want to see this adoption go beyond the US public sector, and also to see OSCAL expand to cover CCM, really getting into the metrics side of it: not only sharing the assessment plans and that aspect of it, but seeing if there's a way for us to shorten the actual audit and evidence gathering cycle. And obviously, the more regulatory bodies that adopt these CCM metrics and this whole process, the more we can scale, and the more it can be refined and perfected.
Additionally, more participation and collaboration from regulators and standards bodies would help us establish a robust data sharing architecture. We have ideas and mechanisms for how we could do things, but they're not industry standards, and understanding how others intend to consume the data, and having them help us align with what they intend to do, would be of great interest to us. The other aspect is the idea of gap analysis toil reduction. As I mentioned during the intake piece, as we decompose regulations and requirements, we end up building a pretty robust catalog of controls. Currently that process is relatively manual, and historically it's been relatively slow. What we're looking to do is find mechanisms to introduce AI and other technologies to accelerate it: basically, feed it a PDF, have it map the content to a controls catalog, and get back some sort of rapid assessment of how we align with specific frameworks and regulations.
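As a toy illustration of that "feed it a document, map it to the catalog" idea, here is a naive keyword-overlap matcher. A real pipeline would use document extraction plus an ML model; everything here, including the catalog entries, is invented for the sketch.

```python
# Naive sketch: score each catalog control against a requirement
# sentence by word overlap and return the best candidate mapping.
# A production approach would use NLP/LLM-based matching instead.
catalog = {
    "av-1": "antivirus scans run on production systems",
    "ac-2": "user accounts are reviewed and disabled when inactive",
    "ir-4": "security incidents are handled and reported",
}

def best_match(requirement: str) -> str:
    req_words = set(requirement.lower().split())
    def overlap(control_id: str) -> int:
        return len(req_words & set(catalog[control_id].lower().split()))
    return max(catalog, key=overlap)

requirement = "The provider must run antivirus scans on all production systems"
print(best_match(requirement))  # av-1 for this toy input
```

Even this crude version shows the shape of the rapid-assessment output: decomposed requirement text on one side, candidate catalog controls on the other, with a score that a human reviewer can confirm or reject.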
That's one way to get a reduction on that front. The other way is to see if there are opportunities for us to facilitate this by providing tools and technologies to regulators and others that would allow them to convert existing regulations into machine-readable formats, or, if we provide the technologies that do that, to have them review and sign off. This can help drive some of that standardization in catalogs, like we're seeing today with FedRAMP, but it's not something we're seeing universally. That's a big area where we see a lot of opportunity to drive standardization and minimize the confusion associated with differing interpretations of a regulation. The other thing is the integration of controls with broader IT management and security operations. Right now we have all this very valuable information that's used to drive these compliance requirements and to measure our ability to meet security standards. What's interesting to us is the ability to map these controls to security incidents, to understand how controls can potentially help mitigate them, as well as using incidents in reverse to assess the effectiveness of controls.
The idea is that you design something, you implement it, and you're constantly measuring it, but then you go back and say: okay, it's been effective in mitigating these areas, these threats, these security events, preventing them and reducing risk; but when hasn't it been effective, and how can it be matured, adapted, and tweaked to deliver more value? That goes back to the initial cycle on that slide, where we have the maturity box near the bottom. So those are the areas of future opportunity. Again, we're still connecting the dots and building out the infrastructure, but as we adopt these things, we're noticing a lot of these opportunity areas arising, prompting us to explore how we can leverage this framework, and fundamentally what this framework helps us surface out of our own internal practices, and apply it to a broader set of technologies and disciplines. If I'm not mistaken, this may be the last slide. That's it. All right. Thank you, everyone, for taking some time out to hear our presentation.