Using Cyber Risk Quantification to Make the Right Risk Decisions

Risk management in day-to-day life
03:19 MIN
Pros, cons, and impact of risk quantification
03:42 MIN
Some cautions to consider
03:36 MIN
Reporting risk quantification to the board
01:57 MIN
Preparing for the unexpected
02:32 MIN
Do you ever feel confident that you know all of your assets?
02:28 MIN
Growing the business using risk quantification
03:00 MIN
Why DORA and NIS 2 matter
03:36 MIN
Building a more resilient organization with risk quantification
02:33 MIN

Chris Clarke: Welcome to GRC & Me, a podcast where we interview governance, risk and compliance thought leaders on hot topics, industry-specific challenges and trends to learn more about their methods, solutions, and just their outlook on the space, and hopefully have a little fun doing it. I'm your host, Chris Clarke. With me today are Daniel Stone and Tim Kelly from Protiviti. Daniel is a director in the technology risk and resilience practice focused on cyber risk quantification. He has spent over 10 years in the technology risk advisory space, specializing in assessing financial and technology risks to organizations and ensuring management has adequate controls to mitigate those risks, while serving a wide variety of clients across various industries. Now, welcome Daniel. Do you mind telling us a little bit about yourself and what your journey has been in GRC?

Daniel Stone: Sure. Thanks, Chris, and happy to be here today. I originally have kind of an audit background, so it's been an interesting move throughout my career between financial and IT audit and then moving over to cybersecurity. The consistent theme throughout all of those has been my risk management and GRC background. I think the audit background lends itself well to cybersecurity risk management and also some of the financial aspects of risk quantification. I was originally an accountant, but as one of my bosses likes to say, I'm a recovering CPA. The whole goal is to use some of the same principles, but use them to manage risk and security in a better way. I spent a lot of years doing cyber risk assessments, trying to find ways to problem solve and better prioritize risk in an area that maybe isn't as mature as financial risk management sometimes.

Chris Clarke: That's awesome. Thanks for sharing your background. Additionally, Tim is an associate director in the Protiviti Chicago office specializing in cyber risk quant and risk management program development. He has experience performing quantitative risk assessments, developing and implementing risk management programs, and performing cybersecurity and strategy assessments. Welcome, Tim. How about you? What brought you to GRC?

Tim Kelly: Yeah, thanks Chris, and thanks for having us on the show today. I started my career in cybersecurity, did everything from technical risk assessments, program development and maturity assessments all the way through to more technical architecture and cloud security sort of work. Where I really started to see an emerging theme across a variety of the clients I worked with is that there were issues taking things from a technical, boots-on-the-ground level up to a thematic, enterprise-wide view. How do we start to make sense of that? I think that's really how I got into program development, because I wanted to figure out how we could build processes that made sense in the context of an organization. Not only how do we scale those, but also how do we create views and value for stakeholders in and outside of security. I think that was the interesting problem that brought me into the GRC space from the more technical security side, and the rest is history.

Chris Clarke: Yeah, that's awesome. I think it's super fascinating, Daniel, you coming from the audit and business side background, and Tim, you coming from the security space, and how both of those different paths can lead to this nexus of the two in cyber risk quant. The decisions you're making and what you're advising your clients on have real impacts on the business and the way they approach those programs. I think that's just a really cool story in the way that goes. Before we jump head-on into risk quant and get to the meaty topics, I'd love to start with some daily risk management. We talk a lot about it in professional practice, but we use risk management in our day-to-day. The example I love to give is I have a young child. My wife and I were terrified of not getting enough sleep. The way we approached and mitigated that risk is we started to go to bed at the same time he did every night, so we go to bed at 8:30 or nine o'clock nowadays. Where that has really helped us mitigate the risk of a bad night's sleep is that if he ever had a bad night, we were already six to eight hours in and felt good when we woke up. It's really turned into more of a strategic advantage for us because now we're getting up at 4:30 or 5:00 and we have this time to ourselves to really plan out the day and get ahead on things. I use that as a joking example of risk management, but I'd love to hear either of your perspectives on how you use risk management in your day-to-day.

Tim Kelly: Sure. I'm happy to jump in on that one. I think maybe one of the more obvious examples is just the insurance products you purchase. I'm recently a homeowner, so we went through that whole process again, looking at the details of homeowners insurance and life insurance and your whole financial picture. Separately, I'm a bit of a finance nerd, so I love crunching the numbers on all that sort of stuff. I do think it's interesting, especially having worked in risk management for nearly a decade, to take that perspective on your own life, your financial life, and also just the peace of mind it brings of being able to weather any type of storm that comes through. Yeah, that's my thought on things.

Chris Clarke: Very cool. Taking the risk transfer approach.

Tim Kelly: Exactly.

Chris Clarke: Daniel? Sorry.

Daniel Stone: Yeah. I mean, there's the classic example of every day you're in traffic and trying to get to work in the morning. Although, I don't know, sometimes that is more or less applicable to folks these days. Just this morning on the way in, it's like, okay, I need to be here by a certain time. I can leave my house as late as X time and hope for the best in Atlanta traffic, which is not always a smart bet to make. Then there are little shortcuts and different routes you can take, but there's always the risk of encountering a school bus or a MARTA bus or construction somewhere on the way. Every morning is a new route, that's for sure.

Chris Clarke: That's awesome. No, I can only imagine Atlanta traffic, as my route is from my bed to a different room. I appreciate you sharing. I know that's just a goofy start to it, but jumping a little bit more now into cyber risk management and quantification. As you both talk about risk quantification and the business impacts of it, what are some pros and cons that folks listening to this should be thinking about as they start down the path of risk quantification, and really the impacts to the business associated with it?

Daniel Stone: Yeah, sure. There's certainly a lot of both. Like with any topic, every risk management methodology has its use cases, its strengths, and some areas where it's maybe not as impactful. I think it's important to look at all of them as tools in the toolbox. Where I think risk quantification excels, and its major pro, is that it's actionable, it's decision focused. I almost like to think of it less as a risk management methodology and more as a decision-making framework. Within that, the real benefit of risk quantification is that it's all about making comparisons and decisions and enabling management to make better calls on what they're going to invest in, how they're going to secure an organization. The main pro to me is that putting things in that framework kind of forces you to make decisions. Because if you think about how we traditionally deal with risk, it's really a check-the-box exercise in some cases. You get this big long report that comes out to management and they say, oh, I've got 300 high risks in the organization, 200 mediums, 100 lows, whatever. Hopefully it goes the other direction in terms of criticality, but it's easy for folks to say, oh, a high risk, I can accept that. It's harder to say, this risk could be up to $20 million a year of loss exposure to my organization, so I'm going to accept that. It kind of forces you to make a decision in some respects. I think that's good in risk management because I don't think we see enough of that sometimes. I think that would be one pro that I would point out there. Curious too, Tim, what maybe you see as another pro.
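
To make the mechanics behind a number like that concrete, here is a minimal FAIR-style Monte Carlo sketch in Python. The frequency and magnitude parameters are invented for illustration, not figures from the episode: loss event frequency is drawn from a Poisson distribution and per-event loss magnitude from a heavy-tailed lognormal.

```python
# Minimal annualized loss exposure (ALE) sketch in the FAIR style.
# Assumed, illustrative inputs: ~2 loss events per year, and a per-event
# loss centered near $250k with a heavy right tail.
import numpy as np

rng = np.random.default_rng(seed=7)
TRIALS = 50_000

events_per_year = rng.poisson(lam=2.0, size=TRIALS)
annual_loss = np.array([
    rng.lognormal(mean=np.log(250_000), sigma=1.0, size=n).sum()
    for n in events_per_year
])

print(f"Mean annualized loss exposure: ${annual_loss.mean():,.0f}")
print(f"95th percentile (a 'bad year'): ${np.percentile(annual_loss, 95):,.0f}")
print(f"P(annual loss > $5M): {(annual_loss > 5_000_000).mean():.1%}")
```

Reporting a distribution of dollar losses, rather than a high/medium/low label, is what makes the "can I accept $20 million a year of exposure?" conversation possible.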

Tim Kelly: Yeah, I think one of the other major benefits that I see is that you're able to compare risks outside of cybersecurity, as we take that up to the ERM level, whether that's making business decisions about do we go into a market, or how do we invest in our technology stack to make sure that we're resilient against cyber threats or otherwise. It's a much broader conversation. When you go into that discussion with, I've got three critical risks on this application, we're talking apples and oranges, and it doesn't add a whole lot of value to that discussion. If we're able to talk in dollars and cents, I think that can be very beneficial. Then in addition to that, as we start to look at actions that are driven out of that, maybe getting back to the earlier example of, can we accept that? Yes, possibly, but also can we insure that out? Is there a product that we're able to purchase or contractual terms that we're able to implement that help mitigate some of that risk? I think it becomes a much clearer path in terms of next steps as to how we respond to this and how we make sure that we're doing what's best for the organization with a broader lens.

Chris Clarke: No, I appreciate it. It makes a lot of sense, and I love the analogy that it becomes another tool in the toolbox, Daniel. I think oftentimes you get all this data in front of you, which is what you need for it, and it can really lead to some analysis paralysis: if you have all this data, what do you do with it? Forcing leaders to make that decision is incredibly impactful to the business. It allows us to make those decisions on the tech stack and where to invest and how to turn that into a strategic advantage, which is super powerful. I'd be interested to hear the flip side of it, where you've seen this maybe go astray. What are some of the cons of risk quant, or maybe not cons, but cautions for people as they go into this?

Daniel Stone: Yeah, I mean, you hit on some of the analysis paralysis piece, Chris. Certainly when we're talking about risk quantification, it's not uncommon for folks to see that availability of data as an opportunity to say, well, let's go quantify everything. Let's put all of this data together, and until we have the exact right answer, we can't do anything with this data. I think we tend to caution folks to think about it as an opportunity to do something with data that's going to get you a better decision than where you were before. You're going to be able to quantify things at a new level, but there are definitely diminishing returns on accuracy. If you're spending all your time investing in your risk quantification program and fine-tuning it to get the exact right answer, there's a cost that comes associated with that. You're almost creating a new risk of spending more time on risk quantification than actually creating value for the organization. There's a cautionary tale there, and I think one of the things that folks need to worry about a little more is just, where am I going to spend the energy on this? How precise do I need my answer to be? As a result, what level of data or analysis within the FAIR (Factor Analysis of Information Risk) model, or whatever type of risk quantification solution you're using, is good enough to get you a good answer? I think that's a real art sometimes, but if you're just starting out with a FAIR program or with risk quantification, you might try to overdo some things and feel like it's not attainable. Whereas maybe you're just not thinking about it the right way in terms of how much investment you're going to put in the program to get what you want out of it. We see that a lot.

Chris Clarke: Ironically, it's quantifying the hours that go into deciding whether or not to quantify everything. Tim, do you have another perspective?

Tim Kelly: Yeah. No, I agree with all of that. We like to use the term "a useful level of precision." Depending on the question you're trying to answer, get to the point where you have a level of precision that allows you to make that decision, and then move forward. I think maybe one of the other things I was thinking about as we were talking is getting to the point where the program's up and running and getting past that initial threshold. There is an upfront lift as you're starting to build out your processes and build that inventory of data. A lot of that, once in place, is more of a maintenance exercise as opposed to a net-new effort. There is an upfront lift, and I think a lot of organizations will get stuck in that initial program build and not necessarily see the full value. Once it's steady state, that's not to say it runs itself, but there's certainly a lower level of effort as it relates to the care and feeding of the program.

Chris Clarke: That makes a ton of sense, and I think that's even where I get a little nervous about things: there's always this stand-up cost or activation energy that's needed from an organization to get over that initial bump. When you're climbing a hill, you don't necessarily know where the top is, but once you're over the top and on that downhill, it's so much easier to maintain and keep going. I like that.

Daniel Stone: Yeah. I think it's free to continue doing what you're doing that might not be working. You don't always see the hidden cost of, are we making bad decisions with the model that we're using right now? Are we missing insight? Are we missing the fact that we could be misprioritizing or not making the right business decisions? It's hard to see that when you've got something that "works" on a daily basis. It's free to keep doing the same thing. Change requires a little bit of investment. I don't know, sometimes you get a lot out of the conversations and the process you go through with risk quantification that adds real value to an organization that I think, for better or worse, you don't always see when you're making that decision of, should we do more of this?

Chris Clarke: I mean, kind of using that framing of there's a hidden cost to this, but then there's real value in it. How have you seen this be effectively communicated upwards, to the people who aren't in the day-to-day? How have you seen risk quant communicated to boards? What types of reports do you think resonate with them, or just communication?

Tim Kelly: Yeah, that's a great question. I think right off the bat, it's important to start with an understanding of where the board's coming from, not only from a technical perspective, but what are their objectives and what's the conversation going on at the board level, so that you can meet them where they are and come with valuable information. Again, getting back to the decision-making framework and providing the information that allows for that effective decision making. I think that's the first thing. Going from there, oftentimes what we see is a board member or a member of leadership will read a headline and then the security organization goes scrambling to react to that particular topic. Let's say ransomware, for instance. As opposed to sending everybody's week into a spin, I think the more effective solution to that is having [inaudible], and essentially allowing the organization to respond with, we've looked at scenarios related to ransomware. Those compare to our top risks in this way, and as a result, we determined that we have the ability to respond to and recover from a ransomware incident within our tolerance. Because of that, we're prioritizing other types of initiatives, and we feel that these are more important because of X, Y, Z factor, you name it. I think that rationalized approach and having the research done upfront helps get away a bit from the emotional side of some of the headlines or hot topics that pop up.

Chris Clarke: Well, I guess even to that point, how do you plan for all that? How are you looking around corners for that reactive piece? I guess, no, sorry, go ahead. I'll pause there. How do you plan for when you see a headline and you react to it? Is it just having the model and the data ready? What types of prep can you do to make sure that you're equipped to answer those questions?

Tim Kelly: Yeah, I think the best case is that you've looked at that risk in the past, but that's not always going to be possible. Ransomware, ideally, is part of your top risk assessment and you've got the data to back that. I think your point is you're not always going to know what is around that corner. I think the best way of looking at that is saying, we've got these top risks. Maybe we do a quick analysis of something new that popped up and then stack that against our existing set of priorities to say, are we still on target? Did this new set of data and these new circumstances that we're now operating within change the way that we need to look at protecting the organization or responding to and recovering from events? I think that's how I look at it. Daniel, I'm curious if you've got any additional thoughts around that.

Daniel Stone: I'll say, outside of your traditional black swan events, let's say a high-impact, low-probability event, risk isn't new. A lot of what we're talking about are zero-day vulnerabilities or different variants of attacks that may come up in the news here and there. In a mature risk management program, when you're working in a risk quantification space, you've generally got a good catalog of: what are my assets? What does an outage look like for this asset? What does a loss of data look like from this asset? If you've invested what you need in a risk quantification program and have a mature process for cataloging those probabilities and loss events and things like that, most of what you can do is anticipate those questions by saying, we've looked at this scenario before. We know generally what this looks like. We're happy to get back to you, Mr. or Mrs. Board Member, and let you know how this specific technology fits within that scenario. We know that at a high level, this is what this risk is to the organization. Having all of that probability management and data management ahead of time, in an operational platform that you can query and just get results from, gives you the ability to do that on the fly.

Chris Clarke: Yeah, that makes a ton of sense. If you just know what your environment is, it's really easy to then assess what the impact to that environment is. One thing that always gets me around that, though, is how do you ever feel confident that you know all of your assets? That feels like a really foundational piece of data that's critical to the risk quant piece, but how do you ever build confidence around that?

Daniel Stone: Well, I think the answer is you should probably never feel confident that you have all of your assets. I think you just spend time on it on a regular basis. I hate to go back to the pros of this methodology, but when you have a mature risk management and risk quantification process in place, it enables a lot of conversations with the business that you may not have had before. Oftentimes risk management's done in a silo: you've got this list from your IT team with all of your servers on it, and you've run some vulnerability analysis against that. Those aren't the kinds of conversations you're having all the time with a risk quantification methodology. You're spending a little more time with folks like BSOs or your product owners to understand how risk impacts them. Oftentimes those discussions go different ways than you'd expect. One of the ways they sometimes go is you learn about a new critical process or asset that the business needs to operate, and you can update that within your analysis. I think the key is that having more of that dialogue with your business leaders gets you the comfort that you're covering what matters to them.

Tim Kelly: Agreed completely. I think the only thing I would add is that it's expected that the business is going to grow and change or reshape what those assets look like over time. That should be taken into account as you're building processes and developing those relationships which allow you to ensure that that's accurately reflected in your analysis. Easier said than done. I think that overall is kind of a base assumption that the business is changing just as much as the threat environment is changing and those variables need to be taken into consideration.

Chris Clarke: That makes a lot of sense. At the very least, it becomes a discovery exercise in some way. This is kind of tangential, I suppose, but in this quantification conversation we've been talking a lot about loss events, about losing something from the business. I think oftentimes there's a negative reaction to compliance and risk because it's focused on what you're losing. It tends to be the stick and not the carrot. You're doing it to avoid a fine, you're doing it to avoid some kind of bad event happening, rather than positioning it as a growth opportunity or strategic endeavor. How have you seen risk quantification be used to instead incentivize the positive side of the business, or help them make strategic decisions that grow the business?

Daniel Stone: Yeah, I think I would go back to an item Tim was mentioning earlier around using it to make business investment decisions and enable strategy. We have done a number of quantitative scenarios or projects where we're focused on: if we want to deploy this new solution or maybe upgrade our applications, where is it going to have the most impact first? That could be in terms of reducing vulnerability risk to the organization, but it could also be in reducing non-productive time. For example, if we're spending a lot of time supporting a system because it's out of date and no one knows how to use it anymore, which I know most folks listening can relate to. Every company's got some of those black holes: this little piece of server that's duct-taped together, that was created in 1990 and has been supported ever since, that no one touches, but it just works. If we can get off those types of solutions where we're having to work on them internally, we can quantify that and enable not just loss avoidance, but also better solutions for a customer. Things like that can be taken into account in a FAIR model. I'm not sure that folks always focus on some of those areas, but certainly productivity improvements, revenue growth, things like that, those can all be modeled as forms of loss in FAIR. From a loss magnitude perspective, you can show differences between scenarios with more or less revenue loss. You can go both directions there.

Tim Kelly: Yeah, maybe one example that I think we've seen at a lot of clients is a cloud migration. I think that exactly aligns with what Daniel was talking about: a legacy environment with a lot of maintenance cost and a lot of wheel spinning. I think that comparison and that future-state projection of what the new environment would look like, and what the risk posture associated with it is, can certainly add to the business justification for those types of investments.

Daniel Stone: I think the decision part that comes in there, just to piggyback off that, is that a lot of companies don't have just one of those things. They've got hundreds or thousands of those old duct-taped-together solutions, and we need to figure out which of those make sense to migrate first. That's a good use case for a decision-based model like cyber risk quantification.
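
As a sketch of that prioritization use case, one could estimate annualized exposure per legacy system and rank the migration candidates. The system names, event frequencies, and loss magnitudes below are hypothetical, invented purely for illustration.

```python
# Rank hypothetical legacy systems by estimated annualized loss exposure.
# All frequencies and loss magnitudes are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=2)
TRIALS = 20_000

def ale(freq: float, median_loss: float) -> float:
    """Mean annualized loss: Poisson event count, lognormal per-event loss."""
    events = rng.poisson(freq, size=TRIALS)
    return float(np.mean([
        rng.lognormal(np.log(median_loss), 0.8, size=n).sum() for n in events
    ]))

legacy_systems = {  # hypothetical inventory: (events/year, median loss $)
    "billing-gateway": (0.8, 400_000),
    "hr-portal":       (1.5, 60_000),
    "file-transfer":   (2.2, 150_000),
}

estimates = {name: ale(f, m) for name, (f, m) in legacy_systems.items()}
for name, exposure in sorted(estimates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~${exposure:,.0f}/year exposure")
```

Sorting by estimated exposure gives a defensible first cut at "which of these do we migrate first," which can then be weighed against migration cost.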

Chris Clarke: Yeah, that's super powerful. I think that ties into what you're saying about those duct-taped servers: there's something to be said for institutional inertia. That's the way we've always done it, it works, why change? There's a lot of upfront work, but equipping leaders to make those decisions that are ultimately going to help them save time, save money, and lower risk is super powerful. I really appreciate that explanation. Pivoting a little bit: we've talked a lot about loss events and quantifying those. Now when we pivot to responding to those events, there's a lot of buzz around operational resilience and preparing for if something bad occurs, how do you as an organization become resilient to that? There are things like DORA (the EU's Digital Operational Resilience Act) and NIS 2 (the second Network and Information Security Directive). Do you all mind giving an overview of those regulations and frameworks and why they matter?

Tim Kelly: Sure. Yeah, happy to. Maybe just start with DORA versus NIS 2. DORA aims to strengthen resilience in the financial sector, so it's more focused on that type of organization, whereas NIS 2 is focused on cybersecurity across a variety of sectors, actually all sectors. The other key detail here is that DORA is a regulation, meaning it's legally binding, it applies to all member states across the EU, and it's also highly prescriptive. Whereas NIS 2 is a directive, meaning that member states have the autonomy to choose how they transpose it into law. Jurisdiction is critical to that discussion, and determining upfront what applies to your organization is very important. We will see more as member states start to define those laws or propose them. I think at a high level that covers it. I think in line with that discussion around regulation are also some of the emerging themes within the space of operational resilience, and one of them is data resilience. At a high level, the way we look at that is essentially starting to establish loss tolerances with the business for how much data could be lost while still maintaining operations and continuing to move forward. There's this idea of deterministic versus non-deterministic recovery types. Typically, we'll look at things from more of a deterministic point of view: rebooting a server takes X number of hours or minutes. Whereas there are things that probably weren't taken into account previously, such as ensuring that a system is free of cyber threats, that there are no advanced persistent threats within the environment, no malware hiding in the shadows on systems. Those types of events have variable timelines. I think the overall concept here, as we're looking at data resilience and how it feeds into the discussion, is that we need to take into account that there are different types of recoveries, that this is going to vary, and that we need to plan for it. Having those candid conversations with business leaders, determining their tolerance, and then also being realistic about the state of events as it relates to recovering from and responding to a cyber event. I know we went through a lot there, but happy to answer any questions if you want to dive deeper into any of those topics.
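
One way to picture the deterministic versus non-deterministic distinction is to simulate total recovery time against an assumed tolerance. Every number in this sketch is an illustrative assumption, not a benchmark.

```python
# Deterministic restore time plus non-deterministic "is it clean?" work,
# compared against an assumed business tolerance. Illustrative values only.
import numpy as np

rng = np.random.default_rng(seed=11)
TRIALS = 50_000

reboot_hours = 4.0  # deterministic: rebuilding/rebooting the system itself
# Non-deterministic: forensics and malware hunting have a wide, skewed range.
verification_hours = rng.lognormal(mean=np.log(12), sigma=0.8, size=TRIALS)

total_recovery = reboot_hours + verification_hours
tolerance_hours = 24.0  # assumed tolerance agreed with the business

print(f"Median recovery: {np.median(total_recovery):.1f} h")
print(f"90th percentile: {np.percentile(total_recovery, 90):.1f} h")
print(f"P(recovery exceeds tolerance): {(total_recovery > tolerance_hours).mean():.1%}")
```

The point of the exercise is the last line: a single "recovery time" number hides the tail risk that the clean-up work blows through the business's tolerance.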

Chris Clarke: Well, the very first thing is, I appreciate you; now I know how to pronounce NIS 2, so that's a good start for me. I guess off of that: there's data resilience, there's the loss tolerance around it. How would a risk quant or FAIR model help you build a more resilient organization, and ultimately comply with or go above that?

Daniel Stone: Yeah, I think one of the pieces of the FAIR model, and CRQ more broadly, is that loss magnitude or impact side of the equation. You've got those business-focused discussions we've talked about, the ones you have with all your business leaders when you're identifying assets or trying to think about how a loss event would actually impact the organization. That gives you a lot of resiliency-focused data. In addition to that, you have the ability to make changes to FAIR scenarios or models based on whether you're going to implement controls or upgrade a system that might be critical to operations. You can see how that's going to affect an overall loss chain and show: if we make this change, we can reduce downtime in this scenario, which ultimately allows us to get back up and running quicker and reduce our financial loss exposure by a million dollars, or whatever number it actually is. You have the ability to make comparisons between hypothetical and current-state paths for how things could play out in an actual loss event. That allows you, again, to make decisions about where you need to invest in resiliency to bring yourself into that tolerance level.
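
A sketch of that current-state versus hypothetical comparison: rerun the same style of simulation with a shorter assumed outage after a resilience investment. The downtime cost and outage lengths here are invented for the example.

```python
# Compare annualized outage exposure before and after an assumed
# resilience investment that shortens outages. Illustrative values only.
import numpy as np

rng = np.random.default_rng(seed=3)
TRIALS = 50_000
COST_PER_HOUR = 40_000  # assumed cost of downtime

def annual_outage_loss(median_outage_hours: float) -> np.ndarray:
    """Outages/year ~ Poisson; outage length ~ lognormal; loss = hours x cost."""
    outages = rng.poisson(lam=1.5, size=TRIALS)
    return np.array([
        rng.lognormal(np.log(median_outage_hours), 0.6, size=n).sum() * COST_PER_HOUR
        for n in outages
    ])

baseline = annual_outage_loss(median_outage_hours=20)  # current state
improved = annual_outage_loss(median_outage_hours=8)   # after the investment

reduction = baseline.mean() - improved.mean()
print(f"Mean exposure reduced by ~${reduction:,.0f}/year; weigh against the investment's cost.")
```

The output is exactly the shape of argument Daniel describes: reduce downtime in this scenario, cut loss exposure by some dollar amount, and compare that against the cost of the upgrade.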

Tim Kelly: Accounting for that uncertainty is critical. Totally agreed, Daniel. I think especially from the op res point of view, oftentimes it's not if this event occurs, it's when it occurs, how do we respond? Whereas when we look at a cyber risk, we're going to take into account the frequency of an event, op res just assumes that it occurs, and then we try to work backwards from that point and say, what can we do to limit the impact? Then, is a worst-case scenario still tolerable under this set of circumstances? It's a different lens from the pure cyber view, but I think the FAIR methodology certainly adds value to that conversation.

Chris Clarke: For sure. That makes a lot of sense in that you can almost prioritize where to focus op res based on the impact that has been identified from quantification or scenario-based modeling. That's incredibly helpful. I'd be interested to hear, in your experience, where have these types of operational resilience efforts and programs had the most impact on the way businesses approach risk or approach risk quant? Has it changed the way they plan out these scenarios or approach risk scenarios?

Tim Kelly: Yeah. No, I see what you're saying. I think the short answer is yes. It's more of an enterprise risk discussion as opposed to a pure cyber discussion. Yeah, I think the conversation's changing. Some industries are ahead of others, and as we talked a little bit about earlier, op res in the financial services space is more evolved than in other places, partially driven by regulation, partially driven by in-house requirements and risk management, for lack of a better term. I'm not sure if that answers your question, but the conversation's certainly evolving weekly, it seems.

Chris Clarke: Yeah, and DORA just came out. It's still on the front lines; we don't yet know what impacts it will have. But it's interesting to see a broader trend where the EU comes out with something, it impacts the financial sector first, and then from there we see these downstream effects of how other industries in other geographies take similar things into account.

Daniel Stone: I'd also point out that some of the concepts behind resiliency and criticality analysis aren't necessarily new. There are some regulatory teeth behind it for financial services with the EU coming out with these, but take healthcare, for example. With risk to patient data, you've always had a focus on maintaining resilient, exact copies of that data, and on doing a hazard and vulnerability analysis for a hospital. You've got NERC CIP in the electric power utilities industry, which requires understanding where your critical components are to be able to deliver consistent power. Critical infrastructure has always had some component of resiliency here. I think where we're starting to see more organizations move is solving for the actual probability of it, and using a little less of that traditional, well, there's a low likelihood and a potentially high impact of this. I think that's where financial services is maybe ahead of the curve with some of these regulatory impacts. I do see that anywhere there's a risk or vulnerability analysis or a supply chain resilience analysis that needs to be done, FAIR can certainly provide a lot of the tools to enable that and do it in a more efficient, defensible way than you might've done historically.

Chris Clarke: Yeah, that makes a ton of sense. I appreciate that perspective. There's just more there, and the financial industry is ahead of it, but it doesn't stop there. It's applicable to everyone, and NIS 2 is giving us that framework in other spaces to prepare for whatever's coming down. I guess similarly, that's the emerging regulation piece of it. I'd be interested in talking a little bit about some emerging technology and how that might impact the op res or FAIR piece of it. In particular, with these changes and with AI, there are risks, but there are benefits. What do you see as some of the main risks around artificial intelligence and how it impacts organizations?

Daniel Stone: Certainly as organizations are adopting artificial intelligence and different emerging technologies, large language models, LLMs, things like that, within their operations, there's a lot of productivity gain they experience, but there are totally new types of attacks. In fact, historically within the cybersecurity space, we don't always think as much as we should about the loss of integrity. Within the CIA triad, we think about confidentiality and availability a whole lot; loss of integrity, maybe less so. When we're talking about AI models, large language models, things like that, loss of integrity becomes a real concern. Those models are making decisions, they're the basis for decisions within the organization, and there are new types of attacks or threats that can impact those models. Two come to mind: someone can intentionally poison a model, consistently feeding it things that make it make decisions using improper data, which is almost what we're talking about when we discuss the benefits of using FAIR versus your legacy risk management decision-making processes; or the model can accidentally make bad decisions based on what it's been trained on. There's certainly the risk of that. We need to understand what AI is being trained on, what decisions it's being used in, how impactful and how good the model needs to be, and how much we need to invest in it when we're making those decisions. I think that's something you can model out with FAIR and CRQ and risk scenarios: what does a loss of integrity of decision making look like in the specific area the model's being used in? That lets you understand and prioritize which models might be most critical to your operations. Then obviously there's a ton of different risks that pop up in the use of AI and things like that. I don't know, Tim, if you have any other thoughts on some key risks that might be worth looking at.

Tim Kelly: Yeah, I think a lot of the recent discussions we've had with clients are not about AI bringing some new risk to the table, but rather about it enhancing existing risks. One example of that would be phishing. An AI algorithm might be able to scrape LinkedIn, understand some of the internal speak of an organization, and write a more tailored email for more targeted phishing campaigns. Can all of that happen today? Absolutely, but would an algorithm help perpetrate that at scale? I think that's really where we see it having an impact. It's not that AI is a new risk, but rather that it's adding threat actor capability. That would be how we'd look at that. Now that said, if AI algorithms are going to have an impact on a variety of risks, can you take the previous version of your assessment, look at how it changed as a result of this new set of factors, and then aggregate that across all of your risks to slice and dice that data and communicate it to various stakeholders? Absolutely. At a high level, I think it's more an enhancement of threat actor capability than anything, at least with what we've modeled today.
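
To make that rerun concrete: take a prior phishing scenario and simulate it again with a higher assumed loss event frequency to reflect AI-scaled campaigns. Both frequencies and the loss magnitude below are illustrative assumptions, not modeled results.

```python
# Rerun an assumed phishing scenario with a higher loss event frequency
# to reflect AI-scaled campaigns. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(seed=5)
TRIALS = 50_000

def phishing_ale(events_per_year: float) -> float:
    """Mean annualized loss for an assumed phishing loss scenario."""
    events = rng.poisson(lam=events_per_year, size=TRIALS)
    return float(np.mean([
        rng.lognormal(np.log(120_000), 0.9, size=n).sum() for n in events
    ]))

print(f"Mean ALE, prior assessment:    ${phishing_ale(1.0):,.0f}")
print(f"Mean ALE, AI-scaled (assumed): ${phishing_ale(2.5):,.0f}")
```

Treating AI as a frequency (capability) multiplier on existing scenarios, rather than as a brand-new risk, keeps the analysis comparable to the prior assessment.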

Chris Clarke: That's so interesting. There's the risk of using AI yourself in the business, where if the data going into a model is bad, then you're almost magnifying the bad-data risk associated with it, which is fascinating. Then also, we talk all the time about how threat actors are getting smarter, and AI is such an enabling capability for that. I guess the question then becomes, what's the way for AI to make risk managers and risk decision makers smarter on the flip side, to counter that bad use of AI? Are there benefits to using it in risk management?

Daniel Stone: Of course. It's that same decision capability, again, if you develop a model that has good capabilities and is trained on the right data. Security tools have been doing this for years now with machine learning, using it to identify anomalous behavior within the organization and identify unfavorable trends so you can put a stop to them. AI can react a lot quicker and see patterns in things that humans can't always see, because it can spend a lot of cycles looking at different hypotheticals. That enables organizations to identify things that could be risks more quickly. It helps create organizations that aren't just recovering from threats, but actually learning from them and being better enabled to respond to a threat in the future. I think that's, to me, the main benefit of AI, as well as, hopefully, the ability someday to have it do some of this risk quantification and FAIR analysis at scale for us. We could ask it questions like, what's my risk related to this scenario, versus having to go quantify that ourselves. I think that's probably a few years off before we can do it at real scale, but it helps make a lot of what we're talking about here more achievable for organizations when you can plug some of that in, understand the model, and really get valuable answers out of it to make quick decisions. Even, again, someday maybe let the model make some decisions for you, but certainly not anytime soon.

Chris Clarke: Those were the main topics I was hoping to cover today. One thing we like to end on is a little segment called Risk or That, which is a bit of a "would you rather" around risk topics. Starting with a fun one: we talked about DORA, which is not only a new regulation but also shares a name with a famous explorer in the children's cartoon space. Thinking about famous explorers, in your opinion, who's the riskier explorer, Indiana Jones or Captain Kirk? Daniel, you can start with this one.

Daniel Stone: Okay. No, it's a good question. I think some points have to go to Captain Kirk, just because space exploration is inherently a little riskier, I'd say, than on-the-ground exploration here. I will say Indiana Jones comes across some supernatural stuff here and there that you can't always anticipate, and they've both nearly died many times. Unfortunately, they're both in a very risky business, but I'd have to give it to Kirk just for the space angle, exploring the beyond. Yeah. Tim, what are your thoughts?

Tim Kelly: Yeah, I think the way I look at it is that Indy's dealing with more traditional risks: guns, boulders, poisonous darts, things that we know, although his tools are limited and his mitigation strategies are limited. Whereas Kirk is dealing with more emerging risks, but he has technology at his side to help combat some of that. Maybe it's a wash. I don't know. We'd have to do the analysis.

Chris Clarke: No, that's a good point. I hadn't thought about it through the lens of what resources they have at their disposal. Yeah, Kirk's got Starfleet, but Indy's got a much smaller scope to manage in his own way, so I appreciate that. Similarly, and more aligned to what we talked about before, there's this concept of where cyber risk originates. In your opinion, does cyber risk tend to originate more from within your organization, or from factors external to your organization? Tim, you can go first on this one.

Tim Kelly: That's a good question. If we're just talking about actors, I think it exists more externally. If we just look at the data, there are more external actors than there are insider threats. That said, I think maybe what your question's getting at is, is an organization exposing themselves unnecessarily, which makes them a target for external actors? It's a good question. If I had to just make a decision, I would say more externally, but I could see a case either way.

Daniel Stone: Yeah, I think the framing of the question, Tim, that you set up there is exactly the right way to approach this. We can statistically say 80-plus percent of threats come from outside the organization, but think about it in the sense of: who put a system with all their customers' data on it out web-facing, et cetera, in the first place for an attacker to be able to take advantage of? That's an internal decision that was made, not maliciously. Whether that's a risk-versus-benefit decision, management's always making decisions, like, is this risk acceptable? You could conceivably argue that the risk originally comes from an internal decision or action, but it's an interesting chicken-or-the-egg question. Which came first, the internal decision or the attacker?

Chris Clarke: Yeah, go ahead. Sorry, Tim.

Tim Kelly: Oh no, I was just going to say, if you look at it from the realm of control, what levers can you pull to secure the organization? There are certain factors that you can influence, but then at the same time, there are nation states and actors with a level of sophistication that you might not be able to defend against, at least not entirely. I like the question, it's thought-provoking, but I don't know if I've got a full answer for you.

Chris Clarke: It's probably never going to be purely one way or the other. You're not going to have someone purely internally just committing cyber crime; I mean, there are, a hundred percent, examples of that, but similarly, you're probably never going to have just brute force attacks from outside being the only thing that works. There's going to be some give and take, where attackers, to call back to the AI and phishing emails, are getting smarter. Are organizations keeping pace in training and enablement to respond and react to those? Because everyone in some way could have that kind of impact in giving access to that external [inaudible]. I guess I don't have a strong opinion on it either, but it's just fascinating how those two play together. Thank you for sharing that. The last one here: when we think about the risk industry and the GRC industry, we talked about emerging technologies, but similarly we're talking about regulations on emerging technologies. There are conversations around how to responsibly regulate AI. It's interesting: if we talk about the GRC space, which of those two do you think is going to have more of an impact? Is it going to be the way we use the technology within GRC, or is it going to be the need to respond to and comply with the regulations around it?

Daniel Stone: Yeah, I guess the way I would think about this: I think people are going to feel most of the day-to-day impact from the regulations. Day-to-day, as a GRC professional, you've got to meet and comply with the letter of the law in those cases, and that's what your GRC program is designed to help do. But I would make the theoretical argument that the emerging technologies themselves have the bigger impact. I mean, the regulations, whatever your opinion of them may be, don't come out of a vacuum. They come from the fact that there are real societal impacts from emerging technologies, and there's a reason why those regulations are being written, most of the time, let's say. There's a reason why financial services institutions, for example, have a lot of resiliency concerns around them. That's because it's really important that folks have access to their digital currency and that capital can flow freely throughout society. That's extremely important. Same from a healthcare perspective. The fact that we're digitizing so much of that is the reason we need this level of scrutiny over those operations. Because if there's an integrity question related to that, that's just not something that existed before, other than what we dealt with in financial audit. Counting the dollars, making sure they're there in the bank. We don't do that anymore, but we've got to make sure that the ones and zeros all line up. It's the same thing, just different technology behind it, in my opinion. The emerging technology is what causes the need for that additional level of scrutiny.

Tim Kelly: Yeah, I agree with all that. I think when you say impact, there's kind of two things that come to mind. There is the impact of this emerging technology on the risk posture of the organization. I think that's the clear winner. The risk is greater as a result of the emerging technology, but at the same time, the organizational impact may be greater from the regulation because it's changing the processes, the way that the organization responds or is required to operate within a certain market. I think if we define impact in those kind of two different ways, that's how I'd look at it. Maybe two winners for two different contests.

Chris Clarke: That makes sense. Yeah, I do think about it in the sense of there's probably the first line of defense of the business is the one adopting that technology, and then the second line needs to figure out how to comply with these regulations to keep the first line out of trouble. Then the third line's got to figure out how to check that the second line's doing it too. No, I appreciate the perspective there. Those were all the questions I had. Any parting thoughts you'd like to leave with the listeners?

Daniel Stone: No, I think this was a good conversation. I've enjoyed having it with the team, and I think there's a lot more to come in this space. Obviously with emerging technologies, risk is always changing. It's one of the things that keeps us interested and moving forward. Stay tuned in the risk quantification space; there's a lot of automation and emerging technology that folks are looking to in order to make these programs more actionable. I'm excited to see the growth of it over time.

Tim Kelly: Yeah, I think the only thing I would add is I would just reiterate the fact that risk management should be used as a tool. It should help drive a business. It should help make decisions and be effective and shouldn't necessarily be a tollgate or a stick, but rather a tool that helps enable. I think with that kind of reframing, it can change the way that an organization reacts and responds to those types of discussions.

Chris Clarke: Well, I appreciate that and I know that I'm going to take away and start thinking of risk quantification as a way to drive decisions and drive actions with our customers and our clients. I appreciate y'all sharing and for coming on the show.

Tim Kelly: Thanks for having us, Chris.

DESCRIPTION

Cybersecurity programs involve lots of moving parts, and they only grow more complex over time as technology becomes more advanced and cyber threats become more numerous and sophisticated. Cyber risk quantification can be a crucial tool for keeping up with shifting cybersecurity landscapes. On this episode of GRC & Me, Chris Clarke is joined by Protiviti’s Daniel Stone, Director, and Tim Kelly, Associate Director, to discuss how cyber risk quantification can lead to better risk decision-making, how to beat analysis paralysis when you’ve got reams of risk data in front of you, and the best ways to use risk quantification to reduce reactivity and improve communication across your organization.