Shifting Gears To Quantify Risk with Netflix’s Tony Martin-Vegue

Chris Clarke: Hi, welcome to GRC & Me, a podcast where we interview governance, risk, and compliance thought leaders on hot topics, industry-specific challenges, and trends, to learn more about their methods, solutions, and outlook in the space, and hopefully have a little fun doing it. I'm your host, Chris Clarke. With me today is Tony Martin-Vegue, staff information security risk engineer at Netflix, where he's responsible for leading its information security and technology risk management strategy and vision. He's also the co-chair of the San Francisco chapter of the FAIR Institute, and a professional speaker on the topics of risk management, cyber risk quantification, information security, and decision science. Welcome, Tony.

Tony Martin-Vegue: Hi Chris.

Chris Clarke: Could you tell us more about yourself, and what's your journey been like in GRC?

Tony Martin-Vegue: Yes, I'd be happy to. Thank you for that warm welcome, and thank you for having me on the podcast. It's a pleasure to be here. I really enjoyed the podcast I was on with you all, I think it was last year. Time sure flies, but I'm thrilled to be back, so thank you again. Okay. My journey in GRC. It's been a long journey. I started out in IT, like a lot of people in the GRC space, started out many years ago doing just about everything you could possibly imagine that one could do in IT. Everything from help desk, running cabling through attics and getting covered in fiberglass, setting up servers, learning how to code, building websites. All of that and everything in between. And I decided that I really needed a career change. I just wasn't really feeling personally fulfilled with that type of IT work. So I wanted a career change, but I didn't want to throw away all that knowledge, all the certifications that I had earned. So I decided information security would be a really good segue into a career change. I could still use the old knowledge that I had, but hopefully transition into something that was more fulfilling. I started moving into information security, and with that, I found out quickly you really have to find a focus. Do you want to be a pen tester, on a red team or blue team, or do you want to work on business continuity? Do you want to be on the incident response team? There's a lot of focus areas. And I chose risk management. And the reason why I chose risk management, it's really two reasons. I was lucky to have a really good mentor early on. He was brilliant in the field of risk management, cyber risk management, and technology risk management. Taught me everything I needed to know back when I was starting out, and he's the first person that connected something for me. I have a degree in economics, and he connected that for me. He's like, "Listen, it's the same thing. Calculating the future interest rate on a bond, or the future return on an investment, that's the same thing as a risk assessment. You're just figuring out your return or your losses, and then determining the probability of occurrence, and then filling in the blanks from there." So he really connected that for me, that I could reuse that in this new GRC space that I was in. And then honestly, the rest is history. Here I am.

Chris Clarke: That's pretty awesome. That's a super cool path. I mean, I'd even be interested in how you went from economics into IT along that path.

Tony Martin-Vegue: It was the early 2000s, and if you had a pulse, and you knew what a computer was, you could get a job in IT.

Chris Clarke: Fascinating.

Tony Martin-Vegue: And you can make a lot of money doing it.

Chris Clarke: Yeah. I mean, I appreciate you sharing. One thing that has come up pretty frequently is the power of a good mentor in people's careers. I guess, what's the best piece of advice you've ever gotten on your career from your mentor or from others? What advice would you give to people starting out in risk management?

Tony Martin-Vegue: Network. Network. It's so simple. We hear it all the time, and I feel like there's nothing more important than that. If I look back at my career and think of all the crappy jobs that I've had with horrible bosses, or bad work-life balance, or just stuff I didn't enjoy, or it wasn't a good match for me personally or my skills, those were all jobs that I got off of LinkedIn, or job boards, or just stuff like that. And all the best jobs I've ever had have all been through networking. People that knew me, people that I knew, people that were able to take my skills, my personality, what I bring to the table, and match it up with people that are looking for that. And so many doors have opened up for me with networking. Not only jobs, but also friendships, opportunities to speak at conferences and share my knowledge. Both sides of mentor-mentee relationships, too. I've been able to be a mentor to people entering the field, which has been incredibly enriching for me. But I think everybody still needs mentors, even people late in their career. So I've been able to connect with really good mentors still, and it's really important to me to have those types of relationships. Serving in volunteer positions for various information security or GRC groups throughout the world, it's just been really enriching. And I owe all of that to networking. And it's hard for me because, like a lot of people, I think everybody has social anxiety to some degree. I definitely do. I don't find networking easy, so it's something I have to intentionally do. I have to set an intention to do it, set a goal, and just go out there and take a deep breath and do it. But it's been really enriching for me. So that's really the best piece of career advice I've ever received. It's simple. We hear it all the time, but it's just fundamental.

Chris Clarke: I appreciate you sharing that. Yeah. I originally thought you were making an IT joke, like build a network.

Tony Martin-Vegue: Networking. Yeah.

Chris Clarke: But to your point, yeah, that's really powerful and it's good to know that. Because I feel similarly, I'm inherently an introvert. Meeting new people is always tough. But when there is that intentionality behind it, and you almost know going in that there's a goal, it's really powerful. Thank you again for sharing. Before we jump into the risk management topics, I always like starting with something we call risk in real life. For better or worse, we are all risk managers, and we think about mitigation, and transferring risk, and avoiding risk, and all that, whether we think so or not. So before we started recording, I talked a little bit about how my wife still cuts my hair. I'll let anyone who actually watches this decide whether or not she did a good job. But one of the things that I always do to mitigate the risk of her being an architect and not a hairstylist is that I always plan or ask for a haircut when I know that I don't have to travel or be anywhere other than Zoom for at least two weeks. That way, if there's a little snip or something kind of off with the line, I know that it's not going to be too noticeable for at least a little while. I don't know if you have an example that you'd like to share.

Tony Martin-Vegue: I like that. So you're mitigating risk. You're implementing a risk response decision tree. You mitigate it by forecasting ahead, "Let's see what I have going on," and then asking for the haircut to give you time. Yeah, I like that. I do a lot of that too. I can't help it. I've been a risk analyst for so long that I literally can't help it. Every single day, I'm doing risk analyses, even if it's just in my head. Sometimes I even break out the Monte Carlo simulations, just in real life. I'm going to give you an example, but I think before I do that, I need to describe my philosophy on risk, or risk management, or risk analysis. I think that those of us in cybersecurity, and I've also seen this with ERM folks or operational risk folks, often misunderstand the purpose of risk management. It doesn't exist to identify the things that can go wrong and point them out to people, even though oftentimes, that is the outcome. Risk management exists to identify trade-offs between benefits and risks, the bad things that can happen, and give leadership and decision makers enough information so that they can decide the best, most cost-effective, and safest path forward. It's a balance between risk seeking and risk avoidance. Risk is good. It's not inherently bad. We all think it's bad. Risk isn't bad. Risk is good. Risk enables us to achieve objectives. Think about getting in a car. Driving is one of the riskiest activities that a human can do. It really is. It's up there. It's riskier than skydiving or bungee jumping. But why do we do it? We do it because we need to achieve an objective. That objective is getting to work, getting to the grocery store, dropping the kids off at school. So there's a reason that we drive. It gives us something. It's good. It gives us reward. So that's risk seeking behavior. Now we need to balance that with the other side, with things that can go wrong. So how can you mitigate the risk of driving? Well, you can wear a seatbelt. Right off the bat, that significantly reduces your risk of death. I don't remember the figure, but it's a lot. You can maintain your car. You can get a car with ABS brakes and airbags. There's a whole bunch of things you can do. That's all technology. And then you can implement process improvements, and here I'm drawing parallels to companies. Process improvements would be defensive driving. Don't be a jerk, don't zoom around, don't speed. Just practice those types of good driving habits. Now you also want to mitigate risk, because bad things are almost unavoidable. Just like data breaches, every company is going to have a data breach at some point in its life, you're going to have a car accident. Now, how do you mitigate the risk of a car accident? You can do the aforementioned things, but you also need to assume that it's unavoidable. I think the listeners know where I'm going. You need to transfer some of that risk out with car insurance, so you're not on the hook for a catastrophic cost. So you can avoid some risk by defensive driving and all of that. You can mitigate some by wearing seat belts and maintaining your car. You can transfer some risk with car insurance. So I run through these calculations all the time in my head. Risk analysis, risk management exists to achieve objectives. I still want to achieve my objective, which is driving my car, but I want to do it safely. So that's basically how I approach it.
For quantitative analysis, I actually did a quantitative analysis on my house a couple of years back. So I live in the Bay Area, and like many people, whether this is dumb or not, I live in a liquefaction zone. A liquefaction zone is an area where, when an earthquake occurs, the sand, silt, and fill that your house is built on take on the characteristics of a liquid during the shaking. So it's almost like being on a waterbed when your house starts to shake. The way to mitigate that is seismic retrofitting: if you can anchor to the bedrock, or do other things to make your house more stable during that type of event, your house has a greater chance of standing. Now, I wanted to figure out the cost of that seismic retrofit versus the cost of a catastrophic earthquake and the probability of a catastrophic earthquake occurring, knowing that I was going to sell the house at some point in the future. So which one should I do? So I did a pretty complicated risk analysis, risk quantification of course, with some Monte Carlo simulations and the cost of everything, and I got a pretty good return on investment calculation of what I should do, and it turns out it was a good choice. During the time that I lived there, an earthquake didn't occur, so I didn't have that incident. I did put some money into earthquake retrofit. Not a ton. It wasn't the kitchen sink type of repair, but it was just enough to make myself safe if the earthquake did happen, and it got me more money for when I did sell it. So that's just an example. But as I mentioned, I can't help myself. I do these all the time, all day.
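
For listeners who want to see the shape of the analysis Tony describes, here is a minimal Monte Carlo sketch of the retrofit-or-not decision in Python. Every input below is an invented assumption for illustration; Tony doesn't share his actual numbers, and his real model was more involved.

```python
import random

# Hypothetical inputs -- all invented for illustration, not Tony's figures.
P_QUAKE_PER_YEAR = 0.02            # annual probability of a damaging quake
YEARS_UNTIL_SALE = 10              # planned holding period for the house
RETROFIT_COST = 25_000             # cost of the seismic retrofit
RESALE_PREMIUM = 15_000            # extra resale value from the retrofit
DAMAGE_RANGE = (100_000, 400_000)  # loss range per quake without retrofit
DAMAGE_REDUCTION = 0.80            # fraction of damage the retrofit prevents

TRIALS = 100_000

def simulate(retrofit: bool) -> float:
    """Return total cost over the holding period for one simulated future."""
    cost = RETROFIT_COST - RESALE_PREMIUM if retrofit else 0.0
    for _ in range(YEARS_UNTIL_SALE):
        if random.random() < P_QUAKE_PER_YEAR:
            damage = random.uniform(*DAMAGE_RANGE)
            if retrofit:
                damage *= (1 - DAMAGE_REDUCTION)
            cost += damage
    return cost

avg_with = sum(simulate(True) for _ in range(TRIALS)) / TRIALS
avg_without = sum(simulate(False) for _ in range(TRIALS)) / TRIALS
print(f"Expected cost with retrofit:    ${avg_with:,.0f}")
print(f"Expected cost without retrofit: ${avg_without:,.0f}")
```

Running both branches through the same simulation makes the comparison an apples-to-apples expected cost, which is the return-on-investment framing Tony mentions.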

Chris Clarke: That's super cool. I really appreciate you explaining that philosophy. It's really misunderstood, but using risk as a strategic decision-making tool, with those trade-offs, is powerful. And then I love the quantification of that. When you explain it like that, it seems really clear. So I know we've talked about a lot of risk quantification pieces, but jumping now into the actual aspect of it, how would you explain risk quant to an eight-year-old?

Tony Martin-Vegue: Okay, that's a great question. And the reason why it's a great question is I have an eight-year-old, and CEOs sometimes act like eight-year-olds. I'm totally kidding, but sometimes you have to explain things in very simple terms. So this is how I explained it to my kids. I have two young kids. If you or your listeners are familiar with it, there's a jelly bean line called Jelly Belly, and they have a special product line called BeanBoozled. BeanBoozled is a box of jelly beans where each fantastic-tasting jelly bean, plum, pear, peach, buttered popcorn, coconut, has a completely disgusting counterpart that's visually indistinguishable from the good one. So for example, juicy pear's counterpart is boogers, peach is barf, buttered popcorn is rotten egg, etc. Licorice is skunk spray. So this is how I explain risk quantification. "Would you like to play the game BeanBoozled with me? If you win, I'm going to give you $20. And you just have to guess which beans are the disgusting ones and which ones aren't. But of course, the only way to find out is to eat them." Now there's a catch. There's not an equal amount of gross flavors to good flavors. Some boxes have more gross flavors, some boxes have a lot more gross flavors, others have more good flavors. And we don't know what ratio we have in this box here. We don't know. So before we play, that's a big mystery to us. Now, this is where risk quantification comes in. Risk quantification is a tool that helps you decide whether or not to play this game with me. If you play it, you're going to eat some disgusting ones, and that's really gross. But if you play it and win, guess what? You get $20. So that's the risk reward ratio, the risk reward trade-off. Now, there's some fancy math that we can use to determine the probability of selecting a gross bean. Probability is just the proportion of disgusting beans to good beans. Every time you take one, you're going to find out what type of bean you just ate. That's probability, the chances of getting a good one versus a gross one. Next, on the other side of risk quantification, you have magnitude. How gross would it be for you to eat a pencil-shaving-flavored jelly bean? We call this the magnitude of an event at my work. But here's the catch: it's personal and it's up to you. Each person has a different tolerance for risk. This means some people can completely handle eating a jelly bean that tastes like earwax, and other people can't take that at all. They'll throw up. And that's essentially what we're doing here. We're going to run an analysis of the proportion of disgusting beans to good beans, and how much your personal tolerance is for eating those beans. And we're going to help you make a decision. We're going to help you weigh the risk reward. And that's going to help you decide whether or not to play this game with me and win $20. So that's how I explain it to kids, and they like it because sometimes I actually will play the BeanBoozled game with them, with the actual jelly beans.
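
As a companion to that explanation, here is a toy version of the BeanBoozled decision in Python: probability is the unknown ratio of gross beans, and magnitude is a made-up per-bean "grossness cost" standing in for personal risk tolerance. All the numbers are hypothetical.

```python
import random

# Hypothetical game parameters -- invented for illustration.
PRIZE = 20.0            # the $20 reward for playing
BEANS_TO_EAT = 10       # how many beans the game requires
GROSSNESS_COST = 3.0    # dollar-equivalent "cost" of eating one gross bean

TRIALS = 100_000
net_outcomes = []
for _ in range(TRIALS):
    # Unknown ratio of gross beans: assume somewhere between 30% and 70%.
    p_gross = random.uniform(0.3, 0.7)
    gross_eaten = sum(random.random() < p_gross for _ in range(BEANS_TO_EAT))
    net_outcomes.append(PRIZE - gross_eaten * GROSSNESS_COST)

expected_net = sum(net_outcomes) / TRIALS
print(f"Expected net value of playing: ${expected_net:+.2f}")
# With these numbers the average comes out positive, so this player plays.
# A player with a higher GROSSNESS_COST (lower risk tolerance) would see
# a negative number and decline, which is exactly the trade-off described.
```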

Chris Clarke: That is incredible. What a cool way to summarize and explain that. Have your kids ever run that risk analysis and said, "I'm not playing this," or are they still in the phase of, "I'll eat anything. I eat boogers anyway, it doesn't matter"?

Tony Martin-Vegue: So I love that question, because that goes to the psychology of risk. Risk management is more than just statistics or business management. There are also aspects of psychology in it. Are you a risk seeker or are you risk-averse? And you see this play out at companies. Some departments are risk seeking and some are more risk-averse. Some of it is tied to data, what their revenue is or what their budget is. Sometimes it's just the person. I've asked this question of people. People that go to Vegas a lot, they're more risk seeking and they're more willing to accept risk. The reason why I bring this up is every person has a different tolerance for risk, and a lot of it depends on what you're doing and what the trade-offs are. Now, my son has a very low tolerance for disgusting jelly beans, and he won't play this game with me unless the reward is high enough. My daughter will pretty much play this game for free. For her, the reward is watching me eat the disgusting jelly beans. That's her reward, that's her risk reward trade-off. And then where I see parallels with this in risk management is risk tolerance and risk thresholds. Your capacity for risk is how much risk you're able to take on. I can see that really clearly with my son. He has an upper capacity, an upper limit of what he's willing to do for money. And then there are personal risk thresholds, risk tolerances. Sometimes it depends on their mood, whether or not they want to play the game with me, or maybe we're having fun doing something else and they want to continue playing. So it's just a really interesting parallel for the psychology of risk and how that plays into risk seeking and risk-averse behaviors.

Chris Clarke: We've mentioned the FAIR Institute, and we've talked in terms of probability and magnitude. I guess, what are the different models of risk quantification? And I know you co-chair the FAIR Institute's San Francisco chapter, but is there a reason that you align to their model of risk quantification versus, say, others in any way?

Tony Martin-Vegue: Yeah, I do like FAIR because it's purpose-built for operational risk, and cyber risk is just a subfield of operational risk. So it's very easy to take FAIR and extend it to many areas of your business outside of technology or cyber risk. There's also a plethora of resources out there, and that's probably the number one reason why I use FAIR: there's tools, there's applications. People have built FAIR models with Excel and R, so if you don't want to spend any money or you can't spend any money, you can do it for free. There's books, journals, blogs, talks, there's a conference. There's just so much around it. It makes it really easy as a risk analyst to get started, to mature your program, continue the program, train people, train your executives. So it just has that household name within GRC. It's not the only model. There are other great models out there. It's not the best, it's not the worst. Models are neutral. They're not good or bad. And there's other stuff out there that I've used that's also really good. But that's why I use FAIR, it just has that... It's easy. It's easy to use.

Chris Clarke: No, that's really helpful. I think I'm probably a medium or so on the FAIR model, but at the end of the day, it does just come down to that probability versus magnitude. And as you get more information and are able to make more decisions, you can extend that out and almost apply and translate the more technical aspects of it into operational risk aspects in its own way.

Tony Martin-Vegue: Exactly.

Chris Clarke: To relate this back, you mentioned your career change and getting into the cyber field. And I think it's relevant to the FAIR aspect too, but what would you recommend to folks for getting started in cyber or in risk quantification? What's the first step they should take to either start building that for their careers or start building that within their organization?

Tony Martin-Vegue: So I have two different pieces of advice: one for you personally, for people that are looking to bring risk quantification into their personal skillset, and then a second piece of advice for those same people to bring it into their company, their organization. So if you're okay with it, I'd like to give both of those pieces of advice. And I'll start with the risk analyst, with you personally. There's a really good book out there, it's right there on my bookshelf. There's actually two copies of it, because the first copy's destroyed from being read too many times. It's called How to Measure Anything in Cybersecurity Risk by Douglas Hubbard and Richard Seiersen. And this is the best book to get started. It's not that long of a book. I think it's around 300 pages. You could read it in a couple weeks, off and on. But this is the best way to get yourself calibrated and anchored into thinking about risk quantification. And the reason why I say that is if you're currently running a GRC program, or your skill set is in red, yellow, green risk, or high, medium, low, or one, two, three, that kind of thing, switching to risk quantification requires a complete paradigm shift in your way of thinking about risk, and what it means to do an analysis of risk, and what the objectives of risk are. You need to change your thinking. And in order to do that, you need to really understand the science behind it. It's rooted in science, it's rooted in math. And it's not new. Honestly, this is 300 years of math, of science, because cyber risk is basically just actuarial science. It's what insurance companies do. It's just that you're modeling DDoS instead of earthquakes. But it's essentially the same thing. It's the same math, the same concepts, same paradigm, same metaphors. So that book is where I would start. Now the great thing about that book is it's not FAIR. They teach you completely different models that you can run for free in Excel today. And you can run your own Monte Carlo simulations. You can build your own Monte Carlo. You can build your own risk registers, everything from scratch, in a couple weekends in Excel. It's really easy to use. That means if and when you decide to transition yourself to something like FAIR, you already know how the math works. You know all the formulas, the equations, what a Monte Carlo simulation is. I know that sounds intimidating. It's really not, once you've built your own and figured out, "Oh, okay, this is all it is." And once you move to FAIR, you're going to have that foundational knowledge. And then from there, just start building out your own risk models. You can just Google it. There's free FAIR tools out there, so you can really start today. You can do a FAIR analysis on selling your house like I did, or seismic retrofit, or you can do really easy stuff, like: should I buy the mobile phone protection with T-Mobile for my brand new iPhone 15? That's a risk analysis you can do. What's the probability of you breaking your phone versus the cost of you breaking your phone? And that's going to tell you if you should buy insurance. So there's a lot of examples that you can do, a lot of ways to get started. So that's from the risk analyst standpoint, if you're looking to expand your skillset into risk quantification in GRC: I would start with that book, and then start doing exercises.
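
As a rough sketch of that last exercise, the phone insurance question can be run as a from-scratch Monte Carlo in the spirit of Hubbard and Seiersen's approach. The premium, deductible, breakage probability, and repair range below are all invented for illustration (the book draws impacts from a lognormal fitted to a 90% confidence interval; a uniform range keeps this sketch short).

```python
import random

# Hypothetical inputs -- made up for the example, not real T-Mobile pricing.
P_BREAK_PER_YEAR = 0.15       # your estimated annual chance of breaking the phone
REPAIR_COST = (250, 900)      # plausible range for an uninsured repair/replacement
INSURANCE_PER_YEAR = 18 * 12  # annual premium at a hypothetical $18/month
DEDUCTIBLE = 99               # what you pay per claim even with insurance
TRIALS = 100_000

def yearly_loss(insured: bool) -> float:
    """One simulated year of phone-related costs."""
    loss = INSURANCE_PER_YEAR if insured else 0.0
    if random.random() < P_BREAK_PER_YEAR:
        loss += DEDUCTIBLE if insured else random.uniform(*REPAIR_COST)
    return loss

avg_insured = sum(yearly_loss(True) for _ in range(TRIALS)) / TRIALS
avg_uninsured = sum(yearly_loss(False) for _ in range(TRIALS)) / TRIALS
print(f"Expected annual cost, insured:   ${avg_insured:,.0f}")
print(f"Expected annual cost, uninsured: ${avg_uninsured:,.0f}")
```

Whichever expected cost comes out lower answers the question, exactly the probability-versus-cost comparison described above.
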
And then from there, just go down Google rabbit holes. You can join the FAIR Institute. There are other organizations out there, there are conferences, there's a lot of stuff you can do. So that's from the personal risk analyst standpoint. Now, what if you're in an organization, and you want to get started with risk quantification, or maybe you're curious about it? This is my advice, and I've learned this the hard way through a lot of trial and error. Early in my career, mostly error. Now, I'm a little bit more successful with it because I've learned the hard way. What you don't want to do is go in and replace the existing red, yellow, green risk program with risk quantification. As tempting as that's going to be after you read How to Measure Anything in Cybersecurity Risk, you might have this feeling like, "We have to do this today. We have to rip the band-aid off and go to risk quantification right away." Don't do it. Don't do it. Continue to run your red, yellow, green program and find an ally within your company that understands the math behind risk quantification, and do a one-off risk analysis just for them, just for their team. Ask them, "What burning question do you have that you want answered?" "I really want to know if I can justify an additional headcount." "Okay. So what risks would that additional headcount mitigate?" And then do some risk quantification for that, and give them some numbers to back up the request. And then do another one, and another one, and another one. Get leaders asking for this type of analysis. It's going to do a few things. The first thing it's going to do is give you the opportunity to practice your skills with risk quantification in a low stakes environment. You can make mistakes, because you're just working with one team. It's not part of your official program. You're just helping them out. And the second thing it's going to do is get people talking about you, get people talking about risk quantification. So it's a win-win for everybody. And then after you have that under your belt, then start looking at transferring your red, yellow, green program over to something that's quantified. At my current job, this is exactly what we did. I've been there for four years. Actually, yesterday was my four-year anniversary, which is exciting for me because I really love working where I work. When I started out four years ago, a red, yellow, green program was in place. We just did risk analyses. "Hey, team over there, can we help you out? Team over there, let's quantify some risks." We did about five or six of those and got a lot of people interested in the program, and then a year later, we flipped the program, the risk register, over to risk quantification. And we never looked back. So we were very successful with that, and I really recommend that kind of baby steps approach for your listeners if they're interested in doing that.

Chris Clarke: I really appreciate all of that. First, with the book, How to Measure Anything in Cybersecurity Risk, we're always looking for book recommendations, and just, how do we continue to elevate the discussion around that? But then specifically for the organization, when I think about change, I know it's painful, and I know it's tough to get folks on board. And I love that concept of just find one person, ask that one question, and then repeat. Because it takes away the big, almost paralyzing fear of starting, and makes it really digestible and just tactical and manageable.

Tony Martin-Vegue: Right.

Chris Clarke: So maybe to pivot this to you, and I know I would normally ask this a little bit differently. But what's your burning question in risk management? What keeps you up at night?

Tony Martin-Vegue: So that's a really good question. I think I sleep pretty well at night, generally, just because I'm always thinking about these things. And even if you don't have strong mitigations in place, your Gen X listeners, if they're Gen X like me, will remember the cartoon G.I. Joe from the '80s. "Knowing is half the battle." That was their tagline. That's how I feel about risk. Just knowing about it is half the battle. I think for companies, if there's anything that kept me up at night, it wouldn't necessarily be cyber risk. Cyber risk is very serious. It causes a lot of pain and suffering for people. And for companies, it costs a lot of money. But it's not the most existential risk for companies right now. I think that geopolitical risk, inflationary pressures, climate change, geopolitical instability, those pose more of an existential risk to companies today than the current cyber risk landscape. That might change. But just today, right now, I think that's probably what keeps me up at night. If there's something that's going to put your company out of business, it's probably not going to be a ransomware attack. It's probably going to be one of those aforementioned categories. So that's one aspect of it, but I also think there's a more long-term aspect of existential risk to companies that does touch cyber risk, does touch technology risk, but it might not be what you think. It's not ransomware or phishing. It's not using the right technology at the right time, not exploiting technology. And I think that poses a long-term existential risk to companies more than any DDoS attack. The technology landscape is changing very rapidly. You have to stay two steps ahead of it and two steps ahead of your competitors. And if you're not, I think your long-term prospects don't look very bright. So if there's something that's going to keep me up at night, the number one thing would be not exploiting the right technology to the right degree.

Chris Clarke: That's fascinating. And maybe I'm projecting here, but it also feels like some of these are probably the hardest to quantify because they're so existential. I couldn't measure geopolitical risk because that's such a large umbrella. How do you break that down to a tactical risk, to get people to mitigate or transfer it in some way?

Tony Martin-Vegue: Yeah, what's interesting about that is it's both fortunate and unfortunate: we have a pool of data to pick from. The fortunate part is we do have data. The unfortunate part is that means companies have failed because of geopolitical instability. We probably wouldn't find these examples in the United States, but you could look at Ukraine, you could look at some places in Eastern Europe. Two decades ago, three decades ago, you could look at Latin America. And what you could do is take a look at how geopolitical instability or geopolitical issues, monetary pressures, inflationary pressures, what those do to companies. And you can look at those companies and find out how, and where, and why they failed due to those external pressures. And from there, you could get a pretty good idea of what those types of situations might look like, how they might come to fruition for a company in the United States. It's not going to be exactly the same, because we have different governments, we have different protections in place. It's just not the same, but it's adjacent. You can get an idea of adjacent risk that happens in other places, and try to extrapolate some of that, and try to understand how that would happen here. So if I were tasked with quantifying the risk of, say, poor monetary policy from our government, and how that might pose an existential risk to a US-based company, there are a lot of examples of companies abroad for which poor monetary policy caused huge problems. I would just start to list out those causes and effects, and figure out, "Okay, what are the chances of that happening here?" And that's the beginnings of a risk quantification exercise for that.

Chris Clarke: Yeah, thinking about it in the US is very different than thinking about it elsewhere. Just because there isn't a data set here doesn't mean there isn't a data set. So that's what keeps you up at night. I guess to flip that, what are the risks that you think companies aren't talking about enough?

Tony Martin-Vegue: That's a great question. So there are three risks that immediately popped into my head. The first one is one that everybody knows about, and we're not talking about it because we're sick of it. It's the most boring, unsexy risk anybody could mention, and that's phishing. And I know it feels dumb saying that, because you would think to yourself, "Haven't we solved that?" How many decades has that been around, and haven't we solved for it? And the sad answer is we haven't. I feel like it still poses a major cybersecurity risk. For most companies, if not the vast majority of companies, it should be in your top five, if not top three cyber risks. And we're just bored of talking about it. So that's one thing I would put at the top of the list. The second one is artificial intelligence, AI. Now I'm going to flip the script; it's probably not what your listeners are thinking. Most people are thinking of putting our sensitive data into AI, and then the data gets leaked. Or it gives us bad information and we use it, and our company suffers because it gave us false information. Or maybe some listeners might be thinking Terminator, rise of the machines. It's not any of those. It's not using it, believe it or not. That's what I feel the big risk is: not exploiting AI while your competitors are. So we, and by we, I mean all of us, the entire GRC space, need to figure out how to enable our companies to harness and exploit AI in a way that's safe and quick. We've got to get on this quickly. If you don't, your competitors will, and you'll lose competitive advantage pretty quickly. And that could be an existential risk for your company. So that's a risk I feel like we're not talking about: not exploiting AI, and GRC teams not providing guidance to their leadership to use it safely, but quickly, rapidly. And the last thing that popped into my mind was the new SEC guidance that's been released fairly recently on materiality of risk. I feel like the only way to comply with this is to bring risk quantification into your GRC program. If you have a serious event in which you have to personally go to the SEC and explain what happened, you're never going to be able to justify what you did by saying, "We assessed the risk of data breach. It was yellow, therefore it wasn't material." That's not going to fly. Saying, "It was yellow, it was green, it was red, it was high, it was low, it was medium," that's never going to fly. They're going to come back and say, "What's yellow? Yellow relative to what? I see you have two yellows. If you add up two yellows, what does that make? Dark yellow, or is that red?" It's just not going to work. So the only way to do it is risk quantification. We're not talking about this enough. The first company that gets burned because they don't have risk quantification and they get in trouble with the SEC, they have a material event and they weren't reporting it, that's going to be a wake-up call for all of us.

Chris Clarke: That's fascinating. Yeah, the phishing one, that's a kicker. But it's an interesting one because, to go back to the psychology of risk, it does inherently rely on humans. It's not technological, it's not process. It is purely a question of: what's the weakest point in your system? And typically, that's the human aspect in some way. On the topic of AI, I think you and our chief product officer should chat sometime. He agrees on two things: there's risk in not doing it, and you have to move safely, but move fast. How are you moving safely but fast with AI?

Tony Martin-Vegue: I think that companies need to have guardrails in place that provide guidance to users, to employees, that really describe the risk and reward of using AI. So obviously, if you have a really good use case, you want to exploit this quickly, but we need to proceed cautiously and really understand the limitations of AI as it exists today. One of the biggest problems is just false information. There's a lot of examples. I asked ChatGPT for a profile on me, and it did know who I was. The bio it created knew I was in cyber risk, and it knew that I spoke and wrote in various places. But when I asked it for references and sources, they were completely fabricated. The books that it said I've written, I haven't written a book, completely fabricated. The journals I wrote for, the conferences I've spoken at. I was really surprised that it made something up that was so out of left field. So there's a big cautionary tale in it. But if you know how to use it and you provide those guardrails to people, to users, then I think your company will be well positioned to exploit it.

Chris Clarke: It's interesting you say that. One of the first conversations we had around AI, it was probably a few months ago, his name's Dorian, he was talking about that and how people were starting to quote legal cases from asking AI. And basically, AI was kind of like an eight-year-old. It lied, because if you do that and then you're like, "Well, where did you get that?" it will make up sources, and then if you keep [inaudible], it will eventually admit, "I made it up," in a way. And it's fascinating that that's, in a way, a risk in using AI, that it can do that.

Tony Martin-Vegue: Exactly.

Chris Clarke: So AI is one thing, and we've talked about risk quant. Are there any other tools that you think risk managers should have in their toolkit for working with the business?

Tony Martin-Vegue: Yeah, that's a great question. I'm going to mention a couple of soft skills and then a couple of actual tools. The thing that I've learned throughout my career is that everybody likes to receive data differently. They interpret data differently. And that goes to the risk communication skill that every risk analyst should have. This doesn't even have to be risk quantification. Even if your program is red, yellow, green, high, medium, low risk, if you start to dig in, you'll notice that leadership interprets these risks differently. Even more so with red, yellow, green risk, because it's so open to interpretation. So the thing that I've learned is good risk communication and good use of visuals, and I think it's worth taking time to read up on this. How to communicate data, how to communicate analysis, and how to do it in a way that's not biased or leading readers to a particular conclusion, either intentionally or unintentionally. I think that as risk analysts, we need to be as unbiased as we can. When we communicate risk results, we really don't want to push people toward a certain outcome. We're here to provide data. You make the decision, you decide the outcome. So that's one soft skill, risk communication. Another one, which I just touched on briefly, is visualization and presenting data. So those are two skills that I think are really essential for a risk analyst. Now on the tool side, the actual application side, I think the GRC analyst of the future needs to have really strong engineering skills, in the sense that they can spin up databases, MySQL or whatever it is, with Python scripts, R code, maybe Tableau dashboards, R dashboards, whatever it is. And that's not for your risk analysis or for your business, but to store and keep track of the mountain of data that we're all going to be collecting throughout our jobs. So that type of engineering skill is not a muscle that we have all collectively exercised, but I think we need to if we want to stay relevant. We don't have to be experts, just a little bit of coding. If you're good with R, that's great. If you know a little bit of Python, your resume's going to really, really look good, especially if you can combine that with some statistical skills. You know how to do Monte Carlo simulations, you know how to draw all different types of graphics for the same risk. So I think those are the tools of the future, the skillset of the future.

Chris Clarke: Awesome. No, I appreciate you sharing that. It is interesting how the volume of data is going to change everything. You mentioned a little bit about how risk managers need to be unbiased in the way they present their findings. Do you view that differently from, "Here's the data, you make the decision," versus, "Here's the data, you make the decision, and here's our recommendation"? Is there a place for risk managers to make recommendations around that data for the business, or does it step away from the pure risk management of just providing the data and then letting them make that decision?

Tony Martin-Vegue: There is space for that, and I do make recommendations whenever I do a risk analysis. But it has to be data-driven, it has to be rooted in something real. Let me give you an example. So you do a risk analysis. It's all data-driven. You determine the probability of an event occurring, and if and when it does occur, the magnitude: how much does it hurt? That's going to give you a set of numbers. You're going to know how much a single incident costs, within a range of course, and then you're going to annualize it. You're also going to have a range there, your annualized loss expectancy. From there, your company should hopefully have a risk tolerance, risk capacity, risk threshold, risk appetite. You should have all of those numbers. Now you can take your risk analysis and compare it against those numbers, and you can start to make recommendations. Does my risk exceed the company's capacity to take on risk? Say it doesn't. Then my recommendation is: it would appear we have enough cash reserves to cover this risk if and when it does happen. If it exceeds that capacity, your recommendation is, "This exceeds our stated capacity for risk. I didn't create that risk capacity. Leadership did. You all gave me that number. I recommend that we immediately reduce risk through cyber insurance, or increase our cash reserves. We need to do something immediately." Next, you should have risk tolerances or risk thresholds. These could be company-wide, or they could be by product or by department. That's the next step in your decision tree. Does it exceed the threshold or tolerance for risk? If it does, you recommend mitigation. And we should have a really good idea of what controls mitigate the risk. During your risk analysis, you should have revealed some of the weak spots. "Hey, we don't have any logging at all." You should have found that. "I recommend you implement logging. We can run another risk analysis that shows a hypothetical future with logging turned on. If you turn on logging and real-time monitoring, you could reduce risk by 25%." So you're still making recommendations, but it's unbiased, it's data-driven, and you're not putting your personal beliefs into it, which I see all too often, unfortunately. Scaring people, which I don't like to do. So I think there's definitely room for that.
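
Here is a minimal sketch of that decision tree in Python, with hypothetical thresholds: simulate annual losses, compute the annualized loss expectancy (ALE), then compare against leadership's stated risk capacity and tolerance to pick the recommendation. None of these numbers come from Tony; they're placeholders for illustration.

```python
import random
import statistics

# Hypothetical inputs -- leadership would supply the capacity and tolerance.
P_EVENT_PER_YEAR = 0.10            # probability of the incident in a given year
LOSS_RANGE = (500_000, 4_000_000)  # single-incident loss range
RISK_CAPACITY = 2_000_000          # stated maximum absorbable annual loss
RISK_TOLERANCE = 400_000           # stated acceptable expected annual loss
TRIALS = 100_000

# Simulate many possible years: most have no event, some have one loss.
annual_losses = [
    random.uniform(*LOSS_RANGE) if random.random() < P_EVENT_PER_YEAR else 0.0
    for _ in range(TRIALS)
]

ale = statistics.mean(annual_losses)
worst_5pct = sorted(annual_losses)[int(TRIALS * 0.95)]  # 95th percentile year

print(f"ALE: ${ale:,.0f}; 95th percentile annual loss: ${worst_5pct:,.0f}")
if worst_5pct > RISK_CAPACITY:
    print("Recommendation: exceeds stated risk capacity; transfer risk "
          "(e.g., cyber insurance) or increase reserves immediately.")
elif ale > RISK_TOLERANCE:
    print("Recommendation: exceeds risk tolerance; mitigate (e.g., enable "
          "logging and real-time monitoring) and re-run the analysis.")
else:
    print("Recommendation: within tolerance; accept and monitor.")
```

The analyst's recommendation falls out of numbers leadership itself set, which is the unbiased, data-driven framing described above.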

Chris Clarke: No, that's helpful. And I think it's almost a reframing of it, too: you're not recommending a different approach to the risk threshold or the risk appetite. You're recommending action, or you're recommending some other areas that may overall change the approach to risk, but it doesn't inherently change the decision-making power for the risk itself. So we've talked about your views of risks internal to your organization. Maybe just taking that one step further out: how do you think about third party risks, and the ways these other partners or vendors introduce risk into your environment? How do you approach risk you can't control?

Tony Martin-Vegue: That's really hard. And this is my hot take on that space: I think that the current way that we, and by we, I mean GRC analysts, GRC professionals, approach third party risk is mostly broken. The reason why I say it's broken is we have an over-reliance on these questionnaires, whether it's a SIG questionnaire or something custom-made. You send it out to a company, they spend days filling it out, you bring it in, you spend days analyzing it, and then that's usually it. But it's an honor system. And the people that are filling it out at these companies generally don't have intimate knowledge of some of the questions that they're answering. They don't know the exact level of logging and real-time monitoring on the system that you're considering purchasing or getting services from. So these questionnaires are mostly honor system based. You ask for a series of policies, or standards, or procedures, they come back, and you just use it as a checklist. You have no idea whether or not they're actually following it. The questionnaire says, "Do you follow your information security policy?" You click, "Yes, sure." It doesn't really mean anything. Now, having a SOC 2 report helps a little bit, that type of third party audit. But if any of you have been through a SOC audit at your company, if you've had a SOC auditor come in to issue a SOC report for you, you know the problems with that. You know that you can do pretty much anything to get yourself ready for a SOC audit, and the minute the auditor leaves, you go back to your own bad habits. And also, you can limit the scope yourself. You set your own scope for your own audit. "Let's look at this over here. Don't look at that over there." So a good GRC analyst might find those gaps, but you also might not. You might not even know to look for them. So a lot of it's honor system based, and I think there's just a lot of uncovered risk there for companies. And it's hard. It's a really hard space. I mean this honestly. My heart goes out to my GRC brothers and sisters who are in charge of third party audits. You're in a hard job.

Chris Clarke: I was just going to say, my poor, tender GRC heart is breaking a little, Tony.

Tony Martin-Vegue: I know. Yeah.

Chris Clarke: It makes sense that that's maybe one of the issues around it. What would you change in the way we approach those?

Tony Martin-Vegue: I don't know. I don't know. I think we probably need to recognize that there is a problem here, and put our collective best minds together, and try to figure out how to do this. But there's not really going to be an impetus to change this unless we're all motivated to do it. Something bad has to happen, or there has to be some type of government regulation, like the SEC guidance I was talking about earlier. That's going to force risk quantification in the next couple of years. There's a lot of us that have been sounding this alarm for a decade, more than a decade now, about the need to move to risk quantification. But we didn't. Collectively, we didn't. So it might take something like that to fix third party risk.

Chris Clarke: Yeah, it'll be interesting to see, to your point. I mean, with the SEC piece on cyber incident materiality, in a way, it does impact your data and could be material if your third party, not you, is breached in some way. Does the same law, the same regulation, become its own forcing function for change in that third party risk arena?

Tony Martin-Vegue: It should, yeah. It should eventually start to push change. I think I might have mentioned this earlier. It's going to take that first company to get burned, and then we're all going to wake up hopefully.

Chris Clarke: I'll be interested to see, even if there's different... Not to throw these future companies under the bus, but if there's the first company that gets burned from not having cyber risk quant, and then the next company that gets burned from not having third party risk management, and then the next one that gets burned because they don't have a good physical security system, and then servers. It'd be interesting to see if it forces one change, no change, or multiple changes. So I guess we'll see on that.

Tony Martin-Vegue: Good question. I'm curious. I'm really curious how long it's going to take.

Chris Clarke: So those are all the meat of the questions that I had. Any last thoughts before we jump into risk or that?

Tony Martin-Vegue: No, just my final thought on this topic is: stay curious. Stay curious about risk quantification, and analysis, and data, and numbers. And don't assume that anyone has the right answer; find the answer for yourself. That's really the best way to advance your career and build your skills.

Chris Clarke: I appreciate that. I love that value of staying curious. So we're going to move to a little bit more of the fun part, risk or that, where we just ask which riskier scenario you would prefer. One piece of pop culture that is really, I think, focused on technology risk in its own way is Black Mirror, where they look at technology and its potential negative effects on society. I'd be interested: in your opinion, what is the riskiest scenario in Black Mirror?

Tony Martin-Vegue: That's a good question. And I love that show, because I love-hate it. I love it because it's so compelling. It's one of the best shows I've watched in that genre since The Twilight Zone, but I hate it because it hits so close to home. It's just, "Oh, no." I think my favorite episode is probably from season four, called Hang the DJ, and this is my pick for riskiest episode. It's about a dystopian future in which people are forced to use a dating app, and the dating app has almost this God-like aura in the episode. Two people get matched up, but they know that something's off. Continually, they just have this feeling that something's off. They decide to escape, to break free of this big brother dating app. At the end of the episode, it's revealed that the two people are living in a computer simulation, and those weird feelings they have are just thousands of simulations of their dating lives. It kind of reminds me of Monte Carlo simulation, because you're running thousands of simulations of company years to try to find out in which company years you get a data breach, and in how many of those years you get a data breach. That's all Monte Carlo simulation is. But the reason why I find this the riskiest episode is it reminds me that there is a non-zero chance that we're all living in a computer simulation. I think it's probably not the case, but you can't say that anything's impossible. So there's a non-zero chance. This is actually called the simulation hypothesis, the idea that we're living in a computer simulation. It's worth a read for your listeners. There's a really good Wikipedia article on it. It'll give you nightmares if you don't have those already.
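
That company-years framing fits in a few lines of Python; the breach probability here is purely an assumption for illustration:

```python
import random

P_BREACH_PER_YEAR = 0.08  # assumed annual breach probability
YEARS = 10_000            # simulated company-years

# Count how many simulated years contain a breach.
breach_years = sum(random.random() < P_BREACH_PER_YEAR for _ in range(YEARS))
print(f"Breaches in {breach_years} of {YEARS} simulated years "
      f"(~{breach_years / YEARS:.1%})")
```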

Chris Clarke: Honestly, I probably would not have guessed that episode, so I appreciate that. I was going to guess the one with the metal dogs that run around, because-

Tony Martin-Vegue: [inaudible]-

Chris Clarke: Tangible or visceral. So I appreciate that. And now maybe more of a question: which show would you find riskier to be on? The Great British Bake Off, where everyone is a wonderful baker, and they're very kind. Or Nailed It!, where everyone is a poor baker, and you know you're going to get made fun of in some way?

Tony Martin-Vegue: I love that question. Two of my favorite shows on Netflix. I have to think about my personal risk tolerance and the risk factors that would contribute to each of those shows. So I'm not a great cook. I'm not a great cook. I don't like being made fun of, but I have thick skin. But I want to win. I love winning. I really do. So I think I'm going to choose Nailed It!, because they're amateurs. We're all outside of our comfort zone. I think I have a greater chance of winning, of success. On The Great British Bake Off, I'm almost guaranteed to fail. So there's my risk reward decision right there. I choose Nailed It!

Chris Clarke: That's fair. I'd be going for the Hollywood handshake as much as possible. I know that wouldn't happen with The Great British Bake Off. So last one, and this one I think is really relevant, given that you said phishing is still super relevant for organizations. Do you think cyber risk is more likely to originate outside of an organization, say a specific attack? Or is it more likely to originate from inside your organization, like clicking a link, malicious activity, something like that?

Tony Martin-Vegue: If I'm looking at just likelihood, I'm going to choose insiders. And I think there's data to back up my feeling. In the Verizon Data Breach Investigations Report, if you look at some of the data, it shows that accidents, just untrained insiders, people pressing the wrong button, all of that leads to a lot of incidents. However, if you look at magnitude, it's probably going to be external cyber attackers, because they have that intention to steal massive amounts of data. So most of the big data breaches that we see in the news, or even stuff like DDoS attacks, ransomware, all of that, it's all from external cyber attackers, just because that intention is there. But by sheer numbers, probably insiders, though those incidents are more contained because the intention isn't there.

Chris Clarke: I appreciate that insight. Those are all the questions I have. Any last words of wisdom for our listeners?

Tony Martin-Vegue: Watch Black Mirror.

Chris Clarke: That's one I can support as well. Thank you, Tony. This was an awesome conversation. I really appreciate having you on the show, and thank you all for listening. Talk to you next time.

Tony Martin-Vegue: Thank you, Chris. Thank you for having me. I enjoyed it.

DESCRIPTION

Switching from traditional risk analysis methods like ordinal lists or red-yellow-and-green charts to more modern approaches like risk quantification requires a paradigm shift in how you think about measuring risk, but the increased accuracy, specificity, and reliability you’ll gain by doing so pay dividends.

On this episode of GRC & Me, Netflix’s Tony Martin-Vegue joins LogicGate’s Chris Clarke to explore the best ways to navigate this transition, how to learn and leverage popular risk quantification frameworks like Open FAIR, and why you shouldn’t completely throw your colored charts out the window just yet.