Please Hack Me: Hacking Companies for Good

This is a podcast episode titled "Please Hack Me: Hacking Companies for Good." The summary for this episode is:

They say it takes a thief to catch a thief, so why not a hacker to catch a hacker?

That was the premise behind Ted Harrington's Independent Security Evaluators, a company dedicated to poking holes in other companies' cyber defenses, for the right reasons, of course. On this episode of GRC & Me, Ted takes LogicGate's Chris Clarke on a journey down the benevolent hacker's rabbit hole, where they discuss:

- The difference between white box and black box testing (and which is better)
- Why carrying out these exercises can build trust and become a competitive advantage in third-party risk assessment
- Why it's important to shift your mindset from one that views security as an obstacle to one that views it as an opportunity
- Uncovering the unknown unknowns in cybersecurity
- How "defense in depth" strategies can put security teams a step ahead of threat actors
- The four traits that lead hackers to be successful, and why thinking like one can be an effective way to bolster your cyber defenses

Chris Clarke: Hi, welcome to GRC & Me, a podcast where we interview governance, risk, and compliance thought leaders on hot topics, industry-specific challenges, and trends to learn about their methods, solutions, and outlook on the space, and hopefully have a little fun doing it. I'm your host, Chris Clarke. With me today is Ted Harrington, executive partner of Independent Security Evaluators, a consulting firm focused on securing high-value assets and performing groundbreaking security research. He's also the author of the bestselling book Hackable: How to Do Application Security Right, and has given a TED Talk, Why You Need to Think Like a Hacker. Hi Ted, how's it going?

Ted Harrington: Good, how are you doing?

Chris Clarke: I'm great. Glad to have you here. So, just to kind of get started, do you mind telling us a little bit more about yourself and what your journey has been in the cybersecurity space?

Ted Harrington: Sure. Yeah, so as you mentioned, I'm one of the partners at ISE, and we're the good guy hackers. So basically, the business model is that companies hire us to hack them in order to improve the security of their systems. And that's where Hackable came from, basically all those insights learned from the front lines of ethical hacking, addressing some of the very common misconceptions that we've seen. And then like you guys, we also got interested in helping solve the complexity that is vendor management. So, we have a tool that helps in that area as well called Start. And those two, I think they're parallel or maybe actually intersecting: how does something get hacked, and then how does a company deal with ensuring their suppliers and trusted parties are secure? So that's me. And then I just get on stages and podcasts and try to evangelize so that all of us who are on the same mission together can be helping people better. That's the goal.

Chris Clarke: That's awesome. Well, how did you even get into the hacking space? Did that start from a particular place or job originally, or was that just always something you were interested in?

Ted Harrington: I think that my path is maybe a little atypical for people who find themselves in the ethical hacking world, in that a lot of the people who work for us, they come out of computer science programs or some sort of engineering, or they were just doing this. It's a very common discussion where our people are like, "This is my hobby. I was going to do this stuff on the weekend anyway, so why not have this be my job?" And that's what makes them so successful. They're passionate about it. They're not punching a clock, they're not waiting for the day to end. They love this stuff. But that wasn't my journey. I came at it more from the perspective of entrepreneurship, really. And I was in a completely different field. I was running this company, I had tech that saves water, actually, in irrigation lines. And I was looking for a different type of challenge that resonated more with some of my personal ethos. And that was when I got introduced to the guy who would become my business partner. He came out of the PhD program at Johns Hopkins, and he'd been behind all this really cool security research: they were the first to hack the iPhone, they hacked cars, and they were the first to hack the Android OS when the Android phone first came out. And I was like, "Yes, I want to be part of that." And so we wanted to do this business, and we have really complementary abilities and skills, and it's proven to be just a wonderful relationship. But what to me is fascinating, and I think it's maybe interesting also, especially to career switchers or to students: a lot of people feel like, "Ah, that security thing, it's too late for me. I can't get into it. I don't have this experience. I don't have this degree." I will commonly hear people say something like, "I'm a sophomore in college, it's already too late for me to get into security." And I'm like, "Dude, I was 27 or something before I figured out how to get into security. I didn't know anything. I didn't know absolutely anything." Some people would argue I still don't know anything. I might even argue I still don't know anything, but I know a couple things now. And so if I can do it, I think anyone can do it.

Chris Clarke: That's super fascinating. Yeah, I think it's been kind of a common theme as you move through your career: the people who make those pivots, it's not so much that they had anything other than this desire to learn and to grow in that space. And I think that's kind of cool to hear. That's a pretty big pivot from water irrigation to hacking, and it's cool to see that kind of move. I think one of the best things that I love doing is just learning other people's jobs so that you work better with them. That's how you ultimately work with customers and define what matters in some way. So I appreciate you sharing that background.

Ted Harrington: Yeah, I would imagine that most people probably at some point in their life go through a career pivot. And that's scary, especially when it's mid-career, which is generally, I think, when people are pivoting. 'Cause mid-career it's like, "I'm making decent money now, but I'm also at a stage in my life where I have maybe some responsibilities, and to pivot, now I've got to start at the bottom again and maybe even take a pay cut." And I think the way we should think about that is that it's actually exciting, the idea that you get to start as a complete novice again. You become an expert in a thing, and then you can be comfortable, and becoming a novice again is awesome. And even within security, I've been constantly seeking that novice perspective in the different things that I've been able to do, like write a book. I'd never written a book before; that was a totally, "All right, now I'm a novice, how do I do this?" And those are just cool moments I think everyone should pursue, not avoid.

Chris Clarke: Yeah, that's awesome. There's a certain amount of risk management in that too, of like, "Well, when's the right time to pivot careers? When's the right time?" So I love that perspective. I think maybe moving into some of the hacking pieces and learning a little bit about cyber risk, I kind of want to start with what the masses know. We see news of breaches and incidents, and I think we just saw some around MGM and a few others very recently. But what do you see as the most prevalent cyber risks facing organizations?

Ted Harrington: So that is a good question, and it's one that I field a lot. And what's hard about answering it is that it's different for different organizations, and they should prioritize different things differently. So, to give a universal statement that is true for all would be untrue and unfair. But if you were to generalize, the two biggest areas that I see that plague every organization, I shouldn't say every, as scientists we shouldn't say that, but many organizations: one is software and the other is trusted parties. That might not be a surprise coming from someone who runs two companies that each address one of those things. I obviously think that's a big enough problem to dedicate my life to it. But the heart of that answer is, when we think about the software side, software runs the world. Think of the things you do in your life right now that don't have a software component to them, like buying a taco from a street vendor in Mexico. It's a small number, but you had to book a flight to get to Mexico, you booked a car with Uber. Software got you there, able to have that cash transaction across a language barrier. So it's involved with absolutely everything in every aspect of life. And when we think about that adoption, especially when you're looking at the way that enterprises are starting to think about applications now, that's critical. So I think how apps and how software are getting attacked, that's one of the biggest risk factors. And then the supply chain, the ecosystem of vendors, suppliers, and trusted third parties. You look at even the largest enterprises in the world, and they're struggling to do that. They're doing it with spreadsheets and emails and very few people on their security staff to handle all the requests, and yet the business needs to move forward. They're like, "I need to work with this vendor, so you better approve them." That's the kind of pressure that a lot of enterprises are under. But also, is the vendor actually secure? How do you balance all that? So those are two areas that I see as absolutely mission-critical today.

Chris Clarke: And that makes a ton of sense, given not just the prevalence of software but the connectivity of it as well. To your example of going to Mexico to buy a street taco with cash: yeah, you had to use something, your phone, but your phone had access to give data to the airline, and then in turn, how do you trust that their vendors are also secure? That connection almost creates this web of potential, not incidents, but access points, which is really terrifying. What do you think companies are doing right to address those two things?

Ted Harrington: That's a great question. Let me slightly rephrase the question. The companies who are doing it right, what are they doing? That's almost the question that you asked, but the distinction I'm making is that I think there are more companies who are not doing it right than companies who are. So the companies who are doing the AppSec side right, what they're doing correctly is security testing of their solutions on a regular basis, every six months or more frequently, and they're doing it in a white box fashion. White box is a methodology where there's a high degree of information shared. So, someone builds whatever piece of software and they're like, "Hey, I want to see how this will be attacked. Here's how it works." That's what white box is. Black box is like, "I'm building this software. I'm not going to give you any information, because somehow I think that makes you the attacker, which somehow helps me." And it actually doesn't do that. Black box doesn't emulate real-world attack conditions. All it does is limit your security assessment. The metaphor is this: you're a medieval king, you're in the castle, and you want to know, can someone break into the castle and kill you? And so you've got your walls and the drawbridge and the moat and all this stuff. In the black box approach, you'd be like, "Hey, come try to break in." You'd have some knights come try to break in, the knights from one of your loyal lords, I don't know what all the terms are, but one of the other people: "Send their knights over." And the first thing you do in black box is say, "I'm not going to tell you anything about the castle." So what they do is they look at this moat and they're like, "All right, let's count some alligators. Oh, I think we got five alligators in here. So now I know there's five, so maybe I can swim around the five." And the king's like, "There's seven. What are you doing?" And so black box is dumb, because if you did it white box, the king would be like, "There's seven alligators." You'd spend no time on the bank of that moat counting alligators, because the king already knew it. So it doesn't help anyone to do it that way. So, the companies who are doing it right are doing white box testing roughly every six months; depending on cadence, it could be every three months. They're seeing it as a competitive advantage over their competition. They're investing in it appropriately. They're not trying to run a vulnerability scanner and say that that's somehow a quote unquote "penetration test," when it's not. So the companies who are getting it right, those are the kinds of things they're doing. They're investing appropriately, they're doing the right methodology, et cetera.
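To make the alligator-counting cost concrete, here's a minimal sketch with entirely made-up numbers; the fixed engagement size and the recon-hour figures are assumptions for illustration, not ISE's actual methodology:

```python
# A minimal sketch (hypothetical numbers) of why black box testing wastes
# effort "counting alligators": every hour spent rediscovering facts the
# builder already knows is an hour not spent actually attacking.

TOTAL_HOURS = 80  # assumed size of a testing engagement

def effective_attack_hours(recon_hours: int) -> int:
    """Hours left for real vulnerability hunting after reconnaissance."""
    return TOTAL_HOURS - recon_hours

# Black box: the tester first reverse-engineers architecture, endpoints,
# roles, and data flows that the development team could have just shared.
black_box = effective_attack_hours(recon_hours=30)

# White box: design docs, source access, and a walkthrough are provided,
# so reconnaissance shrinks to reading and asking questions.
white_box = effective_attack_hours(recon_hours=5)

print(f"Black box: {black_box} hours of actual attack work")  # 50
print(f"White box: {white_box} hours of actual attack work")  # 75
```

Under those assumptions, sharing information simply converts reconnaissance time into attack time, which is the whole argument for white box.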

Chris Clarke: I love that analogy. Rather than having folks spend their time figuring out how the application is structured and the security around it, you can just tell them, and then it's more about how you design around it.

Ted Harrington: Right.

Chris Clarke: I think one thing we typically see as a struggle in the risk and compliance space is that similar competitive advantage conversation, where folks will see GRC as, quote unquote, "the stick and not the carrot." They see it as enforcement, a "we need to do this because otherwise bad things will happen," rather than a "we need to do this because it gives us an advantage over our competitors or helps us move forward and make strategic decisions," which is what you described as part of that piece, and that's powerful.

Ted Harrington: Yeah, that competitive advantage is really powerful, and I'm really only seeing the more progressive companies recognizing it that way. So when I was writing Hackable, which is about this side of the equation we're talking about, it addresses vendor risk management, but it's really more about how we deal with AppSec. And one of the questions I had to ask myself as I was trying to think about the audience I'm writing for, which was basically the kind of people who need the type of work that we do. So, they're struggling with certain things, and let's help them. And I can't help everyone, so let's help as many as possible; a book is a way to do that. And in order to do that, I had to pause and think about, "Why do these companies even hire us?" And my natural reaction, of course, was, "Oh, because security is the right thing to do. It's a noble pursuit, you should do it." But I'm too much of a pragmatist not to realize people don't invest in things because it's the right thing to do. It's terrible to say as a human being, but that's just not the way companies invest money. They invest money because they get some sort of business benefit. So then I started thinking about, "Well, what's the business benefit?" And the next logical conclusion is, "Oh, well, it avoids a future loss." And yeah, that's true. That's 100% a benefit: you spend $100,000 now and it avoids $10 million in losses later. But there's this really powerful concept from psychology called recency bias. And recency bias basically says what's happened most recently is the way things are. So if you haven't been hacked recently, you're like, "Ah, we're doing it right. This is great." And so it's harder to say, "Well, let's spend money on something that might happen in the future." And as I continued thinking about it, I got to this point where it was like, "Wow, all of our customers, what they're really doing is being really thorough." They're paying for a more rigorous assessment relative to what others are doing in the marketplace, not just because it's the right thing to do, and not just 'cause it reduces that future loss, but because now they can go to their customers and say, "Look how much better we are. This is a reflection of our ethos. We say we care about quality, we care about our customers, we care about your information. Here's us proving it." And that is really, really powerful, I think. And too many organizations are overlooking the power of that. And that's where the vendor risk management side comes in: vendors who want enterprises to do business with them. Just doing the minimum, just being like, "Give me the checklist and I'll check those boxes." Yeah, sure, you can do that, but you're going to stand out when you're doing something more rigorous and thorough than the absolute minimum that's being asked of you.

Chris Clarke: Yeah, it's not just a transaction, it's a relationship. And so every relationship is built on that trust. And so how do you not just talk the talk around it, but walk that walk so that people can feel and trust that? That's powerful.

Ted Harrington: When I was analyzing this idea of trust, I think anyone in any aspect of cybersecurity will be like, "Yeah, it's about trust." And I started thinking about that idea, and trust is what everyone wants when they're trying to pursue any sort of security agenda. And that got me thinking about, "Well, what are people actually doing?" And a lot of times people don't realize what they're doing. They're actually doing the opposite: they're triggering fear. So a lot of people will say things like this, they might get a security questionnaire from the enterprise customer they want to work with, and they're like, "Oh man, there are three, four, five hundred questions on this thing." And so they just wind up writing "not applicable" to as many of those questions as they can, thinking, "This is stupid. I want to move past this. I've got other things to do." But what does that do? It triggers fear in the recipient of that information, because now they're like, "Hold up a second. They said not applicable to this question that's actually really important to us. And I'm pretty sure in your type of business, this definitely is applicable. I now have to look closer." So you're actually separating yourself. You're creating distance inadvertently in the pursuit of trying to gain trust, and it's a wild, wild thing that people don't realize they're doing.

Chris Clarke: Yeah, that's fascinating to me. It's almost a lack of thinking through the recipient of those answers. The approach should be, "Well, they must be asking for a reason. They're not just asking to ask in most cases. So what's the core behind it, and how can I use that to build and further the relationship so that they feel confident that we're the right partner in that situation?"

Ted Harrington: 100%. 100%. Yeah.

Chris Clarke: So, we kind of hit on what the right companies are doing. On the flip side, what's the worst thing a company could be doing in that space? What is the thing that you want people to avoid at all costs?

Ted Harrington: Let me tell you a story, because I think it's very illustrative of the wrong mindset. I don't think everyone is necessarily doing what this story describes expressly, but it's the mindset a lot of people might have. And the short answer is this dismissive or minimizing approach to security, of being like, "Here's another hoop I've got to jump through to get this contract." Get rid of that mindset. So there was this one time when we were publishing some research. We'd been looking at a bunch of routers, small office, home office routers, definitely the one that you have at your home office, the one that I have at mine. And we were looking at them for security vulnerabilities, and we found a whole bunch, as we kind of expected we might. And so we found all these vulnerabilities and went through this process that's called responsible disclosure. Responsible disclosure is where security researchers submit the findings to the afflicted party so they can fix it prior to publication, because we're not trying to give the attack blueprint out to the bad guys. And it's a collaborative process, and there are cases where people don't respond or they ignore it or whatever, and then you have to publish the redacted results. And it's just not as good for everyone; it's so much better if the thing's fixed. And so I think there were maybe seven manufacturers who had vulnerabilities discovered over the course of the research. There were hundreds of issues we found across those seven manufacturers. And so we submitted all this to the respective vendors, the manufacturers of these products. And there was one in particular who just didn't respond to anything, not even a, "Thanks, we'll look into it." They didn't respond to anything. And then we're like, "All right, well, responsible disclosure has expired." Usually you give them a timeline. It's like, "Hey, in 90 days we're going to publish this, so that should be enough time for you to get it fixed." And they didn't respond. The responsible disclosure expires, so we go, we publish the report. And I can't remember which outlet that was in, but it was in several big ones. I think maybe we broke it with CNN or CNET or something, I don't remember who. But an hour later, that company called and they were like, "We saw the report." And we're like, "You definitely saw the report before this, but okay." And they're like, "We would like to change things." And we're like, "This is fantastic." The whole point of security research is to make things better. And so to hear an organization call us, we thought, "Okay, let's give them the benefit of the doubt. Maybe somehow all our attempts to supply the information to them didn't get through." We'd tried all these different ways, contacted all different people, used their stated addresses to send things to. "Let's give them the benefit of the doubt. This is a really good thing." And they're like, "We're going to send a VP to your office tomorrow." "Awesome. We'll be ready for him. We'll have lunch. It's perfect." So the guy comes in the next day, he shows up, and we figure out within 30 seconds that he's not there to talk about, "How do we change the product to improve it?" He's there to talk about, "How do we change the report?" And we're like, "Well, we can do that. That's actually kind of the point: we find the vulnerabilities, you remediate the vulnerabilities, then we verify the remediation. That's the path."
"So remediate the vulnerabilities, and then we'll verify that, and then the report will be updated. And that's a good thing for you, saying, 'Hey, look, we took it seriously.'" And it became clear that wasn't what he wanted. He just wanted us to change the report without any work. And I wish I'd framed it. I don't know where this went, it's in a filing cabinet somewhere, but he hands us a memo that they wrote on their letterhead, and the memo says, "We self-certify that these vulnerabilities no longer exist." And there was literally, I'm not exaggerating, a rubber stamp on it that said approved or certified or something like that. And I'm like, "This is the best thing that anyone has ever handed me," in terms of being hilarious, because it just doesn't work that way. That's like going to your doctor, and your doctor's like, "Hey, so last time you were in here, you were overweight. Did you lose the weight?" And you're like, "No, I'd like you to just say that I have a different weight number." It doesn't work that way. You've got to put in the work to get the result.

Chris Clarke: "And I'm self- certify that I've lost weight on top of that, and here's my paper saying that."

Ted Harrington: Right. Right.

Chris Clarke: That's incredible. Yeah. Go ahead. Sorry.

Ted Harrington: So I don't think most companies are doing that. It's an extreme story, obviously, but the underlying principle is the mindset, and the mindset is: security is an obstacle, and how do we remove the obstacle? Which in itself maybe isn't that bad, but it's trying to make it not be a thing, to remove it, to dismiss it, to minimize it. Instead, say, "Okay, if this is an obstacle or barrier, how do we turn it into an opportunity?" And that's what the companies doing it right are doing: turning it into an opportunity. And the companies doing it wrong are misunderstanding how to actually approach it and what its role is.

Chris Clarke: Yeah, that makes a lot of sense. It's viewing it as an opportunity versus ignoring it completely, as just something that shouldn't exist because we don't want it to. It's kind of interesting: they acknowledged it, they sent someone, they did it. When you think about those organizations, say they have a CISO who acknowledges that vulnerability in some way, how should organizations be structured to address that, to be successful in security?

Ted Harrington: In terms of the reporting hierarchy?

Chris Clarke: Yeah.

Ted Harrington: Yeah. So that's a good question, 'cause there are actually a few levels to your question, one of which, maybe two, would be its own minefield in the depths of the politics of what the CISO role is. But in terms of how we think about the reporting hierarchy: the CISO, and quite frankly anyone with a C in their title, should report to the CEO. Now, that's not the way most organizations are set up. Most organizations are set up where the CTO or the CIO, and both of those usually exist in an organization, report to the CEO, and then the CISO reports to one of those two. Sometimes it's even a further layer down, but you're not a chief anything if you don't report to the CEO. So, we do need a security voice who's heard at the executive level. We definitely need a CISO. And when I say we, I'm talking about companies who have something to protect; they need a CISO, and that person needs to report to the CEO. Now, if an organization wants to have security reporting lower in the chain, first of all, that person shouldn't have a C in the title. And well, they just shouldn't do that, period. And here's why: let's say the CISO reports to the CTO, and the CTO reports to the CEO. When the CEO says, "Okay, tell me about all things CTO," the CTO, he or she, is going to have their, whatever, top 10 priorities, and security will be one of them, but it will be just one of them, and it probably won't even be number one. And so when push comes to shove, security won't get the prioritization that it needs. Now, this sounds like oversimplifying, but it really is this simple: if security doesn't have a voice to leadership, then security will not have the prioritization that it needs. So literally, if your CISO does not report to your CEO, you are not prioritizing security. And if that's intentional, that's okay. It's okay if a company says, "Security is not important to us." I would actually prefer that a company say that and be honest with themselves. But when they say, "We take security seriously, security is a priority for us," and they don't have a CISO, or the CISO doesn't report to the CEO, they are actually acting in opposition to their stated words.

Chris Clarke: And maybe to challenge that a little bit: you mentioned that the CTO does have a set of priorities. What if, even so, they have this mindset of, "Security is a competitive advantage, it's always going to be our number one"? Do you still see that same risk in that reporting structure?

Ted Harrington: Definitely. Yeah. So what you just described to me sounds wonderful. I would love to have that CTO in every organization, but if you were to talk to many CTOs, I don't think you'd find too many who would say that security is their top priority. Their priorities are things like efficiency, optimization, change, the roadmap, all those types of things, and "it's got to be secure." That's what a lot of them will say. I'm in a fortunate position that I've been asked to serve as an advisor to this really cool group of CTOs, and it's almost funny to me that I got invited, because it's literally CTOs talking about CTO problems, and then me. So whenever the word security comes up, I'm like, "I'll help you with that." And one of the trends I've seen across this group is that they all definitely do care about security. They are aware of security. But I don't think any of them would say that they're a security expert, or even that security is their number one priority, and it's not the thing by which they're measured. A CTO is not measured by whatever metrics you might want to use for security. And if you're not measured by that, how is it ever going to be the top priority? So I think what you described, I would love for that to be the case. I just don't think it's realistic.

Chris Clarke: That makes sense. I'd be interested, then, on that concept of measuring success for the CTO, for the CISO in the security space: what are common metrics that you would say define success for that role or for that space?

Ted Harrington: So those are two different things, how a CTO is measured and how a CISO is measured. And how a CISO is measured is a really hard question. A CTO is a little more straightforward. You can look at things like just what exists. I'm way oversimplifying it, but you can attach numbers to what exists and what will exist and how quickly it came to exist and how efficient it is, and blah, blah, blah. There are tons of different ways you can measure that. Security is a little bit harder to measure, because security is essentially the absence of a bad thing, and you can't measure a negative, right? And so you measure these other things, like risk management, effectively. One of the ways that I've advocated that CISOs should be measured, one of the ways, not necessarily all of the ways, would be things like, "How many security vulnerabilities have we remediated?" That could either be an absolute value or it could be a percentage. I think that's such a powerful number because it says, well, first of all, we have to find them, which means we're investing in the appropriate ways; then we have to actually allocate the resources to fix them. And so doing the combination of those things is really, really powerful, because now you're not a CISO forced into the corner to say, "Well, we haven't been hacked." It's like, "No, here are some actual KPIs that we can speak to that correlate our investment of dollars and person power to an outcome." And actually, one of the things I put in Hackable was some metrics based on just our own experience with assessments: how you can correlate level of effort, which you can correlate to expense or cost, to the vulnerability output that you might be looking at. By being able to make those correlations, now a CISO can go to the board or whoever and say, "Here's why I need a million dollars this year, and I can put some numbers to it. Here's how I'd spend that million dollars. It's not just on people, it's on these outcomes that we can actually measure."
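As a rough illustration of that remediation KPI, here's a minimal sketch; the findings list, field names, and numbers are invented for the example, not taken from Hackable:

```python
# A minimal sketch (hypothetical data) of the remediation KPI described
# above: vulnerabilities found vs. fixed, reported as an absolute count
# and as a percentage a CISO could put in front of a board.

findings = [
    # (finding, severity, remediated?)
    ("SQL injection in login form", "critical", True),
    ("Stored XSS in comments", "high", True),
    ("Verbose error messages", "low", False),
    ("Outdated TLS configuration", "medium", True),
]

total = len(findings)
fixed = sum(1 for _, _, remediated in findings if remediated)

# Both framings Ted mentions: the absolute value and the percentage.
print(f"Vulnerabilities remediated: {fixed} of {total} ({fixed / total:.0%})")
```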

Chris Clarke: That's fascinating. There's a really cool book called Upstream, where they basically talk about problem solving through that same lens of, "Well, if you're truly doing your job right, you're avoiding these problems." So how do you measure something that was avoided? And I actually think that's probably relevant in the third-party space as well: how do you know that you've done enough due diligence to avoid something bad happening, whether it's the third party you didn't do business with or the control you put in place? So, do you have similar metrics or alignment for security teams when they're thinking about the third-party risk space?

Ted Harrington: Yes, probably. The data that I have, I'd have to organize it slightly differently in the context of this question, but I think the types of things you'd measure would be things like the controls. "Do we have controls? How many are there? How often have they been updated? How do they correlate to our known attack scenarios? And how many of the vendors we're using are actually approved or meet a certain criteria?" In any organization, any enterprise, there are going to be vendors who have maybe some conditional approval or some special acceptance, or are just straight up being used in some shadow back-end deal even though no one knows it. And so figuring out, "How do we make sure that a certain percentage of our vendors have met certain criteria, based on what business it is that they do for us?" Those are all things that are actually probably much easier to measure than, "Have we been hacked?" We can count our vendors, we can count what they're doing. There's a lot more that you can measure.
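A similar sketch works for the vendor side; the vendor names, risk tiers, and approval flags below are hypothetical:

```python
# A minimal sketch (hypothetical data) of the vendor-coverage metric
# described above: what share of vendors actually in use have met the
# criteria for the kind of business they do for us.

vendors = {
    # vendor: (risk tier, met the criteria for that tier?)
    "payroll-provider": ("high", True),
    "email-marketing": ("medium", True),
    "analytics-widget": ("low", False),
    "shadow-it-tool": ("high", False),  # in use, never formally vetted
}

approved = sum(1 for _, ok in vendors.values() if ok)
print(f"Vendor coverage: {approved} of {len(vendors)} approved "
      f"({approved / len(vendors):.0%})")

# Flag the gaps, noting how much access each unvetted vendor has.
for name, (tier, ok) in vendors.items():
    if not ok:
        print(f"  gap: {name} ({tier} risk) has not met criteria")
```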

Chris Clarke: So, you mentioned the shadow IT concept, where sometimes things fall through the cracks and you don't know what you don't know. How would you advise security teams on answering that question: how do you address and put controls in place around the unknown unknowns, the things that you don't know?

Ted Harrington: Yeah. Well, there's a way I can answer that specific to vendors, and then a way I can answer it more broadly across all types of risk. Across the vendor landscape, how a particular business unit might wind up using a vendor that hasn't gone through the security vetting process or whatever, that really comes down to just awareness and communication. The security team has to be communicating across departments why, and how, they're actually helping, not hurting, the performance of the organization. I'm delivering a keynote in a few weeks for exactly that: a movie studio is doing an internal summit, put on by the content security team, so they can communicate to everyone else, "Here's how we help you," and, "Here's why this is important." It's one thing if someone intentionally says, "I know I'm supposed to do that, but I'm going to do this other thing." That's a different type of problem you have to address. But for someone who just doesn't know how to bring a vendor through their program, that's awareness; you've got to communicate that. And then more broadly, how do you anticipate the unknown unknowns? This is actually the pinnacle of the ethical hacking profession. You can imagine a triangle, and as you go up the triangle, these are the steps of a hacking process. They start from the things that are the most foundational, most fundamental, the things most aligned to automation, the kind of stuff tools can do. And as you go up, they require more manual work, a creative problem-solving human; the complexity of skills and experience required increases. And the very tippy top of that pyramid is this idea you asked about, the unknown unknowns. I sort of organize the unknown unknowns in an organization into three types, though arguably they could be organized differently. One is novel versions of common vulnerabilities. It might be, "Oh, here's a type of injection attack that is a known, common type of issue, but the way that it's deployed in order to attack this particular system has never been seen before." Another would be zero-day vulnerabilities in your supply chain. Zero days are vulnerabilities that are exploitable right now in the wild, where the defender has zero days to fix it before it's exploitable, and the supply chain is what we've been talking about with your vendor ecosystem. And the third type would be previously unknown attack methods that attackers might be using right now. There are definitely attack techniques happening right now that no one on the defender side, on the good side, knows about, but that attackers are doing. Eventually we'll discover it and be like, "Wow, that stinks," and then we'll adapt and evolve. So, how do you deal with these unknown unknowns? That's what ethical hackers do. Ethical hackers are lifelong learners, relentless in learning new techniques and then trying new techniques. And that's why you work with outside third parties who are experts in this type of stuff. Because even to ethical hackers, there are things that are unknown, but that's how you continue to evolve and iterate to deal with the proverbial arms race, where the bad guys are getting better and then the good guys are getting better, and you just keep leveling up. That's how you deal with it.

Chris Clarke: I do want to come back to what makes ethical hackers so successful in this case, but one thing first: with these unknowns, how do you react to them? If you can't prevent them, are there certain steps security teams should take to be ready to react in the situations where these unknown unknowns do occur?

Ted Harrington: Yeah. So, it's debatable. No, maybe it's not debatable, but the question of whether or not you can prevent the unknown unknowns, there's some nuance to that. Making the effort to discover unknown unknowns, and working with organizations who are constantly pushing themselves, that is actually a preventative mechanism. I forget what the question was. I started answering a different one.

Chris Clarke: No, 'cause I appreciate that. I probably would not have thought of just the discovery of these as a preventative measure, though even that, in my mind, is preventative, because you have someone trying to find them; you're still looking for those in some way. But there's also the, well, when it does occur, how should teams prepare for that? Is it a question of if or when, or is it... Yeah, sorry.

Ted Harrington: Yeah, so there are a few sides to the security coin. One is prevention, and the other is detection and response. The center point of the coin, I guess, is the breach itself. Can you prevent the breach is one side, that's the preventative side, and then the other is: the breach has happened, what do you do now? And that's a whole field unto itself, and it's an area that I personally have not focused on. We've been really focused on the preventative side. But the way that organizations deal with that is by having built in what's called defense in depth. Defense in depth is a strategy that layers tools and techniques, and it does two things. One, it reduces the likelihood that an attacker is successful in getting in, and two, it reduces the likelihood that the attacker is successful in exfiltrating whatever it is they're trying to get. So, breaking into an organization to get access to, let's say, their intellectual property, that's part of it. And then the other part is that now they need to extract it, actually get it out. And so there are all kinds of tools and techniques that try to prevent that: logging, monitoring, data loss prevention, those types of tools. There's also this idea of red teams and blue teams. Red teams are essentially security testers who will perform a simulated attack against an organization in order to test the response of the blue team. And the blue team's job is to say, "Oh, we're under attack, now we follow our disaster recovery, our incident response. We do a certain series of steps in order to quarantine the attack and mitigate the outcome." And maybe a great example of response would be what happened several months ago with Colonial Pipeline. Now, I'd never heard of Colonial Pipeline before this particular attack. I don't think anyone outside the business of delivering gas had ever heard of them, but most Americans know Colonial Pipeline now as either the largest or one of the largest suppliers of petroleum and petroleum products, and I believe gasoline, I don't know if gasoline is different from petroleum, maybe I should know that. But they're how you get gasoline to a gas station, so you can put gas in your car, all up and down the East Coast. And they suffered a cyber attack, and they, on their own, chose to take their system offline. That was a preventative measure, to ensure that the attack itself would be quarantined, that it couldn't spread further, couldn't cause more damage. And in a way, I think that was actually a very brave decision, because what it meant was that they forced themselves into the headline news, because now you couldn't get gas up and down the East Coast. It impacted everyone living in that area, and it was just headline news. Maybe they could have tried to hide it, maybe let the attack get worse but not let it get into the news, but they made the choice to quarantine it, which meant that people were going to write stories about it. And so now people know about Colonial Pipeline, and if you Google, I have done this, if you just start typing in Colonial Pipeline, like I said, they're the number one largest supplier or one of the largest, they're dominant in their business, but that's not what comes up. That's not what Google autofills. What autofills is "cyber attack": Colonial Pipeline cyber attack. And it's like, man, that's kind of a bummer of an outcome.
But that is a great example of one of the techniques that people have: they can actually isolate an attack by taking their own systems offline in order to deal with it.

Chris Clarke: Very cool. Yeah, with the concept of defense in depth, I think for the layperson there's a lot of focus on how you stop people from getting in, and less on how you stop them from getting out. There's a big through line of that process for the hacker, and designing the system to stop it from both directions is something you just don't think about.

Ted Harrington: Here's what's so cool about defense in depth. That's really nerdy, that I just said, "Defense in depth is cool." What's so cool about it is this: there's this trite phrase that's said in security a lot, which is that the attacker only needs to be right once, the defender needs to be right every time. But defense in depth actually flips that on its head, in that the attacker needs to get past every single step, and the defender only needs to flag it once to stop the attack. And that's super powerful when you think about it. It changes the conversation from this sort of defeatist, nihilistic perspective of, "It's not if, it's when." People always talk about that, and it's like, how is that a helpful thought? Even if it's true, it's not that helpful. It changes it from, "Well, we're going to get hacked," to, "Here's some stuff we can do that reduces the likelihood, reduces the impact, or both." And the metaphor, I think, for how we can visualize it: go back to that castle. The king is in the castle; think about how a castle's built. A castle has all these layers of defense. It has the moat, it has the drawbridge, it has the archers up in the turrets, it has the guys with the hot oil spilling it down the side. And then you get in the castle and there are concentric rings of perimeter walls. And then you get further in, and there's the king's keep, and protecting the king's keep is his own personal guard. Those are layers of defense, and the attackers have to actually defeat each one in order to get all the way to the king. That's to get in. Then they kill the king. If they want to live, they have to figure out, "How do I fight my way back out of this? I'm in the castle, surrounded by my enemy now." And so the likelihood that someone can get all the way in, kill the king, get all the way out, and not die becomes reduced because of those defense mechanisms.
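That "flips it on its head" point can be put in back-of-the-envelope numbers. Here's a minimal sketch; the per-layer detection rates are made up, and treating the layers as independent is a simplifying assumption:

```python
# A back-of-the-envelope sketch (hypothetical rates, independence assumed)
# of why layering flips the odds: the attacker must slip past every layer,
# while the defender only needs any one layer to flag the attack.

def evasion_probability(detection_rates):
    """Chance an attacker gets past all layers undetected."""
    p = 1.0
    for detect in detection_rates:
        p *= (1.0 - detect)  # must evade this layer AND all the others
    return p

# Five modest layers: moat, drawbridge, archers, walls, king's guard --
# or in modern terms: firewall, MFA, EDR, logging/monitoring, DLP.
layers = [0.4, 0.3, 0.5, 0.3, 0.4]

print(f"One 40% layer alone: attacker evades {1 - 0.4:.0%} of the time")
print(f"All five layers: attacker evades {evasion_probability(layers):.1%}")
# Each layer is individually beatable, yet together they catch ~91%.
```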

Chris Clarke: I'm chuckling because the king's still dead. But yeah, if the other person wants to get out though, yeah, imagine having to get past all those twice.

Ted Harrington: Or, and that's a good improvement to the metaphor, maybe instead of you get in and kill the king and it's just a suicide mission, you're like, "I want to kidnap the king's daughter for some reason. Okay, now I have the daughter. How am I getting this person out?"

Chris Clarke: Yeah. How do we steal this? Yeah, sorry. Didn't mean to hit on the metaphor.

Ted Harrington: No, it's a good improvement.

Chris Clarke: If I'm the king, I'm like, "Ah, I'm so dead."

Ted Harrington: "I'm already dead." Yeah, yeah.

Chris Clarke: I've always thought of that as the Swiss cheese model. You put a bunch of slices of Swiss cheese in a row; what are the odds that the holes line up all the way through so you get through every one, rather than the Swiss cheese stopping you? But I like the castle metaphor. I'll start using that more often. A little while ago, when we were talking about the unknown unknowns, you hit on hackers and why they're creative, and that's why they're thinking about these. They're lifelong learners: novel versions of common vulnerabilities, zero days in the supply chain, previously unknown attack methods. Are there other characteristics of what makes a hacker successful in approaching these unknown unknowns?

Ted Harrington: Definitely. I love that question. I've been working on that specific question for some time now. You can see right behind me, for anyone who's watching this on video, that I have this poster. It's not a poster, it's a canvas, it's very nice. It says, "Think like a hacker." And that's one of the core ideas that has sort of been the through line for our ethical hacking business, because that is what ethical hackers do. Ethical hackers are hackers, but it's the idea that in order to defend against an attacker, we have to think like an attacker. And I think we should start by answering the question: what is a hacker? Hackers are neither good nor bad. Hackers are creative problem solvers. The difference is motivation. Does this person seek to exploit the system in order to get some benefit? Or does the person seek to find the issues so they can fix them? That's the difference. But both are hackers. Now, whether you're talking about the good kinds of hackers or the bad kinds, I've found from really studying this question that there are four traits that define what a hacker is. And there are a bunch of maybe sub-points you can put under each of these, but the big themes I've found, from observing, from lived experience, and from interviewing hackers, asking, "How does a hacker think? What are the key themes? What are the mindset traits?", there are four, and they are that hackers are curious, hackers are contrarian, hackers are committed, and hackers are creative. So there are these four mindset traits, which conveniently all start with C, a wonderful way to organize the thoughts, and each of them is a mindset trait that the best hackers have. And when we think about what makes a hacker talented, the more they have of all four of those capabilities, the better they're going to be at this skill. And what's really, really fascinating, just as an aside, is I actually think there's a hacker in everyone, in all of us. And if we can unleash these four traits, it can enable anyone to look at whatever their goal is differently. But I know that the context of the question right now is, how do systems get broken, and how do we fix them? And it's the combination of those four traits. When I look at people who work for us, or a company like ours, or our peers, or the competitors we respect, these types of people all have these attributes. They're curious. By contrarian, what I mean is that they are willing to think differently from the norms; they're willing to challenge assumptions that people have. Committed is the idea that they're willing to make sacrifices, be persistent, invest resources. And creative is that they come up with new ways of doing things.

Chris Clarke: I think that's super cool, and I love trying to adopt that mindset. I guess what I'd then ask is, well, how do people start to develop these characteristics? Have you seen techniques that help people grow in that way?

Ted Harrington: I do, yes. I haven't fully settled on one aspect of the question, which is: are these nature or nurture? That's something that I'm actively trying to get my head around right now. Because curiosity, are you born with it or is it cultivated? I don't necessarily have the answer to that yet. So as I develop this idea, I'm operating under the assumption that these traits exist in all of us, and the question is whether or not we cultivate them or allow them to be revealed. So are there techniques people can use to enhance these? Definitely. One is, first of all, just knowing that these types of traits are what make hackers successful. Just knowing that, I think, reinforces for people, "Oh, the idea of tenacity and persistence and perseverance, that's what makes someone successful? Oh yeah, I kind of already knew that, but that's good. That reminds me that a hacker isn't some magical wizard who has this something I don't." I think anyone listening, if they're sitting down to listen to a podcast, it's 'cause they're a lifelong learner. They want to learn new skills or techniques or ideas. So that kind of person is like, "That sounds like me, I can do that." So for each of these ideas, I've been organizing three or four different things you can do. For example, under the idea that hackers are curious, one of the things that really stands out to me is this really powerful tool that hackers have. It's a psychological tool, and it's the question: what if? The ability to just be like, "Well, what if I did it this way? What if I did it that way? What if this assumption I have about how this thing works was not true?" And that what-if question, constantly asking it about all different scenarios in a given situation, stimulates curiosity. One of the hackers who works for us said this quote one time that I just loved. He says, "What I do," this is him talking about himself, "is I look for threads. And when I find a thread, I pull on it. And I keep pulling and pulling and pulling until the thread unravels the sweater, or it just comes out and then I realize it's the end of the thread." And I'm like, "That's curiosity right there." Being like, "Well, what happens next? What if I do this? What if I try this other thing? What if I combine things?" So that's a great example of something you can go do: just ask 20 different versions of the question, what if? And inherent in that, by the way, is giving yourself the permission to be ludicrous. To use a metaphor: maybe, "To get into college, I need to have graduated from high school." What if you did something else? There are definitely people who've gotten into college without graduating high school. So what if you didn't graduate from high school? What would that have to look like? Well, now that's changing your mind a little bit. Now it's not about, "Can I get into AP chemistry?" It's, "Can I launch a nonprofit that changes the world in some way that Harvard is going to be like, 'That's the kind of person we want here. And they can get the GED, they'll take the test. We're not worried about that part. We want that.'" It helps you think differently just by starting to ask that question: what if?

Chris Clarke: I like the what-if concept, because in a way it almost, not invalidates, but sidesteps the nature versus nurture argument: you can build in these systems and processes to force those four functions. So then the what if doesn't have to be, "Well, what if I'm not curious?" It can be, "Well, if you're not curious, what if you just built a framework that you could start to build on top of?" In that same way, are there other systems you would suggest? How do you get a team to be contrarian? How do you get them to go against what they've always done?

Ted Harrington: Sure. Yeah. I actually have a great exercise that people could do for that. So, when people are contrarian, and that word maybe is problematic because there are a few definitions for it. One definition is a contrarian is the person who disagrees for the sake of disagreeing. That person's annoying. We're not talking about that person. We're talking about the person who's willing to think differently about a situation, who's willing to challenge assumptions. And so here's how you would do that. What you need to do before you can challenge an assumption is you need to identify your assumptions, and just identifying them, that alone is really powerful because assumptions are blinding. Assumptions are, we do them sometimes without even being aware of it. And the assumptions are intended to help us be more efficient, more effective, get things done faster, whatever. But we're blind to what's built into that assumption. So here's how we identify and challenge assumptions. If you can imagine, take out a piece of paper and you just basically write three columns on our spreadsheet or Word document, it doesn't have to be paper. And in the first column, you write whatever the goal is. So someone might be like, " Well, we need to build this program for this kind of audience," or whatever. So you state the goal, and that's just one line, whatever. And then in the second column, and you want this to be as many rows filled out as you possibly can, write down what all your assumptions are about what it's going to take to achieve that goal. So some prompt questions would be things like, how much is it going to cost? How long will it take? What kinds of expertise is needed? How many people does it take? What are the barriers? And you can even get into specific things like, do you need to speak English to do this? Write down whatever assumptions you can come up with. And then once you've developed this list and you want that list to be as long as possible, then look at each assumption and challenge it and say, " Is that true? Does that actually need to be the case?" And then here's what's going to happen. A lot of them, I don't know what the number is, but I'm just making it up from the hip here, but let's say 90% of them, 95% of them, you're going to confirm the assumption. You'll be like, " You need to speak English to do that. You need a million dollars to do that. You can't do that with fewer than three people." Whatever it is, you're going to be like, " Absolutely, you need that." But let's say it's 95% are confirmed. That's a good thing. You've confirmed these are valid assumptions. The other 5% are where the opportunities are. So now, you're going to go through it. You're going to be like, " Yep, this is good. This is good. We should definitely do this. This sounds right. Wait a minute, do we need to do this one? Maybe this assumption is flawed. Maybe this assumption is outright incorrect." And now you can start poking at that and saying, " Okay, well we've identified a flawed assumption about how we're doing things. What are some opposite ways we can do that?" And now you can identify, and that's what sort of this third column is, is once you've found those that maybe have flawed or incorrect assumptions, now you give yourself the permission. It doesn't matter how ridiculous the idea is, " We need to build a Tyrannosaurus Rex perfect life- size model to do that." It's like, "I don't care. Put it on the list. That's a different way of doing it." 
Then just allow yourself to state bad ideas. Now you've identified the flawed assumptions, you've challenged them, you've come up with this whole list of other ideas, and a couple of those ideas are going to be viable. That's a very productive way for an organization to say, "We're not being contrarian just for the sake of not wanting to do it that way. We found a better, or at least different, way to do this."

When Matt Damon and Ben Affleck were writing Good Will Hunting, which went on to be this hugely successful, award-winning film that launched both of their careers, they were broke. I don't even think they were in school; they were just some dudes from Boston at the time, and they wrote this screenplay. I can't remember which one said it to the other, but one of them said something like, "Don't judge me by the quality of my bad ideas. Judge me by the quality of the good ones." That freedom we give ourselves to say, "Let's talk about bad ideas," there's so much power in that.

That's something my business partner and I are always doing. I get so excited when he calls me and says, "I've got a terrible idea," and I'm like, "Yes, let's talk about that." Obviously you ultimately reject the terrible idea. You might say, "No, here are the eight reasons that's bad, but we can pull a seed out of it." And when you pull the seed out, that becomes a good idea. There's so much power in that. But you can only get there if you identify the assumption, challenge the assumption, and give yourself the permission to be ridiculous.
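For readers who want to try Ted's worksheet in code form, here is a minimal sketch of the three-column exercise. This is purely illustrative and not from the episode; all class names, fields, and example entries are hypothetical.

```python
# A minimal sketch of the three-column assumption-challenging exercise.
# Purely illustrative: all names and example entries here are hypothetical,
# not taken from the episode.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Assumption:
    statement: str                 # column 2: an assumption about the goal
    holds: Optional[bool] = None   # None until you challenge it
    # column 3: alternative approaches, bad ideas welcome
    alternatives: List[str] = field(default_factory=list)


@dataclass
class Exercise:
    goal: str                      # column 1: the stated goal, one line
    assumptions: List[Assumption] = field(default_factory=list)

    def opportunities(self) -> List[Assumption]:
        """The assumptions that did NOT survive scrutiny; per Ted,
        that's where the opportunities are."""
        return [a for a in self.assumptions if a.holds is False]


# Usage: state the goal, list every assumption you can, mark which ones
# hold after challenging them, then brainstorm alternatives for the rest.
ex = Exercise(goal="Build a security awareness program for new hires")
ex.assumptions = [
    Assumption("It needs a six-figure budget", holds=True),
    Assumption("It must be delivered in person", holds=False,
               alternatives=["async video modules", "capture-the-flag labs"]),
]
for flawed in ex.opportunities():
    print(flawed.statement, "->", flawed.alternatives)
```

The design choice mirrors the exercise itself: confirmed assumptions stay on the list as validated constraints, while only the ones marked as not holding are surfaced for brainstorming.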

Chris Clarke: I love that. Yeah, big fan of building on those bad ideas. But Matt Damon and Ben Affleck, I did not know that was how they got their start.

Ted Harrington: Yeah, they were completely unknown. They wound up winning, I think, all the major awards at the Oscars, and then they won something else, too. And I think they were 20 or 22. No one knew them; they were complete Hollywood outsiders, and somehow they got Robin Williams to agree to be in this movie. And the movie was incredible, and now they're hyper-successful.

Chris Clarke: That's awesome. I appreciate you sharing those perspectives on hackers, and I'm going to start taking some of these systems and heuristics back to my team so we can work on how we operate and how we think like hackers. Any other last thoughts on this space to [inaudible]?

Ted Harrington: Yeah, I think if we were going to wrap it up in a bow, it would be to think about this in a positive light. One of the things that frustrates me a little about the way people talk about security: you go to a security conference, especially the vendor floor, or even look at what's happening right now, where MGM is under this massive cyber attack and they're offline. I don't know this for a fact because I'm not in any rooms with MGM, but I imagine they're getting absolutely bombarded by security vendors saying things like, "Well, if you had our product, this wouldn't have happened." And no, that's not necessarily true. Products help, but we should be thinking about this collaboratively. We should be thinking about it positively. There are smart, passionate people working on these problems, and security is measurably getting better over time.

It's really easy for the doom and gloom to take over: "Oh, these big companies get hacked. Everyone's getting hacked. I'm going to get hacked. What's the point?" I don't think that's a good way to think about it, because the world is evolving and adapting. New technologies keep coming out; almost every year there's something where it's like, "Oh, this changes society." And the reason we're able to keep innovating, introducing these technologies, and adopting them without the world completely collapsing, because there are evil forces out there who would love to see the world collapse, is the passion of the people in the security industry who are making all this effort. So there's a really positive light to see here: it's a big problem, but there are really committed people working on it.

Chris Clarke: That's super cool, and I appreciate you ending on that piece of it. There's a similar idea in the book The Better Angels of Our Nature: because of the prevalence of news and how connected we are, it's so much easier to see the bad, but on the whole we're trending toward a more positive society. And I think that's a cool way to think about this too. We focus on the MGM story, but, much like the unknown unknowns, we haven't talked about all the things that are going great in the security space.

Ted Harrington: Yeah.

Chris Clarke: I appreciate you sharing it. I do want to end on a few, call it "Risk or That" questions, which are more of a "Would you rather?" of risk, but I'm going to start with a kind of easy one: who's your favorite Ted, Ted Lasso or Ted Mosby from How I Met Your Mother?

Ted Harrington: Oh, I have a really profound problem with laugh tracks, so I wouldn't even put How I Met Your Mother in the discussion. And Ted Lasso, that might be the most perfect television show. It's funny, but you feel emotions, amazing character development, wonderful plot line. Ted Lasso all day.

Chris Clarke: Okay, I appreciate you sharing that. It makes me feel better about this podcast. So then, thinking about which was the riskier experience for you: writing a book or giving a TED Talk?

Ted Harrington: Writing the book was probably the scarier, bigger transition, because at that time I had not written a book before, and I had so much fear around doing it. Once I committed to writing the book, I was like, "All right, I see this problem. I know the audience, I know the problem they have, I know I can solve it, and I want to help people. I've got to write this book. It's something hard, I'm motivated, I've got all the reasons to do it." And then the fear starts creeping in. I was really grateful that the company I worked with to publish my book gave me quite a bit of coaching, and one of the things they coached me on was how to deal with fear. They said, "Fear is what prevents people from writing their books." We did this writers' workshop, and that was the very first thing we did on the first day. It wasn't, "Here's how you should structure a book." It was, "Fear? Okay." It was really powerful, because we did this exercise where we identified our fears, talked about how likely each fear is to happen and what the worst case is if it does come true, and then how to mitigate it. I had fears about things like writing a bad book, not serving my audience, feeling like a fraud, or someone calling me a fraud.

Getting through that was really, really difficult. Now I'm on the other side of it, and I've been able to take those learnings and apply them to my life. I definitely applied them to the TED Talk, which came next; the book kind of led to the TED Talk, and I had tons of fear then, too. I spend my time on stages all the time, but when I was about to get on that stage, I was like, "Am I going to pass out right now?" The nerves were flying in that moment. So they both definitely had their challenges, but that's what makes big things worth doing: you get to the other side and you're like, "I just learned something about myself." But I think the moment of transitioning from not being an author to being an author, that was a big leap.

Chris Clarke: I genuinely cannot imagine, but I appreciate you sharing that. And then maybe the last one, which I've been asking all of our guests: we've been talking about cybersecurity this whole time, about hackers, about the systems we set up. When you think about cyber and the risks organizations face, are those risks more likely to come from the outside, where a hacker gets it right? Or from the inside, in the sense that someone clicks a phishing link or we didn't set up our systems right in the first place? Are those risks more likely to be external or internal to your organization?

Ted Harrington: I think there might be a flaw in the question itself, because I actually don't think the perimeter of an organization exists anymore. There is no inside or outside. Even in the scenario you just described, where the person clicks the link, that's an insider threat, an accidental insider. They did something they shouldn't have done by accident; they weren't meaning to. You could say that attack originated from the outside. But what about the malicious insider, someone who works for, say, a nation state and goes to work for some big company so they can attack it from within? That's the insider threat, but is it originating from inside or outside? I don't know; it's debatable. When people work from home and their system gets compromised on their home network, is that inside or outside?

So I think the way we should probably reframe the question is: are the risks more likely to come from accidents or from intent? And I think you could say they all come from intent, even the ones that look accidental. Someone accidentally clicks the link, but they were duped into doing it. The reason I think that's an important distinction is that people will say things like, "Oh, humans are stupid. Humans click links and they download stuff." That's not a great way to think about it. Yeah, they clicked, because the attack took advantage of some ignorance they had. That was a well-planned attack. So intent is probably how I'd answer that, because I don't think the perimeter exists anymore.

Chris Clarke: All right. No, I appreciate that. That was all the questions that I had. I appreciate your time, and yeah, just thank you so much for sharing your insight. I got a lot that I'm going to go back and start to work on with my team.

Ted Harrington: Awesome. Again, if people want more information about the book or any of the things our various companies do, connect with me on LinkedIn, watch my TED Talk, whatever. It's pretty easy: just go to tedharrington.com, and I'm pretty responsive if you want to shoot me a note through there.

Chris Clarke: Awesome. Well thank you so much, Ted. That's our show.

Ted Harrington: Thank you.
