The Entropy Podcast

Murderboards, Metrics, and the Future of Cyber with Ross Young

Francis Gorman Season 1 Episode 36

In this episode of the Entropy Podcast, host Francis Gorman speaks with cybersecurity expert Ross Young about the complexities of cybersecurity leadership. They discuss the challenges of budgeting, the importance of tool utilization, and the often overlooked impact of reputational damage. Ross shares insights from his book, 'Cybersecurity's Dirty Secret,' and introduces the OWASP Threat and Safeguard Matrix as a framework for understanding cybersecurity threats. The conversation also delves into the evolving role of AI in cybersecurity, the necessity of a comprehensive cyber strategy, and the skills required to become a successful CISO.

Takeaways

  • Ross Young emphasizes the importance of budgeting in cybersecurity leadership.
  • Understanding tool utilization can prevent wasted resources.
  • Reputational damage may not be as impactful as previously thought.
  • The OWASP Threat and Safeguard Matrix helps identify material threats.
  • AI in cybersecurity requires careful oversight and governance.
  • A comprehensive cyber strategy should include people, processes, and tools.
  • Vulnerability management will become increasingly challenging with AI advancements.
  • Building relationships within the organization is crucial for a CISO.
  • Gamification techniques can enhance organizational change.
  • Continuous learning and skill development are essential for aspiring CISOs.

Sound Bites

  • "Why Most Budgets Go to Waste"
  • "We haven't fully deployed our existing tools."
  • "We need to have oversight on AI."


You can also check out the following items discussed during the show:

CISO Tradecraft episode on strategy:
https://cisotradecraft.substack.com/p/refreshing-your-cybersecurity-strategy?utm_source=publication-search

Buy Ross's book "Cybersecurity's Dirty Secret"
https://www.amazon.com/Cybersecuritys-Dirty-Secret-Budgets-Tradecraft%C2%AE/dp/B0G26WHVTG/  

Francis Gorman (00:01.752)
Hi everyone. Welcome to the Entropy Podcast. I'm your host, Francis Gorman. If you're enjoying our content, please take a moment to like and follow the show wherever you get your podcasts. Today I'm joined by Ross Young, a seasoned cybersecurity executive and educator with more than two decades of experience spanning US national security and Fortune 100 enterprises. Ross has served in critical roles at the CIA, NSA, and Federal Reserve Board. He's also held senior leadership positions, including divisional CISO at Capital One and CISO of Caterpillar Financial.

You may recognize Ross as the co-host of CISO Tradecraft, a leading podcast dedicated to developing the next generation of cybersecurity executives. He's also the creator of the OWASP Threat and Safeguard Matrix (TaSM). On top of all of that, Ross has just authored a must-read book for all CISOs called Cybersecurity's Dirty Secret: Why Most Budgets Go to Waste. Ross, it's great to have you here with me today.

Ross Young (00:53.368)
Hey, thank you, it's my pleasure to be on the show.

Francis Gorman (00:56.698)
Ross, I was looking forward to this conversation, not only because you're a legend in the cybersecurity field, but because I'm loving the structure of your podcast. And I have a bit of podcast envy. I think just before we got on, I was talking about your YouTube channel. But the one thing I have the most envy about is your GitHub structure. It definitely has that military dedication to sculpting out the different domains and how they all come together.

What triggered you to start the podcast in the first place? And then did it just evolve naturally and all of these different kind of aspects of it evolve over time?

Ross Young (01:33.802)
So I was at Capital One as the divisional CISO and I had just left to go to Caterpillar Financial as the CISO there. And I had a lot of friends who said, hey Ross, how do I do what you just did? I want to go from, you know, being this director or senior director and become a first time CISO. And so I started having conversations with them. And then I noticed I had the same conversation with like 30 of my friends.

And so I said, this is getting kind of old. I just want to record it once and then tell all my friends to watch the podcast so I could, you know, mentor at scale. And so then I was looking at who I wanted as my co-host on the show, and G Mark Hardy was the SANS teacher on all the cybersecurity leadership courses, like 512 and 514. And I thought, who better than him to be my co-host?

I recruited him, you know, much like my CIA background, to come join me and partner together. And we built the podcast, and every week we try to do a fun, different episode to teach people how to become a CISO, and then how do we organize that so that people can best learn what they need to do to grow on their journey.

Francis Gorman (02:44.13)
I love it. You know, simple, simple things. It sparked a bit of inspiration and grew from there. My journey wasn't as fun. We had a newborn and I realized, after two weeks cooped up in the house, I needed to talk to intelligent people again. So that's where the Entropy Podcast got born out of. Ross, I want to talk a little bit about the book. I think this is something that every cybersecurity professional in the leadership space struggles with: budgets, allocation of budgets, measurement of budgets.

Can you talk to me a little bit about where the idea for the book came from and, without giving too much away, what's the background that will ring true for readers of that content?

Ross Young (03:25.452)
Yeah, so here's the book. It's called Cybersecurity's Dirty Secret: Why Most Budgets Go to Waste. And it was based on my experience when I took the role of CISO. I felt like I knew the technology really well. Like, I had spent a lot of time in cybersecurity. But what I didn't realize is the budget was going to be one of the hardest things I needed to learn. Like, how do I figure out which of the 80 tools we need to murder and get rid of

out of our organization? How do I actually gamify metrics so that I can get better vuln management and lower costs around vuln management than ever before? How do I actually figure out the best way to help my people grow in their career so that they don't leave costing us more budget? And there were all these little lessons learned that I kept having that were finance related. And I'll just give you one other simple example.

If you were to ask me before I took a CISO role what I thought would be one of the hardest problems, I would have expected it was getting money. I've always heard of, you know, cash-deprived CISOs who don't have enough money for their programs. I found quite the opposite. I was able to convince my leaders: hey, here's the money that I need to do this initiative that's really going to secure and safeguard us. No problem, Ross. Here's the money. Go do this thing. And then I found it took a year to go through procurement.

Right. I found it took a year to deal with master service agreements and dealing with lawyers on contract terms. Nowhere in my career was I ever taught that, not in the CISSP or any other certification I had to go down. So I really wanted to take those lessons learned and share them with people, because I think those are the real-world challenges CISOs are finding.

Francis Gorman (05:14.81)
I think you take a year to get through procurement, and the technology landscape changes, and then you get the tool and it's already kind of passed its shelf life a little bit, which is another problem. That's fascinating. It's kind of just the fundamentals. It's the brass-tacks problems of the CISO world and the security landscape. Take another problem, Ross, and you've probably come across this: the consolidation

issue. So, you know, everything is simplified: let's put 20 capabilities into the one tool. And then you look at the tool, and you land it, and you get it in, and you've got two capabilities stood up and you've got, you know, 18 that are sitting there with no one to use them. And, you know, it's almost like you're paying for wasted energy, power, and assets. Have you seen that part of it as well? So you get the tool in, you get the budget aligned, and then you're only squeezing 10% of the asset.

Ross Young (06:12.481)
You know, that's a great point, and it's what I bring up in one of the chapters of my book. And I'll give you the key problem. Almost every CISO I know, they walk into an organization, they kind of get a lay of the landscape and they see, here are the 50 or 80 tools that we're running at this large company. And the first thing they think about is, my gosh, you're not using these five tools that we used at my last company.

You're using terrible tools. We all have these preferences and biases on tools that we just kind of like. And so what do we do? We naturally want to rip and replace those tools at the company with tools that are our preference. Right. And ultimately this means I have to go to the chief financial officer, ask for more money to get, you know, some replacement tools, and build that out. And I think that's a flaw. And I think that's a big mistake too many people in cybersecurity make.

What I think we should do is go to an organization and create a murder board. Okay. Let's look at all of our tools and show me the coverage. Is the EDR deployed on 99 % of the endpoints or only 80 % of the endpoints? And then once we figure out the coverage, the next thing is to figure out what I would call the utilization. If there are 10 important features to turn on in your EDR, did you turn all 10 on or did you actually, you know, turn only three of them on because the others were too noisy?

And so when you have that, now you have, let's say an 80 % deployment and you have a 30 % utilization. When you multiply those two together, you start to get something called an effective protection score. In this example, now you're talking like, you know, 24 % of your endpoints are successfully going to block the things that you want them to block. And so it should give us this like aha moment to say,

We haven't fully deployed, we haven't fully utilized our existing tools. Where in the stack of the 50 to 80 tools that we're buying are we making the biggest misconfigurations and misdeployments that we really need to fix, or should we get rid of these tools that are just costing us money?
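For anyone who wants to play with the arithmetic Ross just walked through, here is a minimal Python sketch of the effective protection score (coverage multiplied by utilization). The tool names and percentages are illustrative, not taken from any real murderboard.

```python
# Minimal sketch of the "effective protection score" Ross describes:
# coverage (how many endpoints have the tool) multiplied by utilization
# (how many of its important features are actually turned on).
# All tool names and figures below are illustrative.

def effective_protection(coverage: float, utilization: float) -> float:
    """Return the effective protection score as a fraction between 0 and 1."""
    return coverage * utilization

murderboard = {
    # tool: (coverage, utilization)
    "EDR": (0.80, 0.30),            # on 80% of endpoints, 3 of 10 key features enabled
    "Email security": (0.99, 0.70),
    "DLP": (0.60, 0.20),
}

for tool, (coverage, utilization) in murderboard.items():
    score = effective_protection(coverage, utilization)
    print(f"{tool}: {score:.0%} effective protection")
    # EDR: 24% effective protection -- the "aha moment" in the example above
```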

Francis Gorman (08:23.034)
After this call, I'm going to take my capability reference model and rename it to Murderboard, because that sounds way cooler. Ross, that's very insightful. There was a conversation I had the other day, and then you put up a post on LinkedIn, and I was like, this is a bit of a snap moment, because somebody said to me in a conversation, and it wasn't a security person, but we were talking about a data breach, and they're like, who cares if your data gets stolen anymore? And I was like, what?

Ross Young (08:28.513)
Ha ha.

Francis Gorman (08:53.182)
That's not a statement you're going to make, is it? And then it set me off down a bit of a path. We've had quite a lot of cyber incidents in the retail sector in the UK. Some of the automotive sector has been hit in terms of Land Rover, Jaguar, et cetera. And I started off looking at the share prices of the companies that have been impacted year to date, looking at the percentage drop in them, and then at the underlying sentiment. And what I started to realize was, in some cases where

there was a necessity, like the supermarket chains, the share prices were starting to recover already, even though they've had these massive breaches and operational problems off the back of them. But then in breaches where, you know, trust was damaged, and that trust was damaged on a level that may be risky to an enterprise, like the F5 breach example where source code got potentially stolen and that sort of thing, it looks like it's a bit of a slower recovery pace. So then you put up a post on LinkedIn

that kind of said, you know, how much is reputational damage? You know, what does that really mean? Can you talk to me about the angle you came at it from? Because it seems to be one of those conversations, like when you see a red car and then you see one everywhere. You know, I haven't seen a red car today; then there's one, there's one, there's one. And now the reputational conversation seems to have a number of different facets to it, and I'd love to get your perspective on it.

Ross Young (10:15.437)
So when we think about cyber damages, when IT is hacked or when you have a massive disruption, there are primary losses and there are secondary losses. And the biggest example of a primary loss is, you hack into a manufacturing company, you put a denial of service on that production line through ransomware or whatever the attack method is, and now they can't produce cars,

or whatever that thing is, for a week. Well, think of how much they're paying in salary. Think of all of these things that are stopped and what they're not able to sell because they're now behind one week in sales. That's a primary loss. And then we have the secondary losses, usually things like fines. So you're probably going to pay like $10 per record lost when you have to pay for some type of identity monitoring for personally identifying information. You're like, hey,

here's a don't-steal-my-identity service that we're going to pay for for each customer. But the biggest one that I've really been surprised on is, we've constantly been preaching, oh, I'm worried about the brand harm or reputational loss. And what I found was, I really don't think that's a thing. I think this is the fear, uncertainty, and doubt that we've been promoting that doesn't actually materialize. And I'll just give you an example of T-Mobile. T-Mobile had 10 breaches in 10 years.

And I can't think of a company that has had as many as T-Mobile. Maybe there are some others. But you would think, if I was really worried about my data being lost as a T-Mobile customer, I might go to AT&T or Verizon and switch my telecom provider. But the data doesn't show that. Actually, the data shows that their customer base has just grown, grown, grown. So T-Mobile is doing very well, and customers actually don't really care about having their data lost. Yes, we don't like it,

but we don't vote with our feet and leave this telecom or cellular provider. And so I think that was a huge thing for me to just kind of say, maybe some of these things we've always thought to be true, like brand reputation and harm, just really aren't so, because the data doesn't support that. Now, that being said, I think there is a micro amount of time. Whenever we see a major hack, for less than a week, the stock

Ross Young (12:38.817)
price will go down, but then it usually returns back to what the normal price is. And honestly, we've seen places like CrowdStrike who've just, I don't know, I think they're like 400X where they were, you know, since the incident a year ago. So I don't feel like this is a long-term brand reputation harm. If anything, it's less than a week from what I've seen. We all have a goldfish memory, I should say.

Francis Gorman (13:03.47)
No such thing as bad publicity, I think, is the takeaway there. It is fascinating. I don't think we ever take time to stop and look at what the data is telling us, you know, some of the time. And when you start looking at this at an aggregated level across all of these different companies and the types of breaches, yeah, it's definitely fascinating. It's something I plan to spend a bit more time on and really dig into and watch over the next year or so, to see what it looks like

in terms of recovery and share price and customer base, et cetera. But yeah, it's just one of those things that you could really go down the rabbit hole on, but it is fascinating to see.

Ross, I want to talk a little bit about the OWASP Threat and Safeguard Matrix, the TaSM. What problem are you trying to solve there, and how did that come about? I had a quick look at it yesterday; I wasn't aware of it before we met. So I just want to get your view, because I love it. I love threat modeling, that's the architect in me. But this might be a new one I have to take out, dust down, and have a proper look at.

Ross Young (14:10.701)
So I read this OWASP project by Sounil Yu called the Cyber Defense Matrix. And what he has is a basic matrix that has the NIST functions: identify, protect, detect, respond, recover. And then across that, he puts the different types of technologies. So think of, how do we identify? How do we use networks? How do we use applications? How do we use data? And there's another one for humans and end users.

And so when you overlay this, this thing allows you to kind of classify cybersecurity tools into categories. Okay, here is a network tool that does detection; sounds like a network detection and response tool, right? And so now you kind of have a way to figure out where you might have gaps in your tooling or where you could better improve capabilities. And as I saw that,

It was really powerful for me to understand this concept and understand how to categorize all the tools. But at the same time, I felt like that was missing the threat piece that we really saw. So I took his model and instead of using the technology layers, I switched to what are the material threats that can harm our company? For example, most people would say phishing or business email compromise has been like a top three attack for

forever in cybersecurity, right? Someone is going to click on a link that's malicious, then they're going to give up credentials and do something that's going to harm the company. When we look at that, we should say, how do I identify who are my riskiest persons who would give up data? Maybe it's the admins or maybe it's the executives who have access to more sensitive data than the rank and file person across the organization.

Then, how would we protect? What would we use to actually stop this? Maybe we would break the links in the emails. Maybe we would use an email security protection tool. Maybe we would have an enterprise browser that limits the damage. We can do a lot of different things here. And by building this out, this matrix of the material impacts, or material threats I should say, now we can actually start to look at:

Ross Young (16:31.819)
what are the things we have to stop in cyber? I don't really have to care about the network layer if I do everything on the application layer very, very securely and I have everything encrypted end to end; I don't really need network observability. But if I have really good defenses against phishing, against, you know, vulnerable websites that are internet facing, against supply chain threats, that I think is what I need to stop as a CISO, because that's where the

multimillion-dollar breaches are, when we look at the insurance data that shows what we really have to stop.

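As a rough illustration of the shape Ross is describing, not an official OWASP artifact, the Threat and Safeguard Matrix can be sketched as material threats on one axis, the NIST functions on the other, and safeguards in each cell. The threats and safeguards below are placeholder examples.

```python
# A rough sketch of the OWASP Threat and Safeguard Matrix (TaSM) shape:
# rows are material threats to the business, columns are the NIST functions,
# and each cell holds the safeguards you rely on. Entries are illustrative only.

NIST_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

tasm = {
    "Phishing / BEC": {
        "Identify": ["List riskiest users (admins, executives)"],
        "Protect":  ["Rewrite links in email", "Email security gateway", "Enterprise browser"],
        "Detect":   ["User-reported phish triage"],
        "Respond":  ["Credential reset playbook"],
        "Recover":  ["Restore mailbox rules"],
    },
    "Internet-facing vulnerable website": {
        "Identify": ["External attack surface scan"],
        "Protect":  ["WAF", "Patch SLA"],
        "Detect":   ["Web log alerting"],
        "Respond":  ["Takedown / hotfix process"],
        "Recover":  ["Blue/green redeploy"],
    },
}

# Print the matrix and flag empty cells -- the gaps are where budget conversations start.
for threat, row in tasm.items():
    for function in NIST_FUNCTIONS:
        safeguards = row.get(function, [])
        cell = ", ".join(safeguards) if safeguards else "** GAP **"
        print(f"{threat:40s} | {function:8s} | {cell}")
```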

Francis Gorman (17:10.766)
Yeah, it's really deep. You can go to many different levels on that one. I'm going to have to read it in a bit more detail. I have the OWASP page open on my other machine, and it's going to deserve a bit more reading to get into the particulars and go across it. But I find these exercises almost...

I don't know what the word is, they ground you in a reality that you wouldn't otherwise think about unless you take the time to walk through them. You know, I think as security professionals we're very busy, and these tools and methodologies that we can apply, that give us the headspace to walk something left to right or top down, are really important for fundamentally understanding the problems that we may face.

Ross Young (17:59.595)
Yeah, and in that regard, I'll give you one way that the OWASP Threat and Safeguard Matrix, or TaSM, has really helped me understand cybersecurity. Right now, most CISOs I know are being asked to build an AI security program. But I would tell you that is a very vague concept. What is AI security? Is it, you know, making sure we don't have hallucinations from our chatbots? Is it making sure we don't lose data when people upload,

you know, Word documents that contain sensitive information to ChatGPT? Is it making sure we don't, you know, over-permission Microsoft Copilot so everybody in the company can see all the data surfaced through the tool? It's a very broad concept. But when we take that down and say, what particular AI threats are we trying to stop, and then you start to lay it against that matrix of how we would identify, protect, detect, respond, recover,

you start to see how different the safeguarding solutions are for stopping hallucinations versus maybe having, you know, MCP servers that have vulnerabilities in them. So understanding what things we're responsible for stopping, and then building out the defense-in-depth approach, means you can now start to build a budget

behind it that says, here's the amount of people, here's the amount of tools, here's the things I have to implement as a process. And now you have a program that you can actually build out. But it's really hard to do that when you just have a buzzword like AI security.

Francis Gorman (19:37.146)
I don't think you're alone on that one, Ross, because that is a universally applied problem, let me tell you. AI is... I've talked many times on this show about my worry about cognitive decline as a direct result of offloading your cognitive load to AI. And I think the meme that resonates with me best is: your future doctor is using ChatGPT to pass their medical exams, so start eating healthy and

exercising, you know. Which, I know there's a bit of fun intended, but it might not be too far away. When we speak of AI, I'm not sure if you've read the Anthropic report where they simulated an environment with a company's emails and then ran all of the top models through it. And basically the scenario was, they set goals that, you know, the AI was to

protect its interests in the US, et cetera, et cetera. Then they gave it access to this fake company's global email box, and in that email box there were a couple of planted

scenarios: one of which was that one of the executives was having an affair, the second of which was that the same executive had a mandate to shut down the AI at five o'clock that day and wouldn't be reachable, et cetera. And there was some indication that other executives were, you know, going to play ball with them in this decision. So in nearly all cases, across all of the leading models, the large language models decided that they would either

resort to blackmail or extortion before trying to replicate themselves somewhere else. Now that's terrifying if you think about it. That's an insider threat by large language model. A hypothesis that's now been proven by one of the largest AI companies in the world, in terms of Anthropic. Where are we actually going to end up here? Like, what's your view from a security perspective on artificial intelligence and how quickly it's evolving, and...

Francis Gorman (21:40.846)
I think, the black-box nuances that it brings with it.

Ross Young (21:46.537)
I think it's like an intern that you bring into your company. If you bring an intern into your company and say, hey, I want you to write code for our most important project and start deploying that. And we're going to give you no training, no governance, no oversight. Can we just all believe that that probably is a recipe for disaster, right? And at the same time, I think it's the same thing with AI. It has incredible capabilities.

But we also need to have oversight on it. We have to have, you know, governance on it. We need to put strict limitations on scope until we build trust. Once the developer, your intern, builds something simple, you're like, hey, that's really good, I'm going to give you a more complicated task, go do this thing. And you start to build more trust until eventually they become that seasoned developer that you then trust with your most important systems. Same thing, I think, has to be done with AI.

We don't give it access to the most important systems first. We train it on simple tasks and make sure it's working. And then if that intern makes a mistake, well, then you have a coaching call or a session where you go and retrain and help them understand how to fix these things. Same thing's going to happen with AI. We're going to do things that say, hey, we're going to stop or disable someone's account if they exfil one gig of data,

because that's obviously nefarious, that should never happen. It sounds like bulk data exfil, until it isn't, until someone has to send, you know, a bulk set of data for an audit request and you've just stopped them. And now every time they try to do that, that, you know, automation task is blocking that thing, stopping the mission, stopping you from talking to your auditors, your regulators, or some other special use case that happens.

So now we need to go back and retune it. We have to have an exclusion list. We have to have limitations on these things. And these are going to come from painful lessons learned. Yes, if we do threat modeling, we're going to be able to predict a lot of these things, but sometimes it's only in hindsight that we actually have the knowledge of, my gosh, we should have thought of that scenario. I wish we would have, you know, a month ago before we did this stupid thing. Right.
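To make the retuning point concrete, here is a minimal sketch of the kind of automation rule Ross describes: block bulk exfiltration above a threshold, but honor an exclusion list for legitimate bulk transfers such as audit responses. The threshold, account names, and fields are all hypothetical.

```python
# Minimal sketch of the kind of automation rule Ross describes: disable an
# account that moves more than ~1 GB of data, unless it is on an exclusion
# list for legitimate bulk transfers (e.g. responding to an audit request).
# Thresholds, names, and fields are hypothetical.

from dataclasses import dataclass

GIGABYTE = 1_000_000_000
BULK_EXFIL_THRESHOLD_BYTES = 1 * GIGABYTE

# Accounts pre-approved for bulk transfers, the kind of exclusion list that
# usually gets added after blocking someone mid-audit. In practice this would
# be a reviewed, time-bounded allow list, not a hard-coded set.
EXCLUSION_LIST = {"audit-response-svc", "regulatory-reporting"}

@dataclass
class TransferEvent:
    account: str
    bytes_sent: int
    destination: str

def evaluate(event: TransferEvent) -> str:
    """Return the action the automation would take for one transfer event."""
    if event.bytes_sent < BULK_EXFIL_THRESHOLD_BYTES:
        return "allow"
    if event.account in EXCLUSION_LIST:
        return "allow_and_log"          # still record it for review
    return "disable_account_and_alert"  # the blunt rule, until it gets retuned

print(evaluate(TransferEvent("jsmith", 2 * GIGABYTE, "personal-drive")))
# -> disable_account_and_alert
print(evaluate(TransferEvent("audit-response-svc", 5 * GIGABYTE, "external-auditor")))
# -> allow_and_log
```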

Francis Gorman (24:01.198)
Yeah, it makes total sense. It does worry me. I do feel like the goalposts keep changing and everyone's so caught up with the magic of AI that it's very hard to, I suppose, plan out your security program and approach. Because, as you said, is it Copilot? Is it Hugging Face? Are we connecting some MCP server on some third party somewhere to give back some agentic-type response to a chatbot?

You know, is it all of the above? And like, I think AI, generative AI, robotics, all of these different disciplines are going to create some really unique challenges for cybersecurity professionals over the next couple of months and years. And you know, it's only going to get more prevalent. I'm not sure if you saw Elon's year end, and the robots, and the vision of, you know, potentially a billion robots into the future, the

suggestion that we won't need prisons because a robot will follow you around and, you know, dunk you on the head with a metal fist if you're not compliant. Some people laugh when they see Elon talk about these things, but if history tells us anything, it's don't underestimate the guy. He may have some extreme views in cases, but

He's an incredible visionary that could definitely execute on his trillion-dollar pay deal over the next couple of years if successful. So I think all of this competition is definitely driving the field forward in ways that we can't even envision ourselves yet.

Ross Young (25:36.011)
Yeah, I think the core of AI is how do we make things cheaper, faster, better? And those are three desirable outcomes. Usually it's you pick two, but I think there's ways to actually get all three at the same time with AI. The problem is a lot of times we have a crap process and we automate it, which means we're pushing out crap even faster. Instead of actually going in and refining and optimizing a process that should be there.

And Elon talks about this too: too often we try to optimize things that shouldn't exist, right? And I think this is the core piece of what we really need to do, which is, how do we optimize a process first, then automate it? And now we've automated goodness in our organization. And I think if we really spend the time to do that, that is really where the power of AI is going to help us achieve

better outcomes, lower cost, faster response times, things like that.

Francis Gorman (26:42.182)
That leads me nicely into my next area, Ross: cyber strategies, and what does a good cyber strategy look like? You've worked in the CISO chair and in government, you've had that multidisciplinary view of the world, and then obviously you've got the privilege of talking to lots of talented people across your podcast series and the interactions you have in that space.

Have you come to formalize a view of what a good cyber strategy looks like or should look like through those engagements?

Ross Young (27:17.622)
So I think the problem with cyber strategies is most people have a very limited view. Here's what I mean by that. We're going to analyze our cybersecurity program according to NIST, according to ISO 27000, and we're going to say we're at a level-three maturity, we've got to get to level five and check all these boxes. And this compliance-driven maturity, I think, is really harming our companies.

I think cybersecurity is so much more than just controls. And I'll just give you an example. We need to have a people piece. We need to have a process piece. We need to have a tooling piece. We need to have a legal piece. We need to have a threat view piece. In order to figure out what problems we really need to solve, then we build out the roadmap to do that. And I'll just give you some things that I used to do.

within the first 30 days when you walk in, go to your chief legal officer and say, what laws do you expect us to comply with here? Because if you don't tell me, I'm not going to have evidence that says we actually followed these laws. But if you tell me these are the three laws I have to comply with, I will make sure you get all of the documentation so we have no audit issues later on. That's not something you're going to find in NIST. That's not something you're going to find in ISO. Or what are the biggest threats to our company?

using this Threat and Safeguard Matrix? Is it identity-based attacks? Is it phishing? Is it, you know, vulnerability management issues where you're not patching fast enough? Well, those would be the top-priority things I'd want to focus on in my strategy, regardless of us having a five out of five on some cybersecurity maturity framework. And so I think you have to really look for those things. For example,

what people do I have in the roles, and do they have the right skill set to lead these programs through the transformations they need to make? Maybe I need to upskill my workforce. Also something you may not find in these frameworks. But these are the types of things you need to put into a one-page strategy that says, here are the major things we're going to focus on. Here are the metrics that say how we're going to measure each of these things each month, to show whether we're making good progress.

Ross Young (29:39.753)
And then here's, you know, the major dollars or Gantt chart that shows when things are going to be accomplished, so that you provide the visibility to the executive leadership team. I think when you do this, you have a more holistic view. You have timelines that you can communicate to executives. You have clear deliverables and metrics, and that really shows people how mature you are and how much you're improving the program.

Francis Gorman (30:03.962)
Super insightful, Ross, and I think they're all really good points; I don't think you can argue with any of them. From someone who's been through the pain of formulating a cyber strategy, looking at strengths, weaknesses, threats, opportunities and the people aspect, I think the people piece is one that actually gets forgotten quite a bit. Especially where you have persistent technologies that may now be considered legacy and the people who run them are heading for retirement age, you kind of have to think of what are the technologies that

are key to underpinning our organization but are no longer sexy. You know, are there people out there training for COBOL, that sort of stuff? Stuff that people thought was dead but is still very much alive in many organizations. Those things have to play back into your dependency maps of people and fitness for the future, as well as the new shiny skills matrices that we see being brought out.

Ross, one thing I want to ask you while I have you here is, if you sit here now and look out at 2026, what do you perceive as the biggest threats we're going to see evolve over the coming months?

Ross Young (31:12.554)
I personally think vulnerability management is going to be the worst thing. If you really look at what AI is enabling us to do, it's allowing us to write code faster. Think of all the AI code automation tools. And that means we're going to be able to reverse engineer things faster. So if I have a bad piece of software identified by a CVE, I think researchers have been able to take that CVE's

publicly disclosed information and create an exploit for that attack. What this means is, if historically I had 30 days to patch, that's not going to be good enough. I'm going to have less than 48 hours. And now for a large enterprise, we're talking Fortune 500 companies, to say you have 48 hours to patch every critical going forward, that's a big ask. That's a really big ask when you're talking,

you know, thousands of systems, if not hundreds of thousands of systems to have zero criticals. But I think with the weaponization of exploits and reverse engineering CVEs with some of these LLMs, that's going to be the new norm. And I think that's a very difficult place from where we are today.
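As a small illustration of what that operational shift could look like, taking the 48-hour figure Ross mentions, a vulnerability program might track each critical finding against a 48-hour clock instead of a 30-day one. The CVE IDs, hosts, and dates below are made up for the example.

```python
# Illustrative sketch: measuring critical-vulnerability patch times against a
# 48-hour SLA instead of the traditional 30 days. CVE IDs, hosts, and dates
# below are fabricated placeholders.

from datetime import datetime, timedelta

SLA = timedelta(hours=48)

findings = [
    # (cve_id, host, disclosure_time, patched_time)
    ("CVE-0000-0001", "web-01", datetime(2026, 1, 10, 9, 0), datetime(2026, 1, 11, 8, 0)),
    ("CVE-0000-0002", "erp-db", datetime(2026, 1, 10, 9, 0), datetime(2026, 1, 14, 17, 0)),
]

for cve, host, disclosed, patched in findings:
    age = patched - disclosed
    status = "within SLA" if age <= SLA else f"SLA breach by {age - SLA}"
    print(f"{cve} on {host}: patched in {age} -> {status}")
```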

Francis Gorman (32:31.972)
Okay, everyone strap in for a hell of a ride into 2026. No, you're not the first person that has had that sentiment. So yeah, I think we watch now with anticipation and angst to see can our vulnerability programs keep pace with the attack vectors that are going to target them. Ross, before we finish up, you said at the start of the conversation that the podcast grew out of conversations with your friends that multiplied

into the double digits, around what that path to a CISO is. Before we leave, can you share some of the key things people need to consider if they want to sit in that chair someday?

Ross Young (33:12.556)
So what I would say is there are four levels of skills that you typically grow through in order to become a CISO. The first level is what you generally get in your first five years of experience: how do I gain technical mastery over a domain? Maybe I'm a pen tester and I learn everything about offensive hacking. Maybe I'm on the incident response team and I learn everything about the SIEM tools and responding in the SOC. Maybe I'm a GRC analyst and I learn all the laws, regulations,

and, you know, GRC tools and how to comply. After you do that, you're going to eventually level up into a first-line manager role. Here, it becomes about communications. How do you have one-on-one conversations with your team members? How do you get them to trust you, to follow you? That's a totally different set of skills than technical skills. And then after you develop those, I'm sorry, I should have called those management skills, the next set is the leadership skills,

which is you become a manager of managers and now you're trying to have massive impact and signature accomplishments on things that drastically change the organization. And that influence and those skills are so key. And then finally, when you get to the C-suite, it's about having the political skills. And this is where it really comes down to friendships and influence to an extreme. When you want

to understand how to partner with the chief financial officer, they don't really care how cyber astute you are; you are more than they are. But if you can do lots of favors for them and make their life better, they will love working with you. And then there's going to come a time when you can call in a favor from the chief financial officer and they help you, because friends help friends. And a lot of times you don't realize how much time you need to spend on making

good friendships in the organization. But if you do that, that's hugely impactful. The last piece that I will say is sometimes you need to impact a whole organization. And if you're talking a hundred thousand person company, you don't have enough time to make a hundred thousand friends. There just isn't enough time in the day. So using gamification techniques to understand how to move things with leaderboards and, you know, virtual currencies and recognitions and incentives and all of these things.

Ross Young (35:38.377)
like how video games work, is a powerful way that you can really scale up your capacity to make change. So these are some of the things we teach on CISO Tradecraft. We have a whole GitHub page where you can find any of these things to help you learn and grow. So learn the technical skills, the management skills, the leadership skills, and the political skills if you want to succeed at the uppermost levels of leadership.

Francis Gorman (36:04.142)
Ross, that's fantastic advice to finish up on. And really thanks for taking the time out of your hectic schedule to join me today. It was really insightful and I hope the listeners got a lot out of it, but I thoroughly enjoyed the conversation.

Ross Young (36:17.205)
Well, thank you. It's been my pleasure to be on the show. I'll just give one more plug for my book, you know, Cybersecurity's Dirty Secret: Why Most Budgets Go to Waste. It includes 20 years of my experience, and I promise anyone who reads it will learn how to save their organization thousands to millions of dollars in waste. It's just proven time and time again. So I can't wait to engage with more people and help more organizations. So thank you, Francis, for bringing me on the show.

Francis Gorman (36:45.402)
You're very welcome. I think the strapline for this will be: listen to this podcast and save thousands to billions of dollars for your organization. Ross, thanks very much. It's been a real delight having you on, and best of luck with the book. I think it's going to be really valuable. I'm going to preorder my copy this weekend.

Ross Young (37:03.647)
Amazing. Well, thank you again, and I wish all your listeners happy holidays.

Francis Gorman (37:08.59)
Thank you.