AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

Who Owns AI When It Breaks? | NightDragon

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 60



AI is no longer an experiment — it’s an accountability test.

In this episode of the AI Proving Ground Podcast, Dave DeWalt, Founder and CEO of NightDragon, and Kate Kuehn of WWT unpack what happens when AI systems fail, misfire, or create real-world risk — and who ultimately owns the outcome.

As organizations look toward 2026, boards want visibility, CEOs want measurable impact, and the lines between IT, security, and the business have disappeared. AI accelerates opportunity, but it also accelerates exposure — collapsing decision timelines and reshaping responsibility.

This conversation explores how accountability is shifting to the top of the enterprise, how leaders should think about ownership when AI breaks, and why resilience, governance, and speed now define competitive advantage.

Support for this episode provided by: Graphiant

More about this week's guests:

Dave DeWalt is founder and CEO of NightDragon, a venture and advisory firm focused on building the world's leading SecureTech platform. A four-time CEO, he has led iconic companies including FireEye, McAfee, and Documentum, creating over $20B in shareholder value. A longtime board leader and public servant, DeWalt has advised four U.S. administrations on national security and cybersecurity and is a recognized voice on technology risk and resilience.

Dave's top pick: Infrastructure as a Strategic Target of War

Kate Kuehn joined WWT in 2024, bringing more than 25 years of experience leading and advising cybersecurity, technology, and AI strategy. She has held executive and board roles across the cyber ecosystem — including CISO, CEO, Chief Trust Officer, advisor, and board director — with experience at companies such as Aon, BT, and Verizon. A trusted advisor and award-winning leader, Kate focuses on helping executives and boards align cyber risk, AI, and business strategy in an increasingly complex threat landscape.

Kate's top pick: AI Won't Save You: Easterly, Joyce and CISOs on the Cybersecurity Reality No One Wants to Hear

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Why 2026 Changes Everything

SPEAKER_02

From World Wide Technology, this is the AI Proving Ground Podcast. A few years ago, AI and cybersecurity were about awareness. Last year, in 2025, it was more about exposure: breaches, vulnerabilities, proof that the attack surface was bigger than anyone wanted to admit. And now, as we move into 2026, a key question is becoming clear. It isn't if AI will transform the enterprise, it's who's accountable when it does. Today's guests speak right into the heart of that shift. We're lucky to have Dave DeWalt, who's been on the front lines of cybersecurity for decades as an operator, a CEO, and now as a board member, helping some of the world's largest companies think through AI, risk, and resilience. And Kate Kuehn is someone you've heard on this show before, a leader who sits at the intersection of cyber risk, national security, and the boardroom, helping organizations understand not just the threats, but the business impact behind them. Together, Kate and Dave address and analyze the global threat landscape, the rise of AI-powered attackers, and why defenders finally have a chance to fight back if they move fast and build trust the right way. So let's jump in. Kate, welcome back to the show, the AI Proving Ground Podcast. How are you today?

SPEAKER_01

I'm good, Brian. Thanks for having me.

SPEAKER_02

Yeah, and Mr. Dave DeWalt, thank you so much for taking the time. I know you are incredibly busy.

SPEAKER_03

Oh no, Brian, thanks for having me. I'd do anything for WWT as well. So perfect to be here.

Who Owns AI When It Fails?

SPEAKER_02

Great, great. I'm gonna start you both off on a little bit of a multiple choice question here. So if 2024, as it relates to AI and cyber, was a year of awareness, and 2025 was more of a year of exposure, what is the year 2026 going to be defined by? I'll give you four choices here. A reckoning, B, accountability, C, survival, or D, transformation.

SPEAKER_01

I'm gonna go with E, action, but close to D, transformation.

SPEAKER_03

Yeah, I'd probably say transformation as well, although I was looking for all the letter A's because, you know, with AI and agentic, we've got everything leading to the letter A. So I would have said accountability too, but I think transformation.

SPEAKER_01

I think '27 is going to be accountability. We're still figuring out what we're doing here and building. And, you know, we're here at the technology summit today, and one of the interesting things that was said is that we're starting to see use cases, some that are really succeeding, and a lot that aren't. And I think this is the year where we're gonna start to see that refine and really start to action. And so the accountability aspect of that, I think, is what we're gonna start to see next year. So I think that's right.

SPEAKER_03

Yeah, I think that's right on. I feel that in the boards that I'm on as well, and in the board advisory work. You know, a year or two ago, everybody was asking, well, what's your AI strategy? And then this past year it's, give me all the project dashboards for my AI projects. And to your point, now the accountability and KPIs around those AI projects are creeping in as well. So you're right about that. It's coming soon.

SPEAKER_02

Yeah. Well, if we're going with accountability, who does the accountability lie with? Is it gonna be executives, boards, shareholders, the employee base? Who does that accountability lie with? Or maybe it's a cover-all and you're gonna say all of the above?

SPEAKER_03

Go ahead.

SPEAKER_01

Go ahead, Dave.

SPEAKER_03

No, I would start with the CEO, because that's where the buck stops. And you're seeing almost every CEO of every Fortune 500, you know, small or large company, realizing the impact of AI on their business is something they've got to learn. And it's not just, okay, let's save cost with AI and increase productivity, or potentially create some efficiency in the organization. It's how do we use AI to make money? How do we really drive not just profitability, but real revenue enhancements with this? And it's really wonderful to watch that CEO leadership flow down. But it takes a village, as we always say, right? So it's the CEO working together with the other C-level, working with the whole company, convincing them. It's a powerful community when everybody gets on the same page, and AI is a unifier for that. But it starts at the top, I think.

SPEAKER_01

It's interesting when you talk about accountability, and Brian, you've heard me say this before: we really have two sides of the coin, digital risk and digital opportunity. And what we're seeing with AI is that it's really starting to bring the walls down from a risk perspective. So it's no longer IT, apps, security, you know, traditional technology, all operating on their own. And from a board and opportunity perspective, Dave's exactly right. Companies use AI for one of two things, either cost transformation or brand differentiation. Those are the two reasons you start using it. But it's creating this atmosphere where all of a sudden all of the leaders, all of the C-suite, have to understand the opportunity, and not only the technical and cyber risk, but the business risk associated with it. So it's creating a whole new definition of accountability, because we have to understand both the opportunity and the risk, as Dave talks about from a board perspective: everything from financial risk, IP risk, reputational risk, and the opportunities associated with them.

Threats Are Moving Faster

SPEAKER_02

Yeah. Well, you know, what are we accountable for? Let's backtrack a little bit here, Dave. I mean, you're about as plugged in as anybody we've had on. Well, aside from the queen of cyber here. Between the two of you, definitely the most plugged-in pair that we've had on this show. But, Dave, walk us through where we're at right now with the global threat landscape and how AI has either accelerated those threats or helped improve the threat landscape, for that matter.

SPEAKER_03

Yeah, it's a long answer, but we're really watching, you know, with any technology there's inertia, and this one's got a massive capital letter I for inertia. We see the risks around it. And what keeps me up at night is the speed of adoption of AI and this technology curve we're on. We always watch security be a laggard to the adoption of technology, and this is kind of the case again. You know, we haven't really put trust and safety ahead of the adoption. And whenever that's happened, you get this security gap. And when you get this security gap, the attackers get a real capability to exploit. Having said that, we've learned a lot of lessons. We have a lot of scars from all those gaps over the years, and you can see this community is trying to rally quickly around that security gap. And I'm hoping we can really create security by design better, secure rollouts; the whole DevSecOps pipeline needs to be secured in order to be safe. And if there's ever an area where it will scare people off, it's if there isn't trust with the AI platforms, and if we start getting that virality of no trust, it's going to slow it down. So this is a really important time for us. And to your point, and Kate can say more about this, I've never been more optimistic than I am now. After 20-plus years, I finally see tools led by AI and agentic frameworks that can automate the SOC and augment the humans. And there's almost no company out there that doesn't have a limited budget for cybersecurity. So now we have this ability, through autonomy and agentic capabilities, to multiply the SOC exponentially to create response. And that's exciting, because the attacker has always had this advantage against everybody else, and now the defenders have AI too. So we could hit both sides of that coin, but for the most part, I'm more optimistic than ever.

SPEAKER_02

I love that. So, a scary potential future, but you are optimistic. Kate, do you share that optimism?

Nation-States Go AI-First

SPEAKER_01

I do, and it's interesting. You know, there are again two sides to the coin. So I was reading a Google report recently, from January of last year, that documented 57 APTs, advanced persistent threat groups, using AI. We expect that to actually double for the report this year. And it was interesting how they're using it. So China's using it for, obviously, espionage and, you know, the normal social media things you would think of, but also code revision. We saw Russia using it to take old versions of malware and enhance them in new and different ways, using it to do code revision, things like that, and also, again, social manipulation. North Korea was using it mainly for research on how to, in essence, accelerate some of their, you know, we've all heard about their insertion techniques of placing employees in Western companies. They were using AI to accelerate their ability to do so. So it was fascinating how nation states, their advanced persistent threat groups, are starting to use AI. And I think we'll see that trend continue. The interesting thing on the opportunity side is, I look at the work we're doing in the AI Proving Ground, and we now have a force multiplier of companies that are bringing out incredible solutions, like Dave was saying, using agentic AI, that are allowing us to accelerate things like the SecNOC trend: blending security operations and network operations in new and unique ways. You know, thinking about the SOC of the future, the ability to leverage large-scale enterprises from an OT perspective, IT, physical security, and create a much more robust platform to really apply AI onto the attack surface and help us understand where those new threats are coming from.
And it was interesting: the FBI said last week that they're positive that AI is really the tool of the future to help us, you know, safeguard against some of the nation state threats that we're looking at. And I couldn't agree more with Dave. It makes me very optimistic that we're going to be able to start getting ahead of this fight.

Exploits at Machine Speed

SPEAKER_03

Yeah, I was going to add on for a minute, because I'll give you a scary factor too, and then to complement what Kate was saying. You know, this past year we have seen more vulnerability releases and disclosures than we've ever had. And that's just because the attack surface has gotten bigger. We see a lot more compliance and collaboration and bug bounties and programs, and these vulnerabilities come out. You know, I mentioned earlier 19,972 CVEs released so far in the first nine, ten months of the year. What's a CISO to do, or a CIO to do, when you have 20,000 vulnerabilities to patch, and they're all called Common Vulnerabilities and Exposures, CVEs? And so they can't. But usage of AI and autonomy can help us here a lot. Autonomous pen testing: a CVE comes out, how do I see it in my attack surface quickly? How do I start using AI proving grounds to really understand the impact that it could have, risk-quantify that, and go from event detection to response in milliseconds too? And you heard about some of the breaches that occurred over the last few months, one of which was F5, which had, from the time of exposure of the vulnerability to the time of exploit by the Chinese national organization, 11 hours. Well, AI is just shrinking that and shrinking that and shrinking that. But to be optimistic again, we also have the capability to respond faster and faster to the vulnerability, fix that vulnerability, patch that. And everything we know about cybersecurity says that the longer the time goes, the bigger the risk gets. So if we can go from detection of an event to response to that event quickly, we reduce the risk. And that's the optimism.

SPEAKER_02

Are we there right now, where we can get to those patches quickly, or at the same pace as AI? Both of you have mentioned the word future a couple of times. Are we at the future yet? Or until we get to the future, where are those gaps? Where are the mismatches?

SPEAKER_01

There's a double edge with this. So yes, I think we're at the future on some levels, but the reality is, there's still governance around how you patch, when you patch, what you do. And so at this point, one of the interesting things that we're seeing with AI is, as we start to inject more AI into patching, into creating responses, we have to play catch-up with how we look at the discipline around how we respond. And I think we're at the beginning stages of what that's gonna look like. But the optimism, to Dave's point, is that the fact that we have technology that can create autonomous response, that can help us, in essence, defend more in real time, is awesome. And I love that we have a partnership with Cisco and other companies that are starting to do almost real-time patching in their devices as we see new things come up. We now have to catch up from a GRC perspective, allowing that technology and trusting that technology as we continue to automate how we respond.

Governance vs. Autonomy

SPEAKER_03

And now what we're watching is the attackers realizing that there are portions of the attack surface that are getting harder and harder to exploit. So it's a race to bare metal, I call it. It's almost a silicon-to-satellite kind of problem. And the attackers are going deeper and deeper into the BIOS, the firmware, the ASIC, the compute platform underneath, the hardware layers. So we have to bring this AI autonomy and pen testing game all the way down, from silicon up. And if we're able to do that before the bad guys can exploit it, we stay ahead of the risk problem.

SPEAKER_01

And to Dave's point, one of the things I've found really interesting, and I'll use China as an example. Dave said it on stage earlier today: he was talking about how everything in security is now a typhoon. It's totally true. We have lots of typhoons. We've got a weather system going on, Dave. It's amazing. If you start to analyze where the nation states are attacking, while we love talking about AI and all these amazing new capabilities, the reality is that a lot of what they're going after in order to insert themselves are CVEs from 2018. You know, vulnerabilities from five, six, seven years ago, because there's still a lot of legacy tech debt in our critical national infrastructure and in a lot of our organizations. And so they're leveraging old exploits that we would expect to have patched, to have done things with, to have taken out, and asking how they can use them in new ways. The reason I mention this is that AI is going to be one of the ways we can quickly help detect how they're leveraging and using old things. And as Dave said, it's a race to metal. And then, how do we actually raise the bar as companies grapple with AI, bringing in new use cases, bringing in new capabilities, and yet dealing with legacy technology that's sitting in their environments?

SPEAKER_00

This episode is supported by Graphiant. Graphiant offers next generation networking solutions that simplify and secure enterprise connectivity. Experience seamless network management with Graphiant's innovative platform.

SPEAKER_02

And, you know, there's no higher-leverage situation, it seems, than in the cyber world. So how should organizations, or leaders within companies, start to think about building that trust within their organizations, so that the company is more open to finding those use cases or areas to innovate that would make everybody more secure?

Legacy Debt Meets AI

SPEAKER_03

You know, there are a lot of areas to focus on with that question, but I believe it starts with education and culture and leading from the top, like we talked about earlier. AI can be scary to some, but it can be an incredible enlightenment to these businesses as well. So the more we can educate, the better we can be. Kate and I have spent many years trying to eradicate this one problem in cyber called spear phishing. Like, this damn word has been around us all the time, and now AI has exacerbated it even more in a way, because now I can simulate your voice, your video even. I can create all kinds of ways to emulate you using artificial intelligence in a spear phishing attack. But what do we combat spear phishing with? Education. So I feel like education just has to start from the top all the time, be ever-present within every organization, and build a culture around this. That creates the trust, because now security operators and security managers, if they have that trust, can really work with the culture and the people and the process to improve. And then the guardrails of trust stay on AI. And that's what we need to have happen.

Trust Is the New Control

SPEAKER_01

It's interesting when you talk about building trust and education. I couldn't agree more. One of the issues we've had in cybersecurity is that we use a lot of acronyms and make things very difficult for a layman to understand. And, you know, Dave sits on boards, I do a lot of board education, and I think one of both of our superpowers, and Dave especially, he's been around a lot longer than me in this world. Thank you, sir, I'm not admitting that. It's, you know, helping to make cyber understandable, and that builds trust and education. That's a superpower. And one of the things I talk about a lot is that we have to stop sounding like the Snoopy teacher, with the SIEMs and the SOCs and the EDRs. And people are like, what? At the end of the day, cyber by definition means anything to do with a computer or computer transmission. And we're dealing now, especially with AI, with three types of, in essence, risk. You now have not only the malicious attacks, but now you have mistakes: something goes wrong with the code, things like the CrowdStrike issue we saw last summer. For me, that was a sev-one outage, because the Starbucks down the street went down. As a mom of five, that's a mission-critical incident in my world if I can't get my Starbucks. And now you also have malfunction: something goes wrong in AI and all of a sudden your model's not working right. So it causes issues. So teaching a board about the business risk aspect, what the impact to the business is of malicious, mistake, and malfunction, takes a cyber expert, in essence, or a cyber board leader, to help translate our world of lots of different acronyms and complexities into the actual risk piece.
And I think we'll see a trend where, by putting more cyber expertise on boards, and we've seen it with some of the regulatory matters over the course of the year, helping to understand the business impact of malicious, mistake, and malfunction is going to be really critical as we continue to move forward in this world.

SPEAKER_03

Kate, I'm glad you brought up that resiliency isn't just cyber as it relates to hackers attacking networks, but resiliency as it relates to uptime and really the performance of applications. And we've had quite a few of these wake-up calls over the last few years as some of the high-tech vendors have grown, literally almost exponentially, themselves. And we have to build the design of resiliency back into their architectures. CrowdStrike was a bit of a wake-up call for me too, because we trust our security vendors. And suddenly, when they give us a patch or an update of content that is going to protect us, we deploy it. And we learned they can make mistakes too. And of course, this has permeated to a lot of vendors as they've grown, with the firewall vendors having outages, AWS having outages, nearly every cloud provider having outages. Learning that business impact, Kate, as you mentioned, is really critical, not just from a cyber breach, but from a resiliency point of view. And they go together. How do you segment a network? How do you make sure you have the capabilities to stay up and running? What are your dependencies between applications? You're always looking at an attacker moving laterally, but what about an outage moving laterally? And so they go hand in hand, and those are important learnings and epiphanies we've had over the last few years.

SPEAKER_01

And to Dave's point, this all comes down to cyber resilience, a term we use a lot. But the reality is that it really falls into three buckets. You have operational resilience: how do you keep the lights on? How do you make sure you're prepared? Second is the financial transfer piece: what is the amount of risk that you can take on the balance sheet? And how do you leverage insurance and other things to really understand what that's going to mean? And then the third side is the incident response, the IR and the restoration and ability to respond. There are really three areas. But when you look at all of it, we can't boil the ocean. And so, you know, I've listened to Dave a couple of times talk about the work he does on Delta and Exxon. And it's helping companies get educated that, as you look at your response from a resilience perspective, we used to always say in cyber, and I'm now going away from this term, you've got to protect the crown jewels. The crown jewels have changed, especially with AI, in terms of what actually is important to an organization. So I talk a lot about how you achieve two things: minimum viable company and minimum viable business. How do you understand what you need to protect and make sure is resilient from a keep-the-lights-on, keep-your-business-operating perspective? That's minimum viable company. And then minimum viable business: what do you need to protect in order to keep making money? Where does that go? And so being able to focus on resilience and the cyber response from that perspective helps organizations not boil the ocean, but really understand the most important parts to look at from a resilience standpoint.

Resilience Beyond Breaches

SPEAKER_03

And I know you'd agree with this too, Kate: that resiliency framework you just laid out is really important, but it also extends to the suppliers, to the third parties, to the fourth parties that we're involved with. And if we look at a lot of the cyber attacks that have occurred, or even the outages that have occurred from an operational resiliency point of view, a lot of them involve these third parties. So now, as a CIO or a CISO or even a CEO, you've got to look at the entire ecosystem around you. And COVID taught us a little lesson in that, because we were supplier-constrained for quite some time. But now we see the attacks and the issues around software supply chains and hardware supply chains, all the way down to the rare earth minerals that we need for our supply chain. So it goes all the way to supply chain resiliency as well.

SPEAKER_01

A thousand percent. I couldn't agree more, and Dave's right. On the supply chain, we've seen it not only in executive orders, but there have been open letters. There's a lot of focus on third-party risk. Fourth-party risk, as a former practitioner, is still a dark art. I'm waiting for Dave to solve that for me. He's gonna find some company out there that does that too.

SPEAKER_03

I'll tell you about it.

SPEAKER_01

Ha ha, I love it. But we have to get to that point where we get that granular in the supply chain, because we're an ecosystem. And so if one part fails, the impact to critical national infrastructure and large organizations can be catastrophic. And we have to think through that piece.

SPEAKER_03

You know, it's interesting, you alluded to that. We spent a little time over the past year, not a little bit of time, a lot of time, realizing that one of the supply chain resiliency issues we have related to AI and data centers is energy and power. And WWT has been a wonderful leader in this respect, trying to enlighten people to this supply chain problem. Data centers are growing really, really fast. AI is becoming, for every country and sovereign nation, a critical piece of IP. And what's underlying all that? Energy generation and energy delivery in the supply chain. Bringing security to energy, and energy back to security, is really an important component of this as well.

SPEAKER_02

Well, I know earlier today, Dave, just kind of along that same tone, you had mentioned the need for public-private partnership to help drive a lot of that. Dive a little bit deeper into that. What would you like to see in that area?

Supply Chains Under Pressure

SPEAKER_03

Yeah, there's a lot to this. You know, I've enjoyed, for now five administrations, being pretty close to the administration, watching public-private partnerships form. We've gotten better and better at this over the last 20 years. We need to continue that, because without a good, strong public-private partnership, we're not strong. We need the government, we need the agencies, not just in the U.S. but all over the world, cooperating. Whenever there's victim one, we don't want to let victims two, three, four, five happen. So the more those communications occur, the more the government can help us, the better this is going to be. But the other side of that is policies and regulation as well. Sometimes regulation is pretty powerful, because once there's a mandate required to help create safety and security in a particular area, it makes the industry conform. That can be a good thing. Too much regulation is a bad thing, too. You get so much compliance, every country has a different regulatory environment for cyber, every state has one, and now it becomes too onerous. So it's kind of like porridge: you've got to get it just right in order to make it effective.

SPEAKER_02

I mean, the two of you have been, as you mentioned, Dave, involved in a lot of federal administrations and helping drive policy. Kate, what do we think we might see on the horizon as it relates to AI policy, whether in general or around securing AI?

Guardrails Without Slowdown

SPEAKER_01

So we're seeing a couple of trends. First of all, the AI Action Plan came out this summer, which really is the roadmap for how this administration wants to approach AI. A couple of things with it: one, the mission is clear. This administration wants us to be the leaders, the innovators, the pace-setters for how AI is going to be utilized globally. That's bar none. To Dave's point, there's a sense that the guardrails have been taken down a bit, or off. We're going to see more focus on innovation and pace, as far as being able to leverage AI in new and creative ways, and a little bit less on the regulation side. The critical thing, in looking at how we're using AI from this administration's perspective, is making sure that we are, in essence, setting the pace versus playing catch-up. You heard some of the speakers talk today about the concern that China's catching up to us in some of these areas. And so making sure that we don't get caught behind in how AI could be utilized, I think, is the key focus. You're also going to see a trend where this current administration thinks we may have over-rotated a bit on regulation, and I think a simplification of regulation across the agencies is going to be seen over the course of the next couple of years. The other thing is fostering international cooperation. So, you know, I was very proud of the work that we did collectively at WWT with the TRIA, which was presented at UNESCO in February, and that is the Responsible AI Act, or rather a framework, which took all 32 global framework standards and boiled them down to five pillars and an implementation plan. We're very pleased to see this administration lean in a bit on how we continue work like that: how do we break down and agree on the areas, from a responsible and trusted AI perspective, that we can agree on internationally?

SPEAKER_03

And I would extend that a little bit as well, thinking about one thing from Kate's and my experience. Initially we thought of cyber as a very specific domain, and we kind of lived within that domain. It largely was endpoints and firewalls for a long time, a moat that we would protect. Suddenly the cloud broke down that moat; we had to protect the cloud, the endpoint, the network a little differently. Cyber has become ubiquitous. Cyber is everywhere now, right? When we launch a satellite, we have to create resiliency and security for the satellite in its orbit and for the communications on that side. We have to do it in the air domain with drones. We're now seeing new types of threats using electronic warfare, versus malware, that we've never seen before. So in every domain, from land and air to space and oceans, you're starting to see the ubiquity of cyber. So the whole of government, with the whole of industry, has to be thinking in a different way. Because I know one thing: the attackers are thinking that way too. And we have to think the same way as defenders.

No Safe Perimeter Left

SPEAKER_01

It's funny, we were joking about music before the call, and it's "and the walls come tumbling down." I mean, we are at that point where cyber really does impact everything. It's not just that it impacts everything; we're reliant on technology for everything, and there is a cyber component to all technology. To Dave's point, a couple of weeks ago I was having a conversation with a good friend of mine who happens to be the foremost expert on cybersecurity in space for Liechtenstein. Why not? She's an amazing lady. She was trying to teach me about, to your point, Dave, the differences between ground-control security from a cyber perspective and space-control security, and what we need to think about. And she was educating me: okay, Kate, you have to go away from the idea that you're going to protect the equipment. What about the SMS signals that are going across? What about the data that's, in essence, flying through space? How do we start to think about that? And it blew my mind, Dave. You've been in this, looking at different areas and facets and complexities and the creativity we sometimes have to deal with, and I was like, wait a second, hold on. It's a whole different world when you start to think about how security is now embedded in how we use technology.

SPEAKER_03

Especially in a post-quantum world, where the encryption methodologies we're used to for communications and the other ways we work with our systems are at risk: quantum can create a very powerful code breaker, an encryption breaker. So to your point, we've got to keep our guard up in a lot of different ways. One other area worth noting here, a new domain that keeps me up at night, is information warfare. If you look at the Russia-Ukraine playbook right out of the gate: knock out communications from space to ground, and leverage a propaganda machine as best you can. We live in a whole new era now where cyber meets information, misinformation, disinformation, and influence operations in a way that is very dangerous, because it creates radicalization of people and changes in psyche. And we've taken a lot of guardrails off of the trust and safety of our social media platforms and the various ways we ingest media and content. We've got to get those guardrails back on, porous just right. But how do we do that effectively, so that citizens and businesses can get the right information at the right time, not false information?

SPEAKER_01

Well, to Dave's point, I a thousand percent agree. As a mom of five, it terrifies me. I read something not long ago that 50% of all social media accounts are AI-driven and nefarious in nature. Think about that: one in two. So how do we start to think through the impact of that from a social-control perspective, from a potential malicious perspective? As a practitioner, we've already seen attacks on some of our major companies across the United States where it's an attack on X, it's erroneous information being leaked, it's a problem that does reputational damage. And we've seen stocks drop over 10% based on a reputational attack. As a CISO and as a risk executive, how do I defend against that? So I think there's going to be a lot of collaboration that has to happen between social media platforms and organizations, and in how we look at the discipline around reputational risk. It's a brave new frontier for us.

SPEAKER_03

I call those brand breaches in a way. Like we think about a cyber breach. Yeah.

SPEAKER_01

I'm totally stealing that.

When Attacks Hit Brands

SPEAKER_03

We think about cyber breaches stealing information, but in many cases these are reputational, narrative breaches: around your brand, around a person, around something shareholders deem valuable. You get a breach of that, and it's very hard to roll it back once it's rolled out. So do we have crisis plans for that? How do we counter those narratives the right way? Now, not only do we have to defend our networks and our cloud and our multiple domains, we have to deal with this ubiquity of information and prevent and detect that too.

SPEAKER_02

Yeah. There's so much on security leaders' plates. And I'll end here, because you've both already been very gracious with your time. Whether it's geopolitical tensions, deepfakes, the blocking and tackling that must take place, or just getting back to the basics, there's so much for any security leader and their teams to consider. What is maybe a key lesson you learned in 2025 that's really going to have an impact or implication moving forward into 2026?

SPEAKER_01

I think the number one thing I learned in 2025 is that every organization is going through some facet of digital transformation. We are living in a revolution; there is no doubt about it. And with that, we have to start thinking about how AI, cyber, digital experience, and traditional technology are coming together. We have to change our thinking about how we address things, because, to Dave's point, cyber's changed. The role of the CISO is changing. The role of how we're leveraging technology is changing. So we have to build stronger education and trust around creating programs, looking forward, where everyone plays a part in how we address both the opportunity and the risk. That's very different from how it used to be. And to Dave's point, when we started in security, you could kind of build a castle. You knew what your crown jewels were. There was a safe in the middle, you built really cool walls, then you dug a moat, then you made the walls taller, and it was great. We had gorgeous castles. Castles are gone. So as we think about the continual leveraging of technology and the transformation we're experiencing in this revolution, we have to rethink how we think about risk. And to Dave's point, I'll take comms a step further: building true resilience plans, where everyone understands their role when the malicious, the mistake, or the malfunction happens, is what we need to really focus on in '26 and '27.

unknown

Yeah.

What Leaders Must Do Next

SPEAKER_03

The only thing I'd add on to that, and I think she said it very well, is that I learned a lot about the importance of the crowd, right? The power of the crowd. The more we work together, the faster we can solve these problems. It's what I've appreciated about WWT and Kate and the work you've done: you've unified us all, vendors coming together, but also government coming together. Boy, do we need that. In every one of these major breaches where we didn't have the crowd working together, it was really bad. So we need international governments coming together, and the US government coming together with the private sector. With the speed of these threats and risks and technology and geopolitical tensions, all the things you mentioned, if we don't work together and we get isolated, really bad things will happen. So we need the crowd. Need the crowd. Well, that's a good way to end it.

SPEAKER_01

And Dave's really good at putting crowds together, and really good ones.

SPEAKER_02

Likewise. Absolutely. Dave, Kate, thank you so much for the time. I hope you enjoy the rest of your day, and I can't thank you enough for taking the time. Thanks for having us, Brian. Appreciate it. Okay, thanks to Dave and Kate for taking time to speak with me on this important topic. What I learned is this: AI is certainly changing what organizations can do, but it's also changing what they're responsible for. The old model, where security lived in one corner of the company and innovation in another, doesn't hold anymore, because AI is collapsing those walls. This year and beyond, resilience won't be measured by whether something breaks. It'll be measured by how quickly you understand it, respond to it, and move forward together. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Knoblock. My name is Brian Felt. Thanks for listening, and we'll see you next time.
