AI Proving Ground Podcast

AI Won’t Save You: Easterly, Joyce and CISOs on the Cybersecurity Reality No One Wants to Hear

World Wide Technology

At a time when AI feels like oxygen — powering every tool, every conversation, every strategy — security leaders at the forefront are sounding the alarm: it won’t fix the fundamentals. In this episode, top voices from government, industry and the next generation of cyber talent share unfiltered perspectives on AI-augmented threats, the velocity of attacks and what it really takes to defend in 2025 and beyond. Interviews include former CISA Director Jen Easterly, former NSA Cybersecurity Director Rob Joyce and a bevy of cyber experts from inside and out of WWT.

Support for this episode provided by: SentinelOne 

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

Speaker 1:

From Worldwide Technology, this is the AI Proving Ground podcast. On today's episode: we're at a turning point in cybersecurity. Not because of one breakthrough or one breach, or even one conference, but because the forces shaping the field are converging in ways we haven't seen before. These days, nobody wants an AI science project. We want it woven into the way we detect and mitigate threats, but unfortunately, the bad guys seem to be moving faster. Risk and compliance frameworks are scrambling to keep pace, identity, both human and machine, has become the new perimeter, and through it all, the fundamentals we've relied on for decades still decide whether innovation makes us safer or more exposed.

Speaker 1:

On today's episode, we're diving into the trove of conversations we had with colleagues and friends at Black Hat, held earlier this month in Las Vegas. These are unique insights from some of the most influential voices in cybersecurity, and what emerges is a picture of an industry wrestling with both its potential and its vulnerabilities, and deciding in real time how to move forward. So stick with us. This is the AI Proving Ground podcast from Worldwide Technology: everything AI, all in one place. Let's jump in. If there was one theme that cut through nearly every conversation at Black Hat this year, it was AI. From keynote stages to booth demos, AI wasn't just in the spotlight, it was the spotlight. You could kind of say AI is like oxygen: it's in everything.

Speaker 2:

Now, right, it's all around us. They haven't called it the Force yet, but I think that might be the first time.

Speaker 1:

I can say it, but before we even get into the technology itself, WWT's Kate Keene noticed something that went beyond the tech.

Speaker 3:

I love the vibe, I love the community. I love the feeling that I'm getting from this year's show.

Speaker 1:

And yet the tone was very different from last year. In 2024, AI brought more apprehension than excitement, and this year there was a noticeable shift. To really understand the stakes, it's worth hearing from leaders who are at the intersection of AI, national security and critical infrastructure. Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency, summed up the mixed feelings that many brought into this year's conference.

Speaker 4:

It's an exciting time to be in our field, because I do have a lot of optimism about the ability of AI to finally get us into a world where ransomware is a shocking anomaly and we've addressed the fundamental insecurity of software. If we can get that right, I think we can have game-changing, transformational impacts on the lives of everybody.

Speaker 1:

Leaders are talking less about whether AI will transform cybersecurity and more about how they can guide that transformation responsibly. Keene was just one voice capturing this change. Here's how she reflected on how the conversation has evolved.

Speaker 3:

Last year at Black Hat, the show felt heavy. I don't know how to put it other than that. Everyone was... you know, the feeling of the show. There was a lot of concern about the future of AI. There was a lot of concern about how the impact of the industry was changing. There was just a lot of concern in general, and there was almost a frustration of how are we going to change fast enough to meet up with the pace of tech. And the interesting thing is, a year later, there's almost an embrace-the-change moment. There's an excitement that's almost palpable about how we're addressing and leveraging the technology that scared and frustrated us a year ago to address the issues that we're facing today, and I'm loving it.

Speaker 1:

That optimism wasn't blind faith. It came from seeing concrete ways AI could strengthen defenses. Here's Brian Fite, a principal security consultant for AI at WWT.

Speaker 5:

But I'm seeing some really good stuff on how it's going to help, you know, level the field, maybe even level up junior analysts. You know, maybe the SOC of the Future Today talk track that we're hearing, agentic support, but it's a lot of AI. And so when you think about what the security posture was before AI, it adds a level of complexity. So while the promise is, I think, warranted, there's a lot of caution.

Speaker 6:

There's, I think, a general optimism that AI doesn't solve everything, but if you're using AI, you can become a 10x or 12x human, and I think that's exciting. And I think what we're seeing in the innovation space, as we're seeing these startups: a startup is like one person and things like that. You may not need huge startups anymore. I'm the startup, plus all my AI agents, my agentic friends, do this. Someday I'll have a robot to do all the physical stuff. You're going to start seeing innovation happen more rapidly, so I'm optimistic about that.

Speaker 1:

And it's not just about whether AI can see more. It's about what it means for the people who have to respond. Here's SentinelOne's SVP of global solution engineering, Steve Regini.

Speaker 7:

So AI from the perspective of the attacker is top of mind for everyone: how do we keep up with it, machine to machine? We can't do it with manpower. You know, traditional SOC analysts are getting overwhelmed with that stuff. But there's also the other half of it, which is securing the AI and thinking about, you know, there's going to be so much leverage of generative AI and agentic AI, and how do you know it's not accidentally doing something that leaks sensitive data, or that allows for some sort of prompt injection? That is obviously a major concern. What if we can fix that with a more simplified, AI-based approach that isn't about product silos, that's about a single pane of glass? More than just a UI, though: an actual end-to-end attack path analysis, so that you can prevent some of these bad actors. And I think probably every CISO on earth is thinking about how to do that, how to leverage that, right?

Speaker 1:

For Rob Joyce, former director of cybersecurity at the NSA, there's a path to success, but it's littered with roadblocks.

Speaker 8:

I am wildly optimistic about the ability to find and fix bugs at scale, but I'm not so sure that we will then get that software everywhere that it needs to be, because we have so much legacy tech. We're going to have this period where we might see big wildfires of cyber burndown, and then the things we build back up after that will be very secure.

Speaker 1:

The theme of caution came up again and again, because the same capabilities that empower defenders also supercharge attackers. Edna Conway put it bluntly every new agent, every new AI-enabled process becomes a new identity to secure.

Speaker 9:

Each of those agents will have independent identities. How are we managing those identities, as opposed to other identities? And if you do it right, agents have the capacity to do anything in an environment. I think the limitation on what they can do is only what we can imagine.

Speaker 1:

Edna is a former Microsoft and Cisco chief security and risk officer, now advising the public and private sectors on risk, compliance and AI governance. But it's not just technical risk: Easterly also sees identity risk in the context of a generational shift.

Speaker 4:

My kids' kids are going to grow up in a world of being an AI native. Right, we're all AI immigrants. These agents are going to be things that you know are just taken for granted.

Speaker 1:

Shannon Wilkinson, CIO and CISO of the Findlay Automotive Group, based out of Las Vegas, has been sounding the alarm for some time about a quieter consequence: losing our ability to think critically if we treat AI as an oracle instead of a tool.

Speaker 10:

We're basically turning our wisdom and our knowledge and our critical thinking over to AI, just kind of throwing in a prompt and trusting what the AI outputs. Our kids will never learn how to struggle through problems. They miss out on that kind of critical thinking development if they just rely on AI from an early age.

Speaker 1:

And for some industries, AI's promise is tangled with layers of compliance. For instance, in financial services, simply claiming we use AI can trigger months of regulatory scrutiny. Here's Christian Horner, a principal system engineer here at WWT.

Speaker 2:

You come to the table because AI is basically transforming how you protect, and how do you protect AI? But at the same time, the risk organizations inside these large firms are very concerned about data classification. And if Copilot gets access to regulated data and provides it to an unregulated user: huge compliance issues, fines, reputational damage. Badness.

Speaker 1:

So, while the tone at Black Hat was more hopeful than last year, the message was clear: AI isn't a magic bullet. It's a powerful force, one that must be harnessed with care, governance and a deep respect for the risks it carries. If AI was the headline at Black Hat, risk and governance were the fine print, the details that determine whether innovation becomes a breakthrough or a breach. The conversations here weren't just about what technology can do. They were about whether it should be deployed, how it should be controlled and who is accountable when things go wrong. Rob Joyce pointed out that governance isn't just about stopping bad things. It's about enabling the good at scale.

Speaker 8:

Everybody's trying to figure out what AI means. Is it a threat? Is it a defensive augmentation? AI is advancing rapidly to be able to find bugs, right, and that's the table stakes to then drive out some of the insecurity.

Speaker 1:

For Christian Horner, working with one of the world's largest financial institutions, these aren't hypothetical questions. AI adoption is directly tied to some of the strictest risk oversight processes in business.

Speaker 2:

What they don't realize is that the fact that they say there's an AI engine behind the scenes doing work. All of a sudden, you say that, and this activity called model risk management gets put into play, and sometimes the solution providers don't realize that they're actually going to have to go through six to 12 months' worth of inspection.

Speaker 1:

That's the reality for heavily regulated industries: speed to innovate is always tempered by layers of compliance. Even when AI offers transformative capability, leaders have to map it against existing governance models, and sometimes that means saying not yet. Here's WWT's Brian Fite again, who took a similar long-view approach, pointing to the NIST AI Risk Management Framework and the need for structured controls before deployment.

Speaker 5:

The nuances, the dirty dozen, as I call them: NIST AI 600-1, the 12 areas of AI harm. That's what's different, and that's where I think, if the use cases pull those out, the control affinities become very clear. Once you accept that, okay, bad things can happen, they probably will happen. Let's prepare.

Speaker 1:

When it comes to risk management, success isn't measured by how much AI can detect. For Steve Regini, the SVP over at SentinelOne, it's more about what those detections mean and how they drive action.

Speaker 7:

This isn't just about, you know, how do I reduce the number of alerts. It's also, when I get an alert, it would be really nice to have something that's asking that alert the top hundred questions that an analyst would normally ask, and then also triaging it for the next step in the process. Just that approach allows an analyst to elevate and not be so bogged down, not only in trying to figure out which alert is important, but then what do you do next with that alert, and instead have the ability to solve these complex problems, to spend time where they need to spend it.

Speaker 1:

And, of course, not all risks are about compliance. Some are about fundamentals. Even the most advanced AI capabilities can still crumble without the basics in place. That was the message from Robert Geis, a field CISO here at WWT.

Speaker 11:

If you haven't done those fundamentals, a lot of these great things that can be so transformational, frankly, also look like they carry additional risk. For example, data governance: if you're going to expose yourself to AI, you've got to have those guardrails up to make sure you're doing this properly.

Speaker 1:

Those guardrails, whether they come from regulators, industry standards or internal governance, are becoming a defining feature of responsible AI adoption. And, as Tanium CIO Eric Gaston reminded us, governance isn't just about ticking boxes. It's about aligning innovation with the way modern businesses actually operate.

Speaker 12:

The problem is, it has to follow business, right? So technologies right now have to follow the business. I think we're at a new place right now where companies are really having to be innovative, and I hear it all the time from my customers. They say, you know, we can no longer keep up with the pace of innovation, and they're looking to their vendors and their partners to, you know, not only give them the answer, but give them a path.

Speaker 1:

At Black Hat, that path was a recurring theme, one that often required slowing down just enough to make sure the road ahead was clear, because innovation without governance isn't progress, it's just speed, and in cybersecurity, speed without control almost always ends in regret.

Speaker 2:

This episode is supported by SentinelOne. SentinelOne offers autonomous endpoint protection to detect and respond to cyber threats swiftly. Secure your endpoints with SentinelOne's AI-driven security platform.

Speaker 1:

If governance sets the rules, the threat landscape tests them, and that battlefield has changed dramatically. We've gone from a world of a single, predictable adversary to a crowded arena of state actors, criminal syndicates and opportunistic hackers, all moving at machine speed. Here's Rob Amizkua, chief revenue officer at Forescout Technologies. He framed it through the lens of constrained budgets and multiplying adversaries: security is a place where tradeoffs are very dangerous.

Speaker 13:

You know, one thing you trade off, you know, that's the one thing that's probably going to get you. The velocity of the problems: they're getting faster, and I think we've got to really come together and stop trying to solve our individual and niche problems, and we've got to become a force multiplier for these guys.

Speaker 1:

That urgency isn't just about working better together. It's about recognizing the game itself has changed. The battlefield is no longer a handful of well-known adversaries; it's a sprawling, ever-shifting swarm. One of the most urgent fronts in this fight? Identity. As Edna Conway pointed out earlier, AI agents, IoT devices and other non-human entities are multiplying, and each one needs to be authenticated, authorized and monitored.

Speaker 9:

Each of those agents will have independent identities. They can do anything. But do you send your child off and say, go with wild abandon, and here's $5,000? Or do you say, use it sparingly, and perhaps don't use it to purchase things that are illegal?

Speaker 1:

Brian Fite warned that if organizations can't get human identity right, they'll be unprepared for what's coming next.

Speaker 5:

For every human that you have in your organization, you're going to have an automated process, or two or three, so it's going to be one to ten. So if you can't handle the human identities, how are you going to be able to handle the machine identities at scale? So really, identity is absolutely the new perimeter, and your weird machines are already living with us.

Speaker 1:

For all the cutting-edge tech on display at Black Hat, one truth kept surfacing: without the fundamentals, nothing else really matters. Patch management, visibility, access control. These aren't nice-to-haves; they're the foundation everything else is built on. Chris Schwind, another field CISO here at WWT, sees it clearly when talking to CISOs about where to start.

Speaker 6:

If you haven't heard AI in the last five minutes, you have your headphones on. There's a lot of discussion about AI, but I think some of the problems are still the same. It's risk, right? It's risk. How do I patch the things that are most risky? How do we get the visibility into things to make decisions?

Speaker 1:

That warning isn't just about what AI can do. It's about where security success really begins, because no matter how advanced the technology, SentinelOne's Steve Regini said, it's only as strong as the foundation it's built on. That visibility extends beyond endpoints and networks. It's also about knowing where your data is, how it's classified and who has access, because, as Rob Geis emphasized, without data governance, AI can quickly become a liability.

Speaker 11:

If you haven't done those fundamentals, data governance: if you're going to expose yourself to AI, you've got to have those guardrails up to make sure you're doing this properly.

Speaker 1:

Guardrails aren't just technical, they're cultural, and culture is what determines whether these fundamentals are enforced every day, not just written in a policy binder.

Speaker 1:

Technology may dominate the headlines, but people remain the beating heart of cybersecurity. That was clear in conversations about workforce development, from getting young professionals into the field to helping seasoned experts adopt new tools. WWT's Kate Keene reminds us that cybersecurity has room for every background.

Speaker 3:

There is a place for you in cyber. Raise your hand if you're interested in security. Raise your hand and ask for some training. Literally, there are so many different ways to come into cybersecurity. For technologists, there's always something new. There is a home for everyone in cybersecurity.

Speaker 1:

And Sage, a student attendee at Black Hat, is proof of just that.

Speaker 14:

The advice I would give is to be open to opportunity wherever that may show up. Honestly, I got into cybersecurity by just reaching out to people on LinkedIn, and that's kind of how I was able to find the opportunities that I've been able to take advantage of today.

Speaker 1:

But getting talent in the door is just one step. Eric Gaston from Tanium pointed out that the right technologies and the right culture can help keep them.

Speaker 12:

Having certain technologies in your stack or in your enterprise is going to attract that talent, and if the company is not changing and evolving with that, you're going to lose your talent.

Speaker 1:

Okay. So what do we take away from all of these conversations? First, AI is here to stay. We probably all knew that already, but it's important to recognize that AI is changing the rules for both attackers and defenders. The opportunity is real, but so is the risk, and the difference between the two will come down to how well we govern and secure it. Second, identity is the new frontline. Whether it belongs to a person, a process or an AI agent, every identity is now a potential point of compromise. Managing them at scale is no longer optional. And third, the fundamentals still decide the outcome: patch management, visibility, access controls, data governance. They may not grab the headlines, but they are what keep innovation from becoming exposure. In other words, the tools are evolving, the threats are accelerating, and our ability to adapt without losing sight of the basics will determine where we stand when the next turning point arrives.

Speaker 1:

If you liked this episode of the AI Proving Ground podcast, we would love it if you gave us a rating or a review. And if you're not already, don't forget to subscribe on your favorite podcast platform, and you can always catch additional episodes or related content on WWT.com. This episode was co-produced by Nas Baker, Cara Coon and Amy Ubriaco. A very special thanks to Matt Berry and Kate Keene for their help in securing these valuable interviews. Our audio and video engineer is John Knobloch, and my name is Brian Felt. We'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights
World Wide Technology

WWT Partner Spotlight
World Wide Technology

WWT Experts
World Wide Technology

Meet the Chief
World Wide Technology