AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

F5 Warns: Enterprises Are Running Naked AI

World Wide Technology: Artificial Intelligence Experts Season 1 Episode 56


AI is shipping faster than security teams can catch it—and the attack surface is quietly exploding.

In this episode of the AI Proving Ground Podcast, Shawn Wormke of F5 and Chris Konrad of World Wide Technology unpack the rise of what they call “naked AI”—enterprise AI systems deployed without proper data controls, API protection, or governance.

Drawing on new F5 research, they reveal why only 2% of organizations are truly AI-ready, how shadow AI and exposed interfaces are multiplying risk, and why bolting on security after deployment is already too late. As AI systems move toward greater autonomy—and quantum-era threats loom—the conversation makes one thing clear: trust has to be designed into the AI lifecycle from day one.

If your organization is racing to production, experimenting with agents, or scaling AI faster than policy can keep up, this episode is a wake-up call.

Support for this episode provided by: Red Hat

More about this week's guests:

Chris Konrad is a global cybersecurity executive and Vice President of Global Cyber at World Wide Technology. Since joining WWT in 2014, he has helped build and scale its $4.5B global security business. Chris leads global cyber strategy, practice development, and partner engagement, aligning security programs to business outcomes across public and private sectors. With 27+ years of experience, he is a trusted advisor to C-suite leaders and a member of the Forbes Technology Council, known for turning cybersecurity into a strategic enabler of resilience and growth.

Chris's top pick: Secure All Together: 5 Principles for Building a Culture of Cybersecurity

Shawn Wormke is Senior Vice President of Product Management at F5, leading the strategic direction of a portfolio central to how customers build, secure, and scale modern applications. Since joining F5 in 2013, he has helped deliver products grounded in real-world use cases, from open-source innovation in Kubernetes and OpenStack to leading the Aspen Mesh incubation and serving as General Manager of NGINX. With experience spanning Cisco, startups, and global enterprises, Shawn is focused on uniting teams around clear vision, customer impact, and execution at scale.

Shawn's top pick: Texas A&M University System Teams Up with WWT for Cyber Range Challenge

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions. 

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments. 

AI Is Moving Faster Than Security

SPEAKER_01

From World Wide Technology, this is the AI Proving Ground Podcast. We're living through a moment in enterprise technology where the velocity of innovation is outpacing the velocity of control. Every week, AI becomes more embedded in decisions, workflows, and customer experiences. But something else is happening too: an undercurrent of anxiety from the people responsible for keeping all of this secure. As I was preparing for this episode, I came across a recent LinkedIn post from our guest, Shawn Wormke of F5, where he pointed out something many people feel but don't always say out loud. He wrote that while many are reaping the benefits of AI, the majority of cybersecurity professionals are concerned with the growing and evolving attack surface that AI brings. And that gets to the heart of an honest assessment we hear a lot: boards and executive teams are pushing AI as a mandate, but for security teams, alarms are going off and red flags are being raised. In many cases, they're being asked to secure something that's moving faster than they are. So today, we want to talk honestly about what readiness actually looks like, why only a small percentage of companies feel confident in their AI strategy, and what needs to change so teams can adopt AI without creating new risks. To do that, Shawn is joined by Chris Konrad from WWT, two leaders who spend their time working directly with organizations trying to modernize their AI approach while staying secure. They see what's working, what isn't, and why so many teams feel caught in the middle. This is the AI Proving Ground Podcast: everything AI, all in one place. Let's get into the conversation. Well, Chris, welcome back to the show. How are you doing today?

SPEAKER_02

I'm doing great, Brian. It's always good to be here. Thanks for having me.

SPEAKER_01

Absolutely. And Shawn, first-timer, welcome to the AI Proving Ground Podcast. Thanks for joining us today.

SPEAKER_03

Yeah, thanks so much, Brian. Great to be here and excited to talk with you and Chris today.

Meet the Guests. Meet the Problem.

SPEAKER_01

Perfect. Shawn, I do want to start with you. I was looking over your LinkedIn profile and some research here. You had a recent post in which you said that while many are reaping the benefits of AI, the majority of cybersecurity professionals are concerned with the growing and evolving attack surface that AI brings. So I'm wondering: why are the alarms going off in the world of cyber while so many, enterprise or consumer, are adopting this transformational technology?

Adoption > Control (Uh Oh)

SPEAKER_03

Yeah, it's a great question, Brian. For all the benefits we're starting to see from AI, the efficiency gains, the ability to create content, the ability to make decisions, I think there are a couple of challenges, and a lot of that comes from the sheer rate of adoption. The amount of use we're seeing and the speed of deployment often outpace organizations' abilities to really control and secure it effectively. Teams are under immense pressure to use AI; there are plenty of corporate initiatives to make sure you're using it to improve efficiency. And oftentimes that rush to efficiency comes at the expense of security. So as AI becomes embedded in everything we do, it puts cyber teams on edge. It requires us not only to innovate with speed, but also to take the time to think about how we're going to secure this, so that we can control the risk that comes along with it.

SPEAKER_01

Yeah. Chris, that's certainly a high-level assessment. What are you hearing a little closer to the ground, in real-world client conversations? What are you hearing about the pace of adoption versus our ability to secure it?

SPEAKER_02

Yeah, just to follow up on what Shawn said, because it really resonates with me: AI is embedded into everything, so security needs to be embedded into everything. And when I look at AI in general, it's moving faster than most governance models can adapt. We're watching organizations globally race to operationalize AI for their business. But then you have to look at the controls, things like visibility and accountability, and I don't think they've caught up yet. At Worldwide, we're certainly seeing a widening gap between AI experiments and AI protection, and that's really where the risk is. The technology itself isn't the threat. It's the speed of adoption, as Shawn talked about, without structure. That's the threat.

unknown

Yeah.

The 2% Problem

SPEAKER_03

Yeah. And Chris, that's a great point. Security teams are always playing catch-up to some extent with the work they're doing. But to compound that, the talent that's out there, the number of people who understand these technologies deeply enough to provide that governance, to think carefully about how you secure this and protect your company from risk, is another barrier security teams have to overcome.

unknown

Great.

SPEAKER_01

Yeah, Shawn, I'll stick with you here. F5 recently came out with a research report that was fantastic, by the way. I think it was called the State of AI Application Strategy Report, on AI readiness. If anybody hasn't seen it, definitely go to F5's website and check it out. I'm going to read a couple of stats here, because they were interesting. F5 found only 2% of organizations are, quote unquote, highly ready for AI innovation, while 21% showed a low level of readiness, with that vast middle, 77%, stuck somewhere in between. Shawn, what are you and F5 seeing that separates the 2% who are truly ready from everyone else? Where's the delta?

SPEAKER_03

Yeah, I think when you look at being truly ready to deploy AI and truly getting the benefits out of it, there's always this sort of pinnacle with any technology, the companies who get the most out of it, and that's always that kind of 2%. But for that big middle, there are a couple of key things. First, you have to have a foundational cybersecurity framework, and it has to include AI. You have to have your basic security hygiene under control, and many companies don't even have that to start with. Then when you add in AI, it gets even more complicated. The second is the quality and security of the data they have. With AI, everything relies on the data: the cleanliness of the data, the security of the data, and the unbiasedness of the data are critical for AI to be successful. And a lot of organizations have struggled to get their data under control. It's fragmented and it lacks governance, and that can create serious and significant hurdles on the way to that high level of readiness. Another is the tool stacks we have. They're everywhere, they're disconnected, the integration can be complex, and in AI, so many of the new security tools are coming from newer, smaller, emerging companies and startups, which makes that problem even worse. The ability to bring all of those things together to get real-time threat intelligence and threat sharing across AI applications and deployments is really challenging. We obviously talked about the workforce capability gaps that are out there. But then overall, I think the governance framework is immature and still developing; we see changes to it happening every day.
And oftentimes inside of companies, it's not even clear who's accountable for making AI decisions or who's accountable for those security outcomes. Is it the AI team? Is it the app team? Is it the security team? That lack of alignment and accountability can also leave organizations vulnerable, and it leads to that lack of readiness.

SPEAKER_01

Well, Chris, Shawn mentions the explosion of tools that are now leveraging AI. I know before gen AI exploded onto the scene, we were already advising clients and organizations to consolidate their tool sets. Is that one of the key factors holding organizations back from that top-tier 2%? Or what else is contributing to all this?

Shadow AI Enters the Chat

SPEAKER_02

Yeah, I love the direction Shawn was going there. For those who know me well, I'm grounded in the idea that you have to be brilliant at the fundamentals. You need to understand your inventory. You need to understand what's on your network, what it's doing, and whether it should be there. I've said that for years and years. And if you don't get that right, you can't advance your business, especially with AI. So, Shawn, I love being grounded in those fundamentals. Candidly, when we talk to CISOs, they tell us they feel pressure from the business to keep moving faster, but they're inheriting systems that weren't designed for AI workloads. So we're seeing a lot of shadow AI: unsanctioned models, ungoverned data, and, of course, exposed APIs. And that's a big deal. What customers want is help building what I'll call secure landing zones, where innovation and security can scale together, not independently. One of our big initiatives here is making sure that digital transformation, AI, and cyber are one unified conversation and not done in silos, because that's where the problem lies.

SPEAKER_01

Yeah. Shawn, Chris mentions exposed APIs and how that's a big deal. For those who may not be aware, why is that such a big deal in this rapidly changing landscape of AI?

SPEAKER_03

Yeah, I think with AI there are a couple of things around exposed APIs. First and foremost, you have to protect your APIs. You have to be able to control who's using them and what they're using them for. But in particular with AI, we almost have a new layer in the OSI model. Sometimes around here at F5 we call it layer eight, because now you're much more concerned with the context, the responses, and what's being asked. Those exposed API endpoints can put you at risk of people targeting the models running behind them, asking questions to get responses that return proprietary data an attacker might be trying to get, or even just data inside the company that the requester shouldn't have access to. Many of these models, given the lack of data governance behind them, have sensitive information in them. With unprotected API endpoints, you leave yourself exposed to data exfiltration attacks, token-based attacks, and denial-of-service attacks. With AI, there's a whole new level of complexity we have to consider with the APIs that are out there.
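To make the exfiltration risk concrete: one basic mitigation is screening model responses at the API boundary before they reach the caller. This is a toy sketch, not an F5 product behavior; the pattern names and regexes are illustrative placeholders, and a real deployment would use far richer detectors (secret scanners, entity recognition, policy engines).

```python
import re

# Hypothetical detectors for this sketch; real guardrails use much more
# sophisticated classifiers than regular expressions.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def screen_model_response(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, violations) for a model response before it
    leaves the API boundary."""
    violations = [name for name, pattern in SENSITIVE_PATTERNS.items()
                  if pattern.search(text)]
    return (not violations, violations)

allowed, hits = screen_model_response(
    "Your record shows SSN 123-45-6789 on host db1.internal.example.com")
```

The point of the sketch is the placement: the check sits in front of the model endpoint, so even a model trained on poorly governed data cannot hand sensitive strings back through the exposed API unchecked.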

APIs Wide Open (Layer 8 Too)

SPEAKER_01

Yeah, Chris, knowing that complexity is growing faster and faster, what does that mean in practice for what the term "highly ready" looks like for security teams or organizations in general? What does highly ready look like today, or even six, twelve, eighteen months from now?

SPEAKER_02

Well, we've been talking about it here. It's not tools, it's operating-model maturity. I go back in history and look at where organizations are aligned or where they have misalignment. Think of some basic things like MLOps and SecOps. Security teams know how to manage risk really well. AI teams know how to build models. But as I mentioned a minute ago, they work in silos. So we really need to bridge that gap; I can't emphasize that enough. We bridge it by embedding security directly into the AI lifecycle, just like we embedded security along the way in the software development lifecycle. We have a theme here at Worldwide, and Brian, you and I have talked about this offline in the past: no more naked AI. We need to make sure we're doing that. Security from chip to cloud.

What “AI-Ready” Actually Means

SPEAKER_03

Yeah, and Chris, I think that's a great point. When you look at how people are using AI today compared to where they may be 12 or 18 months from now, we'll be using and relying on AI to make more and more decisions for us and to take actions on our behalf, without humans always in the loop, especially as more agentic AI gets rolled out. Without that core, secure operational foundation underneath it, the trust you need in these systems just can't be there. Not that it won't be there; it can't be there without that trust built in. As we rely on these new applications and techniques to make sometimes very serious decisions for us, operationally or business-wise, without that trust the whole thing can break down pretty quickly.

SPEAKER_02

Yep, yeah.

SPEAKER_01

So we're obviously living through this shift where the same technology that's powering breakthrough defenses to help us remain secure is also powering next-gen attacks. I'm curious, from a product perspective, Shawn, we can start with you: how is that reshaping the way security products actually function? Not just blocking threats, but predicting, adapting, acting in real time, and staying ahead of the curve, so to speak.

SPEAKER_03

Yeah, from a product perspective, I think it unlocks a whole new set of capabilities that we can offer our customers and that bring a lot of value to them. Things like behavioral analytics, for example: being able to detect deviations in user behavior in ways we've never been able to before, and to discover new attack patterns as they're emerging and happening, rather than after they've had catastrophic effects. Obviously predictive analytics is there too, as we continue to leverage AI to analyze the vast amounts of data our customers and users are collecting from their networks and applications. We can start to get closer to real-time defense than the products we've had before. We don't need to wait days or weeks, sometimes even months, for signatures to come out to adapt to new behaviors and new attacks. That leads into adaptive security policies. We've always talked about self-healing networks and self-healing security systems, and I think we're getting much closer to that with AI. And at the end of all that, and this may be less a product thing than a process or communication thing, I think you can use AI to really transform the way we do incident response. We can surface more high-fidelity alerts for teams and customers to focus on, and really narrow what they're looking at down to the most important things.
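The behavioral analytics Shawn describes boils down to baselining each user's normal activity and flagging deviations. A minimal stand-in for that idea, assuming a simple per-user request-rate metric and a z-score threshold (production systems use far richer features and models):

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], current: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag the current request rate as anomalous if it deviates more
    than z_threshold standard deviations from this user's own baseline.
    A toy stand-in for behavioral analytics, not a product algorithm."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

baseline = [40, 42, 38, 41, 39, 40]  # requests/min for one user
quiet_hour = is_anomalous(baseline, 41)    # within normal variation
burst = is_anomalous(baseline, 400)        # sudden 10x spike
```

The design choice worth noting is that the baseline is per-user: a rate that is normal for a batch service account would be wildly anomalous for an interactive user, which is exactly the signal signature-based tools miss.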

From Static Defense to Adaptive

SPEAKER_01

Yeah, Chris, Shawn's mentioning policy there, which has me thinking about guardrails, governance, and what type of visibility is needed. Before we can automate and scale AI toward that self-healing network, what types of guardrails and governance need to be put in place so that we avoid that naked AI situation?

SPEAKER_02

Yeah, no doubt about it. I was at one of our advisory board meetings not too long ago, and one of the questions that came up was about my thoughts on automation in general. Again, going back to being grounded in fundamentals: you can't automate unless you understand what you have, what your inventory is, and what the devices on your network are doing. So many people want to jump all the way to the end and the outcome without thinking about those basic hygiene things, as we've been discussing here. For me, getting highly ready means being able to measure AI risk, not just talk about it. Today, readiness means starting with basic governance, as Shawn has mentioned a few times now: policy, visibility, knowing what models you're running and where your data flows. Look ahead 12 months, and readiness will mean continuous red teaming, incident response (Shawn talked about that, and I love hearing him talk about it), runtime guardrails, and unified observability across AI and your infrastructure. That's really the direction we're moving with partners like F5 in the WWT AI Proving Ground: turning readiness into something quantifiable.

Guardrails, Not Speed Bumps

SPEAKER_03

Yeah, and Chris, when you look at the governance model you talked about earlier and the new development lifecycles for AI applications, there are a number of things we haven't really had to consider before in software development that we're going to have to be able to explain to our customers. For example, algorithm accountability: being able to document how and why the AI models you're building and using make the decisions they make. Being able to detect and prevent bias coming from those systems that you may be exposing your customers to. Increasing the explainability and transparency of the models, and verifying the provenance of the data you used to train them, to make sure it wasn't tampered with in some way. All of those things are going to change the way we develop software and products, and the types of questions our customers are going to ask of us in the future. I think that's an important part of doing business in this AI world.

SPEAKER_02

Yeah, and I look at things like frameworks as an example. We're talking about a lot here, and I can hear our customers now saying: but is there a guideline? Is there a framework we can follow? There are a number of good frameworks out there, and we have just developed an AI readiness model for operational resilience, taking it from the governance angle, as I mentioned before, and securing all the models from chip to cloud. Organizations can now look at this framework to guide their development lifecycle down the road.

SPEAKER_00

This episode is supported by Red Hat. Red Hat provides open source software solutions to help modernize IT infrastructure. Accelerate innovation with Red Hat's enterprise-grade technologies.

SPEAKER_01

Yeah, Chris, maybe dive a little deeper into that framework: how and why we developed it, and how it might evolve going forward.

WWT’s Readiness Playbook

SPEAKER_02

Yeah, so I made a comment a little bit ago: no more naked AI. I see all of these AI clusters being built for our customers, and we have to ask, are you securing this? At Worldwide, security is embedded into everything we do and everything we sell. It doesn't matter if it's route/switch, data center, or cloud: security needs to be a part of it. And we're now doing the same thing with AI. It's not new to Worldwide; it's something we've been doing for a long time, but now customers have something they can refer to, and we can implement it. Go back to turning readiness into something quantifiable: our AI model is going to help with that for sure. And people can leverage it in our AI Proving Ground too.

SPEAKER_01

Yeah. Shawn, I want to go back to some of the guardrail conversation we were having. F5 recently acquired CalypsoAI to give organizations the guardrails they need to scale AI safely. Tell me a little more about that acquisition and why it makes sense for the market right now. Is it going to address some of those vulnerabilities, or establish some of the guardrails, that organizations will increasingly need in the future?

F5 + Calypso: Red Teams Welcome

SPEAKER_03

Yeah, thanks, Brian. The CalypsoAI acquisition was obviously a very exciting one for us. We felt that in this field, CalypsoAI was the leading AI guardrails and AI red teaming company out there, and we're so excited to have them as part of the F5 family and to release those guardrails and red teaming capabilities as part of our overall F5 Application Delivery and Security Platform. The CalypsoAI solution, or as we call them now, F5 AI Guardrails and F5 AI Red Team, those two components work together to help customers ensure that, at inference time, the transactions happening with their models are safe and secure. They really specialize in ensuring secure and ethical AI deployments, which is obviously critical to the conversation we've been having today. You can't have naked AI or ungoverned AI, because that leads to all kinds of liability and risk for the business. The guardrail protection allows you to be compliant. It keeps bad things from going not only into your model but also out of your model, and protects your users. Meanwhile, the red teaming product continually tests the models you're using to find vulnerabilities, and it brings that continuous feedback loop back into the guardrails so you can protect your users against vulnerabilities, biases, or any other compliance needs. It's a fantastic solution. We've already seen great success in the market and great customer feedback, and this is really the future and the core of AI security for F5 moving forward.

SPEAKER_01

Yeah, Chris, any commentary on that acquisition, or anything more on the broader M&A landscape? We talked at the top of this episode about the explosion of tools and AI startups, whether in cyber or otherwise. What do you think we might see in the M&A space, and what does that mean for cyber teams who really have to be drilling it day in, day out to make sure we're secure?

Too Many Tools, Not Enough Trust

SPEAKER_02

Yeah, we talked about it earlier, that tool sprawl. I think the average enterprise has something like 75 to 125 unique security products. That's a lot to manage, a lot to integrate, and so forth. We're evaluating, I think, over 295 AI security startups right now, trying to figure out what problems they solve, how they do it, what makes them different, and how they can best partner with organizations like F5. So I expect there's going to be a lot of consolidation in the near future. Almost weekly now, we're seeing our larger partners acquire some of these AI startups. But back to the CalypsoAI move: that was a phenomenal acquisition. Shawn said it best: the enablement of continuous red teaming and policy enforcement while the model is running, not just after the fact, is crucial, because AI drift happens all the time. Guardrails like that help you detect misuse and model deviation before it becomes a major issue. So yeah, that's a move we all applaud here at WWT.

SPEAKER_01

Shawn, speaking of news, I'm sure there will be plenty of great announcements coming out of AppWorld, which I know you'll be hosting later this year, out in Las Vegas, always a great event. I'm sure you're still in planning mode, so you can't get into super specific details. But broadly speaking, what should we expect at AppWorld? Is it going to lean more toward securing agents? Are we going to get even more future-facing, AGI? Give us a sense of what we might hear.

AI Meets APIs Meets Multicloud

SPEAKER_03

Yeah, absolutely. So AppWorld is in March in Las Vegas, like you said. It's F5's premier, flagship customer event. I always like to say AppWorld is the fastest way to turn your application strategy into an executable plan, because when folks come together there, they get hands-on labs and they get to talk to other customers and hear their stories. But what I've heard most from customers is that they love the direct access they get to the experts at F5 and the experts at partners like WWT who join us. That's one of the best benefits I hear about: they get a ton of value from it. They get to hear from, and sometimes commiserate with, their peers in the industry about the challenges they're having, learn from one another, and spend time with our executives and architects, or with WWT's architects. It's a great event. As for themes this year, like you said, we're still in planning, but I think you'll see a heavy focus on app and AI security, obviously. Multi-cloud networking, which we continue to see as a big challenge and a big source of value for customers connecting their hybrid and multi-cloud architectures. We talked a little earlier about API security and management, which is always top of mind with our customers, particularly with AI: the API surface area has just exploded, and we're starting to see traffic pass in new directions, so APIs are obviously a big part. And then Kubernetes and operations. We all know that as AI applications are deployed, they're deployed primarily on Kubernetes, so our customers have more and more questions there.
How do we control traffic in and around Kubernetes? And how is F5 helping them operate more efficiently and effectively? We'll obviously have demos of CalypsoAI there, and we're super excited to have WWT join us again this year. I think they're going to bring some of their reference architectures for app modernization, security, and observability, and they'll have architects there you can spend time with. It's always a great partnership with WWT, and we're super excited to welcome them back this year.

SPEAKER_02

We go to Vegas a lot, as we say, for different events, but App World is one I don't miss. I think you'll see F5 and WWT together double down on AI observability, which is something near and dear to our hearts here, and of course runtime guardrails and multi-cloud resilience, as you talked about. And you may or may not see a capture-the-flag event with our cyber range at F5 App World, too.

SPEAKER_03

Yeah, I heard there were a few hackathons and a few capture-the-flag activities going on, but I didn't want to spill too many secrets, you know.

SPEAKER_01

A little gamesmanship out there. Love it. Let me make what might feel like a pivot here, but it's something that keeps popping up more and more on this podcast and in other conversations I'm having around AI, and that's quantum computing. We often talk about quantum as a future issue, something separate from AI, but the two feel like they're getting closer as the days go by. Chris, we can start with you. How close are we to the future of quantum computing? What's the risk? Why does it demand attention right now?

Quantum Is Closer Than You Think

SPEAKER_02

I think we're talking about this every day now. I'm hosting a summit internally within the next few weeks at our headquarters just to talk about the impact of PQC. And we're closer than most realize. The math behind today's encryption is already being targeted by harvest-now, decrypt-later strategies. In other words, adversaries are starting to stockpile encrypted data to crack once quantum computing matures. So if your data needs to stay secure for ten years, your quantum risk is already here. For us, that means we have to start thinking about strategies for how we're going to defend against that and what the threats look like in actual practice: testing post-quantum cryptography today, not waiting for standards to finalize. We're starting to build environments in our labs that run both classical and quantum-resistant encryption, and we're using AI to help automate things like key rotation and certificate lifecycle management. We have to be really proactive about it, because it's here.
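The hybrid classical-plus-quantum-resistant approach Chris describes is commonly implemented by combining two shared secrets into a single session key, so an attacker has to break both exchanges to recover it. A minimal sketch in Python, assuming stand-in random bytes in place of real ECDH and ML-KEM outputs (the function names, labels, and salt are illustrative, not any product's API):

```python
# Sketch: deriving one session key from a classical and a post-quantum
# shared secret, as in hybrid key-exchange designs. The two "secrets"
# below are stand-ins; in practice they would come from, e.g., an
# X25519 ECDH exchange and an ML-KEM-768 decapsulation.
import hashlib
import hmac
import os

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): condense input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF-Expand (RFC 5869): stretch the PRK into `length` output bytes."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Stand-ins for the two key-exchange outputs.
classical_secret = os.urandom(32)   # e.g., from X25519 ECDH
pq_secret = os.urandom(32)          # e.g., from ML-KEM-768

# Concatenate and derive: breaking the session key requires breaking BOTH.
session_key = hkdf_expand(
    hkdf_extract(salt=b"hybrid-demo", ikm=classical_secret + pq_secret),
    info=b"session key",
)
print(len(session_key))  # 32
```

The appeal of this transition pattern is that the derived key is no weaker than the stronger of the two inputs, which is why hybrid schemes are widely discussed as the bridge while PQC standards and implementations mature.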

SPEAKER_01

Yeah. Shawn, what does defending against quantum threats look like to you? Is there any parallel between what we're doing from an AI-readiness perspective that applies to quantum? If so, what is it? If not, what's the difference?

SPEAKER_03

Well, like Chris said, you have to start addressing this now, especially if you're adopting AI, because the data being harvested now to decrypt later is much different from traditional API and application traffic. You have human beings asking real questions about the business and getting responses from these models, and those exchanges can inform strategies and contain a ton of intellectual property and sensitive data. Even though adversaries may not be able to decrypt them now, when they can, that can unlock a whole bunch of secrets from inside your company. So you have to start addressing that need now and start adopting PQC techniques today. On the F5 side, we're starting to think about future generations of hardware, because hardware takes us a long time to build and plan. How do we integrate quantum-resistant ciphers into the chipsets we're using? How do we ensure we can adapt to these things in the future? And how do we use AI, like Chris said, to help monitor what's going on and the progress our adversaries are making within the network? It's really important that we address it now. I don't think it's one of those IPv6 problems we can keep kicking down the road; the value to adversaries of solving this problem is much higher than the value that was driving IPv6. I think this is going to be a really important thing for us to address over the next three to five years.
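One concrete first step toward "adopting PQC techniques today" is inventorying which certificates will still be valid, and still protecting long-lived data, when quantum attacks become practical. A hypothetical sketch, assuming an illustrative inventory structure and an assumed planning date (nothing here comes from F5 or WWT tooling):

```python
# Hypothetical sketch: flag certificates whose validity outlives a projected
# "Q-Day" and that still use classical signatures, so they are first in line
# for PQC migration. Dates, hostnames, and fields are illustrative.
from datetime import date

PROJECTED_Q_DAY = date(2030, 1, 1)  # a planning assumption, not a prediction

inventory = [
    {"cn": "api.example.com",   "expires": date(2026, 6, 1),  "sig": "RSA-2048"},
    {"cn": "vault.example.com", "expires": date(2031, 3, 15), "sig": "RSA-2048"},
]

def pqc_migration_queue(certs):
    """Return CNs of certs still valid past Q-Day that use classical signatures."""
    classical = {"RSA-2048", "ECDSA-P256"}
    return [c["cn"] for c in certs
            if c["expires"] >= PROJECTED_Q_DAY and c["sig"] in classical]

print(pqc_migration_queue(inventory))  # ['vault.example.com']
```

A scan like this is also where AI-assisted automation of key rotation and certificate lifecycle management, as Chris mentioned, would plug in: the queue feeds the rotation workflow instead of a spreadsheet.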

SPEAKER_01

Yeah. Chris, thinking back a couple of years to the explosion of ChatGPT, which kicked off gen AI in general, it felt like a lot of people were caught flat-footed. Do you get the sense that quantum is going to be the same situation, or did we learn from that mistake, and are we a little more forward-thinking as an industry when it comes to defense in the quantum era?

Don’t Get Flat-Footed (Again)

SPEAKER_02

From what I'm seeing so far, I think we're just going to repeat history. It's going to be a last-minute thing, with people scrambling to get themselves ready. That's why we're taking a stance right now, developing workshops and briefings to help our customers understand the problem set: What do they have? What does their inventory look like? So from our lens, yes, we're trying to get way ahead of it right now, because Q-Day, and Shawn can keep me honest on this, could be as early as '27 or '28. It could be. It's right around the corner.

SPEAKER_01

Q-Day. Any time you have a letter followed by "day," that's always ominous and scary. But let's go back to that F5 research and refocus on AI, and we'll close out the episode after this. Whether it's lessons learned from 2025 that can be applied in 2026, or best practices we've already touched on in this great conversation: what are the first few things that CISOs or their teams need to be doing right now to start creeping into that upper tier, the 2% of AI readiness? Chris, we can start with you, and then Shawn, you can close us out.

SPEAKER_02

Yeah. I guess starting with what we learned: AI security is a team sport. I've been saying security is a team sport for years. The organizations making the most progress are the ones that unify cyber, data, and AI under one strategy. I've talked about digital transformation, AI, and cyber; it needs to be one unified strategy, not separate roadmaps. And that's where we need to start.

SPEAKER_01

Yeah. Shawn, final thoughts?

Three Moves to Level Up

SPEAKER_03

Yeah. When I look forward, I try to think of things in odd numbers, so I think there are three things CISOs or CIOs need to think about today. First, they need to build foundational AI governance in their companies: frameworks and platforms that address the security, accountability, transparency, and ethics of the AI they're using and deploying. Second, they need to invest in AI lifecycle security: tools and processes that secure the training, deployment, and monitoring of the models they're using. And the last, much like Chris talked about, is that AI security can't be siloed. You have to create cross-functional approaches; security, DevOps, legal, operations, compliance, and data teams all need to come together around a unified approach to securing AI.
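The foundational-governance point can be made concrete with a simple pre-deployment gate that refuses to ship a model until baseline controls are recorded. A hypothetical sketch; the control names and the `governance_gate` helper are illustrative, not any specific framework:

```python
# Hypothetical sketch: a governance gate run before a model deployment.
# The required-control names are illustrative placeholders.
REQUIRED_CONTROLS = {
    "data_classification",  # what data trained the model / reaches it at runtime
    "api_auth",             # how its endpoints are protected
    "monitoring",           # runtime logging and drift detection
    "owner",                # the accountable team
}

def governance_gate(deployment: dict) -> list[str]:
    """Return the sorted list of missing controls; empty means cleared to deploy."""
    recorded = {k for k, v in deployment.items() if v}
    return sorted(REQUIRED_CONTROLS - recorded)

# A deployment request missing two controls is blocked with a clear reason.
missing = governance_gate({"owner": "ml-platform", "api_auth": "oauth2"})
print(missing)  # ['data_classification', 'monitoring']
```

Even a gate this small forces the cross-functional conversation Shawn describes, because each required field has a different owning team behind it.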

SPEAKER_02

Yeah, one more thing I want to add here, too, that I think we're all in agreement on: workforce enablement. The most sophisticated AI defense, whatever you have, still fails without trained operators who can interpret and act. When I look at the hacking community and our adversaries today, they're not hacking in, they're logging in. We've got to think about that. So workforce enablement, to me, is really important.

SPEAKER_01

Yeah, absolutely. Chris and Shawn, thank you so much. Chris, always great to have you on; I'm sure we'll have you back again soon. Shawn, thank you for joining, and we'll see you out in the desert for App World in a few short months. To the two of you, thank you so much for joining.

SPEAKER_03

Okay, great. Thanks, Brian.

Final Take: Secure or Sorry

SPEAKER_01

Okay, today's conversation made one thing clear: AI is creating real opportunity, but it also demands a clearer playbook, certainly from a security perspective. The organizations making progress are the ones tightening their fundamentals, getting their data and governance in order, and making sure security has a seat at the table from the beginning. If this episode sparked any ideas or questions, share it with a colleague and send us a note about what you'd like to see us dig into next. This episode of the AI Proving Ground Podcast was co-produced by Nas Baker and Kara Kuhn. Our audio and video engineer is John Knoblock. My name is Brian Phelps. Thanks for listening, and we'll see you next time.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

WWT Research & Insights (World Wide Technology)

WWT Partner Spotlight (World Wide Technology)

WWT Experts (World Wide Technology)

Meet the Chief (World Wide Technology)