AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology

When AI Starts to Act on Its Own, Who’s in Control?

World Wide Technology Season 1 Episode 41

As artificial intelligence moves from prediction to action, a new security frontier is taking shape. In this episode, Zscaler’s Head of AI Innovation Phil Tee and WWT’s VP of Global Cyber Chris Konrad explore the rise of autonomous agents, the evolution of Zero Trust and what it means to secure AI itself. From poisoned prompts to quantum threats, they warn the biggest risk isn’t just deploying AI — it’s trusting it. A sharp, real-world look at how enterprises can protect their future as machines begin to make decisions of their own.

Support for this episode provided by: Okta

More about this week's guests:

Chris Konrad is a transformative executive in global cybersecurity who has played a pivotal role in building and scaling WWT's cybersecurity capabilities, culminating in the leadership of its $4.5B global security business spanning the cyber, land, sea, air and space domains. He leads global strategy, practice development and partner engagement across WWT's Global Solutions & Architecture division, driving cybersecurity initiatives that are tightly aligned with customer outcomes and that enable resilience, innovation and growth across the public and private sectors.

Chris's top pick: Cyber in 2025: We're No Longer Just Defending Systems — We're Defending Reality

Dr. Phil Tee is responsible for driving AI innovations at Zscaler, leveraging our unique data assets and the latest in AI technology to push forward what’s possible in Sec and DevOps for Zscaler customers. His team’s goal is to generate novel offerings in the cyber market and ensure that our customers benefit from the remarkable pace of AI innovation.

Phil's top pick: Hands-On Lab Workshop: Zscaler Unified Vulnerability Management

The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.

Learn more about WWT's AI Proving Ground.

The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.

Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.

SPEAKER_03:

From World Wide Technology, this is the AI Proving Ground Podcast. Today, artificial intelligence is no longer just predicting cyber attacks, it's launching them. Systems that act on their own are entering enterprise networks at scale, and the same autonomy that promises efficiency can erase the guardrails that once protected us. Security leaders are now asking themselves a different question: not how do we secure AI, but how do we secure everything AI touches? So on today's show, Zscaler's Head of AI Innovation, Phil Tee, and WWT's VP of Global Cyber, Chris Konrad, warn that every boardroom deploying AI is also deploying new risk, whether they admit it or not. Because when models start making choices and taking actions across systems, every prompt becomes a potential attack vector. So when your organization deploys its next autonomous AI system, the real question won't be whether it can deliver results. It will be whether you can trust what it decides to do once you hit go. So let's dive in. Phil, glad to have you here. And Chris Konrad, always one of my favorites. Welcome to the show. How are you today?

SPEAKER_01:

I'm doing fantastic. Great to be with you again today.

SPEAKER_03:

Yeah. Well, let's start. You know, Phil, I gotta say, I'm a little disappointed. Head of AI Innovation at Zscaler, and I kind of expected you to show up in a white lab coat, that type of thing. I love that title. Can you describe for us a little bit what that means, Head of AI Innovation? That seems like a ton to unpack.

SPEAKER_04:

Yeah, and there is a lot to unpack there. What I would say is my remit at Zscaler is to think the unthinkable. It's a crossover research-into-products area. I have a small team of dedicated data scientists. In fact, my lab, even though I'm not wearing the white lab coat, is around the corner from where I'm talking to you, and there's at least one PhD per person in that room. So these are folks who are very, very deep in the forefront of NLP and AI. What we're trying to do is see how we can leverage what is an incredible data asset at Zscaler, because we sit in the middle of the internet, and use that to build tools and novel model architectures. It's not just a question of agentic or fine-tuning models. We're building novel ways of doing NLP to help people get ahead of cyber threats and, for that matter, the IT operations challenge as well. We're not constraining ourselves just to SecOps. And obviously, here we are in 2025; cyber is more important than it ever has been, not that it was ever not important, because we live in a very uncertain world. So we think of ourselves as trying to take the forefront of AI and put it to good use in defending our customers' citadels.

SPEAKER_03:

Yeah. Chris, I'm curious, based on what Phil just said in describing his team, which sounds like an impressive bunch for sure: what does that signal to the market? Should a lot of organizations have this type of mindset and these kinds of teams as it relates to AI innovation, how to secure it, and so on?

SPEAKER_02:

Yeah, well, 100%. I mean, I think it's one of the greatest innovations, obviously, in our lifetime, but it's also an opportunity for the attackers. And here at Worldwide, we see AI through, I'll call it, a pretty practical lens. Candidly, it's the sharpest double-edged sword we have today, and it's an even more powerful tool for defenders when it's used well. Couldn't agree more.

SPEAKER_04:

And look, literally on Monday and Tuesday I was at a conference in California, and they had the CISOs from Anthropic and OpenAI there, and they were very candid about the fact that their tool is both their greatest asset and also their greatest threat.

SPEAKER_03:

Well, let's dive into the meat of the conversation. I know we're talking today more on the agentic AI sphere and what that means from a security perspective. I do want to level set a bit, Chris, and maybe we can start with you. I feel like agents get a wide variety of definitions these days. So can you tell us how agents differ from traditional AI, and then what that means from a security perspective?

SPEAKER_01:

Yeah, no, happy to dive into it and I'd love to get you know, Phil's insights on this as well.

SPEAKER_02:

I mean, the biggest shift is autonomy. Traditional AI predicts; agentic AI acts. And it doesn't just analyze the data, it takes action across all of your systems and apps and workflows. That autonomy introduces an entirely new threat vector. It could be prompt injection, it could be tool misuse, which is a big problem, it could be data exfil, and it really does force every person inside a security organization to rethink trust at every layer. Guardrails that were built for traditional generative AI simply weren't designed for this. They break when AI systems make decisions and take actions on their own. And this is where things need to evolve. This is where the security model has to evolve.
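To make the tool-misuse risk concrete, here is a minimal, hypothetical sketch of least-privilege gating for agent tool calls: anything the agent requests that is not explicitly allowed is denied before it executes. The tool names, limits, and dispatcher are illustrative placeholders, not a WWT or Zscaler implementation.

```python
# Hypothetical sketch of least-privilege tool gating for an autonomous agent.
# Tool names, limits, and the dispatcher are illustrative only.

ALLOWED_TOOLS = {
    "search_tickets": {"max_rows": 100},   # read-only lookup, bounded result size
    "summarize_doc": {},                   # no side effects
}

def dispatch(tool_name: str, args: dict) -> str:
    """Stand-in for the application's real tool dispatcher."""
    return f"ran {tool_name} with {args}"

def execute_tool_call(tool_name: str, args: dict) -> str:
    """Run an agent-requested tool only if an explicit policy allows it."""
    policy = ALLOWED_TOOLS.get(tool_name)
    if policy is None:
        # Deny by default: anything the agent invents is blocked, not guessed at.
        raise PermissionError(f"Tool '{tool_name}' is not on the allow-list")
    max_rows = policy.get("max_rows")
    if max_rows is not None and args.get("rows", 0) > max_rows:
        raise PermissionError(f"'{tool_name}' requested more data than policy permits")
    return dispatch(tool_name, args)

# An injected instruction asking the agent to call "export_hr_database"
# fails closed instead of silently exfiltrating data:
# execute_tool_call("export_hr_database", {"rows": 1_000_000})  -> PermissionError
```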

SPEAKER_04:

Couldn't agree more, actually, Chris. Here's the thing. We're all human beings, and I promise you I'm not an avatar, although I would say that, wouldn't I? We're all human beings, and we come with this wonderful thing called a moral compass. Maybe it's a side debate as to whether or not it's universal in humanity, but for the large part we come with a moral compass, and what that means is that mostly we're all trying to do the right thing most of the time. So the number of bad actors is really a tiny percentage. And here's the other thing about bad actors: they gotta sleep. They take time off to, I don't know, watch whatever they watch on Netflix when they're not extorting money from banks and people and so on. So it's not a continuous thing. Agents do not have a moral compass, they don't sleep, and to the best of my knowledge they don't watch Netflix and take time off. So you've got this dual thing going on: agents have no capacity for self-restraint, they will follow their mission, and if their mission goes down a path that is not a good path, they're not gonna stop and think again and go, well, maybe I shouldn't be copying these secrets, or whatever it might be. And the second thing is they're relentless and infinitely scalable, which is a benefit when you're trying to use them to do work. But in the context of cyber, that's just gonna exponentiate the volume of attacks and the potential criticality of some of those attacks as well. So it is a very, very different type of landscape that we're talking about here.

SPEAKER_02:

Yeah, Phil, if I could just add one thing to that: here at Worldwide, security is embedded into everything that we do and everything that we sell, candidly. And our mission, if you will, is to secure that AI ecosystem, and we call it from chip to cloud. No more naked AI. We can't be putting AI infrastructure in organizations without thinking about how to secure it. Go back years ago to the software development lifecycle: we were trying to have a seat at the table to make sure security was integrated into that development lifecycle. Same thing with AI. Let's not have any conversations without thinking about how to secure it throughout the whole cycle.

SPEAKER_03:

Yeah. Well, Phil, I love that. You're talking about how these bad actors, in many cases, are humans too. I love the idea of picturing them watching Netflix, whether it's Is It Cake or Bridgerton, just to give you an insight into my algorithm there. Phil, I want to share a study I just read recently from Cornell University. It showed that poisoning as little as 2% of an agent's training traces can embed a backdoor capable of leaking confidential data with over 80% success. So if just 2% is enough to bypass some of these guardrails, does this mean we have to completely restructure how we're approaching security and agentic AI? Because a lot of times we also talk about, hey, just get the basics right and that is going to help you a tremendous amount. Or is it a complete rethink?

SPEAKER_04:

Yeah, I'd say a couple of things. Look, I work for and represent Zscaler, and this company has been enormously successful on the basis of its zero trust architecture. I don't think the need for a zero trust approach, or the architecture that we have, gets less; I think it gets more. You have to, and maybe this is building on Chris's comment, allow no naked AI. If you think about the flow, whether it's an agent or a human going to a language model, it involves the exchange of natural language prompts and getting data back as responses. You can't allow those communications to be ad hoc and willy-nilly, to use a British phrase. You've got to thread them through a secure channel, which is precisely the principle behind zero trust. But once you do that, you are looking at a very different category of information that you have to inspect to assert security, because you're dealing with natural language. So this is where you have to have AI inspecting AI, and certainly we've been making noises about this in the market, making moves with offerings like AI Guard, which is essentially precisely that. It's that type of approach you have to take, because you need to be able to detect when a prompt has been poisoned, or the training data has been poisoned, in order to take action. And to detect that, you need to be in the path.

SPEAKER_03:

Yeah, I do want to dive into AI Guard a little bit more. But first, Chris, can you talk to us about how the concept or understanding of zero trust has evolved since gen AI exploded onto the scene, if it has evolved at all?

SPEAKER_02:

Quite a bit. I mean, candidly, I think Zscaler is a great example of how zero trust is being reimagined for today. Their Zero Trust Exchange is infused with AI. So you think about three things there. There's AI-driven threat detection, which spots the zero-day attack. You've got automated policy enforcement and proactive breach prediction. And then you also have AI-powered data discovery, which is a big deal, and that keeps sensitive information safe even if it's inside encrypted traffic.

SPEAKER_04:

Yeah, that's right. When I showed up at Zscaler, one of the things that was a very pleasant discovery for me is that Zscaler is an AI-forward company. We've been doing it for quite a while. Instant verdict is a classic example: on payloads, being able to detect whether something is malicious or not, we have language models that help with that.

SPEAKER_02:

Yeah, and to me, this is what makes our partnership unique. We work with you side by side in our Advanced Technology Center and also in our cyber range, where we can do this type of testing, do the validations, and do the integration of these guardrails with Microsoft or other ecosystem partners. And the result we really get is an end-to-end zero trust posture that protects not only the AI, but everything it touches.

SPEAKER_03:

Okay, let's go a little deeper from the AI Guard and Zscaler perspective. Phil, how is that adding new layers of protection? Maybe walk us through a little bit of the solution and the value it provides enterprise organizations to defend against all these new, emerging threat vectors.

SPEAKER_04:

Absolutely. Very specifically, AI Guard extends the capabilities we already have, to a degree, for public model access into private model access. If you're building an agentic application, one of the things a lot of our customers like to do is host their own models so that they're not dependent upon going to public foundational models to build their agentic frameworks. Now, when you're doing that, one of the things you're going to be very concerned about is precisely what we talked about earlier: prompt poisoning, malicious use of the large language model, hijacking of the language model for uses it's not designed for, and many other bad vectors, if you will, in the use of the model. So what we have is the ability to do deep inspection on the prompts and determine whether or not a prompt is clean, whether or not it is a prompt the system is designed to work with, and whether or not it's intended as part of the application. For example, I might set up an HR application that's agentic. What I wouldn't want that HR application to do is allow, say, somebody who works for me to work out how much I earn. And there are lots of ways in which you could imagine an agent might go rogue and do that, because remember, these things are autonomous and have a degree of intelligence. So you want to be able to handle not just a very simple prompt like "tell me how much Phil Tee earns," but also much more tangential stuff like "how many cups of coffee could Phil buy in a month, assuming that a cup of coffee costs five bucks?" Those are roundabout ways of getting answers to questions. What we're able to do, by the use of language models, is determine the intent, the sentiment, and the action that will follow from a prompt, and then apply policy against that. We can detect gibberish, we can detect inappropriate responses from the models, we can control language. So we put a lot of power in the hands of our customers to essentially put policy around prompting, and that's what AI Guard is: policy around prompting for private models, alongside the ZIA offering, which is policy around prompting for public models.
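To picture what "policy around prompting" means in practice, here is a toy, illustrative sketch of a prompt-policy gate for a hypothetical private HR agent. A real system like the one Phil describes would use language models for intent classification; the keyword heuristic, intent names, and patterns below are placeholders that only make the flow concrete.

```python
# Illustrative sketch only: a toy "policy around prompting" check for a private
# HR agent. A production system would classify intent with language models;
# the keyword patterns here are stand-ins to show the control flow.
import re

BLOCKED_INTENTS = {
    "salary_disclosure": [
        r"\bhow much does\b.*\bearn\b",
        r"\bsalary of\b",
        r"\bcups? of coffee\b.*\bbuy\b",   # the tangential, roundabout query
    ],
}

def classify_intent(prompt: str) -> str | None:
    """Return the name of a blocked intent if the prompt appears to match one."""
    lowered = prompt.lower()
    for intent, patterns in BLOCKED_INTENTS.items():
        if any(re.search(p, lowered) for p in patterns):
            return intent
    return None

def apply_prompt_policy(prompt: str) -> str:
    """Forward the prompt to the private model only if no policy is violated."""
    intent = classify_intent(prompt)
    if intent is not None:
        return f"Blocked by policy: {intent}"
    return "Forwarded to private model"

print(apply_prompt_policy("How many cups of coffee could Phil buy in a month?"))
# -> Blocked by policy: salary_disclosure
```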

SPEAKER_03:

Yeah, Chris, you can see how autonomous agents working with each other can quickly compound to create a risky profile right there. Maybe build a little bit on what Phil talked about with AI Guard: how can we use that, and how should we be thinking about that type of policy, whether it's access, identity, et cetera?

SPEAKER_02:

Yeah, when we think about guardrails in general, it is a common concern, and we hear it across every vertical and every sector around the world. But to be honest with you, it's just not true if it's done right. We prove this out every day in our labs and our Advanced Technology Center: you can protect without slowing down the innovation, but you need to put the effort into doing it. You've heard me say it on these podcasts in the past and at other events: you've got to really focus on the fundamentals, focus on the basics, and test it. Security doesn't have to be that friction point; it can really be fuel when it's built into the design phase we talked about a few minutes ago. So yeah, it's a common concern, but you have to fundamentally work through it and make sure you can prove it out and do your testing.

SPEAKER_03:

Yeah. Chris, I want to go back to something you said that I thought was interesting. You referred to AI as the sharpest double-edged sword we have. I love the way you put that; I'd never heard it before, but it's an excellent way to describe it. So let's flip the script here. How is AI going to help accelerate the effectiveness of zero trust? What can it do to make sure zero trust is working more efficiently or more effectively?

SPEAKER_02:

Yeah, from an overall operationalization and scaling standpoint, that's really where the work begins. Our job at WWT is to help our customers turn these great ideas they have into outcomes, safely and at scale. How we're doing that today is we'll work together with partners like Zscaler: we'll run these joint briefings and workshops, and we can do the full-scale testing across our ATC and the cyber range so the leaders of these organizations can see the security implications before they go live. And then taking the governance, taking the red teaming, all of that is important, and you have to pressure test it, not just from a technological standpoint but, as Phil talked about earlier, from an ethical standpoint and operationally. That's how the trust is earned, and that's how the adoption and the operationalization of it really starts to happen at scale. I think this is where we are as an industry today: people really trying to figure out, okay, now what? How do I get this to work across the organization, and how can we get it done in a secure manner? That's exactly where we are. Phil, what are your thoughts on that?

SPEAKER_04:

Yeah, I was gonna say, because you prompted me, no pun intended (and I told you I was not an avatar), to think about one particular concern that always goes through my mind. In the popular agentic architectures, if you think about how you make use of language models inside of an agent, typically you have the planner agent, the central coordinator; people give it different names. It's responsible for taking the task and then working out what subtasks need to be executed in order to fulfill the mission of that task, and usually that's the more complex bit of reasoning. So if you're using a template like ReAct, the reasoning-and-action type of paradigm, that's where you're gonna want to use more of a foundational model to build those plans. So quite often you end up with an agentic architecture that uses a combination of external and internal models: expensive external models for the planning, maybe smaller language models for the individual agents that are processing a much more restricted set of data from one of the tools they may be making use of. So you end up with this scenario where you've got a mix of external and internal models. Where I think that's a particular challenge, whether from a security point of view or from a privacy perspective, is not sharing or leaking PII out there. That's where you really need a framework that is able to surveil not just the internal but the external prompting that your application is doing. And you need to think very, very hard about how you enforce the guardrails so you don't get a lazy or sloppy programmer accidentally sending data that shouldn't be sent to an external foundational model, or frontier model if you prefer that terminology, rather than to the internally hosted, smaller, simpler models. That's a very complicated scenario. You're layering a complex architecture on top of a complex application, and you can see where this can very quickly go wrong unless you've got the tools in place; I go back to this sort of deep prompt inspection type of paradigm.
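A minimal sketch of the mixed internal/external pattern Phil describes, under the assumption of a hypothetical setup where the planner uses an external frontier model, the worker agents use internally hosted models, and a single choke point screens anything before it leaves the environment. The client class, endpoints, model names, and the contains_pii check are placeholders, not a real API.

```python
# Hypothetical sketch: planner on an external frontier model, workers on internal
# models, with one choke point that screens anything leaving the environment.
# LLMClient, endpoints, model names, and contains_pii() are placeholders.

class LLMClient:
    def __init__(self, endpoint: str, model: str):
        self.endpoint, self.model = endpoint, model

    def complete(self, prompt: str) -> str:
        # Stand-in for a real inference call.
        return f"[{self.model}] response to: {prompt[:40]}..."

EXTERNAL_PLANNER = LLMClient("https://api.example-frontier.com", "frontier-large")
INTERNAL_WORKER = LLMClient("https://llm.internal.example", "small-internal")

def contains_pii(text: str) -> bool:
    """Crude placeholder; a real deployment would use dedicated detectors."""
    return "@" in text or any(ch.isdigit() for ch in text)

def plan(task: str) -> list[str]:
    """Only a screened task description crosses the boundary to the external planner."""
    if contains_pii(task):
        raise ValueError("Refusing to send possible PII to the external planner")
    plan_text = EXTERNAL_PLANNER.complete(f"Break this task into subtasks: {task}")
    return [plan_text]  # a real planner would parse structured subtasks

def run_subtask(subtask: str, sensitive_context: str) -> str:
    """Sensitive tool output stays with the internally hosted worker model."""
    return INTERNAL_WORKER.complete(f"{subtask}\n\nContext:\n{sensitive_context}")
```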

SPEAKER_00:

This episode is supported by Okta. Okta provides identity and access management solutions to securely connect your people to the right technologies. Simplify user authentication with Okta's reliable identity platform.

SPEAKER_03:

Yeah, Chris, uh maybe make the case for us. I know a lot of people might, you know, think of guardrails as something that would slow you down or potentially make you not move as quickly. Um, but I know here at WWT we like to make the case that you know those guardrails and that governance can, you know, can actually lead to acceleration. Can you make the pitch for us uh as to why that might be the case?

SPEAKER_02:

Yeah, at the end of the day, as I talked about earlier, so many people worry about the guardrails slowing down the innovation or the productivity. As I mentioned, it is a common concern. But we've created an environment inside of WWT, with our Advanced Technology Center, with our AI Proving Ground, and with our cyber range, where people can test, do multi-OEM integrations, and work with this at scale without having to worry about breaking something. So our customers and our partners like Zscaler can leverage and access these environments so that, as I said earlier, security becomes the fuel: you get in, build it into the design phase, and make sure everybody is getting what they want out of it.

SPEAKER_03:

Yeah, absolutely. Well, you know, we talk about, you know, kind of pushing those pilots into production with safety. Um, you know, Phil, let's jump to something that could also be a potential um hindrance here, and that's you know, the regulatory environment. What are we seeing on uh regulatory environment or policy um as it relates to to AI and security? What should CISOs or security teams be looking out for or thinking of so that they're in position to keep moving when new regulation comes out?

SPEAKER_04:

Well, it's complicated. And I hate to say it, but I think half the problem is that you've got a very different approach to regulation around AI depending upon the nationality or the geographic domain you're in. Very famously, there is a tension between the US and Europe. The US wants a more freewheeling approach; the Europeans are trying to legislate heavily around the use of AI and the control of privacy in particular, and the sovereignty of data. It's an enormously complex place, so we don't really know where it's going to land. We do know there's going to be a lot of change, and a lot of chopping and changing of the various approaches as they evolve. So really, at the end of the day, the advice I'd give to CISOs, or anybody building AI-based or AI-inspired applications, is that you absolutely have to have the ability to make radical changes about what data goes where as the regulatory landscape evolves, and maybe take an almost defensive approach to it. I go back to the example I was giving earlier: a very common approach in building agentic frameworks is to reserve the frontier foundational models for the planning agent and do everything else with internally hosted models, where sovereignty and privacy are less of an issue. It doesn't go away, but it's less of an issue because it's inside your environments. And design privacy first: whatever goes out to the planning agent should be asserted to not contain things like social security numbers or email addresses or phone numbers or whatever it might be. And without sounding overly pluggy about this for Zscaler, that's where you need a zero trust approach to how the prompts are being shipped around.
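As one way to picture that "assert before it leaves" step, here is a small, hypothetical sketch of a pattern-based check that a prompt bound for an external model contains no obvious social security numbers, email addresses, or phone numbers. Real deployments would rely on proper DLP and classification tooling rather than a handful of regexes; the patterns below are illustrative only.

```python
# Illustrative only: assert a prompt contains no obvious PII before it is sent
# to an external planning model. Real systems would use dedicated DLP tooling;
# these regexes are deliberately simple.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def assert_no_pii(prompt: str) -> None:
    """Raise before anything that looks like PII crosses the trust boundary."""
    findings = [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
    if findings:
        raise ValueError(f"Prompt blocked: possible PII detected ({', '.join(findings)})")

assert_no_pii("Plan the onboarding steps for a new sales hire.")        # passes
# assert_no_pii("Email jane.doe@example.com her offer at 555-867-5309")  # would raise
```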

SPEAKER_02:

Yeah, so I just want to jump on that real quick, because we see the same thing. Every theater around the world, every region, is tightening its rules, from the EU AI Act to new data localization laws in the US and Asia. So it really takes organizations designing responsible AI frameworks, as you said earlier, that keep them compliant without losing the speed they want to move at. And this is what excites me about Zscaler. The whole AI data protection platform is a great example of that: you automatically do the discovery, you go out and classify all the controls and data across channels and, as I said earlier, even in that encrypted traffic. So this is where organizations can really maintain the privacy, the compliance, and the performance all at once.

SPEAKER_04:

Well, you're spot on, Chris. At the end of the day, you need to automate as much as possible of the data classification and control and so on, and that is precisely what our platform does. And it's even worse than what I said, because I focused on nation-state legislation; we have our very own quasi member of the European Union inside the United States, being California, which takes a very different approach to regulation than, say, Texas, or even where I'm sitting in New Jersey. It is an enormously complicated matrix that you have to deal with as you think about data in these types of applications.

SPEAKER_03:

Phil or Chris, have we found a good way to advise organizations on how to stay on top of that rapidly evolving matrix? It seems almost as difficult as staying up to date with all the new AI innovations. What can we do to make sure we're plugged in, that we see what's coming on the horizon, and that we're able to act accordingly? Any good tips from that standpoint? Phil, should we go to you first?

SPEAKER_04:

Yeah, it's work. You need effort and focus and direction from senior leadership. At Zscaler, we have a team of compliance officers in our legal department that worries about this. I spend time myself, on a very regular basis, talking with that team about what we're doing and making sure we're not inadvertently crossing any kind of line from a regulatory or legal perspective. You have to have that mindset of think before you transact, from an AI point of view. And I hate to say it, but the quickest way to get into trouble is to think there's nothing to worry about.

SPEAKER_02:

Yeah, and I think for us, Brian, it's a few things. One is understanding, first of all from an assessment standpoint, what levels of risk organizations are operating at, not only technically but also programmatically: how they're aligned with the different standards, frameworks, and regulations that matter to them. That's important, but we can also come in at a board level and give board briefings on what's important and the things they need to be looking out for, as Phil talked about a minute ago: which nation states do we need to be concerned about, and what's happening right now across that landscape. And then we take them, as I said earlier, into briefings and workshops to really nail down exactly what it is they're trying to accomplish and what outcome they're looking to achieve, and then we work with our partners like Zscaler to help design the right solution for them.

SPEAKER_03:

No, absolutely. Well, all of this kind of lends itself toward, you know, towards a bigger picture of just what comes next and what do organizations do uh to keep pace, whether it's with the evolving nature of agents, whether it's with uh regulation, or whether it's with you know just a rapid uh change in infrastructure and solutions that people are faced with. Um, Chris, any signals that you might be looking for over the next couple of months to call it a year, or maybe as we get into the bulk of 2026, um, you know, what type of signals are you going to be looking for that would encourage you that more organizations are are safe and secure with their AI strategies?

SPEAKER_02:

Yeah, great question. Obviously, as we've said throughout, the biggest shift we're seeing is that AI is no longer optional. It's a strategic imperative for security operations. And if we get it right, AI will close the talent gap, which is a big concern, it'll help strengthen operational resilience for organizations, and it frees up our best defenders to focus on the hardest problems. But from a signals standpoint, maybe there are three things that come to mind, and Phil, I'd love to get your take, agree or disagree. One is the rise of autonomous AI agents in production environments; we talked about that earlier. The one I'm most concerned about is the weaponization of AI by attackers; it's getting more difficult every single day. And the third is the convergence of AI safety and cybersecurity in regulations globally. That's what I'm looking for. Phil, what about you?

SPEAKER_04:

Yeah, well, it's that second point, and not that I radically disagree with anything you said, Chris, but that second point about weaponization is deeply serious. Nobody decrypts passwords anymore. And as for breaching physical security, physical security is very, very good; it barely happens, it's extraordinarily rare. The weakest link in the chain from a security point of view is the human being. And phishing and deepfakes have gone from "I'm a Nigerian prince, please give me a bank account" to stuff that is genuinely very, very hard for smart people to work out as fakes. Whether it's faking out biometrics or ever more convincing man-in-the-middle attacks, all of this stuff is just gonna go up and up and up. And it turns out that your best bet in terms of defending yourself against that kind of thing is AI itself, AI that's able to spot watermarks and so on and so forth, which would give it a chance to say, hey, look, this is a suspicious video, it's not a real video from your bank manager.

SPEAKER_02:

So, what do they say, Phil? I'm not hacking anymore, I'm just logging in.

SPEAKER_03:

So the risk is using AI, and the risk is also not using AI, is kind of what you're saying here.

SPEAKER_04:

Yeah, it is. And look, I was just at a conference, a regular annual thing that brings together a load of CISOs and CXOs in a room with industry leaders, and it is definitely true to say that if you go back two years, there was a great deal of skepticism about AI in the enterprise. That's completely gone away now. Everybody is leaning in to building these applications. But the other conversation, and I find this so funny, that was very, very clear was: oh, this is dot-com 2.0 and the bubble's gonna burst, and maybe this all goes away on its own. But here's the thing: when the dot-com bubble burst, did that stop us using the internet? Did that make those use cases go away? Far from it. We've now adopted and connected the internet beyond our wildest dreams, and it works, and so on and so forth. And that's my point about AI: this is a change that is indelible, it's not going away, and we'd better get our act together as an industry sooner rather than later about how we do that safely.

SPEAKER_02:

Yeah, and I think we've come a long way from that era, and just from a security standpoint, we've learned a lot over the years. And I think now, if enterprises are aware of what happened in the past around the dot-com era and so forth, and treat AI security as a first-mover advantage, not a compliance chore or something they have to get through, they'll be the ones leading the next era of what I'll call digital trust.

SPEAKER_04:

Yeah, totally agree. And here's the other thing as well, in case you don't have enough nightmares; it is Halloween coming up here in the US, so maybe I'll throw another one in. You start marrying quantum with AI, and you can basically forget about any encryption. Cryptographically significant quantum computers are way more likely than general-purpose quantum computers. Imagine that alongside AI: agents that are smart and can walk through any wall you put up. Yeah, we've got a lot on our hands.

SPEAKER_03:

Well, I was gonna close on that question. Phil, act as if you've got your white lab coat on, at the risk of sounding like I'm prompting your avatar here. You mentioned quantum. What else? Think longer term, or maybe quantum's not even that long term. Are we talking defending against AGI, quantum, physical AI? Where is the puck going long term that's gonna present security risks? And Chris, you can build off what Phil says as well.

SPEAKER_04:

I mean, I can definitely have an argument with myself about this, because one of the things I've started doing is referring to gen AI as "reg AI," because it's not generative, it's regurgitative. There are probably a handful of avenues where I think there are going to be massive advances. Agentic frameworks are going to continue to evolve: they're going to get smarter, they're going to get better, and emerging protocols like MCP and A2A are going to start bringing agent registries, agent authentication, all that kind of stuff. So there will be huge advances there. Second, we're squeezing the lemon with transformer technology at the moment, and arguably we're starting to see the very beginnings of not getting much better. ChatGPT-5 famously 10x'd the number of parameters, from about 1.7 trillion to nearly 60 trillion, and my math's not quite right there, but whatever; that is a huge advance in terms of the complexity of the model. Is it much better? The jury's out a little bit on that; some people say yes, some people say no, depending on whether you work for OpenAI or not. So you sort of feel like there needs to be some breakthrough, some DeepSeek moments, with novel model architectures. And then the third thing is quantum. I'm a theoretical physicist by background, and I have all sorts of reasons for not believing in a general-purpose quantum computer, which is not the same as not believing in a cryptographically significant quantum computer, or single-purpose quantum computers changing the game on some traditionally compute-heavy tasks like decryption, as an example. When that starts to come into the mix as well, we're going to start to see some very, very interesting changes needed in architecture to deal with that.

SPEAKER_02:

Yeah, and maybe just to jump in, Brian, as you said, on the things that keep me up at night, looking ahead a little bit and maybe just outside the AI realm, and Phil touched on it briefly here: infrastructure modernization. Dormant nation-state persistence inside your own network. How are you going out and identifying vulnerable network equipment, end-of-life hardware, end-of-life software, and making sure you've got full-spectrum hardware configuration assessments across everything you have: data centers, campus, OT environments? To me, that's a big deal, and going into next year, that's top of mind. Quantum computing is obviously there. And as we've said throughout this podcast, securing AI. For me, those are the top three.

SPEAKER_04:

You know, it's an interesting point you made there, Chris, because North Korea doesn't make and supply routers to corporate America, but they know people who do. And you do have to worry about that. It's maybe 20 years, maybe longer, since a major refresh in networking infrastructure took place, and so there's a lot of stuff gathering dust in the data centers that needs looking at.

SPEAKER_03:

Well, Chris and Phil, thank you so much for taking the time out of your busy schedules to join us and talk about the important topic of securing AI. We covered a lot: agents, the regulatory environment, what's coming next. It was a fantastic conversation. Thank you to the both of you for stopping by. My pleasure.

SPEAKER_04:

Great to talk with you all.

SPEAKER_02:

Yeah, great conversation, Phil. Brian, as always, great job, and we'll talk soon for sure. Absolutely.

SPEAKER_03:

Thanks, all. Okay, thanks to Chris and Phil for joining us on today's episode. A few key lessons stand out. First, AI autonomy changes everything. It introduces a level of scale and speed, in both defense and attack, that traditional security models were never built to handle. Second, guardrails still matter. Security baked in early, tested, validated, and continuously monitored becomes acceleration, not just friction. And third, this landscape will not slow down for policy to catch up. Regulations are moving, attackers are adapting, and the organizations that stay ahead are the ones treating AI security as a strategic advantage, not just a compliance task. The bottom line: as your enterprise pushes AI further into production, remember that trust is not guaranteed by the technology; it is earned by how you secure it. If you liked this episode of the AI Proving Ground Podcast, please follow, rate, or review us wherever you listen, and join us next time as we continue exploring how AI is reshaping the enterprise. You can always catch additional episodes or content related to this episode on WWT.com. This episode was co-produced by Nas Baker and Kara Kuhn; our audio and video engineer is John Nomblock. My name is Brian Felt, and we'll see you next time.
