AI Proving Ground Podcast: Exploring Artificial Intelligence & Enterprise AI with World Wide Technology
AI deployment and adoption are complex — this podcast makes them actionable. Join top experts, IT leaders and innovators as we explore AI’s toughest challenges, uncover real-world case studies, and reveal practical insights that drive AI ROI. From strategy to execution, we break down what works (and what doesn’t) in enterprise AI. New episodes every week.
AI Moves Fast. So Do Breaches. Now What?
AI is accelerating everything — including your attack surface.
Prompt injection. Shadow AI. Non-human identities. Automated agents making decisions at machine speed. The pressure to move fast is real. So is the risk.
In this episode, Fortinet’s Aamir Lakhani and WWT’s Dave Pisarek lay out a practical path forward. No hype — just the controls that actually work.
We cover:
- Why prompt injection isn’t going away
- How attackers hide payloads in unexpected encodings
- Why identity and access management is becoming the control plane for AI
- Guardrails that matter: least privilege, segmentation, scoped agents, robust logging
- Why every automated system needs a human-reviewed “undo”
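The "human-reviewed undo" idea can be sketched as a pending-action queue: automation proposes a reversible action, nothing executes until a reviewer approves, and every applied change keeps its rollback. A minimal illustration in Python (class and method names are hypothetical, not from any vendor product):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProposedAction:
    """A reversible action proposed by an automated system."""
    description: str
    apply: Callable[[], None]   # what the automation wants to do
    undo: Callable[[], None]    # how to reverse it
    approved: bool = False

class ReviewQueue:
    """Holds automation output until a human approves; keeps an undo stack."""
    def __init__(self) -> None:
        self.pending: List[ProposedAction] = []
        self.applied: List[ProposedAction] = []

    def propose(self, action: ProposedAction) -> None:
        self.pending.append(action)

    def approve_and_apply(self, action: ProposedAction) -> None:
        action.approved = True
        action.apply()
        self.pending.remove(action)
        self.applied.append(action)  # kept so the change can be rolled back

    def rollback_last(self) -> None:
        """The 'undo button': reverse the most recent approved change."""
        action = self.applied.pop()
        action.undo()
```

For example, an agent could propose blocking an IP; the block only lands after approval, and `rollback_last()` reverses it if the call was wrong.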
You’ll also hear where AI is already helping SOC teams move faster — surfacing anomalies, correlating IOCs, and shrinking time to containment.
If you’re being asked to scale AI while preventing breaches, this is the conversation to have with your security and engineering leaders.
Speed is inevitable.
Exposure isn’t.
More about this week's guests:
Dave Pisarek is a Fortinet Practice Manager with 24 years of experience in information technology and a Fortinet NSE 4 certification. He specializes in network security design and integration, translating complex technical requirements into practical, standards-based solutions. Known for his clear communication and problem-solving skills, Dave develops business plans, architectural designs, and project documentation that align security strategy with operational outcomes.
Dave's top pick: Fortinet Xperts 2025
Aamir Lakhani is Global Director of Threat Intelligence and Adversarial AI Research at Fortinet. With more than 15 years in cybersecurity, he leads research and architectural strategy across cyber defense, malware, mobile threats, and advanced persistent threats. Aamir has designed solutions for major commercial, federal, and defense intelligence organizations, helping them defend against sophisticated adversaries while advancing FortiGuard and AI-driven security capabilities.
Aamir's top pick: Stop Stacking Boxes. Start Designing Security.
The AI Proving Ground Podcast leverages the deep AI technical and business expertise from within World Wide Technology's one-of-a-kind AI Proving Ground, which provides unrivaled access to the world's leading AI technologies. This unique lab environment accelerates your ability to learn about, test, train and implement AI solutions.
Learn more about WWT's AI Proving Ground.
The AI Proving Ground is a composable lab environment that features the latest high-performance infrastructure and reference architectures from the world's leading AI companies, such as NVIDIA, Cisco, Dell, F5, AMD, Intel and others.
Developed within our Advanced Technology Center (ATC), this one-of-a-kind lab environment empowers IT teams to evaluate and test AI infrastructure, software and solutions for efficacy, scalability and flexibility — all under one roof. The AI Proving Ground provides visibility into data flows across the entire development pipeline, enabling more informed decision-making while safeguarding production environments.
AI Is Expanding Your Attack Surface
SPEAKER_01So, on this episode, we'll get practical with how to reduce prompt injection risk, control identity and access management for non-human users, and ultimately keep the business moving. We'll be talking with Fortinet Global Director of Threat Intelligence and Adversarial AI Research, Aamir Lakhani, whose research has guided enterprise security leaders through the shift from signature-based defense to AI-driven, behavior-based protection. And Dave Pisarek, a cyber thought leader here at WWT, who's leading organizations of all shapes and sizes to operationalize practical security measures at scale. From World Wide Technology, this is the AI Proving Ground Podcast. And if you're currently being told to accelerate AI adoption while your security team is still expected to prevent breaches at machine speed, this episode is your playbook. So let's jump in. Okay, Aamir, Dave, thanks for joining today. How are the both of you?
SPEAKER_02I'm doing great. Another day.
Move Fast — Or Get Breached?
SPEAKER_01Perfect. Aamir, let's jump right in. Organizations across the entire planet are feeling a lot of pressure to drive AI adoption. This is coming from the CEO, from boards, from executive teams. But the CISO and the SOC teams, the security teams, sit on the other end of that, downhill, so to speak. So, very high level: if I'm a security analyst or a security specialist, how am I thinking about balancing enabling safe AI adoption that can scale while also preventing AI-driven breaches? And, oh, by the way, I can't pause the business for any of this. So what's the playbook here?
SPEAKER_02You know, we're figuring out the playbook, honestly, as we go along. AI is being used everywhere now, and we're putting more and more information into it. Historically, the idea was: let's not share our information publicly, let's be very conscious about the information we're putting out in the world. With AI, it's almost the opposite. You want to give it as much information as you can, build your LLMs with as much data and as much thinking as you can, and then let it make those decisions. Unfortunately, that creates a lot of risk: the risk of losing intellectual property, of having that data exploited against you. And shadow AI is now becoming a really big deal, where you don't know how your data is being used or what AI tools and projects may be in use within your organization.
When Attacks Move at Machine Speed
SPEAKER_01Yeah, Dave, what are you seeing in terms of a shift with security teams here? Are they rising to meet the occasion, or are they constantly playing catch-up? How have you seen the role of the CISO's office, or the teams beneath it, shift?
SPEAKER_00Yeah, as we were saying, we're basically flying the ship while we're building it when it comes to AI. It reminds me a lot of the old days with cloud, when everybody just wanted to get into the cloud and get everything going the way they wanted. With AI, it's the same thing. Everybody's doing what they need to do, and some people just aren't thinking ahead of where they need to be. It's getting checkboxes checked. But if you're not using AI, you're definitely behind the eight ball. You're not going forward, you're staying stagnant, just like some companies did years ago, and you saw what happened to them. So you have to do AI. I see a lot of questions, a lot of different looks at different vendors. I also see Fortinet starting to step up in this space and do the things we need them to do, the things everybody needs to do, and I've seen some really good progress coming from Fortinet.
SPEAKER_02Yeah, I would completely agree with you, Dave. Most modern cybersecurity programs weren't designed against attacks coming from AI systems. And if your organization can be attacked at machine speed, it has to be defended at machine speed as well. That's pretty much the only choice you have. Another interesting thing: I liked how you mentioned that if you're not into AI, you're going to be behind the eight ball, because it's at the point where you may not be able to catch up to your competition, given the speed at which AI is evolving. If you're not in it right now, you could fall behind the competition and may never have a chance to catch up. And that could definitely damage your business, your reputation, as well as your products and the revenue you're bringing in.
SPEAKER_00Definitely, I agree with that 100%. It is one of those things where you have to be out playing in the field with it. But being the leader in AI development, AI security, and everything in between is, to me, kind of a weird place to be, because if you're leading, you always have to stay on top. It's kind of nice to watch what the industry is doing, see what's happening out there, and then start making your way toward the decisions you want to make for your company, or as an OEM or security company.
SPEAKER_02Yeah, and it also gives attackers motivation to get ahead of the game. Not only from a corporate espionage standpoint, but also from a nation-state standpoint, especially with the export restrictions in the semiconductor space. There's a lot of motivation for nation states to use AI to steal that information as well.
SPEAKER_00I always find it funny when we talk about AI, there are always those three pillars, right? There's AI security: securing your prompts, your LLMs. There's being able to find when attacks are happening, with things like your FortiAI assistant actually helping you. So not only are we protecting AI from attacks on AI with AI, we're also trying to detect AI with AI. It's a very unique little situation, where you have analysts really just sitting there watching what's happening.
Prompt Injection Is Here to Stay
SPEAKER_01Well, I know we mention AI a lot on this podcast, but you might have fit the most AIs into a single sentence right there, Dave. We'll send you a badge. Aamir, we hear about so many different types of attacks here: prompt injection, data poisoning, and the list goes on. I'm curious: you're a researcher out in the field right now, always looking at what's taking place. Are all of those actually operational right now as live attacks, or are they coming down the line? How should security teams think about these emerging threats that are powered by AI?
Old Security Rules That Still Win
SPEAKER_02So, first of all, those attacks are live right now. They can be done. Now, how often are they being done? That depends on the attack and the data that's being sought after by the threat actors. But they're available. And things like the prompt injection attacks you mentioned, I think those are never going to go away, because that's just an element of how LLMs work. We're even seeing new attacks, like embedding code in emojis and having an LLM process that. And now that operating systems have AI agents built into them, think about all the problems we're having with things like OpenClaw and all the automated and agentic AI attacks. It's definitely a problem, there's no doubt about that. Now, how can you protect against that? Sometimes it's the old boring stuff that still works: making sure you have good segmentation, good identity management systems, and making sure you understand what your data is and classify it correctly.
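The encoding trick Aamir mentions, smuggling instructions inside emoji or invisible characters, can be screened for on the defensive side. A minimal sketch (the character list is an illustrative subset, not an exhaustive filter) that flags prompts carrying zero-width or Unicode "tag" characters often used to hide payloads:

```python
import unicodedata

# Code points commonly abused to hide text inside prompts.
# Illustrative subset only; a production filter needs a broader list.
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner (note: also used in legitimate emoji sequences)
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space / BOM
}

def hidden_payload_chars(prompt: str) -> list:
    """Return names of invisible or tag characters found in a prompt."""
    hits = []
    for ch in prompt:
        # The Unicode 'tag' block (U+E0000..U+E007F) can encode whole
        # hidden ASCII strings, e.g. appended after an emoji.
        if ch in SUSPICIOUS or 0xE0000 <= ord(ch) <= 0xE007F:
            hits.append(unicodedata.name(ch, f"U+{ord(ch):04X}"))
    return hits

def screen_prompt(prompt: str) -> bool:
    """True if the prompt looks clean; False if it should be held for review."""
    return not hidden_payload_chars(prompt)
```

A held prompt would then go to the same human-review path as any other suspicious input, rather than straight into the model.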
SPEAKER_01Yeah, Dave, are there emerging blind spots here too? Because these new attacks are maybe just dressed-up old system attacks. What are the blind spots within security teams that we need to be focusing on?
SPEAKER_00It's not just a blind spot, in my opinion. As AI continues to grow, we still have to keep those skills. You still have to have the security skills to see what you're seeing, where you have to go, how you're designing, and what you're going to do if those attacks come through. I don't want to see people start relying on AI and just trusting what it's doing, just letting, there's the Terminator reference I was reaching for: Skynet, just letting Skynet go and do its thing. But the funny part is, when you think about CISOs and the way they look at security, some of them might want to see more of that automated response and reporting versus actual human beings getting in there and asking: is this truly what we want to do? Is there going to be an undo button when it comes to AI security? That's the thing I want to see start to be discussed.
SPEAKER_02I think we will have an opportunity to add more automation, there's no doubt about that. But it's about understanding what that automation does. For example, with our IR teams right now, we used to do a lot of investigation to understand each component, each IOC we were seeing. Now we can have an AI agent run against all those IOCs, assess the possible outcomes, good and bad, and then automatically create a playbook, maybe in SOAR, that could possibly contain that attack. That's a good opportunity. I think another difference is that, looking at all our attacks in the past, about 10% of the CVEs we saw each year were the ones being exploited maybe 90% of the time. Now attackers can run an AI agent against all the CVEs, work in a much bigger space, and attack far more vulnerabilities than they ever have, or ever needed to, in the past.
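The IOC-triage workflow Aamir describes reduces, at its core, to correlation: group alerts that share an indicator, then draft one containment step per cluster. A toy sketch (the field names `ioc` and `host` and the playbook wording are assumptions for illustration, not any SOAR product's schema):

```python
from collections import defaultdict

def correlate_iocs(alerts: list) -> dict:
    """Group alerts by shared indicator of compromise (IOC).

    Each alert is expected to carry an 'ioc' field (hash, domain, or IP);
    alerts sharing an IOC are treated as one candidate incident.
    """
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[alert["ioc"]].append(alert)
    return dict(clusters)

def draft_playbook(clusters: dict) -> list:
    """Emit one containment step per correlated cluster (placeholder logic)."""
    steps = []
    for ioc, related in clusters.items():
        hosts = sorted({a["host"] for a in related})
        steps.append(f"isolate {', '.join(hosts)}; block indicator {ioc}")
    return steps
```

A real pipeline would correlate on more than one key (hash, domain, process lineage) and feed the drafted steps into the human-review queue rather than executing them directly.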
AI-Driven Social Engineering Gets Personal
SPEAKER_01Yeah, this is all expanding so rapidly that the risk profile becomes seemingly almost infinite. I'm curious, Aamir, in your experience talking with executive teams, or even boards for that matter, how do you express this risk to them in real business terms that can yield action afterward?
SPEAKER_02Well, I'll tell you what: for the first time in a very, very long time, I'm actually more optimistic and hopeful. Every year when we've looked at our threats and the trends, it gets bad. That's natural; it always gets bad. There are more attacks, more ransomware, more vulnerabilities coming out. But I think for the first time we actually have a fighting chance. And it may be for a limited time, because of the investment needed to do AI correctly. The good guys are actually spending a lot more money building LLMs and these systems and getting a little bit ahead of the game. Now, we know that's not going to last, and we already see pockets of that with jailbroken LLMs like ChaosGPT and FraudGPT and some of these hacker LLMs that are out there. So that advantage isn't going to last forever, but we may be able to get ahead, and as we said before, once you're a little bit ahead, that momentum can carry you forward. And that's really my message to boards and CISOs: really understand what AI is doing. There's so much buzz around AI right now, so much buzz around machine learning, but they might not really understand where it's being used, how it's being used, and the advantages and disadvantages it gives them. Educating them is my first step. My second step is to make sure they understand there are solutions. Sometimes that requires investment, sometimes education, and sometimes just a little bit of commitment to start mitigating some of those risks.
The Identity Crisis: Non-Human Users
SPEAKER_00The funny part is, what you were just saying means there's no set-it-and-forget-it anymore. You can't just set up a firewall and put on some signatures, some layer 3 rules, some layer 7 inspection. It has to constantly evolve. So we talk about Fortinet and FortiGuard and all the updates, the trillions of attacks it sees and pushes down to all the firewalls and everything it has to talk to. That's what has to happen. And that's what these CISOs and everybody have to understand: they can't just put a point product somewhere. It has to spread across the entire infrastructure to catch these attacks and CVEs, like you're saying. They change daily, they're always out there, and being attacked across all of them at the same time is potentially a very big risk across your whole infrastructure landscape.
SPEAKER_01Yeah, Aamir, I like that you have that positivity, and that in some cases, or in many cases, you think security teams are ahead of where attackers using AI might be. That's a little counter to a lot of what we hear from the market. So pull on that thread a little bit. Do you think security teams are already positioned to be ahead of the game, or is there a path to get there? And if it's the latter, what does that path look like?
Agent Sprawl and Hidden Exposure
SPEAKER_02I think they have an opportunity to get a little bit ahead of the game. But the one thing I do have to bring up, the one vulnerability we will always fall for, which attackers definitely take advantage of, is social engineering. And they do use AI to help with social engineering. Instead of just sending the email, you can scan LinkedIn for an entire organization, look at common threads across every employee, and then start crafting very custom phishing or social engineering attacks. The other day I saw a very interesting attack: someone made a deepfake voice call to a secretary, trying to do a business email compromise. And while they were doing that, the attacker said, hey, listen, let's get on WhatsApp or whatever and do a video call. So a deepfake voice call became a deepfake video call, reinforcing the impression of, hey, I'm really real, when they weren't real at all. And people were falling for those attacks. So there is definitely a lot of danger; there are a lot of ways attackers are still going to get into systems by utilizing AI. But like I said, with the speed of attack with AI matched by the speed of defense with AI, we have more of an opportunity than we've ever had before. It does still require that passion, that interest, and that education on understanding how you can make AI work for you.
SPEAKER_01Yeah, and it certainly gets back to some of the fundamentals that both of you have already mentioned. In that sense, Dave, are we overthinking AI security, or I guess I should say securing against AI? Is it really just: stay true to your fundamentals? Yes, there will be cases where you have to go get a new tool or a new capability, but those fundamentals are really going to keep you safe for the foreseeable future.
SPEAKER_00Yeah, for the foreseeable future, just like you said. Your standard security practices today, from identity to firewalls to everything else you can think of in security, all still have to be there. You can't drop the toolbox and say, okay, we don't need this anymore, let's just stick this one product in there. It will never work that way, because the attack surface is way too big. If someone comes after my identity through social engineering or phishing, AI might detect that. But if there's a human at the keyboard opening that door, you've got to find ways of stopping that too. So don't throw all the tools away. It's definitely about thinking ahead, but not letting go of the fundamentals, as you said.
SPEAKER_01Yeah, Aamir, let's talk about tools here too, because tool sprawl is certainly an issue that a lot of security teams have top of mind right now. How would you advise a CISO or another security leader who's thinking: for me to get ahead of this game and secure my organization, I might have to get the latest tool or the latest feature? Is there a real risk of tool sprawl, or is that something that can be handled?
SPEAKER_02I think there is a very real risk of tool sprawl. My first advice to CISOs is always: get tools that your team feels comfortable with and will use. Don't just go for whatever you hear about. Think about what tools your teams really use, and what you can bring back for them to open their eyes to some of the risks they hadn't thought about. Dave, you mentioned identity. I'm actually a big believer that identity and access management, although normally not a sexy topic in cybersecurity, is going to become a really big deal, because an identity is not going to be just a person anymore. It's going to be an AI agent, a session, anything that's collecting information, maybe an automation workflow as well. We're going to start rethinking how we use our tools and our foundations in the new world of AI. So I would say: figure out what we really need to do and how to use the tools that probably already exist first. Or if we do have to get new tools, pick the ones your team feels comfortable using. That may not be the same tool you feel comfortable using or promoting as a CISO, but you still have to work with the SOC to see how you can make their lives easier.
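Treating an agent, session, or workflow as an identity in its own right usually comes down to scoped, expiring credentials: an explicit allow-list of actions, a short lifetime, and default-deny for everything else. A minimal sketch of that idea (the names are hypothetical, not a reference to any IAM product):

```python
import time
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    """A non-human identity with an explicit scope and expiry."""
    name: str
    allowed_actions: frozenset  # least privilege: deny anything not listed
    expires_at: float           # epoch seconds; short-lived by design

def authorize(agent: AgentIdentity, action: str, now: float = None) -> bool:
    """Default-deny check: action must be in scope AND the credential unexpired."""
    now = time.time() if now is None else now
    return now < agent.expires_at and action in agent.allowed_actions
```

The point of the design is what's missing: there is no wildcard scope and no long-lived credential, so an agent that's compromised or misbehaving can only do what it was explicitly granted, and only briefly.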
AI Inside the SOC: What Actually Works
SPEAKER_00Yeah, you take me back to when I was the WAF guy, years ago, when you had to secure all your externally facing applications with a layer 7 firewall. And there were always those customers who bought these firewalls and basically let them sit there. They were really good monitors, good logging devices, but the customers never put a security policy on them, never did anything, because they didn't understand what they were doing. And if they did do it, they blew up the world. And when the money stops flowing in, guess what? Security always has to bend. So you always had that back and forth with people to figure out exactly how to do it. Your point about knowing the tool, and getting something your teams can actually work with, makes that tool much more important and much more powerful, because buying the shiniest widget in security today might be the worst decision you make.
SPEAKER_01Yeah. I'm glad you mentioned agents as well, Aamir. Tell me how that's changing the game here. It definitely feels like an inflection point for a lot of people in how they get work done. How much of an inflection point is it from a security standpoint?
SPEAKER_02It's a big, big inflection point. Just last week, I believe, we were scanning the internet for OpenClaw, and we saw literally hundreds of OpenClaw systems with their admin ports open. Looking at them a little deeper, it looked like they were already being used as command-and-control servers; additional software had been put on these systems. We're in a weird time where everyone's getting excited about agents. They want to experiment with agents, see how they can make money, increase their productivity, be much more efficient. But they're not really thinking about the security, they're not thinking about the access they're giving to the agents, and they're not realizing everything the agent will have access to and can do on their behalf.
SPEAKER_01Yeah, Dave, build on that. Aamir is talking about how teams and organizations aren't necessarily thinking security first when it comes to the potential explosion of agents. What are the considerations we would advise?
Will AI Replace Analysts?
SPEAKER_00It goes back to that whole cloud thing I was talking about earlier, where everybody just wanted to get stuff in there, even internally here at WWT. We are using agents to see all the different departments, all the different things we want to see, and combine that into one easy place to go. I always ask the question, and I'd ask it of anybody we talk to out in the field: you're putting all this data in there, but what is your vision? What is the goal? How is it supposed to look? Because everybody seems to be doing something different. Someone's using this tool, someone's using that tool. It seems like everybody's going out and trying to find the next best thing, but there's no guidance, no governance going on across the industry right now. And I think that's something that's very insecure and very dangerous, especially internally for customers. Not just because of someone breaching your LLM or something like that; it's data integrity, and ensuring you're not giving out really important things that people should not be seeing, just for the sake of making it easy to get.
SPEAKER_01Well, Aamir, let's dive into a little more of the positive stuff. To this point, we've been talking mostly about how to defend against AI and what emerging threats are coming up. On the flip side, how is AI accelerating defense postures? And where are you finding the most meaningful successes within a SOC, or within security teams, that are helping improve an organization's security posture?
What Security Teams Must Unlearn
SPEAKER_02So I'll answer that question with a story that I saw in real life, actually, at a SOC. I was at a SOC, and they were using a couple of AI tools, including our SOAR product, and they started seeing some just weird stuff: extra traffic, broadcast traffic, different things that wouldn't necessarily alert you to something dangerous. But they started looking at it, and they started asking the LLM within the box, hey, where is this coming from? And it started helping them investigate. Through a couple of prompts, well, when I say a couple, probably about 10 prompts, they were able to figure out what happened: someone had brought in a USB stick, disabled the EDR on their local desktop, they had admin privileges, and malware ran and the machine got infected. Luckily, the malware hadn't actually done anything bad yet. It hadn't exfiltrated any data or anything else, but it had started replicating, trying to spread to other places on the network, and that was causing an unusual amount of traffic. They were able to catch that, and that was really, really cool. It started making me think, okay, this is where some of the positivity comes in: our teams can now look at things they probably would have ignored. I've definitely seen cases in the past where teams completely ignored that, and we didn't find out about it until we were doing an IR engagement, looking at a breach and working out where patient zero occurred, sometimes months later. So that's one of the positive ways.
Now we can have this junior analyst, or multiple junior analysts, doing some of the grunt work, some of the dirty work: looking at packets, tracing where they're coming from, the sources, things that maybe our senior analysts just wouldn't have time for, or that are just too much data for them to handle in an effective manner.
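The "weird extra broadcast traffic" in that story is exactly the kind of signal a simple per-host baseline can surface before anyone thinks to look. A toy sketch (the z-score threshold is illustrative, not a recommended tuning) that flags an interval whose traffic volume deviates sharply from that host's own history:

```python
import statistics

def flag_anomaly(history: list, current: float, z_threshold: float = 3.0) -> bool:
    """Return True if the current traffic volume is an outlier vs. this host's baseline.

    history: past per-interval byte counts for one host.
    current: the latest interval's byte count.
    """
    if len(history) < 2:
        return False  # not enough baseline to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean  # perfectly flat baseline: any change is notable
    return abs(current - mean) / stdev > z_threshold
```

An alert from this check is a prompt for investigation (the "hey, where is this coming from?" step), not an automatic containment action.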
SPEAKER_01Yeah, it's equal parts funny and equal parts scary that people are still bringing in USB sticks and plugging them in, I guess. Dave, it sounded like you were about to add something.
SPEAKER_00I was just going to say, when it comes to the security side of things, as was discussed, there's all that ability to see all the data. But I also look at it this way: with agents, with being able to put all our information in one place, some of the things we're working on to get information faster mean you're not making a call to your Linux guy, just as an example. You could just ask your AI, hey, what is this? And it gives it to you quickly, instead of the old call tree: we've got to get the Cisco guy, we've got to get this guy, we've got to get that guy. Maybe that's actually where we're headed, and all that data, from a security perspective and a knowledge perspective, can be a real positive that helps reduce the time from incident to resolution.
SPEAKER_01Yeah, Aamir, once we start talking about the AI-for-good aspect here, you're also starting to bring in change. This is going to change how SOCs operate. I'm curious, and it's a bit of a crystal ball question, but how do you think the role of the SOC analyst is going to change over the immediate future? Are skills going to go away? Are skills going to be added? What do you think the future holds as it relates to talent management?
Guardrails That Don’t Slow the Business
SPEAKER_02Well, I think anyone working in cyber for a long time already knows that skills are always changing and always evolving. I remember when I first started off years ago in networking: all you needed was a console cable and you were good to go. Then you had to become an expert at scripting, at Python, at data centers, at Docker, at containers. What happened on the networking side is exactly what's happening on the cybersecurity side. It's always going to be evolving and changing. For someone who starts today, it's going to be very natural to be in this field and understand how to use those AI tools. AI is not going to put a cybersecurity professional out of work, but a cybersecurity professional who does not know how to use AI effectively may be out of work. So we want people who feel very comfortable with the tools and know how to get the most out of them. That change is always occurring, always happening. But it does give the SOC analyst, specifically, a more interesting job. It lets them actually do the analyst part of their job a lot more effectively.
SPEAKER_00: I agree with that too. I don't see the skill set going away. Like we talked about before, you're still going to get a lot of data and a lot of information coming your way. You still have to know how to interpret that data and have a go-forward plan. Again, with that Skynet reference, you can't just let it do its own thing. You have to understand how things work in your environment, how systems communicate with each other, and what's going to be affected. I do think it will make analysts' jobs easier going forward, but the skill sets have to be there. You have to understand what you're turning off or turning on.
SPEAKER_01: I think we all have that quote-unquote easier future ahead of us as we bring AI into our processes. But one of the keys to actually getting there is that you need to unlearn some of the processes you currently have in place so you can rebuild them in a more AI-native fashion. Aamir, I'm curious: what types of processes or systems do we have to unlearn, from a security perspective, so we can start to understand where AI can play?
From Signatures to Behavior-Based Defense
SPEAKER_02: You know, I actually ask myself that question every day, because every day we pick up habits we do naturally and get accustomed to, but we don't realize it, or we're unwilling to learn a new process. What we need to unlearn is going to depend on the tools we bring in. For example, when we look at endpoint agents, we need to start unlearning the traditional kill chain; that may no longer be effective. We may need to look instead at an AI-agent kill chain: what are the steps it goes through before an attack gets exploited? So we need to learn about the things we're bringing in, and that may require an update, maybe not necessarily a complete unlearning, but an update of how we apply our past knowledge. And of course, be flexible. That's the hard part for most people: being flexible and understanding that things will change, and that's okay.
SPEAKER_01: Yeah. Flexibility, I'm glad you mentioned that. It made me think of something from a little earlier in the conversation, and Dave, I'm going to pitch this question to you. We talked about tool sprawl, and we talked about how things change all the time. In order to be flexible, how do we build security controls, or AI-enabled security controls, that aren't going to go obsolete within the next couple of days, much less the next couple of weeks?
SPEAKER_00: I don't know if there's a good answer to that question, because this landscape changes so fast, and the attacks and the ways people are being exposed are rapidly evolving. I don't see this being something you buy on a five-year enterprise agreement. It's going to have to evolve, change, update, and move forward. My biggest thing, like we were talking about earlier, is getting into a technology you're comfortable with. If you're talking about Fortinet, for example, with their single OS, if you learn the OS, you've learned all their products and how they function. That's more of that platform play. When you have more vendors and more endpoints, you have to have a broader understanding of all that technology. As you bring AI into it, it might lighten that burden, but these tools are going to have to update, change, and evolve more rapidly than we've seen in the industry. Traditional firewalls and layer-7 protections are all going to have to update. I'm not saying you have to change your OEM every couple of years, but you definitely have to keep up with what you're doing and the tool sets you're putting in.
SPEAKER_01: Yeah, Aamir, I want you to answer that question as well, but through the vendor lens, which of course here is Fortinet. Knowing that tools can get outdated, how does a vendor look at this to make sure you always have something that is providing value back to the end user?
Can Defenders Reclaim the Edge?
SPEAKER_02: Yeah, at Fortinet and at FortiGuard Labs, we've been looking at AI for a very long time, before it was a buzzword, because we understood long ago that signatures would become outdated and that you need to start looking at behavior. We evolved from static signatures a long time ago to what we call smart signatures, with decision trees, and then to behavioral detection. We started implementing a lot of machine learning, then went into what we call predictive or classical AI, which is really good at stopping attacks. And then, of course, we've implemented generative AI and now agentic AI to help the SOC with education and with the automation piece. We've always been looking ahead. Probably ten years ago, most of my team came from heavy pen-testing and incident-response backgrounds. We still have those skills, but a lot of the team now comes from a heavy data-science background: understanding how data works, how data changes, and how to look at that data. It's funny, and I assume every vendor does this: every day we collect tons of IOCs, PCAPs, exploit files, and malware samples, and we scan them against all our products to see if we detect any threats. We also scan them against the open-source products that are out there, and against all our competitors' products, to see what they detect. But then we take everything that comes back one hundred percent clean, where no one detected any type of threat, and start putting it through our AI and ML models and strategies to see what that code actually contains.
And we've actually found a number of zero-days that way, because attackers will often test out code that isn't, how do I say this, good malware. It's poorly written malware that doesn't work yet, but they're testing it to set up a future attack. Because we have those ML baselines, we can stop those zero-days pretty fast. I think that's one of the things vendors and large companies will continue to do: invest in that ML, invest in that baseline of what looks normal. We've been doing that for ten-plus years: establish a baseline of what normal looks like, then look for the anomalies against it. Machine learning helps us get there, and then we can create AI agents or AI strategies to enforce that machine-learning baseline.
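The baseline-plus-anomaly idea Aamir describes can be sketched in a few lines: compute a statistical baseline of "normal" over known-clean samples, then flag anything that deviates too far. This is only a minimal illustration of the general technique, not Fortinet's actual pipeline; the file features (entropy, import count) and the z-score threshold are assumptions for the example.

```python
import statistics

def build_baseline(samples):
    """Compute (mean, stdev) per numeric feature across known-clean samples."""
    features = samples[0].keys()
    return {
        f: (statistics.mean(s[f] for s in samples),
            statistics.stdev(s[f] for s in samples))
        for f in features
    }

def anomaly_score(baseline, sample):
    """Largest absolute z-score across features: distance from 'normal'."""
    return max(
        abs(sample[f] - mean) / std if std else 0.0
        for f, (mean, std) in baseline.items()
    )

def is_anomalous(baseline, sample, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from baseline."""
    return anomaly_score(baseline, sample) > threshold

# Hypothetical features: byte entropy and number of imported functions.
clean = [{"entropy": 6.1, "imports": 42}, {"entropy": 6.3, "imports": 40},
         {"entropy": 6.0, "imports": 44}, {"entropy": 6.2, "imports": 41}]
baseline = build_baseline(clean)
suspect = {"entropy": 7.9, "imports": 3}   # e.g. a packed binary with few imports
print(is_anomalous(baseline, suspect))     # far outside the clean baseline
```

Production systems use far richer models than per-feature z-scores, but the shape is the same: baseline the normal, score the deviation, act on the outliers.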
SPEAKER_00: Yeah, I've been seeing that big change from the signature base, like he's saying, to behavioral detection. That was a big move. Implementing it was a bit more difficult: the learning phases, figuring out your network when things change, and the false positives that came through at the same time.
Training, Testing, and the Road Ahead
SPEAKER_01: Yeah. We're at the bottom of the episode here. We're recording this in mid-February, and I think we'll be dropping this episode in early March, right around the same time as Fortinet Accelerate. Aamir, to the extent that you can say, what should we expect to see or hear at the conference? How much is what gets covered out there in Las Vegas going to shift the conversation as it relates to securing with, and against, AI?
SPEAKER_02: Well, if you come to my session at Accelerate, I'm going to give you a hundred-percent way of winning at the blackjack table. No losing. I'm just joking; if only we had an AI agent for that. Seriously, what you'll see is, first of all, where we're thinking about security, where we're looking at attackers and how they're evolving, and the new solutions and strategies coming from us. We're evolving AI at a speed that I believe is faster than we've ever done before, and you'll start seeing those solutions: not only how to protect AI itself, but how to use AI for defense and how to defend against AI-driven attacks. We're looking at all three of those pillars. On top of that, we're looking at how cybercrime itself is evolving and what we can do as a community to help stop it. One of the things our group is doing is partnering with Crime Stoppers on a reverse bounty, a crime bounty: go after the cybercriminals, almost like the FBI's most-wanted list, and give rewards if we capture their identities. Of course, there's a lot to be said about that; it can go very wrong if you don't handle attribution correctly. But we're looking at how we disrupt cybercrime, how we take that knowledge and put it into AI systems, how we track all the new attacks that AI gives attackers an opportunity to exploit and protect against them, and lastly, how we use AI to make our lives easier. All that, plus more, is coming at Accelerate.
SPEAKER_00: So, from a Worldwide perspective, we're a platinum sponsor of Accelerate this year, and we will have a booth. One thing we'll be doing there is previewing, to a point, our capture-the-flag game for Fortinet that's going into the ATC here in the summer of 2026, using a lot of the technologies they offer. I think we have four in our phase-one game. We'll be talking about it a lot, and there will be a chance to register for updates on the capture-the-flag game, which is called Outbreak. More of that will be discussed at the booth.
SPEAKER_01: Absolutely. And for those who may be unfamiliar, our Cyber Range is an interactive, gamified experience on WWT.com that lets you get real hands-on experience with what you might expect an AI attack, or any other attack for that matter, to look like. Fantastic stuff. Aamir, Dave, thank you so much for joining us on the podcast today. We hope to see you out in the desert at the beginning of March. It will definitely be better weather than we have here in St. Louis, where it's probably going to be cold, rainy, and windy. Thank you to the two of you again for joining. We'll see you soon. Thank you for having us. Okay, thanks to Aamir and Dave for joining today. It's clear identity is no longer just a person; it's a growing ecosystem of AI-powered tools. So before you scale your next AI initiative, ask three questions: What does it have access to? What is it allowed to do? And how fast can we stop it if it goes wrong? This episode of the AI Proving Ground Podcast was co-produced by Naz Baker and Cara Kuhn. My name is Brian Felt. Thanks for listening. See you next time.
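Those three closing questions map directly onto the scoped-agent guardrails discussed in the episode: an explicit allowlist answers "what is it allowed to do?", and a kill switch answers "how fast can we stop it?". Here is a minimal, framework-agnostic sketch; the agent name and action names are hypothetical, purely for illustration.

```python
class ScopedAgent:
    """Wraps an automated agent with least-privilege guardrails:
    an explicit action allowlist, an instant kill switch, and an audit log."""

    def __init__(self, name, allowed_actions):
        self.name = name
        self.allowed = set(allowed_actions)   # what is it allowed to do?
        self.killed = False                   # how fast can we stop it?
        self.audit_log = []                   # robust logging for human review

    def kill(self):
        """Halt the agent immediately; every subsequent action is refused."""
        self.killed = True

    def act(self, action, target):
        """Execute an action only if the agent is alive and in scope."""
        if self.killed:
            self.audit_log.append(("refused:killed", action, target))
            return False
        if action not in self.allowed:
            self.audit_log.append(("refused:scope", action, target))
            return False
        self.audit_log.append(("executed", action, target))
        return True

# Hypothetical SOC triage agent: may quarantine hosts and open tickets,
# but anything destructive is simply not in its scope.
agent = ScopedAgent("triage-bot", {"quarantine_host", "open_ticket"})
agent.act("quarantine_host", "srv-042")   # allowed: executed and logged
agent.act("delete_volume", "srv-042")     # refused: out of scope
agent.kill()
agent.act("open_ticket", "srv-042")       # refused: kill switch thrown
```

Real deployments would enforce this at the identity and access layer rather than in application code, but the pattern is the same: deny by default, log everything, and keep the stop button one call away.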