The Macro AI Podcast

Securing AI Agents

The AI Guides - Gary Sloper & Scott Bryan Season 1 Episode 51

In this episode of The Macro AI Podcast, Gary and Scott dig into one of the biggest challenges emerging in enterprise AI: securing autonomous agents. As businesses deploy systems that can reason and act independently, a new class of risks emerges — from prompt injection and memory poisoning to identity confusion and tool abuse. The hosts explain why the old cybersecurity playbook no longer works, what “intent security” really means, and how identity-bound autonomy can make AI systems trustworthy at scale. 

Send a Text to the AI Guides on the show!


About your AI Guides

Gary Sloper

https://www.linkedin.com/in/gsloper/


Scott Bryan

https://www.linkedin.com/in/scottjbryan/

Macro AI Website:

https://www.macroaipodcast.com/

Macro AI LinkedIn Page:

https://www.linkedin.com/company/macro-ai-podcast/


Gary's Free AI Readiness Assessment:

https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness


Scott's Content & Blog

https://www.macronomics.ai/blog





00:00
Welcome to the Macro AI Podcast,  where your expert guides Gary Sloper and Scott Bryan navigate the ever-evolving world of artificial intelligence.  Step into the future with us  as we uncover how AI is revolutionizing the global business landscape  from nimble startups to Fortune 500 giants.  Whether you're a seasoned executive,  an ambitious entrepreneur,

00:27
or simply eager to harness AI's potential,  we've got you covered.  Expect actionable insights,  conversations with industry trailblazers  and service providers,  and proven strategies to keep you ahead in a world being shaped rapidly by innovation.  Gary and Scott are here to decode the complexities of AI  and to bring forward ideas that can transform cutting-edge technology  into real-world business success.

00:57
So join us, let's explore, learn and lead together. Welcome back to the Macro AI Podcast, where we break down the biggest transformations in artificial intelligence and what they mean for business leaders. I'm Gary. And I'm Scott. Thanks for tuning in. So 2025 was the year everyone discovered AI agents. You read across the media, you see it everywhere. People are posting it on social media and it's systems that don't just chat, but they act.

01:27
They plan, reason and execute tasks on their own. Yeah, exactly. You're starting to hear about them on a podcast, reading about them. They're really showing up pretty much everywhere and they're starting to  manage emails,  analyze data. ah They're even  running parts of enterprise workflows already. So they're, these are actually digital operators, not just digital assistants. Yeah, that's a good point. They are digital operators and not just an assistant that we're

01:57
may be accustomed to. And I think that shift raises one critical question. If AI agents can act on our behalf, how do we secure them? Yeah, that's the focus of today's episode, securing autonomous AI agents.  We'll explore why agent security is different, what risks business leaders need to understand, and how identity and governance will absolutely be critical to trustworthy AI.

02:25
So what we should probably do is start from the beginning, you what are AI agents and why 2026 will be their breakout year  here in the coming months. So if we just start simple, why don't we take a shot at what makes an AI agent different from a chat bot or a model embedded in a product, Scott? And  I think we should just break it down there for the listeners. Yeah, sure. So an AI agent can actually reason and take action instead of just

02:54
responding to prompts, it connects to data sources, it uses APIs,  it remembers context, and it makes plans toward a goal. uh And so that autonomy is really what separates agents from just static models. Yeah. So essentially they're very similar to employees in your digital ecosystem. They can think, they can act, and they can make things happen across your business. Yeah,  exactly. uh

03:23
As we talked about in some episodes earlier in the year, 2025 was where the world was really introduced to AI agents. And in 2026, we're going to see a major jump in adoption. So every major enterprise software platform is building agent orchestration into their stack. So, you you've heard of Salesforce and, and, um, service now and, and other, another large software platform is building AI agents into the stack. So from, you know, CRM tools to complete.

03:52
IT automation systems building AI agents. Yeah. So, I mean, that ultimately uh means the security implications  scale right along with it. So the more capable these agents become, there could be, you know, more  dangerous aspects uh when something goes wrong. Yeah, certainly. And  once you connect an AI agent to real systems, know, uh payments, procurement or customer data,

04:21
you've moved from just experimentation into production. And that's when security really has to mature very quickly. Yeah, that's actually a good segue, I think, as we dive deeper into security and why security for AI agents is different. And I think we should probably talk about why the traditional cybersecurity model doesn't really fit here for AI agents, especially if you think about how the data is going in and out compared to traditional systems.

04:50
Yeah. I mean, traditional security protects data and it protects networks, but with AI agents,  the real thing to protect is intent. So these systems decide what actions to take. And if someone manipulates that intent, they can steer the agent into doing some serious harm. Yeah. That's a good point, Scott. I mean, it's really an important shift. We're not just guarding the perimeter anymore. And that's what we've been kind of ingrained on for  many years.

05:19
You know, you and I have both been in the thick of that with global networking. ah But here we're really guarding the decision-making and the decision-making process and the inputs and outputs. Yeah. I think that's a great way to put it. It's really guarding the decision-making. So think about an AI that's authorized to make purchases or update customer records. If an attacker can change its reasoning,  they don't have to break into your systems. They can make your own AI.

05:49
actually do it for them. Yeah. So, I mean, we've automated both productivity and risk in this new world. Yeah. Yep. Exactly. And that means uh organizations need to start thinking of these systems as digital actors. entities that require identity, they require verification and governance, just like human employees. Yeah. It's almost like a pre-brack background check as part of the process. Yeah. So, you know, if we're to understand the threat

06:18
landscape, maybe we dig into what these threats actually look like, because they're not hypothetical anymore. They're starting to surface in real deployments and you can read  about some of these use cases online. Yeah. think the  first one we're seeing everywhere is prompt injection. And what that is, it's when an attacker hides  malicious instructions inside a task or inside a data set and the agent reads it and unknowingly

06:47
execute something that it just shouldn't execute.  Kind of like convincing your digital assistant to reveal company  secrets, you know, not by hacking it, but by asking the wrong question in the right way. Yeah. Yeah. And then there's a memory poisoning. So that's where an attacker plants false or misleading information into what agent  the agent remembers or the agent actually goes out and retrieves. So over time, those

07:16
corrupted memories,  ultimately just cascadingly guide bad decisions. Yeah.  It's causing a trigger effect, you know, based on the corrupted memories, you know, to your point. ah I'd also add tool abuse to this mix. Once agents have credentials to access APIs or databases or even ERP systems, you know, the attackers don't need to breach your perimeter. Like we talked about, they just need to trick the agent into misusing

07:46
the access it already has. kind of alluding to your point earlier, making sure that it's hardened at the agent level. Yeah. And then there's the issue of supply chain compromise. So obviously supply chains are getting more and more complex and agents depend on frameworks, plugins and shared libraries. So if one of those gets compromised, that also can cascade across every connected agent. Yeah, that's a good one.

08:15
I'd probably add, um,  you know, uh, a final one here is, which would be identity confusion. So this is where one agent impersonates another or hijacks its credentials. That's when a malicious actor can act as your finance director or operations agent, for example, and execute real actions based on those job,  um, duties within those individuals. So you could be, you know,

08:40
doing a deposit or withdrawal or something like that in the finance organization acting on the finances behalf.  it's  that identity confusion could  cause havoc. Yeah, that's huge. Identity management for AI agents is absolutely critical. These aren't just technical exploits, they're really behavioral manipulations. So the attack surface has shifted from,  like we said, what we're familiar with,

09:07
servers and network to actually the reasoning itself in the models. Yeah. I mean, if think about it from a security standpoint within your organization, your CISO office, it's a totally different mindset. So for decades, cybersecurity has been about protecting data. Now it's about protecting judgment, which is much different. really, know, retooling your internal organization, not just with the tools to protect.

09:34
in this new world, but also just culturally thinking about this differently. Yeah. I think that's really a perfect way to put it.  The new perimeter that you're protecting isn't  the network. It's really  the intent layer in addition to the network. Yeah. So if you're rethinking security through identity and trust, maybe we kind of dive into the solution. I know you and I had talked about this a little bit before the episode  today.

10:01
So, you know, in your opinion, how do you create guardrails that protect that intent layer? Yeah, I think  right back to identity that we were just talking about, that's really where it starts.  Every AI agent should have a verifiable digital identity. So a cryptographic fingerprint that proves who it is and exactly what it's authorized to do. Yeah, it's almost like giving every non-human actor a badge or, and, you know, key card  for the organization.

10:30
Yeah, perfect. It's so once an agent has a unique identity, you can apply capability based access control. So meaning that it can only perform specific pre-approved actions. that's it. Yeah, that's a good point. in that instance, so, know, your analytics agent can read reports, but it can't initiate payments or modify credentials or add or drop users. You know, just having that one specific workflow requirement.

11:00
of reading the reports  is something, one way to kind of harden it down. Yeah. Yep. Exactly. And then every action. So every APA call, every database update or message should be signed and it should be auditable. And that gives you non-repudiation. So you can prove who did it,  who did what, who did when  and why they did it. Yeah. I mean, that really moves from blind automation to governed autonomy.

11:29
So trust, but verify, but doing it at machine speed, which is why you have this agent environment in place. Yeah.  And this approach aligns perfectly with  where global governance frameworks are headed. uh Identity, auditability, like we've talked about on a number of shows, and transparency are becoming the cornerstones of  trustworthy AI. And obviously those are kind of fundamentals of machine learning. Yeah. Good points.

11:56
And I think what we normally do in all of our episodes, you know, we talk about the technical aspect and then, you know, if you're a business leader, you know, what did they need to know? So maybe we give some concrete takeaways for executives listening to the episode. So what should leaders be doing today to prepare for this new environment, not just, um, deploying, but also securing the environment. Yeah. I think, uh, it starts with first building.

12:25
AI governance into your risk management frameworks and ensuring that you're continuously addressing each of the areas that we've already talked about. So identify who owns oversight for autonomous systems  and how incidents will be reviewed and audited.  And I think that definitely is uh a cornerstone for that AI readiness assessment upfront because these  components around AI governance will have a requirement in your

12:53
AI posture within the organization. So good, good point there. ah I also think, you know, evaluating your technology stack, ask your vendors the hard questions. How do your AI systems authenticate? How is every action logged?  Another one could be,  can you verify which entity? Is it human or artificial intelligence? Is it initiated?

13:18
a decision, for example. So these are the things as you're evaluating your technology stack with your vendors, definitely want to ask those difficult questions. Yep. Yeah. then just, you know, organizationally, a focus on trust and culture. So treat AI transparency as part of your brand's credibility,  and boards should understand that.  And I think the companies that make their AI actions explainable and verifiable will be the ones that

13:47
ultimately over time, customers trust. Yeah. Yeah. Good point. I mean, it's really the same principle that made cybersecurity a board  room  issue a decade ago. Now it's happening again with AI and requires the same level of rigor, but your board may not be thinking about those things. So as a business leader, that is one area that you could update your board, ah especially as you're looking to deploy an AI strategy within the organization.

14:13
you know, mention that you have to have the rigor from a cybersecurity standpoint around, you know, your autonomous environment. Yeah. I mean, I think  autonomy without 100 % accountability will ultimately lead to  damage and potentially some chaos, uh autonomy with identity and with verification will let you take advantage of AI productivity  and, and really understand what's happening in your environment.

14:40
Yeah, I agree. And it's not just about compliance, right? It's about competitiveness. So secure AI scales faster because it earns trust. Um, so that's another point to just think about as, you're going down this path, um, you know, within the cybersecurity landscape for your artificial intelligence. Yep. So here's where we land. 2025 was the year of discovery for AI agents. 2026 will be the year of responsibility.

15:10
when enterprises learn how to secure them. Yeah, I think that's right on.  And I think the organizations that treat AI agents as uh accountable digital entities, so every one of them,  and they don't treat them as just experiments,  will really start to emerge  as the leaders with AI agents. Well, yeah, and because the future of AI isn't just about intelligence, it's about trust. Yep, and trust is built on security. Yeah, good point.

15:40
Well, that's all the time we have for today. Thank you for joining us on the macro AI podcast. you found this episode valuable, share it with your leadership team, your CISO or your board.  This is the conversation that every organization needs to be having right now.  Yep. So thanks again. We'll see you next time as we continue exploring how AI is transforming business. So please like and subscribe and share our podcast with you network. Thanks.