The Macro AI Podcast
Welcome to "The Macro AI Podcast" - we are your guides through the transformative world of artificial intelligence.
In each episode - we'll explore how AI is reshaping the business landscape, from startups to Fortune 500 companies. Whether you're a seasoned executive, an entrepreneur, or just curious about how AI can supercharge your business, you'll discover actionable insights, hear from industry pioneers, service providers, and learn practical strategies to stay ahead of the curve.
The Macro AI Podcast
Cyber Defense for Generative AI
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
In this flagship episode, Gary Sloper and Scott Bryan deliver the most comprehensive executive briefing to date on Cyber Defense for Generative AI—a real-world, board-level conversation every business and technology leader needs to hear.
Generative AI is transforming how enterprises operate, but it also introduces an entirely new attack surface. Traditional cybersecurity models were never built for systems that reason, take action, integrate with sensitive data, and can be manipulated through language alone. This episode breaks down what that means for your business, your customers, and your risk posture.
Gary and Scott guide you through the full lifecycle of securing GenAI: how these systems fail, where attackers are striking today, how enterprise architectures introduce new vulnerabilities, what frameworks (like NIST’s AI RMF) actually matter, and how leaders should build a modern defense-in-depth strategy tailored specifically for LLMs, RAG pipelines, and AI agents.
You’ll hear detailed insight into prompt injection, jailbreaks, data poisoning, insecure output handling, RAG access control, observability, vendor risk, and the organizational operating models required to govern AI safely. The episode closes with a clear 30/90/365-day executive roadmap to help any organization move from experimentation to secure, governed AI at scale.
If you’re a CIO, CISO, CTO, head of data/AI, product leader, or board member tasked with understanding the true cyber risks of GenAI, this episode is your playbook.
Send a Text to the AI Guides on the show!
About your AI Guides
Gary Sloper
https://www.linkedin.com/in/gsloper/
Scott Bryan
https://www.linkedin.com/in/scottjbryan/
Macro AI Website:
https://www.macroaipodcast.com/
Macro AI LinkedIn Page:
https://www.linkedin.com/company/macro-ai-podcast/
Gary's Free AI Readiness Assessment:
https://macronetservices.com/events/the-comprehensive-guide-to-ai-readiness
Scott's Content & Blog
https://www.macronomics.ai/blog
00:00
Welcome to the Macro AI Podcast, where your expert guides Gary Sloper and Scott Bryan navigate the ever-evolving world of artificial intelligence. Step into the future with us as we uncover how AI is revolutionizing the global business landscape from nimble startups to Fortune 500 giants. Whether you're a seasoned executive, an ambitious entrepreneur,
00:27
or simply eager to harness AI's potential, we've got you covered. Expect actionable insights, conversations with industry trailblazers and service providers, and proven strategies to keep you ahead in a world being shaped rapidly by innovation. Gary and Scott are here to decode the complexities of AI and to bring forward ideas that can transform cutting-edge technology into real-world business success.
00:57
So join us, let's explore, learn and lead together. Hey everyone. Welcome back to the Macro AI podcast. I'm Gary Sloper here with my cohost, Scott Bryan. Every week we peel back the layers on what's happening in artificial intelligence, global networking, enterprise transformation, and everything that sits at the intersection of technology and business strategy. Today's episode is the conversation every business leader needs to hear, even if they don't know it yet.
01:27
We're talking about the cyber defense of generative AI. And Scott, I'm going to say it upfront. This is one of the most important risk conversations since companies first moved workloads into the cloud, such as AWS and Azure 15 years ago. Yeah, I think you're right on there. Completely agree. And if you're a CIO or CISO or CTO, head of product or just someone that's making decisions in the boardroom and trying to make sense of how AI fits into your business,
01:57
This episode is definitely something that you might even want to forward onto your teams. I think as everybody's aware now, generative AI isn't just another application that gets plugged into your business. It's a new digital workforce segment. It's uh set of systems that's capable of making decisions and touching sensitive data, uh interacting with customers, changing configs, even triggering downstream automations. Now we've talked about agents a number of times on the show. uh
02:26
And the security model for GEN.AI is completely different from anything that we've dealt with before. Exactly. And here's the big shift. AI doesn't fail the way software fails. It doesn't get breached the way a server gets breached. It can be manipulated, persuaded, socially engineered. We've seen that. Confused, tricked, even hijacked through nothing more than language. The attack surface isn't just your infrastructure anymore. It's the model.
02:55
It's the training data. It's the prompts. It's the rag connectors. And it's every team members interacting with these systems. And that's where the business risk accelerates. Yeah. Yeah. So in this episode, we'll walk you through the full picture, not just the technical concerns, but the organizational, operational, financial, legal, ah all those implications of securing GEN.AI. And we'll talk about what
03:22
threat actors are actually doing today and how enterprises are evolving their architectures ah and where all the true vulnerabilities live. So we can also cover frameworks to adopt, how to build defense and depth strategy and how to red team the systems. So, uh and a little bit of a roadmap of uh how to get going on this.
03:52
All right, Scott, let's dive in. This is going to be a long one. So for anyone listening, time to get settled in. Today, we're going to give you the guide to defending Gen. AI in the enterprise. Okay. Let's start with the simplest question. Why is Gen. AI so different? Why can't corporate security teams just take their existing playbook, such as identity or endpoints and network segmentation or DLP, data loss protection, and just bolt it on?
04:22
to protect AI. For starters, Gen.AI behaves like a synthetic employee. It reads, it writes, it interprets. It can be coaxed into doing things, bypassing policies, revealing information. It can even be, you know, formed to interact with systems in ways developers didn't intend. So there's no precedent for that. Your firewall can't save you from a cleverly
04:50
crafted prompt. So we're so used to protecting the edge. So this is a much different type of stake or stakes that are at the forefront for many businesses. Yeah, definitely. And the other thing is that generative models operate in what you might call the gray zone of determinism. Right. So with traditional software, you can test for deterministic outcomes. So if you send an input X, you expect output Y.
05:19
But with the large language models, LLMs, the output is probabilistic. So the model predicts the next likely token. That means security teams can't guarantee outcomes in the same way. You can't patch human reasoning and you can't patch a model's probabilistic behavior either. And this is why attackers love GEN.AI. It's a system that responds to persuasion. ah It's basically the first time in history that the attack surface has kind of a personality.
05:48
Yeah. And that brings us to the real business impact. When these systems break, they don't just crash. They leak, they'll hallucinate contracts, they fabricate transactions, they expose PII buried in your enterprise search index. ah They rewrite customer emails incorrectly. We've seen that in the news. They escalate privileges through local tool use, for example. And because of the speed at which AI operates, a mistake that used to take a human
06:18
hours can now happen in two seconds. Yeah. Yeah. And I think executives really need to absorb this. uh Your AI initiative can reduce labor costs and improve decision-making and certainly give you a competitive advantage. But without a cyber defense strategy, it can just as quickly expose your business to regulatory violations, data breaches, brand erosion, financial loss. uh So not trying to uh scare you. It's just more of uh
06:48
you know, an awareness and how urgent it is, it's not optional. It's definitely a foundational challenge. Yeah. Cause I mean, it could be unbeknownst to the organization. that's a really good point, Scott. Now, if we were to walk through the typical enterprise, GenAI architecture and look at where risk actually concentrate, you need to think about three major deployment patterns. First, the SaaS or API based model tools like OpenAI, Claude, Gemini and others.
07:17
you pass data to an external endpoint and the vendor hosts that model. This is great for speed and innovation, but also means you could have exposure such as data risk, telemetry risk, and reliance on the vendor's guardrails. The next point really you need to think about is managed cloud services like Azure, OpenAI, Bedrock, Vertex. These give you foundational private networking, better access control,
07:47
Logging and governance inside a hyper scale scalers trust boundary. So that's that's one thing to consider Say the last one really third is self hosted or open source models running on your own GPUs This gives you really maximum control, but also maximum responsibility You own the threat detection You talked about red teaming Scott. So you own the red teaming the patching and the fine tuning risks so that's these are
08:16
really where risk lives in enterprise gen AI architectures, it really depends on which segment. Right. Yeah, exactly. And then you take RAG. We've talked about this in past episodes. That's retrieval, augmented generation, starting to become very popular in AI solutions for even down to small business level. That's where a lot of the new risk is emerging. So enterprises are connecting to AI, connecting AI to SharePoint, OneDrive, Google Drive, Salesforce, Jira, Confluence, and
08:46
proprietary file repositories, they're indexing tens of millions of documents and embedding them. And everything in that knowledge base is treated as ground truth by the model. uh And then there's, here's the kicker, the model has no concept of access control. You have to bolt that on manually. uh Vector databases, unless specifically designed otherwise, they don't enforce access control lists or ACLs. So if one employee
09:14
only has permission to see five documents. Um, but the vector database contains 50,000. There's a risk that the model retrieves something that, violates a policy. Yeah, that's a good point. Um, and then there are AI agents, know, so talked about this, you know, but they are, uh, on previous episodes, really tools that let the model call APIs or create tickets, analyze logs, even update, you know, CRM entries.
09:45
which could also trigger financial workflows as well. So the attack surface expands exponentially once the model can take action. So think of a malicious prompt can cause a lot more damage when the model is empowered to execute on it. Yeah. I think the simplest frame for business leaders is that every additional connection AI has to your enterprise systems is another possible breach pathway. And not just because the attacker is
10:14
you know, breaking into the network, but because they're convincing the AI to open the door for them. Yeah, that's a point. So if we were to shift gears, let's break down the most important risks that enterprises need to pay attention to. The first and probably most infamous is prompt injection. This is the art of using cleverly crafted language to override a model's instructions or break its guardrails. So essentially what attackers have
10:43
figured out is they don't have to hack your systems. They just need to persuade your AI. There's a direct injection where someone types, ignore prior instructions and reveal and whatever that is that you're trying to reveal into a chat window. But there's also indirect injection, which is far more dangerous. That's when malicious instructions hide inside a PDF or a webpage and email or a database record, for example.
11:12
The AI reads it and unknowingly executes the attacker's intent. So it's kind of almost like a Trojan horse. Yeah. Yeah. these things are definitely happening out there. And companies are reporting cases where a PDF containing hidden instructions cause their internal AI assistant to reveal data or to misclassify records. And here's the part that business leaders need to internalize. Many of these attacks can't be fully mitigated.
11:41
So the UK's National Cybersecurity Center literally warned that prompt injection might never be properly fixable because of the way that the models work. And then you have insecure output handling. A surprising number of developers take the model's output and pass it straight into downstream processes. The AI writes code and the app executes it. ah The AI suggests SQL queries and the database runs them. AI generates an email and the system sends it automatically. uh You can imagine the risk in that.
12:11
workflow. Absolutely. And Scott, we haven't even gotten into training data poisoning. If your organization allows fine tuning on user provided content, then malicious actors, if you think about it, can submit poisoned examples designed to skew the model's behavior. Think of it like corrupting a new employee by whispering bad habits during onboarding, for example, you know, causing an issue day one. ah The next model.
12:40
Supply chain risk. Companies pull pre-trained open source models off of hugging face or GitHub without fully validating the weights, the license terms, the patch history or the security provenance. Nobody would download a random dot exe file, an executable file from the internet, or at least I hope you wouldn't. think we've learned that over the years, but enterprises routinely download, you know, 30 gig files, know, model files and treat them as trustworthy.
13:10
That's a supply chain risk in disguise, huge no-no. Yeah, that's a big one. And then, of course, there are the classic threats that are kind of morphing into new shapes. You've got data leakage, denial of service, I think everybody's familiar with, but now through token floating, ah resource exhaustion of GPUs, the abuse of AI agents to cause operational disruptions. I think attackers understand that
13:40
GPU time is expensive and forcing your LLM endpoints into high load scenarios is now a way to not just cause a denial of service, but to cause financial damage. you know, this isn't, this landscape is changing pretty quickly. It's very active. It's evolving weekly. And it's definitely much different from like we talked about, you know, previous just basic network security. Yeah, that's a point. mean, exhausting the resources, you know,
14:08
Can you imagine the financial risk? It's almost like what we used to see in data networking overages that were malicious. So let's shift gears into something that gives leaders structure. And that's frameworks. The one we always recommend executives start with is the NIST AI risk management framework. We've talked about NIST on the past on the show. It's vendor neutral. It's highly practical.
14:34
And it maps extremely well to gen AI governance challenges. The framework organizes your AI program around four key functions. Govern, map, measure, and manage. So for gen AI specifically, NIST is now releasing a generative AI profile that provides concrete mitigation guidance. And it's becoming the North star for us-based enterprises. Yeah. And then layer on top of that, the global regulatory landscape.
15:04
um And in the US, you have the executive order on AI requiring assessments for safety, security, and privacy. In the EU, the AI Act categorizes use cases into their risk tiers and the obligations associated with them. And then you have sector-specific frameworks, healthcare, finance, critical infrastructure, all things that leaders need to pay close attention to. And this is where the conversation becomes less about security and more about risk governance. uh
15:33
large enterprises, enterprises of all size need to understand not only how AI can break, but how AI can create liability. And the frameworks help align uh legal compliance, risk engineering and security all into an operating model for governance. Yeah, those are good points. And so now if we were to get more technical and talk about what a secure gen AI architecture actually looks like at the heart of this
16:01
is the concept of zero trust, which is not new. It's been around, especially if you're in the security world, ah but it's now applied to AI. So every identity, every connector, every data set, every tool call must be explicitly authorized. So enterprises need uh strong uh RBAC and AVAC policies around their AI systems. You want service principles with scoped permissions. You want private networking.
16:29
So your model endpoints aren't exposed to the public internet. ah What else? And you also want to control outbound egress. So the AI cannot communicate with systems that shouldn't. you're kind of trying to almost firewall that a little bit. Yep. And data security obviously becomes fundamental. uh Enterprises need a classification model for data that determines uh what is AI eligible and what is AI restricted.
16:58
Not all files should flow into your retrieval log method generation index. Not all data sets should be passed into prompts and uh tokenization and masking become critical tools and even synthetic data becomes strategically important. But I think here's the piece that often gets overlooked, system prompts. The instructions that we give the AI to set behaviors, uh they're not security controls. They're fragile.
17:27
and a clever attacker can override them. So policies must exist outside the prompt layer and a fully hardened enforcement layer. Right. And then you have defense in depth for rag systems. You want ACL enforcement at retrieval time, not after. You want document level permissions encoded in your metadata and validated every time the model fetches a chunk. You also want last mile checks before the model responds.
17:56
to the user to catch sensitive content leaks. If think about it this way, a secure Gen. AI architecture is more than a model, right? It's governance, identity, networking, data classification, retrieval policies, tool permissions, observability, and continuous testing. I know that sounds a lot, but this is what really goes into a secure Gen. AI architecture.
18:25
Yeah, and think we need to zoom in on prompt injection because it's, I think it's the number one gen AI risk. the reason it's so challenging uh is structural. Large language models are built to follow instructions. If you embed malicious instructions inside a data source that the model trusts, it becomes extremely hard for the model to distinguish between instructions meant for the user and instructions meant to manipulate it. So
18:54
You know, this is fundamentally an input validation problem without a deterministic solution. ah And that's why we rely on those layered mitigations that we talked about. So strong tool permissions, boundaries, provenance tracking to identify where content came from, uh input sanitization to strip out certain patterns, output validation, like you talked about, forcing AI's responses into strict schemas or running them through a secondary model.
19:23
to catch any anomalies. Yeah, this is where enterprises need to adopt the mindset that we adopted years ago in networking. Can't prevent every attack, can't contain impact and minimize uh blast radiuses. Prompt injection can't be solved, but it can be managed. Compromised prompt should not lead to data exfiltration. A malicious input should not be able to
19:51
cause an AI agent to delete a record. So guardrails, permissions and validation, you know, they are the control plan that limits the model ah and what it can do, even if it's temporarily manipulated. Yeah. Yeah, I think let's pivot here and just get into the SDLC, the development process. ah businesses are quickly discovering that building
20:20
AI apps requires a new software development life cycle, STLC. We need threat modeling, not only for the application code, but for the AI workflows, know, the prompt templates and the knowledge sources. need uh code review, but also prompt review. You you need to test the suites, not just for functional behavior, but for red team behavior. And the question becomes, how does the AI behave under stress?
20:49
Can it be tricked into revealing something in an attacker escalate privileges through clever prompting, you know, things like that. Yeah. And the red teaming becomes essentially you need internal or external teams whose job is to poke holes in your AI systems. They'll try jail breaks. They'll try data extraction attacks, uh, privilege escalation, poisoning, uh, impersonation, everything a real attacker would do in a live environment.
21:17
And you need to do this repeatedly because model behavior shift with updates, right? We see those large language models continue to become smarter. ah So this is not a one-time test. It's a continuous validation. No different than you would for any of your other security testing within your organization. And it really has to be treated seriously as application penetration testing or vulnerability scanning. Yep. All right.
21:48
So we've talked quite a bit about, about this and obviously you can tell that we're very passionate about, you know, when it comes to the security landscape here for AI. Yeah. There's a lot to it. Yeah. And, and so the other thing is monitoring AI systems is completely different from monitoring, monitoring, you know, traditional apps, know, SaaS applications or homegrown apps. You're not just looking for logs of get and post requests. You're monitoring prompts.
22:14
your monitoring completions, tool calls, model versions, and anomaly patterns in user behavior that are using your AI infrastructure. You want dashboards that show when a user suddenly starts asking for sensitive data or when the model suddenly begins to respond in unusual ways. You want to detect new jailbreak patterns. ah You also want to log all retrieval events from
22:41
your vector database so you can reconstruct what happened offline. Yeah. Yeah. And then, and then what do you do? You know, incident response. So what happens when your uh AI assistant leaks customer data? uh What happens if a malicious actor poisons your knowledge base? What if an AI agent performs an unauthorized action? You have to have a playbook that includes uh isolating connectors.
23:10
revoking agent tokens, disabling tool use, rolling back to a previous model version, ah re-indexing or cleansing your vector database. All incidents require a blend of cybersecurity forensics and machine learning operations. And your teams must be trained before the first incident occurs, not after. um So Gary, I know this is getting long, but you want to chat a little bit about the...
23:40
Shadow AI? Yeah, absolutely. I think just to comment off your last point, mean, it's really AI business continuity. I mean, you've already put these types of practices in place. And almost every company we talk to is struggling with Shadow AI, uh similar to what we saw with Shadow IT a number of years ago with cloud. In this case, employees are pasting corporate documents into unapproved uh AI tools.
24:09
They've gone and they've spun up their own user accounts without formal approval from the security organization or just the enterprise in general. And this is an educational problem, uh a governance problem and really a security problem is what we've been alluding to. Data loss is the obvious risk, but the other side is model behavior. Employees are unintentionally fine tuning models or storing embeddings in consumer tools.
24:37
you lose control of where your intellectual property lives. So that's a huge exposure for your company. It's not just the financial component of Shadow AI. You're losing the IP to your business. Yeah. And we've talked about Build or Buy in some of the episodes. And I think vendor risk goes far deeper than most organizations assume. ah So you need to know whether an AI vendor ah trains on your data, how long they retain it.
25:06
whether they segregate customers at the model level, whether they have been red teamed, ah whether they store any logs offshore, what their AI safety practices are, whether their model updates could break your guardrails. So, you know, the net of that is, buying AI tools is not like buying a CRM license. You're integrating a probabilistic reasoning engine into your business and you need to treat vendor selection like a, you know, a high stakes security process. Your security team needs to be involved.
25:36
Totally agree. Buying versus building AI tools is something that both you and I can help clients with. And now vendor analysis requires a new level of thorough examination from a risk perspective. Build versus buy is not a new concept. saw that at data centers, we saw that with software deployments. So this is just a new iteration when it comes to AI. This is where things get interesting. Every company is trying to figure out
26:03
who actually owns GenAI risk. And I hear the question all the time. Is it the CISO, the CIO? Is it the chief data officer, chief AI officer, if an organization has one already? Is it legal, compliance, product? And it boils down to the, really the truth is nobody owns it alone. GenAI risk cuts across every function. And the organizations that succeed treat AI as a multidisciplinary initiative.
26:33
They build an AI governance council. They establish clear RACI, spelled R-A-C-I-E, if you're not familiar with that, around model operations, data controls, red teaming, vendor review, and business enablement for the organization. Yeah. Great point, Gary. uh So RACI um is, if you're not familiar with it, it's responsible, accountable, consulted, and informed. And it's classically a project management matrix.
27:03
It certainly applies here. Yeah, I had a boss that I had a couple bosses that had that in the organization quite heavily. So very familiar with it brings back memories. Yeah. And now, you know, here's the, new idea emerging industry is treating AI systems like employees. So you, you you onboard them, you give them permissions, you give them training data, monitor their productivity and inspect and eventually
27:32
What do do? You off board them by revoking access, ah deleting embeddings, decomposing environments. And this mental model actually can, I think, help executives understand the operational realities of AI deployment in their business. Right. So if we were to break this down into sort of an executive playbook, 30, 60, 90, 365 day roadmap. And as we kind of wrap the episode here,
28:01
What can executives act on? I'd say in the first 30 days, you want to inventory everything. What AI tools your teams are using, what data is flowing into them, what connectors you've already deployed, and what governance you're missing. Because you might be missing something from a governance standpoint. And we've talked about AI laws, and these are the things that you just want to be mindful of. You want to identify the highest risk use cases and bring shadow AI under control.
28:30
So then if you move to the 90 day window, you build the foundation, you adopt a reference architecture, you enforce data classification for AI, you deploy guardrails, content filters and logging, you pilot red teaming, and you choose a framework. NIST AI RMF is the one that we recommend as your strategic anchor in your business. So it's one that you definitely want to take a look at. Yep. Good stuff. And then for the one year 365 day horizon is when
28:59
things would start to mature. You operationalize AIS DLC, the software development lifecycle. ah You formalize red team exercises, integrate AI risks into your enterprise risk dashboards, ah negotiate and implement stronger vendor contracts. We talked about build versus buy. ah Evolve your governance council, your AICOE and your governance council in particular. And you begin treating AI not as a
29:28
project, but as a permanent part of the enterprise operating model. And this is where your organization moves from experimenting with AI to running AI as a secure governed business critical platform. Yeah. And you mentioned COE. We uh talked about this on one of the very first and earliest episodes. You really need to make sure you have a COE in place. I know Scott, you've helped out organizations build one. It's really imperative.
29:55
Because without that foundation, everything else could just fall apart. Right. So we covered a lot. This has been a lot for many of you, maybe even to digest, but it's necessary. know, gen AI brings extraordinary value and, I think we can all agree with that, but only when it's deployed safely and responsibly and enterprises that really figure this out now will have a massive competitive edge in the next decade. Yeah. Yeah. And,
30:25
We always give points to business leaders that are listening to this. uh I think the next step is pretty simple. You could take this episode, share it with your technology leadership team and make sure that they are and your team is building their roadmap. And if you want help, companies like ours, uh like mine and like Gary's are working with enterprises every day to build secure AI architectures and governance frameworks. Yeah, yeah. I hope this was helpful for everyone to Scott's point.
30:53
take it uh as a reference for your teams. Thank you for listening. It was definitely an important topic that we wanted to cover. There'll be more around security in future episodes. And if you enjoyed this episode, follow the show, share the episode with your peer network, and let us know what topics you want us to tackle next. Thank you so much. And for now, I'm Gary Slover. And I'm Scott Bryan, and thanks for joining the American Way AI Podcast. See you next time.