Found in AI: AI Search Visibility, SEO, & GEO
Found in AI is a podcast for marketers, founders, and content strategists who want to understand—and win—AI search visibility in the new era of search.
Hosted by Cassie Clark, fractional content strategist and AI search optimization expert, the show explores how platforms like ChatGPT, Perplexity, Gemini, and Google’s AI-powered search experiences discover, select, and surface content.
Each episode breaks down real-world experiments, SEO, GEO / AEO, and content marketing strategies designed to help brands get found in AI-generated answers, not just traditional search results.
You’ll learn how to:
-Optimize content for AI-driven search and answer engines
-Blend traditional SEO with AI search optimization
-Build entity authority across search, social, and AI platforms
-Drive traffic, leads, and trust as search behavior continues to evolve
If you’re trying to future-proof your content strategy and understand how AI is reshaping discovery, Found in AI gives you the frameworks, insights, and tactics to stay visible—wherever search happens next.
Found in AI: AI Search Visibility, SEO, & GEO
OpenAI’s Prompt Injection Framework + Google’s Personal Intelligence: What Marketers Need to Know
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
📬 Love the podcast? You’ll love the newsletter.
Get the weekly 3-2-1 on AI search + marketing: Subscribe
This week on Found in AI, Cassie is covering two major developments that signal where AI search is heading, and what marketers and founders should be paying attention to right now.
First, OpenAI published a detailed security framework on how they’re designing AI agents to resist prompt injection. They’ve confirmed this is an ongoing challenge that won’t be fully “solved,” and they’re training their systems to be increasingly skeptical of content that tries to game them. For marketers, this reinforces why genuine authority and clean structure matter more than ever.
Then, Google announced the expansion of Personal Intelligence across AI Mode in Search, the Gemini app, and Gemini in Chrome. Search is now pulling from users’ Gmail, Google Photos, and purchase history to deliver hyper-personalized answers. For brands, this means first-party data has just become a critical part of an AI visibility strategy.
Both stories point to the same conclusion: AI systems are getting smarter about trust, and the brands that earn it are the ones that stay visible.
Covered in this episode:
- What prompt injection is and why OpenAI says it’s unlikely to ever be fully solved
- How OpenAI is using automated red teaming to find new attack patterns before bad actors do
- Why AI agents being trained to resist manipulation is a signal that trustworthy, well-structured content wins
- What Google’s Personal Intelligence feature does and how it’s changing search personalization
- Why first-party data (collected ethically) has just become even more critical for AI visibility
- How entity authority is expanding from the public web into someone’s personal Google ecosystem
- How both stories connect back to the FSA Framework and what it means for your content strategy
Let’s connect:
LinkedIn → Cassie Clark | Fractional Content Strategist
Website → https://cassieclarkmarketing.com
Download Freshness, Structure, Authority: The Framework for AI Search Visibility:
P.S. Is your brand losing its "Answer Authority"?
Most series A/B and enterprise brands are being "nudged" out of AI search results because of entity gaps and "stale" content. I am opening a limited number of specialized audit slots to help you reclaim your Share of Voice using the FSA Framework (Freshness, Structure, Authority).
Request your 7-Day AI Search Visibility Audit: https://cassieclarkmarketing.com/ai-search-visibility-audit/
Hey marketers, welcome back to another edition of Found in AI. I am Cassie Clark, a fractional content strategist, AI search optimization expert, and the host of the show where we break down AI search, GEO, and AEO strategies so we don't get left behind in this new wave of user search behavior. Today is Thursday, March 19th, and this week we have two big stories. Honestly, I think they're connected in a way that I think is gonna click for a lot of us. And maybe separately just talking about them, they'll be like, oh, cool story, but together it makes a lot more sense. First, OpenAI published a detailed framework on how they're designing AI agents to resist prompt injection. Basically, how they're training these systems to be skeptical of manipulative content. And then Google announced the expansion of something called personal intelligence across AI mode and search, the Gemini app, and Gemini and Chrome. Now, the through line of all of these stories, both of them, well, the AI systems are getting smarter about what they trust, and more personal about how they serve up answers. Both of these developments have real implications for how we think about content strategy. Okay, let's get into it. Alright, let's start with OpenAI. On March 11th, they published a pretty comprehensive overview of how they're thinking about securing AI agents against prompt injection attacks. Now, if this is the first time you've heard the word prompt induction, here's the quick version. It's when someone hides instructions inside external content, whether that's a web page, an email, a document, whatever, with the hope that an AI agent will read those instructions and then follow them instead of doing what the user actually asked. Now, think of this like social engineering, but instead of targeting people, it's targeting AI, which in turn still targets people. So someone edits a web page or slips something into an email and that AI agent picks it up and then acts on that instead of the original ask. Now, OpenAI when they announced this, they shared a pretty wild example about it. In one test, a malicious email was planted in a user's inbox. The user asked Chat GPT's Atlas agent to draft an out-of-office reply, and instead the agent followed the hidden instructions in that email and composed a resignation letter to the user's CEO. That out of office email, it was never sent. That's not a theoretical risk, that is a real demo that they shared. Now, here's where it gets interesting for us. OpenAI is treating this problem as something that they just can't fix. They explicitly said that prompt injection, like scams and social engineering on the web, is unlikely to ever fully be solved. So instead of trying to perfectly filter every malicious input, they're redesigning the system so that even if an attack partially succeeds, the damage they is contained. Now, the way they've talked about this, they're comparing the AI agent to a customer service representative. The agent has authority to do certain things, but there are guard rules that limit how far it can go if it is misled. They've also built what they're calling an automated red teaming system. Basically, that's an AI-powered attacker trained with reinforcement learning to find prompt injection vulnerabilities before the bad actors do. They said that this system discovered attack patterns that didn't appear in their human red teaming or in any external reports. And they've already rolled out an aggressively trained model update to all chat GPT Atlas users. So, I can hear you asking, why should we as marketers or founders care about prompt injection security? Well, here's the implication that not many people are talking about so far. If AI agents are being trained to be increasingly skeptical of the content they encounter on the open web, then what does trustworthy content, putting that in air quotes, what does that look like to these systems? Well, from what I can tell right now, it's the content that's clearly structured, has consistent authority signals, and the content that doesn't try to game the system or use manipulative patterns. Now, this connects directly with the FSA framework that we talk about here on the show. The structure piece, that's about making your content easy for machines to read and extract cleanly, not to manipulate them, but just to be legible to them. And that authority piece, that's about building a consistent, trustworthy brand presence across the entire internet so that AI systems can verify that you are exactly who you say you are. Now, here's the nuance to all of this that I think we really need to kind of just sit with and think about for a minute. As AI agents get more sophisticated at detecting manipulative content, the gap between brands doing real content strategy and the brands trying to game their way into AI answers is only going to widen. The brands stuffing content with you know wonky keywords or trying to reverse engineer prompt patterns or using shady tactics to show up in those AI answers, which there are brands doing that. I did see a LinkedIn post about it, and they're like, oh well, that's what we're doing. Those are the kinds of patterns that these systems are being trained to resist. So if that's what you're doing, be cautious. I don't know how long that'll last. But the brands that are out there building genuine authority with clear structure and fresh expert level content, I think they're gonna be looking more and more trustworthy to these systems over time. I said this a while back and I think it still holds. AI visibility rewards the brands that build trust, not the ones that are trying to trick the algorithm. And OpenAI's announcement this week is essentially confirming that from the security side of all of this. Alright, let's shift to the second big story of the week. On March 17th, so literally two days ago, Google published a blog post announcing the expansion of something they're calling personal intelligence. Personal intelligence is now rolling out across AI Mode and Search, the Gemini app, and Gemini and Chrome for free tier users in the US. Here's what it does: it connects our Google apps, so Gmail, Google Photos, more, all of them, to give you the answers that are uniquely tailored to you. Not generic answers, not top 10 list, but answers that are based on your actual purchase history, your travel confirmations, your photos, your preferences, whatever Google knows about you. Now, in the announcement, Google gave a couple examples. You could ask about a product you bought without remembering the brand name, and it'll pull from your purchase receipts to give you specific troubleshooting steps for that exact model. Or you can ask for restaurant recommendations here in a layover, and it'll factor in your dietary preferences, your gate numbers, and how much time you actually have. Now I've been gluten-free for over 10 years, and that sounds really nice to me. But the example that really stands out, they said if you ask for shopping recommendations like a bag to match new shoes, it's not gonna show you just a list of bags, but it'll show you bags that match the specific shoes you recently purchased right down to the hardware color. That's a level of personalization that changes the entire game. So let's be clear about what's happening here. Google is turning search into a personal assistant that knows your history. This is not traditional search, this is not what you're we're used to. This is not even the AI overviews we've been tracking. This is AI-powered search that uses your first-party data, your emails, your photos, your purchase history, all of that to generate answers that are specific to you. And Google is being deliberate about framing this around user control. They're saying that users choose which apps to connect, they can turn connections on and off at any time, and the system doesn't train directly on your Gmail inbox or photo library. That's what they're saying. But here is the strategic implication that should be flashing in every um marketer's mind right now that's listening. First party data just became even more critical. Think about that for a second. If Google is now pulling from Gmail to personalize search answers, then every email that you send to a customer, every order confirmation, every onboarding sequence, every newsletter, it's now potentially feeding into how Google understands and recommends your brand back to that person. That means the brands that have strong email relationships with their customers, brands that collect first-party data ethically and use it to deliver genuine value, they're gonna have an enormous advantage in this new environment. So your post-purchase emails, those are not just customer communication anymore. They're potentially data points that Google's AI might reference to recommend you again. Your newsletter? Well, if someone's reading it in Gmail, that's a signal of brand affinity that can influence future AI personalized recommendations. And I want to emphasize ethically here, Google is framing the entire feature around user control, transparency, and choice. Brands that are spammy, manipulative, there's a word again, or reckless with customer data, they're gonna be on the wrong side of all this. It's not a loophole, but this is a reward for brands that are actually caring about their customer relationships. So the generic top 10 content play ballisticals, they're losing even more ground. I mean, they're showing up in those AI answers, but I don't think that they'll be super relevant forever. I could be wrong there. But if someone can ask Google for restaurant recommendations and get answers tailored to their actual dietary preferences and their travel schedule, why would they ever click on a generic list to call? The content strategy in this environment is going to have to shift toward being part of someone's ecosystem, not just showing up in search results. And what that means is building real email relationships, not just growing the list for the sake of a number. It means making sure your post-purchase communication is genuinely helpful, not just transactional. It means investing in brand experience that creates the kind of customer affinity that Google system can detect. And it also means thinking about entity authority, not just across public websites, but across every single touch point where your brand shows up in someone's Google ecosystem. If you've been thinking about building entity authority as just a public web play, like getting mentioned on third-party sites, or building backlinks are showing up in Reddit, this is a wake-up call. Entity authority is now expanding into the private personal layer of how people interact with Google. And the brands that show up in someone's inbox in their purchase history in their daily life are the ones that personal intelligence is going to surface. So when we zoom out and look at both of these stories together, here's the picture that it paints. On one side, open AI is training AI agents to be skeptical of the content that tries to manipulate them. The systems are getting better at detecting manipulation and constraining the damage when it happens. That means the bar for what counts as trustworthy content is gonna go way up. On the other side, Google is making search deeply personal. The answers people get are no longer just based on what's publicly available, they're also based on that person's own history, preferences, and relationships with brands. Both of these developments point to the same conclusion. The brands that win in AI search are the ones that earn trusted at every single layer and every touch point. Trust in how your content is structured and presented, trust in how your brands show up across the web, and trust in how you show up in someone's personal digital life. That means their inbox, their receipts, their daily experience with your product. The FSA framework, freshness structure, authority, still applies here. But I think what we're seeing this week is that authority is expanding, and it's not just about being mentioned on those third-party websites, but it's also about being a trusted presence in someone's actual life. Okay, that's it for this week. Two big stories, one clear signal. AI is getting smarter about bearing trust, and the brands that build it genuinely are the ones that stay visible. I'll keep tracking all this as it unfolds and showing what actually changes versus just what sounds good in a blog post. But if this episode helped you make sense of the noise, hit subscribe. I would love you forever. If you're trying to figure out how your brand fits into an AI first search or where to start with your strategy, head over to CassieClarkmarking.com to get started with your AI search visibility audit. I will see you in the next episode. Until then, stay visible.