Found in AI: AI Search Visibility, SEO, & GEO
Found in AI is a podcast for marketers, founders, and content strategists who want to understand—and win—AI search visibility in the new era of search.
Hosted by Cassie Clark, fractional content strategist and AI search optimization expert, the show explores how platforms like ChatGPT, Perplexity, Gemini, and Google’s AI-powered search experiences discover, select, and surface content.
Each episode breaks down real-world experiments, SEO, GEO / AEO, and content marketing strategies designed to help brands get found in AI-generated answers, not just traditional search results.
You’ll learn how to:
- Optimize content for AI-driven search and answer engines
- Blend traditional SEO with AI search optimization
- Build entity authority across search, social, and AI platforms
- Drive traffic, leads, and trust as search behavior continues to evolve
If you’re trying to future-proof your content strategy and understand how AI is reshaping discovery, Found in AI gives you the frameworks, insights, and tactics to stay visible—wherever search happens next.
How Should Brands Measure Visibility in AI Search?
📬 Love the podcast? You’ll love the newsletter.
Get the weekly 3-2-1 on AI search + marketing: Subscribe
Do AI search rankings actually matter?
In this episode of Found in AI, Cassie breaks down new research from Rand Fishkin and SparkToro that challenges one of the biggest assumptions marketers are making about AI-powered search: that rankings still matter.
She walks through what the data actually shows about how AI engines like ChatGPT and Google AI generate brand recommendations, why rankings are wildly unstable, and why visibility — not position — is the metric brands should be paying attention to instead.
Using both Rand’s research and her own Search Engine Journal displacement case study, Cassie explains how smaller brands can compete with legacy publishers in AI-generated answers by optimizing for visibility, probability, and share of voice.
In this episode, you’ll learn:
- Why AI search engines don’t produce stable rankings — and never will
- What Rand Fishkin’s research reveals about how brands appear in AI answers
- The difference between visibility frequency and AI Share of Voice
- How to correctly calculate AI Share of Voice (and what it actually tells you)
- Why rankings have nothing to do with appearing in AI-generated recommendations
- How a GEO strategy helped a smaller brand displace Search Engine Journal in AI answers
- Why AI search rewards consistency and pattern recognition over domain size
- What marketers should understand before spending money on AI visibility tools
If you’re a marketer, content strategist, or founder trying to understand how AI-powered search really works — and how to increase your chances of being included in AI answers without chasing meaningless rankings — this episode will help you rethink what visibility actually means in the age of AI.
Resources:
Case Study: The Displacement of Legacy Authority: A 125-Hour AI Share of Voice Case Study
Let's connect:
LinkedIn → Cassie Clark | Content Strategist
Website → cassieclarkmarketing.com
P.S. Is your brand losing its "Answer Authority"?
Most series A/B and enterprise brands are being "nudged" out of AI search results because of entity gaps and "stale" content. I am opening 3 specialized audit slots for February 2026 to help you reclaim your Share of Voice using the FSA Framework (Freshness, Structure, Authority).
Request your 7-Day AI Search Visibility Audit: https://cassieclarkmarketing.com/ai-search-visibility-audit/
Hey, before we get started, I want to give you a quick heads up. This episode might sound a little different than usual, not because the audio quality is bad, but because I'm dealing with some massive brain fog today, and it's making it harder than normal to find the words or get sentences out smoothly, even though I have notes right in front of me. I have a medical thing coming up that should address all this, and I am okay, I promise. But for today, I'm asking for a little patience if I sound slower or more jumbled than normal, which has been going on for a hot minute, but just now seems like it might be an actual thing. That said, the topic today is super important and I don't want to skip it, so thank you for sticking it out with me, even if I stumble over a bunch of words. Welcome back, I'm Cassie Clark. I'm a content strategist and AI search optimization expert, which is still weird to say, and you are listening to Found in AI, the show dedicated to helping marketers and founders learn AI search and GEO so we don't get lost in this new wave of user search behavior that is coming, or here, depending on how you look at it. Today, we're talking about a piece of research that made me go, finally, finally, the data has caught up, and I'm excited about it. Rand Fishkin at SparkToro just published new research last week showing what many of us elbow deep in AI search experiments have learned: AI search doesn't behave like classic search. It is super inconsistent and hecka probabilistic, and if you're trying to rank number one in ChatGPT, I need you to throw that phrase out of your vocabulary, because it does not apply to engines like ChatGPT, Perplexity, or AI Overviews, whichever one you're using. Here's why. Companies are spending real dollars on AI visibility tracking.
As Rand points out in his post, there's already an estimated $100 million a year going into this category, and if you look at these tools, the obvious question is: are these AI engines consistent enough for rank tracking to mean anything? So Rand and a partner at Gumshoe AI decided to test it, and you know as much as I do, we do love good experimentation over here. This wasn't just an "I ran three prompts and had a thought about it" study. Instead, they had volunteers, like a bunch of them: 600 volunteers ran 12 prompts across ChatGPT, Claude, and Google's Gemini and AI Overviews, for a combined 2,961 runs, and then they normalized the results into ordered lists so they could compare apples to apples (or apples to oranges, or apples to grapes, or whatever), and then they shared it with all of us. Here's the headline that came out of this research. If you ask the same AI tool the same "best brands or products" question 100 times, there's less than a 1 in 100 chance that ChatGPT, or Google AI, gives you the same list of brands twice, and for the same order, it's closer to 1 in 1,000. Also, the length of those lists changes. Sometimes the model gives you two to three recommendations, sometimes it's 10 or more from the same exact prompt. So if your dashboard is telling you that you moved from number four to number two in ChatGPT, I need you to hold on just a second, because that's more astrology with a user interface than real insight. Now, here's where I really love Rand's research and the report he put out, because he didn't stop at dunking on rankings. The next part, which was super important, is that even when the order is random, the set of brands can be weirdly stable. When I say set, I mean the brands that the engine recommends over time. So for example, when Google AI was asked to recommend digital marketing consultants for e-commerce, one agency, SmartSites, appeared 85 out of 95 times.
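To make the "same list twice" idea concrete, here's a minimal sketch, using made-up answer lists rather than SparkToro's actual data, that checks how often repeated runs of one prompt return the exact same ordered list versus merely the same set of brands:

```python
from itertools import combinations

# Hypothetical brand lists from repeated runs of the same prompt.
runs = [
    ["Bose", "Sony", "Apple"],
    ["Sony", "Bose", "Apple"],         # same set, different order
    ["Bose", "Sony", "Apple", "JBL"],  # different length
    ["Bose", "Sony", "Apple"],
]

def pairwise_match_rate(runs, key):
    """Share of run pairs whose answers match under `key`."""
    pairs = list(combinations(runs, 2))
    hits = sum(1 for a, b in pairs if key(a) == key(b))
    return hits / len(pairs)

same_order = pairwise_match_rate(runs, key=tuple)      # identical ordered list
same_set = pairwise_match_rate(runs, key=frozenset)    # identical brand set

print(f"identical ordered lists: {same_order:.0%}")
print(f"identical brand sets:    {same_set:.0%}")
```

In this toy data the exact ordered list repeats far less often than the underlying set of brands, which is the shape of the finding: order is noise, the set is signal.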
So the ranking position in an AI engine, even calling it a ranking position, is junk. The real question is: does the brand show up at all? That's what changes from traditional search, and it's why we need to stop tracking position. When I say that, I'm talking solely about AI engines. We still need to track our position in traditional search, but for these AI engines we need to start tracking probability instead. Rand and company also tested something that matters a lot in the real world. People don't all search the same way anymore. They measured the similarity of written human prompts and found the semantic similarity was something like 0.081, meaning people can have the same intent but phrase it differently. And if that's not true to how humans talk and interact with each other, I don't know what is. Despite all of this, the models still returned a relatively consistent core set of brands in some categories. In one test, across 142 human-crafted prompts and 994 AI responses, headphone brands like Bose, Sony, Apple, and another one that I'm going to pronounce incorrectly because it's that kind of day, appeared 55 to 77% of the time. So even when prompts are messy, AI engines pick up on the intent, and that's why visibility percentage really holds up as the thing we should be paying attention to. Now, before I go on, I want to slow down for a second. When we talk about AI visibility, we're really talking about two different measurements, and they answer two different questions. The first question is simple: do you appear at all? That's visibility frequency. It's about how often your brand shows up across repeated runs of the same prompt, or across closely related prompts. There's that stumbling coming in. Because AI answers are probabilistic, this number will fluctuate, but over time, patterns emerge. That's why you'll see something like those headphone brands appearing 55 to 77% of the time.
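Visibility frequency, as described above, is just the fraction of runs in which a brand appears at all. A minimal sketch, with hypothetical runs and brand names:

```python
# Hypothetical brand lists from repeated runs of one prompt.
runs = [
    ["SmartSites", "AgencyA", "AgencyB"],
    ["AgencyA", "SmartSites"],
    ["AgencyC", "AgencyA"],
    ["SmartSites", "AgencyC"],
]

def visibility_frequency(runs, brand):
    """Fraction of runs in which `brand` appears anywhere in the answer."""
    appearances = sum(1 for answer in runs if brand in answer)
    return appearances / len(runs)

print(f"SmartSites: {visibility_frequency(runs, 'SmartSites'):.0%}")
```

Here SmartSites shows up in 3 of 4 runs, a 75% visibility frequency, regardless of where it landed in any single list.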
Now, the second question is more specific. When you do appear, how much of the answer do you own? That's AI Share of Voice. I've talked about it a couple of times on this podcast. The math for AI Share of Voice needs to be right, but it's super, super simple. You calculate it per prompt, not across a batch. You take the number of times your brand is mentioned in the AI answer and divide it by the total number of brand mentions in that same answer. If you ask "who are the best AI search optimization consultants" and the AI lists four companies total, including yours, your AI Share of Voice for that prompt is 25%. That number actually means something. It doesn't care if you were listed first, third, or last. It just tells you how much of the recommendation space you occupy in that moment. Now, here's where Rand's research connects directly to something I have been saying. Because rankings reshuffle constantly, position in the traditional SEO sense doesn't really matter. But when a brand appears consistently and then earns a meaningful share of citations inside those answers, it becomes part of the model's default consideration set, from what we can see right now. That is what brands should be targeting. That brings me to the case study I shared back in December, titled The Displacement of Legacy Authority: A 125-Hour AI Share of Voice Case Study. I will link it in the show notes if you want to dive deeper. In this study, I found that after a targeted GEO update, my brand began appearing more often, and when it appeared, it occupied a larger portion of the answer. If you've heard me talk about it, you know the brand I displaced was Search Engine Journal. My brand didn't outrank Search Engine Journal, because there is no stable ranking to beat. But over time, the AI engine started selecting it instead of the legacy giant in certain responses.
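The per-prompt Share of Voice math described above can be sketched like this (brand names are hypothetical placeholders):

```python
from collections import Counter

def ai_share_of_voice(answer_mentions, brand):
    """Per-prompt AI Share of Voice: your brand's mentions divided by
    the total brand mentions in that single answer."""
    counts = Counter(answer_mentions)
    total = sum(counts.values())
    return counts[brand] / total if total else 0.0

# One AI answer that mentions four brands once each, including yours:
answer = ["YourBrand", "CompetitorA", "CompetitorB", "CompetitorC"]
print(f"{ai_share_of_voice(answer, 'YourBrand'):.0%}")
```

This reproduces the 25% example from the episode: one of four total brand mentions in a single answer, with the brand's position in the list playing no role at all.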
Now, again, we can't say that my brand outranked it, because different runs of the same prompt result in different citations. But we can say that the probability of being mentioned increased, because the content became more aligned with user intent. This is how smaller brands can compete in AI search. These engines don't care about domain authority or your website size. They do pick up on consistent, reinforced patterns across channels: social media, landing pages, websites, wherever you are putting your brand out there on the internet. If the model learns that your brand is relevant, credible, and contextually appropriate, it will include you regardless of how big you are. So, back to Rand's report for a second. I'm going to read his conclusion opener word for word, with a little added context first. Before his conclusion, he posed a bunch of questions, like: how many times should a user run a prompt to get statistically sound answers about a brand's relative visibility? His conclusion opener said, more data is needed. I fully agree with that; more people should look into these questions. And then he says, anyone who's selling AI tracking should be ashamed of themselves if they don't publish transparent, public, reviewable reports on this stuff. I want to pause there, because let me preface all this by saying: tools are not the enemy. There are very smart teams building in this space who are working on tools to help with AI tracking, and I have spoken with many of the builders, some of them for the show. You've heard their episodes. Measurement will absolutely matter when the dust settles on AI search and how we optimize for it. But tools are amplifiers; they are not teachers. I say that to say: if you don't really understand how AI systems generate answers, why those rankings fluctuate, what visibility actually represents, or which prompts matter to your brand (I'm talking about those money prompts), a dashboard is not going to help you much.
It will just give you numbers that you don't know how to interpret. I say that lovingly, because it goes for any tool you use for anything. So my advice when thinking about AI visibility and how to track it: before you spend money, especially money that stretches your budget, because I know some of these tools are kind of pricey at the moment, take time to understand the mechanics first. Decide what visibility means in your category. Decide what success actually looks like. Then go choose a tool that aligns with those goals and can give you the measurements you're after, not the other way around. Now, here's the too-long-didn't-read version of all this. Rand's research didn't prove that AI visibility can't be measured. It proved that rankings are the wrong metric. Visibility is a probability game now. If you want help understanding how AI engines evaluate brands, measuring your real AI Share of Voice, or building a GEO strategy that increases your chances of being included, that's exactly what I help brands do. Head over to my website, cassieclarkmarketing.com, for more resources, and if you want to start your AI search visibility audit this month, check the show notes for the link. Okay, that's it for this episode. I will link Rand's research in the show notes below. Until the next episode, stay visible.