Found in AI: AI Search Visibility, SEO, & GEO

Does Domain Authority Still Matter in AI Search? (A 96-Hour Case Study)

• Cassie Clark • Episode 20


📬 Love the podcast? You’ll love the newsletter.
 Get the weekly 3-2-1 on AI search + marketing: Subscribe

Why isn’t domain authority enough in AI search? Because authority in AI-powered systems is continuously reassessed based on freshness, structure, and how closely content matches user intent in the moment.

In this episode of Found in AI, Cassie walks through a real-world case study where a single content update—optimized specifically for AI search—displaced legacy SEO publishers in under 96 hours across systems like Google Gemini and Perplexity.

The episode breaks down how AI search engines evaluate sources differently from traditional search, why domain authority is no longer a permanent advantage, and how visibility shifts when AI systems identify a clearer, more reusable primary source.

In this episode, you’ll learn:

  • Why domain authority alone is no longer a reliable safety net in AI-generated answers
  • How one content update led to a 96-hour shift in AI Share of Voice
  • What AI Share of Voice (AI SoV) measures and why it behaves differently from keyword rankings
  • How AI search creates a winner-take-most dynamic around a primary source
  • Why legacy publishers weren’t penalized — they were simply outcompeted
  • How the FSA framework (Freshness, Structure, Authority) aligns with how LLMs evaluate content
  • Why SEO still matters, but must be paired with structure AI systems can interpret
  • What this case study signals for Series A, Series B, and enterprise teams heading into 2026

If you’re relying on domain authority as a long-term moat, this episode shows why AI search changes the rules — and what to do about it.

Let’s connect:

LinkedIn → Cassie Clark | Content Strategist
Website → cassieclarkmarketing.com

P.S. Most series A/B and enterprise brands are being "nudged" out of AI search results because of entity gaps and "stale" content. I am opening 3 specialized audit slots for January 2026 to help you reclaim your Share of Voice using the FSA Framework (Freshness, Structure, Authority).

Request your 7-Day AI Search Visibility Audit: https://cassieclarkmarketing.com/ai-search-visibility-audit/

What if everything we knew about authority was wrong? For two decades, we've been told that backlinks and domain age are the keys to the kingdom. But inside generative systems like ChatGPT, Perplexity, Gemini, Grok, whichever one's your favorite, the rules are being rewritten in real time. Last week, I ran a test that proved a boutique agency can displace a legacy publisher in less than four days, not by outspending them, but by outstructuring them. This is the case study of a 96-hour takeover.

Hi, I'm Cassie Clark, a Factual Content Strategist and the host of the Found in AI podcast. This show is where we talk about how AI search actually works, by running experiments and testing it out, and then what those experiments mean for brands trying to stay visible while the rules change in real time.

Listen, I know it's the holiday week. I wasn't planning on putting out an episode this week, because I know you're busy. I'm busy. I have butter sitting on my counter right now, ready for cookies once I hit record and publish. But honestly, I've just been so flabbergasted. Yeah, flabbergasted is a good word. My flabbers have been gasted. For what I've seen with these AI search engines, I just had to sit down and share it with you. Shout out to Travis, who has been getting my updates in real time. Travis, I appreciate your enthusiasm, and thank you for letting me nerd out over this. Sorry I've been in your DMs lately.

Anyway, that 96-hour takeover I just mentioned wasn't a fluke. It was a deliberate, calculated application of the FSA framework. If this is your first time hearing that phrase, FSA stands for Freshness, Structure, and Authority, and it's a good framework to keep handy when updating or creating content that is specifically optimized for LLMs. It sits on top of the SEO strategy you're already running.
So today, I want to walk you through how that takeover happened. I want to talk about the data, and why this should change how we think about authority in AI search going forward. Think of today's episode as an audio case study. It's not a victory lap, and it's not about beating the bigger brand. It's about what happened when I updated a single piece of content specifically for AI search, and how that one change caused legacy SEO publishers like Search Engine Land and Search Engine Journal (even though Search Engine Land wasn't included in my chart, if you go look at the case study) to quietly disappear from AI-generated answers within hours of that content update.

When I did that update, there were no backlinks added, no paid promotion, no push to get the idea in this post out onto other platforms. I just went in and updated the definition, added an FAQ schema correctly this time, and an author schema, and that's about it. I just changed how the post was structured and positioned for large language models, and what this experiment revealed is something I don't think most teams are fully prepared for yet: in AI search, domain authority is no longer the safety net it once was, which is the opposite of what we've been hearing about how we appear in these answers.

Here's what I mean by that. Most people hear authority and think, oh yes, a 10-year-old domain. But in AI search, authority is contextual, and that means being the most reliable source for a specific intent right now. So when I applied the FSA framework to a single blog post on my website, I wasn't looking for a ranking. In fact, as of now, I don't even think that blog post ranks on the first page of Google.
To be clear, I'm not talking about the AI Overview, which the post actually does appear in, which was odd to see since I wasn't really tracking it. I just mean the traditional 10 blue links on the first page. Instead, I simply provided a path of least resistance for the LLMs.

So I tracked this update in Perplexity, and within 96 hours, AI Share of Voice for Cassie Clark Marketing rose from a baseline of about 27% to 72.7%. More importantly, that visibility held over time. Over the following days, the legacy publishers that had previously dominated the prompt dropped to 0% visibility, while the updated blog post remained, and continues to remain (I just checked before hitting record), the primary cited source. Now I want to be very clear again: this isn't a story about winning the internet, and it's not proof that small brands are inherently better than big ones. It's a closer look at how AI systems decide what to cite, and why authority in generative search is something brands can earn continuously rather than something they inherit forever because they're just too big to ignore.

To understand why this matters, we need to talk about AI Share of Voice, or AI SoV. AI Share of Voice measures how often a brand appears inside AI-generated answers compared to other sources when a specific prompt is asked. Unlike traditional SEO, where we focus on rankings and clicks, AI engines go out and assemble answers. They select a small set of sources the model trusts enough to cite, then stitch them together and present the result to the user. AI Share of Voice helps us see how that trust is distributed across the answer. At its simplest, the math is straightforward: the number of times a brand is cited, divided by the total citations, multiplied by 100. So if five brands are cited in response to a prompt, AI Share of Voice shows who controls most of that answer.
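That formula is simple enough to sketch in a few lines of Python. The domain names below are illustrative, not the actual citation data from the study:

```python
from collections import Counter

def ai_share_of_voice(citations: list[str], brand: str) -> float:
    """AI Share of Voice: citations of `brand` / total citations * 100."""
    counts = Counter(citations)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return counts[brand] / total * 100

# Hypothetical answer with 11 citations, 8 pointing at one brand.
citations = (
    ["cassieclarkmarketing.com"] * 8
    + ["searchenginejournal.com"] * 2
    + ["example.com"]
)
print(round(ai_share_of_voice(citations, "cassieclarkmarketing.com"), 1))  # 72.7
```

Run the same prompt on a schedule, log the cited sources each time, and this one function gives you the AI SoV trend line the episode describes.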
What's critical here is that AI Share of Voice behaves very differently from keyword rankings. When one brand gains visibility, another usually loses it. In practice, that creates a winner-take-most dynamic. The primary source captures the majority of the answer, while secondary sources receive partial mentions or, in the case of this experiment, quietly disappear altogether. So in this study, I tracked AI Share of Voice over time, mostly to see whether early gains would stick, what would happen to that particular blog post, and whether the system would continue reshuffling sources as it re-evaluated trust.

To keep the experiment clean, only one variable was introduced, and the prompt I tracked was: "I need to do a baseline AI visibility audit. Help!" This was chosen intentionally. It's a real early-stage research question my particular audience is asking, the kind of thing someone asks before they know which tools, frameworks, or vendors to trust when it comes to these audits. If you hear something in the background, that's my cat, Phil. Phil has decided he needs to be in here, mostly because I have the heated blanket going, and that's now his favorite place to lie during the day. We fight over it.

But anyway, back to this. I only updated one blog post. That page is a baseline audit for AI search visibility, and it includes a template and KPI definitions. No other content was updated. Nothing was promoted. I simply added a new definition, fixed the schema markup, added an author schema, and restructured a few lengthy paragraphs. Now, this blog post was shared in early fall, before we really started figuring out what's working in AI search, so I knew FAQ schema needed to be added, but it turns out I had just added in the wrong thing. Phil is now on my desk, so heads up.
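For listeners who haven't worked with schema markup before, here is a minimal sketch of the kind of FAQ and author structured data described above, built as Python dicts and serialized to JSON-LD. The question, answer, and field values are illustrative assumptions, not the actual markup from the post:

```python
import json

# Schema.org FAQPage markup: one Question with its accepted Answer.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is a baseline AI visibility audit?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "A snapshot of how often your brand is cited in "
                    "AI-generated answers for a set of prompts.",
        },
    }],
}

# Schema.org Person markup used as an author signal.
author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Cassie Clark",
    "jobTitle": "Content Strategist",
    "url": "https://cassieclarkmarketing.com",
}

# Each object would be embedded in its own
# <script type="application/ld+json"> tag in the page's HTML.
print(json.dumps(faq_schema, indent=2))
```

Getting the `@type` nesting right (Question inside mainEntity, Answer inside acceptedAnswer) is exactly the kind of detail that's easy to get wrong the first time, which is what the episode's "I just added in the wrong thing" refers to.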
Anyway, to track AI SoV, all of the prompts were run in a logged-out private browsing session, and I tracked them across both desktop and mobile, to remove personalization and device bias. I tracked AI SoV continuously from December 17th to December 22nd, which captured more than 125 hours of system behavior, including multiple re-ranking cycles. And this is where things get a little interesting.

Before the update, AI SoV for Cassie Clark Marketing on this particular prompt was 26.67%. Citations were spread across several different sources, including niche sites and legacy publishers, and no single source clearly dominated the answer, which is exactly what you'd expect when a system hasn't yet identified a strong primary reference. But because I've been testing other prompts to figure out how Perplexity works, I know Perplexity starts ingesting new sources within two hours of an update or a new publish. Hold on, Phil's got my notes. Kitty, that's my data. I need that.

Anyway, I started tracking right before the update. Two hours later, Perplexity began reassessing those sources. AI SoV jumped to 45.45% at the two-hour mark, which became an early signal that, hey, the model is doing something, that maybe this piece of content is clearer, the structure is better, and it's definitely fresher because it was just updated. So I decided to keep watching. By December 20th, Perplexity appeared to finish rebalancing the scales. AI Share of Voice peaked at 72.7%, and the updated page was locked in as the primary source. At the same time, legacy publishers, including Search Engine Journal, dropped to zero percent visibility for that prompt. And this is the part that really matters: those brands weren't penalized. They weren't demoted. They were simply outcompeted by content that was easier for the system to work with.
We've been told that good SEO translates to good GEO, so for brands that have been doing good SEO, it might come as a surprise when their content starts losing out on citations because it's not properly structured for LLMs. The way I think about it is that a GEO strategy is 80% SEO and 20% GEO. We still need all those keywords. We still need all our technical markup. But now we have to restructure our content so it best matches how these AI systems work.

The final signal in this experiment came after the spike. By December 22nd, AI Share of Voice stabilized at 67%, and it's held steady across multiple re-ranking cycles, even this morning. That retention tells us the system didn't just choose a new source. It adopted my blog post as a new canonical reference. And that's the part teams are missing here. AI search visibility is zero-sum. As one source becomes clearer and more useful to the model, others fall out of the citations. Once the system selects a primary source, it tends to consolidate around it rather than distributing visibility evenly across competitors. From a buyer's perspective, the primary source becomes the default frame of reference.

For Series A, Series B, and enterprise teams, I think this highlights a shift that maybe hasn't fully landed yet. Legacy authority in AI search is more fragile than it looks. High domain authority still matters, but it's no longer a shield for visibility in those AI engines. If your content is stale, or just walls of loosely structured text misaligned with how AI systems assemble answers, even well-known publishers who are giants within SEO can be bypassed. And the speed of change is also something we need to talk about, because it accelerates dramatically. Traditional SEO updates play out over weeks or months.
If you work with an SEO strategist, they'll tell you it's going to take a couple of months before you see real movement. But in AI search, visibility can shift in hours. That's both good and bad, depending on how you look at it, because it means both the risk and the opportunity exist on a much shorter timeline than most teams are used to managing.

And finally, AI search is winner-take-most. Being cited once isn't enough. The primary source captures the majority of the answer, while secondary sources receive partial mentions or disappear altogether. You want to become the primary source. In practice, this means AI search visibility comes down to whether your brand is present and trusted at the moment decisions are being formed. So if a boutique agency like Cassie Clark Marketing can displace a legacy SEO publisher in under 96 hours, the real question isn't how impressive that is. The real question is where your brand stands inside AI-generated answers today, and whether that visibility would hold if the system re-evaluated it tomorrow. And that's the bigger story with this case study.

I want to be really clear about one last thing. None of this means we give up on SEO. SEO is still massively important. The fundamentals still matter, and they still feed these systems in very real ways. What's changing is how that work shows up inside AI-generated answers. We can't assume traditional domain authority alone will carry us anymore. We have to be more intentional about how content is structured, how fresh it is, and how clearly our brands signal relevance for specific intents. In a world where AI systems are choosing who gets cited, who gets summarized, and who becomes the default frame of reference, visibility isn't something we can set and forget. To stay visible, we really need to rethink how we approach content.
Not just how we rank in traditional search, but how we're interpreted and reused by the AI systems that decide who deserves the spotlight. And I think it's really important that we start thinking about this, especially as 2026 comes up and we're rethinking our content marketing strategies. So if you want a clear and honest read on that, I am opening a limited number of AI visibility audits in January. These audits are designed to show where your brand appears in AI search today, where you might be at risk, and what signals you need to strengthen if you want visibility that actually sticks. You can find more resources, case studies, and guides on my website, CassieClarkMarketing.com. Everything is linked in the show notes.

Now, that's the end of this episode. Again, we're all super busy. I will be back next week after the holiday. There's no news update this week because it falls on Christmas. But until then, take care, and I will talk to you soon. I'll see you in the next episode.