Found in AI: AI Search Visibility, SEO, & GEO
Found in AI is a podcast for marketers, founders, and content strategists who want to understand—and win—AI search visibility in the new era of search.
Hosted by Cassie Clark, fractional content strategist and AI search optimization expert, the show explores how platforms like ChatGPT, Perplexity, Gemini, and Google’s AI-powered search experiences discover, select, and surface content.
Each episode breaks down real-world experiments, SEO, GEO / AEO, and content marketing strategies designed to help brands get found in AI-generated answers, not just traditional search results.
You’ll learn how to:
- Optimize content for AI-driven search and answer engines
- Blend traditional SEO with AI search optimization
- Build entity authority across search, social, and AI platforms
- Drive traffic, leads, and trust as search behavior continues to evolve
If you’re trying to future-proof your content strategy and understand how AI is reshaping discovery, Found in AI gives you the frameworks, insights, and tactics to stay visible—wherever search happens next.
Can You Rank #1 in ChatGPT?
📬 Love the podcast? You’ll love the newsletter.
Get the weekly 3-2-1 on AI search + marketing: Subscribe
In this episode of Found in AI, Cassie is joined by Kristina Frunze, founder of WebviewSEO, an SEO and AI search optimization agency for B2B SaaS. Together, they unpack what AI visibility really looks like in practice—and why share of voice, intent alignment, and brand consistency matter far more than screenshots of a single AI answer.
Rather than chasing the illusion of ranking, this episode reframes GEO around measurable influence, bottom-of-funnel visibility, and the structural signals AI systems use to interpret and recommend brands.
In this episode, you’ll learn:
- Why “ranking” is the wrong mental model for LLM visibility
- How AI answers vary, even when prompts stay the same
- What AI Share of Voice actually measures (and why it matters)
- Why tracking 150 prompts is usually unnecessary—and what to track instead
- How bottom-of-funnel prompts drive more meaningful visibility than top-of-funnel queries
- Whether long-form content still works in AI search (and when shorter answers perform better)
- How to build objective, methodology-driven listicles that AI systems trust
- Why brand consistency across LinkedIn, directories, and owned assets impacts LLM understanding
- How messaging ambiguity weakens AI confidence signals
- What brands can do today to improve frequency of mention in AI-generated answers
If you’re trying to separate hype from strategy in the GEO conversation—and build real, defensible AI visibility—this episode will recalibrate your approach.
Let’s connect:
LinkedIn → Cassie Clark | Fractional Content Strategist
Website → https://cassieclarkmarketing.com
P.S. Is your brand losing its "Answer Authority"?
Most series A/B and enterprise brands are being "nudged" out of AI search results because of entity gaps and "stale" content. I am opening a limited number of specialized audit slots to help you reclaim your Share of Voice using the FSA Framework (Freshness, Structure, Authority).
Request your 7-Day AI Search Visibility Audit: https://cassieclarkmarketing.com/ai-search-visibility-audit/
A lot of conversation in the AI visibility space. I mean, for good reason. It's a new layer of discovery that we really ought to figure out sooner rather than later. But you've probably seen those big promises and, well, overly confident claims. But LLMs don't work like traditional search engines. They're probabilistic. And based on Rand Fishkin's research and what we've seen with our own tests here at Found in AI, the same prompt can produce different answers just minutes apart. My cat is deciding that now is time to practice for the Olympics. I'm so sorry for the background noise. Anyway, in today's episode, I'm joined by Kristina Frunze, founder of WebviewSEO, an SEO and AI search optimization agency for B2B SaaS, particularly construction tech. And we're unpacking the myth of "ranking" in LLMs. I put air quotes around "ranking in LLMs." You can't see that, but they're there. We're also talking about what AI Share of Voice actually means, why bottom-of-funnel prompts matter more than top-of-funnel visibility, and one of the most overlooked GEO strategies that you can do today. And that's just making sure your brand consistency is the same across the entire internet. So if you're trying to build real AI visibility instead of just chasing random screenshot metrics, this conversation will probably help get you there. Let's get into it. Hi, Kristina!

I'm Kristina Frunze. I am the founder of WebviewSEO, the SEO and AI search optimization agency for B2B SaaS, mostly in the construction technology space. That was a mouthful. But yeah, it's something that I find myself doing every day and we are growing slowly but steadily.

Yeah. And you've been on the podcast before. We've talked about the metrics. And one thing that you posted on LinkedIn a few days ago, I'm going to read your post. Do you say a GIF or do you call it a GIF? I say a GIF. It's totally, totally not related. But okay. So you have a GIF from The Office.
And it's an eye-rolling GIF. But your post says, me when seeing another post from the expert on how to rank number one in LLMs. Then you go on to say there's no such thing as ranking in LLMs. It's either you show up or you don't. It's always random and no one can guarantee that your brand will show up in LLMs. If someone does this, please run. Tell me, tell me the deeper thing about all this from where you're sitting.

It's just, you know, obviously, a big part of my life is crawling LinkedIn and looking at different posts. And what I'm seeing more and more is there are experts, like SEO experts, that are, you know, doing genuine research and trying their best to understand how LLMs work. But then there's also, unfortunately, that part of the internet where people take advantage of the hype and kind of mislead, misinterpret, you know, different things, and create that fear of missing out for the people that do not understand anything about it. What prompted me to do that post was actually one of the posts that I've seen, which said that exact thing: I'm going to show you how to rank number one in whatever, ChatGPT, things like that. And that just made me so angry inside, because there is no such thing. Like, coming back to the Rand Fishkin story where, you know, no matter how many times you type in the same thing, you're going to get a different result every single time. So it's just mind-boggling to me how much misinformation there is at the moment. And yeah, I guess it was just out of frustration. Plus, I mean, The Office is my favorite show, so I couldn't resist using that.

I do, I do understand. Like, even in the early days of experimenting with this, I had a post out, oh, I ranked in ChatGPT. At the time, there was no other way to say that. That's how we were talking about it, because it closely resembled SEO at the time. Now we know better. But the Rand Fishkin research, what was your take on all of that?
Yeah, I think even that has some limitations, but obviously he's done a tremendous job there with the team, you know, testing in different industries with different prompts. And still, there's so much room for error, I think. We are at the very, very beginning, and nobody knows how it works yet. And I've observed very much the same with my clients, because I use a few different AI tracking tools at the moment. And I tend to obsess about them sometimes. So I go in and I check pretty much, you know, on a daily basis how the client does for the target prompt that they want. And there is never very good consistency, you know. What the tool also does is track my client's presence for a certain prompt across the landscape of their competitors as well. So I'm seeing that their competitors are also not showing up very consistently in terms of the position. It's always different. So at the moment, I just don't think there is a way to actually manipulate that, and I'm grateful for that. Obviously, you can improve your chances of showing up there by doing certain things. But even those, I've heard, are getting a little bit harder by the day, which I actually read yesterday on, I'm not sure if you follow Lily Ray.

Yeah, I did see where she was talking about the best of X posts or whatever.

Exactly. Yeah. It's a bit scary. I mean, it kind of makes sense because, you know, everybody does this right now. And it is self-promotion, right? Every single company will put themselves first, no matter how in-depth their research is. So yeah, it leaves very little room for actual manipulation. But what it does is push people towards actually producing valuable content that answers the questions. So I think it's not the easiest thing, but it's probably the most valuable thing for the user.
So just for reference for listeners, today is February 5th when we're talking about this. So this might come out a couple of weeks later. But did you see Steve Toth? I think that's how you say his last name. His post this morning talking about the listicles. Did you read that post?

No, I did not.

So his insight is that the best of X listicle, whatever kind of comparison post that you're doing, does work if you are objective all the way around. So with a competitor, you're not just bashing the competitor to make yourself look better, but you give them the wins where they deserve it. So best for whatever use case. And then when it comes to yourself, so your own brand, you're also giving the cons for yourself too. Not just painting it out to be like, oh, I'm the best and here's why. Which makes a lot of sense, because if you're pushing out 200 listicles where you're the best of, and there's no comparison, it's going to read as spam. So for what he was saying, it was like, yes, Lily's research is correct, but it also works if you do it right, which I thought was interesting.

I'm glad, because I can't say I'm guilty of that. The way I approach listicles, honestly, one of the latest listicles that I created for my clients was like 17,000 words. And that was a giant. It's performing really well in LLMs right now. But what I've done is, at first, before creating the listicle, I created a methodology for the client. So it's almost like a separate page that says client software review methodology. And there I kind of break down which scores we look at, how we look into the different functionalities of their software. Maybe we review the reviews on different aggregators, site aggregators and things like that. So first I created that methodology. And then based on that methodology, I created this giant of an article.
And now it's actually even easier, because every single article that I put out links back to that methodology, saying, hey, this is actually why we think all these scores make sense. So I think it's definitely much harder, but it's definitely worth the effort, because I'm seeing, in just one month for that client, that single article became the number one cited piece of content on their website.

Wow. And that's a 17,000-word document.

Yeah.

Okay. Which brings me to another thing. Like, I'm seeing AEO experts, and I'm doing air quotes or not, because maybe that's the right term. I say that sometimes in my podcast for the LLMs, but still, no one's truly an expert yet. We're still learning. But I noticed that some people are saying that shorter posts, like a thousand words, perform better within AI search. So it's interesting that you've mentioned this one is 17,000 words and is performing well.

You know what, I think there is a time and place for both, because what I've also heard and have tried testing is shorter articles that are very focused on a specific answer. I watched a video from HubSpot on how they approach their AEO, GEO strategy. And one of the things that they mentioned is that they researched very much in depth their ICP and what questions they're asking. And then they created short articles answering those questions, and that performed really well. So one way of doing that is answering those questions within FAQs, within bigger articles that are guides. But then I think there is also a sense for experimentation with shorter articles that are just answering that question very much to the point and covering nothing else. Because I think that's really helpful for the user, right? There is no fluff, just the answer to your question. Straight to the point, I think, is best for the reader and for that AI engine.

So in Rand's research, you had some questions there at the end.
I kind of want to go back to it because we were just talking about it. We kind of veered off for a second, but that's okay. So the first question is, how many times do you need to run a prompt to have statistically sound answers about a brand's relative visibility? What are you seeing? Do you just run it the one time and then make a note of it and then run it a couple of days later? Or do you run it a couple of times when you're tracking AI Share of Voice or whatever metric you're looking at?

So at the moment, with the tools that I use, they run it once every day, I think. I don't think that's the most accurate way to look at it. But that being said, it gives a pretty okay idea, because it does it every single day. So you kind of can understand the overall picture in the month, if you want. But what I've also done is I've taken that very same prompt and run it back to back 10 times. Not a single time was it the same answer. So it's very interesting, the way ChatGPT and different LLMs work. And at the moment, I agree, I think the best way to track success is just Share of Voice. So it's not like you are looking at yourself in a vacuum, but seeing how you are showing up among your competitors as well. That helps to build a picture for the client.

Yeah. Now, thinking about the prompts too, I know you're also in Exit Five, and there was a post the other day that someone made of, we've identified like 100 to 150 prompts. Okay. The discussion on that, that was something else in the comments, but I know that you mentioned maybe just tracking like 10 to 15 of those prompts. What's your guidance on that? Because that was also what I was thinking. I'm like, oh, Kristina, I know she knows what she's talking about. Like, we're on the same page here.

Yeah. I mean, 150 is, I just can't imagine. It's like, at what point do you stop? You know, at what point do you say, okay, 150 is enough.
Like, I can't even imagine the topic, so I tend to do 15, you know, per topic, just because, well, A, it's pretty pricey to track all of them. That's the first concern, you know, how expensive that is. But also I feel like the more variations you have, it just doesn't give you very much leverage in terms of the data. When it comes to narrowing down the prompts, I'm thinking, since I come from the SEO perspective, right, it's almost like I want to base it on content silos. You know, for example, if it's a construction project management software, then you want, you know, construction bid management, construction submittal management, things like that. So you have different silos, and it's almost like you want to ask questions that are at the bottom of funnel. So that's actually another big point. I don't really track very top of funnel prompts, because I don't see the sense in those. I really only like tracking bottom of funnel prompts, because this is what actually gets the client those conversions. So, you know, it's something that people would ask when they are very close to making a decision. Maybe, you know, comparing the client against their competitors, or again, what's the best solution for X. And I kind of call it a day there, because otherwise I just find myself, you know, having a hard time finding prompts that will cover everything. So I'll just concentrate on the ones that I know will bring a client a lead.

So I think that's the good way to think about it. I think that's also kind of what Rand Fishkin's research boiled down to. I mean, he said something like, hold on, I just had it. Oh: users almost never craft similar prompts, even when they have the same intent. The variation of brand recommendations in AI answers around a space in the messy wilds of AI prompting is likely much higher than what a controlled experiment revealed here.
So the way I take that, and maybe this is just me not being able to read right today because, you know, brain problems, but whatever. When I read his research, it kind of sounded like it didn't really matter how it was prompted, just as long as the intent was the same. So if you can just boil it down to, again, those money prompts of best X or whatever, I think you'll be fine. Do you think that too?

I think at the moment, this is the best that companies could have, until we maybe get some more robust data on, you know, how this actually works, how LLMs are, you know, forming answers, where they're pulling from, and whether there will be some consistency in that. But I think there is definitely more value in bottom of funnel prompts versus top of funnel. Well, the reason to track bottom of funnel is also because it converts better. That's right. It's also giving you kind of an idea of how well your brand is represented in those bottom of funnel queries, and also what the sentiment is around it. Like, how is your brand presented? Is it actually showing up for the specifics that you want? Because, for example, I've noticed with the clients that some of them don't want to target some segment of the ICP. Like, for example, smaller contractors, they don't want to go after them. They want to go after enterprise contractors, things like that. So if I see that that's the case and that's how we're showing up in LLMs, that's perfect. If that's not the case, then we need to do something about that.

So last thing I want to talk about, going back to your original LinkedIn post that we started with. One of the comments said, what do you think about people promising to get you onto LLMs more frequently? There's a startup that charges thousands of dollars to ask a bunch of prompts, see if you come up, and do something to help you appear more often. Valid?
I feel like there's something missing there that he didn't finish. Your response was, now, getting mentioned more frequently in LLMs is a different story. So how would you suggest, like, number one tip for a brand listening to this? How do they increase their frequency right now? Like, something they can do today?

So what I found has the biggest impact on how your brand shows up in LLMs is how you position your brand across the whole web. So, for example, you know, if your website says one thing, but then on your LinkedIn you're saying a little bit different thing, and then on all the directories it's a little bit weird, and then your Google Business Profile does not even mention what you do, LLMs have a hard time understanding what you are about. So the first thing that I do when I come in and start working with a client, I analyze their AI visibility landscape, whether they are successfully communicating their offering in a very similar and non-ambiguous way across the web, across, you know, as many owned and rented assets as possible. So, you know, I make them fix their descriptions on Instagram, YouTube, LinkedIn, different directories, things like that. It all should say the same thing. And that helps a lot, because your website is just one of those, you know, multiple touch points that LLMs have. So I think that's the first thing that I do. And we've done it for one of the clients, because, you know, it always happens. You updated your LinkedIn profile like five years ago, back when you were doing one thing, and now you have completely different offerings. People don't think that matters, but it does these days.

Okay. Before we wrap up this episode, I want to leave you with something practical that you can do today for your GEO strategy. Open up your website homepage, LinkedIn company page, founder LinkedIn profiles, Google Business Profile, any major directory listing, anywhere you have a social media profile.
Now read your bio descriptions back to back. Are you describing your company the same way everywhere? I mean, the same ICP, the same problem, the same category, the same positioning. If your website says one thing, LinkedIn says something slightly different, and your directory listings are vague or outdated, those AI engines are going to struggle to confidently understand who you are and when to recommend you. As much as I hate the word clarity, clarity does beat cleverness in AI search. That's your first GEO move. We're still practicing for the Olympics over here, so sorry. That's your first GEO move: you need messaging consistency across every owned or rented asset, and your messaging needs to be the same, ideally with the same exact wording or phrasing everywhere. And if you want help identifying visibility gaps like that, like how AI engines perceive your brand, where your confidence signals are weak, or where you're missing bottom of funnel opportunities, I run AI search visibility audits designed specifically for B2B companies navigating this shift. You can find more information about those in the show notes. And if I don't ask, it's always a no. If you're enjoying Found in AI, please leave a review so others can find it too. It really does help more marketers and founders discover the show, and I appreciate your efforts more than you know. Thanks for listening, I will see you in the next one. Until then, stay visible.