The Best SEO Podcast: Defining the Future of Search with LLM Visibility™
With over 5 million downloads, The Best SEO Podcast has been the go-to show for digital marketers, business owners, and entrepreneurs wanting real-world strategies to grow online.
Now, host Matthew Bertram — creator of LLM Visibility™ and the LLM Visibility Stack™, and Lead Strategist at EWR Digital — takes the conversation beyond traditional SEO into the AI era of discoverability.
Each week, Matthew dives into the tactics, frameworks, and insights that matter most in a world where search engines, large language models, and answer engines are reshaping how people find, trust, and choose businesses. From SEO and AI-driven marketing to executive-level growth strategy, you’ll hear expert interviews, deep-dive discussions, and actionable strategies to help you stay ahead of the curve.
Whether you’re a C-suite leader, marketing professional, or founder building your brand, this podcast is your guide to understanding the evolution of SEO into LLM Visibility™ — because if you’re not visible to the models, you won’t be visible to the market.
How To Use AI Without Getting Deindexed With Jon Gillham
Matthew Bertram and Jon Gillham unpack how AI content, plagiarism risk, and Google’s crackdowns reshape SEO, then lay out guardrails that protect rankings while building real LLM visibility. The focus stays on practical governance, provenance checks, entity health, and adding value beyond words.
• Rebrand context and why AI integrity matters
• Study showing AI overviews citing AI content
• Risks of synthetic data and value of human signals
• School and workplace guardrails for AI use
• Google’s stance on helpful content vs scaled abuse
• Penalty patterns, core updates, and indexing lags
• Plagiarism trends, fair use thresholds, and QA checks
• LLM visibility strategy and entity consolidation
• Editorial workflows to detect copy‑paste AI
• Actionable playbook for responsible AI adoption
Guest Contact Information:
Website: originality.ai
LinkedIn: linkedin.com/in/jon-gillham
More from EWR and Matthew:
Leave us a review wherever you listen: Spotify, Apple Podcasts, or Amazon Music
Free SEO Consultation: www.ewrdigital.com/discovery-call
Find more episodes here:
Follow us on:
Facebook: @bestseopodcast
Instagram: @thebestseopodcast
TikTok: @bestseopodcast
LinkedIn: @bestseopodcast
Connect With Matthew Bertram:
Website: www.matthewbertram.com
Instagram: @matt_bertram_live
LinkedIn: @mattbertramlive
Powered by: ewrdigital.com
This is The Unknown Secrets of Internet Marketing, your insider guide to the strategies top marketers use to crush the competition. Ready to unlock your business's full potential? Let's get started.
SPEAKER_01:Howdy, welcome back to another fun-filled episode of the Unknown Secrets of Internet Marketing, which is the piece of the name I'm actually trying to drop. Okay, so Best SEO Podcast, The Best SEO Podcast: we have all those handles, plus Internet Marketing Secrets. We've decided to drop that, so I need to change the bumpers, but you can find us anywhere at bestseopodcast. This is the Unknown Secrets of Internet Marketing; we've been running for 12 years straight, and we talk about everything digital marketing and SEO, and, well, AI, because AI has taken over. I thought it would be good, as we continue to have these AI discussions, to hold up, pump the brakes, and ask: okay, plagiarism, AI content generation, how are we ranking, what's going on? I even remember a publicly traded client we had early on whose piece came back something like 92% flagged and was rejected. I was like, well, he wrote it. And the LLMs are trained on human writing, so a lot of what they produce is going to read as human-written. I knew this person wrote the article; he knew nothing about AI, and yet the verdict was that there's no way he wrote it. That started the education process, the AI governance process, to understand how we need to frame these things and how we need to look at these things. So I wanted to bring on Jon Gillham from Originality.ai. He's got an AI checker, a plagiarism checker, and a fact checker, because, well, as we discussed in the pre-interview, there's an issue with the integrity of the internet as a whole as LLMs reference LLM sources. You said you just completed a study on how many AI Overviews are referencing AI-generated content.
SPEAKER_03:Yeah, thanks, Matt. Thanks for having me. We did just do a study. We find it infinitely fascinating to understand how AI-generated content is proliferating across the internet, so we run studies on where that's happening and where it isn't. For this one we looked at hundreds of thousands of AI Overviews and the websites they cited, and then ran those websites through our AI detector. The detector is highly accurate but not perfect; across a large data set, though, it's very telling and can be relied on for understanding. And we saw around 15, at times 20, percent of Your Money or Your Life searches citing an AI-generated piece of content. So it definitely raises the question of the snake eating its own tail, as some people visualize it. If AI is rooting itself in AI, there's a whole world of problems that can come from that.
SPEAKER_01:Yeah, the degradation of data as AI feeds itself. I would love to go down the rabbit hole later on synthetic data and how that works; that's something we're starting to get into and test out some different tools on. But I remember Elon Musk bought Twitter, you know, for freedom or whatever, and I think it was a great move. But I believe this to be true: he bought it to train Grok. He was also tweeting about how many bots there were on Twitter, and he had to get all of that synthetic, AI-generated data out of there, because the model needs to be trained on real humans. That's also why Google did the deal with Reddit: real humans are providing the inputs. And I think that's probably why OpenAI made ChatGPT free for so many people; they need the engagement with real humans to train these models.
SPEAKER_03:Certainly, any time that LLMs have attempted a very serious effort at training on synthetic data, it has not gone well. So the majority of LLMs are being trained on human-created data. And it's one of those AI paradigm shifts: for the rest of humanity, all purely human text data sets have already been created. AI has infiltrated itself into so many things, even when you don't think it has. If you're accepting Grammarly edits, there's a little bit of AI getting added into that human text. So for the rest of humanity, any human text, any human data set, will have some amount of AI in it, compared to pre-2020.
SPEAKER_01:I've never really looked at it that way. That is really philosophical, actually. I mean, we saw it with Grammarly and some of these other tools, you know, Surfer, what is it, Surfer SEO, yeah. You can't get away from data that doesn't have autocorrect or something helping you write it. So there's some level of infiltration in all information going forward. That is wild to think about. In certain areas it has certainly accelerated more. And to set the table even more: school systems are dealing with this nonstop. Hey, did you write this paper or not? That goes back to the question from when I was in school: I'm always going to have a calculator, so why do I need to learn long division? That was my argument for a long time. I think school is changing; you're always going to have an LLM as a copilot, or a sidekick, to whatever it is you're doing. You now have the most intelligent, PhD-level help in every category at your fingertips at all times. I feel like schools should not fight that but embrace it and teach people how to think in systems. But I don't know the rules around the plagiarism and the fact-checking. I mentioned to you that I've seen, at a couple of conferences, someone who created a fictitious city and got it cited. And I've seen it with false publications on sporting scores or something like that: it gets sucked up and ingested so quickly, then it gets propagated, then there are reference points, and now you're proliferating fake data. In the election cycles it's going to get pretty bad; it's already started to get there. So, to set the table more before we zoom in on how to better rank a site using, maybe, AI-generated content: what are the rules around this? What are the guardrails?
SPEAKER_03:Yeah, so I can speak to academia and to marketing. In the world of academia, as you'd expect, it's slower to adapt and more resistant. Some people are being super progressive, saying LLMs are allowed, use them, exactly like your calculator example. Others say this isn't how brains are formed: while brains are still soft, students need to go through the lifting of the weights of writing and thinking through that process to become capable of using these tools to their full potential. You don't throw somebody who has never been a race car driver into a 300-horsepower Charger and say good luck; that's the other analogy. I've got young kids, around 12 and 10, and they're the lab rats for this, like we were for the internet. So what does that mean for education? That's a tricky question. I think if you're studying at a higher-ed institution and an LLM can get a better mark than you can, you should maybe rethink whether the thing you are studying is producing a lot of value in the world. I'd say that's the big question.
SPEAKER_01:That's a big question for everybody. And you made me think of that MIT study that came out showing that people are relying on LLMs for thinking, not just for leverage, and their brains are shrinking, or some version of that. So with any really sharp blade, or whatever analogy you're using, there are two sides to it, and you have to be cognizant. I think that's a great, well-balanced point there, Jon.
SPEAKER_03:Yeah, and then on the marketing side. If you're in an organization functioning as a marketer, your job is to produce content that gets you traffic, customers, and users, whatever your objectives are. The most important piece that I think is often being missed right now is alignment around the proper usage of AI: where it is allowed to be used, where it is not allowed to be used, and the controls put in place to manage that. We're seeing an extreme example of this: interns coming in with no guidance around AI usage, spinning up an API, spamming the site with AI-generated content, and sites getting absolutely crushed by Google. That's the extreme case. But the risk owner, the business owner who is effectively accepting this risk, is doing so with no knowledge or awareness of the risk they're accepting. So that step of aligning on where AI can and can't be used, whether that lives in the editorial guidelines or in whatever AI usage policy gets created, is the first step you were alluding to, and it often gets missed. There are significant consequences when people are just turned loose.
SPEAKER_01:Yeah, I think executive teams and owners need to understand this technology to understand how people are using it. I often see a bifurcation with legacy businesses: even with components of digital marketing, they don't completely understand data governance. And with the LLMs, if you're connecting one up internally, maybe to a Google Drive, it can pull everything. I've heard horror stories of HR materials getting accessed. And if you have an intern who sort of understands how to use this stuff, and you're giving them the race car with no rules and they're driving it all over the place, it's going to break some stuff; it's going to destroy some stuff. So I think data governance and AI governance are big topics, as well as ethical AI. I think there are some real societal issues with this technology seeping into everything while everybody has access to it. There are even some real personal horror stories about guardrails that need to be put on AIs. What are the big frameworks, the big bumpers, that you think everybody creating content online needs to know from an education standpoint? Let's start with the first-principles basics here.
SPEAKER_03:Yeah, yeah. So I think it's important to see that not all AI content is spam, but essentially all spam in 2025 is AI generated.
SPEAKER_01:And that's the point where Google came out, and I felt like they were waving the white flag: that's when they added the experience component to E-A-T (expertise, authoritativeness, trust) to say, prove it, because you can't just claim to be the expert. But they said if it's useful content, AI content is okay. So if you're working on the content, linking it, citing references, adding images, that content is okay. Sorry, I kind of cut you off.
SPEAKER_03:Yeah, no. So back to that first principle, the basics: not all AI content is necessarily spam, but if you just leave it to its own devices, with somebody operating without controls, it'll turn into spam pretty quickly, because it optimizes for the wrong metric. That's one important guideline. Another important guardrail, and I think you alluded to it: if you're competing on just words right now, you're facing a challenge. If you're in the business of trying to put out 750 words on topic X, you're now competing with LLMs that can produce near top-level-intelligence words at near zero dollars, basically the cost of electricity. You're going to lose that battle over time. So you need to think about adding value beyond just the words. And the other thing about bumpers: if AI left to its own devices, or somebody going wild with AI, ends up producing spam, then Google has an existential threat to its business. If their search results are overrun by nothing but AI spam, why would anybody go to Google? They would just go to the LLM that knows them best, which might end up being Google, but it's not going to be ten blue links and a click away. So there's the existential-threat factor: if Google search results are overrun by AI, then Google search results as we know them will die, which maybe they already are. Given all of that, I'd say the guardrail to understand is that Google doesn't hate AI, but it hates AI overrunning the search results with no extra value being added.
SPEAKER_01:Okay, so on the first thing you said, we talk about a concept called human in the loop. You need to have somebody in there looking at it, checking it, making sure it's optimizing for the right things. On the second piece, I absolutely think Google is being overrun, and they need to do a much better job with AI Mode. I think it's pretty bad as of this moment; I used it and it's very clunky. Gemini is okay, but ChatGPT is crushing it, because of the positive reinforcement or however they've weighted it. And maybe you can speak to this: with these Google updates, is there a threshold where, if a site hits 20% or 25% AI-generated content, it gets flagged or deprioritized in the search rankings? Have you seen anything, or do you know any data related to that?
SPEAKER_03:Yeah, so we have some data. We're tracking the share of search results that have AI content in them, and we're seeing that rise at a much slower rate than on most other online platforms. Medium is at something like 50% of content, at times, suspected of being AI generated. On LinkedIn, over 50% of long-form posts were likely AI generated in our last study. Google is staying at around 20%. There have been times when that has declined, which lines up with Google taking significant action. In March 2024, Google did a manual deindexation, I called it a psyops update, on a ton of sites, and the vast majority of those sites had the majority of their content AI generated. So it's hard to say that at this percentage you're at risk and at that percentage you're safe. What we are seeing is what Google calls scaled content abuse: when a large number of pages are getting published, that is something that is easy for Google to identify. It looks like at that point a second-level check is done on the site, and then those sites are getting nuked. So if you're scaling content rapidly, you're very much at risk of having a penalty in some capacity applied to your site. If you're scaling AI content, you're definitely at risk of both getting flagged and getting punished. As for how sites have used it successfully, my sense is that if you stay off the radar of scaled content abuse and your helpfulness stays extremely high, and that shows up in the user metrics they rely on, then you're probably in the maybe-okay camp.
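As a rough illustration of the site-level monitoring Jon describes, here is a minimal Python sketch that runs each published page through an AI detector and reports what share of the site gets flagged. The detect_ai helper is a hypothetical stand-in, not Originality.ai's API, and its placeholder heuristic exists only so the example runs end to end.

```python
from dataclasses import dataclass


@dataclass
class DetectionResult:
    label: str         # "ai" or "human": a binary classification
    confidence: float  # confidence in that label, 0 to 1


def detect_ai(text: str) -> DetectionResult:
    """Hypothetical detector call. Swap in whatever detector or API you actually use."""
    # Toy heuristic so the sketch runs end to end; a real detector is a trained model.
    ai_like = sum(text.lower().count(w) for w in ("delve", "furthermore", "in conclusion"))
    return DetectionResult("ai" if ai_like >= 2 else "human", 0.6)


def ai_share(pages: dict[str, str], min_confidence: float = 0.5) -> float:
    """Fraction of pages classified as AI with at least `min_confidence`."""
    flagged = [
        url for url, text in pages.items()
        if (r := detect_ai(text)).label == "ai" and r.confidence >= min_confidence
    ]
    return len(flagged) / max(len(pages), 1)


if __name__ == "__main__":
    site = {
        "/blog/post-1": "A hands-on review with original photos and our own test data.",
        "/blog/post-2": "Furthermore, in conclusion, we delve into the topic at length.",
    }
    # A sudden jump in this number after an editorial change is the pattern Jon warns about.
    print(f"AI-flagged share of site: {ai_share(site):.0%}")
```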
SPEAKER_01:Yeah, and algorithms are deciding whether you fall into these categories or not, so there are definitely weighted thresholds on all this stuff. How is plagiarism being viewed in your world today? And also hallucinations by these LLMs, because they're just predicting what is most likely the next word. I've found cases where one says, oh, link to this content, and that page isn't even built. Maybe we need to build that page; it could be a good page.
SPEAKER_02:I you know, um as an interesting, I haven't I haven't heard thought of that insight, but that's a great insight. We believe the LOM wants this thing to exist so that it can cite it.
SPEAKER_01:Yeah, yeah. Well, so we're working around a term. There are a lot of these different terms: ChatGPT SEO, GEO, AEO. I don't think the industry has decided on one. We've really gone with LLM visibility and said we're building around LLM visibility; that's ultimately what this is and what we're trying to do. We built a custom framework and a strategy for how to make that happen, and we're even working with some partners on tools and indexes and things. There's a lot of noise, but back to your point, and I do still want to talk about plagiarism: Google is dying. This is how I see it: Google is trying to take on Amazon with buy-right-now; we have the search traffic, buy right now. And ChatGPT is taking over Google from a search standpoint at scale; they're just winning at scale. So I feel like the business models are shifting. Google even announced the big launch with YouTube, and I think that's one of the battles: AI-generated people are coming and, actually, are totally here, but the vast majority of people on YouTube are actual people, and that's really rich content that can be used in a lot of different ways. So what I'm seeing from Google and ads is everything being pushed to YouTube and to buy-now. At a high level, that's what I'm seeing.
SPEAKER_03:Yeah, I mean, as for what's going to happen with Google, rub the crystal ball, we'll see. But in basic metric terms, we're going to see reduced clicks and increased conversion from the clicks we do get. The funnel will be collapsed. I think we're seeing that already: people will be in their LLM of choice to do the research and gain the knowledge on whatever they want to do, and then they move to the web for the transaction. Maybe that'll eventually get to the one-click situation of, okay, I planned my trip, go book it all.
SPEAKER_01:Yeah, the agent economy. That's what you're talking about. Yeah, I agree.
SPEAKER_03:And so I think we're going to step our way there. What the world will look like over the next couple of years is hard to say; it's hard to make predictions that age well in AI. But it will be people continuing to live in the LLMs for that knowledge gaining. As a result, most websites will see a reduction in clicks, especially for superficial-layer information, and an increase in the quality of the clicks they get from a conversions standpoint. And yes, LLM visibility, seeding LLMs with the knowledge that drives them toward your solution as the optimal solution for that problem, is, I think, definitely the name of the game.
SPEAKER_01:Yeah, and I know we're an SEO podcast, but man, alongside the agent economy I also see crypto, the money of the internet, getting involved so agents can transact. I think there's just so much transformation happening, and to your point, predictions don't age well; who knows where things are going to go. And yeah, a lot of it is about seeding the LLMs. I talk about right now being the opportunity, while everybody's still trying to figure out what's happening. Google, I feel, used AI Overviews as a stopgap measure to keep people from moving to the LLMs so quickly. But as people use the LLMs, a study did just come out, I can't remember who put it out, I'll have to have them on the podcast, but it basically showed, and this is early, that answers from the LLMs were just as good, maybe slightly better, but not as much better than Google itself as I thought they would be. I thought that was interesting. I do, however, think the customer journey is about brand management. And if people are using last-click attribution, who knows, right? Things come in on an ad, but they've seen it on Facebook, they saw it on Reddit, they've maybe used multiple LLMs to make decisions. It's hard to say what's happening and how people are making these decisions. It's really about how your brand is showing up and how visible you are in these LLMs. But you know, Jon, I'm dealing with entity issues. You can see my name here is Matthew and Matt; I'm showing up in the knowledge graph as two separate people, because my titles have changed, my name has changed, we've changed the name of the company, we've changed the name of the podcast, and there are also other people out there with my name. So there's some ambiguity around who is me, and it's becoming a greater and greater issue. I'm trying to disassociate from the people who aren't me while consolidating my own aliases, because my actual name is Matthew Bertram, and Matt Bertram and Matt Bertram Live are just aliases, or nicknames, or whatever you want to call them. Understanding how all these things are connected goes back to that concept of seeding the LLMs, but also speaking to the LLMs in a way that they understand what you're trying to tell them, because they're doing a really great job of sorting through all this information to give you the best possible answer to what you're looking for. So this is a really important piece of the future, I think.
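As a rough illustration of the alias consolidation Matthew is describing, here is a minimal Python sketch that emits schema.org Person markup (JSON-LD) tying the aliases back to one canonical entity. The @id and URLs below are placeholders assembled from the show notes; this is one common approach to entity consolidation, not Matthew's actual markup or any part of the LLM Visibility Stack.

```python
import json

# One stable identifier plus alternateName and sameAs links is a common way to tell
# crawlers and models that several surface names refer to the same person.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://www.matthewbertram.com/#person",        # placeholder canonical ID
    "name": "Matthew Bertram",
    "alternateName": ["Matt Bertram", "Matt Bertram Live"],  # aliases map to the same entity
    "jobTitle": "Lead Strategist",
    "worksFor": {"@type": "Organization", "name": "EWR Digital", "url": "https://www.ewrdigital.com"},
    "sameAs": [  # corroborating profiles from the show notes
        "https://www.linkedin.com/in/mattbertramlive",
        "https://www.instagram.com/matt_bertram_live",
    ],
}

# Drop the output into the site's <head> as a JSON-LD script block.
print('<script type="application/ld+json">')
print(json.dumps(person, indent=2))
print("</script>")
```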
SPEAKER_03:Yeah, no, for sure. We have a content optimization tool within Originality, and we're adding a feature very shortly that is all about LLM visibility optimization, tuning content to what studies have shown so far: stats placed high in the piece, quotes from experts, the features that LLMs want to ingest.
SPEAKER_01:Yeah. All right, let's circle back to plagiarism. You can start very big picture and then tighten it all the way down to what the marketers want to hear.
SPEAKER_03:Yeah. So plagiarism has been a monster business, especially in academia. Everyone's been through it since the birth of the internet; there have been plagiarism checkers all along. What we have seen, both in that world and in the digital marketing world, is that the amount of plagiarism happening has been on a massive decline. Why would somebody plagiarize when they can just copy and paste something out of ChatGPT? Whether content has been plagiarized and then rewritten by AI, there's certainly some risk of all of those things happening. But the Google Trends picture is really fascinating; we actually should have mentioned this earlier because it's kind of funny. We launched Originality.ai three days before ChatGPT launched, and when we launched there was zero search volume for "AI detector." Now it's a four to eight million searches-per-month keyword, depending on the time of year, whereas "plagiarism checker" is sub one million. If you look at the Google Trends comparison between the two, it's quite fascinating: plagiarism checker, with its seasonality, has been there forever, growing year over year; ChatGPT comes out, AI detector takes two years and then skyrockets above plagiarism checker, and plagiarism checker is starting to decline. I still think it's worthwhile checking, because there is legal risk associated with direct plagiarism as a marketer, so it makes sense to include it in your QA/QC process, but its prevalence is declining significantly.
SPEAKER_01:Let's talk about the standards around that. What are the major laws or guideposts that people need to follow if they're trying to build a program and incorporate AI? Are there references they should look at? Are there laws they need to be aware of? What would the best kind of governance policy around this look like?
SPEAKER_03:Yeah, so it depends on the use case. Fair use of other people's content is certainly allowed, depending on that use case and on proper citation of the content. So what are the best practices for making sure you don't get your business in trouble with plagiarism? Run a plagiarism check, identify the sources that get flagged, and then do a manual review to see whether the text is truly copied or just your own words; the same five words can be used by multiple people multiple times. For most marketers the rules revolve around fair use. Five, ten, or fifteen percent thresholds are not uncommon for companies to require, along with ensuring that anything matched is properly cited. You are certainly on solid footing if you keep a piece of work at five percent or less matched text, and any time copying or other sources are identified, you review whether they should be cited. That gives you a pretty solid footing that you are not going to get yourself into any issues.
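As a rough illustration of the QA/QC step Jon outlines, here is a minimal Python sketch that keeps the matched share of a draft under a fair-use budget (the conservative 5% figure he mentions) and queues every matched source for manual review and citation checks. The data structures and numbers are illustrative, not Originality.ai's report format.

```python
from dataclasses import dataclass


@dataclass
class PlagiarismMatch:
    source_url: str
    matched_words: int
    cited: bool  # does the draft already credit this source?


def review_draft(total_words: int, matches: list[PlagiarismMatch],
                 threshold: float = 0.05) -> dict:
    """Summarize a plagiarism report against a fair-use word budget."""
    matched_words = sum(m.matched_words for m in matches)
    matched_share = matched_words / max(total_words, 1)
    return {
        "matched_share": round(matched_share, 3),
        "within_fair_use_budget": matched_share <= threshold,
        # Every match still gets eyeballed: copied text vs. coincidental phrasing.
        "needs_manual_review": [m.source_url for m in matches],
        "missing_citations": [m.source_url for m in matches if not m.cited],
    }


if __name__ == "__main__":
    report = review_draft(
        total_words=1500,
        matches=[
            PlagiarismMatch("https://example.com/study", matched_words=45, cited=True),
            PlagiarismMatch("https://example.com/blog", matched_words=60, cited=False),
        ],
    )
    print(report)  # 7% matched, so over a 5% budget, and one source is still uncited
```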
SPEAKER_01:No, I like that. It made me think about images. There's a whole business out there of people saying you've used this image, it's not fair use, and trying to get money out of you. And now, just like with content, you have AI-generated images, which are completely new images. I was listening to one of the Google podcasts, the Google Webmaster podcast, and it was saying that's totally okay; it's completely new, so it's totally okay. And then there's the proliferation of content they're having to index, which is why the de-indexing is starting to happen: the standard across the board has moved up. I would love to hear from you on the horror-story side of things as well as the awesome side, some case studies on how people have used your tools and some of the success stories around that.
SPEAKER_03:Yeah, sounds good. So I'd say a horror story, and we unfortunately hear it too often: someone, typically a website owner, will come to us and say, hey, our site just got de-indexed and we weren't using AI. And it's like, okay, maybe. We have a website scanning feature; we feel terrible this happened to your business, here's a bunch of credits, go run a website scan and see what the tool says. You can see a graph of the website's content, and inevitably there's a point in time where something changed in the editorial process, the content went from, call it, one post a week to eight posts a week, and they were all AI generated, and then the site tanked. Those ones suck, because there's real pain associated with them: businesses laying off employees, livelihoods lost, because a website got tanked because somebody on that team was taking on risks that the risk owner, the business owner, didn't understand. That's been a pretty common case we've seen, and it's definitely a horror story when we see it.
SPEAKER_01:Have you seen a threshold? I feel like Google is taking a lot longer to index stuff; I don't know the sandbox terminology as well as the delisting or de-indexing of content. Have you seen a point where they're indexing it but not really ranking it, a lag time where maybe they're running it through some of these systems? Or is there a grandfather period? I feel like older content does better; there's some equity accrual, or link equity accrual, that potentially happens over time, from what I've read in the patents. There are a lot of tools now that let you pull out of the patents what's happening. But I wonder if there's a way to know where that cutoff or threshold is, because new content is taking a lot longer to get indexed. And when we're working with clients, they usually come to us when they're in a bad situation, and to turn it around we want it to happen quickly, but it's happening a lot slower than we would like to see. Google doesn't turn on a dime anymore.
SPEAKER_03:Yeah, I mean, my sense is that there's less movement between core updates than there used to be with Google. Movements happen when those core updates happen; in between them there's not as much movement in the results as there used to be. That's certainly what we sense, and I believe it's what the data supports. So I think that lends itself to the sense that newly published content takes longer to drive results, because it's waiting for that next core update, especially if a site is on a downward trajectory. It takes until the next core update to see that, okay, we've addressed the E-E-A-T issues or whatever it might be.
SPEAKER_01:Yeah, I think there are a lot of conversations at Google around trust, levels of trust, and thresholds. And "it depends" is a horrible answer, I know, but it's not always the answer. What are some really positive use cases you've seen with Originality.ai, where it helped save something, or caught something before a site got penalized by Google?
SPEAKER_03:Yeah, so the common use case for Originality is that somebody functions as an editor within a marketing company or a website writing team: their team submits content to them, and they run a QA/QC process on that content. One thing that commonly happens is a writer swears up and down that they didn't use any AI, the content runs through the tool, and the tool says it was AI. The tools provide a probability, not an absolute judgment. We have a free Chrome extension for these situations, where the writer says I didn't use AI and the tool says you likely did. The editor takes that Google document to the free Chrome extension, which visualizes the entire writing process, and at that point the editor can see that the writer just copied and pasted the entire text into the document, made a couple of formatting changes, and got a 100% AI score. The editor shows that to the writer, and the writer says, yep, you're right, I apologize, I lied, I did use AI.
SPEAKER_01:So go back to that one piece. If someone's cutting and pasting something into a document, are there markers or tokens? What about the little dashes that come out, because it's easier for the AI to put together a thought without putting together a full sentence? I felt like that was a watermark for a long time. What are the fingerprints, the telltale signs, that something is AI generated?
SPEAKER_03:It's an unsettling answer that I'm going to give, and it's similar to asking John Mueller or anyone at Google why this ranks above that: you'll get a bunch of general platitudes, but the reality is they don't know. It's an AI; it's a black box. They can understand its behavior on a large scale, and similarly we can understand our detector's behavior on a large scale, but for any individual piece of content, explaining why it was identified as AI or not is very challenging, because the AI detector itself is a black box. We have a feature coming out called Deep Scan that looks at better understanding how the text could be adjusted if it was incorrectly identified as AI, so writers can sleep at night knowing their work is going to pass an AI detector. So it's an unsettling answer, but there are some things that contribute. If it is AI content, it is more likely to get identified as AI content. Highly formulaic, very structured writing can lend itself to looking more like AI, and oddly formatted text will reduce the AI detector's accuracy and therefore result in more cases being identified as AI.
SPEAKER_01:So specifically with your tool, you were saying it was cut and paste. Is the tool looking at the number of drafts, how long the document is, or how long the document took to generate?
SPEAKER_03:Yeah, so the best practice, if an editorial team is using the Originality tooling, is that the team tells writers: you must use a Google document from start to finish. That Google document gets a score, the probability of it being AI, and it also produces a report that shows the length of time, the number of writing sessions, and characters over time. If you see a thousand-word document that was worked on for two hours with a bunch of edits and deletes, only one little section is identified as potentially AI, and you've worked with that writer for a long period of time, you can be confident. The detector has said this is likely human, with some uncertainty; you look at the Google document in that Chrome extension and can see the writer putting two hours of work into that piece of content. You can be fairly confident it is human generated, a human is in the loop, and it isn't just a copy-paste out of ChatGPT.
SPEAKER_01:Got it, perfect. Okay, my brain's full. Is there anything we haven't covered that you think is really important for us to discuss, based on our conversation so far?
SPEAKER_03:I think there are a couple of things I always love to make sure are understood. AI detectors are something that didn't exist, from a search-volume standpoint, a couple of years ago, and there have been a lot of misunderstandings around them. They're highly accurate but not perfect, so on any individual case they can have false positives and false negatives, where they incorrectly identify something as AI or human; that's important to understand. The second piece is that Originality, like most tools, provides a classification of AI versus human and then a probability, a confidence score. The detector is saying, I think this was AI, or I think this was human, and here's how confident I am in that prediction. That often gets misunderstood as "this content is 70% AI and 30% human," and that's not what it means. It's a binary classification, AI or human, and then a confidence score in that prediction.
SPEAKER_01:Got it. Okay, very cool.
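As a rough illustration of the interpretation point Jon just made, here is a minimal Python sketch that turns a detector result into the sentence an editor should actually read: a binary label plus a confidence in that label, not a percentage blend of the text. The field names are illustrative, not the actual Originality.ai response format.

```python
def describe(label: str, confidence: float) -> str:
    """Turn a (label, confidence) pair into the sentence an editor should read."""
    if label not in {"ai", "human"}:
        raise ValueError("label must be 'ai' or 'human'")
    other = "human" if label == "ai" else "ai"
    # The common misreading: treating the confidence as a blend of AI and human text.
    wrong = f"{confidence:.0%} {label.upper()} / {1 - confidence:.0%} {other.upper()}"
    # The intended reading: one label, plus how sure the detector is about that label.
    right = f"classified as {label.upper()} with {confidence:.0%} confidence in that call"
    return f"Read this as: {right} (not as a {wrong} blend of the text)."


if __name__ == "__main__":
    print(describe("ai", 0.70))
    print(describe("human", 0.55))  # low confidence: lean on the writing-session report too
```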
SPEAKER_01:So what are the biggest takeaways or biggest tips you can give to marketers today who are jumping headfirst into AI, because you need to, but want to do it in a responsible way?
SPEAKER_03:Yeah. I think the first is ensuring that the risk owner of the company you're doing marketing for understands and agrees on where AI is being used and the risks associated with that. Second is ensuring that in the content you produce you're adding value beyond just words. If you're competing on words, you're competing against effectively infinite free words, and that's hard. So find ways to add value beyond words: tools, graphs, primary data. And third, I've been doing SEO long enough to know it's always very tempting to look for the shortcut, the button you click that makes this whole process easy. That has never existed, and it doesn't exist now.
SPEAKER_01:I love that. Well, how do people get in touch with you and find out more about your work and your studies? They can of course go to the website, but I'll let you share your handles and such.
SPEAKER_03:Yeah, so we publish studies constantly, with a heavy focus on how AI is impacting the internet and how many sites are using llms.txt; we have a study running that tracks the number of websites using it, which is always interesting to see. You can see all of that at originality.ai and sign up for our newsletter; we keep sharing. People can get in touch with me at jon@originality.ai or find me on LinkedIn.
SPEAKER_01:Awesome. Well, Jon Gillham, everybody. Thank you, Jon, so much for coming on. If you got value out of this podcast, please go to the platform you're listening on and leave a quick review; it would be super helpful. You know, if it has to be AI generated, that's fine, but we really need some reviews from real people. And if there are things you would like to hear about, topics, or feedback, please leave that as well; we want to engage with you. This is an exciting industry that is growing. I think it's transforming from just SEO as a vertical; 50% of traffic has left the website and gone everywhere else, and LLMs are coming into it as well. So share this with somebody you think would find it valuable, like it, tag us, share, like, follow; we really appreciate it. If you want to grow your business with the largest, most powerful tool on the internet, which I guess is now bigger than the internet, maybe soon to be the LLMs, reach out to EWR for more revenue in your business. Follow me on LinkedIn; I'm trying to post more. We are launching our LLM Visibility certification very soon. Jon, thank you so much for coming on. Until next time, everybody, my name is Matt Bertram. Bye-bye for now.