Search as a Channel
Explores how discoverability powers modern marketing, from SEO and paid media to social discovery and product growth.
The Agentic Web Is Coming for Your Marketing Stack
If your brand is ready for AI answers but not AI actions, you're already behind. Most marketing leaders are still thinking about AI as a smarter search engine. But the real shift is bigger: AI isn't just answering questions anymore. It's learning to take action on behalf of customers. Booking, buying, comparing, recommending. And the brands that win won't be the ones with the most content. They'll be the ones AI can actually do business with.
In this episode, we break down what the "agentic web" means for agency owners, CMOs, and marketing leaders, no engineering degree required. We cover why being findable by AI is no longer enough, what it takes to become actionable inside AI-driven customer journeys, and the practical steps your team should take in the next 90 days. If AI agents became a real acquisition channel tomorrow, would your business be ready? That's the conversation we're having.
I want you to imagine something for a second. Um, especially if you are a digital marketing agency owner or maybe, you know, someone making the really big decisions for a brand's digital presence. Right. Imagine a world where your team spends, I don't know, months meticulously crafting this visually stunning website. Like you've gone through endless revisions to get the perfect color palette.
SPEAKER_01Oh, yeah. The endless client feedback loops.
SPEAKER_00Exactly. The typography is completely bespoke, the user journey flows beautifully down the page. But here is the catch. That beautiful website is completely 100% ignored.
SPEAKER_01Wow. Yeah.
SPEAKER_00Because the primary entity that is visiting your client's business and, you know, evaluating their products, deciding whether to recommend them to a multi-million dollar buyer, it doesn't have eyes.
SPEAKER_01It is a completely blind evaluation. I mean, that system is not looking at your hero image or uh your CSS animations. It is judging the business purely on how well machines can read, understand, and most importantly, actually trust the back-end data.
SPEAKER_00And that is a terrifying thought. Like if your entire marketing budget or your whole agency service offering is currently tied up in just visual design and traditional search engine optimization, you're in trouble.
SPEAKER_01Yeah, you're playing a game that's ending.
SPEAKER_00Exactly. We are talking about a massive structural evolution today on this deep dive. We're moving away from the era of SEO, where the singular goal was just, you know, getting visibility on a search results page.
SPEAKER_01Those blue links, yeah.
SPEAKER_00Right. And we are moving to what is now being called the agentic web. And this is a web where AI doesn't just answer questions for users, it actually takes actions on their behalf.
SPEAKER_01Right. It books the meeting, it negotiates the software tier, it schedules the service.
SPEAKER_00Which means if your client's infrastructure isn't built for that specific type of interaction, they won't just rank lower. They simply won't be part of the transaction at all.
SPEAKER_01They'll just vanish, completely disappear from the customer journey.
SPEAKER_00Okay, let's unpack this. Because we have some incredible sources guiding our deep dive today. We're pulling insights from Dwayne Forrester on brand architecture for AI and Sloboda and Manic's breakdown of these new agentic web standards.
SPEAKER_01Really great foundational stuff.
SPEAKER_00It is. And our mission today is really to explain why machine trust is the absolute new competitive advantage. We want to give you, the agency owner, a roadmap to make sure your clients are actionable, not just visible.
SPEAKER_01And that distinction is huge. Actionable versus visible.
SPEAKER_00So right now, the entire digital marketing world seems to be in an absolute frenzy over one specific, seemingly simple thing to solve this problem. A file called llms.txt.
SPEAKER_01Oh yeah, it's everywhere. Everyone is talking about it.
SPEAKER_00Literally everywhere. But before we get into why our sources argue this might actually be a trap, what actually is it? Like if I'm an agency owner whose eyes glaze over at file extensions, what does an llms.txt file actually look like?
SPEAKER_01Well, at its core, it's just a very basic, completely flat text document that sits in the root directory of a website. It's written in Markdown, which is uh just a super simple way to format text without any complex code. So think of it as a bare bones table of contents.
SPEAKER_00Just text, no styling.
SPEAKER_01Exactly. The whole idea is to point an AI system directly to a brand's most important information, completely stripping away the navigation bars, the pop-ups, the design elements, all of that.
SPEAKER_00Just giving it the raw data.
SPEAKER_01Right. Offering clean, low-noise text for large language models to read easily.
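To make that concrete, here is what a minimal llms.txt might look like, following the proposed format of an H1 title, a blockquote summary, and H2 sections of links. The brand, URLs, and descriptions are invented for illustration:

```markdown
# Acme Analytics

> Acme Analytics provides self-serve product analytics for B2B SaaS teams.

## Products
- [Pricing](https://example.com/pricing): Current plan tiers and limits
- [Feature overview](https://example.com/features): Core dashboard capabilities

## Docs
- [API reference](https://example.com/docs/api): REST endpoints for event ingestion
```

Just plain Markdown, no styling, no navigation: a flat table of contents an AI can ingest in one pass.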
SPEAKER_00Which honestly sounds great on the surface. Like I kind of like to think of llms.txt like a restaurant menu taped to the front window.
SPEAKER_01Oh, that's a good way to look at it.
SPEAKER_00You know, it's a sign on the door. It's incredibly convenient for a passerby who is just um looking for a quick list of options. But here is the problem our research is pointing out. Focusing all your agency's energy on this file is already an outdated strategy. It's a false finish line. But wait, isn't llms.txt what everyone is scrambling to implement right now? Like, are we saying they're focused on the wrong thing entirely?
SPEAKER_01What's fascinating here is that the instinct behind the trend is entirely correct. AI absolutely needs clean access to your data, but providing a flat text file is only step one of a much, much longer journey.
SPEAKER_00Right.
SPEAKER_01Because the structural problem with a flat document is that it has absolutely no relationship model.
SPEAKER_00Okay, hold on. What do you mean by a relationship model?
SPEAKER_01I mean it cannot express how different pieces of information interact with each other. Like it can tell an AI system, here's a list of features our software has.
SPEAKER_00Okay.
SPEAKER_01But it cannot explicitly express the historical context. So for example, it can't say that feature X was actually deprecated last month and completely replaced by feature Y.
SPEAKER_00Oh wow. Okay.
SPEAKER_01Right. It can't map out that product A belongs to a specific enterprise category or that a certain engineer is the authoritative source for a technical claim. It's literally just a static list with no underlying graph connecting the concepts.
SPEAKER_00Aaron Powell So going back to that restaurant menu taped to the window, it's fine if the AI just wants to know, like, if you serve pasta. But if an AI assistant is trying to actually book a table for 8 p.m., verify live allergy information against the kitchen's current ingredient list, and then, you know, securely pay a deposit for a large party.
SPEAKER_01Right. That piece of paper taped to the glass is totally useless. It can't do anything.
SPEAKER_00Exactly. It's not actionable.
SPEAKER_01That is a perfect way to look at it. And the stakes get even higher when you look at the enterprise burden.
SPEAKER_00What do you mean?
SPEAKER_01Well, imagine an agency managing complex product sets for a massive global client. Every single time that client updates their pricing tiers or launches a new case study or even just tweaks a feature name, someone has to remember to manually go in and update that static text file.
SPEAKER_00Which, knowing how messy client operations can be, will absolutely be forgotten within a month.
SPEAKER_01Inevitably. I mean, it becomes a massive operational nightmare. And when it gets out of date, the consequences are severe.
SPEAKER_00Because the AI is reading old data.
SPEAKER_01Exactly. When an AI agent is trying to compare two competing software platforms for a buyer, and it pulls data from a stale, flat text file with no relationship metadata to provide context, that is the exact recipe that causes AI to hallucinate.
SPEAKER_00Oh man. So it'll just make things up based on bad info.
SPEAKER_01It will confidently tell a buyer that your client charges $50 a month for a feature that actually costs $500.
SPEAKER_00And the brand, or I guess the agency managing that brand, pays the reputational cost of that hallucination. The client loses the deal because the AI gave out bad info.
SPEAKER_01Exactly.
SPEAKER_00So the goal line has fundamentally moved. We are shifting from discoverability, which is just, you know, having an AI mention your client's brand and an output to operability.
SPEAKER_01Yes. Operability is the entire game now.
SPEAKER_00Does the AI trust the client system enough to actually execute a real financial transaction? But that begs a massive question.
SPEAKER_01What's that?
SPEAKER_00If a flat text file isn't enough to make a website truly operable for an AI, what is the actual plumbing being laid down to make this happen? Because we're talking about AI agents independently browsing the web and booking things. There must be real standardized infrastructure being built behind the scenes.
SPEAKER_01There is, and it's rolling out at an astonishing speed. I mean, it's happening at internet speed. But to understand why these new protocols are being built, you first have to understand the M times N problem.
SPEAKER_00Okay, I saw this in the research. Walk me through the M times N problem, because it seems to be the roadblock that every tech company is currently slamming into.
SPEAKER_01So think about it like this: you have M number of AI models out there right now, ChatGPT, Claude, Gemini, Copilot, plus thousands of open source variants.
SPEAKER_00Right, tons of them.
SPEAKER_01And then you have N number of business tools: your CRM, your inventory database, your scheduling software, your client's website. If you want every AI to be able to talk to every tool, you have to build a custom integration for every single pairing.
SPEAKER_00So Salesforce would have to build a specific bridge to Claude, and then a totally different bridge to Gemini, and another one for ChatGPT.
SPEAKER_01Exactly. And the AI companies would have to do the same in reverse.
SPEAKER_00It requires a combinatorial, unsustainable number of custom integrations that would completely fragment the internet.
SPEAKER_01It's the equivalent of having to buy a different physical phone charger for every single brand of wall outlet in your house.
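The arithmetic behind the M times N problem is easy to sketch. With hypothetical numbers, say 5 major AI models and 200 business tools, pairwise bridges explode while a shared standard grows linearly:

```python
# The "M x N" integration problem: every AI model paired with every tool.
models = 5    # hypothetical: ChatGPT, Claude, Gemini, Copilot, one open-source model
tools = 200   # hypothetical: CRMs, inventory databases, schedulers, client sites

custom_integrations = models * tools   # one bespoke bridge per pairing
with_shared_standard = models + tools  # each side implements the standard once

print(custom_integrations)   # 1000 bridges to build and maintain
print(with_shared_standard)  # 205 implementations total
```

Add a sixth model under the custom-bridge approach and you owe 200 more integrations; under a shared standard, you owe one.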
SPEAKER_00Which takes us right back to the early days of the Internet, before we had universal standards like HTTP and HTML.
SPEAKER_01And that shared history is exactly why the tech giants are taking a different approach this time. They formed the Agentic AI Foundation under the Linux Foundation. You have fierce, bitter competitors, OpenAI, Anthropic, Google, Microsoft, all sitting at the same table, collaborating to build shared open standards for the agentic web.
SPEAKER_00Okay, I have to push back on this a little bit.
SPEAKER_01Sure.
SPEAKER_00Because these companies are in a brutal multi-billion dollar war for AI dominance right now. If they're all collaborating on this open infrastructure, does that mean the old walled garden approach of the internet is officially dead? Like, why would they surrender their walled gardens?
SPEAKER_01Because keeping the garden walled in this specific scenario would actually starve them.
SPEAKER_00Really? How so?
SPEAKER_01They all recognize that proprietary standards at the infrastructure layer would hold back the adoption of their own AI models. If an AI agent can't interact with the web smoothly because it's locked out of 80% of the world's databases, that AI model isn't useful to the consumer.
SPEAKER_00Oh, that makes sense. The consumer will just switch to an AI that can book the flight or buy the software.
SPEAKER_01Exactly. So they have to collaborate on the plumbing. And that plumbing relies on four major protocols that agencies absolutely need to understand today.
SPEAKER_00Let's get into the alphabet soup.
SPEAKER_01Let's do it. Let's look at the first one: MCP, or the Model Context Protocol.
SPEAKER_00This one was created by Anthropic, the makers of Claude, right?
SPEAKER_01Yes. And the best way to understand MCP is to think of it as the universal USB-C port for AI.
SPEAKER_00I love that analogy. You build one USB-C port and suddenly the microphone, the monitor, and the hard drive all just plug in effortlessly.
SPEAKER_01That's the exact mechanism. Instead of building 50 custom integrations, an agency or a brand builds just one single MCP server that connects to their live data.
SPEAKER_00Just one.
SPEAKER_01Just one. Once that server is up, any AI model that supports the standard, whether it's Claude today or Gemini tomorrow, can securely plug into it and access your live pricing, your real-time inventory, or your customer records.
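The shape of that "build one server, every model plugs in" idea can be sketched in plain Python. This is a toy registry, not the actual MCP SDK; the tool name, pricing data, and request format are invented for illustration:

```python
import json

# Toy sketch of the MCP idea: a brand exposes one set of named "tools",
# and any compliant AI client can discover and call them the same generic way.
TOOLS = {}

def tool(name):
    """Register a function under a name that any client can look up."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("get_price")
def get_price(plan: str) -> dict:
    # A real server would read this from the live pricing database.
    prices = {"starter": 49.99, "enterprise": 499.99}
    return {"plan": plan, "price_usd": prices[plan]}

def handle_request(raw: str) -> str:
    """Any AI model that speaks the protocol sends the same JSON request shape."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps(result)

print(handle_request('{"tool": "get_price", "args": {"plan": "starter"}}'))
```

The point of the pattern: the brand maintains one endpoint with live data, and swapping Claude for Gemini on the client side changes nothing on the server side.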
SPEAKER_00That is massive.
SPEAKER_01The adoption rate is unprecedented. It reached 97 million monthly SDK downloads in just over a year.
SPEAKER_00Wow. Okay, so MCP solves the problem of connecting the AI to the database. But what happens when multiple AI agents need to talk to each other?
SPEAKER_01Ah, yes.
SPEAKER_00Let's say my client is an e-commerce brand. They use a custom AI for their customer service chat, but their warehouse logistics are handled by a totally different AI from a different vendor. How do those machines coordinate?
SPEAKER_01That requires the second major standard, A2A or agent-to-agent protocol. This one is being spearheaded by Google. A2A allows autonomous agents to discover each other and collaborate securely using a mechanism called an agent card.
SPEAKER_00Wait, an agent card? It sounds like they are passing digital business cards around.
SPEAKER_01It functions very much like a business card, but with cryptographic security. When your client's customer service AI realizes a user needs a refund, it broadcasts an agent card that essentially says, I am authorized to request refunds up to $50, and here are my parameters.
SPEAKER_00And then what? The other AI just reads it.
SPEAKER_01Yep. The billing AI receives that card, verifies the cryptographic signature to ensure it's not a malicious actor, processes the refund, and hands a confirmation card back.
SPEAKER_00That's incredible. And the user sees none of this.
SPEAKER_01Exactly. The human customer just experiences one seamless chat interaction, completely unaware that multiple vendor AIs were negotiating the task behind the scenes.
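A back-of-napkin version of that card exchange looks like this. The card fields and the shared-key HMAC signature scheme are invented for illustration; real A2A uses richer agent cards and real key infrastructure:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # stand-in for real key management

def make_agent_card(agent_id: str, capability: str, limit_usd: int) -> dict:
    """Build a card declaring what this agent may do, then sign it."""
    card = {"agent": agent_id, "capability": capability, "limit_usd": limit_usd}
    payload = json.dumps(card, sort_keys=True).encode()
    card["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return card

def verify_card(card: dict) -> bool:
    """The receiving agent checks the signature before acting on the request."""
    unsigned = {k: v for k, v in card.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, card["signature"])

card = make_agent_card("support-bot", "request_refund", limit_usd=50)
print(verify_card(card))   # True: the signature checks out
card["limit_usd"] = 5000   # a tampered card...
print(verify_card(card))   # False: ...is rejected
```

That verification step is what separates "a malicious actor claiming authority" from an agent the billing AI can safely act on.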
SPEAKER_00That is wild. Okay, but I'm thinking about the millions of small and mid-market business websites out there. A local plumbing company or, you know, a boutique sauce brand doesn't have bespoke AI agents running their operations.
SPEAKER_01Right. They just have standard websites.
SPEAKER_00Yeah, they just have a standard website. How does an AI talk to them?
SPEAKER_01That brings us to arguably the most important protocol for digital marketers, NL Web, or the Natural Language Web. This is a Microsoft initiative driven largely by RV Guha.
SPEAKER_00Now here's where the story gets really compelling for SEO veterans, because RV Guha is the original creator of schema.org.
SPEAKER_01Yes, he is.
SPEAKER_00He literally wrote the book on how to structure the web for traditional search engines. That's huge.
SPEAKER_01Which is exactly why we need to pay attention to what he is doing now. NL Web takes a passive traditional website and turns it into a natural language interface.
SPEAKER_00How does it do that without completely rebuilding the site?
SPEAKER_01Under the hood, it scans the structured data you already have on the site, your schema markup for products, your reviews, your location data, and it creates a conversational endpoint.
SPEAKER_00So instead of a human clicking through like five different drop-down filters to find a product, an AI agent can just query the site's NL Web endpoint directly and ask, um, find me a family-friendly restaurant in Barcelona with outdoor seating that has availability tonight at 7 p.m.
SPEAKER_01Yes. The site processes that natural language query against its database and returns the exact data the agent needs. And here's the crucial technical detail that ties this all together.
SPEAKER_00Okay.
SPEAKER_01Every single NL Web instance automatically functions as an MCP server.
SPEAKER_00Wait, really? So by setting up NL Web for a client, an agency is instantly plugging that client's standard website into that universal USB-C port for every major AI model on the market.
SPEAKER_01You are instantly transforming their site from a passive digital brochure into an active machine queryable database.
SPEAKER_00That is a game changer.
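To make the restaurant example concrete: once a natural-language query is parsed into constraints, the lookup an NL Web-style endpoint performs against the site's structured data is essentially a filter. The restaurant records and parsed-query shape here are invented for illustration:

```python
# Toy sketch: the site's structured data, as an NL Web-style endpoint sees it.
restaurants = [
    {"name": "Casa Mar", "city": "Barcelona", "outdoor": True,
     "family_friendly": True, "open_slots": ["19:00", "21:00"]},
    {"name": "El Sotano", "city": "Barcelona", "outdoor": False,
     "family_friendly": True, "open_slots": ["19:00"]},
]

def query(city: str, outdoor: bool, family_friendly: bool, slot: str) -> list:
    """Return every restaurant matching all constraints from the parsed query."""
    return [
        r for r in restaurants
        if r["city"] == city
        and r["outdoor"] == outdoor
        and r["family_friendly"] == family_friendly
        and slot in r["open_slots"]
    ]

# "family-friendly restaurant in Barcelona with outdoor seating, tonight at 7 p.m."
matches = query("Barcelona", outdoor=True, family_friendly=True, slot="19:00")
print([r["name"] for r in matches])  # ['Casa Mar']
```

The heavy lifting is parsing the sentence into those parameters; the actual data access is only as good as the structured records behind it.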
SPEAKER_01It really is. And just to round out the technical landscape, there is a fourth standard worth mentioning quickly: agents.md.
SPEAKER_00Okay, what's that one?
SPEAKER_01This is essentially a rulebook file. It gives AI coding agents, like GitHub Copilot, specific boundaries and context about a code base so they don't hallucinate bad code or break a client's application.
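An agents.md file is just free-form Markdown instructions placed in a repository. Something like the following, with contents invented for illustration:

```markdown
# Agent instructions

## Setup
- Install dependencies with `npm install` before running anything.

## Boundaries
- Never edit files under `vendor/` or commit directly to `main`.

## Testing
- Run `npm test` and make sure it passes before proposing changes.
```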
SPEAKER_00Okay, so we have the four pillars. We've got MCP as the universal plug, A2A for agents talking to agents, NL Web for making standard websites conversational, and agents.md for coding rules. Now, knowing these global pipes exist is fantastic theory. But let's bring this down to reality. If I'm an agency owner, I can't just walk into my client's office and say, hey, we need to build an MCP server today.
SPEAKER_01Yeah, they'd look at you like you had two heads.
SPEAKER_00Exactly. We need a roadmap. How do we actually structure a client's brand data so it can hook into these new protocols?
SPEAKER_01This is where we shift from global infrastructure to specific brand architecture. The blueprint for this comes from Dwayne Forrester's four-layer brand architecture. This is the exact roadmap for moving away from that flimsy llms.txt file and building true, verifiable machine readability.
SPEAKER_00Okay, before we jump into layer one, which the sources call structured fact sheets using JSON-LD, we need to translate that. JSON-LD sounds incredibly intimidating to a non-developer. What does that actually look like in practice on a client's website?
SPEAKER_01I know it sounds complex, but it's really just a specific format of code that sits hidden in the header of a web page. It's essentially a script that explicitly declares facts to a machine. Instead of hoping a search engine figures out that, you know, $49.99 is a price by reading the text on the page, JSON-LD is a piece of code that explicitly states: the entity on this page is a product, the price of this product is exactly $49.99, the currency is USD.
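Here is roughly what that declaration looks like as schema.org Product markup, generated with Python for readability. The product name and price are placeholders; on a real page this JSON would sit inside a `<script type="application/ld+json">` tag in the header:

```python
import json

# Layer one "fact sheet": explicit, machine-readable claims about one product.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Acme Analytics Starter",  # placeholder product name
    "offers": {
        "@type": "Offer",
        "price": "49.99",              # declared explicitly, never inferred from prose
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

print(json.dumps(product, indent=2))
```

Nothing here is left for the machine to guess: type, price, and currency are asserted outright.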
SPEAKER_00It feels almost like building a highly structured corporate org chart for your client's data. Like if we use that analogy, layer one, these structured fact sheets, is just the employee ID badge. It just verifies the basic facts of who and what the entity is.
SPEAKER_01That's a great way to look at it. You are being incredibly precise with your basic organization and product schemas. And the ROI on this is already measurable. Pages with perfectly valid structured data are currently 2.3 times more likely to appear in Google's AI overviews.
SPEAKER_00Wow. 2.3 times.
SPEAKER_01Yeah. This isn't just about getting a nice visual, rich snippet in search results anymore. This is the foundational ID badge that machines require before they will even interact with you.
SPEAKER_00Okay, so if layer one is the ID badge, layer two in our org chart would have to be the actual reporting structure, like who reports to who, what department they're in, how it all connects. In Forrester's architecture, this is called entity relationship mapping.
SPEAKER_01Yes. So layer one establishes the individual facts, or nodes. Layer two is about expressing the graph, the actual relationships between those nodes. You have to explicitly code how your client's specific software products relate to broader industry categories, how those categories map to specific use cases, and how all that links back to the authoritative corporate entity. So it's not just a list anymore: this layered mapping allows an AI to traverse your client's digital footprint with total contextual awareness rather than just seeing isolated pages.
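One common way to express those edges is JSON-LD `@id` references, which let separate fact sheets point at each other so the AI can walk the graph. The organization, product, and URLs below are invented for illustration:

```python
import json

# Layer two: nodes reference each other by @id, forming a traversable graph.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Organization",
            "@id": "https://example.com/#org",
            "name": "Acme Software",
        },
        {
            "@type": "SoftwareApplication",
            "@id": "https://example.com/products/analytics#product",
            "name": "Acme Analytics",
            "applicationCategory": "BusinessApplication",
            # Explicit edge back to the authoritative corporate entity:
            "publisher": {"@id": "https://example.com/#org"},
        },
    ],
}

print(json.dumps(graph, indent=2))
```

The `publisher` edge is the difference between two isolated pages and a product an AI can trace back to a verified company.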
SPEAKER_00So layers one and two are basically structuring the static facts. But data changes; pricing changes constantly. So layer three in our org chart analogy would be the direct phone extension to reach that employee for a live update. Technically, this is content API endpoints.
SPEAKER_01This is where the real operability begins. This is where you actually utilize that MCP integration we talked about earlier. You have to move away from locking your dynamic data inside static formats.
SPEAKER_00Give me an example.
SPEAKER_01Well, if your client has a PDF comparison table showing how they beat their competitors, that is a total dead end for an AI agent. It can't read a PDF reliably. Layer three means providing programmatic, versioned access to those facts via an API so the AI can pull the live data at the exact moment of the query.
SPEAKER_00Which brings us to the final piece, layer four. In our corporate org chart, layer four is human resources stepping in to verify that the employee actually still works there.
SPEAKER_01Exactly, that they are legally authorized to speak on behalf of the company. In the data architecture, this is verification and provenance metadata.
SPEAKER_00Provenance metadata. So timestamps, detailed update history, and cryptographic authorship.
SPEAKER_01Yes. And if we connect this to the bigger picture, this raises an important question about how AI systems actually retrieve information. When we talk about AI searching the web, we're usually talking about RAG, retrieval-augmented generation.
SPEAKER_00Let's pause and define RAG for a second. That basically means the AI doesn't inherently know the answer from its training data, so it goes out, retrieves real-time documents from a database or the web, reads them, and uses them to generate an augmented answer, right?
SPEAKER_01Exactly. Now imagine a RAG system is trying to answer a buyer's query about your client's enterprise software. It goes out to the web and retrieves three different blog posts that mention your client's pricing.
SPEAKER_00Okay.
SPEAKER_01But all three blog posts list a totally different price. How does the AI decide which fact is true?
SPEAKER_00I mean, if it's just guessing based on text, it might hallucinate the wrong one.
SPEAKER_01Right. But if your client has implemented layer four, the AI doesn't have to guess. Provenance metadata is the ultimate algorithmic tiebreaker.
SPEAKER_00Because it proves who said what and when.
SPEAKER_01Exactly. A fact hosted on your client's server that carries a clear cryptographic timestamp and a verifiable source chain will beat an undated third-party claim every single time. By providing clean, structured provenance, you dictate how clearly the AI stores the memory of your brand. You dramatically reduce the hallucination risk, and you create much sharper vector index hygiene.
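The tiebreaker logic can be sketched as a simple ranking over retrieved claims. The scoring rule below is invented to illustrate the idea, not how any particular RAG system actually weighs sources:

```python
from datetime import date

# Three retrieved claims about the same price, with different provenance.
claims = [
    {"price": 500, "source": "vendor.example.com", "signed": True,
     "updated": date(2025, 11, 1)},
    {"price": 50, "source": "random-blog.example", "signed": False,
     "updated": None},
    {"price": 450, "source": "old-review.example", "signed": False,
     "updated": date(2023, 3, 5)},
]

def provenance_score(claim: dict) -> tuple:
    """Prefer signed first-party claims, then any dated claim, then freshness."""
    has_date = claim["updated"] is not None
    freshness = claim["updated"] or date.min
    return (claim["signed"], has_date, freshness)

best = max(claims, key=provenance_score)
print(best["price"], best["source"])  # 500 vendor.example.com
```

Without the signed, timestamped first-party claim, the system is left guessing between $50 and $450; with it, there is nothing to guess.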
SPEAKER_00I can practically hear the agency owners listening to this saying, okay, JSON-LD, APIs, provenance metadata. This is fascinating, but it sounds entirely like a back-end IT engineering problem. I run a marketing agency. Why shouldn't I just forward this deep dive to the client's CTO and wash my hands of it?
SPEAKER_01Because if marketing agencies wait for IT departments to solve this, they're going to lose their entire revenue pipeline. Consider the alternative: IT builds the endpoints, but they don't know the customer journey. The touchpoints that are most vulnerable to AI mediation right now are product discovery, vendor shortlisting, and appointment booking.
SPEAKER_00And those are fundamentally marketing moments.
SPEAKER_01Exactly. If marketing doesn't own the architecture of those moments, the brand will lose the conversion. The narrative of the web has completely shifted.
SPEAKER_00It really has.
SPEAKER_01Yeah, the old system rewarded SEO hacks and content volume.
SPEAKER_00Right. But in the AI era, the system exclusively rewards machine trust. Can the autonomous agent verify your client's identity, read their capabilities, and rely on their data without a human ever having to intervene and clean it up?
SPEAKER_01Machine trust is the new currency. Period. So the sources outline a very specific, aggressive 90-day playbook for agency owners to start executing today.
SPEAKER_00Let's walk through that playbook. Step one.
SPEAKER_01Step one, audit your client's machine readability. And I don't mean checking if the site is visually beautiful. I mean doing a clean technical audit.
SPEAKER_00Like using a structured data testing tool?
SPEAKER_01Exactly. Run their core pages through those tools. Can an AI agent clearly interpret their product catalog and pricing right now, or is it completely buried in JavaScript?
SPEAKER_00Step two: you don't need to rebuild the entire website on day one. Look at where the actual transactions or bookings are happening. Focus your API and schema efforts entirely on those conversion bottlenecks first.
SPEAKER_01Spot on. Now step three requires internal diplomacy. You must align marketing, product, and sales around a single structured truth.
SPEAKER_00Oh, that's the hardest part.
SPEAKER_01It really is. If the humans inside your client's own company have three different spreadsheets for pricing, the AI will definitely fail to understand it. You have to establish the canonical facts.
SPEAKER_00And step four, treat the agentic web not as an experimental IT project, but as a core channel strategy. This is the next evolution of customer acquisition, which brings up, I think, the most practical question of all. So, what does this all mean for the agency owner who is walking into a client pitch tomorrow morning, like you're sitting across from the CMO? How do you actually sell the concept of machine trust to a client who only cares about traditional SEO traffic? Like, why should we pay you for layer three API endpoints instead of just buying more Google ads?
SPEAKER_01You sell it by highlighting the agentic gap. You look the CMO in the eye and say, right now, an AI assistant is evaluating your business on behalf of a buyer. And because your data is unstructured, the AI is returning to that buyer and saying, I found some options, but I'm not entirely sure which is best, and their pricing is unclear. Ouch. Yeah. Then you tell them that your agency's job is to upgrade their infrastructure so that same AI assistant says, Here is a brand I confidently recommend, here is their real-time availability, and I have already securely negotiated a meeting time.
SPEAKER_00That is such a stark contrast.
SPEAKER_01That gap, the space between I'm not sure, and I confidently recommend is where every single conversion and all of their profit margin is going to live for the next decade.
SPEAKER_00That is an incredibly powerful way to frame it. This has been a monumental shift to explore today. To bring all these technical concepts back down to earth, the primary customer journey is actively, irreversibly shifting from a human-to-website model to a human-to-agent to system model.
SPEAKER_01It's a completely new paradigm.
SPEAKER_00It is. It's no longer enough to just publish high volume content or brute force your way to the top with backlinks. Your client's brand has to be deeply legible, mathematically verifiable, and instantly actionable inside these new machine-mediated journeys.
SPEAKER_01Absolutely. The agencies that take the time to understand these underlying protocols, MCP, A2A, NL Web, and actively implement that four-layer architecture for their clients are going to be the ones that capture the massive wave of revenue when AI agents start executing B2B and B2C tasks at global scale.
SPEAKER_00Which leaves us with a final, slightly mind-bending thought to take away today. We talked at the very beginning about that blind evaluator. If the primary entity visiting and evaluating your client's business in the near future doesn't have eyes, if it only interfaces with data protocols, schema markup, and APIs, how much of your agency's current monthly retainer is being wasted on pixel-perfect visual web design?
SPEAKER_01It's a tough question to ask.
SPEAKER_00It is. And how much of that budget should immediately be reallocated to building invisible trust infrastructure? Take a hard look at your tech stacks and your service offerings today. Are you getting paid to build a beautiful digital billboard for a road that nobody drives on anymore? Or are you building the API endpoints that the AI actually trusts enough to buy from?