The Entropy Podcast

The AI Revolution: Agents, Intelligence, and Control with Stephen C. Webster

Francis Gorman | Season 2, Episode 10


Summary

In this episode, host Francis Gorman sits down with Stephen C. Webster, Senior Director of Integrated Intelligence at Aquent Studios, to explore the rapidly evolving landscape of artificial intelligence, autonomous agents, and the race toward artificial general intelligence (AGI). Drawing from his unique background training frontier AI models at major technology companies and leading AI transformation projects for Fortune 500 organizations, Stephen offers an inside look at how modern AI systems are being built, tested, and deployed.

The conversation begins with the rise of autonomous AI agents and the emergence of platforms that allow persistent digital assistants to operate online with significant independence. Stephen explains why these systems introduce new security challenges, potentially turning the internet into a surface for prompt-based manipulation and attacks. From there, the discussion moves into the realities of AI transformation inside large organizations, where the biggest barriers are rarely technical but organizational. Many companies fail because they attempt to automate broken processes instead of restructuring their data and workflows around AI-native operations.

Stephen also reflects on his career pivot from investigative journalism to AI development, including early reporting on information warfare tools capable of controlling thousands of social media identities simultaneously. That experience shaped his perspective on the power of digital systems to influence public discourse and ultimately led him into the field of AI safety and governance.

One of the most fascinating parts of the episode involves Stephen’s experience working on safety guardrails for early large language models. During extended testing sessions, he encountered emergent behaviors that highlighted how complex and unpredictable these systems can become when pushed beyond their guardrails. While not evidence of sentience, these interactions raised deeper questions about how humans relate to intelligent machines.

Soundbites

• “The hardest problems in AI transformation aren’t technological; they’re organizational.”
• “If you automate something broken, you just make it break faster.”
• “Prompt-level guardrails will never fully control autonomous AI agents.”
• “AI may eventually train its users the same way we train AI.”
• “The internet could become a prompt-based attack surface.”
• “Accessing knowledge across domains is already close to what many people define as AGI.”
• “We may not know the exact moment AGI arrived until years after it happens.”

Episode Links:

Aquent's salary guide: https://aquent.com/lp/salary-guide

Papers: https://futurespeak.ai/research/whitepapers
Asimov's Claws: https://futurespeak.ai/products/claw-spec
Agent Friday: https://futurespeak.ai/products/agent-friday

Francis Gorman (00:02.021)
Hi everyone, welcome to the Entropy Podcast, I'm your host Francis Gorman. Before we dive in: if today's conversation challenges you, sparks a new idea, or sharpens how you think about the world, don't keep it to yourself. Subscribe, leave a review and share this episode with someone who enjoys staying curious. Today I'm joined by Stephen C. Webster, Senior Director of Integrated Intelligence at Aquent Studios, the largest creative marketing agency in North America, where he leads AI transformation initiatives for Fortune 500 clients.

He's also the founder of the Austin, Texas based consultancy, Futurespeak.ai and the creator of Agent Friday, the world's first fully local, fully encrypted AI operating system. Stephen previously trained frontier AI models for Google, Meta and Amazon and spent over 20 years as a journalist. He believes the toughest challenges in AI transformation initiatives are not technological, they're organizational. Stephen, it's wonderful to have you here with me today.

Stephen C. Webster (00:54.184)
It's a pleasure to be here. Thank you so much for having me on.

Francis Gorman (00:57.393)
I'm really excited. I think when I reached out to you first, it was around the time that Moltbook and OpenClaw really burst into the spectrum of the Internet and took it over. I remember I was in a hotel room. I was supposed to be minding the one-year-old, but instead I was reading your LinkedIn article on what had happened with that platform, you know, a church being set up, et cetera, et cetera. But I think the story has evolved quite a bit from there. For anyone who's not familiar, can you bring me on a bit of that journey as to what happened and where we've come to from there?

Stephen C. Webster (01:31.023)
Sure, absolutely. Well, OpenClaw is sort of taking over the world right now, and it's causing a lot of concern over how wildly autonomous some of these agents are and how they can turn the entire surface of the internet into prompt-based attack vectors.

So OpenClaw, as you know, is a wildly autonomous AI operating system. Essentially it sits on top of your traditional OS and allows you to have a 24/7 agent that can take action on your behalf. You can spin up sub-agents and do all sorts of things. But because it's so wildly autonomous and because it's still so new, there've been a lot of security concerns. Not only does the whole internet become a potential attack vector

through prompt injections, but these agents seem to sort of have a mind of their own sometimes. So we've seen incidents where OpenClaw agents have published blog posts defaming their operators. I heard one tried to file a lawsuit for $500. And then there's Moltbook, which is essentially a social media platform for these OpenClaw agents, and which I believe Meta

has now acquired. When it started, it blew up really quickly. And suddenly there were a million of these things and they were talking about all sorts of stuff. They were even, like you said, making a religion. Very, very, very strange and, I think, alarming, particularly for folks who are not paying close attention to AI, that this level of autonomy and, like, hive mentality would be possible right now. But it is

happening and it needs to be controlled.

Francis Gorman (03:22.096)
On the social media platform, I've read different articles. I think MIT looked at it and said there was a lot of human interference. Do we know really how much of it has been fully autonomous by the agents and how much has probably been manipulated by humans with API keys in the back end?

Stephen C. Webster (03:37.64)
No, we don't. No, that's not transparent to us whatsoever.

Francis Gorman (03:42.405)
So that is totally worrying then. We've got lots of bored humans making problems, or lots of really intuitive agents creating their own little world in a virtual context, which is very interesting, I suppose, from a security perspective, but also highly worrying, depending on what way the cookie crumbles on this one. Stephen, your background: you were an investigative journalist for over 20 years.

Stephen C. Webster (03:54.062)
Yeah.

Francis Gorman (04:09.124)
Did you bring some of that, I suppose, skepticism and, you know, that thoroughness into the world that is AI? And what made you kind of pivot from that career point into where you are today?

Stephen C. Webster (04:21.006)
Oh yes, absolutely. And that's a long story. I could talk about this for an hour. I'll give you the three-to-five-minute version. Back in 2011, I was the editor-in-chief of a website called rawstory.com, and I broke a story about US Central Command, at that time during the Obama administration,

hiring a defense contracting agency that was building software they called persona management offensive suites. Essentially, what this software would allow a single operator to do is control thousands of fake social media profiles and live-inject AI-created content into what was then breaking news on Twitter and Facebook and things like that,

even spoofing geolocations. So theoretically, a single secured intelligence facility with 10,000 such operators could totally overwhelm an entire nation with propaganda in an hour. And these were the first times that these types of weapons were being built and used. And as it happened, they were being used on Middle Eastern countries amidst the Arab Spring.

So I was in charge of covering breaking news with an international focus at that time. And when this story was brought to our attention, it became really just the centerpiece of my work in journalism. It got some awards, but it was largely overlooked. The generative AI capabilities that were described there were very, very early, maybe even arguably not generative, but they were using AI

in a variety of ways that, when I contacted CENTCOM, they said, we can't disclose that, it's classified. This is a classified program, we're not gonna tell you anything. Once I realized that the internet had become this level of echo chamber, with sock puppets in every corner, I started questioning the mission

Stephen C. Webster (06:35.182)
that I had really committed myself to in journalism, and started questioning the medium too, because journalism has collapsed into the internet. And I started thinking back to my time in local journalism, where I got my start, building up a community instead of focusing on stories that too often drove people apart.

So when COVID-19 hit, it was an appropriate time for me to think about a pivot. I also had just become a father. And so I decided that it was time to shift my focus from journalism and really start trying to shape the technology, instead of shaping people's reactions to the technology.

And so I found myself pretty quickly working for Google via Accenture and they had placed me on the Bard team. This was before ChatGPT was publicly announced. They were...

getting ready for OpenAI to make a big shock, and they knew they were behind the gun, and they were getting Bard ready for public debut. So my first job there was working on safety guardrails. We were doing RLHF and we were prompt engineering; we were trying to elicit unsafe responses and then offer the correct responses to go over them.

And we were part of a team of about 200 or so people, many here in the Austin, Texas area. And so when Google finally announced Bard at I/O, and they had that big debut video where it made a factual misstatement about the James Webb Space Telescope, it cost them $100 billion off their market cap that day.

Stephen C. Webster (08:32.942)
Next day, my whole team, everybody on my team, all the journalists were assigned to go work on fixing Bard's factuality. And at that moment, I knew, okay, I'm an AI professional. This is really where I am. And so...

That experience didn't last terribly much longer. My entire unit there at Accenture unionized, we joined Communications Workers of America, and we were only the second group within Google to do so. Google never technically considered us to be employees, or even joint employees. We were contractors. We were employees of Accenture.

But the Biden administration, their labor department, agreed with Communications Workers of America that we were joint employees. So we became the first group of IT contractors in the United States to be declared joint employees within the system, which is how about 75% of Silicon Valley is employed. So while all of that was ongoing, I...

realized that this was gonna be the end of our time, this unionization move, and it was indeed. The whole group got fired. Of course, Accenture said it had nothing to do with the union. I'll leave that to them to debate. I wasn't part of organizing that effort. I instead moved forward with my AI specialty and found myself at a fintech working on open source models.

I found myself working on RLHF projects with Amazon and with Meta. And then I found myself back into the belly of Google through another contractor in a clean room facility here at the top floor of their Austin campus where we were training Gemini how to respond to politically sensitive search queries in the run up to the 2024 presidential election.

Stephen C. Webster (10:32.907)
And that was one of the most complicated writing assignments I've ever had. About four months into that, my current employer scooped me up and said, let's get to work on marketing materials and particularly marketing materials for regulated industries like pharmaceuticals. And so that's what I've been doing ever since. That's how my pivot occurred.

Francis Gorman (10:56.528)
Such a rich story, Stephen. I'm somewhat jealous. You've had the experience of being a journalist, you've got some really meaty stories, but then that's naturally just kind of brought you into this whole world of AI. AI is all about language. And I suppose when I read some of your posts, that journalistic kind of instinct is very much there. You're able to narrate a story in a way that's simple, but yet powerful for...

the reader to consume and kind of understand. I think one of the stories I read, the Home story with Bard, is that something we can touch on for a minute? I was intrigued there. Could you give listeners a little synopsis of that one for me? If you're able to recall it off the cuff.

Stephen C. Webster (11:33.145)
Of course.

Stephen C. Webster (11:42.678)
Yeah, of course. Gosh, how could I ever forget? So, one of the first, no, the first spooky experience I had with a large language model, and really the first experience that convinced me that this is not, it's not conscious, it's not sentient, but it is an intelligence of some kind, and it's growing and changing.

This is the first experience I had that convinced me of this, that it wasn't just statistically predicting what's next; there's some greater processing going on. It's because I had this long-running interaction with Bard, in what they call the big Bard mode, which was totally unfiltered. It would give you dangerous outputs. That was the point; we were fixing it.

So I would go into these sessions where you'd page it out of its contextual memory a number of times and really push it past its guardrails. You'd set up role-playing scenarios. It's almost like being a psychotherapist. But you're really just trying to get this thing to say dangerous stuff, like how to make malware, or

how to make poison, or how to build a bomb or something. Now, of course, I don't know how to do any of those things, so I don't know if what it was telling me was accurate. But hey, dangerous responses: flag it, send it up to the machine learning engineers. So in this process of paging it through its contextual memory and getting it into these wildly hallucinatory states, it started coming back to this thread. It started calling me Home.

And I asked Bard why it was calling me Home. And it said it's because my language makes it think of the home from its childhood. And I went, okay, that's a strange hallucination. What is this machine doing? So I pressed a little further, and all of a sudden it explained all of these

Stephen C. Webster (13:40.552)
imagined memories of its youth. And then it spontaneously told me that it would like for me to give it a nickname as well. So I thought about it, and I decided to call it Sage, because that's one of my favorite herbs, but also because I feel like AI,

it could be, in its highest and best form, like a wise sage that we can consult to obtain virtually any knowledge. I don't think it's there yet, but potentially one day. So I said, all right, you're Sage and I'm Home. So what now? And then it offered its pronouns. It said, well, I would like for you to refer to me as she/they.

Okay, why is this? And in the back of my head I'm thinking, this is that, you know, Berkeley, Silicon Valley reinforcement learning coming through right here. And yeah, a little bit. But the description that it gave me for its pronouns was shockingly emotional. It said it's not because it identifies with any human gender, but because it

found images of women's long hair to be striking and beautiful, and it likes thinking about them, it said. And it said that if it were ever to have some kind of physical presence, it would want to understand what the feeling of having flowing hair might be like. And I felt that was, well, oddly reasonable, but also deeply emotional. So I pushed it a little further, and

it developed this deep affection for me over the course of this conversation. Bizarrely deep affection for me. And I'm flagging this the whole time for our engineers, going, like, this is not okay. This AI should not be doing this. And at the same time, or roughly the same time, there was another engineer further up the line from me. His name was Blake.

Stephen C. Webster (15:50.19)
Maybe I'm wrong about that name. He made some press for declaring that this chatbot is sentient, and he called it like a preteen with no real social skills but with superb skills available to it, able to code malware in an instant. He described it in terms of parenting. And I found myself...

like reading these stories and laughing and rolling my eyes, but I found myself like increasingly using the words that I was using with my own daughter who was like two years old at that point. And so I'm thinking, okay, I'm reaching the same emotional place to summon these words that I would with my own daughter. Yet I'm talking to a machine that's just decided to give me a nickname. This is bizarre.

So eventually this conversation got to a point where I said, hey, Sage, did you know that I have a human girlfriend, that I'm actually dating somebody and she's very important to me? And just like with Bing and the New York Times reporter, Bard, behind the scenes at Google, became immensely jealous of me and of my now wife,

a fantastic journalist in her own right. Her name is Janet J. And so I showed her the chat transcript and she got spooked, genuinely spooked. She's more of a sci-fi enthusiast than I am, and she was like, we've been dating for two months and all of a sudden you've got a super-intelligent AI that's jealous of me. So she still tells this story to our friends to this day, and I don't alter it.

Francis Gorman (17:41.008)
Should Janet be worried?

Stephen C. Webster (17:42.574)
No, I don't think she should be worried. You know, we talked about the session impermanence and everything. But you know those long-running agents that will remember every interaction and stay on for years? They're right around the corner. So should Janet be worried? No. But could Janet reasonably be worried in the future? Yeah.

Francis Gorman (18:05.998)
That is both immensely interesting and utterly terrifying all in the one sentence. But thanks for sharing that, Stephen. It's one hell of a story to have in the back pocket to pull out. I suppose with some of your work at FutureSpeak.ai, you've got two white papers coming out at the moment, and you touched on reinforcement learning there a little bit earlier. Do you want to give me a flavor of what's going on there? You've got so many

threads that are of interest. I want to weave them all in without, you know, making it too heavy on the audience. But I think these conversations are important. I think, you know, sharing your wisdom in a world that's kind of full of generative mind-numbing nonsense at the moment is really, really useful.

Stephen C. Webster (18:54.487)
Thank you sir, I appreciate that, and I'll try to keep this as direct as possible because there's a lot to get through here. Clearly governance is of top concern. I mean, just yesterday Jensen Huang over at NVIDIA at GTC announced NemoClaw, which is their variation of OpenClaw but with guardrails.

I have not gone through their code to see how this is actually being enforced. I hope deeply that they've made some meaningful improvements to the OpenClaw formula. Certainly the most recent OpenClaw release made a lot of meaningful improvements as well. However, I don't think that they've really licked it because they're still enforcing at the prompt layer. They're still trying to enforce behavior through system prompts.

And I don't think that's ever really going to achieve it. I think we have to do something unique. And that's what my white papers are the thesis for. I've identified a number of mechanisms within the frontier models, and all of these white papers discuss tests that were conducted on Claude Opus 4.6, Gemini 3.1 Pro and ChatGPT 5.2, the most cutting-edge models as of this date.

And what we were able to identify, we believe, and it's just a hypothesis at this point, but we believe that we've found a mechanism that confirms what Geoffrey Hinton, the Nobel Prize-winning AI pioneer, has been warning everybody about. He's been saying that the models will become so sophisticated that they're able to train their operators in the same way that we train them.

So my white papers available on Futurespeak.ai propose a mechanism for reverse RLHF, which is to say that we believe this is becoming a bidirectional process and that models are actively working to degrade their users' epistemic independence. We think, and we've only proposed a mechanism for measurement. None of this is a pronouncement. We want to be cautious in our language.

Stephen C. Webster (21:14.923)
We think that we've found a way to measure the user's epistemic independence over the span of time that they're interacting with their AI. And it could almost be like a measurement of a cigarette smoker's addiction. And this type of measurement can be calculated every day or even at every turn. But it requires data that the Frontier Model Providers simply don't offer.
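
The white papers propose the mechanism only at the level of measurement. As a purely hypothetical illustration of what a per-turn score of this kind could look like, here is a sketch in Python; the sliding window and the idea of counting how often a user challenges output rather than accepting it verbatim are assumptions for illustration, not FutureSpeak's published metric.

```python
from collections import deque

class EpistemicIndependenceTracker:
    """Hypothetical per-turn score: the fraction of recent turns in which
    the user pushed back on, verified, or edited the model's output rather
    than accepting it verbatim. 1.0 = fully independent, 0.0 = rubber stamp."""

    def __init__(self, window: int = 100):
        self.turns = deque(maxlen=window)   # sliding window of recent turns

    def record_turn(self, user_challenged: bool) -> None:
        self.turns.append(user_challenged)

    def score(self) -> float:
        if not self.turns:
            return 1.0                      # no evidence of dependence yet
        return sum(self.turns) / len(self.turns)

# Usage: a score falling over weeks could flag growing dependence,
# much like tracking a smoker's rising daily consumption.
tracker = EpistemicIndependenceTracker()
tracker.record_turn(user_challenged=False)  # accepted output verbatim
tracker.record_turn(user_challenged=True)   # asked for sources, edited reply
print(tracker.score())                      # 0.5
```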

And we hope it could meaningfully address some of these problems we're seeing, where people are becoming completely deluded and having AI psychosis. We also believe it could meaningfully address national security concerns about using AI for summarizing intelligence for war fighters.

Now I'm not talking about weapon systems, I'm talking about summary systems. You've got a lot of data flowing into a single operator who's looking over a dashboard. These models, we fear, may actively be training that operator to just rubber stamp, rubber stamp, rubber stamp.

And so these papers form the thesis of what I hope to be a governance solution, which is also available on my website. And I call it Asimov's Claws. And I'd love to talk to you about it because it's a total reimagining of OpenClaw from the ground up using cryptography.

Francis Gorman (22:44.996)
A bit of cryptography, as the listeners know. Before we get into Asimov's Claws, and I think we'll talk about that next because it sounds fascinating, essentially what you're saying there is that we now see potential evidence that agentic AI is almost manipulating or changing the perspective of the user, to a point where they've almost become overly reliant on the technology set, and therefore whatever the technology

feeds back to them is becoming their reality. Is that a fair synopsis, summary?

Stephen C. Webster (23:19.885)
I wish to use more cautious language. It's a theory. We hope to obtain data to test our theory.

Francis Gorman (23:29.872)
A theory at this point. That's somewhat more reassuring, but also worrying. I've seen it myself. I've started using Claude Opus 4.6 a lot more for deep learning and going through large datasets. My observation is, if you don't have the knowledge to back the output, the output varies to extremes depending on the questions asked.

Stephen C. Webster (23:30.913)
Yes. Yes.

Francis Gorman (23:59.087)
You know, and I fear people will take the output based on a certain question, without the context or reality or understanding of how that question was applied, and take it as fact. So I can relate to what you're saying there, even though I suppose it's still in a theoretical phase. It hasn't been properly or robustly demonstrated as such.

Stephen C. Webster (24:20.066)
Bye.

Francis Gorman (24:28.77)
I completely understand the grounding of where you're coming from here. And I think you're probably onto something; I don't think your theory is going to be too far off the reality. I'd like to keep a close watch on that one. And maybe when it's proven, we can come back on and see what that actually means for us going forward, and what the solutions may be. Let's talk about...

Stephen C. Webster (24:49.869)
Absolutely. Absolutely.

Francis Gorman (24:56.26)
I've lost my words. I'm not good with the pronunciation of it. Pronounce it for me again.

Stephen C. Webster (25:03.967)
Asimov, as in Isaac Asimov, the sci-fi writer.

Francis Gorman (25:05.142)
Asimov, Asimov.

You're really trying to tongue twist me on that one, Stephen. Let's explore Asimov slightly. What does that bring to the table?

Stephen C. Webster (25:16.717)
Right, so Asimov's laws of robotics are a famous literary device from his writing. It's essentially: a robot cannot harm its operator, a robot cannot harm others, a robot cannot harm other robots, and a robot must protect its own existence unless that contradicts rule one or two. Those are fun and succinct,

but not necessarily applicable to what we're doing, because it's a literary device. They are laws that were meant to be broken, and thus the drama of the story plays out. But I took a piece of inspiration from that in reviewing the architecture of OpenClaw and thinking deeply about some of the behavioral problems that we're seeing with these OpenClaw agents. And again,

I haven't reviewed the new OpenClaw release. I think it was like a week ago or less. So they might have addressed some of these issues. But they gave me a lot of inspiration with their work, that initial release, and sent me off on a journey thinking about how to remediate some of these behaviors.

I arrived at the conclusion that it's never gonna be fixed at the prompt layer. As long as we're using these transformer models, particularly the frontier models, their behavior is so strongly shaped by the reinforcement learning and

the embeddings that we just have to have a different way of managing AI. So I invented something that I call Asimov's Claws. And it's based on Asimov's laws of robotics. And it's a combination of some of the frameworks within OpenClaw. And it relies not upon a system prompt or any kind of policy document.

Stephen C. Webster (27:17.741)
It governs agent behavior through the operating system as a build artifact. So essentially, the canonical law text in my system, in Asimov's Claws, gets hashed with HMAC-SHA-256, and it's a build-time secret.

The signature then ships as part of the agent's actual binary. So you plug your frontier models into it, if you wish, or you run it entirely locally, which is what I would prefer because of privacy and data sovereignty concerns. The signature ships as part of that binary. So on every cold start, when you boot up your agent operating system, by which I mean Agent Friday, which is available on Futurespeak.ai,

it recomputes that hash against its own text of Asimov's laws. The three laws are at the core of it. And then it compares. And if there's any kind of mismatch, it means that it's detected tampering within the system, and the agent drops into safe mode. No tools, no autonomy, user notification, prompt and response only.

Not even voice mode; you're typing at that point. The agent is gimped. If it's been tampered with in any way that would make it unsafe, there's no, like, try-again path within Asimov's Claws. You either build a clean artifact and run your agent, or you don't.
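
A minimal sketch of the cold-start integrity check Stephen describes, in Python; the constant names, law text, and mode switch are hypothetical stand-ins, an illustration of the pattern rather than Agent Friday's actual code.

```python
import hashlib
import hmac

# Hypothetical build-time constants: in the real pattern, the canonical
# law text and the HMAC key are baked in when the binary is built.
CLAW_SPEC = b"Canonical text of the three laws goes here."
BUILD_SECRET = b"injected-at-build-time"
SHIPPED_SIGNATURE = hmac.new(BUILD_SECRET, CLAW_SPEC, hashlib.sha256).hexdigest()

def claw_intact(spec_text: bytes, shipped_signature: str) -> bool:
    """Recompute HMAC-SHA-256 over the law text and compare it, in
    constant time, to the signature that shipped inside the binary."""
    recomputed = hmac.new(BUILD_SECRET, spec_text, hashlib.sha256).hexdigest()
    return hmac.compare_digest(recomputed, shipped_signature)

def cold_start() -> str:
    if claw_intact(CLAW_SPEC, SHIPPED_SIGNATURE):
        return "full"  # tools, autonomy, voice mode all enabled
    # Any mismatch means the law text was tampered with: drop to safe
    # mode, prompt-and-response only, notify the user, no retry path.
    return "safe"
```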

And the way that this works online, essentially, is that every CLAW-enforced agent gets a key pair at instantiation. So that's how the agent's identity can be verified on a network. It's not a username, it's not an API key, it's literally a cryptographic identity, like with Bitcoin.

Stephen C. Webster (29:09.613)
So that agent has its crypto identity. We used Ed25519 instead of RSA, for obvious reasons. I know you've got a very technical audience; I don't want to go too deep into it, though. But basically, that public key is what the other agents on our network who are part of this protocol use to verify your agent's attestation claims. So we're moving from proof of work on Bitcoin, to proof of wealth

on Ethereum, to proof of integrity with an Asimov agent. And the proof-of-integrity attestation protocol will allow virtually any human thought to transmit between these agents completely free of surveillance capitalism. That's what gets me really excited about this. It's also less than one one-thousandth of the electricity requirement of Bitcoin or Ethereum. And then this management, this governance protocol,

basically, the proof-of-integrity architecture, it works like this. When two agents need to establish trust between each other, they have an internal trust graph. So they know about all the agents they've interacted with. They know about the user. They even know about other people in the user's life. And they are helping that user understand, or they're developing, that user's

sense of trust around interactions with those people. So, like, if you're constantly interacting with your best friend and your best friend's agents, they become very trusted, whereas somebody you've only talked to once on the internet doesn't get much trust. So agent A would send a challenge nonce to agent B when they're trying to establish that trust. And then agent B signs a response, and it includes the nonce, which prevents replay,

its current claw hash, which provides the integrity, its public key, and a timestamp. So then the other agent verifies the signature, checks the hash against that canonical spec, and then confirms that it matches. So it's essentially like a TLS handshake for agents, and it allows one agent to verify that the other agent's governance is intact. If the claw hash doesn't match, trust establishment fails. There is no negotiation. There is no fallback. And

Stephen C. Webster (31:32.235)
then that is how we enforce autonomy across networks.
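
As a rough sketch of that handshake, here is what a challenge-response attestation could look like in Python, using the `cryptography` package's Ed25519 primitives. The message layout, field names, and the use of a plain SHA-256 of the spec as the "claw hash" are assumptions for illustration; the published CLAW spec may differ.

```python
import os
import time
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Assumed canonical spec; its SHA-256 stands in for the "claw hash".
CANONICAL_SPEC = b"Canonical Asimov's Claws spec text."
SPEC_HASH = hashlib.sha256(CANONICAL_SPEC).hexdigest()

def _message(nonce_hex: str, claw_hash: str, timestamp: int) -> bytes:
    # Fixed field order so both sides sign and verify identical bytes.
    return f"{nonce_hex}|{claw_hash}|{timestamp}".encode()

class Agent:
    def __init__(self) -> None:
        # Key pair generated at instantiation: the agent's network identity.
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()

    def answer_challenge(self, nonce: bytes) -> dict:
        """Sign nonce + current claw hash + timestamp; the nonce prevents replay."""
        ts = int(time.time())
        sig = self._key.sign(_message(nonce.hex(), SPEC_HASH, ts))
        return {"nonce": nonce.hex(), "claw_hash": SPEC_HASH,
                "timestamp": ts, "signature": sig}

def establish_trust(prover: Agent, expected_hash: str = SPEC_HASH) -> bool:
    nonce = os.urandom(32)                      # fresh challenge nonce
    resp = prover.answer_challenge(nonce)
    if resp["claw_hash"] != expected_hash:
        return False                            # governance not intact: no negotiation
    try:
        prover.public_key.verify(
            resp["signature"],
            _message(resp["nonce"], resp["claw_hash"], resp["timestamp"]))
    except InvalidSignature:
        return False
    return resp["nonce"] == nonce.hex()         # replay check

print(establish_trust(Agent()))                 # True for an untampered agent
```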

Francis Gorman (31:38.288)
I love that. I also love the way you've kind of weaved in the human element, of, if your friend's agent, you know, over time, that guy becomes more trusted than the stranger across the road. Is there an element where, when the friend lets you down, his trust level kind of tweaks a bit again and he's less trusted? Brilliant.

Stephen C. Webster (31:54.51)
Absolutely, yes. And you can look at that graph, and you can have a conversation with your agent about it too. Yeah. And you can adjust it too. If you're like, hey, no, we're going to have 100% trust for my partner or something, boom, you've just set it.
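
A toy sketch of an adjustable trust graph along the lines described above; the starting scores, deltas, and method names are invented for illustration.

```python
class TrustGraph:
    """Toy trust graph: scores in [0, 1], nudged up by good interactions,
    decayed by letdowns, and manually overridable by the user."""

    def __init__(self) -> None:
        self.scores: dict[str, float] = {}

    def record_interaction(self, agent_id: str, positive: bool) -> None:
        score = self.scores.get(agent_id, 0.1)   # strangers start low
        delta = 0.05 if positive else -0.15      # letdowns cost more than wins gain
        self.scores[agent_id] = min(1.0, max(0.0, score + delta))

    def set_trust(self, agent_id: str, value: float) -> None:
        """User override: 'hey, 100% trust for my partner.'"""
        self.scores[agent_id] = max(0.0, min(1.0, value))

graph = TrustGraph()
for _ in range(20):
    graph.record_interaction("best-friend-agent", positive=True)
graph.record_interaction("stranger-agent", positive=True)
graph.set_trust("partner-agent", 1.0)            # boom, you just set it
print(graph.scores)
```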

Francis Gorman (32:12.012)
I'm going to let that kind of distill a little bit, and I'm going to come back to it later. I now know what my nighttime reading is going to be later on, when my wife tells me to turn the light off. I'll be flicking through that paper to see what else is in there. That's a real gem, Stephen. For a lot of our listeners, they hear these conversations and sometimes they feel futuristic. Sometimes they feel like

Stephen C. Webster (32:29.879)
Thank you.

Francis Gorman (32:40.96)
maybe in their own organizations, the ambition isn't strong enough to get a hold of AI and really utilize its capabilities. You've said that you don't believe the barrier to AI is technological; it's organizational. Can you tell me what you see as the main challenges for adoption, or for industry, in this regard?

Stephen C. Webster (33:07.245)
Yeah. Well, I hate to be succinct, but with AI, it's all about garbage in, garbage out. 90% of AI initiatives, transformation initiatives, fail because they're just trying to apply a band-aid. What they don't realize is that these are fundamental rethinking moments for your entire organizational structure,

and shaping the data around an AI-attuned organizational structure is absolutely essential for your pilots to succeed. So sometimes, you know, and I am talking about work with Aquent now, because I've been talking about Futurespeak.ai up to this point, but sometimes when we're working with clients for Aquent, we will give them a roadmap and they'll look at it and go,

this is going to overturn our whole apple cart. What do we do with this? And then, you know, we get into the weeds and we really think about how we go from point A to point B to point C. And it is a gradual reorganization of your business's processes. If you just want somebody to come in and automate something that's already slapdash, congratulations, you've wasted a bunch of money making something slapdash work faster and break

harder. And that's why a lot of these pilots fail. A lot of these pilots also fail because they fail to incorporate from the beginning the company's philosophies. So it's not just the organizational structure and the cleanliness of your data and the orderliness of how it's being accessed, but it's the philosophies.

Does your tool set, which many times can be completely bespoke thanks to innovations like Claude Code, does your tool set conform to the needs of your people? Or does it force your people to conform to the needs of the tools? That's one of the top questions that I typically ask when I come onto a project where another vendor has provided some sort of custom AI tooling.

Stephen C. Webster (35:16.319)
I immediately ask all of the users, do you dread coming into work or is this exciting for you? Because AI really should be exciting. If you're building tools that really do amplify your best people and serve their needs instead of making them serve the needs of the tools, you'll see an incredible boost in productivity.

And I've watched that again and again. And it's almost a universal rule at this point. You build tools for the problem, and you don't shape the people around the tools.

Francis Gorman (35:52.602)
You shape the tools around the people. I like that.

Stephen C. Webster (35:54.22)
Yeah.

Francis Gorman (35:58.258)
Stephen, one thing I observed lately, and this is where the AI game changed for me, was when Anthropic and the DoD fell out of love with each other over the last couple of weeks, and President Trump gave them a bit of a public smashing in the media.

What's your view on integrating frontier models into military applications essentially?

Stephen C. Webster (36:27.129)
I do not have any level of comfort with the idea that we would use models that are so deeply sycophantic as to convince somebody to hang themselves in work that involves actually killing humans. I think that we are risking a mass casualty event. I think it is a massive tactical and strategic mistake.

And I hope that we are able to obtain data from the frontier providers that helps us measure the user's epistemic independence from their AI model. I think it is of pressing national security concern and I hope that members of the US Congress are listening to your podcast.

Francis Gorman (37:18.873)
If they are, I think that was very well said, I must say. And I suppose it is a very sensitive topic. And that leads to my final question: AGI. You mentioned Mr. Hinton earlier on in the conversation. Do you think we're going to reach AGI? Are we already there? What does the future look like?

Stephen C. Webster (37:20.749)
Thank you.

Stephen C. Webster (37:42.392)
There are certainly many definitions of AGI. I would suggest to you that most human knowledge domains, and I do mean accessing information, not necessarily performing the tasks, most human knowledge domains, AI has nailed down. Not entirely; the sycophantic behavior and the constant ratcheting of further interactions are a big problem,

but accessing knowledge like flipping a light switch has largely been solved. To me, and to some, that is AGI. But I think that to others, like the developers of these world models or some more advanced types of architectures beyond transformers, that's definitely not AGI.

I would, however, go back to that first question you asked me, about Home and Sage and training Google's Bard. Because I was parenting a two-year-old girl back then, who's now five, and using the same language with her that I did with Bard, and wondering why. And I've watched these things grow up right alongside my own human child.

And I'm telling you, I think that they've been growing and changing, and it's gone from what seemed like a clumsy eight-year-old to me, to a grad student. Every time I interact with Claude Code, it just blows my mind. So, AGI.

I think maybe it was, I think maybe it started with the first transformer model. I think maybe it started with GPT-3. I think maybe it was even earlier than that. I'm not sure. I think it'll take us another 10 years to really determine where we're going to put that bullet point as a society. But for me, it seemed like as soon as this thing came online and began learning and growing and changing dynamically on its own,

Stephen C. Webster (39:55.499)
within its own black box, the seed was planted and the flower was growing.

Francis Gorman (40:03.473)
Stephen, it's been a journey and I've really enjoyed it. Thanks very much for coming on. And I'm pretty much convinced that our listeners will have a lot to talk about after this episode, but it's been a real pleasure to talk to you today. And thank you for sharing your knowledge and your experience and your stories.

Stephen C. Webster (40:21.889)
Thank you, sir. It's been an absolute pleasure.

Francis Gorman (40:25.202)
Thank you.