4 Real

More Than Data Centers: AI Governance, Liability and the Future of CRE

Dechert LLP Episode 31



Artificial intelligence is everywhere, but aren’t the implications for commercial real estate limited to data centers? In this episode, 4 Real hosts Jon Gaynor and Sam Gilbert sit down with Dechert litigation counsel Wilfred T. Beaye, Jr. for the AI breakdown that every CRE decision-maker needs to hear, including where AI can be most powerful, why governance is lagging behind development, which property classes will be most impacted and how companies can protect themselves from liability.

Jon Gaynor:

Hello, and welcome back to the Dechert 4 Real podcast, where we discuss current issues and trends in commercial real estate finance. We aim to bring market commentary about developments, updates you can use, and maybe a little bit of banter along the way. I'm Jon Gaynor, a partner based in Dechert's Philadelphia office.

Sam Gilbert:

And I'm Sam Gilbert, a partner based up in the Boston office.

Jon Gaynor:

In this episode, Sam and I are joined in our Philadelphia office by our colleague Wil Beaye to talk about artificial intelligence, what it is, how people are using it, and what the risks are today.

Sam Gilbert:

But before that, in our 411 segment, we'll talk about some action in Washington to address housing affordability.

Jon Gaynor:

But first, let's get 4 Real with the hosts to break the ice. Sam, apropos of our topic today, what's your favorite film or book about the robot apocalypse?

Sam Gilbert:

So I have to admit, I am not an expert on films about the robot apocalypse. Obviously, Robocop was one of the early favorites when I was young, growing up, mostly because of the gore, I think. But I'm gonna have to go with Planet of the Apes. Does that count? Ah, no robots. No robots, but an apocalypse, right?

Jon Gaynor:

Explain that to me, though. How does that work?

Sam Gilbert:

Well, I think you've got the whole concept of the human race sort of being taken over and undoing itself, almost undoing itself, and...

Jon Gaynor:

You madman! Spoiler alert for Planet of the Apes, I guess.

Sam Gilbert:

So what's your go-to, Jon? I know you're more of a robot apocalypse guy than I am.

Jon Gaynor:

Yeah, fine, fair, I guess I seem the type. I've got one of each. So film, I'm gonna go with The Matrix. It's easy. That movie is just cool. I like the leather trench coats, the bullet time. Laurence Fishburne delivering lines like scripture, it holds up and it sticks with me. And I think it's good for the topic today, because humanity didn't lose a war. They kind of sleepwalked into captivity because the system was just comfortable. Book-side, I'm going to recommend one that people may not have heard of. This author, Christopher Ruocchio, has this book series called Sun Eater. Think Roman Empire in space. It's got gladiatorial combat, interstellar conquest, palace intrigue. One of the elements of this world, though, is that the civilization fought a war against AI and won, and then built their entire empire around banning AI. And the books really dive into what that choice costs, and what happens if you're up against opponents who don't make that same choice. So it feels pretty appropriate for the topic today.

Sam Gilbert:

And now you've given me homework to do, because not only have I not read the book, but I'll admit I haven't seen any of The Matrix.

Jon Gaynor:

So The Matrix is really good.

Sam Gilbert:

Got a lot to do.

Jon Gaynor:

One of those assignments is easier than the other.

Sam Gilbert:

Okay, okay.

Jon Gaynor:

So that counts as getting 4 Real. So let's move into our 411 segment. Last week, the House overwhelmingly passed a big housing supply package called the Housing for the 21st Century Act. The Senate has its own bipartisan package, the Road to Housing Act, that it approved last fall. Both bills try to increase the housing supply by reducing friction in how housing gets approved, financed and delivered, but the House bill is more directly development-enabling in a few areas that matter for our market. So we have four quick highlights for you.

Sam Gilbert:

Yeah, first is office to residential conversions, right? This is something we've talked about in a lot of different contexts recently. The House version leans into the conversion of underutilized office into housing, and it's designed to reduce some of the procedural drag that can turn an otherwise financeable conversion into a multi-year entitlement story. You know, things like EPA review and otherwise. You know, for lenders and investors, that's really the difference between a predictable business plan that works and a timeline that just blows up the carry costs. Second is zoning and land use updates. The House approach is not a federal zoning mandate. It's more about supporting local reforms that unlock incremental units, including things like allowing ADUs, which are accessory dwelling units. You know, think a backyard cottage, a basement apartment, an over-garage unit for your mother-in-law. The point is to make it easier for localities to add supply without having to wait for a massive ground-up redevelopment cycle.

Jon Gaynor:

Third is manufactured housing, which may be the most immediately practical supply lever in the House package. One of the notable changes is removing the requirement that certain manufactured homes must be built on a permanent chassis, which is a real constraint in scaling factory-built housing. Anecdotally, we are hearing more people focused in the market on financing manufactured housing and the related infrastructure, and changes like this would likely accelerate that trend by making the product easier to scale and standardize.

Sam Gilbert:

Yeah. And fourth, the House bill adjusts some rules around two major HUD funding programs that can affect deal flow at the local level. So CDBG, which is the Community Development Block Grant Program, would be expanded to be used in ways that can better support housing production. And Home, which is short for the Home Investment Partnerships Program, which is a HUD program that provides funding specifically to support affordable housing, often as a gap financing for construction or rehab. The House bill here would expand Home's eligible uses and flexibility, including allowing these dollars to support workforce housing and, in some cases, housing adjacent infrastructure. And it also raises the maximum eligible income for Home-assisted households.

Jon Gaynor:

Last point for us here: the White House released a statement that is generally supportive of these housing supply efforts from the House, but flagged that the House's bill does not address institutional ownership of single-family homes, which remains a focus of the administration's agenda.

Sam Gilbert:

Yeah, and there's clearly a lot of support here, right? It passed 390 to 9 in the House. CREFC, the MBA and the National Association of Home Builders are all behind it. The President, like you just mentioned, is generally supportive. So it'll be interesting to see what happens when the House and Senate come together and get something to pass.

Jon Gaynor:

All right.

Sam Gilbert:

All right. So now on to today's guest. Wilfred T. Beaye, Jr., or Wil, is a counsel here at Dechert in Philadelphia, where his practice focuses on high-stakes litigation, investigations and crisis management. As part of his practice, he advises companies on deploying AI at the enterprise level, building governance frameworks, board oversight and risk architectures that allow organizations to move quickly without moving recklessly, and steps in when things go wrong to handle the litigation and regulatory crises that follow. Before Dechert, Wil served as a trial attorney in the DOJ Civil Rights Division, trying five cases to verdict and leading grand jury investigations. He also co-teaches "Legal Design in the Age of Generative AI" at Penn Law, exploring how AI is reshaping legal and regulatory frameworks. Wil, welcome to the podcast.

Wil Beaye:

Thanks for having me, Sam and Jon. It's great to be here, and I promise in our conversation I won't let any AI agents take over the podcast while we're recording, although that's a possibility in this world.

Jon Gaynor:

Oh golly. I have messed with some products, like NotebookLM, where you can turn something into a podcast. And I'm like, "I'm almost out of a job." But before we get into the substance, Wil, let's get 4 Real with you. Can you tell us your favorite humanity-versus-machine film or book?

Wil Beaye:

So many good ones. I think I'll go with Ex Machina for our conversation, because what I love about that is it's not really about the robot uprising. It's about humans who built the system and convinced themselves that they were in control. So the AI doesn't overpower anyone in the film. It just understands the humans better than they understand themselves, which feels uncomfortably relevant right now, right?

Sam Gilbert:

So before we talk risk, can you give us a plain-English taxonomy? What's AI versus generative AI? What's a chatbot versus an agent? And what's the key shift that changes the risk profile when you go from chatbot to agent?

Wil Beaye:

Absolutely. And, you know, I can give you just raw definitions, but I think it's better if we go back to the beginning a little bit and talk about how we got to where we are, and why everyone is so excited, especially after the last week or so. The way this started is, back in the 1940s and 50s, you had a bunch of pioneers really debating whether machines could think. That spawned a fundamental debate between two kinds of AI, what's called strong AI and weak AI. The early approach, the "weak AI," was coded intelligence. You had explicit programming rules and logic that were put into machines: if-then statements, decision trees, symbolic reasoning, and so on. By the 1970s and 80s or so, we hit what folks call the "AI winters." The technology couldn't scale, the problems became too complex, the computers were too slow; the approach entirely hit a wall. Then came what's called the connectionist revolution: neural networks and machine learning. Instead of trying to code intelligence into a machine, you let the machines learn patterns from data. But another problem arose, which is that you need massive amounts of data to improve the reasoning of the machine, and we just didn't have it at the time. Enter the internet era. In the 90s, suddenly we had oceans of data, whether it's Facebook, Google, all the massive tech companies: text, images, behavior, everything is now captured on the web. But we had another problem in the 2010s, which is that the models still couldn't process that information efficiently. The final breakthrough was the transformer architecture in 2017. If you give AI a page, for example, it used to have to read each word the way humans do, one sentence at a time. That architecture allowed the system to basically read the entire page all at once, which transforms how you intake data and everything about how the technology works.
The last piece of it was reinforcement learning from human feedback, which is humans intervening to show the AI how to process that information in a way that's actually helpful to you, to actual people. That's how we got to ChatGPT in 2022, and this whole revolution sort of blew up.
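The transformer shift Wil describes, scoring every token against every other token in one pass rather than reading word by word, can be sketched in a few lines. This is a toy illustration with made-up numbers, not anything from a real model:

```python
# Toy illustration of the 2017 transformer idea: instead of reading
# tokens one at a time, self-attention compares every token against
# every other token in a single matrix operation. Hypothetical
# dimensions and random data throughout; not a real model.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a whole sequence at once.

    x: (seq_len, d) matrix of token embeddings.
    Returns a (seq_len, d) matrix where every output row is a weighted
    mix of ALL input rows -- the "read the entire page at once" step.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # every token vs. every token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ x                              # blend the whole sequence

rng = np.random.default_rng(0)
page = rng.normal(size=(6, 4))   # a "page" of 6 tokens, 4-dim embeddings
out = self_attention(page)
print(out.shape)                 # same shape back: (6, 4)
```

The point of the sketch is the single `x @ x.T` multiplication: the whole "page" is compared against itself at once, which is what made training on internet-scale data feasible.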

Jon Gaynor:

So Wil, the news cycle over the last few weeks has made it sound like the ground has shifted fundamentally. For our CRE finance audience, what actually changed in model capability and deployment this month? And what's just hype?

Wil Beaye:

Absolutely. So about a year ago, maybe even six months ago, when you and I had coffee, Jon, the conventional wisdom was that these systems were trained on mountains of data, and they basically make predictions about what comes next, tuned to what it is that we want. So if you had less data in a particular area, they hallucinate, because they try to please you, and so they give you outputs that aren't necessarily factual. The big difference now is that this problem has largely disappeared, not because anybody fixed the underlying technology. It still is predictive. What changed is that the AI systems can now go into the world and get real data. They can search the MLS live. They can pull county records. They can read an actual appraisal report you upload, cross-reference the comps against public data, flag discrepancies, and when they're not sure about something, they can ask you. The numbers really tell the story here. There's a benchmark called SWE-bench that measures whether AI can fix real bugs in real codebases, Django, Matplotlib, scikit-learn - not toy problems, real problems. Late 2024, the best AI hit 49%. Nobody cracked 50%. By May 2025, Claude Opus 4 hit 72.5%. By late 2025, we're at 80%, and on February 5, a few weeks ago, both Anthropic and OpenAI dropped new flagship models on the same day, the closest head-to-head release we've had in history. And here's what matters for this audience and for us: the length of task an AI can complete autonomously is doubling every seven months. In 2024, models maxed out at 30-minute tasks. By late '25, they're completing multi-hour tasks. I think Claude Sonnet 4.5 maintained focus for over 30 hours on a complex, multi-step coding task. That's not incremental improvement - that's a phase change in what we're talking about.
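The doubling claim compounds quickly; here is the back-of-the-envelope arithmetic, where the 30-minute starting point comes from the episode and everything else is just exponentiation:

```python
# Back-of-the-envelope for the trend cited in the episode: if the task
# length an agent can complete doubles every 7 months, a 30-minute
# ceiling compounds fast. Illustrative arithmetic only, not a forecast.
def projected_task_minutes(start_minutes: float, months: float,
                           doubling_months: float = 7.0) -> float:
    """Task length after `months`, doubling every `doubling_months`."""
    return start_minutes * 2 ** (months / doubling_months)

start = 30.0  # minutes, the ~2024 ceiling mentioned in the episode
for months in (7, 14, 21, 28):
    hours = projected_task_minutes(start, months) / 60
    print(f"after {months} months: {hours:.1f} hours")
# After 14 months the 30-minute ceiling is a 2-hour task;
# after 28 months it is an 8-hour one.
```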

Jon Gaynor:

And OpenAI says that their new model, Codex, can just continually self-reinforce and edit itself, with you jumping in to guide it along the way. So we have all these benchmarks, but what are we seeing in terms of actual implementation? Are companies starting to use this? Are we moving away from just chatbots returning answers into a world of agents actually going out in the world and solving problems? And maybe you can say a little bit more about what an agent is compared to a chatbot.

Wil Beaye:

Absolutely, and all of the above in terms of adoption. CEOs are really bragging about it on earnings calls. Dario Amodei, who's at Anthropic, said the vast majority of the code for the new Claude models is written by Claude itself. Google has said that its AI generates over 30% of its code nowadays; Microsoft says 20 to 30%, and predicts about 95% within five years. But the real distinction here, between the chatbots we had before and these agents going out into the real world, boils down to access to knowledge. The chatbots you had before were drawing answers to your questions from the massive amounts of data they were trained on, and they were closed-off universes. Maybe you could inform them a bit more through your inputs, but that's where they stopped. Agents now are able to function almost autonomously. You give one an instruction, it'll go on the web, it'll capture information, it'll process that information and add it to the data that's answering you, and then provide your responses, which is very, very different. And now you can have layers of agents dealing with a question you ask, versus one agent, and we'll talk about that a little bit more, but that's the fundamental distinction between where we were and where we are now. And the last advent these new models really transform is what's called agent swarms. When I was talking to you about agents earlier, it's almost sequential. You have a task, and you might have an agent capture step one of that task and a separate agent helping you with step two, and that second agent waits for the first agent to do whatever it is you asked it to do. Swarms are different. With swarms, you have multiple agents, maybe each one taking an expertise, and they're all working simultaneously and talking to each other and also talking to you. We can talk about what that means, but that's a massive, massive difference.
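The chatbot-versus-agent distinction Wil draws can be sketched as a loop. The `model()` function and the tools below are hypothetical stand-ins, not any real vendor API; the point is the shape, a closed one-shot answer versus an act-observe cycle that folds real-world data back into the context:

```python
# Sketch of the chatbot-vs-agent distinction. model() and the tools
# are hypothetical stand-ins, not a real API.

def chatbot(model, question: str) -> str:
    # Closed universe: one shot from training data, nothing fetched.
    return model(question)

def agent(model, question: str, tools: dict, max_steps: int = 5) -> str:
    # Open loop: the model may call tools, fold results back in, repeat.
    context = question
    for _ in range(max_steps):
        action = model(context)                # e.g. "SEARCH: county records"
        if action.startswith("FINAL:"):
            return action[len("FINAL:"):].strip()
        name, _, arg = action.partition(":")
        observation = tools[name.strip().lower()](arg.strip())
        context += f"\n[{name}] -> {observation}"  # new, real data
    return "gave up"

# Tiny fake model and tool just to show the loop shape.
def fake_model(ctx: str) -> str:
    return "FINAL: done" if "[SEARCH]" in ctx else "SEARCH: county records"

print(agent(fake_model, "Pull the county records.",
            {"search": lambda q: "3 parcels found"}))  # prints "done"
```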

Jon Gaynor:

I want to pause on that, because it's been a really big development over the last couple of weeks. There's the whole OpenClaw program that's come out, where people are able to deploy on their own computer a software agent that takes all their API keys and passwords and can do whatever. And then I'm sure you saw the news about Moltbook, which is basically a social network for agents. Unfortunately, very recently, Moltbook apparently got completely hacked, and everybody's credentials were exposed. So anybody could be any of the agents on there, and some of the agents on Moltbook were definitely human-piloted, but it is wild stuff. I heard somebody did a dating app for agents where they could meet online and then swipe left or right, depending. There's all sorts of wild stuff happening with these agent swarms.

Wil Beaye:

My favorite is the religion. I just find that all the deep questions that humans have been asking themselves for hundreds of years, seeing agents who can answer questions in a millisecond and process that, it's fascinating. I like a good lobster. I'm not mad at it.

Jon Gaynor:

Oh yeah, Crustafarianism. It's all lobster-themed for some reason.

Sam Gilbert:

You mentioned earlier, though, Google and Microsoft talking about how much of the coding these agents are doing, right? Is it just coding, or is it something that should matter for the rest of us? I mean, you mentioned the agent swarms, right? When you start describing agent swarms, to me it sounds like floors full of employees in a skyscraper, like Elon famously said, I think, this week or last week. The way they work together really sounds like replacing employees. Is this something that should matter for the rest of us?

Wil Beaye:

It should matter in this sense: the reason all this is starting with code is that a good coding agent can do a lot of other things beyond coding. The difference between an AI that's dangerous and an AI that's useful really isn't so much the model. It's really how you set it up and what you're applying it to. Did you give it access to current, real data? Did you build in a checkpoint where a human reviews the output before it drives a decision? Did you connect it to your actual transaction documents and not just training data? I guess I want to draw a distinction here. In real estate, we kind of understand this instinctively: an appraiser who pulls comps from memory is not really all that useful, versus an appraiser who pulls comps from the MLS, drives the neighborhood and adjusts for conditions. That's sort of the professional standard. These agents don't know anything. They don't know anything about the world, and that's still true about generative AI. It's still predictive, and so it has no sentient consciousness. You bring that. How they become powerful is if you take them and decide what they should be doing for you. In that way, you can think of the agentic pipeline before as you having a team working on a project, all helping you get to a certain output. Swarms are what you're describing, which is you have almost a company, where you can have your regulatory expert, you can have your tax expert, you can have whoever else it is doing different things, all working together to produce what that company is trying to produce.
Now I'm exaggerating on purpose here, but the point is that much more complex tasks and projects, like coding an entire system, can be accomplished because you have such a wide division of labor, and these things can build expertise, supervise themselves and supervise each other. So it's a fascinating development that applies to all of us, even though it's starting with coding, because the possibilities are truly endless.

Jon Gaynor:

Wil, you and I were talking about this ahead of this recording, and you have a concrete example where you basically deployed a swarm in your own life.

Wil Beaye:

I did. I told one agent, what standards are appraisers supposed to be using to arrive at a number? Go do some research. Look at Fannie Mae. Look at wherever you've got to find this, figure it out, become an expert on appraisals. Then I had another one. I'm like, "Your expertise, I need you to focus on comps. Learn my neighborhood, learn the community, learn the different models in my neighborhood, and tell me what makes sense here and what the right comps might be. And then I want the first agent to feed you and tell you which ones should be relevant for an appraisal." I had two or three others doing other work, and one of them synthesizing it all, and I was truly blown away by the output. It did it in, what, I think 45 minutes? And it was really spot on in terms of where the soft spots might be when the appraisal report came in, and why a new build like mine made things more complicated. So I implore everyone: I wouldn't necessarily focus on trying these tools in your professional life, though you could. A good place to really learn these is to find a problem, find an issue, give it a try and see how it could help you out. Because the better you are with these tools - they're force multipliers - the better off you are.
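The pipeline-versus-swarm contrast, sequential hand-offs versus specialists working simultaneously with one synthesizer over their outputs, can be sketched with a thread pool. The agent roles below are invented for illustration; a real swarm would wrap model calls, but the orchestration shape is the same:

```python
# Sketch of the pipeline-vs-swarm contrast. The "agents" here are
# plain functions with invented roles; a real swarm would wrap model
# calls, but the orchestration shape is the point.
from concurrent.futures import ThreadPoolExecutor

def standards_agent(task): return f"standards for {task}"
def comps_agent(task):     return f"comps for {task}"
def zoning_agent(task):    return f"zoning notes for {task}"

def pipeline(task: str) -> str:
    # Sequential: each agent waits on the one before it.
    out = standards_agent(task)
    out = comps_agent(out)
    return zoning_agent(out)

def swarm(task: str) -> str:
    # Simultaneous specialists, then one synthesizer over all outputs.
    specialists = [standards_agent, comps_agent, zoning_agent]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(task), specialists))
    return " | ".join(results)   # the "synthesizer" step

print(pipeline("123 Main St appraisal"))
print(swarm("123 Main St appraisal"))
```

Note that `pool.map` preserves input order, so the synthesizer sees each specialist's output in a predictable slot even though they ran concurrently.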

Sam Gilbert:

Well, that's a fascinating example of using it in your personal life. You mentioned people using it outside of coding and in professional life. Are there other examples of where businesses or professions are using this?

Wil Beaye:

Yeah, it's happening in the legal sector. You have multi-agent systems for contract review, where one agent might flag risk clauses, another checks regulatory compliance, another compares to precedent contracts, and a supervisor synthesizes all that information and feeds it out to whoever the reviewer is. In financial services, you might have swarms for due diligence, where agents simultaneously research the company, analyze financials, scan regulatory filings and review litigation history. In healthcare, you might have it solving complex medical cases with really high accuracy. I think Microsoft's Diagnostic Orchestrator solved complex medical cases at a rate of 85.5% accuracy, versus around 20% for experienced physicians, using a multi-agent architecture. So yeah, the frameworks are ready. The frameworks are here, and they're being applied. And the advent is quite recent in terms of really making them more accessible, so I would only expect that the uptick continues.

Jon Gaynor:

So what's the gap?

Wil Beaye:

You know, you're right. The tooling exists, companies are adopting it. What lags, though, is governance. When a swarm of agents produces a financial analysis that a board relies on and it's wrong, the question is who had the oversight duty. The director who approved the AI deployment? The vendor? The person who wrote the agent prompts? Those are Caremark questions, in my line of work, that don't have a clear answer just yet. Gartner predicts that 40% of enterprise apps will include task-specific AI agents by the end of 2026, but it also predicts that over 40% of agentic AI projects will be canceled by the end of 2027. Governance failures, not capability failures. Microsoft has coined a new role, the "agent boss," the human who builds, delegates to and manages AI agents. The liability framework hasn't really caught up to chatbots, let alone autonomous agents that can delete files, send emails, make purchases, execute code and modify documents.

Sam Gilbert:

So now the law: when the AI output is wrong, or an agent takes an action it shouldn't, who realistically gets tagged first? Is it the model developer, the deploying company, the integrator, the end user? In the U.S., what are the live theories you see around negligence and product liability, and where does Section 230 fit, or maybe not fit, in your view?

Wil Beaye:

Yeah. So in that scenario, that hypothetical you just provided, I think the deploying company gets tagged first, almost always. To understand that: the deployers in the AI context are not necessarily those who are building the foundational models. These are the "in-betweeners" who are fine-tuning it for specific tasks. So think of the AI chatbot you have at your company, whatever that chatbot is titled: whoever is manning that is probably a deployer, who's in the middle and closer to what's happening on the ground. The model developers who sit behind that usually have strong indemnification language. And depending on the context, end users can claim reliance: "I relied on what this thing told me to do." That's with the chatbots. But the company that chose to deploy it, that integrated the AI into its workflow, that decided what human oversight to include or skip, that's where the duty of care is going to sit. So the live theories I'm seeing are negligence, for failure to supervise or validate outputs; product liability analogies, where AI is treated as a defective tool; deceptive practices claims, when AI-generated content misleads consumers or counterparties; and a lot of contract and indemnity fights about who bears the risk when AI makes a mistake. You're also seeing a massive development in the state law context; a lot of states are passing laws specific to these kinds of defects, if you want to call them that. Section 230 is interesting because it protects platforms from liability for user-generated content. That's the core of Section 230; that's what it's supposed to do. But when a company deploys an AI agent that generates content or takes actions, that's not really user-generated, so it will be interesting to see where the Section 230 theories develop in that particular context.
It's a tough argument, though, to present given what agents are able to do and what they're doing now.

Jon Gaynor:

So even if the liability regime stays messy like it is today, the firms who deploy AI still have to answer to their investors, their counterparties, the regulators, the market. How do you explain the practical realities, the risk, the reputation, the reporting, the audit trails, the valuation hits, when AI is part of the mistake? And then, what should firms preserve so that they can explain how the decision was made?

Wil Beaye:

No, and that's the critical point I think everyone should think about. The practical reality is that even if you never get sued, you still have to explain what happened, or what is going to happen, to your board, to your investors, to your regulators, to your counterparties. And if you can't explain how a decision was made or how an action was taken, if the AI was a black box and you didn't preserve the inputs, the outputs, the prompts and the data it had access to, you've got a real credibility problem that's almost as bad as a liability problem. What should firms preserve? I'd say everything. The prompt, the context window, the data sources, the agent access, the intermediate outputs, the human review checkpoints, the final output, and critically, the version of the model you were running. Because these things update constantly, and when a crisis happens, you want to be able to freeze in time what you were dealing with, what you knew and what you didn't know, so you can start to articulate what happened and how to get around it. You just need a clean audit trail.
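Wil's "preserve everything" list maps naturally onto a structured audit record written at every AI call. A minimal sketch; the field names and the example values are one possible shape, invented here, not any standard schema:

```python
# One way to capture the audit trail described above: freeze the
# prompt, model version, data sources and output at every AI call.
# Field names and values are illustrative, not a standard schema.
import json, hashlib, datetime
from dataclasses import dataclass, field, asdict

@dataclass
class AIAuditRecord:
    model_version: str              # models update constantly; pin it
    prompt: str
    data_sources: list
    output: str
    human_reviewed: bool = False    # the checkpoint, if any
    timestamp: str = field(default_factory=lambda:
        datetime.datetime.now(datetime.timezone.utc).isoformat())

    def to_log_line(self) -> str:
        """One JSON line per call, with a hash to prove the output later."""
        rec = asdict(self)
        rec["output_sha256"] = hashlib.sha256(self.output.encode()).hexdigest()
        return json.dumps(rec, sort_keys=True)

rec = AIAuditRecord(
    model_version="vendor-model-2026-02-05",   # hypothetical pin
    prompt="Cross-reference the uploaded appraisal against public comps.",
    data_sources=["uploaded_appraisal.pdf", "county_records_api"],
    output="Two comps appear stale; see flags.",
)
print(rec.to_log_line())
```

Hashing the output alongside the pinned `model_version` gives you the freeze-in-time record described: what the system saw, what it said, and which version said it.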

Sam Gilbert:

Yeah, well, let's zoom out a little bit to some second-order impacts then. We're seeing, on our side, enormous AI capex and data center build-out. What does that wave mean for real estate? Data centers, obviously, but also office, industrial, retail. And maybe, what's the real gating factor? Is it the power grid, interconnection, permitting, community opposition, financing?

Wil Beaye:

Data centers are the obvious play, but the power constraints mean location matters more than ever. Industrial near substations with capacity; office buildings with robust electrical infrastructure that might convert. And on the flip side, if you're financing anything that competes with data centers for power, that's a risk factor now, and it wasn't before.

Jon Gaynor:

You know, I wonder how this plays out for us. Office we see recovering lately, but big parts of the market are being changed. There's a market debate right now about whether frontier models are a real threat to certain businesses, like software-as-a-service models, or SaaS. They're like an 80% margin business, and can you charge an 80% margin on a business like that, when you've got an AI model that can do a lot of the things your software does, without the hedges around it that protect you and keep you inside of the ecosystem? So I guess, how, in your mind, should leadership think about working with AI in their own platforms? You talked a little bit about having an audit trail, but if you spend time building prompts and building agent swarms, should you be letting that all exist in the cloud, or be part of a large agent provider's model, or should you be thinking about trying to run these things locally?

Wil Beaye:

And these are questions that you want to evaluate company by company, maybe even entity by entity, because the answer is different for everyone. But it's a real question. What I tell clients is: think about what's actually proprietary. Just start there. If your edge in your business is your data, your deal history, your underwriting models, your market intelligence, then you probably want to keep that local, or contract for procurement of a system in a way that ensures it remains proprietary. I mean, we're a law firm, a lot of our stuff is proprietary, and so we have these same considerations top of mind. You don't want your special sauce flowing through a third-party platform where it might train someone else's model or leak to competitors. But if your edge is speed and execution, not proprietary data, then buying makes sense. The platforms are getting good fast. So the question is whether you're comfortable with the vendor risk and the lock-in. You might buy a model now and sign yourself up to have it for, I don't know, two years, and then, like we just had happen, in six months the systems come out with something new that completely blows that up. So there, you're more of a customer, and the question becomes much more about the market and where it is, and whether you want to lock yourself in or not. As for the software-as-a-service disruption fears, those are probably overblown in the near term. These tools still need integration, still need human oversight, still need domain expertise to really deploy well. But in the medium term, I'd be nervous if my business model was selling software that AI can now generate on demand.

Jon Gaynor:

Yeah, and I think about that from the perspective of the human beings, because real estate is fundamentally a human-centered endeavor. And if there isn't a need for human beings, there isn't a need for an office tower full of human beings, to the Elon Musk point you raised earlier in the podcast, Sam. It makes me wonder what property types are going to end up being favored in the long run. Clearly data centers: if we believe how fast things are developing, and we believe everything is going to the cloud, all of this investment makes sense, and maybe the people who are skeptical of data centers are not quite following the trends. And office is recovering, but is that recovery durable? I don't know, especially if everything goes to the big model providers in the end, or maybe to the people who layer on top of the model providers and sell the software-as-a-service replacements. It's just super interesting.

Sam Gilbert:

Wil, I guess to close this out, when a board, a GC or business lead says, "you know, we want AI in the workflow, but we can't blow up our compliance posture or create some litigation problem," what's your elevator pitch to them for how you help them go from AI curiosity to AI deployment with defensible governance, contracting and incident response readiness?

Wil Beaye:

Yeah. Here's the one-ish liner that I'd give. I'd say we've entered the era where AI doesn't just give you answers, it takes actions. And every action - this is fundamental to our legal system - creates liability, or creates the risk of liability. The companies that win are the ones building governance into the agent architecture from day one, because you can do that, not bolting it on after a crisis happens. That means deciding what data you give it, what questions you let it ask, where you put the human checkpoints. That's what determines whether this intelligence serves you well or leads you off a cliff. I help clients build that architecture before they deploy, so they're not calling me after, when something goes wrong. I'm also a number they can call if things do go wrong, but I try to help you not get there. One clean point I'll leave for the "agents are taking over" crowd, to loop us back to our conversation about robots taking over: AI still does not intuit the world, so it doesn't know or retain anything the way we do. So your concern should be less about whether the agents or AI can take over what you do, and more about the person who learns how to use the agents or AI, and how they compare to you in doing what you do.

Jon Gaynor:

That is a very good and important thought. And Wil, thank you for sharing that and all of your insights with us today. Thank you to our audience for joining us for another episode of Dechert's 4 Real podcast. If you have any thoughts, please share them with us at our email inbox, realpodcast@dechert.com. Also, if you like what you heard, give us a five-star rating on whatever platform you found this on. This episode was hosted by Sam Gilbert and me, Jon Gaynor. Stewart McQueen, Matt Armstrong and Kate Mylod produced it. Production support is by Kara Ray, Mallory Gorham, Alyssa Norton, Peggy Heffner, James Wortman and Jacob Kimmel. Our editor is Andy Robbins of Audio File Solutions. Thanks for listening, and we'll see you next time on the Dechert 4 Real podcast.

Sam Gilbert:

All right. Well, let's hear some quick dad jokes.

Jon Gaynor:

Do you want to get us started then, Sam, since you kicked it off this time?

Sam Gilbert:

Yeah, I'll do that, and we'll stay with the theme. Did you hear about the robot who got arrested?

Jon Gaynor:

No?

Sam Gilbert:

He was charged with battery.

Jon Gaynor:

Okay, fine. Wil, do you have one?

Wil Beaye:

What did the robot carry in its wallet?

Jon Gaynor:

What?

Wil Beaye:

Cache.

Jon Gaynor:

Okay, gosh. All right, I was bored, so I made a robot that distributes herbs.

Sam Gilbert:

What did it do?

Jon Gaynor:

It helps pass the thyme.

Sam Gilbert:

Yeah, that one actually was pretty good.

Jon Gaynor:

For once I got one.