Is That Even Legal?

Can AI replace Human Counsel?

Attorney Robert Sewell Season 6 Episode 2




ChatGPT can draft a motion in seconds, but what happens when the motion is polished nonsense and a real person signs it? We bring on Eran Kahana, a practicing attorney and Stanford Law School research fellow, to unpack a lawsuit that claims OpenAI caused harm by enabling AI-generated court filings and effectively “doing law.” The story starts with a settlement, a case of buyer’s regret, and a flood of ChatGPT-fueled motions that leave courts and opposing parties paying the price.

From there, we dig into the heart of legal AI ethics: hallucinated case citations, confident sounding errors, and why “it passed the bar” marketing can create dangerous expectations for everyday users. Eran makes the case that the better frame is often product liability, not unauthorized practice of law, because foundation model developers knowingly ship tools that can fabricate authority while still sounding right. We also talk about the practical reality inside law firms, where AI can save time when used for brainstorming, but can create real exposure when lawyers treat it like a research engine.

We close with the consequences and the future: Rule 11 sanctions, professional discipline, looming malpractice claims, and whether malpractice insurance even covers “delegating judgment to a machine.” Then we zoom out to AI governance and guardrails, including the idea of jurisdiction aware restrictions and stronger refusal modes for legal conclusions. If you care about legal tech, generative AI, and the future of legal practice, hit subscribe, share this with a lawyer friend, and leave a review so more people can find the show.

Although AI is not ready for the courtroom now, Eran says just wait. We won't even recognize "justice" a decade from now.

Why The Law Feels Confusing

Bob Sewell

Is that even legal? It's a question we ask ourselves on a daily basis. We ask it about our neighbors. We ask it about our elected officials. We ask it about our families. And sometimes we ask it about ourselves. The law is complex, and it impacts everyone all the time. And that's why we are here. I'm attorney Bob Sewell, and this is season five of the worldwide podcast that explores that one burning question: is that even legal? Let's go. Today's guest on the show is Eran Kahana. Eran is a practicing attorney. He's also a research fellow at Stanford Law School. He's on the advisory board for the Stanford Artificial Intelligence and Law Society. And he's pretty much an expert in all things legal involving AI. Eran, welcome to the show.

Eran Kahana

Thanks for having me, Bob.

Bob Sewell

You know, I wanted to have you on because I saw an interesting article a few weeks ago, and it's something that's been stewing in my mind for a bit. So this is what happened. There's a litigant, and she ends up resolving a case she had against an insurance company. She had an attorney, she gets done with the settlement, and then she has buyer's regret. She's like, I think I could have done better. And this is not uncommon. Buyer's regret after a settlement is not uncommon. And she takes her communication with her attorney, all her emails, and runs it through ChatGPT. And ChatGPT spits out an answer like, you're right, your attorney does suck. I mean, it didn't say it that way, but essentially. This emboldens her, and she decides she's going to reopen this case. And she files just an obscene number of motions, sends subpoenas, notices, and it's all generated by ChatGPT, and it's all spurious. It's junk. I mean, it's trash. And the insurance company is, look, frustrated, right? I'm certain this was not their first experience litigating against ChatGPT. And they file a lawsuit. And the lawsuit says, hey, ChatGPT, OpenAI: you're practicing law without a license. That's what you're doing. And you've caused us damage. Therefore, pay up. Okay. That's where the lawsuit goes. And I start to think about it. I mean, part of me feels protective of my occupation, right? If ChatGPT can draft a motion, where am I going to be? So part of me wants ChatGPT to be in trouble. But the other part of me says it's just a tool. What's going on here? You don't need to comment on whether or not this is really practicing law without a license unless you want to. But what's going on here?
Is this the future of law?

Why This Looks Like Product Liability

Eran Kahana

Well, let me start by saying that I don't think it's the future of law, because, at least for our profession, we're dealing with tools that are not sufficiently sophisticated to replace us. Whether or not it happens at some point, I don't know. It could happen. But not in my career, or let's put it this way: in the foreseeable 20 years, I'd be very surprised if a generative AI system were actually capable of that. Could be some other AI comes out that blows this away. You know, in five years we may talk about generative AI like we talk about black-and-white televisions. It would just be completely irrelevant, a vintage kind of discussion. But what's going on is that, and I wrote about this way back in 2012, I don't know if you read the blog post that I did at Stanford, but I don't want to say I saw it coming, but I did. I saw it coming when Siri was released, I think in 2011, and I said, wow, we're going to get to a point where there will be applications that are going to give legal advice. So I started writing about it in 2011. In 2012, I sharpened it, then there was a follow-up in 2018, and another follow-up in 2021. So I've been dealing with this, stewing on this issue, for some time. What's going on is that we have these foundation model developers like OpenAI with ChatGPT, and they are developing extraordinarily capable tools that sound incredibly convincing, and in many respects they do a really nice job. I use AI, almost all of the different applications, every day in what I do. But it's a behavior that has to be shaped around a tool that you need to perfect. And, you know, Formal Opinion 512 doesn't address this, obviously. It addresses it at a very high level with the competence rule and Comment 8.
But it is one of those things that requires a lot of attention from us as practitioners using these tools. Now, we know the subject matter we're dealing with. So if we're paying attention, and we know a lot of lawyers are not paying attention, but if we're paying attention to the output, we're seeing a lot of holes. This is not right, this is not the way it is. You're fabricating information, and you sound great at it, by the way. You sound like you know it really well, but you don't. And this is the problem that, you know, Graciela Della Torres from the Nippon Life case encountered when she was using ChatGPT. It was telling her exactly what she wanted to hear. Yeah, your attorney is gaslighting you, he's giving you bad advice, all this kind of stuff. And she just hit enter, enter, enter, each time getting yet another filing that she can, you know, flood the court with. And this is a very difficult problem, not just from the court's perspective but from our practice perspective, right? Our work becomes, I don't want to say difficult, more tedious, because I have clients now that say, hey, I just ran what you sent me through some AI thing, and this is what it says. What do you think? That kind of thing. So it causes more work. And I'm thinking, well, can I charge for this kind of thing? With the courts, it's costing them more time to go through this, right? And the party that has to respond to these filings, they have to go through this. There's a lot of cost there. So there's a taxpayer cost, right? The courts are paid for by taxpayers, so there's a cost to every single one of us who pays taxes. But there's another cost to the end users.
The end users are given a tool, in the OpenAI case, the Nippon Life case, that OpenAI boasted about very clearly. When it first came out, you couldn't go through LinkedIn without seeing at least three or four posts on my feed about how it passed a bar exam. Well, if you're making those types of claims, you're clearly saying this can do law, right? That is an express warranty in my book. And so they are creating these tools that really, and this is what I argue in the Stanford post, present a product liability issue. This is not unauthorized practice of law. This is really a product liability issue, where these developers are putting out applications that they know very well hallucinate and have a boatload of other problems. And there are no sufficient guardrails put in place, although they could be put in place, because these tools are designed to agree with you. They're designed to sound convincing, they're designed to give you what you want, because that's how they keep it going, right? If the tool gave you things that, even as an uneducated legal user, looked like garbage, you probably wouldn't use it. But it's designed to give you things that look good. I, as a lawyer with almost 30 years of experience, look at some of this stuff, and on its face it looks okay, until I start reading it, right? There is no such protection for a Graciela Della Torres or other similar users. And I think this is just the beginning, but the Nippon Life case is one where I hope we will see the opportunity for this to really be about product liability, and to say that to the developers. That's not just OpenAI; they're just the ones on the hook right now, but it could be others.
I mean, it could be Gemini, it could be Claude, Grok, you know, others, and then the ones that create wrappers around them, like Harvey and some of these other applications.

Bob Sewell

And I think that's where you got it right. Because when you're using the product, it doesn't say, by the way, I'm guessing. It doesn't say that, right? It gives you the impression that it really does know. And I had an experience with this. I practice in a unique area; there are not a lot of people who practice in my area, and there's not a lot of case law in our state on it. So I sit down with an associate and a legal assistant, and I say, here's the issue, and I walk them through the theories. You're not going to find case law on it, but you're going to find this statute and that statute, and I walk them through the theories of this particular area of law and how I think the court's going to respond to these particular arguments. So make these types of arguments, find these statutes, look at this law review article, that type of stuff. I get back a motion that was drafted, and it sounds fantastic. And I'm reading it, and I'm like, this is exactly what I was looking for. And then I start checking it out. It's like, wait a minute, that's not what that case says. I've read that case a million times. That's not what that case says, and that's not what that statute says. Completely wrong. So I go and say, hey, don't use that again. This is a research tool. Don't do that again. But when you read the motion, it was convincing. And it was really close to what the law was and what I thought the law should be, but nothing it cited was appropriate.

Treat AI As Brainstorming

Eran Kahana

Right. So this is the problem I mentioned earlier. The tools are designed to sound convincing. They want to make you happy, right? That's how they get you to keep paying. This is not going to a CLE and saying, check the box, did the training, I'm good. No. And if you read Opinion 512 and some of the other state clarifications on this, you come out with the idea that all I need to do is some CLEs. I completely disagree with that. I think you need to become an expert at using these tools. And being an expert means that you use them day in, day out, and you not only get good at identifying where their flaws are, you identify what they're good at, and you stick to what they're good at, and you don't fall prey to the things they're bad at. For example, the cases you cited. The other thing they're really bad about is citing statistics. Horrible. Then when you challenge them, they go, oh, you got me, you're right. When you push on it, they give you these answers that are, in a way, infuriating. Well, why did you even go there in the first place? But you have to get really good at it. And that's a behavioral mind shift that lawyers don't have. We are trained at the level of: you attend a CLE class or some training for Westlaw or Lexis or some other application, and you're good to go. This platform, and I'll call it a platform, because AI is not an application, it's an entire platform, is a completely different animal. And I will say one thing I would push back on a little bit. I don't think it's a research tool. I think it's a brainstorming tool.

Bob Sewell

Yeah.

Eran Kahana

If you treat this as a research tool, I think, just from my practice, you're setting yourself up for a lot of disappointment and a lot of extra work. And the good thing is that if you use it as a brainstorming tool, and we obviously have to be very careful, we don't put any confidential information in there, we're talking about high-level ideas and things like that, you're using it more as: what do you think about this? What do you think about this? Rather than: find me this case. We use Harvey and CoCounsel, and even there, and I'm not a litigator, but I have cases that I have to refer to, I still look at them, because I cannot be in a position where somebody asks me, and in my practice it would be a client, I've had clients say, how do you know this? And I can't say, well, the AI told me, right? You look like an idiot. So you have to say, no, I looked at this case. The AI may have pulled it in as part of what I was doing, but I looked at it. But I start from the brainstorming part first. I try to distill what I'm looking at to the best possible level, so that I am giving it instructions as to what I need from it, rather than letting it do the work, you know, find me a brief, draft me a motion, things that are very open-ended like that.

Bob Sewell

Yeah, and I found that's when I've been most effective with it. The other day, I needed some requests for admission. We have a closed AI tool, so it doesn't release our client information outside the firm. In this particular case, we have a lot of the goods, the receipts, if you will, on the defendant, and we had shared the receipts inside my complaint. So I toss the receipts and the complaint into my AI tool, and I say, I'm interested in creating requests for admission. Create 30 for me that all revolve around these particular issues, right? And I think I used maybe one of them. But what I did was say, ah, okay, that gave me this idea, and that gave me this idea, and that gave me that idea. And now I have five or ten requests for admission that are solid. And it probably saved me an hour of time where I would otherwise have sat there thinking and thinking and thinking to create my ten requests for admission. So there you go.

Eran Kahana

So we talk a lot about generative AI, but my first interaction, when I started at Stanford, this is now 17-plus years ago, right about the time I started, the center that I'm at at Stanford Law School released a product called Lex Machina, which then got sold to LexisNexis. And that's not a generative AI tool in its nature. We don't need to get into the engineering of it, but that tool was the first one where you could actually develop, like, NFL stats on cases, right? You could ask, how long does this type of case take in front of this judge with this type of plaintiff, that kind of thing. So you could develop really cool things very quickly, and there was no issue with hallucinations as we are dealing with them today. So when people talk about AI in the practice of law, most people are talking about generative AI and not understanding that there are other types of AI, like Lex Machina, which began way before generative AI was even a thing. Maybe somebody was dreaming about it, but I don't know who it would be. So there's power in these tools. We're just dealing with one type, the one everybody's talking about. And the generative AI tool is one that is very dangerous. It's like buying a welder. I was looking at buying a welding machine for my garage, and I was reading about all the bad things that can happen to you. You get burned, you go blind, gas inhalation, all this kind of stuff. I was thinking, this is a great tool, you can do amazing things with it, but if you're not careful, you're really going to damage yourself, which is what we're finding routinely in these cases.
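The kind of descriptive case analytics Eran describes, how long a case type takes in front of a given judge, doesn't need a generative model at all. A minimal sketch in Python, using made-up docket records; the field names and data are hypothetical, not Lex Machina's actual data model or API:

```python
from collections import defaultdict
from statistics import median

# Hypothetical docket records -- illustrative only.
cases = [
    {"judge": "Judge A", "case_type": "patent", "days_to_resolution": 410},
    {"judge": "Judge A", "case_type": "patent", "days_to_resolution": 530},
    {"judge": "Judge B", "case_type": "patent", "days_to_resolution": 290},
    {"judge": "Judge B", "case_type": "contract", "days_to_resolution": 180},
]

def median_duration(records, case_type):
    """Median days-to-resolution per judge for one case type."""
    by_judge = defaultdict(list)
    for r in records:
        if r["case_type"] == case_type:
            by_judge[r["judge"]].append(r["days_to_resolution"])
    return {judge: median(days) for judge, days in by_judge.items()}

print(median_duration(cases, "patent"))
# → {'Judge A': 470.0, 'Judge B': 290}
```

Because this is plain aggregation over real records, there is nothing to hallucinate; the output is only ever a summary of the input data, which is the contrast Eran is drawing with generative tools.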

Bob Sewell

Yeah, and so, okay. In the litigation context, if Bob Sewell files a motion with no merit, total trash, right? There's a remedy for that, and that remedy is Rule 11. There are other remedies too, I won't bore you with those, but that remedy is Rule 11. Rule 11 says if you file something and you know it's false, you know it's bad, you're wasting everyone's time, you could be sanctioned. Okay, that's my layperson's interpretation. So we have that remedy. Well, attorneys are using this generative AI, and litigants are using it, and they say, hey, draft me a motion that says X, or draft me a complaint that says Y, and it does, and they file it. These litigants have no idea whether or not it's right or wrong. These attorneys that would be so negligent as to do that have no idea whether it's right or wrong. They haven't paid attention. Do we need an updated guardrail in our court system to protect us from the costs and expenses we're going to see from this type of misuse of the product?

Eran Kahana

No, I don't think we do. I think the rules are very clear. What we need is judges to start, you know, hitting people with a five-pound hammer on the head. The sanctions are ridiculous monetarily; they're not even rounding errors, right? So if what we want to rely on is monetary sanctions, then we also inevitably slide into a discussion of whether or not the monetary sanction was sufficient. Will it deter future bad behavior? Look, even the bad press is not deterring lawyers. So it's like, what else do you need? What else can we do here with lawyers? There's a handful of cases where lawyers have been sent to the professional responsibility board, but to my knowledge, it has not yet reached the level of a lawyer getting disbarred. And think of the damage lawyers are doing here, not only to the profession but to their clients. So I'm surprised, and I don't track this very carefully, by the way, but to my knowledge there has not yet been a case where lawyers got sued for malpractice for using these tools. I think it's just a matter of time before that happens. And it will be very interesting to see whether malpractice insurance actually covers this type of behavior, because the lawyer, as one judge put it, has abdicated their professional responsibility obligations to a machine. And the way the insurance provisions are written, they protect you from your application of your judgment, your misjudgment, not the abdication or delegation of judgment to a machine. So there is an argument to be made that even if you have malpractice insurance, you're probably not going to be covered if you basically abdicated the responsibility of interpretation to a machine.
So anybody who thinks, oh, well, we have malpractice insurance, you'd better be thinking about this a little more closely, because I think you might get a different answer. And again, I'm not aware of a case showing this yet, but it's definitely going to come, I think.

Guardrails Developers Should Build

Bob Sewell

A lot of your research, I understand, focuses on governance and guardrails that we may need to put around AI. What are you thinking we should do? It's not going away, right? AI doesn't go away. We don't ban it; it's not going to be banned. What do you think?

Eran Kahana

Well, again, I'm not a software programmer, so I wouldn't know how to code this. Although, having said that, I could probably do it today with Claude Code or some other tool; you don't need to be a coder anymore. But I wrote back in, I think it was 2018, about guardrails. There are things you can put in place in the system, and this would be up to OpenAI and Anthropic and Google DeepMind. They would need to put very clear instructions around what this tool is allowed to do. Now, they will claim, and I'm not saying they're lying about it, that they don't even know what these tools do. They do things that are not clear. And there's a phenomenal podcast, I highly recommend it, where Lex Fridman interviewed Roman Yampolskiy, who's an AI researcher, I think out of Louisville, Kentucky. Very, very interesting podcast. And he says there, and again, I'm not a programmer, so I take the experts' word for it, that the LLM engineers, the large language model engineers, don't fully understand the behavior of these models.

Bob Sewell

Yeah.

Eran Kahana

So if we agree with that, then that basically says, again, tying it back to the product liability case, that you are putting stuff out there that is basically defective. So we're back to the product liability question. Now, no product is going to be 100% safe, right? That is not a societal demand, not a normative demand that we've made. Cars are not 100% safe. Airplanes are not 100% safe. There is a social tolerance for a certain amount of error. But as we look at OpenAI and some of these other foundation model developers, I think there's a good argument to be made that they have not reached the level of operational safety that they could have injected into their tools. So what is the deterministic guardrail here? What is it that says, for example: you're asking me for a legal conclusion, I am not going to give it to you? We see this occasionally; sometimes when we do things, it says, I'm not going to do this. I had this with Midjourney, where I was asking it to do something and it said, well, this would be a copyright violation, or something of that nature. So there is some of it, I just don't think there's enough. Going back to 2012, I talked about tools having jurisdictional awareness, right? If the AI knows that it's in the United States, and it can know that because it has situational awareness in terms of an IP address, it knows that it can't deliver legal advice, right? In the United States, that's not allowed. It may be allowed in Poland, I don't know where it would be, but I wouldn't say it's a blanket restriction. I would say there has to be a more nuanced type of attention by the developers to what they're doing. And I'll end with just saying this.
I think they need to be developing a really good story for the court, to say: look, we've done everything possible, feasible, knowable, in the state of the art, to make sure that this is not giving legal advice. And yes, it can be misused.

Bob Sewell

Yeah, it's interesting. And how that happens, I don't know, because you're right, the programmers who do AI claim they don't know what's being spit out. It's using the large language model, and that isn't saying one plus one is two; it's something entirely different. I will say, though, that in the programming arena, I have friends who program at high levels for companies, and they're using it similarly to the way law is using it. They'll say, hey, go and create this code for me, and it'll create the code, and then they'll talk to it: well, no, make this change and make that change. When you're at a certain level, it just speeds up the coding. They're looking at it and saying, no, this is wrong, or this isn't going to work with this other stuff. And they're having similar problems with the young coders who are coming up, because they'll have something spit back at them, and: you put this through Claude, right? Yeah. And you didn't check it first. Right. So don't do that again, because it doesn't work. You have to check it, and you have to do whatever they do in the coding world. It can do simple things, but it can't do the complex things without human intervention.

Can AI Replace Human Counsel

Eran Kahana

And this is the advice I give; I do a lot of the CLEs on the ethics of this stuff. One of the takeaways I have there is: become an expert. And the only way to become an expert is to do this every day. Every day. Read everything you can find. I devote at least two hours a day to reading everything I can find: LinkedIn, white papers, system cards. I read everything I can. Now, do I understand everything? No, I don't. But it's a matter of being curious enough that over time you start developing the ability to connect the dots. Now I see how this is working. Now I see where this is not working. And if you do this day in, day out, and some people probably catch on really quickly and become really good at it, that's the difference between the lawyers who make it and the lawyers who don't in this day and age, so to speak. The people who get to know AI really well, who use it really well, will be differentiated from those who use AI like Google, typing a search into the address bar kind of thing, like we used to do. If that's the level of your interaction with AI, I think you should really stop, not even pause, and recalibrate, because this is a whole different animal that requires very careful study. And it's a daily thing. Look, we have, I don't know, 10 hours a day to work. All those 10 hours, you should be mindful of how you're using this tool. Do you know what the limitations are? Am I extracting the best I can? Am I doing it the best way I can? I'll just give you one example. In Claude, if you're not using skills, you're missing out. You are not using this tool properly, I would argue. I would even go as far as saying you're probably in violation of, you know, the rule of competence.
Because this is the state of the art with respect to Claude, and obviously other models have skills-type things. But if you're not using that, you're not up to speed with how this works. So again, this is not a check-the-box CLE exercise. This is a daily thing of understanding what this tool is. And by the way, my Claude, I use co-work a lot, I think almost every day it's prompting me to update the system. Now, I don't have time to read the updates, but I know enough to say this thing is evolving almost on an organic level. This is not something where you do an update once in a while, like with Excel or Word or Microsoft 365. This is a whole different setup.

Bob Sewell

You know, uh you mentioned before that you don't see AI replacing attorneys, maybe someday in the future, and maybe it that happens, maybe it doesn't. I have no idea whether or not attorneys will replace. I don't see it happening. And and I'll explain why, then I want you to tell me if you think I'm right on the right track for this. So, for example, in my area, probate and trust litigation, people come in, and the role of counselor is really takes on a new new meaning. And so I had this little old lady come in and she went through a period of really bad health, and she was manipulated during this period of bad health, and she was bullied, and she lost her house to a nefarious family member. And so I say, okay, Frank. Her name's not Frank, but we'll just say her name's Frank, the protected protector. Frank. Tell me about what happened. And there was she starts to tell me about this and the her experience, and none of it rose to the level of undue influence. And and I look at Frank and I say, Frank, I'm I'm hearing what you're saying. I don't think you're being truthful with me. And I walk her through the emotion of it all. And I said, I think you're trying to be brave. And she breaks down crying. And then she says, Yeah. I said, Now let's talk about what really happened. Did this happen? Yeah. And this during this period were you taking, you know, pain medications. Yeah. And and did people intimidate you? Yeah, they they intimidated me. They yelled at you, yeah. And they told you that they weren't going to take care of you unless you gave money. Is that that yeah, that's what happened. They explained that to me. AI doesn't do that. AI can't read facial expressions. AI doesn't understand the human experience the way a human understands it. AI doesn't have the ability to counsel in the same way. Where, you know, and so at the end of the day, my tool is I talk to the client, I hear their story. 
They often don't understand their own story, and I craft that story to reach legal facts: what does this story mean based on causes of action known to the law? I don't see AI taking over that.

Eran Kahana

So, I agree with you. But one of the things that I think will change inevitably is that there will be a generation, toddlers today, that will be perfectly happy to accept any type of output, or legal advice in this case, from an AI. They won't have the same hangups our generation does. They'll be completely fine. The user interface by then will be completely different, and what we see today will be laughable, almost like a DOS prompt in retrospect. They will have a very different user experience, right? The normative reconfiguration will be that we are willing to accept recommendations and advice from these AI entities. Maybe it's an actual physical robot versus just a screen, right? So that will change. But another thing I'll highlight for you is that everything you said holds in theory, and the reason I modify it with "in theory" is because there's a high level of sophistication required here that I know does not exist at this stage: reading human emotion, reading human expression, and being able to decipher that into legal conclusions and legal guidance and things like that. But having said that, you can distill it into an algorithm. I was listening to Demis Hassabis from Google DeepMind saying, as I understood him, that pretty much anything in life can be distilled into an algorithm, meaning we can actually describe it mathematically. So if we can mimic Bob Sewell listening to a client, if we can mimic that sufficiently, then does it really matter? This gets into some interesting philosophical questions, but does it really matter that it's not a human doing that, but a robot?
So the mimicry here has to be so sophisticated that it passes the average human's cognitive threshold, if you will. And at that point we accept it, because it's just so convincing and it fits our worldview and everything else that it's sufficient. We've made it sufficient. Now, of course, there's the issue of cost and all that kind of stuff, so there are a lot of boundaries and a lot of hurdles to be crossed. But the ability to render legal advice that is very sensitive on a personal level to an individual, I think we're on track to do that. And I'll just say one more thing about this: studies by Anthropic and OpenAI and DeepMind are showing that people are using these tools for very personal things, and it seems like the tools are delivering in a very, I would say, surgical manner that is being bought by the end user, to the point where they go back over and over again. So mimicking the "I care about you as a client," it's just a matter of time before we get there.

Bob Sewell

Yeah, okay. Man, I'm such a skeptic. I hope you're right and wrong at the exact same time about that. When you were talking, it reminded me of a body of research in the healthcare industry involving race. They started pairing patients with doctors who look like them and are from the same culture, and they say, okay, go in and get your advice, and then they later come in and survey: how did you feel about your advice? How open were you? And as it so happened, if your doctor understands or is part of your same culture and race, you tend to be more willing to open up. And that's essentially what you're saying: AI is going to have to get to the point where you feel comfortable that it understands you.

Eran Kahana

Well, again, what we're talking about is within our generation's framework. Try to disassociate yourself from now until 30 years from now, and think about it from the perspective of people who will be in their 30s and 40s then and are really just about a year old today. They are going to have a very likely different approach to an AI giving them advice of any sort. Now, if it remains true that I am more likely to accept legal advice from somebody whose physical appearance is more appealing to me, well, that's very easy to change. All you have to do is watch X-Men: Mystique would change herself to appeal to whatever somebody likes, right? The ability to morph like that is theoretically possible. But if you don't like this one, Bob, it's like going to the eye doctor when you're testing your vision. Is it number one or number two? Well, if you don't like number one, you get number two, and if you don't like number two, you go to number three. You can select which entity you want to be dealing with. And that could be done even online, right? You can say, this is what I want my lawyer to look like, or my doctor to look like, or whatever it is.

Bob Sewell

Eran, this has been a fascinating conversation. Thank you for coming on.

Eran Kahana

Thank you so much, Bob. It's been a pleasure.

Bob Sewell

Thanks for listening to the podcast. Is That Even Legal? is now listened to in a hundred countries and available on virtually all podcast platforms. Leave us a review, send us show ideas, and do so at producer@evenlegal.com. Don't forget, as smart as we sound and as lovable as we are, we are not your lawyers, and we are not giving you legal advice. But if you need some legal advice, there are some great lawyers out there, and we are always ready to help. See you next time.