Law, disrupted
Law, disrupted is a podcast that dives into the legal issues emerging from cutting-edge and innovative subjects such as SPACs, NFTs, litigation finance, ransomware, streaming, and much, much more! Your host is John B. Quinn, founder and chairman of Quinn Emanuel Urquhart & Sullivan LLP, a 900+ attorney firm with 29 offices around the globe, each devoted solely to business litigation. John is regarded as one of the top trial lawyers in the world, who, along with his partners, has built an institution that has consistently been listed among the “Most Feared” litigation firms in the world (BTI Consulting Group), and was called a “global litigation powerhouse” by The Wall Street Journal. In his podcast, John is joined by industry professionals as they examine and debate legal issues concerning the newest technologies, innovations, and current events—and ask what’s next?
Defamation and AI
John is joined by Robert M. (“Bobby”) Schwartz, partner in Quinn Emanuel’s Los Angeles office and co-chair of the firm’s Media & Entertainment Industry Practice, and Marie M. Hayrapetian, associate in Quinn Emanuel’s Los Angeles office. They discuss recent cases testing whether large language model AI outputs may give rise to defamation claims.
In one recent Georgia case, a journalist asked ChatGPT about a lawsuit and received a response stating that a company executive was an embezzler, even though the lawsuit did not involve any such allegations and the executive was not an embezzler. In another case, Google was sued after its AI overview tool incorrectly stated that a business was being sued by the Minnesota state attorney general for deceptive practices, a false statement that allegedly caused up to $200 million in lost sales. Other examples involve sexualized deepfake images allegedly generated from ordinary photos, creating reputational and privacy harms.
Defamation law assumes a human speaker who publishes a false factual statement with some degree of fault. AI systems complicate that framework. In the case of LLM outputs, it is unclear who the speaker is. Is it the platform, the data scientists behind the platform, the user who created the prompt, or the model itself? It is also difficult to fit AI output into doctrines requiring intent, knowledge, or reckless disregard, especially in public figure cases that require proof of actual malice.
In the Georgia case, the defense won a motion for summary judgment. The court concluded that the output would not reasonably be understood as stating actual facts because the system provided warnings about limitations and potential errors. That reasoning may be vulnerable on appeal, but it shows one approach courts may adopt to reject these claims.
Republication may also result in liability. If someone republishes defamatory AI output as fact, ordinary defamation principles could apply. An unresolved issue is whether the Section 230 safe harbor protects platforms when AI output is generated through interactions between user prompts and the model.
Current defamation law might ultimately be a poor fit for AI-generated speech. Assessing liability for AI-generated speech may eventually require a different legal framework, such as product liability law.
Podcast Link: Law-disrupted.fm
Host: John B. Quinn
Producer: Alexis Hyde
Music and Editing by: Alexander Rossi
Note: This transcript is generated from a recorded conversation and may contain errors or omissions. It has been edited for clarity but may not fully capture the original intent or context. For accurate interpretation, please refer to the original audio.
JOHN QUINN: This is John Quinn, and this is Law, disrupted, and today we're gonna be talking about, what else, AI. It's all AI these days, right? But this is a particular part of AI that we're gonna be talking about, an area of the law that's developing. We will see how far it goes, and that's the application of defamation law to the output of large language models.
We're starting to see some cases being filed where people are saying, I was defamed by the AI. And here to talk to us about this is my partner, Bobby Schwartz, and an associate in our firm, Marie Hayrapetian. I hope I said that right, or close enough, Marie.
MARIE HAYRAPETIAN: You got it.
JOHN QUINN: Okay.
JOHN QUINN: So tell us, Bobby. We're starting to see some cases filed. Give us some examples of the cases, and what's the theory of the case?
BOBBY SCHWARTZ: Yeah, we've had some cases filed, and one case has resulted in a defense summary judgment win; we'll talk about that. And some of the issues that arise in these cases are, for example, who's the speaker? Is it the AI bot? Is it the person who prompted the AI bot to produce an output that turned out to be false and otherwise defamatory?
And how do you deal with public figures, where you have to prove constitutional malice through evidence either that the speaker knew the statement was false or acted in reckless disregard for the truth? If the speaker is an AI bot, how are you ever gonna show that?
JOHN QUINN: Bots don't have intent, I would think. Alright, give us the fact pattern of some of them.
BOBBY SCHWARTZ: So here's one.
One of the cases is out in Georgia. It was against OpenAI. A fellow, Mr. Walters, sues OpenAI for defaming him because somebody queried ChatGPT to ask about a lawsuit, and after multiple prompts, the ChatGPT output said that the plaintiff in this case, the CFO, was an embezzler.
And that lawsuit had nothing to do with that. He was not an embezzler; he had never embezzled anything. So it was obviously accusing him of a crime, defamatory per se, and so he sues OpenAI.
JOHN QUINN: How did he become aware of this output? Did he put in the prompt himself, or did somebody else do it and forward it to him? How did he become aware of it?
BOBBY SCHWARTZ: I don't wanna hog the conversation. Can I ask you to jump in and give us a little background on the case?
MARIE HAYRAPETIAN: That's a great question, John. I had to go back into the complaint when I asked myself the same question, and there was really nothing from the plaintiff explaining how he even found out about it. It was in the motion for summary judgment and the court's order granting summary judgment where it became clear to me that the plaintiff found out after the journalist went to the company, or to the plaintiff himself, and asked if it was true, and the plaintiff said, no, it's not true. Exactly how Walters first found out about it is still unclear to me.
JOHN QUINN: Okay, so a journalist put a prompt into an LLM, and this was the output. So the journalist does his job, he starts investigating it, and the result is a lawsuit for defamation. Now, what were the grounds on which the court granted summary judgment?
BOBBY SCHWARTZ: There are two grounds, one of which I'm not a hundred percent sure of, and the other seems more sustainable on appeal. The first was that OpenAI successfully argued that the output was not defamatory as a matter of law; it did not communicate a defamatory meaning. But what the judge in Georgia focused on was the legal principle that says the statement has to be reasonably understood as describing actual facts about a plaintiff, and that no reasonable reader would, under these circumstances, have concluded that what ChatGPT was outputting were actual facts.
And that's because there were all kinds of warnings to the journalist making the queries saying, just remember, this is not flawless, and there were additional warnings that said, in effect, I can't go back far enough in time to pull up the complaint you want to ask me about.
JOHN QUINN: And this is the LLM saying, I can't do it?
BOBBY SCHWARTZ: Yes. Yes. So the LLM told the journalist about its limitations, and that should have put the journalist on notice, or at least it was sufficient to put the journalist on notice, that what ChatGPT was outputting were not actual facts.
JOHN QUINN: That doesn't sound exculpatory to me.
BOBBY SCHWARTZ: No, if I were OpenAI, I'd be a little worried about whether that's gonna survive appellate review, if there is any appellate review. But there's maybe an unusual fact here: the reporter already had the complaint, or a copy of the complaint.
JOHN QUINN: Give us some other examples of cases.
BOBBY SCHWARTZ: The other cases haven't been resolved, but here are some others. This one, I think, has more legs. It's against Google: a business is suing Google for defamation because its AI overview tool spit back a result saying the Minnesota State Attorney General was suing the plaintiff for deceptive practices, and that resulted in a lot of lost business, anywhere from $100 million to $200 million in lost sales.
JOHN QUINN: How did everybody learn about this?
BOBBY SCHWARTZ: That's a good question.
MARIE HAYRAPETIAN: So when you search for something on Google, the very first thing that pops up is the AI overview. And anyone who Googled the business in that timeframe apparently read that particular statement.
JOHN QUINN: Oh, so that's the Gemini paragraph at the top of a search result.
MARIE HAYRAPETIAN: So a lot of contracts started getting cancelled.
JOHN QUINN: That would be bad, if every time you search for this business, that's up on top: that they're being investigated by the attorney general. Okay. Before we plunge into the elements, give us just one more example so we have a flavor of what's happening in the courts on AI and defamation.
BOBBY SCHWARTZ: We get deepfakes, sexualized deepfake images based on photos that are not of naked people. Let's say somebody posts a photograph of themselves on social media, and then somebody else runs a text-to-image query that says, show me photographs of so-and-so in the nude. That's happened on Grok, which is the X platform's AI tool, and they've been sued anonymously by a woman who says, hey, that's me, I've never posted a photo of myself naked. And under California law this is defamatory; it's revenge porn.
JOHN QUINN: Do you think the law of defamation is really gonna have applicability to the output of LLMs? You've identified some pretty obvious problems. We know defamation is an intentional tort. Bots don't have intentions. LLMs don't have intentions. So far as we know, people who put in prompts do.
BOBBY SCHWARTZ: Yeah, that's where I think the dividing line will get drawn: the AI platforms are less likely to end up having liability, versus individuals who prompt AI tools, get some statement, and then republish it carelessly or otherwise. And maybe they should have liability. And by the way, lurking in all of this, yet to be addressed in any of these cases, is Section 230 of the Communications Decency Act, which provides internet platforms a safe harbor against liability for material that's posted by users.
Now the question is, does Section 230 apply in this context? And I realize maybe I'm jumping way ahead here, but it's an interesting question, because normally that gets applied if a user posts a defamatory statement about somebody.
And then the subject of that post files a defamation claim against the speaker and the platform for republishing it, or publishing it in the first instance. And the platform says, sorry, no, I didn't put that there, the user did. Well, in this case it's not clear. Certainly the user prompted the platform to generate the output, but there was some interaction, maybe non-volitional, but nonetheless some conduct on the part of the platform, and courts have yet to deal with that. So that's another issue lurking here that will have to get resolved.
JOHN QUINN: So other than the intent element, I guess there's also an issue about who is the speaker.
BOBBY SCHWARTZ: Yes.
JOHN QUINN: You know, is the speaker the LLM? Is the speaker the data scientists behind the LLM? Is the speaker the person who entered the prompt?
BOBBY SCHWARTZ: And the platforms would presumably take the position that they're not, actually. And this gets into sort of the esoterica of how these large language models work. They're based on algorithms that look for probabilistic similarities or patterns in communicated human language.
And based on that, they articulate or come up with responses. So they're not necessarily thinking, oh, Sally Jones is a bad person, or whatever the defamatory statement would be. They're just trying to create language in response to prompts. So are they really the speaker? Are they even speaking at all, even though we recognize the output as language we can comprehend?
The model doesn't think of it that way. And are they the speaker when all they were doing was reacting to prompts by a third party who wanted to hear something, or get something, about the subject of the prompt?
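To make that point concrete, here is a minimal Python sketch of the generation loop Bobby is describing: the model repeatedly samples a next word from a probability distribution conditioned on the words so far. Everything in it is invented for illustration; the toy vocabulary, the probabilities, and the next_token_distribution helper are hypothetical, and no real model works from a hand-written table, but the loop's indifference to truth is the same.

```python
import random

# Toy sketch of next-token generation. The "model" here is a hand-written
# probability table; a real LLM computes a distribution over a huge
# vocabulary with a neural network, but the generation loop is the same idea.
TOY_MODEL = {
    ("the", "defendant"): {"denied": 0.5, "embezzled": 0.3, "said": 0.2},
    ("defendant", "denied"): {"everything": 0.6, "it": 0.4},
}

def next_token_distribution(context):
    # Hypothetical helper: condition on the last two words only.
    # Note what is absent: any check of whether a continuation is true.
    # The table encodes only which words tend to follow which.
    return TOY_MODEL.get(tuple(context[-2:]), {"[end]": 1.0})

def generate(prompt, max_new_tokens=5):
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        dist = next_token_distribution(tokens)
        words, weights = zip(*dist.items())
        token = random.choices(words, weights=weights)[0]  # sample, not "decide"
        if token == "[end]":
            break
        tokens.append(token)
    return " ".join(tokens)

print(generate("the defendant"))
# Possible output: "the defendant embezzled". Fluent-sounding language,
# produced without any step that consulted facts about a real person.
```

Nothing in that loop corresponds to knowing, believing, or intending to assert anything about a person, which is why the intent and speaker questions in this discussion are so hard to map onto these systems.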
JOHN QUINN: Obviously, if there is an output which is defamatory, and somebody then takes that and publishes the defamatory output, that could be a traditional defamation claim.
BOBBY SCHWARTZ: I agree. Yeah, there's nothing unusual about that. That's just republication.
JOHN QUINN: The only reason we're talking about this is that people are bringing claims against OpenAI and Claude and the like, and it raises these issues about intent and who's the speaker.
BOBBY SCHWARTZ: And when you're dealing with republication, the standard is, and this has arisen a lot in the social media context, not involving large language models, but just traditional social media or any other form of media, the question the courts ask is: was it reasonable for the speaker to believe that when he, she, or it said whatever they said to this other person or these other persons, those persons would republish it to members of the public?
And if the answer is yes, then they can have liability. But if I were representing an AI platform, I would say, of course we didn't have that expectation. We warned our users that our models are capable of hallucinating and that they should use care. And it would not be very hard, if it's not already there, to bake something into the terms of service or the end-user license agreement that disclaims any intent or expectation and gives some insulation against republication liability.
JOHN QUINN: What should somebody do if they feel that they've been the victim of AI defamation? What actions can you take?
BOBBY SCHWARTZ: Most of the platforms, I think all of the platforms, have system monitors and complaint procedures where you can bring an issue or a problem to their attention, especially if the tool is baked into a social media platform, which is often the case.
So if somebody's unhappy with something somebody posts, even if it had nothing to do with an AI-generated output, there are mechanisms. They're not very effective. They're not overseen, if you will, by some statutory rubric the way the Digital Millennium Copyright Act is for copyright infringement. But you have that recourse: you can contact the platform, you can contact the user.
I think, given the volume of this activity, it's impossible for the platforms to meaningfully respond to it and take things down. Whatever they realistically can do, they will, but I don't think you should assume it. And the other problem with protecting your rights here is that usually you're just confronted with some user handle that could be in a private mode. In other words, you may not be able to contact that person. You may not be able to sue them other than through a pseudonym. You might have to sue the entity to compel them to tell you who the user is, so that you can then actually file a lawsuit against them as a real human being.
It's very hard to enforce these rights for defamation.
JOHN QUINN: It sounds to me like this is the intersection of defamation law and AI, and these cases that are coming up are interesting developments, but it doesn't sound to me like it has legs unless we recognize fundamental changes in the law of defamation. Marie, do you agree with that?
MARIE HAYRAPETIAN: I definitely agree with that. Defamation is usually described as an intentional tort, but that's a little bit misleading 'cause you don't have to intend to hurt someone. People defame others by accident all the time. What you need is to have meant to publish the statement, and AI starts to break that framework, because in these cases nobody really meant to publish the defamatory statement.
The user asked an innocent question, the company built a product and warned it might make mistakes, and the model just predicted the next word. That's how these things work at their core: it's not retrieving facts like a search engine, it's assembling language based on patterns. So the harm can be real and the causal chain can be clear, but nobody fits neatly into the traditional definition of a publisher. In some sense, everyone did something reasonable and someone still got hurt, and fault-based tort law doesn't have a great answer for that. Which raises a bigger question: do we eventually need a different framework entirely?
And one possibility people have been talking about is product liability. You don't sue Ford by proving they intended your airbag to fail; you show the product was defective and caused harm. Applying that idea to AI-generated speech would be a major shift, but these are commercial products deployed at massive scale, and the harms are foreseeable. No court has gone there yet; it's just what we see people writing about.
JOHN QUINN: That's an interesting thought. Thank you both. Bobby Schwartz, Marie Hayrapetian. Thank you for joining us to talk about AI and defamation. This is John Quinn. This has been Law, disrupted.
Thank you for listening to Law, disrupted with me, John Quinn. If you enjoyed the show, please subscribe and leave a rating and review on your chosen podcast app. To stay up to date with the latest episodes, you can sign up for email alerts at our website, law-disrupted.fm, or follow me on X at JBQ Law or at Quinn Emanuel.
Thank you for tuning in.