Diritto al Digitale

What Happens When the AI Hallucinates the Courtroom?

DLA Piper Law Firm

What happens when an AI tool like ChatGPT invents a legal ruling—and that ruling ends up in a courtroom filing? In this episode of Diritto al Digitale, Giulio Coraggio explores two real cases, one in Italy and one in Canada, where lawyers unknowingly relied on hallucinated case law generated by artificial intelligence.

We examine the legal and ethical implications of these events, what they reveal about the profession’s readiness for legal tech, and why under the EU AI Act, such mistakes could trigger serious compliance risks.

Giulio also shares how DLA Piper is addressing this challenge with a dedicated Legal Tech practice that brings together legal and technical expertise to help clients adopt AI responsibly and effectively.

Tune in to understand why AI can assist—but never replace—human legal judgment.

📌 You can find our contacts 👉 www.dlapiper.com

Imagine standing in a courtroom, where every word, every legal citation, every ruling can shape the outcome of a trial. Now, picture one of those rulings never having existed—fabricated entirely by artificial intelligence. Sounds like science fiction? Unfortunately, it’s not.

A recent case before the Court of Appeal of Florence, Italy, revealed that an attorney had submitted legal references that never existed: they had been hallucinated by ChatGPT. It is an incident that raises fundamental questions about the reliability of AI in the legal field, and well beyond it.

I’m Giulio Coraggio, a technology and data lawyer at the global law firm DLA Piper, and this is Diritto al Digitale, the podcast where we explore the intersection between law and innovation, analyzing how emerging technologies are shaping our future.

The Italian case arose before the Court of Appeal of Florence, in a dispute over trademark and copyright protection. In the defense brief, a lawyer included citations of Italian Supreme Court rulings that were completely fabricated, generated by ChatGPT.

This phenomenon, known as AI “hallucination,” refers to the creation of false yet plausible-sounding content. In this case, ChatGPT invented case numbers and rulings and presented them as authentic. The lawyer later admitted that a colleague had used ChatGPT for legal research and that he himself had failed to review and verify the results before filing them.

The opposing counsel sought sanctions for aggravated liability under Article 96 of the Italian Code of Civil Procedure, alleging gross negligence or bad faith. The Court, however, found that the fabricated references had merely been used to support an already-developed legal position and had not been submitted in bad faith, and it rejected the claim for lack of demonstrable harm.

But this is not an isolated event. In Canada, Vancouver-based lawyer Chong Ke submitted two entirely fictitious rulings, also generated by ChatGPT, to the Supreme Court of British Columbia during a custody dispute. When opposing counsel could not locate the cited cases, Ke admitted to using the AI tool without knowing it could produce inaccurate legal content. Ke formally apologized, and while the judge acknowledged that the error was not malicious, the court ordered Ke to personally compensate the opposing party for the time wasted, emphasizing that AI cannot replace professional expertise in court.

These cases highlight a critical issue: the reliability of AI in legal settings. As sophisticated as AI may be, it does not possess the human ability to distinguish accurate information from fiction. When hallucinations occur, they can have severe consequences—especially in environments where accuracy is paramount.

Using AI tools like ChatGPT requires constant oversight. Every citation, every insight must be reviewed and validated by a human legal expert. AI can support our work, but it cannot substitute the critical thinking and professional responsibility of a lawyer.

At the same time, these cases show that the legal profession is seriously engaging with legal tech. But adopting AI tools effectively and responsibly requires the right combination of skills: not just legal acumen, but also technical understanding.

That’s why at DLA Piper, we have created a dedicated Legal Tech line of business. Our team includes lawyers who are also trained in technology, capable of evaluating and implementing AI tools while staying fully compliant with legal and ethical standards. Whether you’re a business looking to integrate AI into your legal processes or a law firm exploring legal tech transformation, we’re here to help.

If you’d like to know more, don’t hesitate to contact us.

These cases also raise broader questions around legal ethics and education. Legal professionals must be properly trained to use AI tools responsibly, understanding both their strengths and their limitations. The adoption of AI in the legal industry must be thoughtful and measured, to ensure that judicial integrity remains intact.

And under the EU AI Act, the use of generative AI in legal services could be considered a high-risk application—especially when used in contexts that affect rights and legal obligations. This means that AI tools used in legal research or decision-making must meet stringent standards of accuracy, transparency, and human oversight.

Submitting hallucinated case law, even unintentionally, could lead to serious compliance violations—not only for the lawyer involved, but also for the AI provider and the firm using the tool. To mitigate these risks, legal AI tools must be built on verifiable sources, offer clear traceability, and be implemented within robust governance frameworks.
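To make the ideas of verifiable sources and traceability more concrete, here is a minimal sketch, in Python, of what a citation-verification guardrail might look like. Everything in it is hypothetical: the Citation structure, the VERIFIED_RULINGS lookup, and the function name are illustrative, and a real deployment would check against an official court registry or a licensed legal database. The point is the workflow: every AI-generated citation is treated as unverified until it is matched against an authoritative record and cleared by a human reviewer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    """A case citation extracted from an AI-generated draft (hypothetical structure)."""
    court: str
    case_number: str
    year: int

# Hypothetical stand-in for an authoritative case-law database;
# in practice this would be an official court registry or a licensed legal database.
VERIFIED_RULINGS = {
    ("Corte di Cassazione", "12345/2020", 2020),
}

def verify_citations(citations: list[Citation]) -> list[Citation]:
    """Return the citations that could NOT be verified, so that a human
    lawyer reviews them before anything is filed."""
    return [
        c for c in citations
        if (c.court, c.case_number, c.year) not in VERIFIED_RULINGS
    ]

if __name__ == "__main__":
    draft = [
        Citation("Corte di Cassazione", "12345/2020", 2020),  # present in the toy database
        Citation("Corte di Cassazione", "99999/2021", 2021),  # unverifiable: must be flagged
    ]
    for c in verify_citations(draft):
        print(f"UNVERIFIED, human review required: {c.court} {c.case_number} ({c.year})")
```

However the check is implemented, the design principle is the same: the AI drafts, an independent source verifies, and a human decides.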

So, what about you? Have you ever used AI tools like ChatGPT in your professional life? What precautions do you take to ensure that the information they provide is accurate?

Write to me at giulio.coraggio@dlapiper.com to share your experiences—or to learn more about our Legal Tech capabilities at DLA Piper.

If you found this episode insightful, don’t forget to give us five stars on Apple Podcasts or Spotify, and hit the bell to make sure you never miss the next episode of Diritto al Digitale.

I’m Giulio Coraggio, this is Diritto al Digitale. Arrivederci.
