The Grace Period: Shining A Light on Lawyer Wellbeing
A podcast for lawyers that explores the realities of big law, provides tips for better practice management, and shines a light on lawyer wellbeing.
Bonus: Responsible AI For Lawyers -- What I Know + What I Want to Learn
We react to a major law firm AI mistake and use it as a reality check for anyone adopting generative AI in legal work. We lay out what responsible, practical AI use looks like when you prioritize trust, process, and human judgment over hype. In this bonus episode, you'll find:
• a high-profile example of AI-hallucinated citations and why it still happens
• the shift from whether legal teams use AI to how they use it safely
• starting with clear use cases like research, review, and contract analysis
• setting boundaries for data, oversight, and review standards
• building trust through training, transparency, and real governance
• keeping humans in the loop so lawyers own the final work product
If you have a question about AI and legal that you want me to explore at CLOC's 2026 Global Institute, definitely reach out and let me know.
Find out more at https://www.linkedin.com/in/emilystedman/.
Welcome And Why AI Matters
SPEAKER_00: Welcome to the Grace Period. I'm Emily Logan Stedman, partner and litigator at an AmLaw 100 firm. Today I'm here with a bonus episode, and it's about AI, AI in legal, and specifically how we can, or should, or try to move from hype to responsible and practical AI adoption.

I'm recording this on the heels of Sullivan & Cromwell, yes, that white-shoe firm of all white-shoe firms, having an AI incident. They filed a document with a court that had hallucinated cases in it. The firm handled it as best they could: they fell on the sword, they explained what policies and procedures should have prevented this from happening, and they explained what they did to correct it once it happened. Even so, hallucinations by AI are old news, and yet they're still happening. Happening not just with pro se litigants or at small firms, but at the highest echelons of our profession.

This leads me to a topic I'm obsessed with and learning as much as I can about: ethical and responsible AI use, implementation, and governance. Right now, every law firm, every legal department is talking about AI. You cannot go to a legal conference that doesn't have a panel on AI adoption and use. The real question is no longer "should we use it?" The question now is how do we use it? How do we use it well? How do we use it safely and ethically? Excitement is everywhere, but excitement is not a strategy. Experimentation is not a strategy. Responsible adoption requires structure, judgment, and trust.

This episode is part of a series I'm doing leading up to the 2026 CLOC Global Institute, where I'll be attending as an influencer or media attendee in Chicago this May. CLOC stands for the Corporate Legal Operations Consortium. It's the leading peer-driven, not-for-profit community for legal operations.
CLOC brings together legal ops professionals, service providers, and others from across the legal ecosystem to learn how others are transforming the business and practice of law. Their Global Institute is one of the biggest places where that legal ops community comes together to exchange ideas, frameworks, tools, and real-world lessons about how legal work is changing.

This year's theme is "stronger by design." I love this because stronger by design implies intention. When it comes to AI, intention is key. CLOC's Global Institute will include sessions on AI training, AI governance, AI resource optimization, AI for career growth and skills development, and how AI is impacting the evolving design of modern legal departments. And again, to me, that phrase, that theme, stronger by design, gets to the heart of the AI conversation. This is no longer a question of whether legal teams will use AI. It's whether teams, in firms and in-house at companies, will use AI with enough intention, structure, and judgment to make legal work better.

Today, I want to talk briefly about what responsible AI adoption could or should look like in the law. Here's what I think, based on what I know so far, which is not a whole lot, because it's all happening very fast.

One, we must start with clear use cases. We must think about where AI fits, whether that's legal research, document review, or contract analysis. Experimentation is good, and that's how people will get over their hesitancies with AI, but we should also be using it to define and redesign our workflows.

Next, we have to set boundaries around AI. It needs to be clear what data can and can't be used, what tasks require a human in the loop or human oversight, who reviews the output, who reviews the input, and what the standard is for "good enough."

Next, we have to build trust.
At my firm, that is done through training and transparency, and that's what everyone needs. Everyone using your AI tools needs to know what's allowed and what the risks are. They need to know what an appropriate review process looks like. Governance isn't just a policy on a shared drive; it's a repeatable process with clear accountability and with access to leaders who model good judgment and good use of AI.

Finally, we must keep humans in the loop. AI can draft, summarize, and analyze, but lawyers must own the final product. We must validate the outputs, and we must ensure confidentiality and defensibility.

Why does this matter? Because clients and legal departments are already asking: Can legal work be done differently? Can it be done better? Can it be done more efficiently? AI is part of the answer to those questions. The real value comes from systems, oversight, and workflows that are intentionally designed for quality, trust, and repeatability, as well as the judgment we all took an oath to provide.

The AI revolution is here. It's here to stay. But we shouldn't be using AI for the sake of using AI. We should be using AI to create systems, and using AI under governing policies and procedures that encourage experimentation and creativity and support efficiencies, but keep our licensed, ethically obligated attorney judgment, competence, diligence, and responsibility in the loop. And just as we shouldn't use AI for the sake of using AI, we cannot seek out efficiencies for the sake of efficiencies. Humans must remain in the loop. Why is that? Well, again, it's because every lawyer swore an oath, and is bound by ethical obligations, to be competent, diligent, and responsible. Our judgment depends on our own brains. Our supervision of those we delegate to, including large language models and other humans, depends on our own brains, intellect, experience, and judgment.
At CLOC's Global Institute next month, I will be in search of answers, learning, and examples of how AI can be done well and with the right governance in place. I'm looking for answers to questions like: How are legal teams training people? Where are they seeing value? What governance structures over AI are actually working? And what are people learning the hard way?

If you have a question about AI and legal that you want me to explore at CLOC, definitely reach out and let me know. I'm going to continue sharing reflections leading up to the conference, as well as during and after, so stay tuned for live takeaways and practical insights from CLOC's 2026 Global Institute.

Thanks for listening to this bonus episode of the Grace Period. I'll see you next time.

Disclaimer: The views expressed here are solely my own and do not represent the official policy or position of my firm or any organization. This podcast is for informational and entertainment purposes only. It is not professional or legal advice, and it does not create an attorney-client relationship.