Don't Skip the Legal Podcast
It's time to get ready for change. You're growing and building your business, and you have a vision for the future. You want to know what legal hurdles you might encounter so you can take care of them before they grow out of control. This is where we come in. We bring you the "Don't Skip the Legal" podcast: a place where you can learn how to grow your business and build a better future for yourself and your company through the lessons and experience of other business owners, just like you. This podcast helps you respond strategically to a constantly changing business landscape, reassures you during stressful situations, and empowers you with a framework to take smart action so you can protect yourself, your customers, and your business's future.
How AI Can Expose Your Business | Ep. 202
Artificial intelligence is changing how businesses operate, market, negotiate, hire, and make decisions. It is also creating new legal risks that many companies do not see until the damage is already done.
In this episode of Don’t Skip the Legal, Andrew Contiguglia shares a presentation from the 2026 FALA meeting in Nashville on how lawyers can counsel businesses through the growing risk of AI. From AI-generated contracts and confidentiality leaks to intellectual property disputes, reputation damage, discovery issues, and weak internal controls, this episode breaks down where business exposure is showing up and what companies should do now to reduce it.
Andrew explains why AI is no longer just a tech tool. It is now a business risk issue, a legal issue, and a governance issue. He also walks through practical steps lawyers and business leaders can take to protect confidential information, improve AI policies, strengthen human review, and avoid costly mistakes before they become lawsuits.
If you are a business owner, lawyer, executive, marketer, or advisor trying to understand the legal risks of AI in business, this episode gives you a practical framework for spotting exposure early and responding with better judgment.
In this episode, you will hear about:
How businesses are using AI in contracts, content, operations, and decision-making
The legal risks of AI, including confidentiality, intellectual property, data exposure, and reputation harm
Why every company should have an AI use policy and human review process
How AI can affect privilege, discovery, and future litigation
What lawyers can do to help businesses manage AI risk before a crisis hits
This episode is for anyone asking the right question right now: how do you use AI in business without creating legal exposure you never saw coming?
Don't Skip the Legal podcast brings you insightful conversations with successful entrepreneurs, providing real-world lessons on business growth, legal considerations, and much more. Subscribe now for more enriching episodes and practical insights for navigating the complexities of the business world.
Find Andy on the following social platforms:
The Contiguglia Law Firm
Instagram
Facebook
Twitter
TikTok
LinkedIn
YouTube
Disclaimer:
Please note that the legal information shared in this podcast is for general informational and entertainment purposes only. It is not a substitute for consulting with a licensed attorney for specific legal matters. Past performance does not indicate future results; every legal case is unique. Consult your own attorney for personalized legal advice.
Transcript
Andrew Contiguglia
All right, so for those of you who have not been paying a lot of attention, when we were in Vancouver, I put on a presentation about the practical use of AI in your law practice. Basically, an overview, a 35,000-foot view, of how to use AI, the different types of AI, the LLM systems that are out there, and how you can use them and implement them into your law practice. That was sort of part one of this four-part series.
Part two, Will King and I put on an ethics presentation about the ethical implications of using AI in your law practice, sort of emphasizing that, as lawyers, we have a duty to maintain our knowledge and move with the times as technology advances, and that is part of our role as competent attorneys.
Then, back in January, I put on a Zoom presentation for people, and we did part three, which was sort of mastering AI in your law practice, where I really walked you through the creation of a demand letter, how to use different resources, and how to integrate different LLMs into your law practice as you developed a demand letter in relation to all the information that you are working with.
So now, we are moving to part four of this presentation. For all of you who have listened to me present in the past, you know I hate doing this lecture thing where I just talk and you listen. So I want to encourage everybody to ask me questions as we go.
I think this technology, and AI implementation within our law practice, always spurs questions. So I am going to encourage you to ask me questions as we move through this material.
The direction this presentation takes is that we are now at the point where our clients are starting to use AI.
Audience Member
Not just clients, but people adverse to clients too. They get these demand letters and freak out because they think it is from a lawyer, and it is not. It is beautifully done by AI.
Andrew Contiguglia
Exactly. I had a client the other day, and imagine this story. We had just finished working on the buyout of her portion of a corporation. She was out applying for a new job, and one thing she was concerned about was whether applying for that new job implicated the settlement agreement or the buyout agreement she had just signed.
So what did she do? She took the entire agreement, threw it into ChatGPT, and asked, “Hey ChatGPT, am I allowed to apply for this job? Does it violate this agreement?”
I am oversimplifying, but that was basically it. ChatGPT came back and said no, it does not. Which maybe it did and maybe it did not.
I went back through it, and of course I reviewed the agreement because I helped put it together, and I told her, “Listen, you applying for this new job is not going to create any problems, but do not throw things into ChatGPT anymore.”
I have other clients now coming to me and I ask, “Did you prepare a contract as it relates to this business deal?” And they say, “Well, I threw it into ChatGPT and ChatGPT gave me this contract, and that is the contract we are using.”
Of course, I give them a partial high five because at least they have a contract. It used to be that they were just talking, spitting in their hands, and shaking on it. So now, at least they are using something.
What I try to keep reminding people is that bad contracts are still binding. And they are slowly beginning to recognize this.
Audience Member
Sometimes worse than no contract.
Andrew Contiguglia
Exactly. It might be.
And so now we are in this position where we need to counsel our clients. What are they doing, and how are they starting to implement AI into the work they are doing?
There are a lot of changes in the law. I think Barry is going to talk about the Heppner case after our break. I am going to touch on it very briefly. I promised him I would not step on his toes, because that is a big deal that just came out of the Southern District of New York.
But what we really want to talk about here is how AI risk is going to be managed by us as attorneys. How is AI going to harm our clients, and how can we protect them from their own use and their own ignorance about using AI?
And basically, what the new legal battlefield is going to look like. There are going to be a lot of changes in law, and a lot of changes in how discovery is performed, how we prepare for litigation, how we prepare for trial, and how we use this really great technology to make our lives simpler.
Audience Member
If they are incorporated and they draft their own AI contract, is that the unauthorized practice of law?
Andrew Contiguglia
That is a great question. And I think those are the types of things we need to start thinking about.
So the situation we are in now, this is our moment. AI is no longer experimental. We are seeing the use of AI with our clients. They are using it across marketing, in research, in product development, and in the operations of their companies.
What you really see people doing nowadays is that rather than going to what Google used to provide, they are going to Claude, or ChatGPT, or Perplexity. They are writing in their search queries and using these LLMs as their search engine.
Now, if you use Google, Google gives you this AI generated content called Gemini. That is their AI model. It is powered by Google, so it has access to everything all of these other LLMs have. And each of these, as we talked about in part one, gives you a different output and a different methodology in how they operate.
Audience Member
What do you mean by LLM?
Andrew Contiguglia
Large language model. LLM is the technical term for AI. I am just not big on using the jargon.
We also see situations where businesses are being harmed by AI. As Lou and Gil were saying, we have lawsuits being filed against corporations for the unauthorized practice of law. Our legal system is really just beginning to respond in terms of how it is going to deal with these situations and the legal implications that come up.
So really, what we need to start doing as lawyers is figuring out how we are going to protect clients from their use of AI within their businesses. Because they do not know what they do not know until we tell them what they do not know.
We need to help counsel our clients so they stop making these mistakes, or at least limit the mistakes they are going to make. There are really two directions this is going to take as we move forward.
When you are looking at exposure to your clients, you are looking at legal exposure. How binding is this agreement? What have they done in terms of confidentiality? Are they exposing the data of their business in a manner that ruins confidentiality, creates a trade secret problem, or puts protected material out in the open?
Does it create reputation exposure? That is a big one right now. When you are putting things into AI, are you somehow diminishing the reputation of the company or of yourself within your community?
What about intellectual property exposure? This is huge right now. There are lawsuits going on against OpenAI involving the way these models were trained on artists' work and other intellectual property to create the images they now generate. If you trained off my photographs or my artwork to create the image you are creating, should I be compensated for that? Is that a form of intellectual property infringement?
Then there is market and competition exposure. What are you doing by putting your information into a space that exposes you to data leaks or to competitors seeing what you are doing and ruining your ability to keep your processes and systems confidential?
Many AI risks remain hidden until something goes wrong.
We see situations where someone looks at an AI generated image and says, “Wait a minute, that looks a lot like something I created.” You are no longer just gathering information to create the AI image. You are now dealing with the fact that somebody else created an AI image that looks like your work.
Data leaks. Deepfakes. Regulatory scrutiny. Think HIPAA. Think confidentiality. And then these lawsuits arise well after the damage has already begun.
My objective here is to give you a high level view of how to talk to your clients and make sure they are not using AI in a way that hurts their business, and ways that you can help counsel them so they are better protected.
I do not think we can immunize everybody from this, but we can certainly limit the exposure our clients have and make sure the way they are using these AI models protects their business from start to finish.
We really have two types of clients. There are some who fall into both categories, but for the most part we have clients who are using AI in their business, and we have clients who are being harmed by the AI use of others.
Let’s talk first about protecting our clients who are using AI.
As attorneys nowadays, you need to start looking at your clients and really acting as a risk management architect. You need to start asking them, “What are you using AI for in your business? Are you using it to create contracts for employees? Are you using it in vendor agreements? Are you using it in systems within your organization?”
Your goal here is to prevent the exposure before the problem happens. So you need to be proactive. Think of yourself as the doctor advising your client to lose weight or eat better before they have a heart attack.
Now you need to start telling your clients, “You need to stop using AI this way, because it is going to hurt your business or expose your company to risk.”
You have to think through the scenarios. Has data been released? Are copyright protections gone? Is confidential information out in the world?
I see people all the time in marketing using AI to generate content, images, blogs, and other material. But think about this. Suppose you represent a marketing company that is doing work for other people. Lou hires my company to create content for him and his firm. I plug everything into ChatGPT or Claude and produce all this great material.
Now Lou has all this wonderful content. But maybe none of it is copyrightable.
So when I am promising Lou that I am going to deliver original work, maybe that is true in a practical sense, but it may have no intellectual property value to the company as a company.
For those of you who have done mergers and acquisitions, one of the things you are buying is intellectual property and goodwill. If the seller represents that everything the company owns from an intellectual property standpoint is legitimate and not subject to dispute, but a bunch of it was created through AI and cannot be copyrighted, then that is a problem. That could diminish the value of the company being sold.
So these are the things you have to think about when you are consulting clients. Is it wrong to be using AI in your business? Absolutely not. Is it wrong to be selling things you created with AI? Absolutely not. But what is wrong is failing to disclose that you did that, or using it in a business valuation as though it carries the same protected value.
Those of us working in this space need to think about these things as we advise clients.
We also need to focus on governance, IP hygiene, data protection, and contract protections.
Let me ask you this. For those of you in firms or large organizations, do you have an AI use policy within your company? Raise your hand.
Only a few hands go up.
This is a room of about thirty people, and maybe five hands went up. So only a few groups here have an AI use policy within their organization.
Audience Member
You keep saying the material created by AI is not copyrightable, broadly. I have a client who uses AI to help develop code, and I am just curious how broad that rule really is.
Andrew Contiguglia
Let me explain my understanding of it, and if anybody wants to chime in, great.
If I put a prompt into an AI model and say, “Draw me a picture of a man knitting,” and it creates that image, and I try to copyright it, it is going to get rejected because there is not enough human interaction in the creation of it.
The copyright laws are pretty clear that it has to be human generated. Now, the law in this space is morphing. The Copyright Office is allowing copyrights in some cases if there is enough human interaction in the creation of the prompt and the overall process.
How much is enough? Great question. I do not know the answer. I do not live in that space. I know that area of the law is evolving.
I read something recently suggesting that one of the circuit courts basically said that if you are using any type of AI, it does not matter how much you put into it, the end result is not human generated. So I think that whole area is changing in real time.
Audience Member
This may be basic, but when you use AI and upload deposition transcripts or pleadings into it, how do you protect client confidentiality?
Andrew Contiguglia
Let me ask you this. Was the deposition confidential?
Audience Member
No.
Andrew Contiguglia
Then where is the confidentiality issue?
Now, if you are talking about work product, that is a different discussion. There is room to argue that if you are doing things that fall more into work product, they are probably protected.
Even when you hop onto Google and start doing searches and looking for articles or doing research, that is still arguably your work product, even though you are using a public platform to do it.
Audience Member
Does that not turn on having a closed universe? That is why we have contracts with certain AI vendors.
Andrew Contiguglia
Absolutely. I think that is a big piece of this.
A lot of these LLMs give you the opportunity to close them off from training other models. ChatGPT now has versions that can live on your desktop and not touch the internet. Claude can do this at the enterprise level. You can request these types of closed systems.
But that is where the terms and conditions of the LLM matter.
I also think you can avoid some of the problem by not putting confidential information in there at all. There are tools that can anonymize documents, replacing names and identifiers so you can work with something stripped down, get the output you need, and then restore the information later.
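The anonymize-then-restore workflow described above can be sketched in a few lines. This is a minimal illustration only, not any specific redaction product; the function names, the placeholder scheme, and the sample clause are all hypothetical, and real tools handle far more than exact-match name replacement:

```python
# Minimal sketch of the "anonymize, query, restore" workflow: swap known
# identifiers for neutral placeholders before text leaves your machine,
# then put the originals back into the model's output afterward.

def anonymize(text: str, identifiers: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each known identifier with a neutral placeholder."""
    mapping = {}
    for i, name in enumerate(identifiers, start=1):
        placeholder = f"[PARTY_{i}]"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Restore the original identifiers in the model's response."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

clause = "Jane Doe agrees not to solicit employees of Acme Holdings LLC."
safe_text, mapping = anonymize(clause, ["Jane Doe", "Acme Holdings LLC"])
# safe_text now reads: "[PARTY_1] agrees not to solicit employees of [PARTY_2]."
# ...send safe_text to the model instead of the real clause, then...
response = "Yes, [PARTY_1] may apply, subject to [PARTY_2]'s consent."
print(restore(response, mapping))
```

The design point is simply that the confidential names never leave your machine; only the placeholder version does, and the mapping stays local.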
Audience Member
Are the paid versions always enough?
Andrew Contiguglia
Not always. Sometimes you need enterprise. You have to look at the terms and conditions.
I think you also need to be careful about the type of information you are sharing. You can create hypotheticals that track the facts without disclosing the client.
Tell your clients not to upload their conversation with you into ChatGPT.
Audience laughter.
Andrew Contiguglia
Oh, they do. I am sure they do.
And that is another piece of what is happening. People on opposite sides of a business deal are throwing things into ChatGPT.
Funny story. Everybody here know who Ryan Serhant is? He posted something recently where he said, basically, “ChatGPT blew up my fifty million dollar deal.”
He had a buyer and a seller, and the property was priced at fifty million. The seller put the address and the property details into ChatGPT, and ChatGPT said the property was underpriced. The buyer did the same thing, and ChatGPT said the property was overpriced.
Now you have both sides of the deal using AI to reinforce what they already want to believe.
That is the kind of thing you are going to see. In negotiations, people are going to run things through AI to see what AI thinks their version of events should be, and your clients are going to do the same thing.
Audience Member
There was a story in The New York Times today about people asking ChatGPT for personal advice. These LLMs tend to tell people what they want to hear.
Andrew Contiguglia
Exactly. One of my big points in part one was that these LLMs are designed to make you happy. Their output tends to agree with you unless you prompt them not to.
You need to teach your AI not to just validate you. But most people do not want that. Most people want praise and approval. That is what these systems are built to serve up.
So let’s look at AI as a business risk.
Clients see AI as an efficient tool. It increases speed, saves time, and is easy to use. You can jump onto any of these platforms for free and start typing.
That is great. But for us as lawyers helping clients, it can also be a disaster, because they are writing bad contracts, violating confidentiality, and creating legal problems without realizing it.
What we are seeing now is liability and exposure. Copyright contamination. Trade secret leakage. Regulatory risk. Possible contract violations. If you are using these systems in a manner that does not keep things protected, it puts you in a very tough position.
When we are talking about intellectual property issues, what data are your clients feeding into these tools? Are they using somebody else’s copyrighted materials? Are they taking images and uploading proprietary material into the nether for anybody else in the world to use?
These AI models were trained on something, and that something is the internet, the collective body of what everybody has already put online.
Then the question becomes who owns the output in the end?
If I am taking a photographer’s body of work and trying to create something in her style, there is value in the way she sees the world. If I am now infringing on that, that changes everything.
Looking at AI hygiene, if your clients are not thinking about this now, they are creating litigation problems for later.
We talked about AI use policies within organizations. All of your clients should now have an AI use policy. They should define how they are going to use it, the concerns they need to be aware of, and what the internal rules are.
This internal governance component is really important. And this is something every one of you should be able to bill five to ten hours on. You can educate the client, talk them through it, and draft an AI use policy for the organization.
Audience Member
So then I go to ChatGPT and ask it to draft the AI use policy.
Andrew Contiguglia
There you go.
Audience Member
Or I can ask you because I already have one not drafted by ChatGPT.
Andrew Contiguglia
Exactly. Help your clients with that.
When we are looking at data risk, employees are frequently pasting sensitive data into AI tools. Maybe it is shipping logistics. Maybe it is a contract. Maybe it is healthcare information. That could create real problems.
If you represent clients in the healthcare industry, they may be violating HIPAA by throwing protected data into these systems.
If you represent clients in the adult entertainment industry and they are uploading performer contracts or private details, now you are exposing those performers in ways never intended.
So you need to make sure your clients are thinking carefully about confidentiality and not putting too much information into the world.
Now let’s talk briefly about the Heppner case and AI and privilege.
When does AI use destroy confidentiality?
In Heppner, the judge ruled that AI generated documents were not protected by attorney client privilege or the work product doctrine. The defendant used a third party AI platform, I believe it was ChatGPT, and as a result, confidentiality was waived. The materials created outside counsel’s direction were not protected.
This was very fact specific. You had a gentleman under investigation for securities fraud who started plugging information into AI to determine whether he had committed securities fraud.
It is a little like Googling how to hide a dead body. We have always had search histories and online activity available for investigation. If you have done criminal defense work, you know this is where stupid defendants get themselves in trouble.
So the court made clear that AI is not a lawyer. You cannot retroactively create privilege by sending the AI outputs to counsel. Just because you used ChatGPT and then handed the results to your lawyer does not create attorney client privilege.
Public AI tools do not provide confidentiality protections.
Audience Member
I want to push back. I think the case is really about the fact that he did not do it at the direction of his attorney.
Andrew Contiguglia
I totally agree.
Audience Member
And on the point about public AI tools not providing confidentiality, I think there are cases developing that say the modern use of public digital services does not automatically waive privacy rights, even if no one reads the terms and conditions.
Andrew Contiguglia
I think that is worth reading and worth considering. I look at the use of these AI models a lot like using Google or even using public Wi-Fi to send emails. There are going to be distinctions, and the law is going to develop around those facts.
This is why we need to be counseling our clients not to go into ChatGPT and ask things like how to avoid an indictment for securities fraud.
The broader point is to be very cautious with what you put into AI. Keep it generic. Keep it quiet. Create as little trouble as you can.
Audience Member
What is the difference between chatting with ChatGPT and going down to the bar and talking to a stranger?
Andrew Contiguglia
That is a fair question.
And what is going to get interesting is discovery. Are we now going to start requesting AI activity in discovery?
Audience Member
Absolutely.
Andrew Contiguglia
Yes, and I am going to touch on that in a minute.
Another concern is embedded AI.
Audience Member
I stopped using Microsoft products because you cannot turn the AI off. Copilot is running in the background all the time.
Andrew Contiguglia
That is a fascinating point. Copilot is running constantly. Grammarly is offering suggestions. Autocorrect is doing its thing. AI is being jammed into everything.
At some point, courts may have to recognize that this is now mainstream and build legal protections around the fact that so much of modern work passes through AI assisted systems.
What we need to do here is watch what our clients are putting in and make sure they understand that if they are using AI outside the direction of counsel, they may be creating evidence for the other side.
Now when we look at contracts, this is another area I think matters a lot. In my own contracts with clients, I disclose that we use AI. And if a client does not want me to use AI, they need to tell me that.
At the big law level, some clients are already saying they do not want to pay associates hundreds of dollars an hour to do things that AI can do in fifteen minutes.
That is something we are all going to have to navigate. Some people say you cannot just throw it into ChatGPT. Maybe you can. Maybe you have trained your model well enough to create thorough documents. But that is where your training data matters.
Are you relying on generic data from the world, or are you creating the data set the model is supposed to work from?
Within ChatGPT, Claude, Gemini, and similar systems, you can create projects, upload the specific materials you want the model to use, and isolate the work that way.
If your clients are using AI in work they are delivering to others, they need to disclose that. I tell marketing agencies this all the time. If you are using AI to create images and content for a client, you need to tell them. Otherwise they may think they are getting something unique that was actually generated by a chatbot.
You also have to look at how the LLM itself allocates risk, because all of these systems are going to say, “We are not responsible for the output. That is on you.”
At the end of the day, ownership of the risk falls on the person creating and using the material.
You also need to make sure these companies are using real human oversight and not just taking whatever the model gives them and handing it over.
Audience Member
Is it really valid for these companies to say they are not responsible for what their systems generate?
Andrew Contiguglia
I think you are going to see regulations. I think you are going to see changes in the law. This is a general answer, but I think this area is going to adapt the way law has adapted to other technology changes over time.
Right now, in a perfect world, the LLM would bear the risk. We know it does not. Our clients, or we, are going to bear the risk. So assume your client owns the risk, and that is what you need to be telling them.
If anything goes wrong, it is falling on them. So minimize that risk as much as possible.
In terms of governance, we know companies are using AI, but they do not have policies in place to protect themselves. So they need AI use policies, human review procedures, and rules for handling confidential information and training data.
I always tell lawyers to think of the output from an AI model as a first year associate. You would not take a brief from a first year associate and submit it to the court without reviewing it. You would check the organization, the reasoning, and the legal conclusions. AI needs the same treatment.
Now let’s talk about what happens when prevention fails.
Litigation can arise from misuse of intellectual property, deepfakes, market substitution, and all kinds of AI misuse.
You have issues involving voice cloning, name, image, and likeness, false endorsements, defamation, and fraud.
If you represent athletes, entertainers, adult performers, creators, or really anyone with a public identity, this matters. AI can replicate voice, face, and speech patterns, and create enormous exposure.
There was a case in the Northern District of California, Andersen v. Stability AI, dealing with artist claims that their work was used to train AI systems without consent or compensation. That raises fundamental copyright questions.
So now courts have to decide whether training AI models on other people’s creative works is unlawful copying and whether those people should be compensated.
You can also end up creating harm by imitating somebody’s style or creating a substitute in the market, which may create trademark or Lanham Act issues.
Synthetic works are replacing human creators in marketing. Right now, you can create an AI model to promote your products and never have to hire a real person to do marketing content again.
And then there is the employment space. People are taking resumes, throwing them into ChatGPT, and asking whether they should hire the person. But what if the model is screening out people in a discriminatory way?
Audience Member
There is already litigation about that.
Andrew Contiguglia
Exactly. And that takes us to discovery.
What are future litigation cases going to look like?
We are going to be asking for data sets. What data did you use to train your AI for this output? What prompts did you use? What output records do you have? What policies did you have in place? What were the LLM’s terms and conditions? Did the platform maintain confidentiality?
That is where this is going.
And as we move forward, remember your role as a lawyer in the AI economy. Risk counselor first. Rights enforcer second. You are helping your clients transition and acting as the translator between technology and law.
We are not really creating new problems. We are emphasizing old ones in a new environment.
Audience Member
Is there any hygiene you can do to help prevent future problems, like deleting your history?
Andrew Contiguglia
Yes, absolutely. If you create projects within the system and give it the data set you want it to work from, you can then delete things. A lot of these systems will remember history, and you can go in and delete portions of it, including inaccurate things it remembers about you.
Audience Member
Just remember, if you delete, you need to have a purging policy, because otherwise that can become spoliation.
Andrew Contiguglia
Yep. And some data retention policies allow the platform to keep certain data even if you delete it from your visible history.
I am still curious because I have yet to read a case where somebody has directly gone after the LLMs themselves for the data users put into them. That has not happened yet, but I think it is coming, and I am very curious how that develops.
Thank you, everybody.
Real quick, thank you for the clapping you did not have to do.
For those of you who did not go through my part three online, I gave Ed the link to that. We will post it with the materials for this meeting. If you want to go back and watch the video, it is about an hour and a half long. It is a good walkthrough, and you are welcome to use it.
Thank you, thank you, thank you. This is where it is at.