For Better, Worse, or Divorce

Episode #120: AI's Role in Family Law

Walters Gilbreath, PLLC

In this episode, Jake Gilbreath and Brian Walters discuss the growing role of artificial intelligence in family law. They break down how AI is being used in legal research, document drafting, mediation and financial discovery - and where it may (or may not) replace traditional legal practices. Jake and Brian wrap up the episode by touching on the ethical responsibilities outlined in a recent Texas Ethics Opinion, including the importance of technological competence, protecting client confidentiality and ensuring accuracy. 

If you are interested in speaking to a member of our team or have a topic you would like to hear on our podcast, visit www.waltersgilbreath.com or email us at podcast@waltersgilbreath.com. 

SPEAKER_02:

AI is probably, it's accurate, it's correct 80% of the time, and 20% of the time it's wrong to devastating effect. Right. Like if you call that 20 percent, or call it 10 percent, or call it 5 percent, it doesn't matter. When it's wrong, it's wrong in a big way and it can really affect things. And so that just goes back to why it's so important to use it as a tool, you know, work with your lawyer. And why it's so important as lawyers to embrace the part of it that is right and is correct and embrace the part that's helping guide our clients in making what we do more efficient.

SPEAKER_00:

Your hosts have earned a reputation as fierce and effective advocates inside and outside the courtroom.

SPEAKER_02:

Well, thanks for tuning into the For Better, Worse, or Divorce podcast. This is the podcast where we provide you tips and insight on how to navigate divorce and child custody situations, particularly in the state of Texas. I'm Jake Gilbreath, and I'm joined by my partner, Brian Walters. Today we are going to talk about AI, which of course is a hot topic these days, and specifically how AI has evolved in its interplay with law, particularly family law, and just kind of our own personal experience in what we're seeing from our clients. And then we'll turn from that discussion to talk about a recent ethics opinion that the State Bar of Texas issued concerning the use of AI in law. Brian and I will be the first ones to tell you that we're by no means experts in this, but we are always curious. We try very hard to utilize technology in our law firm. We've been like that since we first started partnering up, looking for different ways to leverage technology and make things more efficient for our clients and our staff. We think there's a better work product that way, and it saves the client time, effort, or money if you do it that way. So that's our general philosophy. And now that AI is becoming more of an issue and we see more of its use in our day-to-day lives, we thought we'd address it on the podcast. So Brian, I guess starting off, just speaking colloquially, how are you seeing AI turn up in your cases? Let's talk about it from the perspective of potential clients or the clients that you're representing, and how you see them using it or asking you about it.

SPEAKER_01:

Yeah, it does come up. I tell you, I was a little skeptical of the whole thing at the beginning when it came out. I thought it might just be another one of those tech manias that comes along and then goes away. But it's become, I think, in several ways more common in what I deal with at the firm. First of all, clients who've always, you know, been intelligent and done research on their own, which in the pre-internet days probably meant talking to their cousin who got divorced, and in the internet days just kind of Googling things. Now they'll put the same questions through AI. You know, they'll put in a question of, hey, if I'm a dad in Texas, what's likely to happen in my custody case, or something like that. So you get more sophisticated and informed clients. I think generally it's better information than just Googling it. We don't get a lot of the kind of common misconceptions as often as we used to when people would just Google things about, you know, I want 50-50, or alimony, or that kind of stuff that doesn't really apply in Texas in the real world. I've had some clients who've even taken it a couple of steps further. I had one the other day, a potential client who was putting the messages between her and her co-parent, her ex-husband, into AI and asking it to analyze his personality and her personality, and then to suggest good ways to respond. It was really interesting. She showed me the outputs from it. It was fascinating. I don't think AI can practice medicine, but it certainly had some opinions about each of their psychological and psychiatric conditions. But most importantly, it really suggested very, very good ways for her to respond to his particular personality. So that's the kind of thing I've started to see just even as of a week or two ago.

SPEAKER_02:

Yeah. No, I'm the same way. I think you're right. It comes in various levels of usage. The most basic is the, hey, I ran this through Grok or ChatGPT or whatever, here's what it says. And I have a lot of potential clients doing that. I have a lot of current clients doing that. I've been really impressed with the clients that we have and potential clients, the way they've used it and the way they understand the limitations of relying on an AI answer. It's like you said, it seems to me like 10 or 15 years ago when people would come in and they'd say, well, I Googled it. But most people, there's always the exception, but most people would say, I've Googled it, I understand this is just what Google says, what do you think, right? Like they used it, but they had a great deal of skepticism about what they read. It was a way to sort of, I don't know the best way to put it, kind of grease the wheels for the conversation, right? Somebody's coming in and they're starting to understand the terminology. They're understanding kind of what other people are saying, or what ChatGPT has said, or what have you. And they're just in a better position to have a more productive conversation with their lawyer, which I think is great, right? For consults, for example, it makes for a much more efficient consultation if you've gone through either Google or AI, talked to one of the AI agents and asked, what's conservatorship? What's possession and access? What's child support? How's that set? And understanding that there may be errors in what you've gotten from AI, but you understand how to have the conversation. You're almost kicking the tires on kind of what you've learned online. I see people, frankly, use it too, no different than I would in my personal life or the way I do use it in my personal life, as a tool to see if the professional that they're talking to really lines up, or I guess whether the person they're talking to has the competency that they're looking for, right? Again, no different than Google. If I Google what a plumber should do to fix the leak in my faucet, and some plumber comes over and he or she says something completely different from what I've read online, maybe this plumber's right, or maybe they don't know what they're doing and everybody online is right. I see people using it the same way with AI. If you ask AI something and the lawyer that you're talking to is just totally off in a different universe than AI is, one of them's wrong, right? And it may be the lawyer. It may not be. So I think as lawyers, the way we have to approach that is, one, we don't need to get our feelings hurt, like I see a lot of lawyers do who get offended when people Google things or ask AI. I think we need to be prepared to say, yes, but here's the nuance, or yes, but here's where it's wrong and here's where it's right, and kind of use it to build off of with our advice, because it often is a really good starting point. This is random, it just came off the top of my head, but I'm going to use it as an analogy. My brother-in-law, my sister's husband, is a phenomenal artist, and that's what he does professionally. And I would say my sister's a phenomenal artist too, and that's what they both do professionally.
And she was talking about various artists who will sometimes, if they're doing work for a business, start with AI, asking it for a concept, and then they take it and make it their own. It kind of helps inspire their work, just running something through AI. And I think a lot of it's the same for us. It's definitely a starting point. I'm not going to take that product and just put a rubber stamp on it, but I'm going to take that product and talk about ways that it's helpful and ways that it could be wrong. Because we'll talk about limitations in a second, but there are limitations to AI. And then, like you said, Brian, I see clients using it, and I know we in the office use it as well, as a tool for communication. It can be helpful with communication. It can be helpful for discovery. I had a case, this is a year ago, and I was way more skeptical back then about AI, where the other side had produced some text messages between the other side and another witness. And it had to be like 15,000 pages of text messages between the two individuals. And I remember thinking, you know, who on earth is going to read this? Certainly not the lawyers. That would cost $100,000 just to read. And the client ran it through AI, I forget which program he used, and asked it various different questions and found some really good text messages that we would have never found just because of the sheer volume of things, right? We would have never had time to dig out the two or three really, really relevant text messages that he was able to find. It's like that with bank statements. It's like that with really all financial documents. And again, I think as lawyers, and I want to kind of get your thoughts on this, Brian, but I think as lawyers, we need to embrace that stuff. I think it's just like the internet or really all things technology. We see a lot of people in our profession not just shy away from it, but kind of reject it and try to minimize or eliminate the use of technology in what we do. What are your thoughts on that?

SPEAKER_01:

Yeah, I agree with you. That's a great example. Financial statements are another: hey, find me all the cash withdrawals over $500, or something like that. All that type of thing. That is a problem we have in family law specifically: there are often hundreds or thousands of pages of documents or thousands of messages, and it is prohibitive for us to go through it line by line. As attorneys, you're hesitant to delegate it to non-attorney staff. Your client often doesn't understand it or doesn't know what to look for, or might be looking for something different. And so that's a great way to go through it. I think that's an excellent idea.

SPEAKER_02:

Well, and then speaking about what you were talking about, Brian, with what they're asking you about custody and what's going to happen in court and stuff like that, there are limitations, I think, on this. And this will kind of bring us to the ethics opinion in just a second. But I think what we do have to counsel clients, or tell clients, is that a limitation of AI, and, you know, again, just taking the most common example of ChatGPT, is that ChatGPT will give you an answer, right? No matter what, if you ask it a question, it will spit out an answer, and it will spit out a very confident answer. And it's not always right. And it's hard to tell that it's not right, because it does speak so confidently. I think the terminology, and again, you and I aren't experts on this, but the terminology is that it will hallucinate, right? It will hallucinate and come up with incorrect information. And it is constantly getting better, but it's not always right. And so my current thinking is that AI is better used in conjunction with a lawyer. It helps you prepare for your meeting with your lawyer, helps guide the conversation, gets you ready for the conversation, makes it more efficient. But again, understand the limits of the information that you get. I mean, my wife knows this because I spend way too much time doing this, but you can get incorrect information from ChatGPT or Grok or any of them. If you ask it enough different complex legal questions, it will eventually give a wrong answer. And if you don't know, if you're not doing this for a living, then it'll just seem like every other answer that you're getting. So the way I'm currently describing it to clients, and I don't know the percentage, so I'm just making this up, but I always tell people AI is probably accurate, it's correct, 80% of the time, and 20% of the time it's wrong to devastating effect, right? Like if you call that 20%, or call it 10%, or call it 5%, it doesn't matter. When it's wrong, it's wrong in a big way, and it can really affect things. And so that just goes back to why it's so important to use it as a tool, work with your lawyer, and why it's so important as lawyers to embrace the part of it that is right and is correct, and embrace the part that's helping guide our clients and making what we do more efficient and a better product for our clients, but at the same time understand that there are limitations, and it's our role as the professionals, just like a doctor, to come in and address the imperfections in AI so you don't have those devastating results. Maybe we should address this in another podcast, but there's a news story I was reading the other day where I think there's an AI company, this is in federal court, that has committed to representing itself in whatever lawsuit it has 100% by AI. All their pleadings, 100% by AI, not using lawyers. And the news story had posted the pleading that AI had written, and everybody was having a good laugh at the errors that it had. But I read it, and it's like, well, 90% of this is pretty good, pretty impressive, and good points and good arguments. And then there's that 10%: you did just admit to liability or cause the whole lawsuit to go down the drain with this one paragraph right here. But it was a good start.
So that's where we're sitting right now, and I think we're recording this in April 2025, so who knows where we'll be in a year. But that's where I think we're at right now.

SPEAKER_01:

In Texas family law, I think there are two issues in particular that I see come up. One is that family law is state specific. Right? So let's say that I was getting divorced and maybe I had an immigration issue related to a work visa. If I just ask it, hey, I'm going to get divorced with this kind of work visa, and by the way, what's my alimony situation going to be? Well, immigration law is federal, so maybe it'll get that part right, but whether it's correct on the alimony question depends on what state you're in. So I think that's important. And I do see people bring those kinds of things in sometimes without distinguishing between Texas and everywhere else. And then the other part of it, and I think this is where lawyers are really important and where I think AI would really struggle, is the difference between the law as it's written and the practice of it in a courtroom. A classic example is child support, where there are 22 or 23 factors that the court can use to adjust child support off of the guidelines. But as a practical matter, that almost never happens, except, I think, if you have a very, very truly disabled child and one parent has a lot of money or a lot of income. So you might get an answer that says, well, for child support there's a guideline, but it can be changed for all these reasons. And then people think, oh, well, I'll get it changed, and they come to meet with us, and it's like, well, not really. So those are the kinds of things. But, you know, I think together they can work: a client can come in with a basic understanding of that concept, and then we can say, well, you know, here's what really happens, and then therefore they can get to the right answer.

SPEAKER_02:

And hopefully in a much more efficient way. So I guess the summary of that is, you know, it's great. We encourage it. We are continuing, just like we approach all things with technology in our firm, to explore it and utilize it in a way that benefits our clients. There are ethics rules, you know, surrounding it. The State Bar of Texas Ethics Committee actually issued an opinion back in February 2025. Let's see, it's Opinion Number 705, entitled, What Ethical Issues Are Raised Under the Texas Disciplinary Rules of Professional Conduct by Lawyers' Use of Generative Artificial Intelligence in the Practice of Law. That's a long title, kind of classically lawyerly written, rather than just saying, what are the ethical issues raised by the use of AI? But okay, long title. To summarize, you know, it's an interesting opinion, and you can find it online. It's, I think, six pages, single-spaced. As I read it, it makes maybe four or five points, maybe in a longer manner than it needed to. But essentially it says, obviously, Texas lawyers need to educate themselves about AI and the ethical issues that may arise. Ethical issues, particularly about sharing and protecting client information, are very important, obviously, for lawyers. The opinion says that lawyers should acquire basic technological competence before using any AI tool. That's true of all technology, but it's particularly true of AI. Obviously, you need to make sure, like we talked about, that the AI doesn't imperil confidential client information. And this is what we've been talking about for the last 20 minutes: essentially, we need to verify the accuracy of AI. That's largely what we're called on to do when clients are using AI, verifying accuracy, adding to it, and correcting it where appropriate. And then lastly, and this is important, the opinion says that to the extent you've saved time for a client by using the tool, which we often do, you can't charge the client for what it would have cost without AI, as if it took us 10 hours, right? I think you'd wish that would go without saying, but the State Bar Ethics Committee did make sure to point that out to us lawyers. And that's absolutely the case. And so I guess I'll use that to wrap up the conversation. That's kind of the whole point of AI and us using AI as lawyers: we should always be looking for a more efficient, better product for our clients, because this is supposed to be a client-focused service that we provide. And if AI can help the clients have a better product with less of our time, that's better for everybody. And so we need to be really aware of that. I know we are at our firm. It's a really interesting topic, and we'll continue to explore it as time goes on. So let's wrap up with that. That's all we have for today. If there's a topic out there that you would like for us to discuss on the podcast, or if you're interested in speaking to anybody on our legal team about your situation, please email us at podcast@waltersgilbreath.com. You can also find us online at waltersgilbreath.com. I'm Jake Gilbreath, I'm joined by my law partner, Brian Walters, and we appreciate y'all listening.

SPEAKER_00:

For information about the topics covered in today's episode and more, you can visit our website at waltersgilbreath.com. Thanks for tuning in to today's episode of For Better, Worse, or Divorce, where we post new episodes every first and third Wednesday. Do you have a topic you want discussed or a question for our hosts? Email us at podcast@waltersgilbreath.com. Thanks for listening. Until next time.

