AHLA's Speaking of Health Law

Artificial Intelligence in Behavioral Health: Latest Trends and Developments

American Health Law Association



Wes Morris, Senior Director, Consulting Services, Clearwater, speaks with Randi Seigel, Partner, Manatt Phelps & Phillips LLP, and Jéna Grady, Partner, Nixon Peabody LLP, about the current environment regarding artificial intelligence (AI) in behavioral health. They discuss legislation regarding AI, including recent developments in Utah, the concept of regulatory relief, and tensions between the federal government and state governments; clinical decision support, chatbots, and operating across state lines; privacy and consent implications; and ethical obligations. Sponsored by Clearwater.

Watch this episode: https://www.youtube.com/watch?v=8FuC95zu6ko

Learn more about Clearwater: https://clearwatersecurity.com/ 

Essential Legal Updates, Now in Audio

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Comprehensive members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Stay At the Forefront of Health Legal Education

Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.

SPEAKER_01:

This episode of AHLA Speaking of Health Law is sponsored by Clearwater. For more information, visit Clearwatersecurity.com.

SPEAKER_00:

Hello, and welcome to this episode of the AHLA podcast, Speaking of Health Law. Our topic today is artificial intelligence in behavioral health. I'm your host, Wes Morris from Clearwater Security. Clearwater is an organization that focuses on helping other organizations manage privacy, security, and compliance matters in the healthcare space and other spaces. Personally, I'm interested in this topic because prior to getting into the world of privacy and security and HIPAA, I worked as a behavioral health specialist, so I am truly interested in hearing what our guests today have to say. Our guests are Randi Seigel and Jéna Grady. Let me introduce them quickly. Randi Seigel is a partner at Manatt Phelps & Phillips and a recognized advisor to healthcare companies navigating regulatory compliance and innovation challenges. Based in New York, she co-leads Manatt's health tech vertical and plays a central role in the firm's work at the intersection of healthcare and artificial intelligence. A leading voice on AI regulation in healthcare, Randi frequently writes and speaks on topics including AI governance, ethical risk, and state and federal policymaking, and is a principal contributor to the Manatt Health AI Policy Tracker, which monitors evolving federal and state AI regulations. Her insights have been featured in Modern Healthcare, Insider, and Pluribus News, and she has spoken at the 2024 HLTH conference and the American Telemedicine Association's 2024 and 2025 annual conferences. Our other guest is Jéna Grady, a partner at Nixon Peabody, who helps behavioral health providers and AI vendors fully understand the complexities of AI use in the behavioral health space. Also based in New York, she translates policy into practical steps: governance, data use, vendor diligence, and risk mitigation for behavioral health providers.
Most recently, Jéna was a co-presenter for "AI in Behavioral Health: Legal Considerations and Best Practices" at the HCCA Behavioral Health Compliance Conference. Welcome, Jéna, and welcome, Randi. It's good to have you here today. And I'm really interested, as I've said, in where we're going with all of this.

SPEAKER_02:

Thanks for having us. Happy to be here.

SPEAKER_00:

Yeah, great to have you. Let's jump right in, shall we?

SPEAKER_02:

Let's do it.

SPEAKER_00:

First thing I want to ask you about, and I think you will probably have lots of perspective around this, is legislation regarding AI in general. It's become a real hot topic all across the nation. When we talk about artificial intelligence in healthcare, most of the conversation focuses on what might happen someday: future risks, future laws, future guardrails. But in one location, Utah, the future is already being tested in real time. Utah has taken a different approach from most states. Instead of waiting for federal guidance or trying to write perfect rules up front, the state essentially said, we're going to learn by doing. Through its Office of Artificial Intelligence Policy and the AI Learning Lab, Utah is experimenting with how AI can be deployed in high-risk areas like healthcare and behavioral health while offering something unusual in the regulatory world: regulatory relief. Randi, would you talk with us first about what that means, the implications of regulatory relief, and what Utah is doing?

SPEAKER_03:

Sure. So it's interesting. Utah has really been at the forefront of AI regulation. They were one of the first states to pass a law requiring certain types of disclaimers to be provided to patients and other users of artificial intelligence, and then other states followed suit. They have also been the first state to adopt what they're calling a regulatory relief program, with the idea that there can be a safe way to test innovative AI, even if that type of artificial intelligence in healthcare might normally run afoul of state rules governing the practice of a profession, like a mental health profession. This year they approved an AI tool called Doctronic, which essentially allows an AI tool to renew prescriptions for 250 of the most commonly prescribed medications that they have determined are relatively low risk, without physician involvement, provided the tool has achieved certain quality and safety measures through the testing process. The agreement is for a year. They're going to use that to gather data, and it might be extended for another year. And the signatories to that agreement are not only the Office of Artificial Intelligence Policy but also the Department of Licensing. So that means they've engaged with the medical board and the licensing division and come to an agreement that they think this is worth piloting. Part of the reason, as Mr. Boyd explained, that they've embraced this technology, and why they're thinking about having more healthcare use cases come through the regulatory relief program, is they do think it can really improve access, especially for patients who suffer from chronic conditions. And they believe these sorts of medication refills take a lot of time away from physicians.
We also know there's a lot of noncompliance with medication that can lead to adverse outcomes. So they believe this is really enabling access in a safe way, and they're excited about the opportunity. I have to say, this was carefully designed, probably with all the guardrails that Jéna and I would advise a client on in terms of thinking about how to monitor AI post-deployment. So, in my opinion, this was a very thoughtful use case and sounds well designed. And while it is dealing with prescribing, it's operating in a relatively lower-risk space, because these are medications that patients have been on for some time. It doesn't include controlled substances, and it doesn't include infusion drugs. So it sounds very thoughtfully designed, in a way that could maybe lead to other opportunities for similar technology to go through the regulatory relief program in Utah.

SPEAKER_00:

I literally was going to ask about potential drugs of abuse, and about how it was being monitored and managed, so I think you hit that very nicely. And so, if I'm understanding correctly, this is for refills, not for initial prescriptions.

SPEAKER_03:

Correct. Yeah, and they have some geofencing, obviously, on their website, because I went to play around and see how it worked. It says AI doctor available in Utah, and it won't let me proceed past that. Although I will be in Utah in a few weeks and intend to see if I can try it out from there, just because I'm super interested in the technology.

SPEAKER_00:

You must, and you must report back to us what you find. Fantastic. So, when we think about this idea of regulatory relief, would either of you consider this a responsible governance strategy for AI in healthcare? Or do you see it more as the law struggling to keep up, and so we've created something innovative to try to manage it? What are your thoughts?

SPEAKER_02:

I think, to one point, yes. If you think of healthcare in general, we have very old, antiquated laws, especially in behavioral health. We have some mental health privacy laws that date to the 1980s, maybe the 1990s; a lot of these privacy laws are very much from back in the day. So it's very hard for all these state legislators to think of innovative ways to regulate AI tools while we're all still trying to understand what AI is really all about. If you think about 2024, we weren't all having these AI conversations, especially when it comes to clinical delivery of care. So these sandboxes are a very unique way of testing how AI can be used in clinical care. Europe and other countries have been using sandboxes for a bit of time now, and sandboxes have also been introduced at the federal level. But it was Utah, like Randi said, that has been innovative about how to regulate AI from the get-go, to come to the table and come up with this. And I think it's a great start. Like Randi said, we're talking about prescriptions that have already been prescribed, very low risk. From the behavioral health standpoint, it would be interesting to see a sandbox 2.0 for this pilot. Could you do it for certain prescriptions for individuals with serious mental illnesses, who sometimes have a very hard time adhering to certain medications because of the conditions they have? So it'll be interesting to see.

SPEAKER_00:

One aspect that I immediately thought of as we were walking our way through this was that medication compliance point that Randi made: having an AI agent in the mix allows for quick understanding and observation as to whether the patient is following the regimen that's been given to them; at least based upon refill patterns and those sorts of things, you can get a sense of whether this person is actually complying with their medication. And you're very correct, Jéna, that in behavioral health, certain conditions tend to preclude medication adherence, because the medication is difficult or it causes side effects or other difficulties, things such as tardive dyskinesia and other kinds of conditions that cause patients to say, I don't want to take this, even though it would be in their best interest. But then again, we do many things not in our best interest in our world. Thank you for those perspectives.

SPEAKER_03:

Wes, I just want to note one other thing that didn't come up in our prep call. As I was thinking about where Utah sits in the regulatory environment, they also issued guidance specifically on mental health, in terms of how clinicians can use AI responsibly in their practice, and they have created, I believe, a mini licensure pathway for some chatbot tools. So it's a really interesting state to watch for those listening who want to see how states are thinking about AI use in mental health. Doctronic is really about medication adherence, not specific to mental health, but as we've talked about, it creates access. They've really been doing a lot of thinking about this, so it's an interesting state to take a look at, and one that Jéna and I will both be watching, because I think it is going to spur a lot of innovation in Utah in particular. And what we've seen, which we may talk about, is that states are copycatting each other in this space in a very bipartisan way. So if Utah figures it out and is driving innovation to that state, we could see other states taking that approach, both in mental health and in the regulatory relief process.

SPEAKER_00:

That's an interesting perspective. I hadn't heard about that aspect of it, but yes, we are definitely going to talk about states copycatting. In fact, that brings us right to the next theme I wanted to discuss with you: federal versus state. There is some tension, or seems to be some tension, between the federal government and state governments in legislating AI. Without getting into a who's-right-or-who's-wrong approach, can we look at this from a behavioral health perspective, maybe starting with the idea of how fragmented the regulatory landscape really is?

SPEAKER_02:

Yeah. So first off, from a federal level, just to give an overview of what's happened: we had Biden saying, we want guardrails, we want a certain framework to ensure AI is used appropriately, safely, without bias, et cetera. And then Trump came in and said, I think on the first or second day in office, we're done with that executive order; this is how we're going to spur AI. And that has really challenged the view of what the federal government's role is in AI implementation and in spurring it. We had the big beautiful bill, which at one point in Congress was going to preempt any state AI law, which would have been quite interesting. That did not go through. But very recently, as of last month, Trump issued an executive order saying he wants all the agencies to review state AI laws and see whether they would deter what he considers necessary innovation in AI and its appropriate use, through the agencies governing healthcare as well as other services. So we have HHS and the FDA really wanting to spur AI use. And then you have states that, as with Utah, are doing some very innovative things, but also want to be considerate about having appropriate guardrails. So it's about identifying the different issues and perspectives that we need to evaluate when it comes to AI.

SPEAKER_03:

Yeah, I think what was interesting in the moratorium negotiation was that, again, in a very polarized environment, there was bipartisan support among governors that they did not want to be prevented from legislating artificial intelligence in specific use cases, especially if it was going to leave a vacuum. If the federal government wasn't going to replace that moratorium with comprehensive AI legislation, they really pushed back and said, well, you can't prevent us from protecting our citizens. And even the governor of Colorado, when he signed that very omnibus transparency law, basically said in his letter approving it: I don't like this law, it's not a great law, it's going to prevent innovation in our state, but I realize we need it; legislature, go amend it, and federal government, come into the space and adopt comprehensive legislation. So I think there's alignment at the state level that maybe we don't want this patchwork, but at the same time, patients and individuals are very rightly concerned about artificial intelligence use. So the states are responding to their constituents in the absence of federal legislation. Some are spurring innovation, and other states adopted five or six laws last year legislating AI. Some might say that's too burdensome; others might say, no, it's actually quite reasonable, and we would expect it in other types of industries.

SPEAKER_02:

And to that point about constituents, especially when it comes to mental health: there have been cases with such horrific outcomes that states rightly are saying, especially when it comes to mental health, we have to put significant guardrails up. How those guardrails are implemented moving forward remains to be seen. I think the most restrictive mental health legislation for AI was passed in Illinois last August. So we'll see how it's implemented, and whether there's any discussion of revising it so it is more easily manageable for AI vendors and clinicians. We will figure it out as we go.

SPEAKER_00:

There are two themes I want to follow based on what you've just talked about. One of them, from that regulatory perspective: at the federal level, some AI tools fall under FDA oversight, some fall under FTC consumer protection oversight, some fall under state laws, and some fall under no laws at all. So it seems as though part of establishing an effective governance strategy, at the state or federal level, is going to be determining who has the ball and whose rules we're all going to follow. Is that an accurate view of the world as it stands right now from a regulation perspective?

SPEAKER_03:

Yeah. I think the majority of the AI tools we're talking about for mental health fall into two buckets, like Jéna mentioned, some of which have been conflated, as in Illinois, where they all exist in one bill that is already being copycatted in other states this legislative session. There's the clinician's use of artificial intelligence, and one might think about regulating that differently; those tools generally fall under the professional licensure provisions. And then there are tools being used for mental health purposes, whether intended for that or not: what are called companion apps, or somebody engaging with ChatGPT for mental health support. There's a bucket of laws emerging in that space. And then there might be subregulations, which is not really a technical term, applying to minors' engagement with those tools. Those may be in the professional laws, but often they're really in the consumer protection laws because of what the tool is designed to do, and those would be enforced by the attorney general, versus enforced by the regulatory oversight body for the profession. So I think what you're talking about is definitely happening at the state level, and it's really dependent upon who the user of the AI tool is as to where proper governance or oversight lies. At the federal level, I think some of that's happening, but I'm curious to hear Jéna's perspective. I think most of these tools, at least the ones we're talking about for behavioral health, are trying to avoid FDA oversight by being general wellness tools, or by not diagnosing, just providing information. And so I think that's maybe part of the vacuum at the federal level: they sort of sit over there.
The FTC sent out letters to six companion apps after that horrible suicide case, but I haven't seen much come out of that yet in terms of any comprehensive directive.

SPEAKER_02:

Yeah, I think that's right. And especially with the FDA, given their recent guidance earlier this month; we have worked with AI vendors looking at that guidance. It doesn't exactly state that if you're just providing general wellness advice, then we won't consider you a medical device, but it leans that way. So the tool won't say, okay, this person has depression, this person has bipolar; instead it will say, you should reach out to your physician to discuss whether these depressive symptoms are connected to possibly having depression. It's definitely about getting as close as possible to the line of clinical advice while keeping it just general wellness: reach out to a physician or a clinician to further discuss.

SPEAKER_00:

So, just to paraphrase and lock this in: if it's clinically based, if a clinician is involved in the interaction in some way, whatever that might look like, then we go the route where the FDA has purview, or HHS may have purview, or whatever the case might be. But if it's a chatbot kind of environment, or the companion kind of environment, then it falls more to the FTC side of the line. So the immediate thought I had is: who decides which way this thing goes? I guess it has to be the developer telling the oversight body what their intent was. Is that a reasonable perspective for me to take, or is it off base?

SPEAKER_03:

Well, if they fall on the FTC side, they're not going to the FTC for approval or oversight; it's more that they fall into the enforcement bucket. Whereas if you are a tool being used by clinicians or consumers, it depends; it's so technical, and I'm definitely not an FDA expert. It sounds like Jéna has more experience in that space. But depending on what the tool is for and how it's being used, you can determine yourself, probably through your legal counsel, whether you need FDA approval or not. In many cases there's a bit of a gray area, and you're taking a gamble, because the FDA has enforcement discretion, I think is the term they use. So some tools might say, we'll defend it later, and if we ultimately have to go get approval, we will. But neither the FDA nor really the FTC, aside from those letters that went to the six companion apps, is out there hunting down tools that might be playing in a gray space, or potentially even crossing the boundaries. That may change, especially at the state level, because some states are particularly more aggressive, especially those with laws going into effect this year related to companion apps. But that remains to be seen. I don't envision this federal government, which is very much leaning into AI use, in particular in healthcare, making it a high priority to prosecute them.

SPEAKER_00:

Okay. Thanks for that clarification, because when you said you don't have to go to the FTC for approval, you just have to defend yourself if the FTC comes after you, to show that what you were doing was reasonable in that space; that's a great view. Whereas on the FDA side of the house, there are approval processes, and that's a very different market entirely. I was thinking about the difference between clinical decision support AI, the side where AI is involved in clinical care, and the chatbot side of the world. I read an article as I was preparing for this that talked about the idea that some of these chatbots, by their tendency to mirror the user and continue a conversation, can actually reinforce delusions that may already exist, and can therefore cause some harm. When we think about mirroring in a clinical view: when I would mirror with a patient, it was to help the patient see whether what they were saying and doing was helpful or unhelpful. But in this case, that chatbot tendency is really about keeping the engagement going, it seems, and that's a different view than what we talked about. I want to go back on one point, though, unless somebody wants to cover more details around that; but I think we've hit that pretty well. What I wanted to talk about is jurisdictional issues, like telehealth and cross-state challenges. Most of these AI developers are not confined to the state of Idaho, where I live, or the state of New York, where you live. They are national or global, yet the organizations implementing them often have to address state lines, and whether what is being done in one state can be done in another.
Any perspectives on practicing across state lines, and whether the AI aspect of this comes into play, or whether it's the clinician's responsibility to ensure that they are following their state's rules when they're using AI decision support? What are your thoughts?

SPEAKER_02:

That's tough. That's my initial thought. I think this continues what we initially saw with COVID, when therapists were all of a sudden able to practice in essentially 50 states. And while practicing medicine obviously has its challenges, each state, when it comes to mental health services, is so different. Take duty to warn, for example. I can work with mental health practitioners in New York who have a certain situation where I say, you absolutely have to tell XYZ; I cross into another state, and I say, no, in fact it would be considered a bad fact that you disclosed this confidential information, solely because of what that state's legislators thought, perhaps 10 or 15 years ago, about duty to warn. So you have duty to warn issues; scope of practice is different among the states; social workers can do different things compared to, say, alcohol and addiction counselors. You have a hodgepodge of states trying to regulate mental health. And if you think about it, there was a time when the federal government really wasn't involved in mental health; it was mostly a Medicaid-funded program. That's why we have all these different state requirements for mental health services. Then you add AI to that, and you add AI-specific mental health vendor requirements, and it becomes quite challenging. For an AI tool in Utah versus another state: when that AI tool identifies risk, do you identify risk, and who needs to be in communication with this patient, differently because of the state? It's very fascinating to see how this will all play out.

SPEAKER_00:

Okay. I had written down a couple of questions that I thought would be good challenge questions to see what you think. Considering the decision support side of AI, where the clinician is involved and active in it: should the patient always know when AI is being used in that environment?

SPEAKER_03:

So it's a really interesting question. It's a question where I have sort of changed my personal opinion, but I'll say some things objectively first. When we zoom out, we have been using clinical decision support tools for a very long time. They may not have been generative, but when a system does medication flags, that is a clinical decision support tool using some sort of AI, maybe not generative, to put together your medical history and the medications you are on, and to provide a flag to a physician or a psychiatrist, who can of course override it, but it is something that helps inform how they practice. AI has been used in radiology for a really long time. We don't often allow patients to opt in or opt out of things that have been determined to be the standard of care. We may obtain informed consent for a surgical procedure, but we're not saying, this is the robot we're using, and this is how we came up with our surgical plan, which maybe was generated by AI. So with that lens in mind, I think there is a complication around getting consent for clinical decision support, unless the use of clinical decision support is somehow magnifying the risk to the patient, whether a risk to their autonomy or potentially to their privacy. But again, we've been using technology where we share patient data to provide services for a really long time. And I worry that sometimes a patient who is not well informed, just like any patient, who doesn't fully understand the technology and how it's being used, is going to choose not to use it and opt out. With that said, there is a study that is embargoed but about to be published.
I'm not going to give a lot of details about it, but a third party did it, and it made clear that patients themselves strongly believe they should be able to consent and opt in to the use of artificial intelligence. So I think it's about gaining the patient's trust, but I do question the utility of a consenting process, and how much information you would actually need to give them to provide informed consent. It feels overwhelming: you've now turned a quick consent form into five pages that looks like a notice of privacy practices that nobody reads today. And is that really helping to advance informed consent?

SPEAKER_00:

I'm going to object to one point there. I'm the guy who reads the notice every time I go to a new provider.

SPEAKER_03:

I do too, because I want to see, I'm like, oh, I write these. Let me see if there's anything you know that's wrong here.

SPEAKER_00:

Right, exactly. That's why I read them. And if I see problems, I'll circle them and highlight them and give them to the privacy officer at the end and say, hey, you might want to look at that. But very good; I really love what you said there. The main point being that we're not telling patients about all of our other clinical decision support processes, or the ways we're using technology to advance their care. So is it worth doing that just because we have this label called AI? Ask 10 people what AI is and you'll get 12 different answers, because one of them will have looked it up on ChatGPT too. Also, among the population we're talking about are people who are exceptionally skeptical, or perhaps operating out of delusions, or even worse, psychotic states. And to ask for informed consent, so to speak, in that kind of environment might really not work well. Jéna, any thoughts from your side on that subject?

SPEAKER_02:

Yeah. One point that Randi made, which I think is really strong to note again, is that we have been using AI, especially in healthcare, for years. It's just that now the generative piece is giving people heightened risk awareness. When it comes to this population, think about when health information exchanges came about, and we were talking about how to get individuals with mental illness to consent to exchanging their health records among entities. A lot of states took different approaches. Some states are very strict: you absolutely have to have consent to share these records, and you can only break the glass with the exchange in certain circumstances. Other states say patients can opt out, but otherwise we're going to exchange their information. So with the records and information this population has, there has always been heightened consideration. I agree with Randi that we generally have these support tools using our information that we don't consent to. I think until the federal government says otherwise, it's going to be the states really determining what's best for their constituents. Texas has already said you need to provide notice to your patients when you're using these tools and they're leading to medical decisions. How's that for an answer? I'm just like, let the states decide instead of giving my own personal opinion.

SPEAKER_00:

I appreciate both of your perspectives on that, because I hadn't even considered it in the way Randy brought forward, the idea that we've always been doing this. That's the kind of point that both clinicians and the attorneys supporting them should be thinking about: what are the rules we have to follow? What are the considerations? There's a lot to it. I want to take up one last subject briefly. I've established that I've worked in the world of privacy and security compliance for a long time now, almost 23 years, my goodness, and before that in behavioral health. So the privacy implications, that's the piece. Should we ever? And I know asking an attorney to agree to an "ever" statement is not a good idea, but I'm going to do it anyway, and you can tell me. Should we ever use behavioral health data to train generative AI without explicit consent? Your thoughts.

SPEAKER_03:

To go back to your question: since you're asking an "ever" statement, I'm going to do a lawyer thing and answer your question with a question. Are we talking about identifiable data, or de-identified data sets that were derived from providing behavioral health services?

SPEAKER_00:

Well, see, that's exactly the point. If we think about it from the pure HIPAA perspective, which is where I primarily work, there are 18 identifiers. Well, 17, plus one that says "any other thing that could reasonably be used to identify the individual." They're all based on the idea of what could reasonably be used to identify a person. But even if we strip those identifiers, there are discussions saying that stripping the 18 and leaving everything else in a clinical summary that we then release publicly does not provide sufficient privacy protection. Things like life experiences, narratives, and rare diagnoses rely on context rather than content, and with that context, they could be used to re-identify an individual. So I'm asking the bigger question: whether identified or de-identified, should we be using this data to train generative AI without explicit consent? How's that for throwing it all the way back?
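[Editor's note: the Safe Harbor stripping Wes describes can be sketched in a few lines of Python. This is an illustrative toy, not a real de-identification pipeline: the patterns below cover only a few of the 18 identifier categories, and the sample clinical note is hypothetical. It shows exactly the gap Wes raises, the identifiers come out, but the contextual narrative remains.]

```python
import re

# A minimal, illustrative sketch of HIPAA Safe Harbor-style redaction.
# These patterns cover only a few of the 18 identifier categories
# (names, dates, phone numbers); real de-identification pipelines
# combine NLP with expert determination and are far more involved.
PATTERNS = {
    "[NAME]": re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.\s+[A-Z][a-z]+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(note: str) -> str:
    """Replace each matched identifier with a placeholder token."""
    for token, pattern in PATTERNS.items():
        note = pattern.sub(token, note)
    return note

note = ("Dr. Smith saw the patient on 3/14/2024. "
        "Callback: 555-867-5309. Patient is a retired astronaut "
        "with an extremely rare metabolic disorder.")
print(redact(note))
# The listed identifiers are gone, but the narrative context (a retired
# astronaut with a rare disorder) may still re-identify the patient,
# which is the re-identification risk discussed here.
```

The point of the sketch is that redaction operates on *content* (patterns in the text), while re-identification risk can live in *context* that no identifier list captures.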

SPEAKER_03:

Yeah, I think the answer, in my mind, probably depends on who's doing the training and for what purpose. On one side, if we really want generative AI tools that are smart in behavioral health, then we're going to need a big enough data set to train them. If, for instance, you only include people who consent, and you've already raised the note that individuals with serious mental illness may be skeptical of providing that type of consent, then any tool developed will not include data inclusive of that population. That means it will be less effective or potentially biased against that population, or that type of technology just won't be available. So that's one side, and I think it's a real consideration, especially if a health system is building the tool on its own or with a trusted partner, and it's being built specifically for clinical use cases. On the flip side, I think patients would generally want to know if their data is being used. And if you have any fear that the data could be used in a way, in particular to re-identify an individual, for a purpose unrelated to developing a tool that will benefit the population contributing the data, then I would think ethically we have a responsibility to get their consent. That is my off-the-cuff answer; I've never thought about that question before. I'm curious to put Jenna on the spot and hear what she has to say.

SPEAKER_00:

Jump in, Jenna, tell us what you think. Yeah.

SPEAKER_02:

Yeah. When you first raised this, Wes, I immediately went where Randy did: how are we going to have generative AI that works appropriately for individuals with serious mental illness, and mental illness in general, if we don't have data to train the tools? If we think about mental health generally, anytime we've had incentives at the federal or state level to implement certain tools, like EHRs, behavioral health was not included. Behavioral health has always stood behind when it comes to innovation. And here we have the ability to make some really unique discoveries and help people with things like treatment adherence, people dealing with mental illness who have had very significant challenges before. So we really need the data to be there. At the same time, privacy is a significant issue. For substance use records governed by 42 CFR Part 2, we just aligned the rules with HIPAA to try to make them more operational. And with HIPAA, we're already asking: can HIPAA really regulate generative AI, given the de-identification standards you discussed? Or, as Randy noted, can individuals perhaps be re-identified regardless of HIPAA's de-identification standards? So we've loosened the confidentiality of substance use records to align with HIPAA, and now we want people to use that data for generative AI purposes. I think if we can ensure the privacy is there, and that the data will be stored in a certain place and used for certain reasons, then we absolutely should use it to train tools.

SPEAKER_00:

Okay, that's fantastic. I love both your answers; thank you so much. What I'm hearing is, one, we really can't truly de-identify the data if it's going to be clinically relevant to helping us create new solutions and new tools. Two, and Randy, you used the word ethics, we've got to do the ethical thing in how we receive and manage the data. There may be circumstances when explicit consent isn't necessary and ethics is not an issue, and other circumstances when explicit consent would be the ethical thing to do. So it's going to be up to those who are developing, those who are advising, and those who are regulating to decide where the lines are. Would you agree that's a reasonable perspective? We don't have a great answer, but that's the general answer we get to at this point.

SPEAKER_02:

I would agree. No decision is going to be made in a vacuum. It's going to take a lot of stakeholders to analyze how we use this data for training and for other operations.

SPEAKER_00:

Okay. That gets us to the very end. We're going to wrap up here with one last question, and it goes directly back to what we were just talking about: Utah, the sandbox, regulatory relief, and those kinds of issues. Even in a regulatory sandbox environment such as Utah's Learning Lab, should some ethical obligations be non-negotiable? And who gets to decide what is ethical and what is not? That will be our wrap-up, because we've covered a lot today, but I'll leave that as the last question. Should some ethical obligations be non-negotiable, even in sandboxes?

SPEAKER_03:

Well, I think there is consensus, and this is actually included in the agreement between Doctronix and the various agencies in Utah, that they have to disclose to the patient that they are interacting solely with artificial intelligence. At this point, if you are interacting with artificial intelligence, whether for companion support, potentially prescribing, or some other technology, and there is limited or no human in the loop, I think we owe an obligation to disclose that to the user so that they can make an informed choice. Second, because some of these companion apps respond so realistically, we're seeing stories of people developing very emotional relationships with their chatbots. Especially for younger people or those who might be more impressionable, I think there is an obligation, and some states are putting this in their legislation, to remind the user that they are engaging with something that is not human, and to do so in a way that is not buried in terms of use they have to go find, but is front and center, so the user can decide whether to continue the conversation, question the output, or seek human intervention. At least until we have more data suggesting that this disclosure isn't important to informed consent, or that users are so aware of artificial intelligence that they don't need it, that feels like our ethical responsibility, and frankly our legal responsibility, and I think it protects the AI tool as well.

SPEAKER_00:

Okay. Jenna, any final thoughts on that one? That was fantastic.

SPEAKER_02:

Yeah, I absolutely agree. To Randy's point, especially with these chatbots, I think that's the one area where everyone is on board: as Randy said, you need to identify that the person is talking to a chatbot, and you need to make sure the individual periodically understands that they're still chatting with a chatbot. I think one of the OpenAI tools is now actually saying, hey, you've been chatting for two or three hours, it's probably time to take a break, just to regain that human awareness. And we've now seen AI psychosis, which we didn't talk about, but that's a real thing now that we did not see in 2024. So how do we make sure we protect individuals who use these tools, and make sure they use them appropriately?

SPEAKER_00:

Excellent. No, we didn't actually talk about the AI psychosis aspect, and I thought we would get there. We've covered so much ground, and this has been by far one of my favorite conversations with AHLA thus far, because I have a personal view into all of this as well. This has been great. I'll give you the last opportunity: is there anything more you would like to add as a wrap-up before we close for today? I'll start with Jenna, since you're on screen right now.

SPEAKER_02:

I would say let's see what 2026 holds from a state and federal legislation standpoint. It should be an interesting ride.

SPEAKER_00:

Excellent. And Randy?

SPEAKER_03:

I think that was a perfect summary. I think it's going to be active and interesting, maybe worthy of some sort of TV series afterwards, about how the federal and state governments duke out jurisdiction here, and maybe where we land at the end of the year.

SPEAKER_00:

Let's have the three of us get together, take the ideas we've talked about today, throw them into a generative AI tool, and create our own script for our own television series. Exactly, just trying to end on a slightly more humorous note. Thank you so much, Randy and Jenna; this has been an amazing conversation, and I've appreciated it very much. I hope the listeners of the Speaking of Health Law podcast appreciate it as well. On behalf of Jenna Grady, Randy Siegel, and myself, thank you for listening today, and I hope you have a great rest of your day. So long.

SPEAKER_01:

If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. For more information about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org, and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA Comprehensive members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.