AHLA's Speaking of Health Law

Key Issues Surrounding AI Governance in Health Care

December 01, 2023 | AHLA Podcasts

Artificial intelligence (AI) holds great promise for improving health care delivery and management, but to realize its potential and avoid missteps, health care leaders need to establish a strong governance model for the acquisition and use of AI in their organizations. Mikaela Lewis, Principal Consultant, Clearwater, speaks with Andrew Droke, Shareholder, Baker Donelson Bearman Caldwell & Berkowitz, PC, about how AI is being effectively used in health care, privacy and security issues, common misuses of AI, considerations involving AI and business associate agreements, proper due diligence when acquiring AI technologies, dealing with breaches, and the current legal and regulatory environment. Since this podcast was recorded, the Biden Administration has released an executive order on the use and development of artificial intelligence. Sponsored by Clearwater.

To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.

Speaker 1:

Support for AHLA comes from Clearwater. As the healthcare industry's largest pure-play provider of cybersecurity and compliance solutions, Clearwater helps organizations across the healthcare ecosystem move to a more secure, compliant, and resilient state so they can achieve their mission. The company provides a deep pool of experts across a broad range of cybersecurity, privacy, and compliance domains; purpose-built software that enables efficient identification and management of cybersecurity and compliance risks; and a tech-enabled, 24/7/365 security operations center with managed threat detection and response capabilities. For more information, visit clearwatersecurity.com.

Speaker 2:

Hello everyone, and welcome to this episode of the American Health Law Association's podcast, Speaking of Health Law. I'm your host, Mikaela Lewis. I am a principal consultant with Clearwater, where I advise and support our healthcare clients on how to move their organizations to a more secure, compliant, and resilient state. With me today is Andy Droke, attorney and shareholder at Baker Donelson. Andy leads the firm's artificial intelligence and GDPR teams, and he counsels clients on a broad range of data protection, privacy, and cybersecurity matters. It is great to speak with you today, Andy.

Speaker 3:

Thanks, Mikaela. It's great to be here. I'm really looking forward to our discussion about AI in healthcare and how organizations can manage risk and position themselves for success in this area.

Speaker 2:

Sounds good. Let's dive in. So, as many of you may be aware, there's a lot of conversation right now about artificial intelligence, or AI, and its applications in healthcare. Andy, I know you work with plenty of your clients on how to use AI appropriately in healthcare. So to start us off, can you give us a few examples of how you've personally seen AI effectively being used in healthcare?

Speaker 3:

Yeah, of course. It's a really exciting area, and one of the things that's important to keep in mind is that AI-based technologies really exist along a spectrum. It ranges from things as simple as logic-based chatbots that operate off of simple yes/no questions and decision trees, to clinical decision support tools, to implementations used to enhance medical imaging, to generative AI platforms that create chart notes for providers, to machine learning tools that identify patients who might be eligible for clinical study opportunities. There are so many AI-based tools already in the marketplace that there are different considerations for each, and lots of opportunities. Those opportunities are both clinical and nonclinical; some are administrative in nature.

In terms of specific examples of what I would consider more administrative in nature, we've seen some clients implement AI-based systems and tools to support their billing and scheduling functions; some who have implemented tools that help with things like fraud detection in their financial and accounting systems; and another area where we've seen a lot of implementations is the hiring and advancement arena in HR operations. So it's important to remember that healthcare organizations have these functions that are not necessarily patient facing, all of which present opportunities for AI-enabled tools.

On the patient-facing side, Mikaela, we've also seen symptom management and triage tools where individuals and patients can engage with AI: those kind of logic-based chatbot tools. Those have been helpful from a symptom management perspective, where a patient engages with a tool that then escalates back to the healthcare provider, so there's some algorithmic decision making in the background of those tools.

The broader AI implementations that people like to talk about in the news are, I think, still a little future state for a lot of healthcare providers: tools that help make diagnoses or identify risk patterns for a particular disease. Those are things coming down the pike. When we talk about Google's advancements with Med-PaLM, we know it can answer multiple-choice questions from a med exam, and we know they want it to be able to look at an x-ray and deliver a summary as a radiologist would. But we also know that those types of technologies are not fully baked yet. So the implementations have ranged, both clinical and nonclinical, along that spectrum: from things many of us have interacted with for years when we've tried to cancel cable subscriptions or otherwise engaged with the internet in a non-healthcare setting, to things that are more advanced, looking at large sets of healthcare data to try to improve patient outcomes or identify opportunities. Those implementations have obviously ranged in complexity and presented lots of different issues.
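For readers who want to picture the logic-based end of that spectrum, here is a minimal Python sketch of the kind of yes/no decision-tree triage chatbot Andy describes. Every question, node, and escalation path is a hypothetical illustration, not clinical guidance or any vendor's actual product.

```python
# Minimal sketch of a logic-based triage chatbot: a binary decision tree
# of yes/no questions. All questions and outcomes are hypothetical
# illustrations, not clinical guidance.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    text: str                      # question to ask, or final disposition
    yes: Optional["Node"] = None   # next node if the patient answers "yes"
    no: Optional["Node"] = None    # next node if the patient answers "no"

    @property
    def is_leaf(self) -> bool:
        return self.yes is None and self.no is None

# A tiny hypothetical tree that escalates to a human provider,
# keeping a "human in the loop" for anything non-trivial.
TREE = Node(
    "Are you experiencing chest pain or trouble breathing?",
    yes=Node("Please call 911 or go to the nearest emergency room."),
    no=Node(
        "Have your symptoms lasted more than 3 days?",
        yes=Node("We will route your chart to a nurse for follow-up."),
        no=Node("Monitor your symptoms; contact us if they worsen."),
    ),
)

def triage(answers: list[bool]) -> str:
    """Walk the tree with a fixed list of yes/no answers."""
    node = TREE
    for ans in answers:
        if node.is_leaf:
            break
        node = node.yes if ans else node.no
    return node.text

if __name__ == "__main__":
    print(triage([False, True]))  # -> routed to a nurse (escalation path)
```

The point of the sketch is that nothing here "learns": the tool is pure branching logic, which is why Andy places it at the simplest end of the AI spectrum.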

Speaker 2:

Yeah, absolutely. Those are great examples, Andy, and I love that you placed them on a spectrum. We're seeing all different types of applications of AI across healthcare specifically and across other industries, and it's being used for all different functions, so I think that's a great starting point for us here. Personally, I know my customers and clients are asking me all about the privacy and security side of AI specifically, so we'll jump into that now. You mentioned some of the great uses of AI in healthcare, and like you said, some of them aren't fully cooked yet. So I do want to address some of the questions and considerations around security, privacy, and the law that have been coming up in our industry recently. From your perspective, Andy, what are the most common security and privacy concerns for the use of AI in healthcare?

Speaker 3:

Yeah, it's a great question, Mikaela. Taking a small step backwards, the privacy and security concerns really derive from some broad principles that undergird the industry as a whole, and that society and regulators are looking at as key foundational principles for AI: namely safety, transparency, fairness and equity, and validity. Those principles should undergird AI as a technology, but they also frame how we regulate, what we are concerned about, and what organizations should be mindful of as they implement AI in a particular way. So when you think about privacy and security with an AI lens, many of the usual constructs still apply, right? We are still worried about the nuts and bolts and the basic blocking and tackling that we've been concerned about for a long time. But we're also thinking about the particular risks that AI presents. Those arise because of how certain types of AI work and the data that are required to really take advantage of things like large language models. Obviously, the greater the volume of data, the more risk there may be, and the more important it is to protect it and think through those particular issues. There are also the usual concerns around business associate agreements, data ownership, vendor contracting, and diligence. One of the other topics we've seen and heard a lot about, both inside and outside of healthcare with respect to AI, is the idea of keeping a human in the loop with respect to decisions that are made by algorithms. That goes hand in hand with a principle healthcare organizations have long been familiar with in terms of having privacy and security officers. We've seen AI-based issues landing on the desks of privacy and security officers in many organizations, likely because those functions have the governance frameworks in place to process and start to understand how AI may impact the organization, and to think through some of the risks and decision points. Having someone thinking through the additional issues from an organizational risk perspective is obviously a big part of the data privacy and security considerations with respect to an AI deployment.

Speaker 2:

Yeah, absolutely. And I'm glad you brought up that organizational risk perspective. Here at Clearwater, we do a ton of risk assessments, and I'm getting that question quite frequently from my clients: how do we factor this AI world into our risk assessment? We're also concerned about breaches, and that's part of why we're doing these risk assessments. But everyone's asking: what are some common misuses of AI that we should be aware of, to ensure that we're preventing breaches and protecting that sensitive data?

Speaker 3:

It's an interesting question, Mikaela. We know that there are bad actors using generative AI to create more realistic phishing attacks, and we know the technology will be used in the ever-evolving cybersecurity warfare that exists. In terms of internal threats, and thinking about what the actual risks are rather than the doomsday scenarios the media likes to focus on, I think one that is really important for organizations to consider is the risk arising from bias that exists in algorithms. This is an issue that, outside of healthcare, the EEOC has been really proactive about, and one where I would expect other agencies to follow suit. Obviously, we want to make sure that the algorithms that undergird AI-based technologies are fair and non-discriminatory. So as we think about misuses of AI, in healthcare in particular we need to make sure that a deployment of AI does not have a disparate impact on a particular group of individuals, based on how that algorithm is programmed but also on the data on which it is trained. As that algorithm and that technology is used, it should be deployed in a manner that treats people equally, not in a way that carries a bias inherent in the algorithm or produces an impact that is disparate or discriminatory in its actual implementation and use.
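One concrete way to test for the disparate impact Andy describes is the "four-fifths" (80%) rule the EEOC applies in the employment-selection context: compare each group's selection rate against the most-favored group's rate. A minimal sketch with made-up data; the groups, outcomes, and threshold default are illustrative assumptions.

```python
# Minimal sketch of a four-fifths (80%) disparate-impact check on an
# algorithm's binary decisions. All groups and outcomes are made-up data.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, selected: bool) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def four_fifths_check(records, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the most-favored group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    passes = {g: (r / best) >= threshold for g, r in rates.items()}
    return passes, rates

if __name__ == "__main__":
    data = ([("A", True)] * 50 + [("A", False)] * 50
            + [("B", True)] * 30 + [("B", False)] * 70)
    passes, rates = four_fifths_check(data)
    print(rates)   # {'A': 0.5, 'B': 0.3}
    print(passes)  # {'A': True, 'B': False} -> group B flagged for review
```

A failed check is a signal to investigate the algorithm and its training data, not a legal conclusion by itself; the same idea extends to clinical deployments by substituting the relevant outcome for "selected."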

Speaker 2:

Yeah, that is a great point about where we're getting our source data and how accurate it is, especially since that information isn't always shared by the systems we're using; we don't necessarily know what the source data is. So we want to make sure we know where that data is coming from, that it's accurate, and that it's up to date. A lot of things in healthcare are constantly changing; there are always new recommendations around security and privacy and around the actual care as well. Keeping that information accurate and current is, I think, the best opportunity to reduce the bias you mentioned. You also mentioned business associate agreements, and I want to circle back to that for these AI systems, to make sure we have maximum security and privacy and a proper legal approach to these contracts. Is there anything specific around the BAA that we need to be aware of, or that we need to look at differently for these AI systems?

Speaker 3:

It's a really fair point. We all know that business associate agreements have evolved greatly since HIPAA was originally enacted, and there are several areas that are important to keep in mind when negotiating for AI-based tools in particular. First, risk-based terms for insurance, indemnification, and otherwise may vary, and the parties, depending on who you represent and which side you are on in a particular scenario, may need to enhance what you would otherwise expect given the particular technology at play and the volume of data involved for a use case. In addition, one of the issues lots of our clients are thinking about is secondary uses of information to train algorithms and further enhance products. This has been a discussion point in traditional SaaS and other technology agreements for a while: whether a vendor can use information to enhance and improve their products and services. There is a variety of opinions depending on which side of an agreement you're on at any given moment, but it's particularly relevant in the context of AI with respect to training and development of the algorithm, because of how important that training is to the AI-based tool and its downstream implications. So that decision point is important from a compliance standpoint, and from an operational standpoint, depending on the particular tool, how it's being implemented, and what the ultimate iterative effect is for any given implementation. Outside of healthcare, some vendors are willing to agree that they will not use personal data or regulated data to train for the benefit of other customers; some vendors are not. So this is going to be a discussion point first outside of the healthcare context, and that will ultimately inform those discussions inside the context of negotiating a business associate agreement for a large language model or generative AI-based integration or product in the healthcare space, where you're talking about those higher-stakes, higher-risk algorithmic developments and downstream consequences.

Speaker 2:

Yeah. You mentioned that there may be different questions during the acquisition process, and I know we want to understand exactly how that data and AI is being used. Is there anything else you can think of, during the due diligence phase of acquiring a new system, for example, that you would personally ask on behalf of your clients?

Speaker 3:

Yeah, absolutely. We're in a time where it seems like everybody has an AI-based tool and everybody's selling an AI-based tool, and the thing that's important to keep in mind is that they're not all created equal. It's fair to make sure that you are diligencing whether the product actually works, asking questions about how you are actually going to use it and what data the product will process, and then relaying that information, because it informs how you will negotiate the agreement. How does that impact what uses are permitted in the agreement? How does that impact the risk and liability allocations? Those are all critical facts for understanding the overall structure of the relationship. In addition, some of the other questions we typically ask and push for in AI implementations involve intellectual property: how was the algorithm trained? Does the vendor feel confident that they can make representations and provide indemnity with respect to non-infringement? Beyond that, we ask the vendor to assist the customer with transparency obligations. Outside of healthcare specifically, many of the laws evolving outside the United States, which we anticipate will be mirrored in some form in the state consumer data privacy laws that now exist in several states, contemplate algorithmic transparency. That requires cooperation between the data controller, meaning the customer that has the notice obligation, and the vendor, who holds the information necessary to adequately describe and disclose what an algorithm does. The time to discuss that, and to obtain a contractual commitment to acquire the information necessary to accurately and adequately meet an algorithmic transparency obligation, is at the time of contracting. So having discussions about those items during the vendor diligence process and during contracting is really important, because it sets you up for success in advance and in anticipation of laws that are currently in effect and laws that are coming down the pike at a relatively rapid pace.

Speaker 2:

Absolutely, and I completely agree. If we can really set ourselves up for success during the diligence phase for new systems with AI, and even for existing systems with AI, I think we're setting the organization up to be more secure and compliant in general, which is awesome. However, I have to ask the blunt question: breaches happen. So in the event of a breach of a system using AI, from a legal perspective, who's responsible?

Speaker 3:

Yeah, so this is the typical lawyer answer, right: it depends. The facts of how the system is set up, how the agreements are set up, and what actually happened are always going to matter and win the day. But we do know that under HIPAA, covered entities ultimately have the responsibility to notify the affected individuals, and they ultimately have the obligation to notify OCR. We know that, in general, OCR has historically been more interested in talking to covered entities about reported breaches than to business associates, although that may be changing somewhat. For the time being, providers and plans should view this as their responsibility. That is consistent with what the EEOC has said outside of the healthcare context: employers are responsible for the bias in software they retain, and for ensuring that employment decisions based on algorithms within that software are not discriminatory. So as you start to piece the puzzle together, while you may have contractual obligations and ways to move risk around and point fingers, ultimately it's important that customers and entities using AI products are prepared to deal with any reporting obligations and consequences that may arise if there were an incident in connection with that product.

Speaker 2:

Well, we hope that we can prevent that as much as possible, <laugh>. So I also want to talk about the regulations that are currently out there around AI. I know you specifically mentioned the state consumer data privacy laws and HIPAA, and those are general laws around data privacy. Is there any specific regulation about the use of AI, or any current standards that we're looking at?

Speaker 3:

So we currently have a bit of a mixed bag. Right now, first and foremost, it's important to keep in mind that the use of AI is regulated by existing laws. People at first glance tend to think, oh, AI is somewhat new; there must not be any laws that regulate it. That's not actually how it works. We're playing within our existing legal framework: contracts still apply and govern, and fraud and abuse laws still apply and govern. If an AI-based tool upcodes on a claim, you still have a reimbursement question. The development of AI-specific laws is, as you might expect, starting in the EU, which is currently in the process of finalizing the EU AI Act. It's a risk-based law that would classify some AI implementations as too risky and prohibited; some as high risk, requiring certain compensating controls and analyses; some as medium risk; and some as lower risk. As we've seen with some state privacy laws, we would anticipate that at some point in the future, something like the EU AI Act may exist in the United States. Within certain of our state privacy laws, like the CCPA, the Colorado law, and others, we've seen rights to opt out of automated decision making or profiling, that is, the use of algorithms to make certain decisions; those would generally apply. There are also some transparency obligations under those laws, as we talked about before. We've also seen somewhat of a push at the federal level: the administration has been actively involved in trying to secure voluntary commitments from key players on the technology side, and Congress has been making some efforts, holding listening sessions and discussing how to regulate in this space. Comments from various senators have acknowledged the difficulty of coming together and figuring out how to regulate effectively here, so we will see what happens at the federal level. And then, from a voluntary risk management perspective, NIST has the AI Risk Management Framework, a risk-based framework that governs the design, development, and use of AI-based products. Organizations could adhere to that framework in their review and adoption of AI-based products to help them govern and manage those risks.
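For organizations that want to operationalize that last point, the NIST AI Risk Management Framework is organized around four functions: Govern, Map, Measure, and Manage. Below is a minimal, hypothetical sketch of tracking a product review against those functions; the individual checklist items are illustrative assumptions, not language from the framework itself.

```python
# Hypothetical sketch: tracking an AI product review against the four
# NIST AI RMF functions (Govern, Map, Measure, Manage). The checklist
# items are illustrative, not quoted from the framework.
NIST_AI_RMF_REVIEW = {
    "govern":  ["AI policy approved", "accountable owner named"],
    "map":     ["intended use documented", "data sources inventoried"],
    "measure": ["bias testing performed", "accuracy benchmarks recorded"],
    "manage":  ["incident response plan covers the tool",
                "vendor contract addresses secondary data use"],
}

def open_items(completed: set[str]) -> dict[str, list[str]]:
    """Return checklist items not yet completed, grouped by RMF function."""
    return {
        fn: [item for item in items if item not in completed]
        for fn, items in NIST_AI_RMF_REVIEW.items()
    }

if __name__ == "__main__":
    done = {"AI policy approved", "intended use documented"}
    for fn, items in open_items(done).items():
        print(fn, "->", items)  # remaining work before adoption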

Speaker 2:

Awesome. I know I've been watching those Senate subcommittee hearings over the last couple of weeks, and I highly suggest going to look at those. I think they offer some great insights, and they brought in some great subject matter experts on the topic, so it's something to watch closely right now. I think we're moving toward these conversations around how to regulate AI and how to best use it, because it is a powerful tool, as you spoke to earlier, Andy, with great implementations; we just need to make sure it's fully cooked, like you said. So to finish out here, I do want to cover your recommendations around the use and management of AI in healthcare. From a legal perspective, and from a privacy and security perspective as well, do you have any specific advice for your clients and our clients on the use of AI?

Speaker 3:

Absolutely. This is an area where I think, right now, strategy is key. There's not going to be a one-size-fits-all answer for everybody, and you really have to balance the legal and operational considerations based on (a) the particular implementation and (b) the particular business goals. So developing a really robust AI strategy and governance plan now, thinking about what data an organization has, whether and how it can be leveraged, what the opportunities are, and then how to go about engaging AI-based tools with that strategy and data in mind, is the first, best step toward efficiently deploying an AI-based tool in the future. Taking those steps at this point to inventory what's happening, understanding the legal rights, the contractual rights, and what's been said in privacy notices, and understanding how to operationalize the data that exists: those are the key steps to take now. Look at vendor agreements and think through the contracting pieces; those are the legal steps to take now to set yourself up for success in developing the building blocks of a governance framework. The other piece that I think is really important to engage in now is talking to and training employees today. This is something that is in the news, something employees are absolutely aware of and talking about, and something they are likely already using, with or without supervision or care for whether they should be. Organizations that are not thinking proactively about this may be caught a little flat-footed as they realize they have individuals using these publicly available AI tools without fully comprehending what risks that presents to the broader organization.
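The inventory step Andy describes can start as simply as one structured record per AI tool, capturing the legal and contractual facts he lists. A minimal sketch follows; every field name and value is an illustrative assumption, not a standard schema or any organization's actual inventory.

```python
# Minimal sketch of an AI-tool inventory record for a governance program.
# Field names and the sample vendor/tool are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str                 # e.g., "billing", "triage chatbot"
    processes_phi: bool           # does it touch protected health information?
    baa_in_place: bool            # business associate agreement signed?
    secondary_training_use: str   # "prohibited", "permitted", or "unknown"
    human_in_the_loop: bool       # is a person reviewing its outputs?
    privacy_notice_covers: bool   # do existing notices describe this use?
    risks: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord(
        name="SchedulerBot", vendor="ExampleVendor Inc.",  # hypothetical
        use_case="appointment scheduling", processes_phi=True,
        baa_in_place=True, secondary_training_use="unknown",
        human_in_the_loop=False, privacy_notice_covers=False,
        risks=["secondary use terms unresolved", "notice gap"],
    ),
]

# Simple triage: flag PHI-touching records that need legal follow-up.
needs_review = [
    t.name for t in inventory
    if t.processes_phi and (not t.baa_in_place
                            or t.secondary_training_use == "unknown"
                            or not t.privacy_notice_covers)
]
print(needs_review)  # ['SchedulerBot']
```

Even a flat list like this surfaces the contracting and notice questions discussed earlier in the episode before a tool goes live, which is the point of building the governance framework now.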

Speaker 2:

Awesome. Thank you for all of that insight, Andy. I think those are great points, especially that strategy is key: setting yourself up for success now, before we see more and more implementation of these AI systems and more regulations come out. I think that is a fantastic point. So thank you for the excellent insights you've shared throughout this conversation, Andy. It was great to talk to you today. Thank you, everyone, for listening. I hope you have a great rest of your day.

Speaker 1:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.