AHLA's Speaking of Health Law
The American Health Law Association (AHLA) is the largest nonprofit, nonpartisan educational organization devoted to legal issues in the health care field. AHLA's Speaking of Health Law podcasts offer thoughtful analysis and insightful commentary on the legal and policy issues affecting the American health care system.
Navigating the Conflicting Interests of Digital Health Innovation and Business Advancement
Shalyn Watkins, Associate, Holland & Knight, and Selena Evans, Founder, Ara Governance Advisory, discuss how health care attorneys can advise clients who seek to implement, develop, or use digital health. They cover implementing policies and procedures, updating data privacy and security policies, understanding professional board requirements, staying aware of litigation, and inserting critical judgment into artificial intelligence (AI) outcomes. They also discuss the biggest mistakes they see in the industry when clients are implementing or leveraging AI, the interests being weighed when discussing AI implementation, common pitfalls, and ethical issues. Shalyn co-wrote an article for Health Law Connections magazine about this topic. From AHLA's Health Care Liability and Litigation Practice Group.
Watch this episode: https://www.youtube.com/watch?v=mdaz1gw9Vog
Read the Health Law Connections article: https://www.americanhealthlaw.org/content-library/connections-magazine/article/8508ed3c-e3bd-4e60-9a30-0a74e89e805d/Navigating-the-Conflicting-Interests-of-Digital-He
Learn more about AHLA’s Health Care Liability and Litigation Practice Group: https://www.americanhealthlaw.org/practice-groups/practice-groups/health-care-liability-and-litigation
Essential Legal Updates, Now in Audio
AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.
Stay At the Forefront of Health Legal Education
Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.
This episode of AHLA Speaking of Health Law is brought to you by AHLA members and donors like you. For more information, visit AmericanHealthlaw.org.
SPEAKER_00:Hi everyone, my name's Shalyn Watkins. I'm an associate at Holland & Knight in our healthcare regulatory enforcement practice group. And I'm so excited to talk to you today about a recent AHLA Connections article titled Navigating the Conflicting Interests of Digital Health Innovation and Business Advancement: Five Tips for Healthcare Lawyers Advising Clients Interested in Digital Health. So that was the biggest mouthful of all time. I co-wrote this with two of my colleagues, Mason and Harshida. And I'm super excited to talk about it with none other than my friend Selena Evans. When we were talking about doing a podcast, I was like, who can talk about this? And immediately your name came up. So Selena, do you want to introduce yourself and tell us who you are and what you do?
SPEAKER_02:Thanks, Shalyn. I'm super happy to be here. And the article is fantastic. So for any listeners who haven't read it, be sure to, because it's full of great nuggets that I doubt we'll be able to cover entirely in this podcast today. So yeah, I'm Selena Evans. I run a firm called Ara Governance Advisory. I specialize in change and transformation in highly regulated spaces.
SPEAKER_00:Yeah. And just by way of summary of the article, which was sponsored by AHLA's Health Care Liability and Litigation Practice Group: we go into pretty good detail about describing what digital health is, which I think is the biggest problem in the whole article and in a lot of what we do. Selena, I know you always have the most fun answer to this question. But I think it starts with the reason why this is becoming such a hot topic: digital health, the term, encompasses so much. So when we as lawyers come together to try to advise our clients, you're picking up everyone from people doing something as simple as telehealth to people creating whole pieces of advanced technology that are going to be run by AI and are going to help with diagnostic tools and things like that, right?
SPEAKER_02:Yes, and how these things all fit together. We tend to want to segment things off into buckets, but really what is being developed around digital health is an ecosystem of providing care. And because of that, the relationships between all of these things are really important for us to consider. You can't just put a wearable device together with telehealth and put that chain together without thinking through what the whole package looks like, what that means for the patient, what that means for your regulatory obligations. So yeah, it's a quagmire.
SPEAKER_00:Right. And I would assume that based on what you do, the key component of ensuring that this new age of AI is properly integrated in healthcare is corporate governance and governance policies related to the implementation of AI, which I think was the crux of what we were getting at in our article, talking about the five biggest things we're seeing for our clients right now. You're going to have regulatory obligations on many different levels. It's not just what it takes to put together and get a piece of technology cleared by the government, but it's also the provider liability associated with leaning on the understanding of AI instead of also putting critical pieces of your own clinical judgment into patient outcomes. And I think that's what makes this such a sticky but fun topic.
SPEAKER_02:Yeah, I couldn't agree more. And it's also very difficult because there's a jagged adoption curve. A hospital system can't just roll out AI and be like, okay, we've got the policies, here we go, everybody, go have at it. There needs to be training, there needs to be dialogue, there needs to be learning and understanding. And all of it is changing at such a rapid pace that the governance becomes not just the policies and not just the committees and those kinds of things we think about in traditional ways of thinking about governance, but: how do you make decisions around this? How do you prioritize these investments and the actions against them in a way that can align your regulatory requirements with your operations and with the protections you need to ensure you're not creating liability for yourself downfield? It's a real challenge, but it begins with that decision making, and it's very multidisciplinary and cross-functional. And that is always a challenge for organizations to navigate.
SPEAKER_00:Right. And in the beginning of this, as a lawyer, half the time the easiest thing is just to tell the client to avoid the risk and not use it. But at this point in time, there's not a world where we're going to be operating without the AI. For so many reasons and in so many different facets of what our clients are doing, they're going to need it. I think the area where I saw the adoption happen most quickly was in insurance claims, right? Being able to parse through large amounts of data about what's being billed for or not, the AI review process has been critical. That was the first time I saw things happening at a large scale. And now that we're getting to the point where every type of healthcare provider, or even some of the startups I work with, is going to be using some form of AI, the business outcomes outweigh the risk of getting into this very controversial area now.
SPEAKER_02:Yeah, absolutely. And if larger organizations are not going to take the plunge to figure out how to navigate this new technology, they'll be disrupted in other ways. So there's a lot of strategic risk around all of that and around the difficulty in navigating these things. And one thing that occurred to me as you were talking is that we're using AI both internally to an organization and in products. The way those things overlap and fit together is, I think, a really interesting subject that not a lot of companies are taking up. And I think that's a real pitfall, because it matters significantly. You don't necessarily want to take a large language model and leave a regulatory decision up to that large language model. They're not built for determinative solutions. So you have to think about the way you're using it internally for your regulatory processes and then also the way it's embedded in products.
SPEAKER_00:Definitely. And as a counterpoint to that, even our smaller organizations are not going to have much of a choice, right? Even if we pretend most of the risks were out of the realm of possibility, if you're billing insurance, or even if you're collecting large amounts of data and using AI in any form or fashion, there are going to be data privacy or health information privacy obligations that are always going to be there too. You can't really get out of HIPAA, you can't get out of state health information privacy laws. As we're collecting and processing information, there's always at least one big bucket that's going to come back to bite you if you don't think about it.
SPEAKER_02:Yeah, absolutely. And we don't have a clear understanding of what enforcement will look like. We don't have a clear understanding of what the plaintiffs' bar will look like in terms of products liability enforcement and how they will start to structure cases around artificial intelligence. But the laws don't go away. They're still there. So there are going to be new theories and new things. If I were to give any advice to any company in this space: do the really hard thinking. Start at the level of your strategy, layer on the different considerations of your organization, and work together. Don't rush into it. Be thoughtful, be pragmatic, and make sure you're tackling this thing from multiple angles and multiple perspectives so you can get ahead of it.
SPEAKER_00:Yeah. I think one thing that would be really fun to do is go through the five tips we came up with in our article, see what your thoughts are on those tips, and then see if you, because you're a genius, can come up with other big tips that maybe we didn't fully flesh out when we were thinking through it for our article. So the first one was basically the importance of implementing policies and procedures. I know you talked a little bit about that, but on a large scale, what types of policies and procedures are we looking at, and why are we looking at that when we're thinking about using AI?
SPEAKER_02:Yeah, gosh, that in and of itself is a very broad topic. Of course it's important: the policies and the procedures are where you are going to anchor your operations. And so having that be thoughtfully done, as opposed to just rote and off the shelf, matters. You really need to design them for your business context, for the way your company operates, and with a good understanding of your maturity where you're trying to embed these policies. So the importance of them, I don't think, can be overstated. But I think there's a real risk in not appreciating the nuance of an operating context. I mean, you ask AI to create a policy for you, and it'll spit something out. But what does that mean for the structure of your operations, for the capabilities you have within your organization to be able to execute against those policies? Do your workflows have to change significantly? You can't just rub some AI on your old policies and hope for the best. There are so many second- and third-order consequences to rolling out this technology that need to be thought through from an operational perspective, but also from a policy perspective.
SPEAKER_00:Yeah. In my early years as a lawyer, I started out as an assistant attorney general in the state of Ohio, and then I went to the U.S. Department of Health and Human Services as assistant regional counsel. So I spent a lot of time working for regulators, and I'll say the first question that gets asked when you're being investigated, even if it's just a normal audit, is: where are your policies and procedures? Because the absence of the policy or procedure is usually a violation in itself, right? But secondly, even if it's not a violation, your failure to comply with your own policies is an indicator of your non-compliance with certain rules or regulations, or of your failure to train employees on them. So even if we weren't talking about AI, where we don't know specifically what enforcement is going to look like, we know that no matter what enforcement looks like, it's going to require some sort of structured understanding internal to the organization. And compliance with your own policies is usually step one to defending against any type of action. And it's not even just to avoid oversight from a regulator; it's to avoid future liability lawsuits too, right? The easiest thing to say is, hey, we were in line with the standards of operation for this type of product or this type of use. And if we really simplified it, I talked about HIPAA a little earlier, right? Operating without a HIPAA policy would make you de facto non-compliant. And here we know for a fact that OCR has put out guidance about the use of AI. It's not the greatest and easiest guidance to grasp, but we know there is some oversight coming related to it. And so having a policy or procedure helps protect against the chance of a violation, if you're all adhering to it.
SPEAKER_02:Yeah. And you raise a good point with guidance coming out: it is in and of itself iterating. We spend a lot of time thinking you can create a policy and kind of leave it alone. I think we need to get really, really good at the life cycle of the policies: making sure you're keeping up with the changes, getting that regulatory intelligence and guidance intelligence in, and making sure you're doing something with it and reacting to it. It's a whole pattern that has been really, really hard for companies to digest. So even just looking at that regulatory intelligence piece: where do you get the information? What information do you need? What comes in to you? And then what do you do with it? Where does it go? Who's accountable for it? How does it get embedded in the policies? You kind of have to have that whole end-to-end process designed really well to be able to keep up with all of the change.
SPEAKER_00:You're so right. And there's the piece we know no client wants: hey, lawyer, please redo my policy or go over my policy. I don't want to pay for this, right? I feel like I just did this. How long is this life cycle supposed to be? When we're advising other lawyers, one of the biggest things to think about is this: when you see that new guidance has come out, or you see chatter of something new happening, whether on the state level or the federal level, or even something like the EU AI Act, which we didn't discuss at all in our article, and you understand those parameters as they're ever changing and being adopted, that's the perfect time to reach out to your client and say, hey, I just saw this article, or I just saw this thing pass, or I just saw this new change come into effect. It might be a great time for us to look at that. At the end of the day, the client has to make the end call on whether they're going to put the time and effort into it. And it's our job to make sure they're at least on notice that, hey, it might be time to update those policies and procedures.
SPEAKER_02:Yeah, absolutely. I agree with that. And I might push even a little bit further than that: each organization should really have a structure around the kinds of guidance that you're looking for, to make sure that information isn't left to a client relationship that may not exist anymore. A company needs to be structured to be able to take in that information, so they know: okay, I'm getting this type of regulatory intelligence from Holland & Knight; I'm using this particular information service from Bloomberg, or whoever it ends up being, to get this type of information; and I'm making sure that that is robust. Because implicit in the sort of cyclical quality management we see in a lot of the AI guidance coming out is the requirement that you stay up to date. Not that it's ever not been there, but we're seeing a lot more focus on it in the regulations. So getting that process really structured, so you can proactively know when to reach out to your counsel, is super, super important.
SPEAKER_00:Yeah, I think that's right. Okay, our second tip, which I've hinted at a few times already, was the importance of updating data privacy and security policies. I don't know how you feel about this topic.
SPEAKER_02:Well, it's so important. Because we're lawyers, we talk so much about liability, we talk about policies, we talk about regulations. But the patient is always at the center of this. That is the mantra: the patient is at the center of this. If data for your patients gets compromised due to lax cybersecurity practices, it doesn't do you any good. There's huge reputational damage and all of this. But it also puts your patients at risk. We're used to thinking about the health risks that come along with these things, but the privacy risk is really big. And AI has a huge attack surface. So cybersecurity rules and practices within a company can't just sit on the shelf; they need to be embedded. And predominantly we've seen a lot of bolt-on approaches, with, in a lot of instances, privacy groups and security groups that operate in silos. That sort of thing just can't happen anymore, especially if you have AI embedded in your products and whatnot. Those things need to be continually refreshed and continually addressed from a security standpoint, and getting ahead of what the future of cybersecurity looks like is another really important thing.
SPEAKER_00:So yeah, and especially if you're in a place where things are heavily regulated. Let's forget HIPAA for a second. I'm in California; with the CCPA, there's already a robust body of law regarding the way we collect and store data. And if your AI is helping you collect and store that data, and it's keeping any of it, and it happens to be health data, you're now sitting on a gold mine for any prospective attack. No one wants to be a security official until they realize how important the security officer is. But I know that, at least in Europe, there is a really heavy emphasis on this as a component of AI implementation. And it's arguably the only place where we already have guidance here in the United States, right? We know for a fact that there is regulation on this issue. And we know for a fact that AI has a propensity to store data at a large scale, and a propensity to accidentally misuse that data, because we're still figuring it out.
SPEAKER_02:Yeah, absolutely. And with all the broad security regulations that are coming out, I think we can look to the financial sector around these cybersecurity requirements, and to the operational resilience requirements coming out of the EU. Because in the US also, security is a huge focus of the current administration. So I think that will continue to be a really important factor, and it seems to me, if I were reading tea leaves, it would be a place this administration would look to from an enforcement perspective. So I think looking at the security regulations from an architectural perspective and a capability perspective is very wise guidance for anyone in this space.
SPEAKER_00:Yeah. And lastly on this point, for me at least, we've seen the plaintiffs' bar make good use of this in the past five years, right? When it comes to data and information privacy and security, that has been the easiest way to get sued for a lot of healthcare providers right now. And some of our listeners are probably thinking, well, as long as I keep all of the information inside, and I know I have to thwart attacks, but I can insure against attacks, it's very unlikely that a lot of this will happen. I think the real reason you have to think about this space is that usually, if you're using AI, you're using vendors. And that's the third-party risk, right? When you have those additional third parties and everybody has their own security protocols, there's always a little room for risk. If that risk exists when we're dealing with paper files, it definitely exists when something's just sitting in the cloud, right?
SPEAKER_02:Yeah, absolutely. And without going down this road too much, the third-party risk piece I think is really complicated. I think companies do a really good job onboarding their third parties, making sure the requirements are in the contracts and things are going well out the door to protect against the privacy and security risk. But I think managing those contracts becomes really difficult, because a lot of times the business will own the contract with those companies, and so things don't end up percolating through the legal department in the same way. So I think it really does require a fresh look at your third-party risk program, certainly.
SPEAKER_00:It's so funny you say that, because I literally just spoke on a panel at the AHLA Annual Meeting about navigating your vendor contracts for these exact reasons. And we're going to be doing a podcast on it. So you guys are going to get tired of hearing my voice on the AHLA podcast, but this is just such an important issue right now that I think all of our clients are seeing. So I love that you're seeing it too, and I don't feel crazy.
SPEAKER_02:No, you're definitely not crazy. And even think about terminating a contract with a company. We tend to think about the contracts as getting into them, but what about what happens when you separate from a company? Where are your retention obligations, and where are they housed? Mapping your data and really getting a great handle on, and I know we're going to talk a little bit more about data, but your data lifecycle management, which includes your third-party risk, could not be more important.
SPEAKER_00:Yeah. So actually, I feel like we can bundle the next two together: understanding professional board requirements and staying aware of litigation. We just hit on a lot of the litigation risks that are happening. And I say to pair that with understanding professional board requirements because I think this is mostly targeted toward our providers who are implementing AI, right? You don't really see this on the tech side of things. But there are also medical malpractice liability issues, and there are issues with keeping and maintaining a license for an individual provider or facility if you have widespread issues with your AI usage.
SPEAKER_02:Yeah. I would say that board responsibility feels a little bit nascent to me, but I think we're going to see it become more important, because we are the bridge between patients and consumers, lawyers and healthcare providers alike. So I think we can expect to see a lot more of this too. And you don't want to be in a situation where you're up against a board hearing because you played fast and loose with this new digital technology. It's so interesting, because we take such a hard look at the aspects of things we roll out within devices and pharmaceuticals and those types of things. The technology and AI and large language models have come really, really fast. So we're playing so much catch-up with how these things should be embedded in those more rigorous processes that are out there, without having that front end of rigor on the AI itself. I think that creates, again, more liability. But the professional responsibility around that is to slow down, take a breath, and figure out: what is your context, what is your use case? It is great for a healthcare provider to use AI to do their meeting notes. I like it. I had a doctor's appointment last week, and it was great. My doctor was fully engaged with me, and it was really super. But I would have been horrified, horrified, if that conversation were let loose somewhere I didn't intend. And there are a lot of systems that overlap. So, what are your connections to your systems and all of that? So yeah, landmines everywhere.
SPEAKER_00:I think that's great.
SPEAKER_02:And also enablement. How cool is it that we can have these kinds of tools? We always tend to focus on risks, but there are a lot of possibilities there too. Gosh, we just need to be careful.
SPEAKER_00:Yeah, I think that's part of what we spoke about in the intro of our article: the fact that we're in this brand new world where there is so much left to be done, and we're continuing to uncover so much as we start to use these tools. I say it all the time to my friends that we've come a long way from just having our Apple Watch, right? Every time my Apple Watch updates, it's weirdly even more accurate about what's happening in my body, so I'm very impressed, right? But now we've gotten to the world where the data in my Apple Watch is something I can share through MyChart with my provider, and they can start to see what my trends are, and that can help them think about diagnostic tools for me. And I think that's just kind of amazing. So that actually gets to the last piece of our advice, which was inserting critical judgment into AI outcomes, which is kind of what we were just talking about, right? Once the machine spits out information to you, it's not giving you an answer; it's giving you something to help you get to the right answer. Kind of like jump-starting your brain to avoid having to do the ten-million-piece puzzle to get to the right outcome for the patient or the end user or whoever you're dealing with, right?
SPEAKER_02:Yeah, absolutely. And I love that you bring this up, because we humans are wired to make sense of what we see and to make quick judgments. We are influenced by things we can't always think through. It's neurological; it's not that you're terrible at your job or careless or anything like that. We are just wired that way, and we see those things play out all the time. Especially with the large language models, when it comes back with a really confident answer, and will double down on confident answers that may be wrong, we need to really think through our own personal process for how we take in information that comes out of an LLM and make sure we're testing our thinking. Because I do it too. I'm looking at AI output and I'm like, oh my gosh, that sounds magical. It makes sense and it sounds so confident. All of those things trigger biases in our minds that make us act without thinking. So I think that is a huge part of professional responsibility, but also of your own personal care with the way you handle this technology. It's meant to engage you; it's meant to pull on those things. If we don't think about that across the whole ecosystem, we really run the risk of overreliance on the AI and overconfidence in the technology. At bottom, the way the AI is right now, these LLMs, it's still just math. It's still just words and pattern recognition. They do not have a stable concept of the world or your patient or these types of things. So the construct of how you use it needs to be pretty well thought out.
SPEAKER_00:Yeah, I think we've joked about this before, but a lot of people see it as either we're in the Stone Ages or we're in those end-of-the-world movies where the robots all took over, and we're just that close. The truth is we're somewhere in the middle right now, right? We're not in the Stone Ages anymore, but the robots still can't take over. They still need us to input information; they need us to help them get to the answer. And I think that's the really important point, because if we rely solely on what the robots understand, sometimes when you read the outcome, the beginning of it makes sense and the end of it makes sense, but when you put the two together, they don't make sense, right? But it's still very valuable, because you can see where the missing piece is that got it to those conflicting conclusions.
SPEAKER_02:Yeah, totally. And there are so many aspects of this. The narratives that come out of big tech that AI can do everything, I think, are really challenging, because we know that large language models are challenging in certain use cases. Just the narrative that they can do everything impacts our decision making. So we need to be careful on that front. And then, gosh, now I forget where I was going with that; this is why I get so distracted. But I think the point is that the technology needs to be used for a specific use case. We need to understand the implementation of the technology within the context of what we're doing. We lawyers need to be able to talk to our technology teams about what that looks like, how models drift over time. You can set up some very well-meaning protocols for LLMs to do things for you, but they can break down in complexity over time. What are you doing to remind it that it needs to stay within a specific context? These kinds of things are all baked into that huge "we're somewhere in the middle and we haven't figured it out." So we need to be really careful about the way we deploy these things. And even technologists have a hard time with it.
SPEAKER_03:Oh, yeah.
SPEAKER_02:Assuming that we can know all of it is just not the way it is. We learn things all the time about the capabilities of AI and new forms of AI. Right now we're in language model land and natural language processing, but there are new things coming on the scene that show a lot of promise and will also carry their own risks. Staying on top of that piece is important, and knowing that there's a difference between different kinds of AI is an important piece of it too.
SPEAKER_00:Right. And I think at the end of the day, what we have learned in the industry is that our clients want to use this, right? The patients and the end users actually prefer the advanced access they're getting, and sometimes the ease with which they're getting information and able to get health outcomes because of this type of implementation. So those two factors alone are enough for people to invest in creating these models and to build out businesses to do this, because you will yield more financial gain from it. But I think both parties would still say they are tremendously terrified of the risks associated with this extra access. And so I came up with, and I emailed you these, a couple of questions I had to ask you before our time is over, because I genuinely need to know the answers. And if you think of anything else you have to tell me, do; I just waited too long to ask you some of these questions. So my first one: what are the biggest mistakes that you are personally seeing in the industry when clients are implementing or leveraging AI, both internally and externally?
SPEAKER_02:Yeah: rushing into it. Rushing into it without doing the really hard thinking about what strategically makes sense. Everything needs to be grounded in what you're trying to build, the context in which you're operating, your market, all of that. And there isn't a technology that can just deliver you a report. It really does require good, hard, multidisciplinary thinking, getting your leaders together and cultivating a shared understanding of where you're trying to go. Rushing in, the AI FOMO, has probably been the biggest pitfall for nearly everyone. But we're seeing that companies that have been more methodical and slower about the way they've done this have had more successes with implementation, which I think is really important.
SPEAKER_00:And I'd assume there's not some perfectly baked timeline. Sometimes it's just about what your rollout looks like. Are you trying to shove it onto everyone at the same time immediately? Are you doing any trial periods, stuff like that?
SPEAKER_02:Yeah, and is your organization ready for it? Can they ingest and metabolize this amount of change to whatever processes? There is nothing from a technology perspective that can, well, technology can help organizations deal with it, but the hard work is unique to each organization. You cannot have a consultant come in, give you their format and their off-the-shelf policies, and go implement them. AI itself is so contextual. There are so many different risks; it can help mitigate risks, it can exacerbate risks. It's like a double-edged sword everywhere. So really doing the hard work of sitting down, taking the time to digest it, taking the time to put together a methodical roadmap, so that you can have an architecture of how you're going to handle this AI and how you're going to govern it going forward, is hard work. Change is hard, and change initiatives fail all the time, and digital transformation efforts fail all the time. So we need to be better at how we handle those things and how we think through them. And that requires a lot of different people coming together to make it happen. You need to involve your HR department; you need to involve your organizational learning function. Getting everybody synchronized and aligned, understanding where you're trying to go at the same time, is a huge challenge, and it takes a lot of work, but it's well worth it.
SPEAKER_00:I see that even worse in our startup clients, because they're operating on a lower influx of cash, they just really want to get started, they're really excited, they know this is a big idea, and they're trying to get to market faster than competitors because this is something new and cutting edge. And it's like, no, you've got to take a step back. If we were just talking about this as lawyers, about our own business and what we're going to do, the risks associated with what we're doing are usually just professional risks that we're absorbing and trying to navigate around. We have rules around it, and we just have to stay within the lines of the rules, so our business can never really conflict as much, right? Here we're dealing with business people who might have other professionals working in their organizations, but the idea is this is supposed to be a profitable venture. And the loss you're sometimes asking people to take in digital health at the beginning is much greater than in a normal venture, because your compliance protocol has to be top-notch; the value of the company is its compliance.
SPEAKER_02:Yeah, trust. When you see all the articles that come out, like, well, if AI can do everything, what's the differentiator? Well, A, AI can't do everything yet, but B, the differentiator is trust. The differentiator is building that legitimacy. And so what I tell startups is: no, you can move fast. What I mean by slow down and do the hard work is not literally slow down. I mean prioritize the hard work, prioritize the planning, prioritize building the roadmap you can execute against, prioritize alignment within your functions, develop a perspective on your critical path. That way you can see where you can start to accelerate things, and you can bring in the right levels of expertise to keep you going fast to market and all of that. In digital health, we can't move fast and break things, but we can move fast and be really considerate about our end users, the patients, and the viability of our companies going forward. Because what else matters in digital health other than that?
SPEAKER_00:Right. And so what are the interests that you're generally seeing your clients weigh when they're making some of these decisions? I know some of the decisions can be hard, but in the process of implementing the AI, what are they generally thinking about?
SPEAKER_02:Again, it really is contextual in terms of the trade-offs. But one of the big things I see, and I think it is born of the AI FOMO, is the trade-offs that are implicit to the models themselves, just the way they operate. In order to keep a large language model on track in the right way, or a small language model for that matter, you have to build architecture around it from a technology perspective. That can be very time consuming; it can be really brittle; it can require a lot of upkeep. We don't really know a lot about the long-tail consequences of AI, but people are starting to feel it. So I think those are some of the bigger trade-offs people need to make, because with large language models and natural language processing, you're trading away accuracy and determinative reasoning versus traditional machine learning and algorithm-based care, and those things need to be able to work together. Those kinds of balances I think are really hard in a healthcare context, because we so often want it to be very determinative.
SPEAKER_00:That makes a lot of sense. We talked about mistakes earlier, but what are some of the common pitfalls that can occur on this journey? We know we shouldn't move too fast at the outset, but as we're going through and seeing that this process takes some time, where are some places it's easy to fall in? Any tricks of the trade you might have?
SPEAKER_02:The way I view governance is that it's largely sense-making and orchestrating. And I think there's a huge pitfall when you're not getting the right intelligence into your system at the right times. Because AI itself evolves, because the regulations evolve, and because there's so much dynamic movement in terms of jurisdiction and whatnot, I think that's a really critical piece that can cause so many downstream pits of despair for us to fall into. So the trick of the trade would be getting that information squared away. And that means your data lifecycle management, what you're feeding it; it means really understanding the depth of your data quality and your collection practices. Really understanding all of those things, I think, is really critical.
SPEAKER_00:That's really awesome. So the last question I have, and as I was rereading it to myself just now I realized we could do a whole podcast on this, and we probably could do a whole podcast on everything we've talked about today, but thinking about who I'm talking to: can you briefly hit on the ethical side of this? Ethics, I think, is your bread and butter. So what's happening on that front right now?
SPEAKER_02:I think AI in and of itself, what it means to have artificial intelligence, what intelligence is, what it means to be human, these are all things that are bringing ethics to the forefront in ways we have not done a great job with. Even discussing ethics has been a challenging thing at a lot of organizations. It tends to feel squishy to people, but it really isn't. There are good models of ethics that you can operationalize against, that you can embed in your policies. And so the ethical piece to me, and also what gives you the easiest path to prioritizing your regulatory compliance program, is starting with the patient and doing what is right for the patient on their journey through your products, on their journey through your system. And that is really what I think we owe as those of us who are in this space. We talked a lot about it: privacy and security has an ethical component. We don't tend to talk about it so much, but it's really important. Patient care, and how that interaction works. I had my experience with my doctor being really wonderful because she was able to be so engaged, but you could see how a doctor who wasn't as magical as mine could lean on AI to not engage with patients. So I think there's an ethical responsibility to make sure we're giving the level of care, the level of concern, the level of conscientiousness in product and service design in digital health. AI is a really great opportunity to talk about that more and also to situate operations around those ethical obligations. And so if there's a message I can give to anyone, it's this: no, ethics is not squishy. It's not some amorphous thing. We can get in there and we can operationalize against it, if we're willing to have the hard conversations and discuss what we should do, not just what we can do.
SPEAKER_00:Right. And I think part of those hard conversations, especially in the digital health world, is understanding the biases related to the AI, right? Don't forget who has taught the AI, what subset of information the AI is always working on, and what information it is also learning along the way. I know, as a Black woman, for example, a lot of the data related to any kind of ailment that could be happening in my life is significantly skewed. You could put in my age, you could put in my BMI, you could put all of this in, and then if you also add that I'm African American, some of what that information will actually tell us could be fundamentally different than if you don't add that critical piece into making my care plan. So it's not just about remembering that those biases exist in the AI; it's also, as we're inputting and teaching it along the way, starting to encourage the AI to recognize those same biases and understand that information.
SPEAKER_02:Yes, and to really dig in and understand where those biases occur for your particular context. Say you're developing a dermatology-related product; it might be more obvious to care providers that they need to consider that Black people will be using this product too, and so consider the bias there. But how do you train it? With what data? What does the clinical diversity look like? That's one of the things the FDA has been so strong on: diverse clinical trial recruitment and those kinds of things that can help and inform all of this. But then also, because we have such a history in healthcare of mistrust in our system, for really good reasons, and we're adding this technology that is based on algorithms rather than that kind of care, I think we have an extra responsibility to help people feel comfortable and be able to trust the system as well, to encourage them to partake in a different kind of care, so that it doesn't feel extractive and biased and ill-informed for parts of the community that are not as well represented in our data.
SPEAKER_00:Right. And it's thinking about the poverty level of your patient set. It's thinking about how my 94-year-old grandfather would be like, oh, well, the computer just told him to do that, right? We hit a little bit on that in the article: the information we have shows that a large share of the population is very open to the idea of having AI, but there are going to be pockets of the population that are already scared to go to the doctor, already scared to put on the Apple Watch. So when you start with someone who might be moving a little bit slower, you're creating that trust, which I think has been kind of the theme of your advice today, and I completely love it. So I've completely word vomited, but what have I missed? What is left for us to talk about before we let these people go?
SPEAKER_02:Oh my gosh. I just think that staying on top of this with the patient in mind is the last thing I want to leave us with, whether it is the care around the patient, whether it is the care around the data with which you feed your systems, whether it is structural care. When things are changing this much, the duty is so heightened, because we can't rely on best practices or formulaic responses to these things, lest we go down rabbit holes or exacerbate problems that have happened before. So ethics first, as always.
SPEAKER_00:Well, thanks so much for talking with me. I'm always excited when we get to speak, and thank you for reading our article. So if anyone's interested, please check out the AHLA Connections archive, and I hope to talk to you again soon on a future podcast.
SPEAKER_02:Thanks, Shalyn.
SPEAKER_00:Bye, Selena.
SPEAKER_01:Bye all. Visit americanhealthlaw.org and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA comprehensive members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.