AHLA's Speaking of Health Law

The “Wild West” of AI and Health Care Compliance

American Health Law Association

Artificial intelligence (AI) adoption is skyrocketing across the health care industry and innovation is booming, but regulation is struggling to keep pace. Dennis Breen, Managing Partner, Source 1 Healthcare Solutions, speaks with Alicia Shickle, President and CEO, ProCode Compliance Solutions, David Bolden, Senior Health Care Compliance Consultant, and Lyman Sornberger, CEO, Lyman Healthcare Solutions, about how the health care industry can remain compliant in this rapidly evolving environment. They discuss the current challenges and opportunities that AI poses, using AI to navigate compliance, practical tips for the health care industry, and what the future holds. Sponsored by Source 1 Healthcare Solutions.

AHLA's Health Law Daily Podcast Is Here!

AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this new podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.

Speaker 2:

Support for AHLA comes from Source 1 Healthcare Solutions, which combines compliance expertise, advanced technology, and compassionate service to help healthcare providers minimize risks, optimize revenue integrity, and ensure outstanding outcomes. Trusted by organizations nationwide and internationally, they deliver immediate remediation and sustainable improvement, empowering their clients for lasting success. For more information, visit source1hc.com.

Speaker 3:

Hi, this is Dennis Breen, and we have our group here. We're talking about artificial intelligence, AI, and we've compared it to the wild, wild West. What's really going to happen with AI? Who knows? It's an unknown frontier, and we're going to pull it apart and discuss it from a compliance perspective. I have three incredibly experienced and wonderful guests who have a lot of insight into this topic. First we have Alicia Shickle from ProCode Compliance Solutions. She's been a longtime colleague and friend of mine, and she's been involved in many aspects of compliance, from expert witness testimony to investigating compliance issues to setting up compliance programs. As a coder and a practice manager, she has a lot of real-world experience. We also have David Bolden, who comes to Source 1 as a contractor. He, too, has many years of exposure to compliance and coding in the healthcare world, and introducing AI into this world has sparked a lot of thoughts and ideas from his many years of experience. And our third guest is Lyman Sornberger from Lyman Healthcare Solutions. He's been a foundation board member of many healthcare organizations, he's been involved with top-level healthcare organizations throughout the country, and he's often been a consultant on how new types of programs and apps coming into play fit into the revenue cycle world. So thank you, all three of you, for joining me this morning. I'm excited to hear your thoughts and comments on artificial intelligence in healthcare, and truly how it is like the wild, wild West. And that brings me to our first question that I wanted us to ponder.
With it being the wild, wild West, Alicia, what's your perception of where AI and the adoption of AI tools by healthcare professionals stand, and where do you think we need to address the risks from a compliance standpoint?

Speaker 4:

Yeah, I think the Wild West analogy really highlights both the urgency and the lack of regulation, which is absolutely critical. But I think it also risks oversimplifying the complexities of AI, and that in turn is making some people hesitant to innovate while others are barreling forward recklessly. So I think the real challenge is going to be striking a balance between caution and ambition here. David, what do you think?

Speaker 5:

I actually think the analogy creates an awareness that this is definitely a free-for-all kind of situation with AI, but there is also this unknown, right? The genesis point before that new frontier that is waiting to be explored. So I think the point to take away is that this AI venture should be navigated responsibly, connecting it to what we've done in the past with EHRs, medical records, documentation standards, and coding standards, and moving forward into this new world of AI and how it can help enhance quality of care for members. Lyman, what do you think?

Speaker 6:

Well, it's definitely a wake-up call for the industry, and rightfully so, because it is a dramatic change, and anything of this magnitude is difficult for the industry to perceive and adopt. But I think we as an industry need to look at it a little differently and not perceive it as necessarily a threat. Without question, it's more complex than we originally thought, and I think it will continue to evolve into something the industry will be sensitive to. But I think we're on the right track, and part of that is a byproduct of education sessions like these with our colleagues and our peers.

Speaker 4:

Absolutely.

Speaker 3:

That's actually a great point. I know with AI, oftentimes even when we use it, it spits out an answer that is unique and put together very well, but what it misses and lacks is that human component, that gut feeling. So, Alicia, I see a lot of physicians using AI in clinical decisions, and that, to me, brings up some ethical issues, among other items. What happens when the AI tool disagrees, or the provider has a hunch that maybe there's something missing and really has an issue with what the AI is giving them from a clinical judgment perspective? I had a client that used AI; it was built on the premise of AI in healthcare, and it would generate a care plan based on what the provider was putting in. Several times that provider did not agree with the AI treatment plan, and it caused a lot of problems, because they had a hard time going against the AI and had to document why they were disagreeing with it. What is your take on ethics and the ability to override what the AI is producing, from a provider perspective?

Speaker 4:

Legally, the clinicians are going to be responsible for the decisions. But I think we need some clear framework outlining AI's role, right? Again, as we've been saying, it's a tool. If the AI conflicts with the provider's judgment, I think we have to make sure we're building in accountability protocols that address both perspectives in a transparent way. We've talked about the importance of proactive education and implementation, having those guardrails, but when it comes down to the bottom line, the docs are going to have to take responsibility to make sure they're reviewing the output. Like we talked about earlier, garbage in, garbage out, right? The docs are going to have to take it seriously and review the output, and if they don't agree with it, they have to have the ability to make those changes based on clinical decision making. I don't think we're ever going to get away from that, no matter how good the AI gets. And really, to Lyman's point, it's a very positive thing, or it should be and can be. But overall, the docs are going to have to take responsibility to change the information if they don't agree with it. And then, from a compliance perspective, we've also got to be diving in, auditing, monitoring, and holding folks accountable who are not catching these things that are getting out there in the industry without the human element. David?

Speaker 5:

Yeah, I agree. And to piggyback off what you were saying, I think it's very important for clinicians to understand that they need to explain those things within the documentation, and also during reviews, right? Coming from an audit background, I've been through plenty of audits and reviews where the clinicians, during an exit interview, need to explain what they were doing in that documentation, because it's not always clear for the person looking at it. And depending on the expertise level of the auditor, they might not be able to infer what the physician was doing during that visit. So it's very important for clinicians to be able to explain the AI, because AI could be just another version of a cloned note. We see that all the time, where cloning happens over and over again. AI is, in a sense, leading the interpretation of what's going on within the visit. Unlike software like Dragon, which is dictation software that captures things word for word, AI could be in the room with you, listening and reporting on what it thinks is happening during that visit. In that case, how is a doctor going to explain what that visit was actually for? Lyman?

Speaker 6:

Like everything, it's all about communication and education, at all levels of the organization and all positions that have any association with that patient's delivery of care. I can't stress enough that the clinicians and the coders and the staff need to be empowered to challenge the AI, to feel comfortable questioning it, and to learn from it. Probably the key message, operationally, is that it needs to be embraced as a resource, not presented as an end-all solution. If they can look at it as a tool and not a mandate, I think the industry will find it more comfortable to adopt. That's my personal opinion.

Speaker 3:

We all know that AI is based on historical information, on exactly what's out there. So one consideration we need to work through is the bias and fairness of the tool itself and the information it's based on. In guidance released by OCR, there was information published that basically said that if you're using AI in clinical judgment or in treatment of the patient, it's up to the providers to assess whether there is any bias in the tool they're using. They pushed it back to the provider to determine if the AI they're using has any potential for bias or unfairness. So that brings the equality question into play here, but it's also pushing the burden onto each individual provider, as opposed to the government going after the developer or holding the developer accountable. Alicia, what are your comments or thoughts on where bias comes into AI, especially when it's used for clinical decision making?

Speaker 4:

Yeah, look, I think prevention is key here, right? I think we're going to need to be really diligent, and ethical audits going forward are going to be key. We need to review the AI systems for bias before they're rolled out, and then continuously monitor and audit them going forward. I also think regulators are going to need to provide some guidelines so that we have guardrails specifically addressing biases in healthcare AI. So yeah, it's not a set-it-and-forget-it kind of thing. I think we need to proactively look at that before the tools are deployed and then continuously monitor it after as well. David?

Speaker 5:

Yeah, I think you're right on the money. And I also think that the human factor and the training you were talking about are non-negotiable with AI. In the world we're developing and creating right now, to your point, just putting these things out and letting them be is not the way to go, especially with compliance. How is AI going to know when there are updated medical policies for medical necessity and coding? How is it going to know how to decipher illegible handwriting from a physician to make sure that whatever service was provided was actually medically appropriate for that particular visit? How is it going to decipher those things? And really, that comes down to the bias, right? So how are we, as users and as people who are reviewing AI, going to understand and explain that in a way where we're not putting patients in harm's way? Because that's number one: patient care is where this is all leading. Lyman, what are your thoughts?

Speaker 6:

Yeah, this conversation actually reminds me of a sidebar I had with a physician a couple of weeks ago. We were talking about the use of AI, and ironically, he was welcoming the tool. Part of me in that discussion was trying to feel him out as to why he was so receptive to it. When I asked him what he thought of the error rate that might exist, between the clinical documentation or maybe even the coding area, his first response to me was, do you realize the error rate that exists today with the human factor? And what he stressed to me boils down to what Alicia said earlier, and that is making sure that you have a quality program in place, particularly one that is strongly enforced if you're an early adopter, while the tool matures. After I talked to him, I stepped back and thought, that's an interesting concept. There is a certain percentage of human error that exists today.

Speaker 3:

Good point. That's a great point, Alicia and Lyman, because we know that the AI tool is based on the historical data that's put into it. And we all know that in the early stages of healthcare in this country, people who were seeking treatment often wouldn't get treatment right away, for whatever reason. They didn't have the time. If they were a farmer out in the field, or someone who lived in the inner city, they would wait until the disease got to a point where they couldn't handle it anymore. So a lot of that historical treatment or prevention data isn't there. There is a certain amount of bias inherent in where the data comes from, because history oftentimes repeats itself. That's where I was coming from on bias: if we're using information that already has some touch of bias or inequity in it, we're going to have some problems going forward. The data now is based on old information, and maybe in another hundred years, the information we're providing and feeding into it will make that AI stronger. So I think there's more to come in that way. I also think that, from a regulation standpoint, keeping it unbiased, or meeting the requirement by law that everyone is treated equally, is very difficult unless you truly know the source of the information feeding that AI tool. So I think it's really important for all of us to understand where it's coming from, not just how to use it or where it's going to take us. And that brings us to our next topic. Alicia, we were talking about using AI to navigate through compliance. Where do you think AI can cause risk, either in violating regulations or when we're using AI to try to enforce regulations? What's your take on navigating compliance with AI?

Speaker 4:

I think governance is going to need to set the standards for the tools we're going to be using. AI is going to have to operate within clearly defined parameters, and our organizations are going to have to verify compliance on an ongoing basis, right? We're talking about the wild West right now, but you can't set it and forget it. All of us in the compliance industry know that we want to trust but verify, right? So while we're super excited about the capability and how AI can help us all be more productive and get more done in a shorter amount of time, I think we've really got to make sure we have those guardrails and that we're doing our ongoing risk assessments. We talk about the likelihood and probability of an event occurring, and then the severity and impact, right? So I think it's really critical that we do the risk assessments and get this on our audit and monitoring plan for continuous oversight going forward. That's

Speaker 3:

A great point. I did want to say something there. I'm sorry, Alicia, I didn't mean to interrupt you.

Speaker 4:

No, no, no. We've

Speaker 3:

Already started at a lot of our clients, because we do some outsourced compliance management and assist with setting up compliance programs and testing. One of the areas we're really trying to push, even in the smallest of organizations, maybe not a small physician practice but a small facility type, is to put together a group that we're calling an AI standardization committee or AI committee, or an innovation committee, or other names for it. It really pulls in pieces of all areas, not just clinical, not just compliance, but billing, revenue, coding, quality, et cetera. David, what's your take on where AI is going to be when you're navigating through an organization, and also what's to come as far as navigation? Yeah,

Speaker 5:

No, you bring up a great point, Dennis, and I think it ties back to that human factor, right? You're creating these environments and these controls where people are involved in the navigation of AI and how it's going to affect the healthcare system. Because right now, AI can perpetuate some of the fraud, waste, and abuse that we see in the system. It could misclassify services, so there's fraudulent billing. It could encourage medically unnecessary services if the clinicians aren't appropriately looking at the documentation and making sure it's suggesting the right recommendation, the right treatment. That's waste right there, right? So we're seeing all these things that AI could potentially perpetuate if we don't have those human connections involved with the AI. If we're just letting it roam free, like the wild, wild West, all kinds of things can happen, right?

Speaker 3:

Yeah, I...

Speaker 6:

I think ultimately, again, it starts, as we've all mentioned, with education and training and clear communication, meaning everyone in the healthcare arena, at every level, needs to understand its value, but at the same time respect its compliance risk. At the end of the day, I'll go back to my Cleveland Clinic years and our motto, which was patients first. The messaging clearly needs to stress that operationally we want to embrace this to enhance patient care. That's what we're all ultimately striving to do in the healthcare space, and I think that should be a clear message as we try to educate and communicate the change.

Speaker 4:

You know, I just want to echo that for a second. It's interesting, right? We are seeing that AI is impacting all areas of compliance: legal, financial, and operational. So I think we're going to see it across all lines. And again, when we ask who is responsible for compliance across the organization, the answer is everybody. I think it's going to be important, Lyman, like you're saying, for organizations to make sure that everyone is getting appropriate training and education on how AI is going to impact their role. What does it mean for them? I think it's going to be important to see role playing going on. How does it impact me in my day-to-day job, right? Everybody needs to understand it's not just a doctor thing or a biller thing. It impacts the front desk as well, right? So everybody's going to have to understand their role.

Speaker 3:

I was actually thinking, since we've been talking about the wild, wild West: when those frontiersmen came to this country, they didn't have GPS. They didn't have AAA to print TripTiks with the route highlighted through the map, which we were all used to before GPS. Now my kids know exactly where I am at all times. I pull out of my garage, and if I didn't tell my younger daughter that I was leaving the house, I'm getting all kinds of messages: why are you going out, oh wait, I see that you're at Target, can you pick me up A, B, and C? Back then, in the wild West, they would hop on their horse and just head into the wilderness. I know, with my directional impairment, I would probably be eaten alive by whatever wild animal I encountered. We keep saying that AI is an unknown, but it's always going to be an unknown. There's always going to be development, there's always going to be training that's necessary. With the busy lives we have, when AI tools are out there to make things better, we grab them and use them, but we may not understand what we're using or how to use it properly. So if I were to ask each of you: what tips would you give someone who's just starting to use AI in their role in healthcare, whether in clinical judgment, compliance, or billing? What tips would you give individuals who are just realizing that AI is out there and it's not going away? Everyone has the feeling that it's going to replace them, but it's not. What information can we give people so they can feel better about their jobs, and instead of trying to eliminate AI, use it to enhance their positions? What kind of guidance can we give them?

Speaker 4:

Yeah, from my perspective, it's definitely coming. Anyone who thinks they won't have to embrace it is going to get left behind, right? We know the only constant in healthcare right now is change. All of us in the industry also recognize that we're regulated second only to nuclear power plants. So I think prevention and proactive education are going to be key. We should see it as a positive thing and embrace it, but we should also be mindful that it's going to come with some significant compliance risks. And it's just getting back to basics: put those controls in place, then go back and do your ongoing risk assessments. We need to continuously audit and monitor to make sure the controls are effective and working. People really just need to understand that it's not going to replace us, right? But it is absolutely a fantastic tool that can make us all more efficient and effective, and I think it can ultimately enhance patient care in the big picture. Just be proactive, put the education and training in place, and continue to monitor it for risks and effectiveness. Overall, I think it's going to be a good thing as long as we have parameters, accountability, and good controls around it.

Speaker 3:

I want to echo what Alicia was saying there about training. We all went through our computer training, and the first thing we learned is garbage in, garbage out. I really think that's something we all need to consider when we're using any tools based on AI formatting or methodology. What's your take? If someone is just starting to use AI, what would you say to them?

Speaker 5:

You know, I like that analogy of garbage in, garbage out, because with AI, you're looking at it from the standpoint that it's supposed to be enhancing patient care. I think Lyman brought that up before, the quality of care that we're giving to patients. We want to have that same quality when we're developing AI algorithms and the things AI is going to be doing for us in the healthcare system. And I think, to everyone's point here, when you skip that training and don't have that education, that's when the problems start. That's when the problems arise, when that initial education and training isn't done. And to Alicia's point before, everybody is responsible for compliance. Everyone is responsible for making sure that AI is working appropriately, that things aren't going off the rails, that we're not suggesting medical services that aren't necessary and putting patients in harm's way, that we're not overutilizing taxpayer dollars just to keep businesses afloat, things like that. These are things that, in the compliance world, are scary for us, because we're seeing this all the time, and it's something we need to take account of. But it is a great resource. It is a great tool. So how can we balance those things out, Lyman?

Speaker 6:

Yeah, I want to step outside of the bricks and mortar a little bit and stress that I also think it's important that we engage the patient in the flow of this change. What I mean by that is, the worst thing you can do is erode the trust between the patient and the physician. Patients are going to be curious as to what the introduction of artificial intelligence might mean, whether it's in their documentation or their medical record. I personally think the messaging should be around that. Doctors and staff should stress that it's not a solution, as Alicia mentioned, but rather an extra set of eyes. It's not a machine doing the work; it's giving you a separate perspective, a separate set of eyes and ears. I think that messaging, in layman's terms, will help the patient better understand what exactly the introduction of AI is doing with their care.

Speaker 4:

That's a good point. Hey, Lyman, I think you make a great point too. Not only do we need to educate providers and staff and our legal teams, but patients, right? We're all educated shoppers right now. I want to know what's going into my medical record. And Dennis, we talked a little earlier about some of these practices we work with. We see they're having issues, and then we do a root cause analysis and find out that somebody set an override in the practice management system or the EMR, right? So when the provider picks a certain procedure code, it automatically dumps in this diagnosis code. From a patient perspective, I don't want to be getting assigned diagnoses for conditions I don't have, right? So again, it broadens the scope of education. I think patients need to be educated as well. So that's

Speaker 3:

A great point. When using any of these AI tools, it's really critical to understand and know how you're using them. I'm sure doctors now wish the internet had never been invented, because now we're all diagnosing ourselves. Why do we even need a doctor if we have an AI that's going to tell me what drug I should take, what exercises I should do, or what's wrong with me? I should probably look up how to do an appendectomy and just remove my own appendix to save myself a couple thousand dollars <laugh>. AI is only as good as your knowledge of how to use it. If it's used in the wrong way, it could go down a very dangerous road. So knowing exactly what it is and how it's supposed to be used, as laid out by the experts who either developed it or put the time into developing the training, is something each of us needs to pay attention to. Because otherwise, we're all going to be physicians; there will be no need for them. We'll just use AI to prescribe some medication or tests and do it all ourselves. It's a very dangerous assumption, but so many people believe that with AI, the output is absolutely correct, that there's no bias or error rate that could come out of it. And it's true, the information in there is probably very accurate, but it's really what information you put in that determines what you get out. That's what I meant by garbage in, garbage out. You really do need to know how to use it before even thinking about relying on the outcomes. So I guess that brings us to: where is the future with AI?
I mean, back in the wild, wild west, when we had the 13 colonies and people moved west, there was a vast <laugh> country out there, and that opened up to where we are now, all 50 states. It's an incredible world that we have, and it just took a few of us to go west and start expanding into those areas. And that's what's happening now. AI is everywhere, even at a basic level. My grandmother, who's probably 97 or 98, is asking me what ChatGPT is, and she's spitting out information to me. She has no idea what's going on or what it means, but by gosh, she's reading it off to me. It's incredible; it's like she has an MD degree now. But where do you see the future of AI? Where are we going? What's going to happen with our jobs from a compliance perspective, navigating AI and basically keeping a pulse on it? What's your take on the future? Alicia, I don't even know where to begin with that.

Speaker 4:

Yeah, look, I think, like everything, it's going to be a fluid situation, right? The regulations are likely going to adapt out of necessity. I don't think it's going to be fast, though I think we're going to have some things happening pretty quickly coming out of the gate. Pressure from stakeholders, providers, payers, and patients is going to push regulatory bodies to evolve, but I think it's going to be incremental. We're going to see incremental changes; I think that's much more realistic than seeing sweeping reforms. You know, look at the evolution of HIPAA. We're seeing new changes happening right now in the cyber world, right? Because everybody gets really big on compliance and we're all up on it, and then we get a little lackadaisical, and then something happens, and then we're like, oh yeah, we have to get back to being diligent about those things. Interestingly, I once saw a keynote speaker from NASA who said the same thing happens in every industry. They're sending rockets out into space, things are going great, so they get a little lackadaisical in areas until an event occurs, right? So for me, from a compliance perspective, it's making sure that we're doing ongoing risk assessments, that we don't take our finger off the pulse. I'm all about the auditing and monitoring. We can implement controls, but we have to go back to check them and test them to make sure they're actually effective. I think that is the most tangible, real-world, practical thing we're going to have to do as the evolution is happening.
I think we're just gonna have to keep our eye on the ball. Dave?

Speaker 5:

Oh, yeah. To your point, I do think that AI isn't going anywhere, right? It's been around for a while already. We've been using it in our day-to-day activities, physicians have been using it in their offices with dictation software, and now auditors and coders are using Codify and Audit Manager to make sure that their calculations are appropriate for what the physician is documenting in the medical record. And to everyone's point here on the call, it's about making sure that human factor is there, and making sure the education is there to understand: okay, yes, I'm making these clicks, and boom, boom, boom, I'm at a level four E/M. Is it really a level four based on what's actually documented? Is it really a level four based on what the doctor assessed the patient with and what treatment he recommended? It's that piece of it. I think we're all saying that it's not just rolling out AI and saying, oh, it's the end-all, we don't have to do anything else. The education, the risk assessments, the ongoing training and monitoring, those things are crucial to making sure we get meaningful change with AI, versus AI just being another tool we can use to make more money, right? So I think those are the big things to look at.

Speaker 4:

I also think a lot of us in the compliance world say, hey, we've got to do our risk assessments annually, but we should also be mindful that it's really "annually, or more frequently as necessary," right? I don't think this is going to be an area where you can just dump it on the risk list for your annual audit work plan. Any organization that's going to adopt this into their workflows is going to have to be really diligent in staying on top of it, monitoring on a daily basis, and then doing their auditing regularly.

Speaker 6:

You know, in the spirit of the theme of this forum, what comes to mind when I try to look at AI and the future is the quote, "this isn't my first rodeo" <laugh>. By that I mean we can look back, and have flashbacks, probably painfully so, to when we had to adopt the EMR, and most recently, during the COVID period, when we had to embrace telemedicine and telehealth without knowing what it meant to the industry long term. Everyone was trying to forecast: what does this mean for healthcare and for patient care overall? But all that said, looking at the past, we got there. We made it happen. I think if it's done appropriately and the messaging is strong, if AI is used appropriately, it's a valuable tool. But along with anything that robust and that transformative, there's some sensitivity, and we need to respect the risk. And part of that is compliance. That's

Speaker 3:

A great point. When you look at the wild, wild west, there were some pioneers who said, no way, I'm not leaving the confines of my own little world here. Others saw it as an opportunity, something grand and majestic, the world at their fingertips, and they pushed ahead. And that's where we are now with innovation and opportunity. If we don't take advantage of it, we will be stagnant. On the other hand, alongside the innovators using AI to push healthcare into new realms for the good, for the benefit of patient care, we also find those who don't quite have that integrity or ethics behind it, and they use it to line their own pockets. Hmm. And that's where our job in compliance comes in: to identify those bad actors and keep monitoring them. If I've heard Alicia say it once, I've heard her say it a million times: trust but verify. It's absolutely true. We don't have to be the police force for everything that's coming in, mandating proof right here. We need to work with them collaboratively, and we trust them, but absolutely verify. We need to make sure the controls are in place, that the auditing and monitoring are in place, so that when we do move forward and see these new opportunities, we seize them, we see where they're going to help, but we're also mindful of where things can go wrong. So where does that future lie for AI? It's not going away. Like you said, the only thing that is constant is change, and we're forging ahead. What are your last comments on the future of AI?

Speaker 4:

Yeah, I mean, look, none of us has a magic eight ball or a crystal ball that's going to tell us. But I think Lyman really hit it on the head, which is why I giggled when he said, this isn't my first rodeo. We have to look at things historically. I love, Lyman, that you said we were all in the same boat when we started switching over to EMRs. There's going to be good and there's going to be bad in everything. We always used to say, the house is great and the yard is terrible, right? That's when the EMRs started coming out as a total solution, the EMR plus the practice management system. So I think it's about using our many, many years of experience and staying ahead of the curve, which is what we all do, because that's what you've got to do to be the best in the field. We learn from history, we apply all of the things that have made us successful, and we carry that forward with us. Like you said, it's not our first rodeo. It is the wild west, but we've all wrestled a horse in the past. So yeah, I think that was great advice.

Speaker 5:

I think it's going to be a challenge all around. There was a word you used in your last explanation: collaboration. AI is a collaborative tool. That's what it's supposed to be. We're supposed to use it to collaborate with physicians, with coders, with auditors, with the regulatory bodies. Everybody is supposed to be collaborating in a way that ensures we are providing quality care, and that is what it comes down to: quality care. AI is a tool like any other that we need to use to make sure we're providing the care that patients expect and that we're supposed to be giving them. And as compliance professionals, as auditors, as coders, that's our job: to look out for the patient. The patient has a responsibility as well, but it's our job to look out for the patient and make sure things aren't the wild west <laugh>

Speaker 4:

Lyman, give us your words of wisdom, dude <laugh>.

Speaker 6:

I actually think my ending thought would be to stress that this is not a tool to replace individuals. There's a human element and aspect to healthcare that cannot go away, so no one should feel threatened by this; look at it from a positive standpoint. I know that even with telemedicine, I heard people say, am I never going to see my doctor again, is it all going to be FaceTime? But there is an aspect to face time, in real time, and we all experienced this when we were going through COVID, shut in and sitting on Zoom every day. There's a human element to meetings, there's a human element to face time, and that isn't going to go away. That intervention will continue, in spite of the fact that we have a new and sexy tool out there.

Speaker 3:

Yep, that's a great point, because I do think that's what all of these tools, and even our jobs and the functions we're performing, exist to protect. I might be the only one, but I view the relationship between the provider, the doctor, and their patient as a sacred one. You're putting your life in the hands of this professional, and we're relying on them to keep us healthy, to keep us alive, and in some cases keep us happy, so that we can enjoy our lives. Back when pen and paper were involved, the medical record was there to record the information from the patient to the doctor and the doctor to the patient, and it's still there to protect that sacred bond between doctor and patient. Everything from pen and paper to the medical record to the EMR and now AI is there to support that sacred relationship, so that the physician can concentrate on treating the patient, and the patient can take that information, get healthy, be happy, and support others in this world. I think AI just fits in there to support that sacred relationship, and if we keep that as our motto and our mantra in all that we do, I think it will be used for good and it will be used correctly. In addition to that, part of what I train others in, in healthcare consulting and fraud and abuse, is: follow the money. If there's a cost, then the ones incurring it should be the ones getting the money to pay and cover that cost. Medicare wants to pay for the care of its patients, and its patients only. And that's where we sometimes lose the ability to trust but verify: who's spending the money and who's getting the money becomes very clouded in these times. AI doesn't always make that clear. Sometimes it does bury who's paying for what and who's getting the money and the reward.
So it takes a while to navigate through that, just like the EMR. Alicia, you mentioned earlier a diagnosis getting attached automatically. The EMR was set up to make the physician's life easier so they can treat their patients well. In one case I was involved in with one of the attorneys I work with, it was a $1.4 million payback, all because of one diagnosis attached behind the scenes to a bunch of lab tests. Tens of thousands of lab tests were billed with the wrong diagnosis. They were paid, and they kept getting paid, and no one knew where it was coming from. Then all of a sudden, one doctor said, my patient doesn't have that diagnosis, doesn't have diabetes, why is it on here? And when we uncovered it all, it came down to a very smart computer system, but the problem was so buried within that system that it took a very long time just to get the data out and analyze it to reach that $1.4 million payback. So the more complex the tools get, the harder it's going to be for us to make sure we're following the money appropriately. Alicia, I know you were just going to say something. I'm sorry, I

Speaker 4:

No, yeah, I was. I think just one final thought from my perspective: we talk about fraud, waste, and abuse in the industry, and with everything good, for those of us who've been around a really long time, we see the bad actors come out of the woodwork. But most providers are in healthcare for the right reasons. They hear things like fraud, waste, and abuse, and instantly a little wall goes up: well, that doesn't apply to me, because I'm not a fraudster, I don't have bad intentions. But in my world of training providers on compliance, it's not just fraud, waste, and abuse; it's about errors. We still have to be mindful and do our due diligence, because even if you're not committing fraud or intentionally doing things wrong, and you're not trying to be wasteful or abusive, we're still subject to errors, given all the complexities of our day-to-day world. I think everyone has to come at this with an open mind: it's not just about intentional fraud and abuse, it's also about the errors that can occur. So that's my final thought.

Speaker 3:

Thanks, AHLA, for giving us this opportunity. And let us know how you feel, guys and girls. We're excited to see where the frontier's going.

Speaker 2:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.