The Reminger Report: Emerging Technologies

Can AI Help Fix Healthcare’s Biggest Challenges?

Reminger Co., LPA Season 3 Episode 71



In this episode of The Reminger Report Podcast on Emerging Technologies, host Zach Pyers is joined by Anna Ray Ziegler, a law clerk in Reminger’s Columbus office, to explore how artificial intelligence is transforming the healthcare industry. 

From easing administrative burdens and addressing physician shortages to raising complex legal and ethical questions around patient data, this conversation dives into the promises and pitfalls of AI in modern medicine.

ZBP      Zachary B. Pyers, Esq.
AR        Anna Ray

 | ZBP | Welcome to the next episode of the Reminger Report Podcast on Emerging Technologies. I am joined today by one of our law clerks here in the Columbus office. Anna Ray, thank you for taking the time to join us today and for being willing to speak to us on the topic of the use of artificial intelligence in healthcare. If you would, just introduce yourself a little bit to our listeners.
| AR | Yeah, thank you so much for having me. My name is Anna Ray Ziegler, and I’m from Columbus. I grew up here, came back to Columbus for law school at Capital University, and I’m about to start my third year, so it’s a very exciting time.
| ZBP | Right and congratulations on that.
| AR | Thank you.
| ZBP | Now, we’re talking about artificial intelligence, and it’s a topic that everybody loves to talk about because it’s impacting all sorts of different areas and practices and the way we live our lives. I was at lunch yesterday with a partner at another firm – actually, we used to work together – and he was telling me about some new platform he used at a hearing he just had out in Colorado. I had never heard of the platform, and as he went through all of it, it seemed like we can’t even keep up with all the uses of artificial intelligence in our own industry, let alone the other industries it’s impacting outside of the legal field. One of the industries where it is obviously having an impact is the healthcare community. So, if you could, give me an overview – what is driving the usage of AI, or artificial intelligence, in the healthcare industry?
| AR | Yeah, it’s a huge thing in healthcare. I think the flashiest part of it is the idea that one day we could just use an AI doctor to diagnose something like a human would, but it’s really driven by administrative overload, burnout, and a real shortage of people in the healthcare industry – a shortage of doctors, a shortage of nurses. One aspect is that the administrative side of being a medical provider takes up so much of their time, and there’s a high demand for accuracy and efficiency. Everybody expects everything to go very quickly all the time these days – same thing in the law, same thing with medicine. People want answers now; they want it to go smoothly and quickly without spending as much time on it. And there’s projected to be a shortage of about 124,000 physicians by 2034. The other piece of it is that there’s a lot of data now. This is feasible in a way it hasn’t been before, because this type of data is recorded on everything from your smart watch to the tests you actually get at the doctor’s office that go into the system. So it’s just a whole new world, and it lets AI work at a scale it hasn’t been able to before.
| ZBP | You know, I couldn’t help thinking, as you were talking about efficiency and speeding up the processes for healthcare providers – I’m old enough, not that I’m super old, but I’m old enough to remember when doctors would not necessarily take notes in the exam room with you. And when I say take notes, I mean they weren’t entering things into the chart directly. They might take scratch notes down on a pad, but then they would go back into their office, pick up a Dictaphone, and start dictating the notes, right? And those would get transcribed. Now, when you go to the doctor’s office – and I’m sure most of us have seen this – most of my doctors are walking in with either an iPad or a laptop, with dropdown menus for the selection of the notes, and they are taking their notes contemporaneously with the visit. The process has been sped up exponentially because of it. Instead of completing their notes at night before they leave, most of these physicians are now doing their notes contemporaneously, so when they walk out of the exam room, at least with me, the notes are already in the system. Which is just a huge change. And as we talked about, the use of artificial intelligence might speed that process up even more.
| AR | Yeah, I find this to be generally one of the less scary ways to think of AI entering your experience with the medical field, because doctors spend up to two-thirds of every day doing documentation and administrative work. It takes up a ton of time, and it’s not what they really want to be doing – they want to be with patients, engaging. And I don’t know if you’ve felt this way, but when they’re looking at their laptop, typing the entire appointment, it doesn’t exactly feel super connected. You don’t feel like you know your doctor; you feel like your doctor, as you said, is just picking something from a list that feels right rather than really paying attention. One other common approach is medical scribes, which is a great opportunity for people trying to go into medicine, but it’s also expensive to have. Now Amazon Web Services has come out with one of these tools, and there’s a lot of investment in AI medical scribes and a huge amount of interest from physicians. The idea is that you can record the entire appointment – the conversation – and the tool will know your specific chart and how you use it, and auto-populate it based on the actual words of the appointment rather than what you can recall from it or some dropdown menu of notes. So it brings in more one-to-one interaction, more personalization, and ideally it just streamlines things and takes a lot off people’s plates so they can see more patients, especially as the workload picks up – because demand is not going down, and look at the physician shortages. I think it can have a huge impact in the community.
| ZBP | And I know you talked about Amazon Web Services. Has this – and I believe there’s another one called Teladoc, if not more. Does Teladoc work similarly to the Amazon Web Services platform?
| AR | So, I think Teladoc is working with OpenAI; they’re integrating a new model. They auto-draft everything, and they can summarize telehealth visits too, as can AWS. It’s pretty similar – they are going for the same thing. But Amazon focuses on making EHR-ready notes.
| ZBP | Yeah.
| AR | Like they want it to be usable in your office directly, rather than just delivering the care to the patient, which is more Teladoc’s kind of –
| ZBP | Got it. Okay. That makes sense to me.
| AR | Those ended up getting huge investments and a lot of money is going there.
| ZBP | Now, when we talk about AI – no matter what the scope is or who I’m talking about it with – the two areas that come up are the legal and ethical implications of its use. So, what are some of the biggest legal grey areas that you see when it comes to the use of AI and patient data processing?
| AR | Yeah, right off the bat, working in the law, the first question is: who’s going to be responsible if the AI wrote something down wrong, misheard, or didn’t translate correctly? There’s talk of AI helping with language barriers as well, so that’s an obvious first problem – who would be responsible for that? I’m sure there will be some sort of agreement to take care of it, but right now, it’s kind of the wild west. There’s also, obviously, the question of who owns the data. That comes up every day with cookies, with all of the information constantly being collected. Having your most personal healthcare information owned by maybe the provider, maybe a giant tech company – it probably makes people uncomfortable, understandably. And there’s also the transparency aspect. Obviously, you’ll need to tell patients they’re being recorded, because it would be weird to record them without consent, but it’s also best to make sure the patient actually understands that this recording is not just being dictated for someone to type up later. It’s going out into the cloud, into some AI database where it will be transcribed. That might happen very, very quickly, but there is some window where it’s not right there – and what happens then? There are cybersecurity risks, and there’s the risk that the patient’s data is used in a way they don’t want or are uncomfortable with, which is not ideal.
| ZBP | Now, one of the topics that people love to talk about, besides AI, is HIPAA. If there’s one thing there has been more confusion around over the last five-plus years, with the onset of the pandemic, it’s HIPAA – HIPAA violations and what constitutes a HIPAA violation. I think most people know that HIPAA deals with medical records, right? The privacy of medical records and patient records. But where it gets a little more confusing is who actually can enforce those rights and what rights you actually have. And I know this was an issue when electronic health records – the EHRs you mentioned – came on the scene. How do you see AI fitting in, and how do we ensure these tools are HIPAA compliant?
| AR | Compared to areas where people are trying to use AI to diagnose, this still involves sensitive data, but you can enter into the business associate agreements that HIPAA requires. That should be standard for any health provider wanting to use AI – making sure the business associate knows what encryption it has to use, what type of access controls and user authentication it needs, and all the rules it has to follow. And one thing I believe HIPAA will really be able to touch is making sure that patient data is not used for training the model – that the model isn’t taking the data in and learning from this person – because that data can be de-anonymized, or I forget the exact word, but you can reverse it and infer who people are. So it’s about making sure there are some extra protections and keeping it to simple, scribe-style work rather than training the next AI doctor.
| ZBP | Now, you kind of hit on this, and I want to elaborate a little bit, because even in the legal field we encounter the very same problem with AI models being trained on our materials as lawyers. From a lawyer’s perspective, our client information is confidential – it’s subject to the attorney-client privilege – and so we have to go to great lengths to ensure it is protected and remains confidential. What I think some people don’t necessarily know is that if I were to upload client documents into a public-facing AI platform – and I’m not picking on them, but I’ll use ChatGPT as the example – it will not only review the documents I enter into the system, it will also use the documents I upload to train the model further, because the model is constantly learning based upon its inputs. So as users put information into ChatGPT, or most of the other artificial intelligence platforms, they’re actually training it on that information. And if that information is then de-anonymized – I’m going to butcher this term too – we run the risk that we are accidentally exposing our client’s information in a way it shouldn’t be. So there are a lot of AI platforms being developed in the legal community where the systems don’t train on the information users input, and beyond not training on it, they don’t store it. Once you upload a document, the system keeps it for something like 30 minutes.
So, if you ask the platform to review and summarize a document, it might do so, but after the 30 minutes, or whatever the window is, the session gets closed out, the documents get deleted from the system, and there’s no training on them. There’s no record of it, and it allows us as lawyers to keep the information both confidential and protected by the attorney-client privilege. I think that’s similar to what you’re describing these healthcare platforms doing in the healthcare space. Is that fair?
| AR | Yes, that’s totally fair. I think that’s the way they’ll have to go about it.
| ZBP | Now, one of the things we’ve talked about a lot when it comes to emerging technologies – and it seems like every time an emerging technology comes up, whether it’s AI or anything else – is that regulators are a little slow to get out in front of it and start regulating. So, when we talk about the regulatory landscape surrounding AI in the healthcare space, what are we seeing, where are we currently, and where do you think it should be going?
| AR | Honestly, right now it’s definitely the wild west. There’s a strong argument going on between letting AI companies grow as they might, to just get better, and the view that we need some way to protect people and regulate these things when they’re so often a black-box type of model where you don’t actually know what’s going on inside. And in healthcare there are a lot of overlapping regulators. The FDA regulates a bunch of software medical devices and diagnostic tools, so it seems like it would have the purview to act here. The FTC would monitor some aspects of it if there were any deceptive practices or problems with people not disclosing the use of AI. And you would think HHS, in general, would be handling the data privacy aspect. But Congress would clearly be the best place to start from a regulation standpoint – to get something federal in place to protect people and their data, their most important, sensitive information. State by state, it would be very patchwork, which probably wouldn’t do the best job of protecting against all the risks and grey areas you were talking about. But it’s definitely complicated, with so many overlapping agencies and interests.
| ZBP | Now, I know from the legal field that one of the things we have talked about for at least the last two years is the term hallucinations, when it comes to outputs or information that AI platforms will generate. For those of you who don’t know what hallucinations are, I’ll explain how, at least in the legal context, they come back to bite us in the rear end. If you’re not using an appropriate legal research AI platform and you’re using some of the publicly available, non-legal-specific ones, they will occasionally make up an answer. Meaning, they will generate a case cite that looks like it’s citing a legal case, and to a layperson looking at it – and even to a lot of lawyers – it looks like a real case cite. It’s got a name, a court where it came from, both a volume and a page number, and it looks real. But sometimes, when the lawyers or the judges who find these in legal briefs go to look them up on the traditional legal research platforms, they can’t find the cases – and part of the problem is that the AI has hallucinated them. I know this is a problem not just in the legal industry but across the spectrum of industries that use AI. So, from a risk-management standpoint, how do we help mitigate or guard against the risk of a hallucination?
| AR | Yeah, I think starting with, as you talked about, an industry-specific tool could help – training the AI so it doesn’t go off into random data from Google about medical information. It’s also important in a field where biases can lead to very different interpretations of symptoms. Putting out different information based on some intrinsic quality, or making some sort of judgment, can be really dangerous in the healthcare industry – when doctors do it and when AI does it. And because AI is so eager to please, because it wants to give you the answer you’re seeking, that seems to be when hallucinations start to come up. So it’s super important to work on training models about bias in medicine, and in general to make sure the model is not putting more weight on one aspect of the conversation than another because of whatever reasoning it has been trained on or accidentally taken on. You also want some sort of clarity into the black-box models, some sort of audit system, and a human always there, as you said – don’t let the judge be the one to find out the case is fake. Have a law clerk like me double-check whether something is really there first. You don’t want to let something go all the way to a diagnosis based off a chart, only to find out that it was not something the patient ever said or did – it was just what the generative AI thought made sense or sounded right.
| ZBP | Now, pulling out the crystal ball, what do you see in the future as some of the biggest exciting advances, and potential downfalls, of the use of this technology in the healthcare space?
| AR | As I said, I think AI scribes are overall a pretty good thing – reducing burnout, helping physicians have a better lifestyle, and keeping things more open between doctor and patient. Getting to spend that face-to-face time and build trust, rather than feeling like the doctor is doing something else or just wanting to get out of the room, really means a lot to people, especially when interacting with a doctor can be so make-or-break in every experience. Just feeling like they understand you is so key to the treatment you’re going to get. I also find it really exciting that AI can flag things quicker – things that might have slipped under the radar because a physician is doing so many other things, going from room to room, while the tool can stay focused on the one appointment it’s documenting. And in the longer term, with some of the more diagnostic angles, there’s the democratization of medicine – allowing people who maybe don’t have a doctor, or don’t have access, to know whether something really is serious enough to go in for, because it can be so expensive. Obviously, I don’t think putting all your faith in AI is smart, and people should see doctors, but it’s an interesting potential area for growth and for care to underserved communities. Presently, though, there’s the risk of hallucinations and biases creeping in, as well as just not knowing how these models work – not knowing whether the data accidentally gets used. I know ChatGPT now has a temporary chat mode alongside the regular one, but how do you know one didn’t accidentally become the other?
How do you know they’re not lying about it, or that it isn’t doing something else? With information this sensitive, I think it’s always important to consider that bad actors might try to use the information and leverage it in some way. But overall, I think it’s an exciting prospect.
| ZBP | Well, I’m excited by it. Anna Ray, I appreciate you taking the time today to talk with us and come on, and hopefully we’ll have you back soon.
| AR | Yeah, awesome, thank you so much.
| ZBP | Thank you.