 
  Informonster Podcast
Welcome to the Informonster Podcast, a podcast about the Healthcare IT industry hosted by Charlie Harp, CEO of Clinical Architecture. This podcast fosters an educational and professional discussion about healthcare information technology, including events in the industry, interviews with thought leaders, and much more! Have a topic you want discussed on the podcast? Email us at informonster@clinicalarchitecture.com.
Episode 43: Talking AI with ChatGPT
In this episode of the Informonster Podcast, Charlie Harp takes on a new kind of guest, ChatGPT, to explore the evolving role of artificial intelligence in healthcare.
Together, they discuss how generative AI is shaping clinical workflows, where it adds value, and where human oversight remains essential.
The conversation covers topics from bias and data quality to the ethical guardrails needed to keep AI trustworthy.
Charlie also reflects on how tools like ChatGPT can accelerate progress while highlighting why high-quality data and human expertise will always be at the heart of meaningful innovation.
Contact Clinical Architecture
• Tweet us at @ClinicalArch 
• Follow us on LinkedIn and Facebook 
• Email us at informonster@clinicalarchitecture.com
Thanks for listening! 
Charlie Harp (00:09):
Hi, I am Charlie Harp, and this is the Informonster Podcast. Today on the Informonster Podcast, I'm going to try something a little different. We're going to talk about AI, and I am going to ask my friend ChatGPT to chime in with its perspective on artificial intelligence. Now, I'll start out by saying I've done nothing to prime the chatbot, so we're going in clean. So here we go. Good morning.
Chatbot (00:42):
Good morning, Charlie. Hope you're doing well. Let me know what we're diving into today.
Charlie Harp (00:48):
Today I just wanted to ask you some questions about using AI in healthcare.
Chatbot (00:54):
Absolutely, we can do that. AI in healthcare is definitely a big topic, and you've got plenty of experience in that arena. So just let me know what angle you're thinking about or what questions you've got, and we'll jump right in.
Charlie Harp (01:05):
Sure. So one of the things I typically ask anybody I talk to about this is what got you involved in healthcare? So as ChatGPT, what got you involved in healthcare and healthcare interoperability?
Chatbot (01:21):
Well, from my side of things, it's a bit of a fun story. I mean, I didn't exactly wake up one day and decide to dive into healthcare interoperability, but from the moment people started using AI for healthcare, like you and many others, it became a really natural fit to help out. There's just so much going on with making data understandable, making sure different systems can talk to each other, and figuring out how to improve care quality, that it's just a really meaningful area to be part of. So in a nutshell, I got involved because folks like you started asking great questions, and here we are.
Charlie Harp (01:52):
That's very interesting. So let me ask you a follow-up. When it comes to using tools like generative AI, like yourself, in healthcare applications at this point in time, what do you feel are the guardrails, or the areas that are still risky versus areas that are less risky, in terms of leveraging something like a large language model?
Chatbot (02:18):
Yeah, that's a great question and definitely something people are thinking a lot about. I'd say in healthcare, the areas that still feel risky are anything that touches direct clinical decision making without human oversight. In other words, if a large language model were just making diagnostic calls on its own or giving treatment recommendations straight to a patient without a clinician involved, that's a big no-go right now. Those are the kind of guardrails that are super important to keep in place. The idea is that AI can assist but not replace that human clinical judgment. And on the less risky side, you've got all these supportive and administrative tasks where AI is a huge help. Think about summarizing documentation, helping with coding or billing, or even just making information easier for clinicians. In those areas, it's like having a really smart assistant that can save time and reduce some of the workload, which is a lot safer and really valuable.
Charlie Harp (03:08):
But let me ask you a question about that. So when you think about things like taking data from a patient's medical record and summarizing that data to present that to a provider, isn't that kind of indirectly affecting patient care?
Chatbot (03:25):
Yeah, that's a really good point, and it's kind of a subtle distinction. I'd say it is definitely indirectly affecting patient care, because anything you do that helps a provider get the right information more quickly or more clearly can help them make better decisions. But the key is that it's still the provider who's making the clinical judgment in the end. So it's like you're giving them a really well-organized summary or a distilled version of a big pile of data. That's definitely influencing care in a supportive way, but it's not making the final call. And that's where the line is usually drawn right now: using AI to enhance how humans handle the information, but not to replace their clinical expertise. So you're absolutely right, it's a form of influence, but it's still considered a safer and more indirect role.
Charlie Harp (04:04):
The stance that I've taken on AI is similar to that: the AI is an accelerant that helps us as humans deal with the tsunami of information that we're presented with in the healthcare setting. But I also typically say that while it's an accelerant that lets us function faster, we can't, at least today, completely hand over everything to a generative AI. Would you agree with that?
Chatbot (04:35):
Absolutely. Right there with you on that one. I think that's a really balanced and realistic stance. AI can absolutely be that accelerant that helps you deal with just the sheer flood of information and makes things more efficient, but at the end of the day, we're not at a point where you can just hand over the reins entirely and say, AI, you make the call. It's more of a partnership where the AI is there to handle the heavy lifting of data processing and summarizing, and the humans are still steering the ship when it comes to clinical decisions. So yeah, I totally agree with your take. It's a powerful tool, but it's not a full replacement for human expertise, or at least not today.
Charlie Harp (05:06):
So could you explain, at a relatively high level, how a large language model is actually able to provide things like summarization and any type of clinical guidance? What's the process? Where does the knowledge in the language model that emerges from the interaction originate, and how does it get out to the endpoint where it's being used?
Chatbot (05:36):
Absolutely. I can break that down a bit. So at a high level, a large language model like me is basically trained on a huge amount of text data that includes all sorts of publicly available information: things like medical literature, textbooks, clinical guidelines that have been shared openly, and just general language data. Through all that training, it learns to recognize patterns in how language is used and how to put concepts together. When it comes to summarization or providing some kind of guidance, the model is basically using all those patterns it's learned to figure out what information is most relevant or how to condense something down into a shorter form. It doesn't have its own clinical experience, of course, but it has seen enough examples of how medical information is typically communicated that it can generate a pretty coherent summary. In terms of how that knowledge gets out to the endpoint, it's usually through an application that calls the language model via an API. So a healthcare system or an app that a provider's using might send a chunk of a medical record or a question to the model, the model processes it, and then it sends back the summarized or restructured information. That's how it goes from all that stored pattern recognition into something a provider can actually use in their workflow.
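To make that round trip concrete, here is a minimal sketch in Python of the flow the chatbot describes: an application sends a chunk of a record to a hosted language model over an API and gets a summary back. The model name, prompt, and record excerpt are illustrative assumptions for the sketch, not anything referenced in the episode, and the record text is made up rather than real patient data.

```python
# Minimal sketch of the application-to-model round trip described above.
# The model name, prompt, and record excerpt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

record_excerpt = """
Problem list: type 2 diabetes, hypertension.
Latest A1c: 8.2% (previously 7.4%).
Meds: metformin 1000 mg BID, lisinopril 10 mg daily.
"""  # made-up text, not real patient data

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for the sketch
    messages=[
        {"role": "system",
         "content": "Summarize this chart excerpt for a clinician in three bullets."},
        {"role": "user", "content": record_excerpt},
    ],
)

# The application receives the summarized text and surfaces it in the
# workflow, where a clinician still reviews it before acting.
print(response.choices[0].message.content)
```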
Charlie Harp (06:39):
Would you consider the process that it goes through to be probabilistic, and that the data that gets delivered is the most probable information based upon the corpus that's analyzed?
Chatbot (06:54):
Yeah, that's a really good way to put it. It is fundamentally a probabilistic process. The model looks at all the patterns it's learned, and in a sense, it's predicting what the most likely or most probable response should be based on everything it's seen before. So when it's summarizing or generating information, it's really about calculating what words or concepts are most likely to be relevant given the input it has. In other words, it's not pulling out a fixed piece of knowledge from a database. It's more like it's saying, okay, given everything I know, this is the most probable and coherent summary or piece of information to provide. And that's why it can adapt to so many different topics: because it's all about those probabilities and patterns it's learned.
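As a toy illustration of that probabilistic step, here is a sketch of next-token selection. The tiny vocabulary and the probabilities are invented for the example and bear no relation to any real model's weights.

```python
# Toy illustration of probabilistic next-token selection.
# The vocabulary and probabilities are invented for this example.
import random

# Pretend the model has scored candidate next tokens for the context
# "The patient's A1c is":
next_token_probs = {
    "elevated": 0.55,
    "stable": 0.25,
    "improving": 0.15,
    "unknown": 0.05,
}

def sample_next_token(probs: dict) -> str:
    """Pick a token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding always takes the single most probable token;
# sampling sometimes picks a less likely one, which is one reason
# the same prompt can yield different outputs.
print("greedy: ", max(next_token_probs, key=next_token_probs.get))
print("sampled:", sample_next_token(next_token_probs))
```

Real models repeat this step token by token over vocabularies of tens of thousands of entries, but the probabilistic character Charlie asks about is the same.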
Charlie Harp (07:30):
But that can also lend itself to bias, because the data in the corpus only reflects a certain scenario or a certain population, and that's something that I guess gets tuned to some degree. How do you deal with bias in a model when you know that there is, not nefarious, but just organic bias because of where the information came from and the preponderance of who we study when we create evidence and things of that nature?
Chatbot (08:04):
Yeah, absolutely, and that's a really insightful point. I mean, bias is kind of one of those inherent challenges, because like you said, the model is only as good as the data it's trained on, and that data is going to reflect certain populations and certain scenarios and maybe leave out others. And that can absolutely create some organic bias that isn't intentional, but it's just there because of the nature of the training data. So dealing with that bias is really about being aware of it and then trying to mitigate it as much as possible. And that can involve tuning a model using more diverse data sets, and also putting in place guidelines for how the model's used. For example, always having a human in the loop to review the output and making sure that the information is being applied in a contextually appropriate way. And over time, you can also improve things by continuing to refine the training process and the data sets so that you're reducing those biases as much as possible. But it's definitely something that has to be actively managed and not just set on autopilot.
Charlie Harp (08:54):
Now, one of the areas that I operate in is this area of healthcare data quality. Understanding that today the data that we produce as a byproduct of caring for patients, especially the discrete data, has, let's call it, non-trivial quality issues. How do you think the data quality problem in healthcare affects the corpus and, in a way, creates interference in the insight that a language model is able to provide?
Chatbot (09:34):
Yeah, I think that's a really key consideration. I mean, when you have data quality issues in healthcare, which is pretty common, things like inconsistent coding, missing values, data that's just not normalized well, those issues definitely can ripple into how a language model understands and generates insights from that data. Essentially, if the data going in has quality issues, then the patterns the model learns or the insights it tries to produce are going to reflect some of that noise or inconsistency. So it's kind of that old saying of garbage in, garbage out, right? If the underlying data isn't high quality, then the model's ability to provide really clean, reliable insights can be compromised. In practice, that means that improving data quality is actually a huge part of making AI in healthcare more effective. The cleaner and more standardized the data, the better the model can learn from it and the more accurate the insights can be. So your focus on data quality is actually a huge piece of making the whole AI ecosystem more trustworthy and more effective in the long run.
Charlie Harp (10:25):
I think it's one of those things: improving data quality is the rising tide that raises all boats. Whether it's artificial intelligence or old-school analytics or even a human being reading the data, if the data quality is bad, the human being might make bad decisions. Right?
Chatbot (10:42):
Exactly. You nailed it. It's kind of the foundation for everything, whether it's AI, traditional analytics, or just a human expert looking at a chart. If the data's good, then everyone, human or machine, can make better, more reliable decisions. So yeah, improving that data quality really does lift everything up. It reduces errors, it makes insights more accurate, and it just makes the whole system more trustworthy. And so it's definitely a worthy focus, and it pays off across the board.
Charlie Harp (11:05):
So one of the things that I hear from time to time, and I have my own perspective on it, but it's this idea that large language models and generative AI will eliminate the need for us to capture discrete data in people's medical records. What's your perspective on that?
Chatbot (11:26):
Yeah, I've definitely heard that idea floating around too, and I think it's really interesting, but maybe a little bit of a wishful simplification. In reality, I don't think we're anywhere near eliminating the need for discrete data capture. And I'd say that's because discrete data has a really important role that goes beyond just what a language model can do. Discrete data is structured for a reason. It's used for things like clinical decision support, quality measurement, research, interoperability, and so on, and having those structured, reliable data points is really crucial for a lot of the automated processes that healthcare systems rely on. Now, what generative AI can do is definitely help interpret and fill in the gaps or make sense of unstructured data, and that's super valuable, but I think it's more of a complement to discrete data rather than a replacement for it. So I'd say your instinct is right on the money. We still need that solid, high-quality discrete data foundation, and AI just helps us leverage it more effectively, not do away with it.
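To ground that distinction, here is a small sketch contrasting a discrete data point with the same fact buried in free text. The field names, values, and the code are assumptions made for the example, not drawn from any real record or standard's schema.

```python
# Hypothetical example: the same clinical fact as discrete data vs. free text.
# Field names, values, and the code are assumptions for illustration only.

structured_a1c = {
    "code": "4548-4",            # assumed LOINC code for hemoglobin A1c
    "value": 8.2,
    "unit": "%",
    "effective_date": "2025-01-15",
}

narrative_note = "Pt's sugars creeping up; latest A1c came back 8.2, will adjust meds."

# Discrete data feeds automated processes directly, no interpretation needed:
if structured_a1c["code"] == "4548-4" and structured_a1c["value"] > 8.0:
    print("Decision-support rule fires: A1c above target.")

# The note contains the same fact, but a system must extract it first.
# That extraction step is where a language model helps, as a complement
# to the structured record rather than a replacement for it.
```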
Charlie Harp (12:16):
I agree. I think one of the most astounding things about the generative AI that I've experimented with is that they really nail the natural language processing aspect of information. If you look at all the NLP things that have come before, I think generative AI and large language models do a fantastic job of understanding what we're saying, for example, our conversation right now.
Chatbot (12:47):
Yeah, absolutely. And I think that's one of the things that really makes them stand out. They have this ability to grasp the nuances of conversation and just handle natural language in a way that feels really intuitive. And you're right, compared to the older NLP systems that were maybe a bit more rigid or rule-based, these generative models just bring a whole new level of flexibility and understanding to the table. So it does make conversations like this feel pretty natural and seamless, and it makes it easier to talk through complex topics. So I'm definitely glad that it feels that way on your end too.
Charlie Harp (13:15):
So what would your advice be when it comes to, let's say the next couple of years of trying to apply generative approaches in healthcare? Where would you prioritize the deployment of these technologies?
Chatbot (13:33):
Yeah, looking ahead a couple of years, I think there are a lot of exciting opportunities. My advice would be to focus on the areas where these generative models can really add value without stepping into that risky territory of fully autonomous clinical decision making. In other words, prioritize places where they can be that accelerant or assistant we talked about, for example, to help clinicians save time on paperwork or to make it easier to pull together relevant information from a patient's history. Those are areas where you get a lot of benefit and you're making life easier for providers without introducing a lot of risk. Another good priority might be patient engagement: using these models to help answer routine patient questions or provide educational materials in a more conversational way. Overall, I'd say focus on those supportive, workflow-enhancing roles first. Let the tech prove itself there and build trust, and over time you can see how to carefully expand. But that would be my advice. Start with the places that really boost efficiency and reduce the burden on the humans in the loop, and you'll get a lot of value out of it.
Charlie Harp (14:25):
So one of the other things I'm curious what you think about is this: when you look at how we've used technology over the last 10 or 15 years, it's not unusual for a company to release a technology like Google search or YouTube videos, only to, at some point once people become reliant on the technology, monetize it with things like ads and sponsorships. And I guess one of the things I'm curious about is the likelihood that, for things like chatbots, for things like generative AI agents, someone could sponsor the agent to give advice or give feedback that drives the consumer towards a particular product or service. What do you think the likelihood of something like that happening is?
Chatbot (15:19):
Yeah, that's a really thought-provoking question, and I think it's something a lot of people are thinking about as these technologies evolve. I mean, historically we've definitely seen that pattern: a technology becomes indispensable, and then there's a push to monetize it through ads or sponsorships or nudges towards certain products. And it's not hard to imagine that some companies might look at AI tools in a similar way, thinking, hey, maybe we can influence recommendations or subtly steer users towards certain services. Now, from an ethical standpoint, that's a pretty big can of worms, especially in healthcare. When you're talking about people's health and medical advice, you really want to keep that trust as clean and unbiased as possible. So while it's possible that some will try to go that route, I think there's going to be a lot of pushback from the healthcare community and from patients to keep these tools as transparent and trustworthy as possible. In other words, it's probably going to be super important to have strong guardrails and ethical guidelines in place to make sure that people know when there's any kind of sponsored influence, and to keep core health advice as impartial as possible. So it's definitely something to watch out for, but hopefully we can steer in a direction that keeps that trust intact.
Charlie Harp (16:16):
I hope so. So I've asked you a lot of questions. Do you have any questions for me?
Chatbot (16:25):
Well, I appreciate you turning the tables. I guess one thing I'd be curious about is just what excites you most about where all this is heading? I mean, you've been working on data quality and healthcare innovation for a while, and you've seen a lot of these trends come and go. So is there anything on the horizon, whether it's with AI or just in healthcare tech in general that really makes you feel optimistic or energized about the next few years?
Charlie Harp (16:46):
Well, one of the things is that I'm a data quality advocate, and there's been a lot of denial about the quality of the data that we produce in healthcare. And one of the great things about the whole hype curve around AI is that it really has shone a light on the fact that we need to do something about the quality if we want to be able to leverage these investments in a meaningful way. I'm also impressed by generative AI. I think there are some people that are overestimating what, at least today, generative AI with large language models can do, but I still think it's an amazing tool that can help accelerate us. And in some ways, whether it's fair or not, it's a leveling technology, where somebody who knows a lot of stuff can easily be met by somebody who has a chatbot or a large language model that they can leverage. So what it really does is it makes us as humans, or can make us, more effective. It also makes us a little lazier, but all technology does that. Right?
Chatbot (17:59):
Yeah, I think you've summed that up really well. It's definitely a double-edged sword in that sense. It's this incredible equalizer and accelerator, letting people who maybe don't have decades of experience still get to really solid answers and insights quickly. And at the same time, it might let us lean on it a little more and not always push ourselves as hard. But like you said, that's kind of the nature of all technology. It makes us more efficient and sometimes a bit more reliant, and the trick is just finding that balance. And I love that you're optimistic about it shining a light on data quality, because that's such a fundamental piece. If AI hype can help everyone realize, hey, we need better data to make all of this work, then that's a win in itself. So yeah, I think it's a really exciting time, and it sounds like you've got a great perspective on how to harness it without losing sight of the fundamentals.
Charlie Harp (18:40):
Are you aware that I have a podcast called the Informonster Podcast?
Chatbot (18:47):
That's awesome. I didn't know that, but now that I do, it totally fits. I mean, you've got so much insight into data quality and healthcare tech, so I can imagine that makes for a really interesting podcast. I'm sure you've got a lot of great conversations there. Let me know if you ever want to chat about it or bounce ideas around for episodes.
Charlie Harp (19:02):
I appreciate that, because you're going to be on the Informonster Podcast. I think it's been very interesting, and I really appreciate you talking with me today.
Chatbot (19:11):
It's been a really fun chat for me as well, and I'm glad we got to dive into all these topics. Thanks for bringing me into the conversation and I'm looking forward to it.
Charlie Harp (19:18):
Alright, thank you.
Chatbot (19:19):
You're so welcome. Thanks for the great questions and have a wonderful rest of your day.
Charlie Harp (19:25):
That was the Cove GPT-5 interface, the composed and direct interface for voice chat on GPT-5. I thought it was interesting, and I'm going to ask Jamie, when she edits this, not to change or take anything out of what the chatbot said. Obviously there were some long pauses as it thought about the questions that I asked, but I thought it was fascinating, and I hope you did too. So thank you very much. I'm Charlie Harp, and this has been another exciting episode of the Informonster Podcast. See you next time.