The Signal Room | AI Strategy, Ethical AI & Regulation
Healthcare AI leadership, ethics, and LLM strategy—hosted by Chris Hutchins.
The Signal Room explores how healthcare leaders, data executives, and innovators navigate AI readiness, governance, and real-world implementation. Through authentic conversations, the show surfaces the signals that matter at the intersection of healthcare ethics, large language models (LLMs), and executive decision-making.
How Healthcare AI Innovation Redefines the Patient and Physician Journey (Part 1)
Healthcare AI innovation carries real clinical risk when workflow design fails and clinicians stop questioning the output.
In this two-part premiere of The Signal Room, host Chris Hutchins is joined by Dr. Barry Chaiken, a physician leader in clinical informatics and healthcare AI, and Ratnadeep Bhattacharjee, a data analytics and AI strategy leader, to examine the realities of implementing AI in clinical environments.
This conversation goes beyond the promise of AI to address the operational risks that emerge when healthcare systems adopt AI without adequate workflow design. The discussion centers on automation bias, the tendency for clinicians to defer to AI-generated recommendations without independent verification, and why that pattern can compromise patient safety.
Key themes explored in this episode include why human oversight remains essential even as AI becomes more capable, how workflow design determines whether AI supports or undermines clinical judgment, the risks of AI hallucination in healthcare settings, and practical approaches to training and change management that reduce the likelihood of over-reliance on automated systems.
This episode is essential listening for healthcare executives, clinical informaticists, data leaders, and anyone responsible for deploying AI in care delivery environments.
Guests: Dr. Barry Chaiken and Ratnadeep Bhattacharjee
SPEAKER_01: Dr. Barry Chaiken is not just a physician leader. He's a strategist and a longtime advocate for meaningful healthcare change. With more than 25 years of experience in clinical transformation, health IT, and public health, he's worked across payer, provider, and policy domains. Barry has served as chief medical officer at multiple health tech companies, advised the federal government on pandemic preparedness, and is a past board chair of HIMSS. He is also the founder of DocsNetwork, a consultancy focused on clinical innovation and patient safety. But beyond his resume, Barry brings a steady voice on the ethical use of technology, the culture of care, and the human moments that often get lost in the machinery of healthcare systems. In this conversation, we talk about what it takes to lead with integrity, how to challenge orthodoxy in times of change, and why he still believes healthcare is a calling.
SPEAKER_01: Dr. Barry Chaiken, welcome to the Signal Room.
SPEAKER_00: Thank you so much, Chris. I really appreciate that introduction.
SPEAKER_01: Well, it's an honor to have you. I've known you for a number of years, and it's exciting to be able to launch the Signal Room with a good friend and a visionary at the same time. So again, thank you and welcome. Let's jump right into it. I'm excited to talk about some of the things you've been working on. They're very timely and relevant for the world we're living in and all the buzzwords that are out there, and I think you're going to bring some real-life humanity to it. It's not just a technology discussion. So let's jump right in.
SPEAKER_01: Dr. Chaiken, your career has spanned clinical practice, public health, and technology. I want to ask you about your book. What inspired you to write Future Healthcare 2050 at this particular moment?
SPEAKER_00: Well, I have to tell you, I have to go back to the time I was an EIS officer, on one of the first visits I made to the CDC in my training. I saw this IBM PC sitting there, and I thought, oh, tech is really interesting. That got me interested in technology, and I've been using it my entire life. So about two years ago, ChatGPT came out, where you could actually interact with it without being a programmer. And I said, let me learn about this new technology. That got me very interested in artificial intelligence, and again, not from the place of a data scientist worried about JSON and Python and coding, but about how to use artificial intelligence in a way that could be helpful in your personal life, then in your work, and then, of course, in healthcare. That inspired me to say, let me see what I can do to talk about healthcare and AI as it started to become a more mainstream technology that people were familiar with. That inspired me to write the book. Throughout the book, I also put in this concept of trust. Understanding some of the risks of AI back in the beginning, risks that have continued to this day, I wanted to make sure that if we're going to use artificial intelligence, it's done in a way that really focuses on healthcare, on the patient, and on the benefits to society. And that really inspired me to write the book.
SPEAKER_01: Yeah, so definitely more to do on the human interaction side than on the tech. I think we've been excited about technology forever, and honestly we've not done ourselves a lot of favors in recent years with our focus on it. So let's talk a little bit about some of the things that led up to that. You at one point coined the term RHIT in a previous book. How does that concept evolve in this new work, especially in light of AI's acceleration?
SPEAKER_00: Well, I use the word revolutionary. It stands for revolutionary health IT. The reason I used the word revolutionary is that I wanted people to think of something completely new and different, a different direction, a change in how we did things. Just saying evolutionary health IT didn't cut it for me. I want people to understand that information technology tools are just that: they are just tools. If you don't figure out and plan how you're going to utilize them and tie them to the outcomes and goals that you want, they're just tools. You can spend a lot of money and a lot of time and not only achieve nothing, you might even make things worse. I learned that through my experiences working with electronic health records. And of course, when I wrote my book Navigating the Code, it was about process redesign, analytics, workflow redesign, and the idea that you need to use technology for a particular goal, to achieve a particular objective. It requires you not just to use the technology, but to redesign the things around it to make sure the technology is used in a proper way that is revolutionary in terms of the outcomes it delivers.
SPEAKER_01: That's such an important distinction. It's unfortunate that we are so word-sensitive, but I think revolutionary is the right word for what you're describing. So in terms of transformation: in the book, you're exploring how AI could reshape the patient-physician journey. What do you think is the most misunderstood aspect of that transformation today?
SPEAKER_00: My experience in working with technology has shown me that people like to get excited about that new bright, shiny object.
SPEAKER_01: Right.
SPEAKER_00: And I understand we all love new things, right? We love a new pair of sneakers, we love a new technology we get to work with, but we always have to be focused on the objectives and the outcome. There's a lot of talk today about using clinical artificial intelligence to achieve a particular goal or objective. I think that has tremendous potential, but the issue is that we have a lot of work to do to achieve it. I think there's a lot we can do to further humanize how we deliver healthcare using artificial intelligence by making the interaction between the caregiver and the patient more personal. And the caregiver is not just the physician, but the nurses, the physical therapists, the people who do the blood draw, the people who register you at the clinic. AI can do two things. One, it can prepare the caregiver, giving them information before they see that patient so they can have a more personal, humanistic interaction with that person. Second, the interaction the patient has with the system, even if it's automated, can be personalized. Before you have your surgery, the instructions for the things you need to do to prepare can be personalized to who you are: to the language that you speak, to the level of education that you have, to the type of job that you do. The system can adjust its recommendations and speak to you in a way that is more motivating and comforting for you. In the same way, the interaction between the physician and the patient can be humanized by the information given to that physician, but also by the communication, written, verbal, and otherwise, to that patient, personalized around who they are, what their current diseases may be, what their challenges may be, their family situation. These are things that we cannot do as humans; it's just too much to cover. But AI can actually help us do that.
SPEAKER_01: Yeah, I love that. You bring out a really important thing that we could easily lose sight of. The patient experience obviously is one of the most important things we can look at, but it's not only a physician they are interacting with. A bad experience at check-in can be just as much of a problem for the patient as anything else. So focusing on the people whose services are essentially wrapped around that patient experience, there are just a lot of opportunities in there. Let's talk about what practical solutions might be in the near term. What AI use cases do you think health systems should be focused on, and can you talk a little bit about one or two that you believe are overhyped?
SPEAKER_00: The overhyped ones are really easy, so let's talk about some of the practical ones, okay? We know a lot about physician burnout, and about nurse burnout and other caregiver burnout, because of the documentation. So let's go back to where that came from. We created these electronic health records. Why were they created? To help patients? No. They were not developed primarily to document patient care. They were developed to make it easier for organizations to document care for the purpose of receiving reimbursement. We have a reimbursement system in the United States that is dependent upon that documentation. I'm not going to talk about the pros and cons of that; there are many arguments on all sides, and all of them are valid. I'm just saying the EHR was developed for documentation. Why was it done that way? Very simple: who would buy a system that didn't optimize the documentation for reimbursement? Nobody. Through meaningful use and other government measures, we tried to refocus it on the health care and the outcomes of the patient. But the reality is that we live in a system where, whether you're a for-profit or a not-for-profit organization, you survive by generating the proper revenue to keep the organization going. And to be fair, that includes generating revenue to pay for people who can't pay for their care. It's not just generating revenue for the benefit of senior people or a board; it's also to have surplus money to take care of those who can't afford care, which is provided essentially free. So that's where the revenue piece comes in. We are never going to get rid of that requirement unless we completely change our healthcare system. So we need a way to decrease that burden.
The way we can do it is to use AI, through ambient listening and other processes, so the physician can focus on the patient. Why do I think this is really important? It's not just the time the physician can spend with the patient: looking them in the eyes, examining them, communicating the therapeutic plan or the instructions for the next week or two. It's also that ambient listening by the AI, when properly implemented, can document the care as well as, if not better than, the actual caregiver. Now, think of the possibilities that can come from that. The note that goes to the patient can be modified and written in a way that the patient can understand and that is meaningful to them: not just the physician's note, but a customized summary of their visit. If there is a referral to a specialist, the information in that referral can be prioritized based on the specialty of the referred physician, so that the most important details about that patient are not buried somewhere in the note but are right at the top, because we know a surgeon needs to know X, Y, Z about the patient, but less about some other things. For example, it might be very important to know certain labs on a patient who's undergoing, say, a hip replacement, but maybe their cholesterol level isn't as important as their clotting factors or their hemoglobin. We can make sure those factors are right up front and seen first. That impacts care and improves the quality of care, while at the same time addressing physician burnout.
SPEAKER_01: I love that. Are there things you think we could do specifically to lean in on what you're saying, to make sure that the AI is supporting rather than replacing the human connection in medicine? Because I think that's probably the area that is most uncomfortable for us across the industry as we think about how to use AI in a way that's meaningful.
SPEAKER_00: Well, that goes back to the idea of clinical AI. Now, we can do great things with AI in terms of drug discovery and clinical trials, and I can go into detail on some of those. But the challenge in using clinical AI is in using it as a therapeutic or diagnostic tool. The problem is that if you do not set up the workflow correctly and you implement that AI anyway, you run the risk of automation bias, where the physician sees the AI recommendation while busy and distracted. And by the way, physicians, nurses, and other caregivers are human beings. They are going to get distracted. They are going to miss things. We all miss things and make errors. With automation bias, it just becomes okay, okay, okay. We know that AI can hallucinate, can make things up, can provide misinformation. Without that human check, we have a problem. Now, how do we address that? Well, we could create AI that looks at the output from the first AI and monitors it, and there are other surveillance tools we need to put in place. So it's actually multifaceted. First, let's make sure the workflow is correct, so the physician is part of the process and we try our best in the design to avoid automation bias. Second, let's have some type of tool, whether it's AI alone or AI plus some additional algorithm, that checks the results from the AI and from the physician's plan. And third, let's set up a surveillance tool: monitor how we're doing with these things on a regular basis and look for things that are out of line and may be wrong. That way we can work to correct them, both in terms of the workflow processes and in terms of the artificial intelligence.
SPEAKER_01: That brings up an interesting concept, because when we talk about things like surveillance, that word probably has negative connotations in people's minds. It's big-brotherish: you're going to monitor me and babysit everything that I do. I went to medical school; I know what I'm doing, and your technology is going to be looking over my shoulder. Is that what you're telling me? I think that's an interesting conversation, because this is a bit different from a cultural-adjustment standpoint for anybody who's going to be measured this way, but it's incredibly important for the reasons that you said. Human nature is such that eventually, no matter what our intentions are, we get comfortable and complacent over time, because 99% of the time it's right, or the perception we have is that it's always right, so I don't think I have to worry about it anymore. The ability to measure matters. The way I think about it is: if Dr. Chaiken is spending, on average, three minutes reviewing the post-visit summary, and this Hutchins guy is barely spending 30 seconds, do we have something that flags that? Somebody needs to come and talk to that Hutchins guy, because he's clearly not being attentive and he's putting himself and his organization at risk. It seems like a little more than people are probably accustomed to thinking about, but it's really important for the reasons that you said. Human nature is human nature. I don't know how else we short-circuit that.
SPEAKER_00: Well, Chris, maybe that Hutchins guy knows something that the Chaiken guy doesn't, and the Hutchins guy's 30 seconds instead of three minutes is not only much more efficient, it also leaves extra time to spend with that patient. We don't know whether the Hutchins guy or the Chaiken guy is doing the right thing. But let me give you this model. If you go back 40 years, the chance of you dying in an airplane crash was 10, 20, 50, 100 times greater than it is today. Why is that? Because the pilots who initially said, hey, I'm the captain, you can't tell me what to do, learned over time that that is the wrong model. You have to have crew resource management, which means the captain, the co-pilot, and other people on the aircraft all have a right to intervene. We do something similar in healthcare: we have a checklist in the OR, and anybody in the OR who thinks something is not going correctly can stop the beginning of the surgery and ask the team to recheck it or question it. That's the mindset we have to have in healthcare. If I'm looking at how you're doing, I'm not looking to punish you. What we're looking to do is make sure that you're within the parameters of what we expect. And, by the way, we might discover something that you're doing that we should have everybody else do, because it's better. Unfortunately, the way surveillance is used today, particularly in for-profit, hedge fund- or venture fund-owned organizations, is: it's taking you, nurse, four minutes on the phone with that patient; you've got to cut it down to three minutes and 45 seconds. If you know a nurse, or if you are a nurse, you know that is not the way you work. And doctors don't want to work that way either. The most important person in the room when you're seeing a patient is the patient.
You went to medical school, nursing school, or PT school, or you joined healthcare as an intake person with no medical training, but you did it because you're committed to that cause. That's what's most important. We can make care more efficient while keeping it humanistic, but we're not here to punish people over time. We're here to identify the best ways to do things and encourage people to do the best for their patients.
SPEAKER_01: Thank you for explaining that. It's a really important thing that we have to get people to understand. If we don't put these in place as we're starting to deploy, we're definitely going to go in a direction that's very, very dangerous. Thank you for that. I want to get into some things that are, for me at least, still more difficult to wrap my head around in terms of the types of use cases we can be comfortable with. You write about predictive diagnostics, citizenship, and workflow automation. Which of these will be the most disruptive to current medical practice, do you think?
SPEAKER_00: We've evolved in using electronic health records, right? I think we would be able to utilize these tools in ways that could be incredibly beneficial to patients. But what we need to do is involve the clinical people in the design, at least of the workflow, the output, and the surveillance tools. We have to gain their trust in utilizing them. And let's not forget about the patient: they should have a voice in how they want to use the AI in terms of receiving, say, messages and summaries. It has to be a holistic approach to understanding this piece. And let's not decide what we're going to do before we've collected all the information from everybody. The other thing is, if you want to fail in a hospital, what you should do is implement something without asking the nurses what you should do. Okay? So don't do that. Involve everyone, include everyone, understand it, and then implement it and tell people what you're trying to accomplish.
SPEAKER_01: Where can listeners find your book, follow your work, or invite you to speak further about this important topic? We've only scratched the surface, but I know our listeners have gotten a ton of really great insight. I think you need to tell them right where they can get your book, because they're going to be excited to read it.
SPEAKER_00: Well, you can find everything about me at barrychaiken.com. And there you can have some fun. Not only can you read about my book, you can order a copy there that I'll sign, and if you want, I'll put a little message in it and mail it out to you. It's also available on Amazon and Barnes and Noble, but obviously I can't sign them there. The second thing is, I created two different types of chatbots. With one of them, you can text; it appears on the homepage, and there's a little menu you can go to and ask me questions. But I specifically set it up so that if you ask what the weather is going to be in your town next week, it will tell you that's not in its database and that it has been instructed not to hallucinate. I made a point of doing that. With the second one, you can actually, quote unquote, call me: you hit the little call button, ask the same question, and it responds in my cloned voice. It's just a fun way to show how to use AI. I have a lot of information on my website, barrychaiken.com, and of course, feel free to reach out to me using that site. And please connect with me on LinkedIn: barrychaiken.com/LN will link you to my LinkedIn page, and we can connect.