The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
Redefining the Patient-Physician Journey with AI: Dr. Barry Chaiken on Ambient Listening, Automation Bias, and Revolutionary Health IT
Patient-physician AI transformation, physician burnout solutions, and the hidden dangers of automation bias take center stage in this episode of The Signal Room. Host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, welcomes Dr. Barry Chaiken, a physician leader with more than 25 years of experience in clinical transformation, health IT, and public health who has served as chief medical officer for multiple health tech companies, advised the federal government on pandemic preparedness, and is a past board chair of HIMSS. He is also the founder of DocsNetwork, a consultancy focused on clinical innovation and patient safety.
Dr. Chaiken opens with the inspiration behind his book, Future Healthcare 2050, tracing his fascination with technology back to an early encounter with an IBM PC at the CDC during his training. The conversation quickly pivots to his concept of Revolutionary Health IT, or RHIT, a framework reminding leaders that technology is just a tool, and without proper workflow redesign tied to specific outcomes, it can make things worse rather than better.
The most practical segment explores how AI can humanize healthcare rather than replace human connection. Dr. Chaiken describes how ambient listening technology can free physicians from documentation burden while simultaneously producing personalized patient summaries, customized referral notes prioritized by specialty, and communications adapted to a patient's language and education level. He warns that the electronic health record was originally built for reimbursement documentation, not patient care, and that AI's greatest near-term value lies in reclaiming the physician's attention for the patient in the room.
A critical discussion of automation bias follows. Dr. Chaiken draws a powerful parallel to aviation's crew resource management revolution, noting that airline fatality rates dropped dramatically once pilots accepted collaborative oversight. He advocates for three safeguards: proper workflow design to keep the physician in the loop, AI-monitoring-AI systems to check outputs, and ongoing surveillance tools to identify patterns and improve care. His warning to hospital leaders is direct: if you want to fail, implement something without asking the nurses.
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
Dr. Barry Chaiken is not just a physician leader. He's a strategist and a longtime advocate for meaningful healthcare change. With more than 25 years of experience in clinical transformation, health IT, and public health, he's worked across payer, provider, and policy domains. Barry served as chief medical officer for multiple health tech companies, advised the federal government on pandemic preparedness, and is a past board chair of HIMSS. He also is the founder of DocsNetwork, a consultancy focused on clinical innovation and patient safety. But beyond his resume, Barry brings a steady voice on the ethical use of technology, the culture of care, and the human moments that often get lost in the machinery of healthcare systems. In this conversation, we talk about what it takes to lead with integrity, how to challenge orthodoxy in times of change, and why he still believes healthcare is a calling. Dr. Barry Chaiken, welcome to the Signal Room.
Dr. Barry Chaiken: Thank you so much, Chris. I really appreciate that introduction.
Christopher Hutchins: Well, it's an honor to have you. I've known you for a number of years, and it's just exciting to be able to launch the Signal Room with a good friend and a visionary at the same time. So again, thank you and welcome. Let's just jump right into it. I'm kind of excited just to talk about some of the things you've been working on. Very timely and relevant for the world we're living in and all the buzzwords that are out there. I think you're going to bring some real-life humanity to it. It's not just the technology discussion. So let's just jump right in. Dr. Chaiken, your career has spanned clinical practice, public health, and technology. I want to ask you about your book. What inspired you to write Future Healthcare 2050 at this particular moment?
Dr. Barry Chaiken: Well, I have to tell you, I have to go back to the time that I was an EIS officer, and it was one of the first visits that I did to the CDC in my training. And I saw this IBM PC sitting there, and I was like, oh, tech is really interesting. And that actually got me interested in technology, and I've been using it my entire life. So about two years ago, ChatGPT came out, where you could actually interact with it without being a programmer. And I said, let me learn about this new technology. And that got me very much interested in artificial intelligence. And again, not from the place where I was a data scientist worried about JSONs and Pythons and coding, but about how to use artificial intelligence in a way that could be helpful in your personal life, and then in your work, and then, of course, in healthcare. And that inspired me to say, let me see what I could do to talk about healthcare and AI as it started to become a more mainstream technology that people became familiar with. That inspired me to write the book. Throughout the book, I also put in a concept, which is the idea of trust. Understanding some of the risks of AI back in the beginning, risks that have continued to this day, I wanted to make sure that if we're going to use artificial intelligence, it is done in a way that really focuses on healthcare, on the patient, and on the benefits to society. And that really inspired me to write the book.
Christopher Hutchins: Yeah, so definitely more to do on the human interaction side than on the tech. I think we've been excited about technology forever. And we've honestly not done ourselves a lot of favors in recent years with our focus on that. So let's talk a little bit about some of the things that led up to that. You at one point coined the term RHIT in a previous book that you've written. How does that concept evolve in this new work, especially in light of AI's acceleration?
Dr. Barry Chaiken: Well, I use the word revolutionary. It stands for Revolutionary Health IT. The reason I used the word revolutionary is that I wanted people to think of something completely new, something different, a different direction, a change in how we did things. And just using evolutionary health IT didn't really cut it for me. I want people to understand that information technology tools are just that. They are just tools. And if you don't figure out and plan how you're going to utilize them and tie them to the outcomes and goals that you want, they're just tools. And you can go ahead and spend a lot of money, a lot of time, and not only achieve nothing, you might even make things worse. I learned that through my experiences working with electronic health records. And of course, when I wrote my book Navigating the Code, it was about process redesign, analytics, workflow redesign, and the idea that you need to use technology for a particular goal, to achieve a particular objective. It requires you to not just use the technology, but redesign the things around it to make sure the technology is used in a proper way, one that is revolutionary in terms of the outcomes it delivers.
Christopher Hutchins: That's such an important distinction. It's unfortunate that we are so word-sensitive, but I think revolutionary is the right word for what you're describing. So in terms of transformation, in the book, you're exploring how AI could reshape the patient-physician journey. What do you think is the most misunderstood aspect of that transformation today?
Dr. Barry Chaiken: My experience in working with technology has shown me that people like to get excited about that new bright, shiny object.
Christopher Hutchins: Right.
Dr. Barry Chaiken: And I understand we all love new things, right? We love a new pair of sneakers, we love a new technology we get to work with, but we always have to be focused on the objectives and the outcome. There's a lot of talk today about using clinical artificial intelligence to achieve a particular goal or objective. I think that has tremendous potential. But the issue here is we have a lot of work to do to achieve that. I think there's a lot we can do to further humanize how we deliver healthcare using artificial intelligence by making the interaction with the caregiver, which is not just the physician, but the nurses, physical therapists, the people who do the blood draw, the people who register you at the clinic, more personal. And the way it can be done is AI can do two things. One, it can prepare the caregiver so they can have a more personal, humanistic interaction with that person by giving that caregiver information before they see that patient. And secondarily, the interaction that the patient has with the system, even if it's automated, can be personalized. Before you have your surgery, for example, the instructions for the things you need to do to prepare yourself for that surgery can be personalized to who you are, to the language that you speak, to the level of education that you have, to the type of job that you actually do. And it can adjust its recommendations and speak to you in a language that is more motivating and comforting for you. In the same way, the interaction between the physician and the patient can be made more humanized by the information given to that physician, but also by the communication, written, verbal, and otherwise, to that patient, personalized around who they are, what their current diseases may be, what their challenges may be, their family situation. These are things that we cannot do as humans. It's just too much to cover.
But AI can actually help us do that.
Christopher Hutchins: Yeah, I love that. You bring out a really important thing that we could easily lose sight of. The patient experience obviously is one of the most important things we can look at. But it's not only a physician they are interacting with. A bad experience at check-in can be just as much of a problem for the patient as anything else. So focusing on the people whose services are essentially wrapped around that patient experience, there are just a lot of opportunities in there. Let's talk about what practical solutions might be in the near term. What AI use cases do you think health systems should be focused on? And maybe talk a little bit about one or two that you believe are overhyped.
Dr. Barry Chaiken: The overhyped ones are really easy. So let's talk about some of the practical ones. We know a lot about physician burnout. We know about nurse burnout and other caregiver burnout because of the documentation. So let's go back to where that came from. We created these electronic health records. Why were they created? To help patients? No. They were not developed to document patient care. They were developed to make it easier for organizations to document care for the purpose of receiving reimbursement. We have a reimbursement system in the United States that is dependent upon that documentation. I'm not going to talk about the pros and cons of that. There are many arguments on all sides, and all of them are valid. I'm just saying the EHR was developed for documentation. Why was it done that way? Very simple. Who would buy a system that didn't optimize the documentation for reimbursement? Nobody. So what we tried to do through meaningful use and other government measures was make it focused on the healthcare and the outcomes of that patient. But the reality is that we live in a system where, whether you're a for-profit or not-for-profit organization, we survive by generating the proper revenue for our organization to keep it going. And to be fair, that includes generating revenue to pay for people who can't pay for the care. So that's part of the process. It's not just generating revenue for the benefit of senior people or a board or something like that. It's also about having surplus money to take care of those who can't afford care, which is provided essentially as free care. So that's where the revenue piece is. I think we are never going to get rid of that requirement unless we completely change our healthcare system. So we need to find a way: how can we decrease that burden?
The way we can do it is we can use AI, through ambient listening and other processes, to make it so the physician can focus on the patient. Why do I think this is really important? It's not just the time that the physician can spend with the patient, looking them in the eyes, examining them in various areas, the communication that occurs in transferring the therapeutic plan or instructions for the next week or two to that patient. It's also the idea that when you document the care, we know that ambient listening AI can, when properly implemented, document the care as well as, if not better than, the actual caregiver. Now, think of the possibilities that can come from that. The note that goes to the patient can be modified and written in a way that that patient can understand and that is meaningful to them. It's not just seeing the physician's note, but seeing a customized summary of their visit. If there is a referral to a specialist, the information in that referral could be prioritized based upon the specialty of the referred physician, so that the most important details about that patient are not buried somewhere in the note, but are right at the top, because we know a surgeon needs to know X, Y, Z about the patient, but less about some other things. For example, it'd be very important to know certain labs on a patient who's undergoing, say, a hip replacement, but maybe their cholesterol level isn't as important as their clotting factors or their hemoglobin. We can make sure those factors are right up front and seen first. That also impacts care and improves the quality of care while at the same time addressing the issue of physician burnout.
Christopher Hutchins: I love that. Are there things that you think we could do specifically to really lean in on what you're saying, make sure that the AI is supporting rather than replacing the human connection in medicine? Because I think that's an area where it's probably the most uncomfortable for us across the industry as we're thinking about how do we use AI and do it in a way that's meaningful.
Dr. Barry Chaiken: Well, that goes back to the idea of clinical AI. Now, we can do great things with AI in terms of drug discovery, clinical trials, and I can go into details on some of those. But the challenge in using clinical AI is in the idea of using it as a therapeutic or diagnostic tool. The problem here is that if you do not set up the workflow correctly and you go ahead and implement that AI, you have the risk of getting into automation bias, where the physician sees the AI recommendation, but they're busy, they're distracted. Oh, by the way, physicians, nurses, and other caregivers are human beings. They are going to get distracted. They are going to miss things. We all miss things and make errors. If you have automation bias, they'll just go okay, okay, okay. We know that AI can hallucinate, can make things up, provide misinformation. Without that human check, we have a problem. Now, how do we address that? Well, we could create AI that looks at the output from the first AI and monitors it. And there are other surveillance tools we need to put in place. So it's actually multifaceted. First, let's make sure the workflow is correct, so the physician is part of the process and we try our best in the design to avoid automation bias. Second, let's have some type of tool, whether it's AI or AI plus some other additional algorithm, that works to check the results from the AI and the physician's plan. And the third thing is, let's set up a surveillance tool. Let's monitor how we're doing with these things on a regular basis and look for things that are out of line and may be wrong. That way we can work to correct them, correct them in terms of the workflow processes and correct them in terms of the artificial intelligence.
Christopher Hutchins: That brings up an interesting concept, because I think when we talk about things like surveillance, that word probably has negative connotations that come to people's minds. It's big brotherish. You're going to monitor me and babysit everything that I do. I went to medical school. I know what I'm doing, and your technology is going to be looking over my shoulder. Is that what you're telling me? I think that's an interesting conversation, because this is a bit different from a cultural adjustment standpoint for anybody who's going to be measured this way, but I think it's incredibly important for the reasons that you said. Human nature is such that eventually, no matter what our intentions are, we can get comfortable and complacent over time, because 99% of the time it's right, or the perception we have is that it's always right. I don't think I have to worry about it anymore. So I think about the ability to measure this way: if Dr. Chaiken is spending on average three minutes reviewing the post-visit summary, and this Hutchins guy is barely spending 30 seconds, do we have something that flags that? Someone needs to come and talk to that Hutchins guy, because he's clearly not being attentive to it and he's putting himself and his organization at risk. It seems like a little bit more than people probably are accustomed to thinking about, but I think it's really important for the reasons that you said. Human nature is human nature. I don't know how we short-circuit that.
Dr. Barry Chaiken: Well, Chris, maybe that Hutchins guy knows something that the Chaiken guy doesn't know about, and the Hutchins guy's spending 30 seconds instead of the three minutes, and it's not only much more efficient, but he also then has extra time to spend with that patient. We don't know whether the Hutchins guy or the Chaiken guy is doing the right thing. But let me give you this model. If you go back 40 years, the chance of you dying in an airplane crash was 10, 20, 50, 100 times greater than it is today. Why is that? The reason is that the pilots who initially said, hey, I'm the captain, you can't tell me what to do, learned over time that is the wrong model. You have to have crew resource management, which means the captain, the co-pilot, and other people on the aircraft all have a right to intervene. Now, I'll give you an example of what we do in healthcare. We have a checklist in the OR. And anybody in the OR who thinks that something is not going correctly can stop the beginning of the surgery and ask them to recheck it or to question it. That's the mindset we have to have in healthcare, which is, if I'm looking at how you're doing, I'm not looking to punish you. Now, I'll talk about that in a second. We're not looking to punish you. What we're looking to do is to make sure that you're within the parameters of what we expect. And oh, by the way, we might discover something that you're doing that we should have everybody else do, because it's better. Unfortunately, how surveillance is used today, particularly in for-profit, hedge-fund-owned, venture-fund organizations, it's like, it's taking you, nurse, four minutes to spend the time on the phone with the patient. You've got to cut it down to three minutes and 45 seconds. If any of you know a nurse, if any of you are a nurse, you know that is not the way you work. And doctors don't want to work that way either. The most important person in the room when you're seeing a patient is the patient.
You went to medical school, nursing school, PT school, or you joined healthcare as an intake person with no medical training, but you did it because you're committed to that cause. That's what's most important. We can make it more efficient within the humanistic framework using humans and get them to do the right thing, but we're not here to punish them for time. We're here to identify the best ways to do things and encourage them to do the best for their patients.
Christopher Hutchins: Thank you for explaining that. I think it's just a really important thing that we have to get people to understand. If we don't put these in place as we're starting to deploy, we're definitely going to go in a direction that's very dangerous. Thank you for that. I want to get into some things that are maybe a little bit more difficult, well, for me at least, still more difficult to wrap my head around in terms of the types of use cases we can be comfortable with. But you write about predictive diagnostics, clinical decision support, and workflow automation. Which of these will be the most disruptive to current medical practice, do you think?
Dr. Barry Chaiken: We've evolved in using electronic health records. I think that we would be able to utilize these tools in ways that could be incredibly beneficial to patients. But what we need to do is we have to involve the clinical people in the design, at least of the workflow and the output and the surveillance tools. We have to gain their trust in utilizing them. And let's not forget about the patient. They should have a voice in what they want, how they want to use the AI in terms of receiving messages and summaries and such. It has to be a holistic approach to understand this piece. And let's not decide what we're going to do before we've collected all the information from everybody. And the other thing is, if you want to fail in a hospital, what you should do is implement something without asking the nurses what you should do. Okay? That's what you need to do. So don't do that. Involve everyone, include everyone, understand it, and then implement it and tell people what you're trying to accomplish.
Christopher Hutchins: Where can listeners find your book, follow your work, or invite you to speak further about this important topic? We've only scratched the surface, but I know our listeners have gotten a ton of really great insight. And I think you're going to need to tell them right where they can get your book, because I think they're going to be excited to read it.
Dr. Barry Chaiken: Well, you can find everything about me at BarryChaiken.com. And in there you can have some fun. Not only can you read about my book, you can order a deluxe copy there that I'll sign. And if you want, I'll put a little message in and mail it out to you. It's also available on Amazon and Barnes and Noble, but obviously I can't sign those. The second thing is, I created a little chatbot, actually two different types of chatbots. One of them you can text with; it appears on the homepage as well as in a little menu. You can go to that and ask me questions. But what I specifically did is, if you ask me what the weather is going to be in your town next week, it'll specifically say that it's not in my database and that I've been instructed not to hallucinate. So I've made a point of doing that. The second one is one where you can actually, quote unquote, call me. You hit the little call button, ask the same question, and it'll respond back to you in my cloned voice. So it's just a fun little way of how to use AI. And I have a lot of information on my website, BarryChaiken.com, but of course feel free to reach out to me using that site. And please connect with me on LinkedIn. It's BarryChaiken.com/LN, and that'll link you up to LinkedIn and my page, and we can connect.