The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
Redefining the Patient-Physician Journey with AI: Dr. Barry Chaiken on Trust, Interoperability, and the Future of Healthcare 2050
Healthcare AI governance, algorithmic bias, and the future of the patient-physician relationship converge in this episode of The Signal Room. Host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, continues his conversation with Dr. Barry Chaiken, a physician leader with more than 25 years of experience in clinical transformation, health IT, and public health. Dr. Chaiken has served as chief medical officer for multiple health tech companies, advised the federal government on pandemic preparedness, and is a past board chair of HIMSS. He is also the founder of DocsNetwork, a consultancy focused on clinical innovation and patient safety.
The conversation opens with a direct challenge to healthcare leaders: identify a goal before deploying AI, start small, execute, learn, and iterate. Dr. Chaiken warns against letting vendors obfuscate, noting that even Sam Altman, the co-founder and CEO of OpenAI, does not fully understand how AI works. The discussion turns to algorithmic bias and the diverging approaches of the United States and the European Union, with Dr. Chaiken raising a sobering point about deep fakes influencing approximately 160 elections worldwide.
A passionate plea for solving healthcare interoperability anchors the middle of the episode. Dr. Chaiken draws on the analogy of global ATM networks to illustrate how far behind healthcare data sharing remains, arguing that the data belongs to the patient and that EMR companies have a responsibility to fix this problem.
Looking toward 2050, Dr. Chaiken envisions AI-accelerated drug development through AlphaFold protein folding research, clinical trials conducted in silico using digital twins, and personalized patient engagement that accounts for real-world constraints like a single mother's medication schedule. His closing call to action is unmistakable: you are what makes AI valuable. Without human knowledge, experience, ethics, and values directing AI, it can hallucinate and cause harm. With those qualities guiding it, AI can do great things. Humans should control AI; AI should never control us.
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
Dr. Barry Chaiken is not just a physician leader. He's a strategist and a longtime advocate for meaningful healthcare change. With more than 25 years of experience in clinical transformation, health IT, and public health, he's worked across payer, provider, and policy domains. Barry served as chief medical officer for multiple health tech companies, advised the federal government on pandemic preparedness, and is a past board chair of HIMSS. He is also the founder of DocsNetwork, a consultancy focused on clinical innovation and patient safety. But beyond his resume, Barry brings a steady voice on the ethical use of technology, the culture of care, and the human moments that often get lost in the machinery of healthcare systems. In this conversation, we talk about what it takes to lead with integrity, how to challenge orthodoxy in times of change, and why he still believes healthcare is a calling. Dr. Barry Chaiken, welcome to the Signal Room.
Dr. Barry Chaiken:Thank you so much, Chris. I really appreciate that introduction.
Christopher Hutchins:Well, it's an honor to have you. I've known you for a number of years, and it's exciting to be able to launch the Signal Room with a good friend and a visionary at the same time. So again, thank you and welcome. We've touched a little bit on what I would consider being AI ready. This is a big cultural adjustment that an organization will have to go through. So for hospital and health system leaders tuning in, what's the first organizational change they should make to become AI ready?
Dr. Barry Chaiken:Identify a goal that you want to achieve, make sure you don't forget what that goal is, and explore AI technology alongside the other changes and adaptations you need to make to achieve that goal. Do not say, oh, I'm going to have an AI plan and implement AI. It has to solve a particular problem, or it has to improve your quality of care, your capacity, whatever it is, in your organization. That's the most important thing: understand what the goal is and deliver on it. I'll tell you, I think we all kind of know this. Big projects have a high probability of failing. So why choose a big project? Choose a small project, execute on it, learn from it, make it a success, and then do a second one using what you learned from the first, and so on. That said, unless you're a large academic medical center, you probably don't have the resources to start developing your own AI. You're going to have to buy it from a technology vendor. If you're going to do that, make sure that, to the best of your ability, you test every assumption and every suggestion they make, and ask them, as if you were a three- or four-year-old, why, why, why? It's the why and the how. If they cannot make you understand what they're doing for you, that means they don't understand it themselves. You need to be able to understand everything they tell you, get an answer to every question, and not let them obfuscate. And I'm not saying that the IT organizations are going to do this. All I'm saying is AI is so complicated that it's difficult even for the experts, the Altmans of the world, to understand how it works. And he's the co-founder and CEO of OpenAI, the maker of ChatGPT. If he doesn't understand it at that level of detail, how can you expect to? But what we can do is take the proper steps to understand it as best we can and get the best explanations we can. That decreases the likelihood that we're going to have a problem.
Christopher Hutchins:So essentially, we don't transform just because transformation is the end goal. It's still really all about what healthcare starts and ends with: human interaction and taking care of people. One of the things I have a great deal of respect for is people like you who took the Hippocratic oath: first, do no harm. It's not about the technology we can bring to bear; it's how the technology can be used to support the mission. And I love the clarity you speak with on that. It leans into something else that's a little challenging to work through. As we're talking about readiness, there are a bunch of things we have to look at. Does it check this box, this box, and this box? It gets into some of the more sensitive areas. So, there's growing concern around algorithmic bias, as you mentioned. How do you see governance evolving to address equity and ethics with AI?
Dr. Barry Chaiken:Well, there's a divide between what the United States wants to do and what, say, Europe wants to do. The recent announcement from the administration around AI, I think it was about a week or so ago, was focused solely on going as fast as you possibly can and not worrying too much about the consequences. About 20% of the people who live and breathe AI, who think about it all the time, believe that artificial general intelligence will destroy humanity. So I'm not going to tell your audience how they should think about this problem. I'm just going to say the United States has a particular perspective of going as fast as we can and worrying about it later. The European Union, on the other hand, understands the need to compete with China, for example, in AI technology, but also understands some of those risks and puts a few more constraints on it. My own opinion is that AI run amok is quite dangerous in so many ways. Politically, there were about 160 elections around the world last year, and almost all of them were influenced by deep fakes, artificial intelligence, fake images, fake videos, and such. I'm not sure that we as humans are going to be able to discern what's real and what's fake. And that scares the heck out of me, because if we don't trust each other, which is the foundation, the bedrock, of society, and we get to a place where all we do is disagree and believe fake videos, society breaks down. Let me share one quick thing with you. Sometime in the last couple of years, you've been at a coffee shop, a shopping center, a grocery store, and you needed to leave your bag in the shopping cart or at the table as you went to get a stir for your coffee or something you forgot to pick up on the cereal aisle for your child. And you never wondered for one second whether the person you asked to watch your bag or your cart was going to take something from you.
Never. We inherently, as human beings, trust people. That's what makes us human. And AI risks eroding that. If we lose that trust, then we lose our humanity, and in a sense, we lose our society. So let's be careful about that and about how we use artificial intelligence.
Christopher Hutchins:Yeah, that's a huge, huge challenge. Having countries make different decisions about how they're going to deploy it and the pace they're going to go at puts us in a really interesting position, because healthcare in the United States impacts not only citizens of our country but also people who visit here. And under things like GDPR in Europe, the patient has some level of ownership of their data and can actually say yea or nay in terms of where they want it to go and how they want it to be used. So what role should patients actually have in AI development and deployment? How are we going to earn their trust and invite their voice into the design process? Because it seems like that's something that would be the right thing for us to do.
Dr. Barry Chaiken:Having worked for tech companies, I think it's going to have to happen at the provider organization level. You'll probably see a bit of it also in pharma because they have the resources to do that. I don't think the tech companies are connected in a way that lets them involve patients in the development of their products. I see their clients as being responsible for that. Even if you're a small community hospital, it doesn't require a lot of resources to bring some people in and make them part of the planning process. I think everyone who lives in a community with a community hospital, particularly in suburban or rural areas, is very much invested in that facility. And therefore, I think they'll participate to help the organization build the right tools, choose the right tools, and make the right decisions about using the AI. So just involve them. If anything, you're bringing your community together in an institution that is incredibly important to their health and survival.
Christopher Hutchins:I think that's another really great call-out. It's a different type of responsibility that we need to own in healthcare. A patient needs to understand enough to make an informed decision about whether they're comfortable. Not everyone's going to want to disclose that they're using AI. In healthcare, because of these ethical standards and the commitment you make as a physician, there's something we have to do there. And with that communication and education, we've got to come up with simple ways to make it happen without adding yet another layer of complexity to what a physician actually needs to do in the encounter, because a patient is going to be much more comfortable trusting their physician's explanation than one from, say, the person at the front desk who doesn't really understand it but is trying to make sure the patient does. Those are things we have to think through together, from an administrative and executive standpoint, a clinician standpoint, and even for our nurses: where does that messaging happen, and how?
Dr. Barry Chaiken:When you are in an organization, particularly one like healthcare, you have to think of it as a community. And if you're part of senior leadership, you have to make the people who work for you feel part of that community. I've always believed that much of people's self-worth during their working years comes from the work they do. Sure, they like getting their salary and their bonuses and all of that. And most people are most concerned with whether they have enough to put food on the table, keep a roof over their head, and take care of their children. That's the way they look at it. But they also want to feel involved. If you're senior leadership, make the people who work there feel important and seen, and you will get a lot more happy employees, which then means happy customers, or happy patients. Being connected to the people who eventually report up to you is incredibly important. With AI, you want to emphasize the concept of trust. Well, do things to make sure the people in your organization feel important and wanted. And it's not about having a nice holiday party. It's not so much about giving out a bonus increase. That's all nice, but those things fade really quickly. It's the day-to-day feeling of I'm involved, I'm contributing, I'm making my organization successful. And on top of that, I'm serving my community, my neighbors.
Christopher Hutchins:I love that. That's really probably one of the biggest things that our tech partners can understand about what we need them to help us do. They're part of our communities too. So are the investors who are wanting to help us really develop the right solutions. If all of us are about that community and taking care of the people that we are in that community with, I think we have a lot of opportunities and I'm excited about the future if we can get this right. So looking ahead to 2050, back to the concept of your book, what's your most optimistic prediction of how AI will improve care and what worries you the most?
Dr. Barry Chaiken:Okay, let's talk about the positive first. I think AI is going to do amazing things at discovering new ways to treat patients. The acceleration of pharmaceutical development is going to be tremendous, and that's magical. Look at AlphaFold 2, AlphaFold 3, the whole AlphaFold program. Understanding how proteins fold doesn't sound important, but everything that happens in biology revolves around proteins, and being able to predict how proteins interact with each other, how molecules interact with each other, is an enormous advantage. It can quickly accelerate drug development. And then, of course, clinical trials: identifying patients who have not been treated yet to enter a clinical trial, where today, once you've been treated, you may no longer be eligible for a trial. That gets people involved. Reaching outside the academic medical centers to identify those people and get them involved. We can use AI to do that, and to redesign the model, to do a clinical trial in silico, meaning in a computer using digital people. They'll take whoever you are, create you digitally, understand your genetics and all the other things that make you who you are, and run the clinical trial in silico to see how the molecules and the proteins will interact in biology. That is obviously going to greatly accelerate all this work and development. And then, of course, there's the research we can do on how to do surgeries, how to take care of patients, how to do physical therapy, and more; all of that is going to be great. I think if we deploy AI properly, we'll have a much better relationship between the caregiver and the patient, and between the patient and the organization. Today it's extremely expensive to hire people to be on the phone to interact with you, to remind you to come in for your appointment, to do the proper things before your surgery or for post-surgical care, to take your medications.
So much of that can now be personalized in ways we couldn't before. We can decrease the variation in how care is delivered using AI. Not everyone should be treated exactly the same way, but much of today's variation isn't justified, and we can reduce it while still personalizing care in a simple way. A busy single mother with three children is unlikely to be able to take a medication four times a day, even if that's the best treatment for her. So the AI can understand that and say, I think with Sally, she should take this pill in the morning and at night. That'll fit into her schedule. It's not the ideal regimen, but she's more likely to take the medication, and she'll get some benefit from it. So that's really important. And the organization being able to personalize its messaging to patients is great for motivating them to follow their regimen, to come in, to feel seen, because AI can personalize at a scale where we simply don't have enough people to do it. What's the downside? My biggest fear is using AI solely to generate revenue without worrying about the outcomes for those patients. The second is making mistakes and clinical errors through automation bias, with poorly designed workflows, or with AI that is not properly trained for the population being seen by the organization using it. For example, New York City has a very diverse population, with people from all over the world. If you trained the AI models on people from Oklahoma or Indiana, they would not apply to the people in New York. And if you took the training that happened in New York and applied it to Oklahoma, it wouldn't work there either. So you have to make sure the AI is specifically trained for your region, and if you don't do that, you're going to have bad outcomes.
Christopher Hutchins:That's a really important thing as well. I think we're definitely quick to trust. And I think one of the things that concerns me most is the bias, not from the standpoint of somebody doing something on purpose. It's that inadvertent miss that could occur. Whether it's making the wrong assumption, like we would do between Oklahoma and New York City, there's different needs and different dynamics around the communities, a lot of different factors that could make that very different. But there's also what's missing. And that's another thing that is a concern because I know, as a physician, you don't know what you don't know about me as your patient, and you don't know if what you don't know is important. And AI is not going to solve that for us, so we definitely need to be aware of it. I think it can maybe reduce the odds of missing information, but it's not going to completely address that at all. So it's a really great call out.
Dr. Barry Chaiken:If I may, I have one request for everyone. Can we please fix the interoperability problem? Please. It should really be simple. All of you who've traveled around the world know that you can go to an ATM, stick your card in, and get local currency. We can do that. Now, I understand that healthcare is much more complicated: our systems are more complicated, the nomenclature we use is more complicated. But there's no reason that a physician at one hospital in Boston can't see the MRI done at another Boston hospital less than a mile away just because the two are on different systems. An MRI is in a standard format that should easily be accessed and viewed. Patients should be able to download their MRI and, if need be, put it on a disk, on a thumb drive, on their phone, and deliver it to the other physician. This is a real problem: it's expensive, we duplicate tests, we misdiagnose patients because we're missing information, and we also don't have a data set we can use to train the AI. Can't we just have one standard data set and make that interoperability happen? So I challenge the EMR companies: can you please fix this? It should not be this hard. And oh, by the way, the data belongs to the patient, not to the hospital and not to the tech company. It belongs to the patient, and they share it with you so you can do good things with it, so you can develop better systems. It's okay that you're for profit. It's okay that you make money. That's fine. But recognize it's the patient who's making the investment to help you do that. So your responsibility is to give back. How do you give back? Produce great tools, and let's fix this interoperability problem.
Christopher Hutchins:I don't know if that could ever be said more clearly. That is a huge issue, and I think anyone who's been working, particularly in the IT or clinical spaces over the last decade, has seen so many difficulties arise from not being able to get access to the information that you need. This is about saving lives, after all. So, yeah, I think we need to figure this out.
Dr. Barry Chaiken:And what about saving money? If we're able to save money, we can do more with it. We can be more efficient, right? We have to get rid of the friction that exists in healthcare. And that's fine, everybody can make their profit. I'm okay with all of that. I'm a capitalist. That's good. But let's do better. Doctors try to do better, nurses try to do better. The janitor in the hospital works hard to keep the hospital pristine so that patients don't get hospital-acquired infections. So let's all try to do a little bit better with this. And I think we can. There are ways for us to do this.
Christopher Hutchins:Wholeheartedly agree. I'm with you. And as long as I get the chance to collaborate with you, I'm going to be working on my end just as hard as I can to help you solve this. I know it's hugely important, and if we can leave a legacy of something valuable, that would be one massive contribution and I think we could get a lot of people aligned to help us really push on that. So I hope we get the chance to really do some meaningful work there together.
Dr. Barry Chaiken: That would be great. I was at a HIMSS conference where the keynote speaker was Biz Stone, who co-founded Twitter back in the day. And during his keynote speech, he said companies have only three purposes: make money, do good, have fun. Let's think about it. You can make a lot of money and be very successful while doing good and having fun. There are examples of those companies out there. And I think in healthcare, we should do the same. Let's do good, let's have fun, and let's make money. We can do that.
Christopher Hutchins:Well said. I love it. So if you could give a piece of advice to health system CEOs and boards that might be reading your book, what would it be?
Dr. Barry Chaiken:When we think of AI, we think it's smart, but in reality, it's dumb. It doesn't know anything, has no opinions; it's all statistics and probabilities. You know who's smart? You know who's intelligent? You know who's knowledgeable? The person at the desk using the AI. So my call to action for all of you is to recognize that you're what makes AI valuable, and your knowledge, your experience, your dedication, your goals, your objectives, your morals, your ethics are what should constrain and direct AI and how it is used. So never think, I'm going to use AI and it'll solve my problem. Think, I'm going to use a tool like AI with my knowledge, intelligence, strategies, and values to solve the problem. Without you, AI can do nothing. It can hallucinate, it can spread misinformation, it can hurt people. But with you, it can do great things. So never forget: you're the center of what can make AI great.
Christopher Hutchins:I don't think there could be a better public service announcement than what you just gave. It's such an important message. I'm so glad that you're saying this, and I cannot wait for our listeners to hear it. And I'm going to go out on a limb: I think they're going to read a lot more about it too. So as we're wrapping up, I want to continue just a little bit more on your Future of Healthcare 2050. What's one question that you hope every reader walks away asking themselves after finishing this book?
Dr. Barry Chaiken:What can I do to ensure that the humanity invested in the word trust gets applied to how AI is utilized in my professional and personal life? That is most important. Let's preserve the trust and understand that humans should control AI, but AI should never control us.
Christopher Hutchins:Well, as we begin to wrap here, Dr. Chaiken, this show is all about signals versus noise in healthcare innovation. What's the clearest signal that you're hearing right now about where AI in healthcare is headed?
Dr. Barry Chaiken:That's a difficult question. Of course, I hear way too much noise. Crazy ideas: it's going to do A, B, C, it's going to replace doctors, we're going to have a hologram of a physician who takes care of you, all of that. That's so much of the noise we hear. And I think the reason there's so much noise is that people don't really understand AI and how it should be used. Last week I spent time with two of my wonderful longtime colleagues. One teaches management at a local university; the other teaches environmental engineering at a local university. I've been a guest lecturer in both of their classes. I speak to them about AI, about things I knew and understood a year ago that they're only now encountering. They tell me about their faculty deciding how students should be using AI. And my colleague and I take our long walks and go back and forth on how we should use it, evolving and building on each other's ideas with the AI. So there's so much noise because most people don't understand it. What's the signal? That's a hard one. The signal is that the AI itself is moving incredibly fast, irrespective of everything else. That is absolutely happening, and it is happening at a pace that keeps accelerating, so fast that even the people who work in it can't keep up. So let's ignore the noise, focus on the signal, and learn, so we can understand what the signal is telling us.
Christopher Hutchins:Where can listeners find your book and follow your work? We'd love to invite you back to speak further about this important topic. We've only scratched the surface, but I know our listeners have gotten a ton of really great insight. So tell them right where they can get your book, because I think they're going to be excited to read it.
Dr. Barry Chaiken:Well, you can find everything about me at BarryChaiken.com. And there you can have some fun. Not only can you read about my book, you can order a deluxe signed copy, and if you want a little message in it, I'll write one and mail it out to you. It's also available on Amazon and Barnes and Noble, but obviously I can't sign those. The second thing is, I created two different little chatbots. One of them you can text with; it appears on the home page as well as in a little menu bar. You can go to it and ask me questions. But here's something I did specifically: if you ask it what the weather is going to be in your town next week, it will tell you that's not in its database and that it's been instructed not to hallucinate. I made a point of doing that. The second one lets you, quote unquote, call me. You hit the little call button, ask the same kind of question, and it responds in my cloned voice. It's just a fun way of showing how to use AI. I have a lot of information on my website, BarryChaiken.com, and of course feel free to reach out to me through the site. And please connect with me on LinkedIn: it's BarryChaiken.com/LN, and that'll link you to my LinkedIn page, and we can connect.