The Signal Room | AI Strategy, Ethical AI & Regulation
Healthcare AI leadership, ethics, and LLM strategy—hosted by Chris Hutchins.
The Signal Room explores how healthcare leaders, data executives, and innovators navigate AI readiness, governance, and real-world implementation. Through authentic conversations, the show surfaces the signals that matter at the intersection of healthcare ethics, large language models (LLMs), and executive decision-making.
How Healthcare AI Innovation Redefines the Patient and Physician Journey (Part 2)
Healthcare AI innovation is redefining the patient and physician relationship, raising new questions about clinical trust, governance, and care delivery.
In Part 2 of this conversation on The Signal Room, host Chris Hutchins continues the discussion with Dr. Barry Chaiken and Ratnadeep Bhattacharjee on the evolving intersection of healthcare AI, clinical workflows, and the patient experience.
Building on the automation bias and workflow design themes explored in Part 1, this episode examines how AI innovation is redefining what it means to deliver care. The conversation explores how data-driven tools are changing diagnostic pathways, how patients are becoming more informed participants in their care through AI-powered access to health information, and what physicians need from AI systems to maintain trust and clinical autonomy.
The discussion also addresses the governance structures that must be in place before AI-driven workflows can scale safely, including data quality requirements, explainability standards, and the organizational readiness necessary to support responsible adoption.
For leaders navigating AI strategy in healthcare, this episode offers a grounded perspective on what responsible implementation requires when the stakes are patient outcomes.
Guests: Dr. Barry Chaiken and Ratnadeep Bhattacharjee
SPEAKER_01Dr. Barry Chaiken is not just a physician leader. He's a strategist and a longtime advocate for meaningful healthcare change. With more than 25 years of experience in clinical transformation, health IT, and public health, he's worked across payer, provider, and policy domains. Barry has served as chief medical officer at multiple health tech companies, advised the federal government on pandemic preparedness, and is a past board chair of HIMSS. He is also the founder of DocsNetwork, a consultancy focused on clinical innovation and patient safety. But beyond his resume, Barry brings a steady voice on the ethical use of technology, the culture of care, and the human moments that often get lost in the machinery of healthcare systems. In this conversation, we talk about what it takes to lead with integrity, how to challenge orthodoxy in times of change, and why he still believes healthcare is a calling. Dr. Barry Chaiken, welcome to the Signal Room.
SPEAKER_02Thank you so much, Chris. I really appreciate that introduction.
SPEAKER_01Well, it's an honor to have you. I've known you for a number of years, and it's exciting to be able to launch the Signal Room with a good friend and a visionary at the same time. So again, thank you and welcome. We've touched a little bit on what I would consider being AI ready. This is a big cultural adjustment that an organization will have to go through. So for the hospital and health system leaders who are tuning in, what's the first organizational change they should make to become AI ready?
SPEAKER_02Identify a goal that you want to achieve, make sure you don't lose sight of that goal, and explore AI technology along with the other changes and adaptations you need to make to achieve it. Do not say, "I'm gonna have an AI plan and implement AI." It has to solve a particular problem, or it has to improve your quality of care, your capacity, whatever matters in your organization. That's the most important thing: understand what the goal is and deliver the goal. And I'll tell you, I think we all kind of know this: big projects have a high probability of failing. So why choose a big project? Choose a small project, execute on it, learn from it, make it a success, and then do a second one using what you learned from the first, and go on and on. That said, unless you're a large academic medical center, you probably don't have the resources to start developing your own AI. You're gonna have to buy it from a technology vendor. If you're going to do that, make sure, to the best of your ability, that you test every assumption and every suggestion they make, and ask them, as if you were a three- or four-year-old: why, why, why? It's the why and the how. If they cannot make you understand what they're doing for you, that means they don't understand what they're doing for you. You need to be able to understand everything they tell you, they need to answer every question, and don't let them obfuscate. And I'm not saying the IT organizations are deliberately going to do this. All I'm saying is AI is so complicated that it's difficult even for the experts, the Sam Altmans of the world, to understand how it works. And he runs OpenAI, the company behind ChatGPT. If he doesn't understand it in that detail, how can you expect to? But what we can do is take the proper steps to understand it as best we can and get the best explanations we can. That decreases the likelihood that we're going to have a problem.
SPEAKER_01So essentially, we don't transform just because transformation is the end goal. It's still all about what healthcare starts and ends with: human interaction and taking care of people. And you know, one of the things I have a great deal of respect for is that people like you took the Hippocratic oath: first, do no harm. It's not about the technology we can bring to bear; it's how the technology can be used to support the mission. And I love the clarity you speak with on that. It leans into something else that's a little challenging to really work through. As we're talking about readiness, there are a bunch of things we have to look at: does it check this box, this box, and this box? And it gets into some of the more sensitive areas. So, there's growing concern around algorithmic bias in healthcare, as you mentioned. How do you see governance evolving to address equity and ethics with AI?
SPEAKER_02Well, there's a divide between what the United States wants to do and what, say, Europe wants to do. The recent announcement from the administration around AI, about a week or so ago, was focused solely on going as fast as you possibly can and not worrying too much about the consequences. About 20% of the people who live and breathe AI, who think about it all the time, believe that artificial general intelligence will destroy humanity. So I'm not gonna tell your audience how they should think about this problem. I'm just going to say the United States has a particular perspective: go as fast as we can, and worry about it later. The European Union, on the other hand, understands the need to compete with China, for example, in AI technology, but also understands some of those risks and puts a few more constraints on it. My own opinion is that AI run amok is quite dangerous in so many ways. Politically, there were about 160 elections around the world last year, and almost all of them were influenced by deepfakes: artificial intelligence, fake images, fake videos, and so on. I'm not sure that we as humans are going to be able to discern what's real and what's fake. And that scares the heck out of me, because if we don't trust each other, which is the foundation, the bedrock, of society, and we get to a place where all we do is disagree and believe fake videos, society breaks down. Let me share one quick thing with you. Sometime in the last couple of years, you've been at a coffee shop, a shopping center, or a grocery store, and you needed to leave your bag in the shopping cart or at the table as you went to get a stirrer for your coffee or something you forgot to pick up on the cereal aisle for your child.
And you never wondered for one second whether the person you asked to watch your bag or your cart was going to take something from you. Never. We inherently, as human beings, trust people. That's what makes us human. And AI has the risk of eroding that. If we lose that, then we lose our humanity, and in a sense, we lose our society. So let's be careful about that and about how we use artificial intelligence.
SPEAKER_01Yeah, that's a huge challenge. You know, having countries make different decisions about how they're going to deploy it and the pace they're going to go at puts us in a really interesting position, because healthcare in the United States is not only impacting citizens of our country; it's impacting people who visit here too. And with things like GDPR in Europe, the patient has some level of ownership of their data and can actually say yea or nay in terms of where they want it and how they want it used. So what role should patients actually have in AI development and deployment? I mean, how are we gonna earn their trust and invite their voice into the design process? Because it seems like that would be the right thing for us to do.
SPEAKER_02Having worked for tech companies, I think it's gonna have to happen at the provider organization level. You'll probably see a bit of it in pharma too, because they have the resources to do that. I don't think the tech companies are connected in a way that lets them involve patients in the development of their products; I see their clients as being responsible for that. Even if you're a small community hospital, it doesn't require a lot of resources to bring some people in and make them part of the planning process. I think everyone who lives in a community with a community hospital, particularly in suburban or rural areas, is deeply invested in that facility. And therefore, I think they'll participate to help the organization choose the right tools and make the right decisions on using the AI. So just involve them. And if anything, you're bringing your community together around an institution that is incredibly important to their health and survival.
SPEAKER_01That's another really great call-out. It's a different type of responsibility that we need to own in healthcare. A patient needs to understand enough to make an informed decision about whether they're comfortable. Not everyone's gonna want to disclose that they're using AI. But in healthcare, because of these ethical standards and the commitment you make as a physician, there's something we have to do there. And for that communication and education, we've got to come up with some really simple ways to make sure it happens without adding yet another layer of complexity to what a physician actually needs to do in that encounter, because a patient's gonna be much more comfortable trusting their physician's explanation than one from, say, the guy at the front desk who doesn't really understand it himself but is trying to make sure the patient does. Those are things we have to think through together, from an administrative and executive standpoint, a clinician standpoint, and even with our nurses: where does that messaging happen, and how?
SPEAKER_02When you are in an organization, particularly one like healthcare, you have to think of it as a community. And if you're part of senior leadership, you have to make the people who work for you feel part of that community. I've always believed that much of people's self-worth during their working years comes from the work they do. Sure, they like getting their salary and their bonuses and all of that, and most people are mostly concerned with: do I have enough to put food on the table, keep a roof over my head, take care of my children? That's the way they look at it. But they also want to feel involved. If you're senior leadership, you need to make the people who work there feel important and seen. You will get a lot more happy employees, which then means happy customers, or happy patients. Being connected to the people who eventually report up to you is incredibly important. With AI, you want to emphasize the concept of trust. Well, do things to make sure the people in your organization feel important and wanted. And it's not about having a nice holiday party. It's not so much about giving out a raise or a bonus. That's all nice, but those go away really quickly. It's the day-to-day feeling of: I'm involved, I'm contributing, I'm making my organization successful. And on top of that, I'm serving my community, my neighbors.
SPEAKER_01I love that. That's probably one of the biggest things our tech partners can understand about what we need them to help us do. They're part of our communities too, and so are the investors who want to help us develop the right solutions. If all of us are about that community and taking care of the people we share it with, I think we have a lot of opportunities, and I'm excited about the future if we can get this right. So, looking ahead to 2050, back to the concept of your book: what's your most optimistic prediction of how AI will improve care, and what worries you the most?
SPEAKER_02Okay, let's talk about the positive first. I think, first, AI is gonna do amazing things at discovering new ways to treat patients. The acceleration in pharmaceutical development is going to be tremendous, and that's magical: AlphaFold 2, AlphaFold 3. Look, understanding how proteins fold doesn't sound important, but everything that happens in biology is around proteins, and being able to predict how proteins interact with each other, how molecules interact with each other, is an enormous advantage. It can quickly accelerate drug development. And then, of course, clinical trials: identifying patients who have not been treated yet to enter into a clinical trial, where today, once you've been treated, you may no longer be eligible. That gets people involved: reaching outside of the academic medical centers to identify those people and bring them in. We can use AI to do that, and to redesign the model to run a clinical trial in silico, meaning in a computer using digital people. They'll take whoever you are, create you digitally, understand your genetics and all the other things that make you who you are, and run the trial that way to see how the molecules and the proteins will interact in your biology. That obviously is going to greatly accelerate all this work and development. And then, of course, there's the research we can do into how to do surgeries, how to take care of patients, how to do physical therapy, and more; all of that's gonna be great. I think if we deploy AI properly, we'll be able to have a much better relationship between the caregiver and the patient, and between the patient and the organization.
Today it's extremely expensive to hire people to be on the phone to interact with you, to remind you to come in for your appointment, to do the proper things before your surgery or for post-surgical care, to take your medications. So much of that can now be personalized in ways it couldn't before. We can also decrease the variation in how care is delivered using AI. Not everyone should be treated the same way, and we can reduce that variation and personalize care in a simple way. Take a busy, working single mother with three children: it's really hard for her to take a medication four times a day, even though that is the first-line treatment for her illness. The AI can understand that and say, for Sally, she should take this other pill, which is taken in the morning and at night. That'll fit into her schedule. It's not the best option, but she's more likely to take that medication, and she'll get some benefit from it. So that's really important. The organization's interactions can be personalized, and the messaging to those patients can motivate them to follow their regimen, to come into the organization, to feel seen, because AI can personalize in ways we don't have the people to do. What's the downside? My biggest fear is using AI solely to generate revenue without worrying about the outcomes for those patients.
And the second thing is making mistakes and clinical errors through automation bias, with poorly designed workflows, or with AI that is not properly trained for the population being seen in the organization using it. For example, New York City has a very diverse population. If you trained the AI models on people from Oklahoma or Indiana, they would not apply to the people in New York, who come from all over the world. And if you took a model trained in New York and applied it to Oklahoma, it wouldn't work there either. So you have to have the AI specifically trained for your region and your population. If you don't do that, you're gonna have bad outcomes.
SPEAKER_01That's a really important point as well. I think we're definitely quick to trust. And one of the things that concerns me most is bias not from the standpoint of somebody doing something on purpose, but the inadvertent miss that could occur. Whether it's making the wrong assumption, like we would between Oklahoma and New York City, where there are different needs and different dynamics in the communities, a lot of different factors could make things very, very different. But there's also what's missing, and that's another concern, because I've talked about this before: as a physician, you don't know what you don't know about me as your patient, and you don't know if what you don't know is important. AI is not gonna solve that for us, so we definitely need to be aware of it. I think it can maybe reduce the odds of missing information, but it's not gonna completely address that at all. So it's a really great call-out.
SPEAKER_02If I may, I have one request for everyone: can we please fix the interoperability problem? Please. That's just really, really simple. All of you who've ever traveled around the world know that you can go to an ATM, stick your card in, and get local currency. We can do that. Now, I understand that healthcare is much more complicated; our systems are more complicated, the nomenclature we use is more complicated. But there's no reason a physician at one hospital in Boston can't see the MRI done at another hospital in Boston less than a mile away just because the two hospitals are on different systems. An MRI is in a standard format that should easily be able to be accessed and viewed. Patients should be able to download their MRI and, if need be, put it on a disk, on a thumb drive, on their phone, and deliver it to the other physician. This is a real problem: it's expensive, we duplicate tests, we misdiagnose patients because we're missing information, and we also don't have a data set we can use to train the AI. Can't we just have one standard data set that we can use, and have that interoperability happen? So I challenge those EMR companies: can you please fix this? It should not be this hard. And oh, by the way, the data belongs to the patient, not to the hospital and not to the tech company. It belongs to the patient, and they share it with you so you can do good things with it, so you can develop better systems. It's okay that you're for-profit. It's okay that you make money. That's fine. But recognize it's the patient who's making the investment to help you do that. So your responsibility is to give back. How do you give back? Produce great tools, and let's fix this interoperability problem.
SPEAKER_01I don't know if that could ever be said more clearly. That is a huge issue, and anyone who's been working in the IT or clinical spaces over the last decade has seen so many difficulties arise from not being able to get access to the information you need. This is about saving lives, after all. So yeah, I think we need to figure this out.
SPEAKER_02And what about saving money? If we're able to save money, we can do more with it. We can be more efficient, right? We have to get rid of the friction that exists in healthcare. And that's fine; everybody can make their profit. I'm okay with all of that. I'm a capitalist, after all. That's good. But let's do better. Doctors try to do better, nurses try to do better. The janitor in the hospital works hard to keep the hospital pristine so that patients won't get sick from hospital-acquired infections. So let's all try to do a little bit better with this. I think we can. There are ways for us to do this.
SPEAKER_01Wholeheartedly agree. I'm with you. And as long as I get the chance to collaborate with you, I'm gonna be working on my end just as hard as I can to help you solve this. I know it's hugely important, and if we can leave a legacy of something valuable, that would be one massive contribution. I think we could get a lot of people aligned to really push on that. So I hope we get the chance to do some meaningful work there together.
SPEAKER_02That would be great. I was at a HIMSS conference, and the keynote speaker was Biz Stone, who started Twitter back in the day. During his keynote speech, he said companies have only three purposes: make money, do good, have fun. And let's think about that. You can make a lot of money and be very successful while doing good and having fun, and there are examples of those companies out there. I think in healthcare we can and should do the same. Let's do good, let's have fun, and let's make money. We can do that.
SPEAKER_01Well said. I love it. So if you could give one piece of advice to the health system CEOs and boards who might be reading your book, what would it be?
SPEAKER_02When we think of AI, we think it's smart, but in reality, it's dumb. It doesn't know anything; it has no opinions; it's all statistics and probabilities. You know who's smart? You know who's intelligent, who's knowledgeable? The person at the desk using the AI. So my call to action for all of you is to recognize that you're what makes AI valuable, and that your knowledge, your experience, your dedication, your goals, your objectives, your morals, and your ethics are what should constrain and direct AI and how it is used. Never think, "I'm gonna use AI and it'll solve my problem." Think, "I'm gonna use a tool like AI, with my knowledge, intelligence, strategies, and values, to solve the problem." Without you, AI can do nothing. It can hallucinate, it can spread misinformation, it can hurt people. But with you, it can do great things. So never forget: you're the center of what can make AI great.
SPEAKER_01I don't think there could be a better public service announcement than what you just gave. It's such an important message, and I'm so glad you're saying it. I cannot wait for our listeners to hear this, and I'm gonna go out on a limb: I think they're gonna read a lot more about it too. So as we're wrapping up, I want to continue just a little bit more on your vision of the future of healthcare in 2050. What's one question you hope every reader walks away asking themselves after finishing this book?
SPEAKER_02What can I do to ensure that the humanity invested in the word trust gets applied to how AI is utilized in my professional and personal life? That is most important. Let's preserve the trust and understand that humans should control AI, but AI should never control us.
SPEAKER_01Well, as we begin to wrap up here, Dr. Chaiken, this show is all about signals versus noise in healthcare innovation. What's the clearest signal you're hearing right now about where AI in healthcare is headed?
SPEAKER_02That's a difficult question. Of course, I hear way too much noise. Crazy ideas: it's going to do A, B, C; it's going to replace doctors; we're going to have a hologram of a physician who takes care of you; all of that. That's so much of the noise we hear. And I think the reason you hear so much noise is that people don't really understand AI and how it should be used. Last week I spent time with two of my wonderful longtime colleagues. One teaches management at a local university; the other teaches environmental engineering at a local university. I've been a guest lecturer in both of their classes. I speak to them about AI, about things that I knew and understood a year ago that they don't know about. They tell me about their faculty deciding how students should be using AI. And a colleague and I take our long walks and go back and forth on how we should use it, trying to evolve and develop and build on each other's ideas using the AI. So there's so much noise because most people don't understand it. What's the signal? That's a hard one. The signal is that AI itself is moving so incredibly fast, irrespective of everything else. That is absolutely happening, and it is happening at a pace that keeps accelerating, to the point that even the people who work in it can't keep up. So let's ignore the noise, focus on the signal, and learn, so we can understand what the signal is and what it's telling us.
SPEAKER_01Where can listeners find your book, follow your work, or invite you to speak further about this important topic? We've only scratched the surface, but I know our listeners have gotten a ton of really great insight. So tell them right where they can get your book, because I think they're gonna be excited to read it.
SPEAKER_02Well, you can find everything about me at BarryChaiken.com. And on there, you can have some fun. Not only can you read about my book, you can order a deluxe signed copy, with a little message to you if you want, that I'll mail out to you. It's also available on Amazon and Barnes & Noble, but obviously I can't sign those. The second thing is that I created a couple of chatbots, two different types. With the first one, you can text it; it appears on the home page, and there's also a little menu item you can go to and ask me questions. But what I specifically did is, if you ask it what the weather is going to be in your town next week, it'll tell you that's not in its database and that it's been instructed not to hallucinate. I made a point of doing that. The second one I created lets you actually, quote unquote, call me. You hit the little call button, ask the same question, and it'll respond to you in my cloned voice. It's just a fun way of showing how to use AI. I have a lot of information on my website, BarryChaiken.com, and of course, feel free to reach out to me through the site. And please connect with me on LinkedIn: BarryChaiken.com/LN will link you to my LinkedIn page, and we can connect.