The Signal Room | AI in Healthcare & Ethical AI

Healthcare Leadership: Balancing AI, Human Judgment and Clinical Trust | Dr. Mark Gendreau

Chris Hutchins | Healthcare AI Strategy, Readiness & Governance | Season 1, Episode 15




Clinical AI in emergency medicine earns trust when it amplifies physicians rather than replaces them, and when the time it gives back actually reaches the patient. Dr. Mark Gendreau, an emergency medicine physician and senior healthcare executive, joins Chris Hutchins to examine where AI applications in healthcare are already shifting clinical practice, and where responsible AI in healthcare still depends on the human judgment no algorithm replicates.

What We Cover

  • How digital radiology AI now alerts physicians to findings in real time, and why probability-scored pulmonary embolism alerts matter for diagnostic fatigue during long shifts
  • Ambient AI documentation platforms: 97% accuracy drafts, feedback on bedside manner, and what clinicians get back when AI does the note
  • The concept of pajama time (late-night EHR catch-up) and why reducing it to zero is the cleanest signal AI is actually improving care
  • The trust equation for AI adoption in clinical settings, built on demonstrated reliability at the moments that matter most
  • Where the boundaries of AI support should sit in high-stakes care decisions

Key Takeaways

  • Clinical trust is earned through reliability in the moments that matter most. AI has not consistently met this bar in emergency care, and the path to earning it is measured in shifts, not slides.
  • The most valuable AI tools augment the pattern recognition experienced physicians develop. They do not try to replace it, and they surface their limits honestly when they are uncertain.
  • If AI does not give time back to the patient, it is not working. Pajama time going to zero is a better proxy for value than any dashboard.

Frameworks & Tools Mentioned

  • Digital radiology AI (real-time alerts, probability scoring for pulmonary embolism)
  • Ambient AI documentation platforms (97% accuracy draft notes, bedside manner feedback)
  • Human-in-the-loop AI design for emergency medicine
  • Pajama time as a clinical AI effectiveness metric
  • Stephen M.R. Covey's trust equation applied to AI adoption

Timestamps

  • 00:00 – Introduction and framing the AI scaling challenge
  • 01:18 – Workforce scarcity and why AI must amplify clinicians
  • 02:10 – AI in radiology: co-pilots, fatigue reduction, and safety
  • 05:26 – Ambient documentation and eliminating “pajama time”
  • 07:17 – Using AI to improve clinician communication and empathy
  • 09:33 – Where AI falls short and why humans must stay in the loop
  • 12:44 – Guardrails, trust, and human-AI partnership
  • 13:44 – Trust in AI vs trust in human relationships
  • 16:07 – Adoption curves and clinician buy-in
  • 18:05 – Why AI fails when treated as an IT project
  • 20:41 – Leadership’s role in shaping AI culture
  • 22:07 – Interoperability, governance, and scaling challenges
  • 26:04 – Signals that an organization is truly AI-ready
  • 29:26 – Emotional intelligence and where AI should never lead
  • 33:59 – Alert fatigue and governance accountability
  • 37:27 – Measuring success: outcomes, equity, and pajama time
  • 38:36 – How to connect with Dr. Gendreau

Support the show

About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.


Website: https://www.hutchinsdatastrategy.com 

LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/ 

YouTube: https://www.youtube.com/@ChrisHutchinsAi

Book Chris to speak:  https://www.chrisjhutchins.com

Christopher Hutchins:

The tagline for my company is humanizing AI for healthcare. We've talked about how healthcare needs to be emotionally ready before it can be technologically ready. How people feel safe, seen, and empowered is how change happens.

Christopher Hutchins:

Hundreds of thousands of dollars on the data that powers the technology.

Dr. Mark Gendreau:

The goal is to increase access to healthcare. In this instance, it's that sort of awareness, discernment, judgment, experience. As I always say, we have lived. AI has not. A human-AI copilot sort of partnership. And how do you engage in that partnership?

Christopher Hutchins:

Today in the Signal Room, we're talking about what it really takes to scale care responsibly, not by replacing people, but by amplifying them. My guest today is Dr. Mark Gendreau. He has spent his career at the intersection where urgency meets innovation. He's an emergency medicine physician and a senior healthcare executive who's led teams through the most complex challenges in modern care delivery. As a physician leader, he has championed data-driven decision making, operational readiness, and a culture of safety that keeps the human connection at the heart of every interaction. In this conversation, we'll explore how AI is beginning to collaborate with clinicians, where trust fits into that equation, and what it means to build systems to help people, not just process patients. Dr. Gendreau, you've spent your career on the front lines of emergency medicine and being a chief medical officer. When you hear the phrase scaling care with AI, what does that mean in a real hospital setting?

Dr. Mark Gendreau:

What it means is that the goal is to increase access to healthcare. We are having greater needs in terms of the workforce in healthcare. We have an aging physician and nursing workforce, and fewer and fewer people are entering healthcare. So we need to, as you said at the beginning, amplify the workforce that we have so that we can reach more patients and improve the quality and safety of healthcare.

Christopher Hutchins:

There's a whole scarcity, I think, is the word I would use for a lot of clinical roles, and especially in the nursing area. I can see why it's such a focus that you have to try to use these tools to optimize the workflow so that people are freed up to do the things that they actually trained for, which is helping people. I'm curious because we've talked a little bit about this recently, but I really would love to hear where you're seeing some successes in how AI is being used collaboratively with clinicians and what makes those specific examples successful.

Dr. Mark Gendreau:

Three areas really immediately come to mind in terms of amplifying clinicians and transforming their work-life balance. First, digital radiology: we have AI tools that basically read images, for example, CAT scans. There are various modules that read the images almost instantaneously. They don't interpret, but what they do is give the radiologist a heads-up that there's something you should pay attention to on image 63. And some of these algorithms will actually give you a probability that, if you're doing a CAT scan of the chest, it could be a pulmonary embolus. This is incredibly useful. Particularly at night, when there are fewer radiologists and they're looking at study after study for 12 hours a day, fatigue builds in. And with that fatigue can come reduced diagnostic accuracy. So this really helps them focus where they should be looking. It doesn't interpret; that's still in the hands of the human, to discern and judge what is going on and then do the interpretation. I was at a conference in the spring, and there was a trauma surgeon in Nevada whose center was having issues maintaining their trauma designation with the American College of Surgeons because they weren't meeting the guidelines for having a CAT scan of the head read by a radiologist quickly enough to act upon it. And they used this AI product so that it would alert the trauma surgeon that there's an issue, take a look at a specific image, help them decrease their time, and keep their trauma designation. Very practical, very useful technology that is really assisting and amplifying the physician. I would say the second is documentation. We use ambient AI platforms to assist physicians in documentation. This has been transformative for the primary care doctors: it listens, and when they're done, the note has been generated as a draft with about 97% accuracy.
This allows the physician to actually look at the patient and touch the patient and spend more time with the patient, rather than having their head buried in the computer documenting everything that needs to be documented. On the other side of this, we measure something called pajama time: we can see when physicians are in the EHR after hours. And we see people who are still trying to catch up at 10, 11 o'clock at night. This is not sustainable. That is a second example of its utility. We have another ED that is using an ambient system that also gives feedback to the clinician when the note is done: "You forgot to introduce yourself," or "The patient seemed a little tense when you said that; you could have approached it this way." So not only is the documentation getting done, the physician or the advanced practice provider is getting feedback to be a better clinician on a more human level. These technologies are really amplifying the humans in healthcare, not automating them away.

Christopher Hutchins:

That's fascinating. I actually was speaking with a company yesterday, scheduled for a half hour. I think I stole another half hour from them because I was excited watching them walk me through what they were doing. It was exactly what you're describing in terms of really undergirding and supporting the clinical interactions. With every staff person that interacts with a patient, it's actually profiling the individuals that are interacting with the patient. It's profiling the patient and it's detecting, much like you said, detecting if there's tension or stress involved, not only analyzing the words, but actually analyzing and detecting tone and voice and the things that would indicate that there's stress involved. Fascinating to me because I had not heard of anything that was geared towards wrapping around the encounter, regardless of who the individual was that was interacting with the patient. They're having some really great success. It's actually being done by a physician who's come out of Stanford, out in California. I'll definitely share some of that information with you later today, just so you can take a look at what they're doing. It sounds like something that would be right up your alley as an advocate for helping to relieve the burden that we've continued to layer on to clinicians. You've touched on some great examples of where AI is being useful. Is there an example that stands out in your mind where there's a specific decision where AI added value or where it didn't, and what that taught you about the boundaries between the human and the machine?

Dr. Mark Gendreau:

Where has AI not been as stellar? We're essentially five to ten years into AI, and really immersed in it over the last three or four years. We see improvement all the time. But take, for example, digital radiology, where these AI algorithms will alert you to something. They still have issues. They will sometimes think something's going on, and it turns out that when the radiologist looks at it, no, that's a benign calcification that isn't anything we need to worry about. And therein lies, as I say, the human brain. Always keep the human in the loop with this. In healthcare and other industries where high reliability is required, you have to have the human in the loop every time. In this instance, it's that sort of awareness, discernment, judgment, experience. As I always say, we have lived. AI has not. And it's those reps, for the clinician or, in this particular example, the radiologist who's read thousands and thousands of images, that give the judgment and the insight that no, that's not anything we need to be worried about, that doesn't need to be reported in that fashion. That becomes very important.

Christopher Hutchins:

Exactly. I was thinking about a simple example: if you're my physician and I historically have somewhat high blood pressure that's controlled with medication, any elevation detected by AI would be flagged. If no one knows that this is normal for me and that I'm being treated, decisions can be made or alerting can happen. Alert fatigue is something that I've heard quite a lot about over the years. Working in IT departments in particular, I've heard a lot of feedback that was not necessarily favorable about the alerts that we were triggering for what seems like everything under the sun. So we want to dig into trust a little bit. We talk a lot about whether we trust AI. More recently, I've been hearing a lot more conversation about trust in human relationships. With technology moving so fast, we know that trust is still built slowly and deliberately. What helps clinicians trust AI insights enough to act on them?

Dr. Mark Gendreau:

Great question. Think about what trust is. Stephen Covey wrote a book on trust, actually, I believe two books on trust. And he defined trust as an equation: credibility plus reliability plus safety, what he called intimacy, though he meant something more like psychological safety, divided by self-orientation, or what is this being used for? Is it being used for good or is it being used for bad? So, the ethics component of it. I think for AI, because healthcare is a field of a lot of relationships, of trust, of quality and safety, it needs to have those components of trust. The capabilities have to be good, the reliability needs to be solid. We're not going to trust something that has reliability issues half the time with its output or what it's actually doing. And it's got to contribute to safety. I think those are the components of trust and AI adoption. When those are met, clinicians feel more comfortable interacting with AI. We've seen this with our ambient documentation platforms. Initially the clinicians are a little bit reserved; their trust is very guarded. But as they become more comfortable with the interaction, as AI starts learning their preferences, what they like and don't like, and as they train it, saying "you wrote this, but the patient actually said this," that is the interaction you need. Then the trust comes and people are more receptive. Word gets out and people adopt it. We had a lot of laggards with our ambient AI platform. As soon as more and more of the early adopters and late adopters came aboard, we leveraged them to influence the laggards to come on board. And some of those laggards became the biggest champions of the technology.
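For reference, the trust equation being paraphrased here, often credited to The Trusted Advisor (Maister, Green, and Galford), is usually written as:

  Trust = (Credibility + Reliability + Intimacy) / Self-Orientation

Here intimacy corresponds to the psychological safety Dr. Gendreau describes, and a lower self-orientation (acting in the other party's interest rather than your own) raises trust.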

Christopher Hutchins:

That's phenomenal. I've often seen things go sideways when these types of efforts are viewed as an IT project. I don't mean that everyone does this, but historically there have been a lot of times where solutions were developed and just imposed. People I've spoken to in your profession have looked at me and given me an eye roll: you did it to me again; why are you wasting my time? This does not solve a problem I actually have, but there are some you could solve if you'd like to listen. I love how you're approaching it. I'm sure your colleagues are grateful that you're at the tip of the spear and leading through this, because it is a bit of a frightening exercise when you've got all these things coming at you, this pressure to use it, and this explosive growth in volumes of data that you have to process quickly to diagnose something or decide how to treat it.

We've only been layering and layering constantly, whether or not it's for the purpose of improving quality. We add data elements that have to be captured, and then all of a sudden there are penalties being imposed based on an average rather than an individual, which I think is an interesting problem we're going to have to solve for in regulatory areas. Because as you know, if you take the averages we're measuring against, you're never going to find an individual anywhere on the planet who matches that profile, yet we decide to penalize or incentivize based on it. The practice of medicine is an evolving science, of course. So I don't know how we're going to get there, but I think we really have to lean in on the regulatory side and make sure they understand that regulation historically has not been based on reality; it's been based on a static snapshot in time, essentially, which is not going to work with AI.

How should hospitals be thinking about creating a culture where data and human judgment reinforce each other instead of competing? Because I know there's, in some cases, a tension between those things.

Dr. Mark Gendreau:

I think it needs to come from leadership. Leadership plays a big role in this, of showing people, getting them involved, giving them agency, really allowing people to experiment with it and be creative with it. All that is part of change management.

Christopher Hutchins:

When you're working on these types of things, there's one thing when you're just trying to kick the tires and trying to get people used to it. But at some point you have to start figuring out how to scale it and what are the indications you're going to be looking at to determine when it's actually time to scale. From an operational standpoint, what are some of the challenges to scaling these AI systems across the health system that you've encountered?

Dr. Mark Gendreau:

I would say the biggest obstacles are threefold: interoperability is probably the biggest one, then governance, and then how you structure things with respect to training the models. A lot of health systems now are partnering with one of the big AI LLM companies, using their model, parking it behind a firewall, and then training it with internal data. So the model starts to get an understanding: it knows who we are, it knows what we do, and things of that nature. We're actually very busy in our health system making sure that it's ready for prime time for the masses. We've been busy setting up governance around what we push out, when, to whom, and how we do it. And interoperability is a big thing because it involves getting the right APIs and making sure that everything is talking to everything else, because if you don't do that, the data is just not useful to you. Those are the key components of scaling up AI in a health system. This was all pointed out, by the way, in 2022, when the National Academy of Medicine put out a document on artificial intelligence with respect to healthcare. That's a worthwhile read if you haven't read it.

Christopher Hutchins:

I appreciate that. There's this human factor that oftentimes just doesn't get the time and attention that it needs. One of the most important things that I've realized is the need to start with listening and having conversations with someone like yourself. This is really important if you're going to try to develop any kind of solution to make things better. You're talking about scaling. What are some of the signals that you're looking for to know when your organization is ready? This is not just technical readiness, but there's the cultural aspect of it as well.

Dr. Mark Gendreau:

I would say when you've got leadership who's involved. The beauty of AI is that you don't need to be tech savvy. You need tech-savvy people, but for the most part, I'm not a tech person, and that's the beauty of AI: it really doesn't require that. Most people who do very well with AI seem to be not the tech savvy, but people who are creative, who are good problem solvers, and who have those core elements of human-centric leadership: they've got strong empathy, they're ethical, and trust building is a non-negotiable for them. When it isn't leadership trying to push it to IT to roll out, but upper leadership saying we need to do this, and here is the why and here is the how, then I think the culture changes. More and more people start using AI, like generative AI, and it just becomes a catalyst for readiness. There are multiple layers to knowing when your organization is ready. It's not just one thing, it's multiple things.

Christopher Hutchins:

Absolutely. It's a lot more complex than we would like it to be. You touched on something else with the ethical considerations, and one of the things that's been challenging my thinking, I'm not quite sure if this really requires guardrails, it may actually require some that we have to figure out. But how empathetic do we really want the technology to be? Because there's a risk now that people trust too quickly and too easily. Where do you think some of these lines might be to make sure that we're keeping medicine focused on human beings and trusting human beings and making sure that empathy doesn't get lost in our rush to get efficiencies?

Dr. Mark Gendreau:

AI is great at a lot of different things: pattern recognition, automating repetitive tasks that are fairly routine in terms of patient care. But if you're doing something that requires emotional intelligence, relationship building, or shared decision making, that ain't AI's territory, that's the human's. One of my favorite quotes from Jeff Woods is: you are the leader, and don't ever abdicate your leadership to AI. And so I would flip that to:

you are the human; never abdicate your role, your humanness, to AI. I was listening to a podcast recently, and they were discussing how, as we get deeper into AI and it becomes more agentic, we're going to have two workforces: an AI workforce and a human workforce. The important thing we have to do is make sure we keep those separate. We shouldn't try to humanize the AI, start giving it names and everything, because there's some technology out there where you can't tell whether you're speaking to a human or a robot. We have to make sure we don't go too far down that track, because giving away our human traits is just not going to be a good thing.

Christopher Hutchins:

It's clearly a risk. And the dangers are there because there's also this trust factor. We can't be sure that we're going to get people to really follow the guidelines on what you should and shouldn't use these technologies for. Increasingly, people are connecting with, listening to, and being influenced by people who look, think, sound, and behave like they do, and they're not necessarily listening to anyone else. Oftentimes that can be a real miss if you're trying to run an organization as a CEO and your people aren't trusting you. There's got to be an awareness that you've got to find ways to message so that people really understand and can make the decision to follow. From a leadership standpoint, I think there's a really significant need to understand better what the experiences are for clinicians. We haven't gotten too far into the burnout conversation, but we know there's a fair amount of that, because you mentioned documentation being done until 10, 11 o'clock at night. How should leaders be thinking about protecting their clinicians from things like alert fatigue or overautomating, while still making sure that we're moving innovation forward?

Dr. Mark Gendreau:

Alert fatigue is big, particularly on the nursing side of things, because they're on the floors continuously, and there are alarms going off everywhere. I would say an alert needs to earn the right to interrupt a clinician. I think that's a good way to think about it moving forward. And another principle is: if you're going to add something, you need to take something away. We have so much technology with alarms and everything that one of the big things we have become increasingly concerned about is that it almost becomes background noise that people don't even hear. That's why it's on humans and leadership in healthcare to put together alert fatigue committees and actually review what sends alerts, what value it's providing, and what impact it's having. If it's not adding value and it's having a negative impact, we need to change up that technology.

Christopher Hutchins:

That fits the whole purpose of the podcast that I've created here. There's a ton of noise. If everything is triggering an alert, there's really no point anymore because that's all you're going to do is respond to alerts. That's just not a productive way to use the technology. I think it's going to be a very interesting challenge because there just seem to be so many great ideas coming forward, and I'm sure that you're getting a full inbox with emails from people who have the best thing since sliced bread, but it's only going to solve a fraction of a problem that you really want to solve. And that's just not sustainable. But there are so many of them coming at us all the time. It's really hard to detect sometimes where something has real value and can have real impact.

Dr. Mark Gendreau:

There's a lot of noise out there. There is. And that signal-to-noise ratio, we've got to somehow make that better.

Christopher Hutchins:

If we look forward five years from now, speaking of signals, what do you think will be the single most important one that you'll be watching that tells you that this AI and human collaboration is actually improving care?

Dr. Mark Gendreau:

I would say outcomes. That we're seeing better health outcomes. We're seeing the health disparities gaps shrink or go away totally. And pajama time goes down to zero. I think those are the signals that would tell me that we've succeeded.

Christopher Hutchins:

You just used a term that I'm sure I'm not the only one that hasn't heard this, but please help me understand what the pajama time is really all about. That's probably doctor code, I imagine.

Dr. Mark Gendreau:

That's when a clinician is documenting at 10, 11, 12 o'clock at night to catch up on documentation. That's what we call pajama time.

Christopher Hutchins:

I might have to borrow that, but I'll give you credit. Dr. Gendreau, if folks are interested in reaching out to you to understand what you're doing and what you're seeing bring successful adoption, how do they reach you?

Dr. Mark Gendreau:

They can best reach me on LinkedIn.

Christopher Hutchins:

Fantastic. And for those that are listening, I'll make sure that you have all the information you need in the show notes to reach out to Dr. Gendreau. It's been a pleasure to have you on today. Fascinating conversation. I'm sure we'll be having more of them in the future. But thank you so much for taking the time. It means a great deal to me personally, so thank you for being here, Dr. Gendreau.

Dr. Mark Gendreau:

You bet, Chris. Thank you.

Christopher Hutchins:

Well, that's going to do it for this episode of the Signal Room. If today's conversation sparks something in you, an idea, a challenge, or a question, don't keep it to yourself. Join the conversation on LinkedIn or visit us at SignalRoomPodcast.com. We're here to amplify the signals that matter across leadership, ethics, and innovation in healthcare.