The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
AI Regulation in ER and Clinical Judgment: Why AI Tools Must Be Designed for 3 AM, Not 3 PM | Dr. Natasha Dole
Emergency departments are the hardest environments to deploy AI applications in healthcare because speed, accuracy, and contextual judgment all compress into seconds. Dr. Natasha Dole, an emergency physician and digital health leader, joins Chris Hutchins to examine why AI tools designed for routine clinical workflows fail under ER conditions, and what responsible AI in healthcare actually requires when a missed signal can end a life.
What We Cover
- Why emergency medicine is the hardest stress test for AI in healthcare, and what that exposes about every other deployment setting
- How trust gaps between ER physicians and AI tools compound when systems produce recommendations without contextual awareness
- Where clinical decision support adds value in the ER and where it breaks down under the pressure of a live trauma bay
- What AI regulation and patient consent actually look like when a patient arrives unconscious and a scribe tool is already recording
- How digital health leadership inside a clinical setting is different from strategy work done outside the care environment
Key Takeaways
- Clinical judgment is not a legacy skill AI replaces. It is the thing AI tools must be designed around. Emergency physicians develop situational awareness algorithms cannot replicate from training data.
- A trust gap is a patient-safety issue, not a change-management issue. When ER physicians do not trust an AI tool, they either override it or disengage from it. Both outcomes degrade care.
- Responsible AI in healthcare means designing for the worst 3 AM, not the average Tuesday. Any AI tool that cannot survive the emergency department's conditions is not ready for the rest of the hospital either.
Frameworks & Tools Mentioned
- Human-in-the-loop AI design for high-acuity clinical settings
- AI scribes and clinical documentation tools in the ER
- Clinical decision support integration with emergency workflows
- Patient consent protocols for AI-assisted care
- Digital health leadership inside clinical operations
Timestamps
0:00 The 2:00 AM Crisis: Why AI Fails
0:35 Introducing Dr. Natasha Dole: ER Innovation
1:30 Credibility in the ER: Pre-AI vs. AI
2:45 The AI Scribe: Reducing Cognitive Load
4:15 Why Patients Must Stop Using AI for Triage
6:02 AI vs. Clinical Judgment: Who Wins?
8:40 The "Scary Truth" About AI Hallucinations
11:15 Responsible AI: Consent and Disclosure
13:40 Designing for the 3:00 AM Bottleneck
15:50 Will AI Replace Doctors? The Real Answer
18:10 Final Verdict: The Future of Responsible Care
About Dr. Natasha Dole
Dr. Natasha Dole is an emergency physician and digital health leader focused on how AI tools actually perform inside real clinical environments. She works at the intersection of emergency medicine, AI governance, and responsible deployment, with particular attention to the safety and ethical dimensions of AI applications in healthcare.
Related Resources
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
A&E is buzzing at 3 AM. That's when I need you to come shadow me as the person making the tool, to see what my bottlenecks are, what my challenges are, and understand my environment. Because stuff normally goes wrong at ungodly hours, and that's what I need you to see. That's where the problems arise. So if we're not testing for that, then a great tool that works at two o'clock in the afternoon is not necessarily going to be a great tool at 2 AM. And for me, that's the tool I need.
Christopher Hutchins:Today's guest is Dr. Natasha Dole, a physician, a healthcare executive, and technology leader working at the intersection of clinical care, digital innovation, and artificial intelligence. Dr. Dole has spent her career focused on how technology can meaningfully improve the practice of medicine and the experience of both clinicians and patients. Her work spans clinical operations, health system leadership, and the responsible integration of emerging technologies into real-world healthcare environments. What makes her perspective particularly valuable is that she approaches innovation from both sides of the equation. She understands the realities of practicing medicine, and she also understands the design, deployment, and governance challenges that come with bringing advanced technologies like AI into clinical settings. In a moment where healthcare is rapidly experimenting with automation, decision support, and new forms of digital infrastructure, Dr. Dole brings a grounded perspective about what actually works in practice, what risks are often overlooked, and what responsible adoption really requires. In this conversation, we explore how clinicians experience AI at the point of care, what health systems often misunderstand about implementation, and why thoughtful governance and human-centered design will determine whether these tools ultimately strengthen care or complicate it. Dr. Dole, welcome to The Signal Room.
Dr. Natasha Dole:Thank you. That was quite an introduction. Thank you so much. I'm very honored and humbled. Thank you for the invite.
Christopher Hutchins:Well, it's a pleasure. I've been looking forward to this. I've been really enjoying the content you've been posting. You've got an unbelievable knack for translating ideas into images and framing things in ways people can really relate to, ideas that would otherwise be much harder for them to understand. One of my favorites was the recent one, the Barbie and Ken post. But that's just one example. You've got a really great way of translating things and telling a story so people really can understand.
Dr. Natasha Dole:Thank you so much.
Christopher Hutchins:So I want to jump into some good stuff here, because there's a lot that people don't understand about practicing medicine. That's clear to me even in the online threads when you post about what you're experiencing; there's a wide variety of levels of understanding. So I'm really excited to get into things with you this morning. It'll be great for our audience. In emergency medicine, how is credibility established in the room before anyone speaks?
Dr. Natasha Dole:So I think I would need to break that up into pre-AI and AI, right? In the emergency room pre-AI, credibility is normally established through situational awareness and assumed by the senior-most clinician, who's often the senior-most decision maker. That person automatically acts as team leader for the rest of the team. The role is assumed and then stated, further roles are allocated, responsibilities are clarified, and you go forward. In a situation like that, we've got strict SOPs, standard operating procedures and policies, specific algorithms that are followed with a scribe, and a clear team leader who allocates the other roles. With AI, the team leader is still the human in the loop, 24/7, come what may. But AI is assisting as a scribe, which is great because it picks up things that are happening in real time, especially in a place like A&E, which is a very high-stakes environment. It significantly decreases cognitive load, but you have to double-check everything. You cannot over-rely on it, and you need to make sure it's recording all the correct facts. And often in a high-stakes situation, the patient may not be in a position to give consent. That's where the governance issues come in, and often you don't have next of kin from whom you can get consent. So everything you do is on balance: you weigh the benefit against the risk and decide if you're going to proceed. But using an AI scribe in those situations seriously decreases mental fatigue and cognitive load.
Christopher Hutchins:Well, I think that's certainly an important objective. I've seen over the course of my career the things that we continue to layer on somebody who just wants to take care of patients. It's excruciatingly painful for everyone I've ever talked to about this. I've seen eye rolls every time I have invited a physician to come see a demonstration of technology.
Dr. Natasha Dole:And that is so important. If I could add to that, the reason physicians eye-roll at that is because we're not included in the decision making when the tool is being made. The tool is for us and needs to be used by us, so if we can't give you input on what we want, it's not going to be a success. The same goes for the patients it's aimed at, the people whose deterioration we're trying to prevent, whose reattendance rates we're trying to reduce, whose care we're trying to make more effective. You need a multidisciplinary team with all your stakeholders, and it needs to be both a top-down and a bottom-up approach. If you've just got your very intelligent tech people making a tool with no clinician helping them, that tool is already doomed to failure. And if it's going to add more layers and more checks, I'm not going to want to use it. I already have enough checks to do pre-AI.
Christopher Hutchins:Right. Yeah, that makes total sense. And it's interesting that you mention the patient in this perspective as well. I'm excited; I'm going to be doing another episode soon with someone looking at things from the caregiver standpoint, someone who has been trying to guide a family member through some really challenging things in the healthcare environment. So this is going to be an interesting period of time. One of the things I wanted to dig into with you: you've written a lot about assumptions that patients and colleagues make about clinicians. We just don't think the way you do, because we've never been trained to, first of all. How do these assumptions show up in high-pressure moments? Emergency medicine is not like any other aspect of medicine, generally speaking.
Dr. Natasha Dole:So from my experience, and very recently lived experience, it shows up as a patient convinced that the paid version of the AI they're using is correct. They have put their symptoms or concerns into the AI, and the generated output automatically equates to "go to A&E" or some other advice, which really worries me. And there are two sides to this. I'm actually more worried about the ones that don't come to A&E, because I wonder whether those are the people who actually need to be there, versus the ones told to come to A&E who didn't necessarily need to be there and have been given false reassurance or incorrect advice. The AI has completely ignored the inaccuracies, the harms, the bias, the population it was trained on. And there are alternative care pathways available. I fully appreciate that currently, where I'm working in the UK, GP appointments are really difficult to get, so I understand that patients are turning to alternative healthcare resources. But as things stand, a paid AI version is not an alternative healthcare resource. If anything, I would almost want to ban patients from using it for that reason: you shouldn't be asking AI for medical advice. That's where my concern lies. However, when patients come in and say the paid version of this AI told them to come in, I see that as an education moment, a chance to explain why that output is wrong. And if it's right, which so far has not happened once, that's a moment of education too, and I hope it gets carried forward. That said, in high-stakes situations, when I've got an AI fracture detection tool, it's great. It'll pick up a fracture that I may potentially have missed, but I still need to use my brain and my clinical judgment and decide whether I'm going to trust the AI or override it. And vice versa, if I think there's a fracture and it says there isn't one. It needs to clinically make sense: if the pieces add up to a fracture, then yes, the AI was correct. But there are times when it's not. That area is rapidly evolving and it's the future, and I would love something like that incorporated on a daily basis. All of those things are currently being trialed, just like detecting lung nodules on a chest x-ray or detecting strokes with AI. The anatomy needs to make sense. You see a patient, you've gone to medical school, you've got two, three, four degrees, you know what equals a stroke. AI is meant to be your second brain. It's your co-pilot. It's not an automated answer, which is what I'm seeing. So I worry about the med students of today, because they have jumped to AI, and I worry that they're not using their own brain as their first brain. They're using AI as their first brain.
Christopher Hutchins:Yeah, you bring up something that's really important. The whole idea of putting your own information as a patient into that kind of platform without understanding context or how this thing actually works, that's really the biggest concern. But the reality is I don't necessarily remember what medication I was on two years ago. I don't even remember the names of the ones that I might be taking today. So without that context, AI, just like a doctor, if a doctor does not have access to certain information about your medical history, it changes the approach that they might take. You might have a symptom that by itself is really not that big a deal. But if your AI doesn't realize that you have other conditions where that becomes a different kind of risk, then this is where it gets dangerous. And I really want to make sure that we're dialing in on how you think about the whole workflow of navigating this when patients show up and they've trusted that capability. They really need to understand how you evaluate it in order to understand why they need to be very cautious about what they're looking at in terms of what a piece of technology can give them. Because the reality is it's still the patient-physician relationship that is front and center. And that's where the trust really has to be.
Dr. Natasha Dole:And also the accountability. And as you said, the data. Where is all that information getting stored? People are putting patient-identifiable details into the World Wide Web. Where is it being stored? Who is accountable for it? In terms of the AI that's being used, that data could be used for experiments and further research without your consent, without your knowledge. And you don't want that data shared in a space where you don't have access to it or have rights to say this is what you can and cannot do with my data.
Christopher Hutchins:So you mentioned the process you go through to determine whether to trust a tool or even use it. When you're looking at an AI recommendation, how do you decide whether to trust the tool or your own clinical judgment?
Dr. Natasha Dole:For me to trust the tool, it needs to have gone through clinical governance and been approved. I need to see that, and I need to see how it was approved. I want to know who the stakeholders were, who was involved in the making. I want to know that I'm still the human in the loop and that it's still my judgment making the final call. It's my co-pilot, not an autopilot. And I want to see the statistics. I want literature saying it has worked, and in what data set and what population group it has worked. I want to know about the biases, the harms, the inaccuracies. I want to know the scientific facts linked with it. And if it adds more layers to my brain or more tasks to my already multitasked day, I'm not interested in using it. It needs to make my life easier and it needs to be an adjunct. And it's the classic we keep hearing: garbage in, garbage out. When I teach about it, I say it's the three Ps. The first P is patients, because obviously in healthcare everything's about patients. The second P is people, which is us, the clinicians, the healthcare providers. And the third P is profit. I don't necessarily mean profit from the AI app; I mean how much it's costing, what funding is needed to continue using it if it's really good, and where that funding will come from. And this is purely a UK government perspective that I'm speaking from.
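Dr. Dole's criteria read almost like a pre-adoption checklist, and they can be sketched as one. Below is a minimal illustrative sketch in Python; the class, field, and method names are hypothetical stand-ins, not any formal governance standard:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these hypothetical fields encode the criteria
# described above (governance approval, clinician involvement, human in the
# loop, published validation, documented bias, and net workload impact).
@dataclass
class AIToolReview:
    name: str
    governance_approved: bool    # passed clinical governance review
    clinicians_in_design: bool   # clinicians were stakeholders in the build
    human_in_the_loop: bool      # a clinician still makes the final call
    validation_published: bool   # statistics and study population available
    biases_documented: bool      # known harms and inaccuracies disclosed
    adds_workload: bool          # extra layers or checks for the clinician
    blockers: list[str] = field(default_factory=list)

    def worth_adopting(self) -> bool:
        """Collect failed criteria; any single failure means 'not ready to trust'."""
        checks = {
            "no clinical governance approval": self.governance_approved,
            "clinicians excluded from design": self.clinicians_in_design,
            "no human in the loop": self.human_in_the_loop,
            "no published validation data": self.validation_published,
            "biases and harms undocumented": self.biases_documented,
            "adds work instead of removing it": not self.adds_workload,
        }
        self.blockers = [reason for reason, ok in checks.items() if not ok]
        return not self.blockers

scribe = AIToolReview("ambient scribe", True, True, True, True, True, adds_workload=False)
print(scribe.worth_adopting(), scribe.blockers)  # True []
```

The all-or-nothing check mirrors her point: a tool missing governance approval or a human in the loop is not partially trustworthy; it is not ready.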
Christopher Hutchins:Yeah, although when you talk about the profit side of it, I don't think it matters too much where you are. I think the difficulties that come from the bias people are thinking about are where it gets a little dicey. So what does AI not know? It doesn't know anything unless you tell it, first of all. But it also doesn't necessarily understand the context. It doesn't understand history versus the real world today.
Dr. Natasha Dole:So a patient using it doesn't necessarily know what pertinent positives and negatives to put into the AI. And even so, I wouldn't trust it, because it's based on a whole lot of information where I'm not sure I trust the medical literature behind it, or whether there's enough medical literature behind it at all. Certain ages, certain races, certain population groups are excluded from it. And I don't believe a patient is going to say: I am of Indian origin, my family history is this, these are exactly my medications, this is the symptom I'm having. Even if they describe the symptom, there are at least five or six other differential diagnoses that could be causing it. It could be mild, moderate, or severe, and AI isn't going to pick up which it is.
Christopher Hutchins:Yeah, that's a whole other conversation as well. The difficulties that come from missing information and things that you just don't even know. I just don't recall that much from my past in terms of what might even be relevant to have a conversation about. So I think this is an important area to really make sure that we understand. And in particular, what are the characteristics of a system that lead you to a point where you know that you've got enough information to make a decision?
Dr. Natasha Dole:Again, I think for me it's the process. If it's decreasing my workload and making me more productive and more efficient, and I can see that translating into evidence-based care that improves patient flow and avoids overcrowding and exit block, because those are the two main pressures we're seeing and trying to mitigate, then it's a yes. But I need to know the background to it. Just because you're selling me a great AI tool that looks great in the presentation doesn't mean it's great in real life. Smarter doesn't equal competent, and fast doesn't necessarily equal competent either. I need it to be safe, and I think that's the part a lot of the tech world is missing. By all means, if it saves me from sitting and doing documentation for two hours a day, absolutely, I'm all for using it. If it's picking up things I wouldn't pick up when I'm tired, absolutely. But I still need to double-check it, and I think that's the most important phrase: need to double-check. We cannot be over-reliant.
Christopher Hutchins:Yeah, and I think the problems that I've seen repeat themselves are when that judgment aspect of it wasn't accounted for. Not that we don't think people are going to be able to apply judgment, but are we putting the prompts in the right places that actually cause you to step back and actually think about those things? Because there's just so much going on at any given time. Things are moving so quickly. One room to the next, you've got very different scenarios that you're faced with. And quite honestly, people don't come in bunches like bananas. Everyone's unique. You and I might actually have the same condition, take the same medications, have what appears to be the same kind of environmental scenarios. But it's very rare that two individuals are going to actually have the exact same outcome.
Dr. Natasha Dole:And you hit the nail on the head there. That's exactly it. That's the crux of the matter right there.
Christopher Hutchins:So I want to dig into a little bit around where these recommendations lead and what challenges arise when the recommendation doesn't align with your clinical judgment.
Dr. Natasha Dole:So I think there are two parts. One is mindset adoption, and the other is organizational readiness, whether the organization is actually ready for this digital change we're all experiencing. As you said, it's rapidly evolving and changing literally on a minute-to-minute basis. Then you have the people that are pro-AI and the people that are anti-AI, and it's finding that balance and getting them on board. For me, I'm an AI enthusiast, but I'm also aware that I will still trust my clinical judgment more than an AI tool. So if the AI tool differs from what I have come up with, from what my clinical features are suggesting, I will correlate the two and decide: is this actually making sense, as in one plus one equals two, or is this telling me one plus one equals three? If it's the latter, that's when I go to a second human brain. I've already used AI as a second brain, but now I need another human to agree with me. And I have no qualms reaching out to a colleague to say, can we have a moment of shared decision making? This is a tricky situation. This is the clinical picture, this is what the AI is saying, and now I'm confused. It's also two o'clock in the morning, so another head and another opinion would be great. And then you make a joint decision. I have never relied 100% solely on AI, and as things stand, I can't see that happening, at least for my own clinical practice.
Christopher Hutchins:That's such an important thing to realize. I think our systems historically have been built like healthcare is static somehow. And we always say it's not. The whole idea of practicing medicine has been kind of lost over a number of years. People either trust completely or they don't trust nearly enough. And that's a place where we could really do some damage and hurt ourselves if we're not keenly aware of it. Because when we approve something from a governance standpoint today and then we don't revisit it for five years, that's problematic even without AI. Now that AI is in the picture, the evolution is going to be much faster. I'd love to hear some of your thoughts around how we really address that from a governance standpoint. What are some of the things that we can put in place to ensure that we are monitoring and we're not assuming six weeks from now that what we decided and signed off on last month is going to be the way it always is?
Dr. Natasha Dole:So I think for me, that's a three-part answer. One is AI literacy, which is different from digital literacy. The second is continuous audits, a cycle that gets reiterated. It needs sole ownership, someone saying: I'm going to be the one that does the audits, I'm going to gather the good, the bad, and the ugly and feed this back to the consumer so we can change it. And again, that needs a multidisciplinary team; it can't be just clinicians, it needs to be everybody. That's why I say a top-down, bottom-up approach. And the third is the biggest stigma around it currently, the idea that you need to be tech-savvy to use AI. I keep hearing "I don't have enough time for AI." If anything, the people who are not tech-savvy and don't have time for AI are the ones that need it the most. So it's about breaking that mindset and that barrier. Change is never easy, but within the UK, the NHS 10-year plan is to go digital, and AI is a big part of it. So it's about getting people on board. The way I do that is to adapt two quality improvement tools. One is "what's in it for me": I explain what AI does for you, why you should be using it, and how it will help you. The most important thing, besides decreasing cognitive load, is having a more meaningful connection with your patient, which we've never had before. The second is the five whys. Why should I use AI? You answer the question, then you ask why again, and why again. By the time you get to the fifth why and the fifth answer, enough unpacking and dissecting has happened. That's how you get people on board. It's like a service improvement project, except you're using it to market a tool you believe in, especially if it's doing good and has been approved. And you have to stress, and I keep stressing this, that you always need a human in the loop, 24/7, 365 days. That's not negotiable.
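That "continuous audit cycle with sole ownership" is easy to picture as a living register rather than a one-off sign-off. A hedged sketch, assuming a hypothetical tool registry and an arbitrary 90-day re-review interval (not NHS policy):

```python
from datetime import date, timedelta

# Illustrative only: the registry entries, owner names, and the 90-day
# interval are assumptions. The point is that every approval carries a
# named owner and an expiry, so sign-off is never treated as permanent.
REVIEW_INTERVAL = timedelta(days=90)

registry = [
    {"tool": "ai_scribe", "owner": "emergency_lead", "last_reviewed": date(2025, 1, 10)},
    {"tool": "fracture_detector", "owner": "radiology_lead", "last_reviewed": date(2024, 6, 2)},
]

def overdue_reviews(entries: list[dict], today: date) -> list[dict]:
    """Return every tool whose audit cycle has lapsed, so its owner can be chased."""
    return [e for e in entries if today - e["last_reviewed"] > REVIEW_INTERVAL]

for entry in overdue_reviews(registry, date.today()):
    print(f"{entry['tool']}: re-audit owed by {entry['owner']} "
          f"(last reviewed {entry['last_reviewed']})")
```

This is the mechanism behind the point Chris raises above: what was signed off last month is only valid until its next scheduled review, not forever.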
Christopher Hutchins:Right. I think that's an important distinction. We hear "human in the loop" thrown around like it's the latest buzzword. And I can see where a physician that hasn't had a lot of exposure to AI may feel like that's an imposition and asking even more of them.
Dr. Natasha Dole:The reason they haven't had a lot of exposure is that we sadly don't have enough allocated, protected time to teach it. If you tell me that after my night shift there's a day of training on how to use AI and how it can make your life better, that's my rest day. That's the day I'm sleeping, the day I've got childcare or parenting or whatever else going on. You're not going to want to attend teaching on something you're already not sure will work. So it's identifying the opportune moments to get people on board. And you need to identify, on the impact-engagement scale, who you need to help you get people on board and how important that person is. Because if you have person A on board, who's really influential, then B, C, and D will follow. Whereas if you get person F, who's not as important or as big a figure in the executive team, the rest of the team may not want to follow. So it's about having the big guns on your side and working collectively. You need shared outcomes, because you've got mutually agreed goals to reach.
Christopher Hutchins:Right. When we think about this, it comes down to a couple of things that are important. You've already touched on not replacing the human judgment aspect of it. But in a broader sense, this is also about what responsible AI in healthcare looks like. It's not just AI in healthcare; it's responsible healthcare. Are the things we're introducing reinforcing and protecting that exchange between a doctor and their patient, or eroding it? When we talk about that, what are some of the responsibilities that you think remain uniquely human? I think that speaks to the apprehension you might run into from clinicians who are already feeling overwhelmed.
Dr. Natasha Dole:So I think three things: accountability, consent, and disclosure. You need to realize that if the AI makes a mistake and you've gone with it, it's not the AI's fault. It's still you that treated the patient. And consent, like I said, especially in A&E, isn't always possible because the patient is unconscious. But if you can explain it and proceed on the balance of probabilities, because the benefits outweigh the risks, then by all means. But you need to disclose it in your notes. Currently, all the AI I use auto-populates a line saying the note was generated by a specific AI tool. Even the fracture detection tool says it's a virtual fracture detection tool. And patients are now starting to give consent, and they're seeing the effect it's having on overcrowding, time to be seen, and wait times, especially when we're drowning. We're supposed to be seeing the end of winter in the UK, which we're clearly not, because the pressures are horrendous and getting worse. Our wait times are just increasing. And if AI can fix that and assist with alternative pathways, whether through a scribe, stroke detection, or fracture detection, that's really the way forward. Again, only if you've got a human in the loop and it has gone through strict, robust governance. Consent and disclosure are of utmost importance to me. That's what responsible AI would be for me.
Christopher Hutchins:You've touched on consent a couple of different times and I wanted to dig into this a little bit because I don't think I've heard it talked about a whole lot. Emergency medicine is a different thing altogether because oftentimes the patient, when they do arrive, they're not in a position to actually make a decision about consent. Where do you see some of the more critical needs and opportunities for us to be focusing on as we're trying to design technologies that actually support the realities of what you're dealing with?
Dr. Natasha Dole:I think if we get patients and clinicians involved from the very beginning, not at the end, that's absolutely essential because the clinician will tell you what they want in that model, what works, what the bottlenecks are, what the challenges are, what we see in lived experience. And the patient will tell you what they want from the output, what they want from the model. And I think consent needs to be built into the model. It needs to auto-populate and that disclaimer is absolutely essential. If I then need to write another sentence to say consent was obtained and add the disclosure, that's adding more work to me. I appreciate it's two more lines rather than the entire clinical note, but it's still important. And you're right, patients don't know what it is. Right now, because it's such a big buzzword, AI is changing everything, not just in healthcare. People are becoming more aware of it. But people aren't aware of the harms. Patients are just seeing the benefits of it, or are over-relying on it without realizing the inaccuracies associated with it, not to mention the bias.
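The "built into the model, auto-populated" disclosure she describes is simple to picture in code. A minimal sketch, assuming a hypothetical function, tool name, and illustrative wording rather than any approved clinical or legal text:

```python
# Hypothetical sketch: the footer wording, tool name, and function name are
# illustrative assumptions, not approved clinical or legal language.
def finalize_note(note_body: str, tool_name: str, consent_obtained: bool) -> str:
    """Append an AI-disclosure footer so every generated note self-identifies,
    without the clinician having to type the extra lines."""
    consent_line = (
        "Consent for AI-assisted documentation obtained from patient."
        if consent_obtained
        else "Patient unable to consent; proceeded on balance of benefit vs. risk."
    )
    footer = (
        f"\n---\nThis note was generated with the assistance of {tool_name} "
        f"and reviewed by the treating clinician.\n{consent_line}"
    )
    return note_body + footer

print(finalize_note("Presenting complaint: chest pain...", "ExampleScribe v2", False))
```

The design point is hers: disclosure and consent status travel with the note automatically, so responsible practice costs the clinician zero extra keystrokes.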
Christopher Hutchins:Just the whole notion that any kind of technology can actually replicate years of experience. I see this in a lot of different ways. It's not even necessarily about AI and technology. There are just certain things that don't make any sense when you look at it from a real-world lens. There's just no way on earth I could wake up one day and know enough to make a decision based on all your years of experience and clinical judgment. It just doesn't work that way.
Dr. Natasha Dole:Exactly. And then it comes down to the patient, because then the patient's responsible for trusting the AI and not coming to A&E if they've gone with that advice. But I'd like to think that the AI models are changing given all the feedback they've received. The outputs are not as concerning as they were previously, but I still don't trust them. AI should not be used by patients for medical advice at all.
Christopher Hutchins:Yeah, this kind of leads into something else I want to talk with you about in terms of how design is actually approached. What are some things that designers, vendors, or health system leaders consistently underestimate about clinical environments?
Dr. Natasha Dole:I think consumers and tech creators model this for outside of A&E, for an eight-to-five healthcare job. A&E is buzzing at 3 AM. That's when I need you to come shadow me as the person making the tool, to see what my bottlenecks are, what my challenges are, and understand my environment. Because stuff normally goes wrong at ungodly hours. That's what I need you to see and that's where the problems arise. So if we're not testing for that, then you having a great tool that works at two o'clock in the afternoon is not necessarily going to be a great tool at 2 AM. For me, that's the tool I need at 2 AM when I'm overtired, I've got decision fatigue, my situational awareness is decreasing, my cognitive load is increasing, I'm running on caffeine, I'm running on adrenaline. That's when I want something I can rely on, but again as an adjunct and as a co-creator and a thinking partner. A&E is not the same as a surgery that's taking place at three o'clock for a planned appendix removal. Minutes change in A&E. You could be completely calm one minute with a very quiet department and the next minute everything explodes and you've got one sick patient after another. That's where AI can help, but that's what the tech team aren't seeing. They're not covering for those times.
Christopher Hutchins:Clearly we've kind of missed the boat on a number of occasions in terms of the assumptions that we make going into design. I can't tell you how many times I've been frustrated over the years just being on the administrative side of healthcare where we have vendor solutions put in front of us pretty frequently. One of the first things that happens is they will come in and actually do shadowing or have conversations, but then it's quiet for a very long time and then one day they show up and they've got something that they've built and they're convinced you should just use it. What are some of the assumptions about AI by designers or clinicians where we've just got things misaligned and they're always going to go wrong?
Dr. Natasha Dole:I think the biggest assumption, which is false, especially among the anti-AI group, is that AI is going to replace us. And this line has been used repeatedly: people who use AI are the ones that are going to replace you, not the tool itself. It actually makes me more efficient, more productive, and it's given me my time back. It frees up space in my brain and allows me to think better. It allows me to have multiple tabs in my brain open, which are normally open anyway, but now I can actually give each one of them appropriate nourishment and engagement because I've got something else taking down notes for me or something else assisting me. And something that's functioning at a very high level, but again it's still me making the end decision. Even if it takes off that 1% of technical burden or clinical workload or cognitive load, it makes a difference. And again I can't stress enough, provided it's passed governance and has a human in the loop.
Christopher Hutchins:Right. And that's a distinction for me. We've exchanged messages on a number of different topics, but I don't know that we've talked about this specifically. My own experiences have been geared toward supporting the patient-provider relationship, because I saw something up close and personal on several occasions when I was younger. The critical thing, and I've heard it in what you're saying, even in the tone of your voice, is that there's a real passion for what the mission is all about. It's really taking care of people. I don't know that we can quite grasp that, especially if we're not trained as physicians. The only reason anyone would even go through all the training, the years, and the cost is to be in a position to provide that kind of help to people. I don't think we can really understand it if we haven't done it, to be honest. But I think it's just really important, and I truly do appreciate and respect the work that you're doing. Technology is not always the answer. The reality is people are coming to the ED for any number of reasons.
Dr. Natasha Dole:And the majority of people that come in for whatever problem they come in for, they're scared and they need some love and some care and some attention. There's no replacement for the human touch. The worst part about COVID was how we had to break bad news. You couldn't hug a patient, you couldn't touch a patient, you had layers of PPE on, and it was awful. I don't ever want to practice medicine in that scenario again. And with AI, I've been able to hug patients again because I have more time with my patients because the scribe has captured the stuff and I've been able to look a patient in the eye and smile, or had a chance to interact with their little kid that's come along who's a very cute toddler. Stuff like that has not happened in years. Even if I go back to med school, which is now a very long time ago, that's something I was never even trained with because AI was not a thing. Likewise, if you turn that around and you ask a med student of today or even the Gen Z of today, if you go back to DOS or pre-DVDs, they don't know what any of that is. So it just shows how rapidly the world is changing and evolving, and so does medicine. Medicine evolves as rapidly, and if you don't keep up, you're going to be really behind. That's the future. At one stage the ultrasound started replacing the stethoscope. Very soon AI is going to start replacing certain pathways, and again, not the clinician, the pathway. If anything, it's going to make the pathway easier and more accessible. But it needs to be equitable. That for me is the other key, because it needs to cater to every single population.
Christopher Hutchins:Yeah, you said a couple of things there that I think we need to be really cognizant of. You're talking about populations, and we already discussed that individuals are very different. There are no two people on the planet who look the same from a profile standpoint. I think that's an area where we really have to lean in. We can't be lazy about it and design the way we've always done it. We also have to understand, from a development and design standpoint, that we are already starting at a point where we have taken time away from the clinical encounter. The reason you went and got your training wasn't our technologies. That was never the point. We designed electronic health records to support accurate billing, not clinical workflows, and we have to be aware of that. This is not an efficiency play. When you spend money on technologies like this, there's an underlying push and motivation to be more efficient, but I think that's the wrong lens to have in front of people. It really is: how are we going to give back what we have been stealing from our clinicians since the introduction of all these technologies that were intended to make things better? I can't stress that enough. We have to do better. As we look ahead, I think this year there are going to be a lot more conversations around governance. Sadly, on a global scale, there are going to be some lessons we'll have to learn the hard way, because we've gone a little too fast.
Dr. Natasha Dole:That's it. It's evolving faster than what we're ready for. You've nailed it. The technology is miles ahead of us. If we go back to just emergency medicine, we are miles behind. Oncology, radiology, cardiology are miles ahead of us, and emergency medicine's just not catching up. We're trying desperately, but there are many barriers, many obstacles, many bottlenecks. And it's not just funding. It's the organization that's not necessarily ready. There are funding issues, there's that mindset adoption, and it's breaking barriers to change and implementing change. You need repeated audit cycles and a continuous process of review, monitoring, and progress. And it needs to be the good, bad, and ugly so you get the full spectrum, the entire narrative to help you change it for the better.
Christopher Hutchins:This is a really important conversation we're having, and I think we've got to figure out a way to amplify it even more. We're putting out buzzwords, we're putting out all kinds of technologies and tools, but patients don't come through the front door because of our technology. They don't come because we've got flashy-looking dashboards. That's been my space for a long time, data visualization in the healthcare sector. I'll put it this way: I think about the technologies we're deploying, and the designs, in a similar way to sound at a concert. If you go to a venue to see your favorite band and you know who the sound guy is and where he's located, that's probably not a good thing; if you're noticing him, he's not very good at his job. From a technology standpoint, we have to have the same mindset, really making sure we're undergirding the profession and supporting it. That's what we're about. The technology should never be front and center. It's never a good thing when the technology is what we're talking about.
Dr. Natasha Dole:Absolutely, agreed.
Christopher Hutchins:So as we're wrapping up, and I can't believe we've gone so quickly through this time, I would love to be able to talk to you for hours. But as you think about the remainder of this year, what are some of the things that you would encourage people to make sure they're thinking about, not only relative to emergency medicine but in general, how we're thinking about using AI and where we might be able to avoid some missteps?
Dr. Natasha Dole:I think it's a really good question, and not one with a straightforward answer. Personally, it's about becoming AI literate. Professionally, it's becoming digitally literate. And if you don't empower yourself and embrace the challenge, it's not necessarily going to happen. It's a rapidly evolving landscape at the moment, and AI is being used everywhere, in people's personal and professional worlds. There are tons of free resources out there. You can listen to podcasts, attend webinars; you need to start somewhere to get a grasp. You start at the foundation, and once that foundation is learned, you build your own framework to decide what you're happy to accept and what you want to question. It's embracing that change and the challenge that comes with it, because not all technology is good. Just because it's good doesn't mean it's safe, and faster doesn't automatically mean safe. That's my big tagline, something we need to be very cognizant of. And don't be afraid to ask for help when the AI opinion differs from your opinion. There's no shame in asking for help. Pre-AI, we'd always phone a friend, ask a senior colleague. Still do that. AI is there to help you and support your decision. Just like a human can be wrong, AI can be wrong too. Don't rely on it 100%. You need to proofread, you need to double-check, and especially in healthcare, you need to dot your i's and cross your t's, because at the end of the day, it's the clinician that's responsible and accountable. If something goes wrong, no one is going to say it was the AI. It was you. And from a medico-legal stance, you've got no protection whatsoever. That's not a fight you want to be having with your medico-legal lawyers. We need to be very careful how we tread. And like I said: explicit consent, disclaimers, and full disclosure.
Christopher Hutchins:It's such a wild time. I think back to the 90s; we couldn't have imagined where we'd be when the internet started to become such a big thing. Essentially, our whole lives are now surrounded by things that not that long ago didn't even exist. Getting this right when we're talking about patient care is really critical. As we wrap, think about it for just a minute: if we could do anything for you with the way we're approaching AI by design, what would be the top priorities for you as a clinician?
Dr. Natasha Dole:Involve me and come and watch me work and see yourself the bottlenecks that I face at 3 AM. And give me an AI that works, or that makes me want to use it, at 3 AM. Not at 3 PM.
Christopher Hutchins:That is amazing. As usual, your thinking on these things is crystal clear, and I really appreciate it. I just want to thank you for your continual contributions. I learn a lot every time I look at an article or anything you're posting.
Dr. Natasha Dole:And I really appreciate all your support and engagement. Thank you, it means a lot.
Christopher Hutchins:It's my pleasure. The interesting dynamic for me is that a year ago I did not anticipate running my own business and trying to support the clinical mission from a different side of things. I've always worked inside a healthcare system. I'm more inspired now than I've ever been, and I really do appreciate and respect the work you're doing. It's unbelievably challenging already to balance everything you balance, and now throw in AI. Not only are you figuring it out for yourself, you're advocating for the responsible use of it. You're advocating on behalf of clinicians and challenging the status quo. That's really important. It's one of the reasons that a year ago I started working with some firms that are helping me push narratives that open gateways for clinicians to step up and have a voice, because too often you've got politicians, insurance companies, or pharmaceutical companies designing things with the right intention, but to your point, they have to be done with you, not to you.
Dr. Natasha Dole:It is a balance and it's finding that balance. Thank you, Chris. Really humbled and honored and very grateful for the opportunity.
Christopher Hutchins:No, thank you so much. I'll be in touch with you because I know there's more for me to learn and I can't wait to see what you're going to do next. I hope I'll have you back again really soon. We'll have some new exciting things to talk about. We'll also have some learnings, and I'm sure you'll have a lot more successes in the future just because of the way that you think and approach your role as a clinician but also as an advocate. Again, thank you so very much for being on the show today.
Dr. Natasha Dole:Thank you very much for your very kind words. We'll be in touch soon.
Christopher Hutchins:That's it for this episode of The Signal Room. If today's conversation sparks something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.