The Signal Room | Healthcare AI Strategy & Governance

Responsible AI in Healthcare: Ethical Leadership and the Ways of Working | Asha Mahesh

Chris Hutchins | Healthcare AI Strategy, Readiness & Governance Season 1 Episode 12


Responsible AI in healthcare only works when ethical leadership is built into daily operating practice — Asha Mahesh on AI governance and the ways of working.

Ethical leadership in AI is not a paper you publish and forget. Asha Mahesh, a technologist at the intersection of AI, ethics, and life sciences, joins Chris Hutchins to examine what responsible AI actually looks like when it has to hold up inside pharmaceutical and healthcare environments where patients and treatments are on the line. Ethics becomes a way of working, not a statement of values.

What We Cover

  • How AI and machine learning predicted optimal clinical trial locations during COVID-19 with enough accuracy to complete a vaccine trial ahead of schedule with fewer patients enrolled
  • Why responsible AI outcomes come from mission-aligned teams, not from the technology alone
  • The practical framework for addressing workforce fear: lead with what is in it for the individual, not with what the technology can do
  • How to assure a clinical development leader with a decade of education and years of practice that AI is not coming for their core judgment
  • Where ethics live inside daily AI workflows rather than in the retrospective incident review

Key Takeaways

  • Ethical AI requires leaders who model accountability, not leaders who mandate compliance from the sidelines. The difference shows up in every decision the team does not escalate.
  • Responsible data practices address representation, consent, and community impact at the same time as privacy. Treating them sequentially means shipping the first three and promising the fourth.
  • The gap between stated AI values and operational reality is the most significant challenge most healthcare organizations face. Closing it takes sustained leadership attention, not a one-time ethics initiative.

Frameworks & Tools Mentioned

  • COVID-19 clinical trial location optimization (AI/ML)
  • "What is in it for you" upskilling framework for clinicians
  • Ethical leadership as a daily operating practice
  • Community-impact extension of AI fairness frameworks
  • Human-centered AI design in life sciences

Timestamps

00:00 – Live from Put Data First: Why AI Ethics Matters in Healthcare
01:05 – Asha’s Path into AI Ethics, Privacy, and Life Sciences
03:00 – Human Impact as the North Star for Healthcare AI
04:30 – Humanizing AI for Care: Purpose Before Technology
06:20 – Embedding Ethics into Culture, Not Policy Documents
07:55 – COVID Vaccine Development: AI Done Right
10:15 – Mission Over Technology: Lessons from the Pandemic
12:20 – The Erosion of Trust in Institutions and Technology
14:10 – Fear and AI: Addressing Job Loss Concerns
16:30 – “What’s In It for You?” A Human-Centered Adoption Framework
18:00 – How Human Should AI Be? Drawing Ethical Boundaries
19:50 – The Irreplaceable Role of Human Touch in Care
21:10 – Human-in-the-Loop, Guardrails, and Clinical Accountability
22:00 – Closing Reflections: Leading AI with Heart and Responsibility

Support the show

About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.


Website: https://www.hutchinsdatastrategy.com 

LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/ 

YouTube: https://www.youtube.com/@ChrisHutchinsAi

Book Chris to speak:  https://www.chrisjhutchins.com

Asha Mahesh:

The passion that drives me is anything that I do is getting one step closer to bringing the right treatments to the patients.

Christopher Hutchins:

The tagline for my company is humanizing AI for care.

Asha Mahesh:

For me, ethics and responsibility is not something you write a paper and forget about. It's not about a paper, it's not about some writing or anything. It's about building the ways of working.

Christopher Hutchins:

Super excited to come to the event. For people who are watching, we're at Planet Hollywood in Las Vegas at an AI conference called Put Data First. And I'm talking to Asha Mahesh, who's an expert in ethics, privacy, all the things that make people really nervous, particularly in healthcare, when it comes to the innovations that we're coming out with and AI in particular. There are a bunch of different facets I'd love to chat with you about. But maybe tell me a little bit about what's your passion? What led you to get into ethics and privacy? And obviously in technology, that's a little bit more technical, more so than even philosophical.

Asha Mahesh:

My passion is, I'm a technologist at heart. At the same time, I'm also very practical. In the sense that technology, if you don't apply the technology the right way, it can end up being really bad, especially in an industry like ours, like life science. It's all the more important to apply the technology the right way, giving it to the right people at the right time. That's really critical to get to the success of any technology that we use. The passion that drives me is anything that I do is getting one step closer to bringing the right treatment to the patients. As you see, we're not immune to health problems. It's across the board. We all have our families, our close friends. You deal with them on a day-to-day basis. When you think of your work actually making someone feel better, it's all the more important. Whatever we do, that's what drives me, that's the passion. And sometimes I end up going above and beyond and doing things because ultimately my goal is, it may not really look like I'm actually treating the patient or anything, but ultimately how you're doing something that is getting closer and closer to that goal.

Christopher Hutchins:

That's one of the coolest things about dealing with data and AI. I grew up in a household where my mom worked in a hospital in the radiology department. My dad did mailing systems and database types of things. At that point in time when I was young, I never imagined that I could do work that could have any kind of impact, direct or indirect, with data on healthcare. But I grew up around healthcare people. I was always inspired by it. But I find it remarkable that fast forward a couple of decades and all of a sudden, jobs that didn't exist, no one even heard of them, people like us can build careers and we can actually have direct impact. I think that's one of my favorite things about the kind of work that we're doing. What really gets me excited is hearing people with your passion that are really responsible and looking at the ethics component of it. The tagline for my company is humanizing AI for care. It may sound cliche to people, but the passion that is driving you is familiar to me because it really is about human beings. It's about relationships. We already mentioned the people in your life. I think that's probably the most critical thing we could do right now: getting people focused and coming to places like this, not only to learn, but to actually lean in and influence the direction that technology is being taken in to make sure it doesn't lose that humanity component. Talk to me a little about how you think about influencing people as they're starting to think about design. And sometimes they probably are not starting where you think they should start. What are some of the things that you see that we should be doing and we can challenge people to lean in on to make sure that we're not losing track of why we're doing what we're doing? It's really about making people's lives better. It's not about technology just because it's cool.

Asha Mahesh:

That's a great point. Ultimately, that's where you look at what is the intended use of the technology. What is your intention? Is it purposeful? In terms of influencing, to your point, what you mentioned about humanizing, that's a concept we all have to embed in ourselves. Sometimes we're so passionate about technology, the data, and all those things. So there is no North Star. We're all running towards what's cool. But when you have that North Star, and then to your point, where is that ultimately going? Who is it going to impact? You need to figure out a way to inspire people on that. How do you influence? Influence is showing that North Star. And that's something that we do in a company like ours. We have something called CRADLE, which is written on the wall, but at the same time, they've done a great job in terms of embedding those CRADLE values in everybody and every employee, saying basically, in a nutshell, we are here to serve our patients, our providers, and our customers. That's what we are here for. One other thing we also do well is bringing heart and science. Purely the science by itself will not really solve all the problems. You need to also bring your heart to whatever you do. When you combine science and heart, that's when things actually happen. That's when it becomes more meaningful and impactful. How do you influence people? Also show the success and the value of what has been done. If you really look at a lot of things that we have done, we have applied AI in terms of, I'll give you an example. During the pandemic, we had a vaccination program that we were working on, the COVID vaccination. We actually used a lot of data and AI, machine learning, and all those things. We built a lot of models in terms of predicting where we want to run the clinical trials. Our predictions were so good, so accurate. We were able to finish the clinical trial ahead of time, and also with fewer patients enrolled into the trial.

Christopher Hutchins:

Amazing.

Asha Mahesh:

That by itself is amazing in terms of getting even one day closer, one day earlier, it makes a difference. Especially when you're running against the time. Those are some of the examples. There are some great examples. When you look at that and say, yeah, we were able to do it successfully. Why? Because we were all so passionate. We felt like we've got to do something. This is not the way to live for people. You need to get there. That's when you look at those examples and use them to influence people. And also you mentioned about how do you do it responsibly, ethically, and all those things. That's not something you can forget about. For me, ethics and responsibility is not something you write a paper and forget about. It's not about a paper, it's not about some writing or anything. It's about building the ways of working using ethics and responsibility. If you build that culture, it's all about culture again. Building that culture into your organization and how to do that, and also inspiring and recognizing people who do that, who actually apply that on a day-to-day basis. That's one way to answer the question.

Christopher Hutchins:

I love the example you gave during a pandemic. I was actually working on the health system side of that, actually in New York, and I have never seen such good in people that I saw during the pandemic. To your point, it was a passion. It was a shared one because we all felt an immense responsibility because it was a crisis like we'd never seen before. And I don't think people probably have an understanding of how little we knew at the outset, which makes what you accomplished developing a vaccine even more remarkable. We thought it looked like pneumonia, we thought it looked like flu, we thought a lot of things. If we had applied any models that we had at that point, we would have misdiagnosed. But the fact that people rallied around it, in even my own health system, the thing that was really remarkable to me is I saw executives responsible for marketing that were showing up to go help set up the tents and administer the COVID tests. It was an all-hands-on-deck type of thing. And I really saw the good of humanity on a whole different level. But I really wanted to bring attention to the fact that you mentioned how much was done and so quickly because people came together with a mission. It was passion, and it was about people. It was not about technology.

Asha Mahesh:

It's not about technology. We were all using technology and applying it. At the same time, our goal was to get the treatment fast, get that vaccine to the people, so they can lead a normal life. To your point, the people came together, and there wasn't a single day anyone complained that we were working nights, working late, working long hours. Not a complaint. The people were literally happy to do that. To be honest with you, people who weren't even savvy technologically, and they come to us and say, how can I help? Can I do the data curation? Can I do annotation? Can I do anything? That was really truly inspiring. I wish we could bring that same culture without a pandemic.

Christopher Hutchins:

For once, could we actually hold on to something that was really good and keep it because it is good? I think one of the challenges we deal with, particularly where we're trying to introduce new technology, is the trust factor. What I hear about most in terms of trust is whether we can trust AI, but that's not really where I think our biggest challenge is. What I've come to understand is, I was recording an episode with a gentleman who's a clinical psychologist, his name is Dr. Larry Kuhn. He's been working with executives for a long time, coaching them and mentoring. And he had just published a paper about the erosion of trust. He put some stats behind things that I think we all kind of intuitively can feel and we know. The example is he said about 20 years ago, if you surveyed 10 people, eight of them probably would say that they trust the government. Or maybe the same number would say they trust clergy or they trust law enforcement or they trust the CEO or whatever. Trust has eroded to a point now where we're talking about the low 20s in many of those cases. And so we have a much bigger challenge ahead of us because people just don't trust each other very much anymore. Given that the things we're talking about can have such life-changing, life-saving impact, how do we start to attack this trust issue, knowing we're working from a deficit, and engage people in a way that they feel like they can lean in and start to trust? The biggest thing I'm sure you hear about all the time is fear. I'd love to hear your thoughts on how we can really address the fear. And more importantly, what you feel is really important, what's meaningful, what can they do, and how do we help them get past the fear?

Asha Mahesh:

When you say fear, is it the fear of using AI, or building the trust? The fear that it's going to take my job?

Christopher Hutchins:

My job.

Asha Mahesh:

It's going to take my job. Yes. That fear is there. I think what I do, at least when I go talk to, for example, a scientist, saying we are bringing this AI that's going to do the next task, what we call designing a molecule or whatever it is, there are scientists who, that's their lifeblood. They do experiments, they studied biology for so many years, even the clinicians and all those things. I think you need to go with an attitude of saying how it is going to help you. I use the framework called what's in it for you, what's in it for me. That's the framework I usually apply.

Christopher Hutchins:

Most of us can relate to that.

Asha Mahesh:

We can relate to that. The messaging tends to be: this is an AI, it is going to do so-and-so for your job. But the focus should be on how it is helping them. Not something like, this is going to replace you. That's a different story. When they hear it is going to help them do their job better, they will be receptive to that. That's something that has worked for me over time. And also when I go to them, I go with the attitude of can I help you with something? What can I do to help you? What is in it for them? That actually helps them get the fear out. One example I want to give you. I was working with a clinical development leader who happens to be an MD. They have a lot of MDs and PhDs, they have a lot of knowledge in whatever this is, and we were bringing in this system that was going to answer all these clinical questions, and we were going to roll that out.

Christopher Hutchins:

Right.

Asha Mahesh:

One thing I went and assured, there is no way this system is going to do what you can do, because you've gone to school. How many years is it? Ten years of school with all this practice that you have. There is no way anything can do even half of what you can do, or that knowledge. But at the same time, it's going to eliminate the tedious work that someone would do. So I would go with that attitude of how it is going to help you, what it's going to do for you, versus saying it's going to do magical things. People have that fear. It's true with all of us. That it's going to replace our jobs. But the way I approach it is with that framework. Rather than saying it's going to do something magically, it's going to solve all your problems, which is not the reality to begin with. I think people understand that.

Christopher Hutchins:

I think you're right, but there's definitely something to that in really making it personal for them. What is in it for them? I love that because you're putting it back into the context of this really is about how we can help you, how this technology can come along and support what you need to do and make things easier for you. There's a conversation I've been having recently about how human-like we should make AI. Initially I thought, well, it's actually good in some ways if we can emulate certain things so that, for example, if someone's having a really bad day and they're running your doctor's office and you call them and they're stressed, they might be short. So having something that's even-keeled, I think that's an interesting use of AI. But the other thing that concerns me even more is people have come to trust technology so much that they're not aware of the risks that they take now. The things that they'll put online that they probably should never put online because they're just comfortable. It's just become second nature. But it was only in the early 1990s that any of this even became possible. I think we've got to have some guardrails. Not quite sure where to draw the line, but how much do we really want it to feel human? Because people can get way too comfortable way too fast. They've already done that. And I think we've really not done a great job with that balance yet.

Asha Mahesh:

In terms of humanizing and all those things, like every solution or every product that we design, we want to bring in a human aspect, human-centered design. Is it going to resonate with a human at the end of the day? In terms of that, there still needs to be a human touch. You can humanize everything, but at the same time, one example I would give is let's say a patient goes to a hospital, and there are many times you can have a mostly automated robot treating you. But ultimately, we all see the placebo effect. The placebo effect is a real thing. Ultimately what really matters is someone holding their hand and saying, you're going to be okay. That carries a long way more than anything else that you can do at that point in time. I feel like no matter what you do with a machine, that human touch still needs to be there. In the critical aspects of humans, whether it's health or when you're in a situation where you actually need a little bit of support, someone holding your hand, those are the aspects I don't think we can replace with AI. That will still be there. One other thing I want to mention. When we design these AI-based diagnostics and clinical decision support systems, we always say human in the loop. There is a clinician in the loop, they're the ones looking at the output and making a decision. But the regulators come back and say, a human can get complacent at some point. They will start trusting that. If it makes a mistake, they may not know it's a mistake. They're going to start following it. What kind of guardrails are you going to put in place in order for humans to still do the critical thinking and see what's right, what's not right? Those are the aspects that we have to consider when we do it at the level of clinical decision support and diagnostics.

Christopher Hutchins:

I agree with you, I appreciate that so much. Asha, I can't thank you enough for joining me. It's been a pleasure chatting with you, and I hope that we can stay in touch. I'd love to dig into more of this. I think as things evolve, there's going to be a lot more to talk about in this space.

Asha Mahesh:

Thanks for having me.

Christopher Hutchins:

That's it for this episode of the Signal Room. If today's conversation sparks something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.