The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
Data Governance and Trust Infrastructure for Scaling AI | Amit Shivpuja
Amit Shivpuja reveals the governance and data lineage infrastructure essential for healthcare AI trust and leadership accountability.
Amit, as Director of Data and AI Enablement at Walmart, exposes the systems and practices that operate beneath the surface of every trustworthy AI deployment. The conversation covers data governance architecture, data lineage, and the often-invisible infrastructure that determines whether stakeholders can actually trust what AI systems produce.
Trust in AI systems is not built through marketing or executive proclamations; it is built through governance infrastructure that most stakeholders never see. Data lineage is the foundation of accountability in any AI system — when stakeholders cannot trace where data came from and how it was transformed, they cannot assess whether AI recommendations are trustworthy.
Topics covered: data governance architecture, data lineage and accountability, trust infrastructure for AI systems, enterprise data strategy, and why organizations that skip unglamorous governance work build systems that appear sophisticated while remaining fundamentally unreliable.
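The data lineage idea discussed in this episode, that stakeholders should be able to trace where data came from and how it was transformed, can be illustrated with a minimal sketch. This is a hypothetical example written for these show notes, not anything from the episode or from Walmart's systems; the `Dataset` class and the step names are invented. Each transformation appends to a provenance trail, so any output can be traced back to its raw source:

```python
from dataclasses import dataclass, field


@dataclass
class Dataset:
    """A dataset that carries its own provenance trail."""
    name: str
    source: str
    lineage: list = field(default_factory=list)

    def transform(self, step: str) -> "Dataset":
        # Return a new Dataset, recording this transformation
        # so downstream consumers can audit how it was derived.
        return Dataset(self.name, self.source, self.lineage + [step])

    def trace(self) -> str:
        # Full path from raw source to current state.
        return " -> ".join([self.source] + self.lineage)


# Hypothetical pipeline: every step is captured, nothing is invisible.
claims = Dataset("claims", "ehr_export_2024")
model_input = claims.transform("deduplicate").transform("impute_missing_ages")
print(model_input.trace())  # ehr_export_2024 -> deduplicate -> impute_missing_ages
```

Real lineage systems (e.g. those built on the OpenLineage specification) track far more, such as job runs, schemas, and owners, but the accountability principle is the same: if the trail is missing, the output cannot be audited.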
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
If we were to equate AI to launching rockets, data's the launch pad, right? You need a good launching infrastructure in place. Then you can launch as many rockets as you want. A quote that I really like is: garbage in is garbage squared out. It's basically making sure you can minimize biases, making sure you're transparent and clear about what data you have. The other piece of it is we have to get our data right. We've kicked the can, forgive me for saying so, down the road too many times.
Christopher Hutchins: Well, I have a special treat for you all today. I am live on location at Planet Hollywood in Las Vegas, Nevada for the Put Data First Conference. And joining me today on the Signal Room is a new friend, Amit Shivpuja. He is currently the Director of Data and AI Enablement for Walmart. I'm really happy to get to meet you at such an important event, where we're talking about AI, the currency of trust, and the importance of getting this foundation right, so that an ethical and responsible perspective is built into the fabric of how we're designing everything. You and I have talked a little bit about the topic we want to cover: the hidden infrastructure of trust. You said you can't scale AI without scaling trust. Tell me a little bit about what that means to you, and what does that look like in practice?
Amit Shivpuja: Sure. And first, thank you so much for having me; I welcome the opportunity. So if we were to equate AI to launching rockets, data's the launch pad, right? You need a good launching infrastructure in place. Then you can launch as many rockets as you want.
Christopher Hutchins: So it can be heat sensitive.
Amit Shivpuja: Well, it depends how you define heat. But the thing is that since AI learns patterns from the data, you want to get that as right as possible in order to minimize problems further on. A quote that I really like is: garbage in is garbage squared out. It's basically making sure you can minimize biases, making sure you're transparent and clear about what data you have, and making sure that there's accountability, because you don't want to deploy something and then have people come back and say, "What do we do now?" or "Who's going to fix this?" So it's all of those layers coming into play to make sure that the output that gets generated is actually trusted by the users. The thing with today's generative AI especially is that it's so easy to use, and people tend to put more trust in it because the friction is so low. And it's very easy to lose that trust, too, if it keeps giving wrong answers, whether due to hallucinations or the training data that was used. So building that trust becomes extremely important. It's transparency, it's accountability, and it's also showing where this came from, right?
Christopher Hutchins: So it's an interesting concept when you're talking about building trust once you've already got something in motion. But how do you think about it in terms of where you launch from? At what point do you start bringing in subject matter experts and the people on the front line who might actually use it? And how important is that?
Amit Shivpuja: I think you have to start that very early on, because a lot of the context comes from them, right? I wish we were in a world where everything was documented, but that's not the case. So you want to involve the users and the subject matter experts as early as you can. In fact, it's worthwhile to figure out your requirements, your exact needs, and what business impact it could potentially have even before you build anything. That way you actually have some direction, or some baseline against which you want to compare the outputs you get. Having them be part of the journey as early as possible is an extremely critical component. And the other benefit is that once you launch, they might become your biggest champions in the organization, helping with adoption, right? And also with figuring out the roadmap into the future.
Christopher Hutchins: Yeah, I think that's a really interesting point you're bringing up. Because I think trust can be lost before you even start. And I think it's really critical to understand that if you want human beings to benefit from and leverage this capability, they have to believe in it. And they're not gonna believe in it if they don't understand the purpose behind it and what it's intended to do.
Amit Shivpuja: There's a lot of change going on.
Christopher Hutchins: So in healthcare, one of the things that's always been problematic is that we often tend to design something first. It's almost what I call the Field of Dreams approach, except if you build it, they won't come. We assume that if we build something really cool and exciting, someone will obviously need it, but that's really not always the case. Talk about the sensitivity and the fear piece of it, if you wouldn't mind. How do you get people comfortable enough to highlight a workflow that might be a candidate for automation on some level, even if it's just basic automation, when they're also balancing the fear that AI could take their job?
Amit Shivpuja: Yeah. I mean, what impact it'll have from a job perspective is currently up for debate, right? There are camps on either side. But I think the way to alleviate the fear is, like I said, bringing them into the conversation early on, but also showing them value. Let's say they're spending 10 hours doing something manually, and it brings them down to one hour. That kind of manual effort, which can introduce errors, gets reduced. It helps them be more efficient. But also, I think it's good to have a conversation about what the saved time can be used for, right? I'm sure they have things they want to improve, things they want to do, or other challenges their teams need to address, which they can now focus on because of those efficiencies and that time. Now, that also means people have to become data literate and AI literate to some degree, right? Because that's the landscape. So the question becomes: how do you support them, how do you help them transition and pick that up? Not everybody needs to know all the technical details, but they need at least a baseline of what an LLM is, what it can do, what an agent is, and so on. So I think some of that time can be invested in upskilling and learning as well, so that they're ready for the next thing, because you may not want to lose that subject matter expertise. So yeah, I think that combination is what will work.
Christopher Hutchins: I think you're touching on something that people really need to be much more cognizant of. You can stay in a place of fear, of course, and that's the natural tendency. But the reality is you have the ability, you have the time now, to learn to use these capabilities for your own benefit. And I think AI will probably replace some jobs at some point. But what AI is not going to do is replace people who have embraced AI and are figuring out how to use it to their advantage and to contribute at a higher level. So you mentioned saving time, which is something I want to pause on for a second. As a leader, oftentimes when some kind of process improvement is implemented, an order comes down from leadership about how that time is to be used. What's your advice to leaders going through this kind of transition: how do they balance hearing and listening to the workforce closest to the action against issuing uninformed directives?
Amit Shivpuja: So I think it's twofold. One, for people like us who are the bridge between leadership and the people actually doing the work, it's good to bring them together to showcase what the options are. But what's worked for us is prioritizing based on business value or business impact. If what comes down from leadership can be shown to have business impact and a real need, then that takes priority. But if we can make a stronger case using the bottom-up feedback and examples we have, then that should take priority instead, right? It comes down to business value, and there are only four kinds: you can save time, you can save money, you can make money, or you can mitigate risk. So it's just a question of bucketing things that way and having that messaging, right? Then it's neutral, it's transparent, you can have a conversation, and you can tie it to the organization's goals. But what I've found is that a lot of the top-down stuff is more "hey, we need to transition toward this or adopt this." It's not so much "do this," right?
Christopher Hutchins: I want to talk about the trust thing just a little bit more. Trust is something that typically takes time to build. As you're working through something that is disruptive, because it tends to feel that way, what are the early warning signs we should look for that trust is starting to erode a little bit?
Amit Shivpuja: I think some of it is just the reactions you're getting, right? If you're keeping stakeholders in the loop frequently, you build a prototype, you get feedback, and so on. The signals you get in terms of usage, in terms of feedback, the quality of the feedback. Even if you release the first version, put it out there, and see how it's adopted, those are all key signals. But as leaders, if we have the right relationships, we'll get those signals directly as well, right? If they trust you, they'll say, "Hey, I don't want to do this," kind of a thing. So that perspective is also really valuable. So yeah, it's being constantly in contact with the stakeholders and the enablers. It's not a flyby activity.
Christopher Hutchins: No.
Amit Shivpuja: No, because somebody has to support this going forward.
Christopher Hutchins: That's right.
Amit Shivpuja: Especially something like an agent, it will drift, it will do something it's not supposed to do. What do you do in that situation? It's not a fire-and-forget kind of thing, right?
Christopher Hutchins: Yeah. So let me tell you a little bit about where that's coming from. I actually had a conversation with a clinical psychologist very recently, and we were talking about trust and the erosion of it. I think intuitively we know that things have shifted in society, here in the U.S. in particular. But the numbers are staggering in terms of how few people actually trust some of our basic institutions. Twenty years ago, if you'd asked people whether they trusted their government, at least eight out of ten would have said yes. Today that's different. And you can apply that to almost any societal role someone plays, whether it's clergy, law enforcement, or whatever; it's across the board. But people not trusting the leadership of the organization they work for, or not even trusting their colleagues, is something we have to be cognizant of. So I really love the fact that you're bringing everything back to the human relationship and the need for constant monitoring and open, back-and-forth dialogue, so that we're not missing those signals. As we wrap up our time: if you were going to advise an organization, and we can pick any, healthcare, retail, any organization, we're at this amazing pivot point, facing disruption that's probably 10x or 100x what the internet was in the 90s. What are two or three things you would encourage leaders to be really mindful of as they try to navigate and lead their organizations through this transition?
Amit Shivpuja: One that comes to mind, building off the trust conversation we had, is the trust-but-verify piece, right? Trust the technology, trust the teams to deliver, but verify whether it really does what it's supposed to do. It's really key to do that, especially given the amount of automated capability these tools have. Because unlike the internet, AI is in some cases building the tool itself; it's not just doing something faster. It has its own capabilities. The other piece is that we have to get our data right. We've kicked the can, forgive me for saying so, down the road too many times. And even in an idealistic scenario where we get the data right, we're still going to have biases. We're still going to have patterns that occur because of the way the business is set up, or the data that's collected, and the like. So there's still a lot of work to be done. But if you don't do this, then you can't do that. The third, I'd say, is that human in the loop is non-negotiable.
Christopher Hutchins: That's important. I love that.
Amit Shivpuja: Whether it's verifying things or having them involved in the process, at least for some time we're building these things for human beings. Somebody has to represent the human beings in the process. So that's the third thing I'd say.
Christopher Hutchins: Oh, that's outstanding. Well, if someone has heard you talking today and wants to have an engaging conversation about how to proceed on their own AI journey, how do they get in touch with you?
Amit Shivpuja: Sure. The best way is LinkedIn; you'll find me there. But if people are interested in my thinking and a couple of the other frameworks I use personally, I put out a book called The Data and AI Compass. It's available on Amazon in both softcover and Kindle versions. And I also blog quite often on my Substack, which is datacompass.substack.com.
Christopher Hutchins: Fantastic. For my listeners, I'll make sure all of these links are available on our site and in the show notes. It's been a pleasure speaking with you today. I'm excited about this event; I think we're gonna have some more fun, and it's already been a good start. Thanks again for being on the Signal Room. Appreciate you. That's it for this episode of the Signal Room. If today's conversation sparked something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.