The Signal Room | AI in Healthcare & Ethical AI
Welcome to The Signal Room, your go-to podcast for expert insights on ethical AI, AI strategy, and AI governance in healthcare and beyond. Hosted by Chris Hutchins, this show explores leadership strategies, responsible AI development, and real-world implementation challenges faced by healthcare AI leaders. Each episode features deep conversations covering healthcare AI innovation, executive decision-making, regulatory compliance, and how to build trustworthy AI systems that transform clinical and operational realities.
Whether you are an AI strategist, healthcare executive, or AI enthusiast committed to ethical leadership, The Signal Room equips you with the knowledge and tools to lead AI transformation effectively and responsibly.
Join us to learn from industry experts and healthcare leaders navigating the evolving landscape of AI governance, leadership ethics, and AI readiness.
Follow The Signal Room and stay updated on the latest trends shaping the future of ethical AI and healthcare innovation.
Healthcare AI Security Risks and Cybersecurity Leadership Challenges | Anitha Mareedu
Healthcare AI security risks demand attention as organizations deploy AI systems without fully understanding how to protect them. Anitha Mareedu, a cybersecurity and network security professional with experience spanning advertising, social media, security companies, and chip design, brings a holistic view of security from layer one through layer seven. Her message is clear: deploying AI without securing it is deploying a vulnerability.
In this episode of The Signal Room, recorded live at Planet Hollywood in Las Vegas during the Put Data First Conference, host Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants, sits down with Anitha to discuss how cybersecurity must evolve alongside AI adoption. She explains the CIA triad of confidentiality, integrity, and availability and how different industries prioritize each dimension differently. Government agencies prioritize confidentiality. Private organizations and B2B businesses prioritize availability. Healthcare and financial services must protect personally identifiable information under frameworks like HIPAA, NIST, and SOC.
The conversation covers why agentic AI creates new security challenges around access control and deployment boundaries, the compounding risk landscape that organizations face as AI capabilities accelerate, and why the cybersecurity talent pipeline needs to grow to match the pace of AI advancement. Anitha shares her own journey from electrical engineering through network engineering to cybersecurity, working across endpoint security, firewalls from Cisco to Palo Alto to Checkpoint, IDPS tools, and user identification systems. Her advice for anyone entering the field: question what genuinely interests you, because the technology changes every year now rather than every five or ten, and continuous self-directed learning is the only way to keep pace.
About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.
Website: https://www.hutchinsdatastrategy.com
LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/
YouTube: https://www.youtube.com/@ChrisHutchinsAi
Book Chris to speak: https://www.chrisjhutchins.com
So we've seen over the last few years that the world of AI, and the work going on around it, is very much booming. It's moving at a really fast pace.
Christopher Hutchins: But we have to balance the risks that come with it, because not everyone's trying to design for good. Well, today I am excited to be joined by Anitha Mareedu, and we are here at Planet Hollywood, Las Vegas, Nevada, for the Put Data First Conference. You're smiling big; you had a big smile when you walked over here, so I know that you're with your people too. I've been really excited, looking forward to coming here and meeting people like yourself, because there's so much buzz around AI, how we're using it, and the cybersecurity aspects of it. Interestingly enough, I've had a couple of opportunities to respond to questions from journalists because of some recent activities in cyber, and it's given me an even deeper appreciation. I've always been appreciative of people working in the cyber space, because as a chief data officer in my previous roles, if I didn't have people worrying about that stuff, I don't think I would have slept at night. It's just a really important area. So as we get started in our conversation, the whole week we're here is all about AI. What was it about this particular event that got you interested in coming out here? What are some things you're wanting to learn and take away from your time here? And what are some areas of concern where you might have an opportunity to help influence and shape things while we're all here?
Anitha Mareedu: Yeah. Firstly, thank you so much for having me here. I really appreciate it, and this event is going great. I'm here to learn more about AI and how it can help my space, the industry I'm working in, to network with people, and to grasp as much as I can, since this event is mainly focused on data and AI. In my area of cybersecurity and network security, things have become even more challenging with all the AI activity going on around the world. So yeah, I'm here to learn and meet great people like yourself.
Christopher Hutchins: Well, I'm excited about the promise we have here with AI. But think about some of the things I've heard over the last couple of years. I didn't even know this until someone forwarded me an article, but just as we were coming out of the worst part of the pandemic, one of the biggest cyber attacks in the US happened. And that's just one example of how pervasive the nefarious activity in cyber can be. I'm not sure if you're familiar with the particular incident I'm referring to, but attackers fired messages at the NIH from literally millions of access points. Apparently it didn't take anything offline, but it was clearly designed to bombard the system and probe for vulnerabilities. Tell me a little bit about what got you interested in cybersecurity, and share some of your observations since you've gotten into it. How should we be thinking about it, and how do we advise our friends and colleagues working in the AI space? It's exciting, but we have to balance the risks that come with it, because not everyone's trying to design for good.
Anitha Mareedu: Yeah, absolutely. That's a great question. To answer your first question, what got me interested in this field: I started my career as a network engineer, because I took some networking courses during my master's. I'm an electrical engineer by training, and I did my thesis and research in VLSI, but I also took courses in networking and security along the way, and earned some certifications in the field. That interested me. Once I started working as a network engineer, I saw how much we can do with networks: we can talk from anywhere in the world by creating routes, all the way from layer one to layer four. Then layer seven and the security aspects intrigued me, because as time has gone on, we see so many applications around the world. Application security has become very prominent in recent times, whether we call it app security or API security. The breaches and attacks that keep happening, and how we can defend against them, is what drew me in. So I started training myself through certifications and online courses, and that work got me into the field of security. I have worked in a few areas of security: the endpoint side, and network security, mostly on the defense side, working with firewalls from different vendors such as Cisco, Palo Alto, Checkpoint, and Juniper, to name a few. I have also worked across different layers of security, from preventing attacks and breaches with IDPS tools, to everything from user identification and user ID through network security and the endpoint side.
So basically I have had the experience of working with a holistic view of the whole ecosystem of security, from layer one to layer seven, and from user ID out to the perimeter. When we talk about AI, agentic AI, or LLM models, we've seen over the last few years that the world of AI is booming and moving at a very fast pace. A lot of work that used to take hours can now be done in minutes by deploying AI. But at the same time, we need to be vigilant about where we are deploying it, how we are deploying it, and whether it is really needed. And if it is needed, how can we also secure it? Because at the end of the day, security matters a lot: if we are deploying some kind of agentic AI, or any kind of AI, and just giving it the power to act, then we also need to secure it and decide how much access we give to what.
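[Editor's note: Anitha's point about deciding "how much access we give to what" is the least-privilege principle applied to agentic AI. The sketch below is a hypothetical illustration, not anything discussed in the episode: every tool call an agent attempts is checked against an explicit per-agent allow-list, and anything not granted is denied by default. The agent and tool names are invented for illustration.]

```python
# Hypothetical least-privilege gate for agentic AI tool calls.
# Default-deny: an agent may only invoke tools it was explicitly granted.

AGENT_PERMISSIONS = {
    "billing-agent": {"read_invoice", "send_reminder"},
    "triage-agent": {"read_ticket"},
}

def authorize(agent_id: str, tool: str) -> bool:
    """Return True only if this agent was explicitly granted this tool."""
    return tool in AGENT_PERMISSIONS.get(agent_id, set())

def call_tool(agent_id: str, tool: str) -> str:
    """Execute a tool on the agent's behalf, refusing ungranted access."""
    if not authorize(agent_id, tool):
        # Deny and surface the violation rather than silently widening access.
        raise PermissionError(f"{agent_id} is not allowed to use {tool}")
    return f"{tool} executed for {agent_id}"
```

The key design choice is that an unknown agent gets an empty permission set, so new deployments start with zero access and must be granted capabilities deliberately.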
Christopher Hutchins: So you're touching on a really important thing: how much data, or what data, can we give it? When I've thought about bias, for example... and don't misunderstand, I know there are people actively trying to breach systems and do things it's beyond my imagination why they would want to; clearly somebody is either making them very wealthy or they just have a very different mindset than I do. But the piece that concerns me is the national security perspective, for example: there's information that you don't necessarily want everybody to have. I think back to when the first Gulf War was about to unfold and we first heard about stealth technology. By the time the American public became aware of it, the technology was over 20 years old, closer to 30. It's a good example of something you don't necessarily want exposed, and you don't want to put it in just any model. But as we think about AI from a general perspective, are there industries we should be concerned about, ones that tend to be a lot less stringent about security than others? What are some of the industries of concern at this point? From your experience, what are you seeing, and where should we be looking, paying attention, and learning, whether they're making good decisions or making mistakes?
Anitha Mareedu: Yeah, that's a great question. I have worked in different industries: advertising product companies, social media companies, a security company itself, and now a chip design company. In my experience, if we talk about the CIA triad in security, confidentiality, integrity, and availability, each sector prioritizes a different dimension. If you're talking about the government sector, confidentiality matters a lot. Integrity, of course, is important for any security principle, because the data has to stay intact. And availability mostly matters for private organizations and B2B or B2C businesses: you need the data available all the time, with no downtime. So depending on the company, organization, or sector you're in, it matters what type of data you need to protect. In a health organization, frameworks like HIPAA come into the picture: we need to make sure privacy is not being compromised, and that PII, personally identifiable information, is not exposed. The same goes for the finance and banking sectors, where you hold a lot of user data, just as in healthcare. So you need to be clear about what access you're granting and how much protection you're putting in place: the policies, the best practices, and the standards you need to follow, whether that's NIST, HIPAA, or SOC. That matters a lot depending on the industry you're in, or the industry you're partnering with and the customers you're integrating with. We also need to comply with those principles and standards depending on the sector.
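[Editor's note: to make the PII-protection point concrete, here is a minimal, hypothetical masking pass, not anything described in the episode and not a complete HIPAA de-identification scheme. The field names and the SSN pattern are illustrative assumptions. The idea: before records leave a trusted boundary, fields classified as PII are replaced with placeholders, so downstream systems, including AI pipelines, never see the raw values.]

```python
import re

# Hypothetical PII masking pass. Field names and the SSN regex are
# illustrative only; real HIPAA de-identification covers far more.
PII_FIELDS = {"name", "ssn", "email"}
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_record(record: dict) -> dict:
    """Return a copy of the record with PII replaced by placeholders."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            masked[key] = "[REDACTED]"
        elif isinstance(value, str):
            # Also catch SSN-shaped strings that leak into free-text fields.
            masked[key] = SSN_RE.sub("[REDACTED]", value)
        else:
            masked[key] = value
    return masked
```

Masking by field name handles the structured case; the regex pass is a backstop for identifiers that slip into free text, which is where leaks often happen in practice.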
Christopher Hutchins: Yeah, the adaptability and the nuance across different industries, I can imagine that's a big challenge as you try to find the right solutions. You talked about educating yourself, and I'm glad you said that, because I really want people to hear what they should be thinking about doing for themselves. There's a massive fear factor out there. People are terrified they're gonna lose their jobs to AI. And there are young people who don't know whether to be excited or scared; they get excited, but their parents are probably trying to put guardrails around them and get them to be careful. What would you tell people they should be doing if they realize this is here to stay? What can they do? Especially since you seem passionate about what you do, maybe you can inspire some folks who listen to our podcast. What would you tell them from your own experience, and what can they do to really take advantage of the moment we're in? This is probably at least ten times as transformative as the internet coming on the scene in the 90s. Massive disruption.
Anitha Mareedu: Yeah, I totally agree with you. For new grads who are just graduating, people who are trying to further their education, or really anyone around the world: we know the United States is at the forefront of starting new things, and we know Silicon Valley is where everything starts. We can already see big tech companies hiring heavily in AI rather than in traditional roles, even in areas where AI can take over tasks, the idea of one person doing ten things using AI technology. So to anyone interested in pursuing this, whether it's AI, developing new models or languages, security, finance, or any other sector, I would first ask them to question where their interest lies. I would also tell students to keep up with what's happening, because the technology now changes every year or two. It used to be every five or ten years; now something new comes out every year or two, and you need to keep learning all the time. There are many platforms out there. Even for me at the beginning, although my master's degree was in electrical engineering, I taught myself to get into cybersecurity. In the same way, whatever intrigues them, I highly recommend they teach themselves, because whatever is coming out right now is new to everyone. Not everyone knows everything, and the more we learn, the more we know than the next person. That's what my recommendation would be.
Christopher Hutchins: I love that. I think we need to inspire people to go into this field, primarily because the risk seems to be compounding at a rate that's a little concerning. But there's so much enthusiasm. And I'm particularly glad you came to this event, because we have to have people like yourself trying to figure out how we can say yes to this capability and make sure we're using it well. There are plenty of folks who will want us to pump the brakes, and I understand that. But the reality is somebody is gonna keep pushing the ball forward; it's gonna continue to advance. We can either be part of making sure it goes in a direction we can live with, or we can live with regret because we fell asleep at the wheel and watched everything go sideways. That's not a great place to set up camp, I don't think. Well, Anitha, just to wrap up: if people wanted to get in touch with you, especially people wondering where to start learning about cybersecurity, because you seem excited and you're having fun, how can they reach you?
Anitha Mareedu: You can reach me on LinkedIn or through email. But yeah, I'm always available on LinkedIn.
Christopher Hutchins: That's fantastic. I really appreciate you coming by and having a conversation with me. I don't know what the final insight package will look like walking away from this event this week, but it's off to a great start, and I can't wait to hear from all the folks here and meet more people like yourself. This is the best place to be anywhere, I think, if you're interested in AI, technology, and cyber compliance. If you're an AI person, this is a little slice of heaven for a bit.
Anitha Mareedu: Oh yeah, definitely. And it's my pleasure meeting you. Thank you so much for this opportunity.
Christopher Hutchins: Anitha, thank you so much for coming on the Signal Room.
Anitha Mareedu: Thank you, thank you so much.
Christopher Hutchins: That's it for this episode of the Signal Room. If today's conversation sparked something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time: stay tuned, stay curious, and stay human.