The Signal Room | Healthcare AI Strategy & Governance

No Alerts, Still Breached: Ethical Leadership in AI Security and Why Undetected Cyberattacks Threaten Healthcare AI Governance | Guman Chauhan

Chris Hutchins | Healthcare AI Strategy, Readiness & Governance | Season 1, Episode 16

AI governance and cybersecurity fail together when healthcare AI systems expand the threat surface — Guman Chauhan on ethical leadership and undetected AI breaches.

AI systems expand the healthcare threat surface in ways traditional security tools were not designed to detect. Guman Chauhan joins Chris Hutchins to examine the intersection of AI governance, cybersecurity, and ethical leadership in healthcare, focusing on the most dangerous category of breach: the one that never triggers an alert. Health data privacy, AI governance policy, and security maturity all land in the same leadership conversation when undetected breaches are the real risk.

What We Cover

  • Why undetected cyberattacks on AI systems are the most dangerous class of threat, and why existing monitoring tools miss them
  • Where AI governance policy for healthcare HIPAA environments needs to account for the AI-specific threat surface, not just the traditional data stack
  • How ethical leadership in AI and ethical leadership in healthcare are the same conversation once security is involved
  • What alert fatigue, SIEM gaps, EDR blind spots, and SOC coverage look like when AI pipelines are in scope
  • Why security-first architecture is a leadership position, not a tooling decision

Key Takeaways

  • AI governance that does not account for undetected breaches is incomplete. The dangerous attacks are the ones your dashboards never register.
  • Ethical leadership in AI starts with taking security seriously before an incident forces the conversation. Healthcare organizations with the strongest AI security postures are the ones where cybersecurity has a seat at the strategy table.
  • Health data privacy is an AI governance problem, not only an IT problem. Every AI system inherits the data it trains on, and every breach in that data is an ethical failure, not just a technical one.

Frameworks & Tools Mentioned

  • AI governance policy templates aligned to healthcare HIPAA
  • SIEM, EDR, and SOC coverage in AI-enabled environments
  • Penetration testing for AI pipelines and data stores
  • Incident response frameworks for undetected breaches
  • Security maturity models for healthcare AI organizations

Timestamps

  • 00:00 The cybersecurity blind spot in AI governance
  • 03:30 Why your AI monitoring tools miss the most dangerous threats
  • 10:00 Ethical leadership in AI security: who is accountable?
  • 17:00 Healthcare AI systems as high-value cyber targets
  • 23:30 AI governance frameworks that account for undetected breaches
  • 30:00 Responsible AI development with security-first architecture
  • 36:00 AI regulation and compliance in an evolving threat landscape
  • 42:00 Leadership strategies for building resilient AI systems

About Guman Chauhan

Guman Chauhan works at the intersection of healthcare AI, cybersecurity governance, and ethical leadership. His focus is on helping healthcare organizations move from assumed security to proven security: validating detection, testing response, and building governance that accounts for the breaches no dashboard registers.

About The Signal Room: The Signal Room is a podcast and communications platform exploring leadership, ethics, and innovation in healthcare and artificial intelligence. Hosted by Christopher Hutchins, Founder and CEO of Hutchins Data Strategy Consultants. Leadership, ethics, and innovation, amplified.


Website: https://www.hutchinsdatastrategy.com 

LinkedIn: https://www.linkedin.com/in/chutchins-healthcare/ 

YouTube: https://www.youtube.com/@ChrisHutchinsAi

Book Chris to speak:  https://www.chrisjhutchins.com

Christopher Hutchins: The tagline for my company:

humanizing AI for care. We've talked about how healthcare needs to be emotionally ready before it can be technologically ready. How people feel safe, seen, and empowered is how change happens.

Christopher Hutchins:

Hundreds of thousands of dollars on the data that powers the technology.

Guman C:

The most dangerous situation isn't being breached. It's being breached and not knowing it. No alert doesn't mean no breach; it often means you are not looking for the right signal. In fact, technology accounts for only about 30 to 40 percent of the problem. But most organizations already have all the data.

Christopher Hutchins:

Organizations often measure cybersecurity maturity by policies, tools, and compliance checklists, but history shows that breaches are inevitable. The real differentiator isn't whether an organization is breached, it's whether it knows when that breach happens and how effectively it responds. Guman, you said that there are only two kinds of organizations, those that are breached and know it, and those that are breached and don't. And you've been very clear that the second group is far more dangerous. So today I want to explore what really separates assumed security from proven security, why so many breaches go undetected, and what leaders should actually be testing. Guman, welcome to the Signal Room. Excited to have you here. We talked about this recently, but there's just a whole lot of activity and buzz around AI. But there's some other things that are probably a little bit more important to make sure that we're in a position where we can be using it safely. Of course, that really is around data protection, data privacy. Really the space that you're in. I really want to make sure that we delve into some good content today because people really need to understand what the stakes are. So when you ask organizations, are you sure you would know if you were breached? What do you hear most often from your clients?

Guman C:

I think that's one of the uncomfortable questions. When I work with an organization and ask it, they say yes, we would know. And they say it with a lot of confidence, and they point to their tools, their security operations center, or their dashboard full of green check marks. But when I follow up with, when was the last time you tested that assumption, that's where the silence starts. In real incidents, I have seen leadership believe detection was solid, yet the attacker had already been inside for weeks or months using stolen credentials. Some of the largest breaches in the world went undetected for months or even years. The attackers were already inside, quietly moving laterally, exfiltrating data, and doing the work they came to do. In those scenarios there is no malware and there are no noisy exploits. Everything looks like legitimate access doing legitimate things, and no one notices until a third party or a customer raises the alarm.

Christopher Hutchins:

That's the last place you want to hear about it from, from someone who's discovered that there's a problem, especially a client. You've talked about the different situations, and you said the most dangerous one is being breached and not even knowing it. Why is that worse than a breach that's already been identified?

Guman C: I often say the most dangerous situation isn't being breached. It's being breached and not knowing it. And the reason is simple:

damage compounds silently. When a breach is identified early, the organization can act. They detect suspicious activity, trigger alerts that are actually actionable, activate their incident response plan, contain the threat, and notify stakeholders and customers. Most importantly, they learn from the incident. Over time they strengthen controls, optimize their detection rules, close visibility gaps, improve response times, and retrain their teams based on the lessons learned. In that scenario, a breach is painful, but it becomes a catalyst for maturity. Now compare that to the more dangerous scenario, when a breach goes undetected. In an unknown breach, attackers have time, and in cybersecurity, time is everything. The organization assumes silence means safety while attackers quietly move laterally, as we discussed, and do their work. One of the biggest challenges in our field is that attackers can use valid credentials, so there is very little for your tools to flag. An organization that cannot identify the breach never does its due diligence. The data is still being used by the attacker. The organization never reaches out to customers, so customers never change their passwords or set up MFA, because they are totally unaware and unprotected. Regulators often learn about this kind of incident long after the attackers have finished their work.

Christopher Hutchins:

Right.

Guman C:

Because they never run any incident response. No forensic investigation, no due diligence, and worst of all, no learning from the incident. That's why I always emphasize that visibility matters more than perfection: the breach you can see is painful, but the breach you can't see is devastating.

Christopher Hutchins:

That's really interesting, because when you talk about things going undetected, there are so many ways now for people to get their hands on credentials. It seems even more likely now that this is how attackers are getting in behind the firewall, because it's so hard to detect. Is that what you're experiencing?

Guman C:

Yes, correct.

Christopher Hutchins:

Historically, at least in the organizations I've been exposed to, there are alert mechanisms for breaches and things like that. But oftentimes I think we have a false sense of security. Why do you think organizations stop there and figure that since they're not getting any alerts, they're fine and there's no breach going on?

Guman C:

Because alerts are often treated as a proxy for truth in our industry. If the dashboards are quiet, leadership assumes everything is fine. But attackers understand this better than anyone, and they design attacks specifically to stay invisible. Instead of deploying malware, they use valid credentials, phishing, deepfakes, and other social engineering tactics. Once they are inside your network, they access data during normal business hours and stay within expected user thresholds, because they know exactly what will trigger an alert in your tools. That's how sophisticated they are. You never get an alert because they deliberately behave in ways that avoid triggering detection; they know how these tools work. We have seen this repeatedly in cloud breaches where attackers used admin credentials or service accounts with no expiry date and operated exactly like legitimate administrators. You find no anomalies, no alarms, and no alerts, because it is a valid user account. So no alert doesn't mean no breach. It often means you are not looking for the right signals. And don't forget that attackers have access to more sophisticated tools in the age of AI, so it will always be a cat-and-mouse race.
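
To make this concrete, here is a minimal sketch, in Python, of the kind of behavioral check this implies: flagging successful logins by valid accounts that fall outside the hours and source networks that account normally uses. The event format, field names, and thresholds are illustrative assumptions, not any particular SIEM's schema.

```python
from collections import defaultdict
from datetime import datetime

def build_baseline(historical_events):
    """Record the hours and source subnets each account normally uses."""
    baseline = defaultdict(lambda: {"hours": set(), "subnets": set()})
    for e in historical_events:
        hour = datetime.fromisoformat(e["timestamp"]).hour
        baseline[e["user"]]["hours"].add(hour)
        baseline[e["user"]]["subnets"].add(e["source_ip"].rsplit(".", 1)[0])
    return baseline

def score_login(event, baseline):
    """Return risk reasons for a successful login; an empty list means it looks normal."""
    profile = baseline.get(event["user"])
    if profile is None:
        return ["no baseline for this account (new or dormant credential)"]
    reasons = []
    hour = datetime.fromisoformat(event["timestamp"]).hour
    if hour not in profile["hours"]:
        reasons.append(f"login at unusual hour {hour}:00")
    if event["source_ip"].rsplit(".", 1)[0] not in profile["subnets"]:
        reasons.append(f"login from unfamiliar subnet {event['source_ip']}")
    return reasons

# Hypothetical example: a service account that normally logs in during business hours
history = [{"user": "svc-backup", "timestamp": "2024-05-01T09:10:00", "source_ip": "10.8.22.41"}]
probe = {"user": "svc-backup", "timestamp": "2024-05-02T03:14:00", "source_ip": "172.16.9.7"}
print(score_login(probe, build_baseline(history)))
```

The point of a sketch like this is that it scores authenticated, "legitimate" activity rather than waiting for malware signatures, which is exactly the category of attack described above.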

Christopher Hutchins:

Are there ways now that they can actually imitate or fake an IP address? Because I know that a lot of the technologies that are used to prevent a breach really are tied to a specific range of IP addresses within an organization or even where an employee might actually be accessing from. Is there a lot of that kind of capability out there?

Guman C:

In terms of IP addresses, it's not easy to fake or spoof one, because those addresses are typically tied to a domain. If you have a public-facing application and someone wants to mimic it using another IP address, or the same one, that's not really possible. What they can do is register a similar domain with a small change, like a hyphen, a dot, or an extra character, and that might trick you. That's part of the phishing game attackers are playing.
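
A small, hedged illustration of the lookalike-domain trick described here: compare an observed domain against your legitimate ones and flag near misses such as an inserted hyphen, dot, or extra character. The domains and the similarity threshold below are hypothetical.

```python
import difflib

LEGITIMATE_DOMAINS = ["examplehealth.org", "portal.examplehealth.org"]  # hypothetical

def _normalize(domain: str) -> str:
    # Strip the separators attackers typically add or move around
    return domain.lower().replace("-", "").replace(".", "")

def is_lookalike(observed: str, legit=LEGITIMATE_DOMAINS, threshold=0.85) -> bool:
    """True if the observed domain closely resembles, but is not, a legitimate one."""
    if observed.lower() in (d.lower() for d in legit):
        return False  # the real domain is not a lookalike
    obs = _normalize(observed)
    return any(
        difflib.SequenceMatcher(None, obs, _normalize(d)).ratio() >= threshold
        for d in legit
    )

print(is_lookalike("example-health.org"))  # True: hyphen inserted into the real name
print(is_lookalike("examplehealth.org"))   # False: exact match with the real domain
print(is_lookalike("unrelated.com"))       # False: nothing like the legitimate domains
```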

Christopher Hutchins:

Interesting. So that's actually a way that you could think about tightening up your detection if you're paying attention to the IP ranges.

Guman C:

Correct. Yeah.

Christopher Hutchins:

Interesting. We just talked about one example, but what are some of the other common reasons that breaches go undetected for extended periods of time? I know every once in a while we'll hear of one that's been ongoing, and by the time it's detected, there's just a lot of damage done.

Guman C:

When breaches go undetected for weeks or months, it's rarely because the attackers are extraordinarily sophisticated. It's usually because they operate in a way that looks normal. The most common reason I see, as we discussed, is the use of valid accounts, working as a legitimate user in your network. In cases like Colonial Pipeline and countless cloud breaches, authentication succeeded, so no security tool flagged anything; everything was assumed to be fine. The second reason is attackers deliberately blending into normal business activity. The SolarWinds breach is a prime example, where attackers looked like trusted administrators for months. Third, I would say alert fatigue. Security teams receive thousands of alerts every day and can't tell which are real and which are false positives. I also personally see breaches go undetected because of blind spots: unknown assets, shadow IT, not collecting enough information or logs from devices and the network, excessive cloud permissions, and third-party access. The Okta and MOVEit breaches are reminders that what you can't see, you can't protect. You have to understand exactly what your assets are; you can't protect what you don't know you have.

Christopher Hutchins:

Right.

Guman C:

And also make sure you are collecting enough logs for your tools. There's no point in buying a fancy tool if it doesn't have enough information to identify anomalies or suspicious activity in your network. We see that, much of the time, organizations don't detect breaches themselves at all. They learn about them from banks, customers, regulators, or law enforcement, and that's a clear sign detection has failed. The pattern is consistent: breaches go undetected not because security teams don't care, but because visibility isn't validated, alerts aren't trusted, and response is delayed.
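
To make the visibility point tangible, here is a minimal sketch of a log-coverage check: reconcile the asset inventory against the hosts that have actually shipped logs recently, so unknown or silent log sources become an explicit list. The data shapes and host names are assumptions; in practice the inputs would come from a CMDB export and a SIEM source query.

```python
from datetime import datetime, timedelta

def coverage_report(inventory, log_sources, max_silence_hours=24, now=None):
    """Return assets that have never logged and assets whose logs have gone quiet."""
    now = now or datetime.utcnow()
    last_seen = {s["host"]: datetime.fromisoformat(s["last_event"]) for s in log_sources}
    never_logged, gone_silent = [], []
    for asset in inventory:
        seen = last_seen.get(asset["host"])
        if seen is None:
            never_logged.append(asset["host"])           # no logs at all
        elif now - seen > timedelta(hours=max_silence_hours):
            gone_silent.append(asset["host"])            # was logging, then stopped
    return {"never_logged": never_logged, "gone_silent": gone_silent}

# Hypothetical hosts
inventory = [{"host": "emr-db-01"}, {"host": "pacs-gw-02"}, {"host": "hr-portal"}]
log_sources = [
    {"host": "emr-db-01", "last_event": "2024-05-02T08:00:00"},
    {"host": "hr-portal", "last_event": "2024-04-20T11:30:00"},
]
print(coverage_report(inventory, log_sources, now=datetime(2024, 5, 2, 12, 0)))
# {'never_logged': ['pacs-gw-02'], 'gone_silent': ['hr-portal']}
```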

Christopher Hutchins:

You mentioned tools here. That's one of the areas where most organizations are spending a fair amount of money; it's not inexpensive to protect an enterprise these days. But let's talk a little about what some of these are. We've got SIEM, security information and event management; EDR, endpoint detection and response; SOAR, security orchestration, automation, and response; and then the SOC, the security operations center. On paper, that sounds pretty good. It sounds like the organization is mature. With all that, why do you think breaches are still slipping through the cracks, even with all the spend on technologies to prevent them?

Guman C:

Because tools don't equate to outcomes. Most organizations invest in fancy AI-driven tools nowadays; there are plenty of companies in the market selling tools under an AI-driven label. Companies buy tools for collecting logs, storing telemetry, and automating workflows. The tools you mentioned are pretty standard; every company is using them. But most of these tools run in isolation, and that's one of the biggest gaps: each one does its own task, and they aren't integrated. Organizations don't invest in tuning detection, they don't invest time in validating alerts, and they don't train engineers on real attack patterns. It doesn't matter how many tools you buy if they aren't fine-tuned, if you aren't testing them regularly, if you don't know whether they're alerting on the right things. We have seen environments with world-class tools where alerts fired but no one trusted them enough to act. In other cases, alerts went to shared inboxes and no one was monitoring during off-hours. In many post-breach reviews, teams discovered that logs weren't ingested or alerts weren't escalated. I see cases where organizations weren't collecting any logs from assets because they didn't know those assets existed. Again, you cannot protect assets you don't know exist in your environment. That's the first thing you have to understand as a security practitioner: you have to know your stuff, you have to know what you're protecting.

Christopher Hutchins:

That's an excellent point. In every health system I've been around, there have been challenges with keeping a good, active inventory and having things connected so you can monitor in real time. How much of the breach activity that goes on is a problem with technology versus people or organizational behavior? There are definitely failures that fall under each of those, but how would you say it breaks down?

Guman C:

When breaches go undetected, it's rarely a pure technology failure. In fact, technology accounts for only about 30 to 40 percent of the problem. Most organizations already have all the fancy tools we discussed, SIEM, EDR, a SOC, everything in place, and they're confident they have everything they need. The bigger issue I see is process and people. Roughly 40 percent of detection failure comes from broken or untested processes: unclear escalation paths, incident response plans that exist only on paper, alerts that fire but aren't acted on, and delays caused by approvals and handoffs. The remaining 30 percent comes from people and behavior. Security analysts hesitate to escalate because they fear false positives or business disruption. Leadership often prioritizes uptime over rapid containment. As a result, early warning signs are ignored or delayed. As we discussed, buying more tools won't fix detection automatically. You have to fine-tune those tools and make sure you have proper processes behind them. You may have plenty of policies documented, but when was the last time you validated that they work as designed? I have found in several breaches that security engineers saw suspicious behavior but hesitated to escalate because they didn't want to trigger a false alarm.

Christopher Hutchins:

You've touched on this a little already, but alert fatigue is something we hear about in the healthcare space a lot, because clinicians and nurses are bombarded with alerts fired from the clinical record. When the volume is that high, it's difficult to know what to act on. How do you think about managing alerts so that, to your point, people aren't reluctant to report something, but the alerts are also fine-tuned enough that teams know when to escalate? What can be done to help distinguish meaningful signals from the volume and the noise?

Guman C:

Alert fatigue is something we hear about constantly, and for good reason: it's one of the biggest reasons real attacks get missed. Most security teams receive thousands, maybe millions, of alerts, depending on the size of the organization. When the same alert fires over and over, engineers naturally become desensitized. And if your SOC can't clearly distinguish signal from noise, attackers don't need to be sophisticated; they just need to be patient. The solution isn't more alerts. It's a shift from volume-based alerting to risk-based detection. Instead of asking "did something happen," teams should ask, "does this activity meaningfully increase business risk?" Effective security teams correlate alerts with the asset involved, whether that asset is critical, and the behavior observed. They prioritize alerts tied to privileged access or sensitive data, not just technical anomalies. And they regularly disable alerts that never lead to a real incident. I've seen organizations reduce alert volume by 60 percent and actually improve detection simply by removing rules that hadn't produced a single actionable event in a year. The goal is not to see everything. It's to see what matters most and act on it fast.
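
One way to sketch that shift from volume-based alerting to risk-based triage in code: score each alert by asset criticality, privileged access, and sensitive data, then work the queue highest-risk first. The weights, field names, and asset list below are illustrative assumptions, not a product configuration.

```python
# Hypothetical asset criticality ratings (1 = low, 5 = crown jewels)
ASSET_CRITICALITY = {"emr-db-01": 5, "hr-portal": 3, "dev-sandbox": 1}

def risk_score(alert):
    """Combine asset value, access level, and data sensitivity into one number."""
    score = ASSET_CRITICALITY.get(alert["asset"], 2)   # unknown assets: medium-low
    if alert.get("privileged_account"):
        score += 4                                      # admin or service account involved
    if alert.get("touches_phi"):
        score += 4                                      # protected health information in scope
    if alert.get("rule_hits_30d", 0) > 500:
        score -= 2                                      # chronically noisy rule, likely untuned
    return max(score, 0)

def triage(alerts):
    """Order alerts by business risk instead of arrival time."""
    return sorted(alerts, key=risk_score, reverse=True)

alerts = [
    {"asset": "dev-sandbox", "privileged_account": False, "touches_phi": False, "rule_hits_30d": 900},
    {"asset": "emr-db-01", "privileged_account": True, "touches_phi": True, "rule_hits_30d": 3},
]
print([a["asset"] for a in triage(alerts)])  # ['emr-db-01', 'dev-sandbox']
```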

Christopher Hutchins:

That brings up the question of testing. When we talk about what matters, how do you even approach testing these applications and the risks you may have, whether it's human behavior, organizational behavior, or a technology issue? You've been a strong advocate for external testing. Why is external penetration testing still one of the most effective ways to understand the real-world risk you're dealing with?

Guman C: Because it shows reality, not assumptions. Many organizations hesitate to authorize full-scope independent penetration testing because they are worried about business impact. And my response is simple:

ships aren't built to stay in harbor. You don't learn how strong your defenses are by protecting them from stress. You learn by testing them in real conditions. Independent third-party testing lets you see your environment exactly as an attacker would, but in a controlled and responsible way. External testing answers questions internal teams might not see: what can an attacker actually see from outside? Which assets are truly exposed? Which security controls fail silently? In practice, these tests often surface uncomfortable truths: systems believed to be internal that turn out to be public-facing, old and forgotten credentials that still work, and, most concerning, critical alerts that never fire even during active exploitation. This kind of testing gives you a real picture of how your network, applications, and infrastructure look from outside your organization. The part many organizations miss is that pen testing shouldn't end with a report. It should validate whether your security team or SOC actually detected what the testers were doing. That's one of the best ways to optimize your SOC tools and make sure you get an alert when someone is trying to do something to your network, applications, or data. The most mature teams correlate test activity with alerts, fine-tune their detection rules, and improve response workflows based on real attack behavior. And today this doesn't have to be limited to an annual exercise. There are AI-driven security testing platforms that run pen tests regularly and can continuously simulate real-world attacks on demand. Provided they are scoped correctly and remain unbiased, which is one of the reasons we always go with independent third parties, these tools, used properly, give organizations ongoing visibility into exposure, detection, and response gaps.
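
Closing the loop described here, correlating test activity with alerts, can be sketched simply: take the pen tester's activity log and check which actions produced an alert on the same host within a detection window. Anything unmatched is a detection gap to tune for. The record formats, host names, and 15-minute window are assumptions for illustration.

```python
from datetime import datetime, timedelta

def detection_gaps(test_actions, siem_alerts, window_minutes=15):
    """Return tester actions that never produced an alert on the same host in time."""
    gaps = []
    for action in test_actions:
        start = datetime.fromisoformat(action["timestamp"])
        end = start + timedelta(minutes=window_minutes)
        matched = any(
            alert["host"] == action["target_host"]
            and start <= datetime.fromisoformat(alert["timestamp"]) <= end
            for alert in siem_alerts
        )
        if not matched:
            gaps.append(action)
    return gaps

# Hypothetical test log and SIEM export
test_actions = [
    {"technique": "credential dumping", "target_host": "emr-db-01", "timestamp": "2024-05-01T10:00:00"},
    {"technique": "lateral movement", "target_host": "pacs-gw-02", "timestamp": "2024-05-01T10:20:00"},
]
siem_alerts = [{"host": "pacs-gw-02", "timestamp": "2024-05-01T10:25:00"}]

for gap in detection_gaps(test_actions, siem_alerts):
    print("No alert for:", gap["technique"], "on", gap["target_host"])
# No alert for: credential dumping on emr-db-01
```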

Christopher Hutchins:

You've outlined some really important things, and what I'm hearing loud and clear is that you've got to test more than just your detection capability by measuring whether an alert fires or not. There's much more to it, and organizations have to be aware that they need to look at many other factors. Honestly, in almost any industry you pick, the primary focus of the business is not data security. You're accountable for it, but your expertise is running the business. So I think what you say about having an external partner to help with that testing is extremely important. Let's jump into what happens in an organization once there's been a breach. What separates organizations that are mature from those that keep repeating the same mistakes?

Guman C:

What really separates organizations after a breach is how they respond to it. Mature organizations don't treat a breach as something to hide. They treat it as a learning moment. They take the time to understand why it happened and they fix the root cause and then retest their controls to make sure the same issue can't happen again. On the other hand, immature organizations do the opposite. They look for someone to blame and they apply a quick fix to stop the immediate bleeding and move on as fast as possible without changing anything meaningful. That's the differentiator. This is not about buzzwords or tools. It comes down to accountability and follow-through. Organizations that are mature always ask, what did this incident teach us? Those that are not mature ask, how quickly can we forget this happened? The mindset is what determines whether the next breach is smaller or far more damaging.

Christopher Hutchins:

You're touching on something I think is really worth emphasizing. We're talking about things leaders need to be aware of, much like creating psychological safety within an organization: if people see something that's wrong, if a mistake is made somewhere, they're not afraid to raise their hand; or if they see something that could be vastly improved, maybe we can shift their focus to higher-value work because we automated it. There's a culture component to this that leadership needs to be dialed in on.

When I was running an IT shop years ago, one of the things I had to make sure the team knew was: if you find something that needs attention, don't hesitate. Just bring it, and we'll circle the wagons, bring everyone together, look at what happened, rectify the situation, and then put safeguards in place to make sure it doesn't happen again. The best opportunity to close those loops comes from the people actually doing the work, because they're the ones detecting what's really happening. There's no substitute for that kind of culture of trust. For my listeners, if you've heard any of the recent episodes, we had a conversation with a medical psychologist who emphasized that there has been a massive erosion of trust in human relationships over the last 20 years. And here we're talking about information security, trying to keep nefarious actors out of our environment. That's one aspect, but there are also the inadvertent mistakes people can make with technology. So it has to be an environment where people are not afraid to speak up when they see a risk, and I really appreciate that you emphasize that.

We're getting toward the end of our time, but I want to close with some bigger-picture things the audience can carry away. If an organization could do just one thing this year to improve its breach readiness, what would you recommend?

Guman C:

To be honest, my recommendation is simple. Just test a real incident end-to-end. Not just a tabletop exercise where everything goes according to plan. Plan something, simulate an actual breach, and see what really breaks.

Christopher Hutchins:

Right.

Guman C:

Test whether alerts fire, whether they reach the right people, whether anyone has the authority to act. Test how quickly leaders are notified, how legal and security coordinate, and how communication flows when decisions have to be made under pressure. In real breaches, it's not the attack that surprises the organization. It's their own response that surprises them because they are confused and they have no idea what to do. Testing exposes gaps you didn't know existed before attackers do. You don't build confidence by hoping your plan works. You build it by proving it before a real incident forces the test. That's my advice based on this discussion.

Christopher Hutchins:

That's very well said. Let's touch on one more angle, because oftentimes if you don't say something a little controversial, people might not be paying attention, and we can't afford that risk. So what's an uncomfortable question that every security leader should be asking their teams today?

Guman C:

Based on this discussion, the question I would ask is: if we were breached right now, who would know, and how soon? This question cuts through the tools, dashboards, and policies. If the answer isn't immediate and specific, if it's vague or depends on assumptions, there's real risk right there. Many organizations don't learn about incidents from their own security teams, as we discussed. They find out from customers, banks, regulators, or the media after weeks or months. If you can't clearly name who gets the alert, who makes the call, and how quickly action happens, then detection is not working as planned. I know this question may be uncomfortable, but it's powerful, because clarity, not confidence, is what exposes risk.

Christopher Hutchins:

That's excellent. It is a really important thing to make sure that we're asking the right questions and we're not afraid to be a little bit provocative in how we ask. So as we close out, how would you summarize the difference between assumed security and proven security?

Guman C: That goes back to our introduction. What I define as assumed security is built on tools, policies, procedures, certifications, and compliance frameworks like ISO, SOC 2, GDPR, and others. These are important; they establish a baseline and create structure. But they often create a false sense of confidence, because they describe what should exist without telling you how well it actually works in the real world. That is assumed security: based on policies, programs, and plans. Proven security, on the other hand, is based on evidence demonstrated through testing: detection that actually fires, response that happens quickly, and teams that know how to act under pressure. Proven security is measured by how fast you know something is wrong and how effectively you contain the damage. In real breaches, organizations don't fail because they lacked policies or certifications. They fail because assumptions were never tested. The final takeaway from this question is:

silence is not safety. It often means you are not seeing what matters. Visibility, testing, and response. That's what proves security.

Christopher Hutchins:

I love that. Silence is not security. I think that's a brilliant way to put it. As we wrap, if our listeners wanted to get in touch with you, they want to have a conversation, maybe they've heard some things today that give them a little bit of reason to be concerned in their own organization, how do they reach out to you?

Guman C:

People can find me on LinkedIn. I'm pretty active on LinkedIn and they can reach out to me anytime. They can also use my personal email. I'm happy to mentor if anyone is interested. I want to give back to the community.

Christopher Hutchins:

That's amazing, Guman. I really appreciate that. I'll make sure that I put all the information in the show notes, folks. So if you want to reach out, I'll make it easy for you to find him. Guman, thank you so much for a practical and grounded conversation. This has been fascinating to me. I am certainly not a security wizard, but I have a lot of respect for what you do and people like you who are out there likely not resting as well as I do because there's so much horrible activity that people are trying to pull off. And the whole issue of having to defend ourselves against breach is just mind-blowing to me that people are so motivated to cause that kind of havoc. So thank you for doing what you're doing. And thanks for being my guest today on the Signal Room. And to our listeners, that's it for this episode of the Signal Room. We're here to amplify the signals that matter across leadership, ethics, and innovation. Until next time, stay tuned, stay curious, and stay human.

Guman C:

Thank you so much for having me.

Christopher Hutchins:

That's it for this episode of the Signal Room. If today's conversation sparks something in you, an idea, a challenge, or a perspective worth amplifying, I'd love to hear from you. Message me on LinkedIn or visit SignalRoomPodcast.com to explore being a guest on an upcoming episode. Until next time, stay tuned, stay curious, and stay human.