AI Café Conversations | AI for Executives: Leadership Insights | Transforming with AI

Are Your Executives Hiding Their AI Use? It's Not a Trust Problem | AI For Executives

Season 4 Episode 7


Why 93% of Senior Leaders Use Shadow AI — And What Your Nervous System Has to Do With It

Most CEOs think Shadow AI is a compliance problem. A policy problem. A trust problem.

It's not.

According to a 2025 UpGuard report, 93% of senior executives use unapproved AI tools — Shadow AI — behind their organization's back. 75% share sensitive information. And Shadow AI breaches cost $670,000 MORE than sanctioned ones.

But the statistic that matters most isn't in any report. It's what's happening inside the brain of every leader who hides their AI use.

When identity fuses with expertise, asking for AI help feels like a threat. The amygdala runs the show. And a dysregulated amygdala doesn't create psychological safety — it destroys it.

In this episode, Sahar Andrade, MB.BCh — neuroscience-based AI leadership consultant — breaks down:

- Why 93% of senior leaders use Shadow AI (and why it's a nervous system response, not a character flaw)

- What Shadow AI costs beyond the $670K breach figure

- The 3 steps regulated leaders take to reverse the Shadow AI spiral

- Why policy never fixes what regulation can

Not sure where your Shadow AI risk actually sits? Take the free Shadow AI Assessment — 

https://www.saharandrade.com/assessments/2148598163

Email me at sahar@saharconsulting.com

1. Why do executives hide their AI use from their organizations?

2. What is Shadow AI and how does it affect executive leadership?

3. Is Shadow AI a trust problem or a neuroscience problem?

4. How does the nervous system drive Shadow AI behavior in leaders?

5. What does regulated leadership mean in the context of AI adoption?


#AIForExecutives #ExecutiveLeadership #NeuroscienceLeadership #AIStrategy #ShadowAI #AITransformation #RegulatedLeadership #HumanCenteredAI #AILeadership #AICulture #PsychologicalSafety #HumanAdvantage #AI #ArtificialIntelligence #ExecutivePresence #NeuroLeadership #NeuroscienceInLeadership #FutureOfWork #AINoTechRequired #Neuroscience #NoTechRequired #AIToolsForExecutives #AIAndLeadership #ExecutiveCoachingWithAI #AIStrategyForExecutives #ExecutiveAIInsights #AILeadershipTransformation #LeadershipDysregulation #NeuroscienceBasedLeadership #ShadowAIManagement #ChangeManagement #TeamTrust

Support the show

--- 

AI Cafe Conversations: Neuroscience-based AI leadership for executives. Hosted by Sahar (The AI Whisperer) | New episodes Wed & Fri 

🔗 Connect: https://www.linkedin.com/in/saharandradespeaker/

📧 Work with me: sahar@saharconsulting.com

🌐 Website: https://www.saharconsulting.com/

📸 Instagram: https://www.instagram.com/saharthereinventcoach

SPEAKER_00

Why do executives hide their AI use? According to a 2025 UpGuard report, ninety-three percent of senior executives use shadow AI, unapproved tools, behind their organization's back. Most leaders assume it's a trust problem. It's not. It's a nervous system problem. When identity is built on expertise, asking for help with AI feels like a threat. The amygdala, our threat center, codes vulnerability as danger.

I'm Sahar Andrade, your AI Whisperer. I am a neuroscience-based AI leadership consultant, and this is AI Cafe Conversations, the only podcast teaching regulated leadership for AI disruption. No tech required.

I want to start today with a number that stopped me cold. 93%. That is the percentage of senior executives, CEOs, CHROs, senior leaders, who are using AI tools that their own organizations have not approved. Not their teams. Their leaders. So here is the question I want you to sit with before we go any further: if your executives are hiding their AI use, what does that tell you about the safety of your culture?

And here is what I know after working with executives across Fortune 500 companies, government agencies, and healthcare systems. It is not a trust problem. It's a nervous system problem, and that changes everything about how to fix it.

Let me paint you a picture. I was working with a senior director at a large organization. I'll call her Maria. Twenty-two years of experience, a reputation for knowing everything in her domain. And for six months, she had been quietly using an unapproved AI tool to draft reports, synthesize data, and prepare briefings. When I asked her why she hadn't used the approved tool her organization provided, she looked at me and said, "Because if I ask for help with AI, they will think I don't know what I'm doing anymore." That sentence right there, that is shadow AI in its purest form. Not a technology problem, a survival response.

Here is what the data shows us: 93% of senior executives use shadow AI.
75% of them are sharing sensitive information, like employee data, financial projections, and internal documents, with unapproved tools. And shadow AI data breaches cost organizations $670,000 more than breaches from sanctioned tools, per incident.

But those are the external numbers, the ones that show up in reports. What doesn't show up in the reports is what's happening inside the brain of every executive who clicks "agree" on a consumer AI tool at 11 pm before a board meeting.

Let me tell you what that looks like neurologically. When a leader who has spent decades building expertise suddenly feels like they need to catch up on AI, the prefrontal cortex, your rational brain, doesn't run the show anymore. The amygdala does. The threat detection system does. The part of your brain that was built to survive predators, not board meetings. And what does the amygdala do with the feeling of "I don't know enough about AI"? It codes it as danger. It says: if they find out you don't know this, you lose status, you lose credibility, you lose the room. So the brain does what brains do under threat. It hides, it protects, it finds a workaround that keeps the threat invisible. That workaround is shadow AI.

And here is the part that should concern every leader listening to this right now. This is not happening to weak leaders. It's happening to your most experienced ones. The people who have the most to lose from being seen as not knowing.

So what does it actually cost when leaders operate from this place? And I'm not talking about the $670,000 breach statistic, though that matters enormously. I'm talking about what happens to an organization when the people at the top are making decisions from a dysregulated nervous system. When your amygdala is running your AI strategy, you don't think clearly. You don't assess risk accurately. You don't create the psychological safety that your teams need to come to you with problems.
You make fast decisions that feel safe in the moment, and they cost you later.

Here is something I see consistently in the organizations I work with. When senior leaders hide their use of AI, even with good intentions, they send a signal, not through their words but through their nervous system. Teams feel the inconsistency. They sense the gap between what's being said about AI policy and what's actually happening. And their brains, because our brains are wired for social mirroring, pick up on that incongruence. Remember, as leaders, as executives, we are the thermostat in the room, not the thermometer.

So 70% of employees are aware that people in their organizations are sharing sensitive data with AI tools. 70%. They know. They're watching. And what they're watching is whether the leaders are regulated enough to lead this conversation with integrity. When leaders are not, the organization doesn't get confused. It gets quiet. Quiet is not safe. Quiet is a nervous system shutdown at scale. The teams stop asking questions, they stop raising concerns, they go underground with their own AI use, and the cycle accelerates.

This is what I call the shadow AI spiral. One dysregulated leader at the top creates a culture where everyone hides. And what should be your greatest competitive asset, your organization's ability to learn and adapt with AI, becomes, guess what? It becomes your greatest liability.

So what do you do about it? I want to be clear about something first. This is not about punishment. This is not about policy enforcement. This is not about bad people or sneaky people or toxic people or people who don't know what they're doing or are incompetent. That might be 2% of cases, but it's not the main reason. If you respond to shadow AI with any of that, with surveillance and restriction, you will drive the behavior deeper underground. People will just get smarter about how to hide it.
Your amygdala cannot regulate someone else's amygdala. Only safety can. Credibility doesn't have to come from a degree; it can come from case studies and from how you're showing up. Credibility is about building that trust step by step.

The solution starts with regulation, not of AI tools, but of nervous systems. Here is what I know from both neuroscience and years of working with executive teams. A regulated leader, one whose nervous system is in what we call the ventral vagal state, does something remarkable. They can hold complexity without stress and without threat. They can say "I don't know" without it feeling like a professional collapse. They can create space for their team to be honest.

And here is what that looks like practically. Step one: the leader has to be the first one to name it. Not in a policy meeting, in a conversation. "I have been learning AI too. It has been uncomfortable, and here is what I have found." That one moment of regulated vulnerability does more for psychological safety than six months of policy rollouts and a hundred pages defining what psychological safety is and how to do it.

Step two: separate identity from expertise. This is where the neuroscience gets precise. The reason shadow AI spreads through executive ranks is that these leaders have fused their identity with their domain knowledge. When AI disrupts that knowledge, even partially, the brain reads it as an identity threat. And identity threats produce the same cortisol response as physical danger. The regulated intervention is to create a new identity anchor. Not "I am the expert," but "I am the one who creates the conditions for my team to be the experts." That identity can hold AI disruption without collapsing.

Step three: build the infrastructure for honest AI conversations. Not a hotline, not a forum, a standing meeting.
A conversation that's already scheduled. A place where the question "What are we actually using?" can be answered without consequences and without fear. When leaders do these three things from a regulated nervous system, the shadow AI spiral doesn't just slow down, it reverses. Because what was driving the secrecy was never the AI. It was the threat. Remove the threat, through regulation and psychological safety, and the hiding stops.

I work with executives to identify which nervous system zone is driving their AI decisions. There are three zones, and which zone you are in determines whether your shadow AI problem gets better or worse, regardless of your situation. I'm not going to walk you through all three zones today. That diagnostic is specific to each leader and each organization. What I can tell you is this: the leaders who solve their shadow AI problem fastest are not the ones who add the most controls. They are the ones who get regulated first.

So if you are not sure whether shadow AI is already inside your organization, most leaders find out too late. The Shadow AI Assessment, the free one that I created, shows you exactly where your risk sits and what to fix first. It takes 10 minutes. No tech required. The link is in the show notes or description. Don't wait to find out the hard way.

AI Cafe Conversations is the only podcast teaching regulated leadership for AI disruption with a medical and neuroscience lens on executive AI adoption. No tech required. I am Sahar Andrade. I will see you on Friday for our Forbes-article-style episode. This is a shorter podcast today because this is an issue that I keep meeting almost every day in the organizations I consult for.

But before I go, I want you to show me some love. So like, subscribe, and share this podcast. I want to thank you for making our podcast one of the top 2% of global podcasts. You can find us on all the major platforms, like Apple, Spotify, Amazon Music, and iHeartRadio. I appreciate you.
You can get hold of me through my email, sahar@saharconsulting.com, or on LinkedIn, Sahar Andrade, or on Instagram. Thank you for being here today. Until we meet on Friday.