The Inner Game of Change

Inside The Messy Middle - AI And The Organisational Immune System

Ali Juma


Inside the Messy Middle is a special series from The Inner Game of Change

This fortnightly short series is for people who carry responsibility inside complexity. Between strategy and delivery. Between intent and impact. Between what was imagined and what must now be made real.

In this episode of Inside the Messy Middle, I explore a powerful idea:

What if organisations behave like immune systems?

As AI enters the workplace, many organisations are not simply reacting to new technology. They are responding to perceived threats to identity, expertise, workflows, control, and certainty.

Through stories from medicine, business, and lived organisational experience, I explore why good ideas are sometimes rejected, why thoughtful people hesitate during change, and why resistance is often more complex than it first appears.

This episode examines:
• the hidden defence mechanisms inside organisations
• why AI triggers different reactions across legal, IT, policy, and leadership teams
• the psychological side of resistance and uncertainty
• the role of leadership and change management in creating safe movement during disruption
• what business history can teach us about adaptation and survival

Featuring reflections on Semmelweis, Kodak, Nokia, and modern AI adoption dynamics, this episode is a deeper look into the invisible tensions shaping organisational change today.

Ali Juma 
@The Inner Game of Change podcast


Welcome And The Body Metaphor

Ali

Welcome back to Inside the Messy Middle, a special series from the Inner Game of Change podcast. You know, one thing I have been thinking about lately is this idea that organizations behave a little bit like the human body. Every healthy body has an immune system. Its job is protection: detect threats, respond quickly, keep the body stable. Without an immune system, the body becomes vulnerable. But sometimes the immune system gets confused. Sometimes it attacks the very thing trying to help the body improve. And frankly, organizations can behave exactly the same way during change, especially during AI adoption. And once you see this pattern, you start seeing it everywhere.

Every organization builds protective mechanisms over time: processes, approvals, governance, hierarchy, risk controls, ways of speaking, ways of making decisions. At first, these things are useful. They create consistency, predictability and safety. But over time, something subtle can happen. The system slowly starts protecting itself instead of protecting performance. And you can usually see this most clearly during moments of disruption.

Now, AI is fascinating because AI does not just introduce a new tool. It quietly challenges expertise, workflows, decision making, identity, and even people's sense of value. So naturally the system reacts, not always dramatically, sometimes very quietly: another approval layer, another committee, a pilot that never scales. Some people will be saying, "Let's wait until the technology matures," "Maybe next year," "We need more governance first," and so on and so forth. Now, to be fair, sometimes these concerns are absolutely legitimate, but sometimes those are antibodies.

There's actually a historical story that captures this beautifully. I may have shared this example before, but it is too good not to revisit. Back in the eighteen hundreds, a Hungarian doctor named Ignaz Semmelweis noticed something strange.
Women giving birth in hospitals were dying at much higher rates than women giving birth at home. And Semmelweis eventually realized doctors were moving from autopsies directly into delivery rooms without washing their hands. So he introduced hand washing, and almost immediately death rates collapsed. Lives were saved.

But here's the fascinating part. Many doctors rejected him, not because the evidence was weak, but because the implications were psychologically unbearable. Accepting the idea meant accepting that they themselves may have been causing harm, and that threatened identity, status, and professional legitimacy. The immune system attacked the signal. And honestly, I think versions of this still happen inside organizations today.

You can actually see different immune responses depending on the role people occupy. To a legal team, AI might look like risk: liability, privacy exposure, copyright issues, compliance problems. So naturally their immune response becomes policy, governance, control, restriction. Not because they hate innovation, but because their role is literally designed to reduce organizational harm.

To IT teams, AI can look like shadow systems, unsupported tools, security vulnerabilities, integration headaches. So their response becomes platform control, architectural reviews, access management. Again, protection logic.

To policymakers, AI can look like misinformation, labor disruption, ethical instability, social risk. Policy systems are actually designed to slow things down when uncertainty arises. That is what they are there for.

And to many middle managers, AI can quietly feel like exposure: exposure of capability gaps, pressure to lead without clarity, uncertainty about future relevance, extra workload while still keeping their operations moving. So the immune response becomes delay, translation, local filtering, careful reinterpretation. And most of this is not malicious. This is the important part.
A lot of good people inside organizations are simply trying to preserve stability while making sense of uncertainty. I think one of the mistakes leaders make during transformation is assuming every immune response is irrational resistance. Sometimes what looks like resistance is actually fear of public failure, identity protection, cognitive overload or unsafe learning conditions. I say this often in my workshops: when thoughtful people slow down, it is usually a signal worth paying attention to, because thoughtful hesitation is often telling you something about the design of the environment, not just the mindset of the person. And when leaders misread the signal, they usually increase pressure instead of increasing clarity, which often strengthens the immune response, not the adoption.

And I think this is where leadership and change management become incredibly important, not as communication machines, not as rollout functions, but as stabilizing forces during uncertainty. Because when organizations experience disruption, people start watching leadership differently. They watch for signals. Is it safe to experiment? Is it safe to ask questions? Is it safe to not know yet? Will mistakes be punished? Are leaders learning too, or only directing?

And people in change roles often sit right in the middle of this tension, trying to move the organization forward while also helping people feel psychologically safe enough to move at all. I sometimes think good change management acts a bit like an immune regulator: not suppressing the system completely, but stopping the organization from attacking every unfamiliar idea before it has a chance to prove its value. And honestly, some of the best leaders I have seen during AI adoption are not necessarily the loudest or the most technical. They are the ones who create enough safety for learning to happen publicly. Because real adoption usually begins the moment people stop feeling they have to pretend.
You can see this in business history too. I have mentioned Kodak before because it's such an important example. Kodak actually invented one of the first digital cameras. Think about that for a second. The future was already inside the building, but digital photography threatened the very thing Kodak was optimized around: film, supply chains, revenue models, capability structures, identity. Sometimes organizations reject innovation not because they cannot see it, but because they can. And they understand exactly what it threatens.

The same thing happened to Nokia. Many people inside Nokia reportedly understood the smartphone shift early, but internally there was pressure to appear confident, pressure not to challenge hierarchy, pressure not to destabilize the system. So the immune system protected certainty instead of adaptation. And if we are honest, a lot of organizations today are somewhere in that exact tension with AI.

Now, this does not mean the immune system is bad. An organization without protective mechanisms becomes reckless. Some caution is wisdom. Some governance matters. Some AI applications absolutely should be challenged carefully. But organizations become fragile in a different way when every unfamiliar idea is treated as danger.

And I think that is the real leadership challenge now: not destroying the immune system, but helping the system learn the difference between threat and evolution. Because the future rarely arrives dramatically. Usually it arrives quietly, as small disruptions to workflow, expertise, identity, decision making and certainty. And maybe the real question for organizations is not whether immune responses will appear; they will. The deeper question is whether organizations can recognize the difference between protecting themselves from danger and protecting themselves from growth.

Until next time.