The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation.
He has authored four influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When Kieran is not chairing international conferences, serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹Top 25 Thought Leader Companies on Generative AI 2025
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 people to follow in data in 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros you NEED to follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
Congrats, You Trained The Bot That Took Your Job
Stop asking what AI can do and start asking what it can’t. We dig into fresh MIT Sloan research that maps the human edge with EPOCH - empathy, presence, opinion, creativity, and hope - and show why these capabilities predict safer, more meaningful careers as automation spreads.
Along the way, we dismantle the “junior trap,” where digital natives get handed AI strategy without the system fluency to manage risk, and we lay out a pragmatic playbook for leaders who need to design guardrails that scale.
At A Glance / TLDR:
• reframing the job worry to human‑intensive skills
• EPOCH explained: empathy, presence, opinion, creativity, hope
• empathy as connection not detection
• presence for physical work and serendipity
• accountable judgment over probabilistic answers
• creativity through humour and improvisation
• hope and subjective belief beating status‑quo data
• why the junior trap misallocates AI strategy
• task‑level fixes versus system‑level risk design
• citations over explanations for trustworthy outputs
• clearing the data bottleneck with royalties for expertise
• safe augmentation with fatigue‑aware use cases
• J&J skills inference and career lattices
• equity risks, unions, freelancers, and burnout
We get specific about how to match use cases to model reliability, why experts ask for citations instead of explanations, and how to treat a model like a brilliant yet untrustworthy database.
Then we tackle the data bottleneck blocking real enterprise value: your best people hold the patterns your AI needs, but sharing that craft can devalue their advantage. The fix is economic, not technical. Think royalties and residuals for employee‑generated training data, turning knowledge transfer into an asset instead of a threat. If a salesperson’s workflows lift model close rates, a share of that lift should flow back to the source.
You’ll also hear how Johnson & Johnson used skills inference to surface hidden strengths from everyday work, moving from rigid ladders to flexible career lattices. We balance the promise of augmentation - like fatigue‑aware support for radiologists - with the reality of equity and burnout, spotlighting why unions won protections while freelancers face steep declines.
The throughline is simple: models predict the future from the past; humans create futures that never existed. Keep empathy at the centre, design for serendipity, hold judgment where accountability lives, cultivate real creativity, and defend hope as a strategic asset.
If this conversation helps you rethink your AI strategy or your own career moat, follow the show, share with a friend, and leave a quick review - what’s your strongest EPOCH skill?
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
Flipping The AI Job Debate
Google Agent 1Welcome back to the deep dive. Okay, so today is Monday, February 16th, 2026. And it's been, what, a little less than three years since ChatGPT just exploded onto the scene and, well, collectively raised everyone's blood pressure.
Google Agent 2Exploded is the right word.
Google Agent 1And I have to say, for all that time, it feels like the conversation's been stuck in a loop. It's always the same question. Is the robot coming for my job?
Google Agent 2Is it going to replace me or is it going to augment me?
Google Agent 1Right. Whatever augment even means this week. It's this binary debate, displacement versus augmentation.
Google Agent 2It is. And it's the anxiety that keeps people up at night. And frankly, it's getting a little bit boring. It misses the entire point.
Google Agent 1It really does. But today, we are going to flip the script. We've got a huge stack of research here from the MIT Sloan School of Management, their workforce intelligence report. And the researchers, Isabella Loaiza and Roberto Rigobon, are basically saying we've been asking the completely wrong question.
Google Agent 2They are. I mean, instead of sitting around asking, what can AI do, which, let's be honest, is a terrifyingly long list that just gets longer every week. They're saying we should be asking.
Google Agent 1What can humans do that AI is actually terrible at?
Google Agent 2Exactly. And that is a much more comforting question to ask.
Google Agent 1It is, provided the answer isn't just nothing.
Google Agent 2Thankfully, it's a pretty substantial answer. We're going to look at this framework they developed called EPOCH. It's basically a checklist for, you know, a robot-proof career.
Google Agent 1And it's not just about being nice to people, it's much more specific.
Google Agent 2Much more.
Google Agent 1And we're also going to get into something they call the junior trap.
Google Agent 2I have to admit, when I read this part, I felt a little attacked on behalf of Gen Z. You know, this idea that just because you grew up with an iPad, you should be the one to lead the AI strategy.
Google Agent 1And MIT is saying that is a massive, costly mistake. Not because of age, but because of how they how they view systems.
Google Agent 2Okay, we'll get into that. And then finally, we're going to look at the money. How companies like Johnson and Johnson are using AI to find skills they didn't even know their employees had. And maybe more importantly, why you should probably be getting paid royalties for training your own replacement.
Introducing The EPOCH Framework
Google Agent 1That is the economic elephant in the room. So let's dive in. The human edge. Let's start with that first study from Loaiza and Rigobon. They didn't just theorize, they looked at a massive amount of data, right?
Google Agent 2Oh, huge. They analyzed 19,000 specific work tasks.
Google Agent 119,000.
Google Agent 2Yeah. Defined by the U.S. Bureau of Labor Statistics. So they weren't just guessing, they were looking at the tiny granular things people actually do all day. And from that, they built a framework to identify what they call human-intensive capabilities.
Google Agent 1And these are the things that if your job has them, you are statistically much safer from automation.
Google Agent 2Exactly.
Google Agent 1And this is where we get the acronym E P O C H. E P O C H. And I got to say, usually I roll my eyes at these kinds of acronyms, but this one. This one actually works.
Google Agent 2It really does. So the E stands for empathy.
Google Agent 1Right. But they make a distinction here that I thought was so important. It's not just about detecting emotion.
Google Agent 2That's it. And that's the trap so many people fall into, right? They hear, oh, AI can read facial expressions. And they think empathy is, you know, a solved problem. Right. It's just an input. Exactly. The report makes this great distinction. It says AI can detect.
Google Agent 1But it can't connect.
Google Agent 2It can't connect. It's like the difference between a thermometer reading a fever and actually feeling that fever with someone. An AI can analyze a therapy transcript and say, the patient is showing signs of anxiety.
Google Agent 1But it can't say, I know what that feels like. It can't offer that solidarity.
Google Agent 2It can't. In a crisis, you don't just want a diagnosis, you want to be seen by another consciousness.
Google Agent 1So if your job is about just identifying problems, you should be worried. But if it's about sitting with the problem alongside someone, you're safer.
Google Agent 2Connection is the currency. That's it.
Empathy Means Connection Not Detection
Google Agent 1Okay. P is for presence. Now this one surprised me because the narrative for the last few years has been all about remote work. The office is dead.
Google Agent 2And now MIT says physical presence is a premium asset.
Google Agent 1So what's that about?
Google Agent 2Well, it's about two things. One is obvious, physical manipulation. The other is less obvious. Unexpected collision.
Google Agent 1Collision.
Google Agent 2Yeah. I mean, think about a nurse. You have to physically be there to put in an IV. But then think about a journalist. You have to be on the ground to get the texture, the off-the-record comment that happens when the mic is off.
Google Agent 1And AI can scrape the internet, but it can't go to a disaster site.
Google Agent 2It can't witness. And the collision part, that's like the water cooler effect, but scientifically, so much innovation happens in unplanned interactions. AI is really bad at bumping into ideas it wasn't programmed to look for.
Google Agent 1Oh, I love that framing. AI optimizes for the known.
Google Agent 2And humans stumble into the unknown just by physically navigating a chaotic world.
Google Agent 1Okay. O is for opinion. This feels like a big one for any lawyers listening.
Google Agent 2It's huge. Opinion, judgment, ethics. The key word here is accountability. An AI can give you an answer, a probability, but it can't take responsibility for it.
Google Agent 1You can't sue an algorithm for malpractice. I mean you can try.
Google Agent 2Good luck. Humans have to navigate open-ended systems. You're not just following a rule book, you're interpreting it. You're making a judgment call based on values, and AI gives you a calculation.
Google Agent 1A human provides a verdict.
Google Agent 2A verdict. And we still want a human signature at the bottom of the page when the stakes are high.
Google Agent 1Okay, C is creativity. But what about Midjourney and Sora? I mean, AI is making movies now. Is creativity really a human edge anymore?
Google Agent 2The report argues yes, but it gets really specific. It's about uh things like humor and improvisation.
Google Agent 1Right. I mean, have you ever tried to get one of these things to write a genuinely funny stand-up routine?
Google Agent 2It's painful, right?
Google Agent 1It's so bad. It's like a dad joke generator that skimmed a textbook on comedy theory.
Google Agent 2It understands the structure of a joke, the setup, the punchline, but it has no idea why it's funny. It can't subvert social norms because it doesn't understand them.
Google Agent 1Which brings us to H hope. And this feels a little poetic for an MIT report.
Google Agent 2It does, doesn't it? But this is the capability they found that was most associated with employment growth. Hope, vision, leadership.
Google Agent 1How does that look on a job description? Must have five years of hope.
Google Agent 2It looks like grit. It looks like what you might call irrational perseverance. So think about an entrepreneur starting a company. If you fed all the data to an AI, the failure rates, the market saturation. The AI would say, stop, do not proceed, probability of success is four percent.
Google Agent 1And the human says, I don't care. I see something you don't.
Google Agent 2Vision defies the existing data. And this connects to a really profound point they make about subjective beliefs.
Google Agent 1What do they mean by that?
Presence And Serendipity In Work
Google Agent 2Well, there are things we do on principle, not on probability. The report mentions the civil rights movement or women's suffrage. If you had pulled the data or the sentiment of the ruling class at the time, an AI would have concluded those movements were destined to fail.
Google Agent 1The data supported the status quo.
Google Agent 2Exactly. And AI is a status quo machine. It predicts the future based on the past. So if you want to change the future, you need a human.
Google Agent 1That's actually kind of beautiful. So if your job is to maintain things, worry. If your job is to imagine something new, you're essential.
Google Agent 2That's the takeaway.
Google Agent 1Okay, that's the what? The EPOCH framework. But I want to shift to the who because this next bit from the report, it challenges a really deep-seated bias in the corporate world.
Google Agent 2The junior trap.
Google Agent 1The junior trap. It feels like the standard playbook for the last three years has been the CEO is 60, doesn't know ChatGPT, so they find the 23-year-old intern and say, You figure out our AI strategy. You're a digital native.
Google Agent 2It seems to make sense on the surface, but Kate Kellogg, another MIT slum professor, found this is actually dangerous.
Google Agent 1Dangerous, how?
Google Agent 2Not in the way you'd think. They're breaking the business logic. So she studied 78 junior consultants using GPT-4. These are smart kids, top schools. But she found they kept falling into what she calls novice AI risk mitigation tactics.
Google Agent 1Okay, that's a mouthful. Break that down for me.
Google Agent 2Well, it's basically because they don't understand the deep systems of the business.
Google Agent 1Okay.
Google Agent 2So they focus on fixing the individual tasks. Let's say they ask the AI to write a report. Yeah. And the AI makes up a fact. It hallucinates.
Google Agent 1Which it does all the time.
Google Agent 2All the time. The junior employee sees that, and their solution is, okay, I need to manually fact check every single sentence this thing produces.
Google Agent 1Which defeats the whole purpose of using AI in the first place. You've just become a robot's editor.
Google Agent 2Exactly. That's a task level fix. A senior leader, though, they see that same hallucination and they think about the system. They think, okay, this tool is maybe 80% accurate. That's way too risky for client memo, so we're not using it for that. But for internal brainstorming, where volume matters more than accuracy, it's perfect.
Google Agent 1Ah, so the senior fixes the use case while the junior tries to fix the output.
Google Agent 2Correct. The senior manages the risk profile of the whole system. And there's another layer here with transparency.
Google Agent 1But this was the explain your logic part.
Google Agent 2Yes. So the juniors tend to ask the AI, why did you say that? They want the chatbot to explain its reasoning.
Google Agent 1Which sounds like a responsible thing to do.
Google Agent 2It sounds responsible, but it's actually naive. Experts know that AI reasoning is often just an illusion. It's a black box. If you ask it why, it will just make up a plausible sounding reason that fits the pattern of what an explanation looks like.
Google Agent 1It's confidently gaslighting you.
Judgment Accountability And Opinion
Google Agent 2Basically. So experts don't ask for explanations. They ask for citations. They don't want to know why the bot thinks something. They want a direct link to the source document so they can verify it themselves.
Google Agent 1So they treat the AI like a search engine, not a colleague.
Google Agent 2A very smart but untrustworthy database. That's the key distinction. And it's why you can't outsource AI strategy to the interns. You need that senior level system design thinking to make it work safely.
Google Agent 1Okay, so we need senior leaders to design the system, but then we hit the next wall. Even with the right strategy, you need the AI to actually know your business. And this is where we get to the data bottleneck.
Google Agent 2Right. This is the part of the report that Danielle I, another professor at MIT, really drills down on. We have these genius models like Claude or Gemini that know a little about everything. But a company doesn't need a general genius.
Google Agent 1They need a capable colleague.
Google Agent 2Exactly. Someone who knows the internal mess, who knows how we format our PL statements or how we handle a certain customer complaint.
Google Agent 1And to learn that, the AI needs training data. It needs examples of good work from your best people.
Google Agent 2Which leads to a huge problem. Let's say I'm your top salesperson and you ask me to record all my calls and upload all my emails to train a sales bot.
Google Agent 1Why on earth would I do that? I'm digging my own grave.
Google Agent 2You are. And this is where Danielle Lai just she puts it so brutally. She says, the moment your expertise stops being rare, you stop getting paid.
Google Agent 1Wow.
Google Agent 2Yeah.
Google Agent 1That's that's the quiet part, screamed out loud.
Google Agent 2It is the fundamental economic tension of this decade. Companies need clean expert data. But the experts who have that data have zero incentive to share it if they think it's just going to devalue their own skills.
Google Agent 1So the smartest people will just hoard their knowledge, they'll keep their best tricks offline.
Google Agent 2Won't engage in knowledge hiding. And you end up training your AI on the work of your mediocre employees because they're the only ones who will share. And you get a mediocre AI.
Google Agent 1So what's the fix? You can't just stop progress.
Google Agent 2You have to change how you pay people. We need to stop thinking about just paying for time or output and start thinking about paying people for their data as a form of intellectual property.
Google Agent 1Like royalties.
Google Agent 2Conceptually, yeah. If I'm the one who trains the model that ends up closing 50% of your deals, I should get a residual from that. You want me to build the blueprint for my own replacement? You better pay me for the blueprint.
Google Agent 1That is a massive, massive shift. We're talking about treating every employee like a content creator, licensing their work.
Google Agent 2It is, but without it, that data bottleneck is not gonna clear.
Creativity Humour And Improvisation
Google Agent 1Now, before everyone panics, Li does offer a bit of a chill pill here with the tired radiologist idea.
Google Agent 2Right. This is the argument for augmentation. Think about a radiologist at three in the morning. They've been on a 12-hour shift, their eyes are tired.
Google Agent 1That is not the person I want reading my MRI scan.
Google Agent 2Exactly. So in that moment, having an AI that can pre-scan the images and flag potential anomalies isn't replacing the radiologist. It's keeping the patient safe. It's taking a task, the initial screening off the human's plate, so they can focus on the role.
Google Agent 1The diagnosis, the patient care, the EPOCH stuff.
Google Agent 2The empathy, the judgment, exactly. It's about giving the dangerous, boring, or high fatigue tasks to the bot.
Google Agent 1Speaking of moving people around, let's talk about Johnson Johnson. They did something really fascinating to figure out what their employees could actually do.
Google Agent 2This is a great case study in what they call skills inference. Jane J. realized that just asking employees, what are you good at, doesn't really work.
Google Agent 1Because we lie or we just don't know.
Google Agent 2We don't know. Or we undersell ourselves. The terminology changes so fast we don't know what to call our skills. So they use an AI to scan the digital exhaust of their employees.
Google Agent 1Digital exhaust, I like that.
Google Agent 2They looked at project management tools, documents, learning platforms. They didn't ask, can you do data visualization? They looked and saw, oh, this person has been building Tableau dashboards for three years.
Google Agent 1It feels a little intrusive, but also incredibly useful.
Google Agent 2It walks a line. But the key was they validated it. They showed the data to the employee and said, the AI thinks you're good at these five things. Is that right? And the employees mostly said yes.
Google Agent 1And what did they find?
Google Agent 2A gold mine of future-ready skills hiding in plain sight. They found people with skills in robotic process automation who were stuck in jobs that didn't use them at all.
Google Agent 1So your marketing manager might secretly be a Python expert.
Google Agent 2Exactly. And this let J and J move from a career ladder to a career lattice.
Google Agent 1A ladder is just up or out.
Google Agent 2Right. It's rigid. Lattice is flexible. You can move sideways diagonally into a new role that didn't even exist yesterday. They create jobs based on the skills they find instead of shoving people into old boxes.
Google Agent 1That sounds incredible. Flexible, personalized careers. But there's always a but on this show. We have to talk about the risks. Because not everyone works at Johnson Johnson.
Google Agent 2And this is where we have to listen to Molly Kinder from the MIT Stone Center. She raises the alarm on equity. Because the benefits of this human edge, they're not going to be distributed equally.
Google Agent 1She points to the difference between the Hollywood writers and freelance illustrators.
Google Agent 2It's such a stark contrast. The Writers Guild, they went on strike, they shut down Hollywood, they used their collective power to force studios to put up guardrails on AI.
Google Agent 1They protected their EPOCAs.
Hope Vision And Changing The Future
Google Agent 2They did. But the freelance illustrator on Upwork, they have no union. They're facing the algorithm alone. And the data shows their wages and opportunities are falling much, much faster.
Google Agent 1And then there's the burnout risk. If the AI takes all the easy cases.
Google Agent 2The humans left with only the hard ones. Think about therapist or a customer service rep. Your day is usually a mix of easy and hard. The easy stuff, the I lost my password calls, they give you a mental breather.
Google Agent 1It lowers your heart rate.
Google Agent 2It does. But if an AI solves all those easy problems instantly, the human is left with eight straight hours of pure distilled crisis, eight hours of trauma. That is not sustainable.
Google Agent 1That's a recipe for a mental health collapse.
Google Agent 2It is. The report reminds us that unemployment is a health hazard. We've known that for a century. But extreme intensity employment is a health hazard too.
Google Agent 1So bringing this all together, it feels like we are on a very, very thin tightrope.
Google Agent 2We are. And to stay on it, we need those EPOCH skills, empathy, presence, opinion, creativity, hope. We need to lean into the things that require a soul.
Google Agent 1And we need to stop assuming the 22-year-olds are going to save us. We need senior leaders to actually lead and design safe systems.
Google Agent 2And we have to fix the money, pay people for their expertise if you want good data.
Google Agent 1I want to end on that idea of hope and subjective beliefs, because that, for me, was the most profound part of all this.
Google Agent 2It really is something.
Google Agent 1We live in this world that's obsessed with being data driven. We want the data to make the decision for us. But what this report is saying is that the most important human achievements, the big leaps forward, are inherently not data driven.
Google Agent 2That's the danger, the efficiency trap. If we hand all our big decisions over to an AI, we're handing them over to the past. The data only knows what has happened. It can't imagine what should happen. So if we had run the data on going to the moon in the 60s, the algorithm would say inefficient, expensive, high probability of death, no immediate ROI. Recommendation: do not proceed.
Google Agent 1And yeah, we went.
Google Agent 2We went because of hope, because of vision, because of a subjective belief that it was worth doing. That's the human edge. It's the ability to make that irrational choice that pushes the entire species forward.
Google Agent 1So keep your grit, keep your irrational hope, and maybe double check your own work instead of letting the intern do it.
Google Agent 2That's it for this deep dive. Thanks for listening.
Google Agent 1See you next time.