The Digital Transformation Playbook

Congrats, You Trained The Bot That Took Your Job

Kieran Gilmurray


Stop asking what AI can do and start asking what it can’t. We dig into fresh MIT Sloan research that maps the human edge with EPOCH - empathy, presence, opinion, creativity, and hope - and show why these capabilities predict safer, more meaningful careers as automation spreads. 

Along the way, we dismantle the “junior trap,” where digital natives get handed AI strategy without the system fluency to manage risk, and we lay out a pragmatic playbook for leaders who need to design guardrails that scale.

At A Glance / TLDR:

• reframing the job worry to human‑intensive skills
• EPOCH explained: empathy, presence, opinion, creativity, hope
• empathy as connection not detection
• presence for physical work and serendipity
• accountable judgment over probabilistic answers
• creativity through humour and improvisation
• hope and subjective belief beating status‑quo data
• why the junior trap misallocates AI strategy
• task‑level fixes versus system‑level risk design
• citations over explanations for trustworthy outputs
• clearing the data bottleneck with royalties for expertise
• safe augmentation with fatigue‑aware use cases
• J&J skills inference and career lattices
• equity risks, unions, freelancers, and burnout

We get specific about how to match use cases to model reliability, why experts ask for citations instead of explanations, and how to treat a model like a brilliant yet untrustworthy database. 

Then we tackle the data bottleneck blocking real enterprise value: your best people hold the patterns your AI needs, but sharing that craft can devalue their advantage. The fix is economic, not technical. Think royalties and residuals for employee‑generated training data, turning knowledge transfer into an asset instead of a threat. If a salesperson’s workflows lift model close rates, a share of that lift should flow back to the source.

You’ll also hear how Johnson & Johnson used skills inference to surface hidden strengths from everyday work, moving from rigid ladders to flexible career lattices. We balance the promise of augmentation - like fatigue‑aware support for radiologists - with the reality of equity and burnout, spotlighting why unions won protections while freelancers face steep declines. 

The throughline is simple: models predict the future from the past; humans create futures that never existed. Keep empathy at the centre, design for serendipity, hold judgment where accountability lives, cultivate real creativity, and defend hope as a strategic asset. 

If this conversation helps you rethink your AI strategy or your own career moat, follow the show, share with a friend, and leave a quick review - what’s your strongest EPOCH skill?

Support the show


Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


Flipping The AI Job Debate

Google Agent 1

Welcome back to the deep dive. Okay, so today is Monday, February 16th, 2026. And it's been, what, a little less than three years since ChatGPT just exploded onto the scene and, well, collectively raised everyone's blood pressure.

Google Agent 2

Exploded is the right word.

Google Agent 1

And I have to say, for all that time, it feels like the conversation's been stuck in a loop. It's always the same question. Is the robot coming for my job?

Google Agent 2

Is it going to replace me or is it going to augment me?

Google Agent 1

Right. Whatever augment even means this week. It's this binary debate, displacement versus augmentation.

Google Agent 2

It is. And it's the anxiety that keeps people up at night. And frankly, it's getting a little bit boring. It misses the entire point.

Google Agent 1

It really does. But today, we are going to flip the script. We've got a huge stack of research here from the MIT Sloan School of Management, their workforce intelligence report. And the researchers, Isabella Loaiza and Roberto Rigobon, are basically saying we've been asking the completely wrong question.

Google Agent 2

They are. I mean, instead of sitting around asking what AI can do, which, let's be honest, is a terrifyingly long list that just gets longer every week, they're saying we should be asking...

Google Agent 1

What can humans do that AI is actually terrible at?

Google Agent 2

Exactly. And that is a much more comforting question to ask.

Google Agent 1

It is, provided the answer isn't just nothing.

Google Agent 2

Thankfully, it's a pretty substantial answer. We're going to look at this framework they developed called EPOCH. It's basically a checklist for, you know, a robot-proof career.

Google Agent 1

And it's not just about being nice to people, it's much more specific.

Google Agent 2

Much more.

Google Agent 1

And we're also going to get into something they call the junior trap.

Google Agent 2

I have to admit, when I read this part, I felt a little attacked on behalf of Gen Z. You know, this idea that just because you grew up with an iPad, you should be the one to lead the AI strategy.

Google Agent 1

And MIT is saying that is a massive, costly mistake. Not because of age, but because of how they view systems.

Google Agent 2

Okay, we'll get into that. And then finally, we're going to look at the money. How companies like Johnson and Johnson are using AI to find skills they didn't even know their employees had. And maybe more importantly, why you should probably be getting paid royalties for training your own replacement.

Introducing The EPOCH Framework

Google Agent 1

That is the economic elephant in the room. So let's dive in. The human edge. Let's start with that first study from Loaiza and Rigobon. They didn't just theorize, they looked at a massive amount of data, right?

Google Agent 2

Oh, huge. They analyzed 19,000 specific work tasks.

Google Agent 1

19,000.

Google Agent 2

Yeah. Defined by the U.S. Bureau of Labor Statistics. So they weren't just guessing, they were looking at the tiny granular things people actually do all day. And from that, they built a framework to identify what they call human-intensive capabilities.

Google Agent 1

And these are the things that if your job has them, you are statistically much safer from automation.

Google Agent 2

Exactly.

Google Agent 1

And this is where we get the acronym EPOCH. And I've got to say, usually I roll my eyes at these kinds of acronyms, but this one actually works.

Google Agent 2

It really does. So the E stands for empathy.

Google Agent 1

Right. But they make a distinction here that I thought was so important. It's not just about detecting emotion.

Google Agent 2

That's it. And that's the trap so many people fall into, right? They hear, oh, AI can read facial expressions. And they think empathy is, you know, a solved problem. Right. It's just an input. Exactly. The report makes this great distinction. It says AI can detect.

Google Agent 1

But it can't connect.

Google Agent 2

It can't connect. It's like the difference between a thermometer reading a fever and actually feeling that fever with someone. An AI can analyze a therapy transcript and say, the patient is showing signs of anxiety.

Google Agent 1

But it can't say, I know what that feels like. It can't offer that solidarity.

Google Agent 2

It can't. In a crisis, you don't just want a diagnosis, you want to be seen by another consciousness.

Google Agent 1

So if your job is about just identifying problems, you should be worried. But if it's about sitting with the problem alongside someone, you're safer.

Google Agent 2

Connection is the currency. That's it.

Empathy Means Connection Not Detection

Google Agent 1

Okay. P is for presence. Now this one surprised me because the narrative for the last few years has been all about remote work. The office is dead.

Google Agent 2

And now MIT says physical presence is a premium asset.

Google Agent 1

So what's that about?

Google Agent 2

Well, it's about two things. One is obvious, physical manipulation. The other is less obvious. Unexpected collision.

Google Agent 1

Collision.

Google Agent 2

Yeah. I mean, think about a nurse. You have to physically be there to put in an IV. But then think about a journalist. You have to be on the ground to get the texture, the off-the-record comment that happens when the mic is off.

Google Agent 1

And AI can scrape the internet, but it can't go to a disaster site.

Google Agent 2

It can't witness. And the collision part, that's like the water cooler effect, but backed by science: so much innovation happens in unplanned interactions. AI is really bad at bumping into ideas it wasn't programmed to look for.

Google Agent 1

Oh, I love that framing. AI optimizes for the known.

Google Agent 2

And humans stumble into the unknown just by physically navigating a chaotic world.

Google Agent 1

Okay. O is for opinion. This feels like a big one for any lawyers listening.

Google Agent 2

It's huge. Opinion, judgment, ethics. The key word here is accountability. An AI can give you an answer, a probability, but it can't take responsibility for it.

Google Agent 1

You can't sue an algorithm for malpractice. I mean, you can try.

Google Agent 2

Good luck. Humans have to navigate open-ended systems. You're not just following a rule book, you're interpreting it. You're making a judgment call based on values, and AI gives you a calculation.

Google Agent 1

A human provides a verdict.

Google Agent 2

A verdict. And we still want a human signature at the bottom of the page when the stakes are high.

Google Agent 1

Okay, C is creativity. But what about Midjourney and Sora? I mean, AI is making movies now. Is creativity really a human edge anymore?

Google Agent 2

The report argues yes, but it gets really specific. It's about things like humor and improvisation.

Google Agent 1

Right. I mean, have you ever tried to get one of these things to write a genuinely funny stand-up routine?

Google Agent 2

It's painful, right?

Google Agent 1

It's so bad. It's like a dad joke generator that skimmed a textbook on comedy theory.

Google Agent 2

It understands the structure of a joke, the setup, the punchline, but it has no idea why it's funny. It can't subvert social norms because it doesn't understand them.

Google Agent 1

Which brings us to H hope. And this feels a little poetic for an MIT report.

Google Agent 2

It does, doesn't it? But this is the capability they found that was most associated with employment growth. Hope, vision, leadership.

Google Agent 1

How does that look on a job description? Must have five years of hope.

Google Agent 2

It looks like grit. It looks like what you might call irrational perseverance. So think about an entrepreneur starting a company. If you fed all the data to an AI, the failure rates, the market saturation. The AI would say, stop, do not proceed, probability of success is four percent.

Google Agent 1

And the human says, I don't care. I see something you don't.

Google Agent 2

Vision defies the existing data. And this connects to a really profound point they make about subjective beliefs.

Google Agent 1

What do they mean by that?

Presence And Serendipity In Work

Google Agent 2

Well, there are things we do on principle, not on probability. The report mentions the civil rights movement or women's suffrage. If you had pulled the data or the sentiment of the ruling class at the time, an AI would have concluded those movements were destined to fail.

Google Agent 1

The data supported the status quo.

Google Agent 2

Exactly. And AI is a status quo machine. It predicts the future based on the past. So if you want to change the future, you need a human.

Google Agent 1

That's actually kind of beautiful. So if your job is to maintain things, worry. If your job is to imagine something new, you're essential.

Google Agent 2

That's the takeaway.

Google Agent 1

Okay, that's the what: the EPOCH framework. But I want to shift to the who, because this next bit from the report challenges a really deep-seated bias in the corporate world.

Google Agent 2

The junior trap.

Google Agent 1

The junior trap. It feels like the standard playbook for the last three years has been: the CEO is 60, doesn't know ChatGPT, so they find the 23-year-old intern and say, you figure out our AI strategy, you're a digital native.

Google Agent 2

It seems to make sense on the surface, but Kate Kellogg, another MIT Sloan professor, found this is actually dangerous.

Google Agent 1

Dangerous, how?

Google Agent 2

Not dangerous like that. They're breaking the business logic. So she studied 78 junior consultants using GPT-4. These are smart kids, top schools. But she found they kept falling into what she calls novice AI risk mitigation tactics.

Google Agent 1

Okay, that's a mouthful. Break that down for me.

Google Agent 2

Well, it's basically because they don't understand the deep systems of the business.

Google Agent 1

Okay.

Google Agent 2

So they focus on fixing the individual tasks. Let's say they ask the AI to write a report. Yeah. And the AI makes up a fact. It hallucinates.

Google Agent 1

Which it does all the time.

Google Agent 2

All the time. The junior employee sees that, and their solution is, okay, I need to manually fact check every single sentence this thing produces.

Google Agent 1

Which defeats the whole purpose of using AI in the first place. You've just become a robot's editor.

Google Agent 2

Exactly. That's a task-level fix. A senior leader, though, they see that same hallucination and they think about the system. They think, okay, this tool is maybe 80% accurate. That's way too risky for a client memo, so we're not using it for that. But for internal brainstorming, where volume matters more than accuracy, it's perfect.

Google Agent 1

Ah, so the senior fixes the use case while the junior tries to fix the output.

Google Agent 2

Correct. The senior manages the risk profile of the whole system. And there's another layer here with transparency.

Google Agent 1

But this was the explain your logic part.

Google Agent 2

Yes. So the juniors tend to ask the AI, why did you say that? They want the chatbot to explain its reasoning.

Google Agent 1

Which sounds like a responsible thing to do.

Google Agent 2

It sounds responsible, but it's actually naive. Experts know that AI reasoning is often just an illusion. It's a black box. If you ask it why, it will just make up a plausible sounding reason that fits the pattern of what an explanation looks like.

Google Agent 1

It's confidently gaslighting you.

Judgment Accountability And Opinion

Google Agent 2

Basically. So experts don't ask for explanations. They ask for citations. They don't want to know why the bot thinks something. They want a direct link to the source document so they can verify it themselves.

Google Agent 1

So they treat the AI like a search engine, not a colleague.

Google Agent 2

A very smart but untrustworthy database. That's the key distinction. And it's why you can't outsource AI strategy to the interns. You need that senior level system design thinking to make it work safely.

Google Agent 1

Okay, so we need senior leaders to design the system, but then we hit the next wall. Even with the right strategy, you need the AI to actually know your business. And this is where we get to the data bottleneck.

Google Agent 2

Right. This is the part of the report that Danielle Li, another professor at MIT, really drills down on. We have these genius models like Claude or Gemini that know a little about everything. But a company doesn't need a general genius.

Google Agent 1

They need a capable colleague.

Google Agent 2

Exactly. Someone who knows the internal mess, who knows how we format our P&L statements or how we handle a certain customer complaint.

Google Agent 1

And to learn that, the AI needs training data. It needs examples of good work from your best people.

Google Agent 2

Which leads to a huge problem. Let's say I'm your top salesperson and you ask me to record all my calls and upload all my emails to train a sales bot.

Google Agent 1

Why on earth would I do that? I'm digging my own grave.

Google Agent 2

You are. And this is where Danielle Li puts it so brutally. She says, the moment your expertise stops being rare, you stop getting paid.

Google Agent 1

Wow.

Google Agent 2

Yeah.

Google Agent 1

That's the quiet part, screamed out loud.

Google Agent 2

It is the fundamental economic tension of this decade. Companies need clean expert data. But the experts who have that data have zero incentive to share it if they think it's just going to devalue their own skills.

Google Agent 1

So the smartest people will just hoard their knowledge, they'll keep their best tricks offline.

Google Agent 2

They'll engage in knowledge hiding. And you end up training your AI on the work of your mediocre employees, because they're the only ones who will share. And you get a mediocre AI.

Google Agent 1

So what's the fix? You can't just stop progress.

Google Agent 2

You have to change how you pay people. We need to stop thinking about just paying for time or output and start thinking about paying people for their data as a form of intellectual property.

Google Agent 1

Like royalties.

Google Agent 2

Conceptually, yeah. If I'm the one who trains the model that ends up closing 50% of your deals, I should get a residual from that. You want me to build the blueprint for my own replacement? You better pay me for the blueprint.

Google Agent 1

That is a massive, massive shift. We're talking about treating every employee like a content creator, licensing their work.

Google Agent 2

It is, but without it, that data bottleneck is not gonna clear.

Creativity Humour And Improvisation

Google Agent 1

Now, before everyone panics, Li does offer a bit of a chill pill here with the tired radiologist idea.

Google Agent 2

Right. This is the argument for augmentation. Think about a radiologist at three in the morning. They've been on a 12-hour shift, their eyes are tired.

Google Agent 1

That is not the person I want reading my MRI scan.

Google Agent 2

Exactly. So in that moment, having an AI that can pre-scan the images and flag potential anomalies isn't replacing the radiologist. It's keeping the patient safe. It's taking a task, the initial screening, off the human's plate, so they can focus on the role.

Google Agent 1

The diagnosis, the patient care, the EPOCH stuff.

Google Agent 2

The empathy, the judgment, exactly. It's about giving the dangerous, boring, or high fatigue tasks to the bot.

Google Agent 1

Speaking of moving people around, let's talk about Johnson & Johnson. They did something really fascinating to figure out what their employees could actually do.

Google Agent 2

This is a great case study in what they call skills inference. J&J realized that just asking employees, what are you good at, doesn't really work.

Google Agent 1

Because we lie or we just don't know.

Google Agent 2

We don't know. Or we undersell ourselves. The terminology changes so fast we don't know what to call our skills. So they use an AI to scan the digital exhaust of their employees.

Google Agent 1

Digital exhaust, I like that.

Google Agent 2

They looked at project management tools, documents, learning platforms. They didn't ask, can you do data visualization? They looked and saw, oh, this person has been building Tableau dashboards for three years.

Google Agent 1

It feels a little intrusive, but also incredibly useful.

Google Agent 2

It walks a line. But the key was they validated it. They showed the data to the employee and said, the AI thinks you're good at these five things. Is that right? And the employees mostly said yes.

Google Agent 1

And what did they find?

Google Agent 2

A gold mine of future-ready skills hiding in plain sight. They found people with skills in robotic process automation who were stuck in jobs that didn't use them at all.

Google Agent 1

So your marketing manager might secretly be a Python expert.

Google Agent 2

Exactly. And this let J and J move from a career ladder to a career lattice.

Google Agent 1

A ladder is just up or out.

Google Agent 2

Right. It's rigid. A lattice is flexible. You can move sideways or diagonally into a new role that didn't even exist yesterday. They create jobs based on the skills they find instead of shoving people into old boxes.

Google Agent 1

That sounds incredible. Flexible, personalized careers. But there's always a but on this show. We have to talk about the risks. Because not everyone works at Johnson & Johnson.

Google Agent 2

And this is where we have to listen to Molly Kinder from the MIT Stone Center. She raises the alarm on equity. Because the benefits of this human edge, they're not going to be distributed equally.

Google Agent 1

She points to the difference between the Hollywood writers and freelance illustrators.

Google Agent 2

It's such a stark contrast. The Writers Guild, they went on strike, they shut down Hollywood, they used their collective power to force studios to put up guardrails on AI.

Google Agent 1

They protected their EPOCH skills.

Hope Vision And Changing The Future

Google Agent 2

They did. But the freelance illustrator on Upwork, they have no union. They're facing the algorithm alone. And the data shows their wages and opportunities are falling much, much faster.

Google Agent 1

And then there's the burnout risk. If the AI takes all the easy cases.

Google Agent 2

The humans are left with only the hard ones. Think about a therapist or a customer service rep. Your day is usually a mix of easy and hard. The easy stuff, the I-lost-my-password calls, they give you a mental breather.

Google Agent 1

It lowers your heart rate.

Google Agent 2

It does. But if an AI solves all those easy problems instantly, the human is left with eight straight hours of pure distilled crisis, eight hours of trauma. That is not sustainable.

Google Agent 1

That's a recipe for a mental health collapse.

Google Agent 2

It is. The report reminds us that unemployment is a health hazard. We've known that for a century. But extreme intensity employment is a health hazard too.

Google Agent 1

So bringing this all together, it feels like we are on a very, very thin tightrope.

Google Agent 2

We are. And to stay on it, we need those EPOCH skills, empathy, presence, opinion, creativity, hope. We need to lean into the things that require a soul.

Google Agent 1

And we need to stop assuming the 22-year-olds are going to save us. We need senior leaders to actually lead and design safe systems.

Google Agent 2

And we have to fix the money, pay people for their expertise if you want good data.

Google Agent 1

I want to end on that idea of hope and subjective beliefs, because that, for me, was the most profound part of all this.

Google Agent 2

It really is something.

Google Agent 1

We live in this world that's obsessed with being data driven. We want the data to make the decision for us. But what this report is saying is that the most important human achievements, the big leaps forward, are inherently not data driven.

Google Agent 2

That's the danger, the efficiency trap. If we hand all our big decisions over to an AI, we're handing them over to the past. The data only knows what has happened. It can't imagine what should happen. So if we had run the data on going to the moon in the 60s, the algorithm would say: inefficient, expensive, high probability of death, no immediate ROI. Recommendation: do not proceed.

Google Agent 1

And yeah, we went.

Google Agent 2

We went because of hope, because of vision, because of a subjective belief that it was worth doing. That's the human edge. It's the ability to make that irrational choice that pushes the entire species forward.

Google Agent 1

So keep your grit, keep your irrational hope, and maybe double check your own work instead of letting the intern do it.

Google Agent 2

That's it for this deep dive. Thanks for listening.

Google Agent 1

See you next time.