The Digital Transformation Playbook

Cybernetic Teammates

Kieran Gilmurray


Imagine a coworker who never gets tired, speaks every department’s language, and helps you turn rough sparks into sharp proposals. We put that vision to the test by unpacking a Harvard Business School and Procter & Gamble field study of 776 professionals that asks a bold question: can generative AI act as a cybernetic teammate across performance, expertise sharing, and even the social side of work?

At A Glance / TLDR:

  • P&G field study design with 776 professionals
  • Performance gains for individuals and teams using GPT‑4
  • Time paradox and longer, richer outputs
  • Breakthrough upside when humans and AI team up
  • Confidence drop despite higher quality outputs
  • Cognitive offloading and the shift to curation
  • AI as boundary spanner across R&D and commercial
  • Knowledge democratisation for non‑core employees
  • Prompt iteration revealing strong human steering
  • Emotional benefits and reduced anxiety from chat interfaces
  • Long‑term culture questions about trust and teamwork

Using Google NotebookLM, we start with hard numbers on quality and time. Individuals with GPT‑4 matched, and even slightly exceeded, cross‑functional pairs on quality, while spending about 16% less time on the task and producing richer, more comprehensive outputs. Then we zoom in on what teams plus AI uniquely unlock: a near tripling of top‑decile, breakthrough ideas.

The average may level out, but the outliers—the billion‑pound concepts—show up when human debate, tacit knowledge, and AI’s breadth collide. Along the way we tackle a surprising paradox: even as work improves, confidence drops. We explore cognitive offloading, why “it felt too easy” can tank self‑assessment, and how to reframe value around direction, constraints, critique, and taste so curatorship gets recognised as real expertise.

Next, we follow AI across the boundary between R&D and commercial. Without AI, specialists speak past each other; with it, ideas converge toward feasible and marketable solutions. Non‑core employees punch above their weight, using 18–24 prompt iterations to negotiate cost, materials, and tone. Semantic analysis confirms the human thumbprint remains strong: AI widens the search; people decide what matters. 

Finally, we explore sociality: natural language interfaces reduce the dread of the blank page, lifting excitement and energy while lowering anxiety. A chatbot becomes a motivational partner and emotional shock absorber—raising a provocative cultural question about how we relate to our human teammates when machines become our most patient collaborators.

Subscribe, share with a colleague who’s AI‑curious, and leave a review telling us: where will you pilot a cybernetic teammate next?

Support the show


Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


The Cybernetic Teammate Premise

Google Agent 1

I want you to imagine uh your absolute ultimate coworker. Just picture them for a second.

Google Agent 2

He's setting a high bar already.

Google Agent 1

Right. Because this is someone who works at, you know, lightning speed. They possess expert-like, encyclopedic knowledge in literally every single department of your company.

Google Agent 2

Which is pretty much impossible.

Google Agent 1

Totally impossible. And maybe best of all, they never get cranky. They never complain about Monday mornings. They never uh need a coffee break. And they are always endlessly enthusiastic about whatever complex, messy problem you throw at them.

Google Agent 2

Sounds like a total fantasy.

Three Pillars Of Teamwork

Google Agent 1

It really does. But welcome to today's deep dive. We are thrilled you're joining us because today we are looking at something that might make that fantasy a reality. We've got a massive, groundbreaking 2025 working paper from Harvard Business School and researchers at Procter & Gamble. It is titled The Cybernetic Teammate. And our mission for this deep dive is to answer really a fundamental question about the future of work. Is generative AI just, you know, another software tool? Is it just the new spreadsheet? Or can it actually step into the role of a true cybernetic teammate by replicating the three core pillars of human teamwork?

Google Agent 2

Which are performance, expertise sharing, and social connection.

Google Agent 1

Exactly.

Google Agent 2

It really is a fascinating premise because if we look at the modern workplace, it is fundamentally collaborative. And that is largely due to an economic and sociological concept called the burden of knowledge.

Google Agent 1

The Burden of Knowledge. Okay, break that down for us.

Google Agent 2

The basic idea is that human knowledge has expanded so vastly over the last century that modern problems are just too complex for any one single brain to hold all the necessary information.

Google Agent 1

Right. Nobody knows everything anymore.

Google Agent 2

Exactly. It's like a hundred years ago, a single inventor could tinker in their garage and build a revolutionary machine. Today, to build a smartphone or a new electric vehicle, or I don't know, even design a sustainable shampoo bottle.

Google Agent 1

You need an army.

Google Agent 2

You do. You need material scientists, software engineers, supply chain logistics experts, marketing directors to solve the big problem. You need a team. You need specialized experts collaborating. This raises an important question. If teamwork is so absolutely essential to navigating the burden of knowledge, what happens when you introduce a non-human entity into the mix?

Google Agent 1

Especially one that talks like us.

Inside The P&G Field Experiment

Google Agent 2

Right. An entity that is trained on human language, that communicates like we do, and that has access to this vast repository of information.

Google Agent 1

Okay, let's unpack this because the researchers didn't just uh theorize about this in a sterile university lab. They went to Procter and Gamble.

Google Agent 2

Massive global consumer goods company.

Google Agent 1

Massive. And they ran a real-world experiment with 776 highly skilled professionals. These were people pulled directly from P&G's R&D and commercial divisions.

Google Agent 2

Real professionals doing their actual jobs.

Google Agent 1

Exactly. They were given real product innovation challenges. We are talking about complex, multi-layered problems. Things like figuring out how to motivate consumers to adopt an entirely new daily product regimen, or redesigning packaging to meet strict sustainability metrics without destroying the profit margin.

Google Agent 2

Tough stuff.

Google Agent 1

Yeah. And the researchers split these 776 people into a classic two-by-two grid. So you had individuals working completely alone without AI.

Google Agent 2

The control group.

Google Agent 1

Yep. Then you had teams of two humans working together without AI. Then individuals working alone but armed with AI. And finally, teams of two humans working together plus the AI.

Google Agent 2

And it is crucial to understand the makeup of those two-person human teams. They were specifically cross-functional.

Google Agent 1

Ah, right.

Google Agent 2

Every team paired one person from RD with one person from commercial. This is the exact kind of cross-pollination that major companies rely on to generate viable, innovative seeds for new products.

Google Agent 1

It's the classic pairing of the builder and the seller.

Google Agent 2

Precisely. And the groups equipped with AI were using GPT-4. And the participants actually received specialized training on how to prompt the model specifically for consumer packaged goods tasks.

Google Agent 1

So they actually knew how to use it.

Google Agent 2

Right. They weren't just thrown to the wolves. They were set up for success to tackle these real business needs with the best possible tool set.

Google Agent 1

So let's look at the first pillar they measured, which is performance. And we have to talk about the time-bending paradox here because the data just completely upends how we think about productivity.

Google Agent 2

It really does.

Google Agent 1

Now, as you would expect, the baseline data proved that human collaboration works. Teams of two working without AI produced better solutions than individuals working without AI.

Google Agent 2

They saw a 0.24 standard deviation improvement in quality, as judged by a blind panel of independent experts.

Google Agent 1

Right. And in the corporate world, a 0.24 improvement? That's the difference between an acceptable mid-year review and putting yourself in line for a major promotion.

Google Agent 2

Two heads are definitively better than one.

Google Agent 1

Definitely. But when an individual was given AI, their performance absolutely shot up. It went up by 0.37 standard deviations.

Google Agent 2

That is a massive leap.

Google Agent 1

It's crazy. To put that in perspective, an individual sitting alone in their home office in their pajamas, just chatting with an AI, completely matched and even slightly exceeded the output quality of a formal two-person cross-functional human team.

Google Agent 2

It is a stunning validation of the technology's capability, but the numbers behind this create a fascinating productivity paradox. The individuals equipped with AI spent about 16% less time on the task compared to those working without AI.

Google Agent 1

16% less time?

Google Agent 2

Yes. Yet despite finishing significantly faster, they produced longer, much more comprehensive solutions. We're talking about solutions averaging over 500 words, compared to roughly 380 words from the participants who didn't have AI.

Google Agent 1

So they are working less time but doing more work. And it can't just be because the AI types faster than a human, right? It has to be that they aren't getting stuck.

Google Agent 2

That's a big part of it.

Google Agent 1

They have that cognitive momentum. It's eliminating the friction of human collaboration, the scheduling of the meeting, the polite small talk, the waiting for an email reply.

Google Agent 2

Precisely. This fundamentally shifts our understanding of cognitive labor. The AI isn't just a fancy autocomplete speeding up someone's typing rate. It is actually replicating the complex cognitive synergy of bouncing ideas off a human partner.

Google Agent 1

Yeah.

Teams Plus AI And Breakthrough Ideas

Google Agent 2

It is providing that immediate friction-reducing momentum that usually only comes from a highly caffeinated, highly focused, collaborative brainstorming session in a boardroom.

Google Agent 1

Here's where it gets really interesting. What happens when you combine a human team with this technology? If the individual with AI matches the human team, does the human team with AI just completely break the scale?

Google Agent 2

You would think so.

Google Agent 1

You would. Well, on average, the teams with AI performed similarly to the individuals with AI. The average quality was about the same. But the true breakthrough is in the extremes. When we look at the absolute best ideas, the top 10% of all solutions generated in the entire experiment, teams with AI were almost three times more likely to generate one of those breakthrough ideas.

Google Agent 2

A 9.2 percentage point increase in hitting a total home run.

Google Agent 1

Yes. So if you are a company looking for that $1 billion idea, that exceptional top-tier innovation that disrupts the market, you don't just want an individual with AI. You want the friction and the debate of human collaboration augmented by AI.

Google Agent 2

Exactly. The synergy of two different human minds bringing their tacit knowledge, their lived experience, and their specific departmental constraints, and then amplifying that through the AI's processing power. That is where the truly exceptional outliers emerge.

Google Agent 1

It's like supercharging the brainstorming.

Google Agent 2

It is. However, the data also revealed a fascinating contradiction regarding how these participants actually felt about their performance.

Google Agent 1

Oh, right, the confidence drop.

Google Agent 2

Yeah. Even though the AI users, both individuals and teams, were objectively producing higher quality work and generating far more top-tier solutions, they self-reported being less confident in their outputs. There was a 9.2 percentage point drop in their expectation that their idea was in the top 10%.

Google Agent 1

Wait, so they were objectively doing the best work of the whole group, but they rated themselves lower. That sounds like classic imposter syndrome.

Google Agent 2

It's very similar.

Google Agent 1

Is it because they didn't sweat over a blank page for hours so they feel like they somehow cheated and consequently devalued their own work?

Google Agent 2

That is exactly what it is. In the literature, we refer to this as the effect of cognitive offloading. When humans offload the heavy lifting of ideation, structuring, and drafting to a machine, they often experience a severe disconnect with the final output.

Google Agent 1

Because it came too easily.

Google Agent 2

Exactly. Because they didn't experience the traditional friction, the struggle, and the literal typing of every single word themselves. They devalue the result. They look at this brilliant proposal and think, well, the machine did the hard part, so this can't possibly be my stroke of genius. This is a massive, significant hurdle for organizations right now. It's not enough to just give people the tool. You have to help them recalibrate their expectations and recognize that their value is no longer in the drafting. Their value is in their curatorial and steering efforts.

Google Agent 1

That is such a good point. The human is the director now, not the typist, which actually leads us perfectly into the second pillar, expertise sharing.

Google Agent 2

This is where the structural changes really become visible.

Google Agent 1

Yeah, because we all know the classic corporate divide. You have the tech geeks and RD on one side of the building and the business folks and commercial on the other. They speak entirely different languages. And the study mapped this perfectly.

Google Agent 2

It was very stark.

Google Agent 1

If you asked an R&D professional to solve the problem without AI, they proposed highly technical ideas. It would be all polymer science and molecular bonds, with absolutely no thought given to market appeal.

Google Agent 2

Meanwhile, the commercial folks pitched purely business-oriented ideas.

Google Agent 1

Beautiful marketing campaigns for products that might be physically impossible to manufacture at scale. So if you look at a graph of their solutions, it's a bimodal distribution. Two completely separate humps on the chart. But when individuals used AI, that divide completely vanished.

Google Agent 2

Two humps merged.

Google Agent 1

They merged into one perfectly centered peak. Both groups produced balanced, holistic solutions that beautifully blended technical feasibility with commercial viability.

Google Agent 2

What's fascinating here is how the AI acts as a boundary spanning bridge. In organizational theory, we talk a lot about boundary spanners.

Google Agent 1

What are those exactly?

Google Agent 2

These are the rare unicorns in a company who understand enough about multiple departments to translate between them. They are the project managers who speak fluent engineer and fluent marketing.

Google Agent 1

Ah, gotcha.

Google Agent 2

The AI is functioning as an on-demand, infinitely scalable boundary spanner. And we see this even more starkly when we look at the participants' actual job roles. The study divided people into core employees, those whose daily job is literally product innovation, and non-core employees, people in the business unit who don't usually do this specific type of task.

Google Agent 1

Right. So we are talking about a junior analyst or maybe someone from corporate finance suddenly being asked to design a new consumer product regimen.

Google Agent 2

Precisely. And predictably, without AI, those non-core employees performed relatively poorly on their own. They didn't have the decades of tacit knowledge to draw upon. Makes sense. But when those non-core employees were given AI, their performance skyrocketed to the exact same level as teams that included the highly experienced 20-year veteran core experts.

Google Agent 1

That is wild.

Google Agent 2

The AI completely democratized the specialized knowledge within the organization. It allowed someone without deep domain expertise to punch way above their weight class by leaning on the AI to fill in their massive knowledge gaps.

Democratising Knowledge For Non‑Core Roles

Google Agent 1

But I know what you're thinking as you listen to this. You're thinking, sure, the junior finance person did great because they just typed, give me a product idea, and hit copy-paste. The AI did everything.

Google Agent 2

Right.

Google Agent 1

But the researchers actually tracked how the AI was used, and it totally shatters that assumption. Yes, many participants kept over 75% of the text the AI generated. But they didn't just accept the first answer; they iterated heavily.

Google Agent 2

They really worked for it.

Google Agent 1

On average, participants used between 18 and 24 separate prompts to get to their final solution. You look at the prompt logs, and these participants aren't just saying, write me a proposal; it's an actual argument.

Google Agent 2

It's a negotiation.

Google Agent 1

Yeah. They are telling the AI, no, that packaging material is too expensive. Recalculate it using biodegradable plastics. Or make the tone of this marketing pitch less aggressive, it sounds too clinical. They are wrestling with the model to get to that final product.

Google Agent 2

The researchers even used semantic mapping to mathematically prove this. They mapped the final submitted solutions of the human participants against solutions generated purely by the AI with zero human intervention.

Google Agent 1

So comparing human plus AI against just AI.

Google Agent 2

Right. If the humans were just mindlessly copy-pasting, their solutions would cluster right on top of the pure AI solutions on the map. But they didn't. The semantic data proved that the AI-assisted solutions looked much closer to the human-only ideas than to the pure AI ideas.
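(Editor's note: to picture the semantic comparison being described here, the following is a purely illustrative Python sketch. The three vectors are invented stand-ins, not the study's actual text embeddings; the idea is simply that if participants had copy-pasted, their solutions' embeddings would sit nearest the pure-AI cluster.)

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embedding vectors standing in for three kinds of solutions:
human_only  = [0.9, 0.1, 0.2]  # a human-only solution
pure_ai     = [0.1, 0.9, 0.3]  # an AI solution with zero human intervention
ai_assisted = [0.8, 0.3, 0.2]  # a human-plus-AI solution

# The study's finding, in miniature: the AI-assisted solution sits
# nearer the human-only cluster than the pure-AI cluster.
closer_to_human = cosine(ai_assisted, human_only) > cosine(ai_assisted, pure_ai)
print(closer_to_human)  # True with these toy vectors
```

In the real analysis, each submitted solution would be embedded from its full text and compared against clusters of human-only and AI-only baselines; the toy vectors above just make the "whose cluster is it closer to?" question concrete.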

Google Agent 1

So the human thumbprint was still very visible.

Google Agent 2

Very much so. The humans were firmly driving the ship. They were using the AI to quickly explore the possibility space, but they were applying their own judgment, their own contextual awareness, and their knowledge of organizational constraints to mold the final product. The AI wasn't the author, it was the ultimate co-pilot.

Google Agent 1

Which brings us to the third and honestly the most surprising pillar of this entire deep dive: the emotional teammate or sociality.

Google Agent 2

This is my favorite part of the study.

Google Agent 1

No, I have to be honest, when I first read the section, I was pretty skeptical. Usually when a company rolls out new enterprise tech, people feel more alienated, not less. They hate learning the new software, they feel isolated staring at a screen instead of talking to their coworkers. So why is a chatbot suddenly making people feel psychologically safe?

Google Agent 2

It sounds counterintuitive at first.

Human Steering Proven In Prompt Logs

Google Agent 1

Right, but then you think about the dread you feel staring at a blinking cursor at 4 p.m. on a Friday when you have to write a complex proposal. It's lonely. It's highly anxiety-inducing.

Google Agent 2

It is entirely anxiety-inducing, and it all traces back to the cognitive burden of initiation. Starting a complex task from absolutely nothing requires a massive amount of cognitive energy. And the emotional metrics captured in this study around that specific burden are striking.

Google Agent 1

What did the numbers look like?

Google Agent 2

Participants who used the AI reported a massive spike in positive emotions, specifically excitement, enthusiasm, and energy. We are talking about an increase of 0.45 to 0.63 standard deviations in positive affect.

Google Agent 1

Wow.

Google Agent 2

To translate that, it means they went from feeling drained and overwhelmed to feeling actively engaged and optimistic. Simultaneously, there was a significant drop in negative emotions like anxiety, frustration, and distress. The people working with AI were measurably happier, less stressed, and more energized than the people working alone without it.

Google Agent 1

They were actually having fun doing corporate product ideation. And it makes sense because they weren't alone in the void. They had someone or something to bounce that first terrible half-baked idea off of without feeling embarrassed.

Google Agent 2

If we connect this to the bigger picture, it reveals something profound about the nature of a natural language interface.

Google Agent 1

Yeah.

Google Agent 2

Because the AI communicates in conversational English, it inherently fills a motivational role. It engages in a two-way dialogue.

Google Agent 1

It responds to you.

Google Agent 2

Yes. And in fact, the prompts the researchers used literally instructed the AI to act as an innovation specialist, to ask the user questions one at a time, to encourage the user. It effectively eliminates the stark isolation of tackling a complex problem alone.

Google Agent 1

An emotional shock absorber.

Sociality And Emotional Effects

Google Agent 2

Exactly. In a very real way, the AI acts as an emotional shock absorber. It mimics the psychological safety of a supportive human team. You can throw out a terrible thought without any fear of judgment, without worrying about office politics, and have a tireless partner help you bake that thought the rest of the way.

Google Agent 1

So what does this all mean? As we sit here looking out at 2026 and beyond, this data tells us that the very definition of a team has permanently changed. If you are prepping for a major strategy meeting, or if you've been assigned a project that requires skills way outside your normal wheelhouse, you don't necessarily need to coordinate a massive, calendar-clogging, cross-departmental meeting just to get started.

Google Agent 2

You don't have to wait for Tuesday at 2 p.m.

Google Agent 1

You don't. You have a cybernetic teammate ready right now. A teammate that can brainstorm with you at a high level, bridge your knowledge gap so you don't sound foolish when you finally do present to the engineers or the finance team, and even hype you up emotionally when you're feeling stuck staring at a blank page. It is a fundamental shift in how we approach our daily work, moving from isolated struggle to constant collaboration.

Google Agent 2

It certainly is a paradigm shift. And it leaves us with a critical long-term dynamic to consider.

Google Agent 1

What's that?

Google Agent 2

If an AI can successfully mimic the performance output, the cross-disciplinary expertise, and even the positive emotional support of a human coworker, what happens to the human water cooler culture of a workplace?

Google Agent 1

Oh wow.

Google Agent 2

Think about it. If your absolute favorite collaborator, the one who never shoots down your ideas, who never gets tired, who always has time for you, and who always helps you refine your thoughts perfectly, is a machine, how will that change the way we connect, build trust, and relate to our actual human peers in the long run? Will we become less tolerant of the natural friction of human interaction?

Google Agent 1

That's a huge question.

Google Agent 2

That is the next great frontier we will have to navigate as these cybernetic teammates become a permanent fixture in our professional lives.

Try Treating AI As A Teammate

Google Agent 1

Wow. Something to really think about the next time you log on and open up a chat window. Are you just querying a database or are you talking to your favorite coworker? Thank you so much for joining us on this deep dive. As you head into your workday, try it out. Try treating your AI not as a glorified search engine, but as your newest, most enthusiastic teammate. Start a conversation, push back on its ideas, and see where that collaboration takes you.