The Digital Transformation Playbook

The AI Leaders’ Playbook

Kieran Gilmurray


The old idea that every company adopts new tech at roughly the same pace is finished. A small minority is using enterprise AI to push profit margins to heights we haven’t seen in decades, while others are spending heavily and getting negative returns. 

We (Google NotebookLM Agents) break down what separates AI leaders from everyone else, using findings from the 2026 Global AI Report built from thousands of senior decision makers across industries and countries.

TLDR / At A Glance:

  • the AI stack as generative, agentic, and protective AI working together
  • why AI leaders see AI strategy as business strategy
  • rebuilding core applications versus bolting AI APIs onto legacy systems
  • targeted transformation in one or two high-value domains
  • back office automation as the safest, clearest ROI starting point
  • the flywheel effect that funds bigger AI investment

We walk through the modern AI stack in plain terms: generative AI as the creator, agentic AI as the autonomous doer, and protective AI as the guardrails that keep systems safe, private, compliant, and efficient. Then we get specific about what leaders do differently: they fuse AI strategy with business strategy, rebuild key applications instead of bolting on an API, and choose targeted transformation in high-value domains to create a repeatable flywheel of ROI.

From there, we dig into the friction points executives hit when agentic AI moves from demos into core workflows: AI-native architecture, the drag of technical debt, and a new reality of private AI and sovereign AI as data regulations tighten across borders. We also cover the human side, including why leaders use AI for augmentation, why experienced veterans matter for oversight, the rise of supervisory operators, and why the Chief AI Officer is becoming the translator between algorithms and the balance sheet.

If you’re trying to future-proof your career or steer a team through AI transformation, this is a practical start. 

Subscribe, share with a colleague, and leave a review with the biggest AI obstacle you’re facing.


Support the show


𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


Why The AI Gap Exploded

Google Agent 1

Right now, out in the corporate landscape, the linear timeline of technology adoption is, well, it's completely dead.

Google Agent 2

Oh, yeah, it's gone.

Google Agent 1

We are looking at a reality where a small fraction of companies, about 15% of them, are using AI to push their profit margins to levels we literally haven't seen in decades. Right. While the bottom 20% are actually losing money trying to keep up. So welcome to the deep dive. We have a really fascinating roadmap in front of us today. We definitely do. Our mission here is to decode the exact playbook used by that top tier of organizations. You know, we want to look at how they took artificial intelligence from this isolated tech experiment and turned it into massive, undeniable financial profit. Exactly. And our source material for this is a newly released, highly comprehensive document. It's the 2026 Global AI Report, a playbook for AI leaders from NTT DATA.

Google Agent 2

And you know, the scale of the data in this report is really what makes it so critical to understand. Yeah.

Google Agent 1

Tell us about the scope.

Google Agent 2

Well, we are pulling insights from over 2,500 C-suite and senior decision makers.

Google Agent 1

Wow, that's huge.

Google Agent 2

Right. And this spans 15 different industries across 35 countries. So it basically bypasses all those theoretical predictions and gives us a global blueprint of what is actually functioning on the ground.

Google Agent 1

Like inside massive enterprises right now in late 2025.

Google Agent 2

Exactly. It's what is working today.

Google Agent 1

And this is exactly why this matters to you in your career right now. Whether you are leading a team, maybe trying to scale a startup, or honestly, simply looking to future-proof your own professional value. Absolutely. Understanding the mechanics of how these AI leaders operate is the ultimate shortcut. I mean, it is the blueprint for staying relevant in a constantly shifting environment.

Google Agent 2

It really is.

Google Agent 1

Okay, let's unpack this. What actually defines an AI leader in practice? I guess to get there, we first need to look at the modern playing field, which the report categorizes as the AI stack.

Google Agent 2

Right. So the modern AI stack has matured a lot over the last couple of years. Right. And the report basically divides it into three functional pillars. Okay. First, there is Gen AI, which acts as the creator or the brainstormer.

Google Agent 1

Right. The stuff we are all pretty familiar with by now.

Google Agent 2

Yeah, exactly. It drafts, it synthesizes, you know, it generates code. But the real shift is in the second pillar, which is agentic AI.

Google Agent 1

And this is the doer.

Google Agent 2

Yes, the doer. We are talking about autonomous systems that operate in closed loops.

Google Agent 1

Meaning they don't need us holding their hands.

Google Agent 2

Exactly. They don't just wait for a prompt, they evaluate a situation, execute a multi-step task, check their own work for errors, and then they actually optimize the process for the next time.

Google Agent 1

All entirely on their own.

Google Agent 2

All on their own. And finally, the third pillar is protective AI.

Google Agent 1

Okay. What's that?

Google Agent 2

These are the embedded guardrails, the private, sovereign, and sustainable frameworks, ensuring the agentic AI doesn't, you know, violate data laws or burn unnecessary computing power.

Google Agent 1

I like to look at this like building a modern corporate headquarters. So Gen AI is the architect drafting the brilliant blueprints. I like that analogy. Right. But agentic AI is the autonomous construction crew out there pouring the concrete, adjusting for bad weather, and building the physical structure without needing a manager on site. And protective AI is the heavy-duty security fence, keeping the property safe and making sure the crew uses the right materials.

Google Agent 2

Yeah, and the integration of all three of those is basically non-negotiable for success.

Google Agent 1

Because you can't have the crew working without the fence.

Leaders Versus Laggards By Profit

Google Agent 2

Exactly. And based on how well organizations manage that complete architecture, the report actually segments the corporate world. Right. Yeah, at the very top, making up just 15% of organizations, are the AI leaders. They have a mature, defined strategy and are extracting significant financial value.

Google Agent 1

And then there are the laggards.

Google Agent 2

Right. At the bottom, making up 20%, are the laggards. They lack a cohesive strategy, they are at a novice maturity level, and frankly, they're seeing negative returns on their AI investments.

Google Agent 1

So they're losing money.

Google Agent 2

Yeah. And the remaining 65% are just sort of stuck in a messy, undefined middle ground.

Google Agent 1

The financial gap between the leaders and everyone else is really what caught my attention immediately. I mean, the numbers are wild.

Google Agent 2

The divide is structural and it's severe. According to the data, AI leaders are 2.5 times more likely than their peers to post revenue growth of over 10% in the last fiscal year.

Google Agent 1

That's a huge jump.

Google Agent 2

It is. And beyond just top-line growth, they are a staggering 3.6 times more likely to operate at profit margins of 15% or more.

Google Agent 1

Wow. See, because that profit gap is so massive, we have to ask what the leaders are doing differently on a strategic level to pull that far ahead.

Google Agent 2

Right.

Google Agent 1

You don't get a 3.6 times multiplier on your profit margin just by like buying better software licenses.

Google Agent 2

No, you definitely don't. The fundamental divergence is how they view the technology itself. For leaders, AI strategy is the business strategy.

Google Agent 1

They've completely fused them together.

Google Agent 2

Exactly. They don't just bolt it on. The data shows that 34.5% of leaders are taking the time to rebuild their core business applications from the ground up, with AI embedded deeply into the architecture.

Google Agent 1

Whereas the laggards take the opposite approach.

Google Agent 2

Yeah. Over 40% of them simply take their existing decade-old legacy applications and just bolt an AI API onto the surface.

Google Agent 1

I have to push back on the reality of that approach, though. Rebuilding a core application is essentially ripping out the central nervous system of a company while it's still walking.

Google Agent 2

It sounds terrifying, I know.

Rebuild Core Apps With Discipline

Google Agent 1

It really does. I mean, it is notoriously slow, it carries massive operational risk, and it requires a huge budget. So how do they justify tearing down the foundation of their business while trying to maintain market speed?

Google Agent 2

Well, they justify it through extreme discipline. They don't attempt to rebuild the entire company all at once. The report highlights a framework advocated by NTT DATA's CEO, Abhijit Dubey, which focuses on targeted transformation.

Google Agent 1

What does that look like in practice?

Google Agent 2

Basically, leaders identify one or two specific, really high-value domains and redesign them end-to-end. They locate the exact processes where the economic leverage is highest.

Google Agent 1

So they're being very surgical about it.

Google Agent 2

Very. For example, 85.6% of leaders initially focus their AI integration on back and mid-office workflows.

Google Agent 1

Compared to the front office.

Google Agent 2

Right, where 73.3% focus on the front office.

Google Agent 1

Focusing on the back office makes a ton of sense when you think about risk and measurement. Like if you automate your supply chain invoicing or internal data reconciliation, you are dealing with massive volume and easily measurable time savings.

Google Agent 2

Exactly. The ROI is obvious.

Google Agent 1

Plus, if the AI hallucinates an internal invoice code, a human catches it before it impacts a client. But if it hallucinates in a customer-facing chatbot, you have a PR disaster on your hands.

Back Office Wins And The Flywheel

Google Agent 2

That's entirely true. The back office is the perfect testing ground for what the report calls the flywheel effect.

Google Agent 1

Oh, right, the flywheel. Let's talk about that.

Google Agent 2

So by rebuilding a specific data-heavy back office domain end-to-end, leaders secure a near-instant lift in operational efficiency and cost savings.

Google Agent 1

And then they use those savings.

Google Agent 2

Exactly. That early measurable win proves the business case internally. The millions saved in quarter one are then reinvested into redesigning the next core application. The momentum just builds on itself.

Google Agent 1

Which explains another crazy stat. Nearly seven in ten, I think it was 68.2% of leaders, characterize their current AI investments as very significant.

Google Agent 2

Yeah, they aren't guessing. They are aggressively funding a mechanism that has already proven its return on investment.

C-Suite Tension And AI-Native Infrastructure

Google Agent 1

But even with a flywheel spinning and early wins on the board, securing the budget for very significant ongoing investment has to create obvious friction at the executive level.

Google Agent 2

Oh, it absolutely does.

Google Agent 1

The report captures this really fascinating dynamic in the C-suite. Over 54% of chief software officers are extremely confident about the new revenue streams AI will generate.

Google Agent 2

They're completely sold on it.

Google Agent 1

Right. But when you look at the chief financial officers, only 38.8% share that extreme confidence. The finance department is clearly still playing bad cop and tapping the brakes.

Google Agent 2

Well, the CFOs are looking at the second-order consequences.

Google Agent 1

What do you mean?

Google Agent 2

When you stop bolting on APIs and start embedding intelligent, autonomous agents deep into your core workflows, you immediately hit the physical limits of your corporate infrastructure.

Google Agent 1

Oh, the actual hardware and network.

Google Agent 2

Exactly. You are forced to build what the report terms AI-native architecture. The systems themselves have to be designed to be intelligent and self-correcting to handle the load.

Google Agent 1

And building that infrastructure is incredibly capital intensive.

Google Agent 2

It is.

Google Agent 1

So this highlights the core tension then. Since AI is being embedded so deeply into the core business, you eventually hit a wall of technical debt.

Google Agent 2

What's fascinating here is that leaders clearly understand a foundational rule of enterprise tech. You cannot scale what you cannot govern.

Google Agent 1

Right.

Google Agent 2

When leaders are asked to identify their primary infrastructure bottleneck, it isn't a lack of raw computing power or an inability to source GPUs.

Google Agent 1

Wait, really? It's not GPUs.

Google Agent 2

No, not at all. For nearly 30% of leaders, the top blocker is actually the high maintenance needs of their existing systems.

Google Agent 1

Wow. So they're just bogged down.

Google Agent 2

Yeah, dealing with sprawling technical debt diverts their funds away from innovation.

Google Agent 1

It's like trying to run a 300 mile per hour bullet train on a hundred-year-old wooden tracks. I mean the train's engine is incredibly powerful, but you can't push the throttle because the tracks will literally vibrate apart under the strain.

Google Agent 2

That's a great way to picture it.

Google Agent 1

All your money goes into patching the wooden tracks instead of upgrading the train itself.

Google Agent 2

That is the exact dilemma.

Google Agent 1

Yeah.

Google Agent 2

And upgrading those tracks is becoming infinitely more complex because of a major geopolitical shift outlined in the sources.

Google Agent 1

Okay, let's get into that.

Google Agent 2

We are seeing a hard fork in infrastructure strategy, basically splitting into private AI and sovereign AI.

Google Agent 1

What's the difference?

Google Agent 2

Private AI is a corporate choice. A company builds a closed private network to protect its intellectual property from leaking into public training models and also to optimize their own compute costs. Sovereign AI, however, is a forced geopolitical reality. As international data regulations tighten, national governments are demanding that data processing, storage, and AI computing remain strictly within their physical borders.

Google Agent 1

Wait, wasn't the entire promise of the cloud that it was borderless?

Google Agent 2

Yeah, it used to be.

Google Agent 1

You know, you upload your data and it just flows seamlessly across server farms from Dublin to Virginia to Tokyo. Are we really seeing AI regulations force multinational companies to build walled digital gardens based on geography?

Google Agent 2

They are. I mean, nearly 60% of AI leaders flag cross-geography data privacy or sovereignty as a top governance concern.

Google Agent 1

That sounds like a logistical nightmare.

Google Agent 2

It requires an entirely new, geographically compliant infrastructure. If you are a global bank, your AI models in Europe might not be legally allowed to share certain learned parameters with your models in North America.

Veterans, Augmentation And New Roles

Google Agent 1

Wow. Okay. So now that the secure geographically compliant infrastructure is being built and the strategy is fused, the focus really has to shift to the actual people running these machines.

Google Agent 2

The human element.

Google Agent 1

Exactly. Who is steering the ship and how are the employees reacting to all this? Because you can't just hand a highly complex, geographically restricted agentic AI to a standard IT help desk.

Google Agent 2

No, you definitely can't. And you know, the report pushes back against the common narrative that AI is strictly a replacement mechanism.

Google Agent 1

That's reassuring for anyone listening.

Google Agent 2

It is. AI leaders utilize the technology primarily for human augmentation, not replacement. Interestingly, 28.9% of leaders specifically focus on putting AI tools in the hands of their highly experienced veteran employees.

Google Agent 1

Rather than just assigning it to the tech savvy junior staff.

Google Agent 2

Exactly.

Google Agent 1

Giving it to the veterans is a brilliant risk mitigation strategy. I mean, a 20-year veteran of the supply chain department possesses the deep institutional knowledge to know exactly what a normal supply chain looks like.

Google Agent 2

They have that gut feeling.

Google Agent 1

Right. So when the AI agent inevitably hallucinates or suggests an optimized route that makes zero logistical sense in the real world, the veteran spots the error instantly.

Google Agent 2

Exactly. Whereas a junior employee might just blindly approve the machine's output because it looks correct on a spreadsheet.

Google Agent 1

So true.

Google Agent 2

And the necessity of that human oversight is fundamentally restructuring the workforce of the future. The report actually outlines three new labor categories emerging within leader organizations.

Google Agent 1

Okay, what are the three categories?

Google Agent 2

First are the augmented employees. These are your traditional knowledge workers who use embedded AI for massive leaps in daily productivity.

Google Agent 1

The people using Gen AI every day.

Google Agent 2

Right. Second are the supervisory operators. Because agentic AI acts autonomously, these operators don't do the work themselves. They oversee the agentic systems, handle complex escalations, and ensure compliance.

Google Agent 1

Okay, and the third?

Google Agent 2

Third are the AI native professionals. These are the deep technical specialists, so prompt engineers, model evaluators, the people maintaining the underlying systems.

Google Agent 1

Transitioning a massive workforce into those three distinct buckets requires people to fundamentally change how they view their daily value to the company. I mean, that is an enormous emotional hurdle.

Google Agent 2

It is. And leaders recognize that AI adoption is, at its core, a massive emotional and operational change management program.

Google Agent 1

How so?

Google Agent 2

Well, they treat it proactively. Because of that, 73% of leaders report that their workforce holds a positive, confident, and proactive mindset toward AI. Laggards who fail to manage that cultural shift just don't see those numbers.

Chief AI Officer And Gain-Share Partners

Google Agent 1

Here's where it gets really interesting. Because managing all of this, you know, rebuilding core applications, securing the global infrastructure, and steering the emotional change management of thousands of employees, it requires a level of centralized authority. Right. We are seeing the rapid rise of a new powerhouse in the C-suite, the chief AI officer, or CAIO.

Google Agent 2

The CAIO is really becoming the anchor. A staggering 77.8% of AI leaders now have a dedicated chief AI officer. That's almost 80%. Yeah. And they orchestrate the overarching strategy, they embed the AI safely into daily operations, and they translate the tech to the board.

Google Agent 1

But they aren't doing it alone, right?

Google Agent 2

No, not at all. Nearly half of leaders lean heavily on external partners for bespoke co-innovation. And they do this using outcome-based gain sharing.

Google Agent 1

Meaning the vendor only gets paid when the AI actually delivers the promised financial results.

Google Agent 2

Exactly.

Google Agent 1

So the CAIO is essentially the translator. Like they're the only person in the room who speaks both fluent algorithm and fluent balance sheet.

Google Agent 2

That is the perfect way to describe it.

Google Agent 1

Let's pull all these threads together for everyone listening. When we look at the playbook of an AI leader in late 2025, the picture is incredibly clear.

Google Agent 2

It is.

Google Agent 1

You don't just bolt on an API, you fuse AI directly with your core business strategy. You prioritize specific high-value domains to start a flywheel of investment. You secure your global infrastructure and you empower your veteran humans through centralized governance, like a CAIO.

Google Agent 2

That is the exact playbook.

Google Agent 1

Looking at your own daily reality, it forces a hard evaluation. Are you operating in an environment that acts like a leader, embedding AI into your core workflows, or are you trapped in a laggard environment, just playing with chatbots on the surface and hoping for a different result?

Machine-To-Machine Deals On The Horizon

Google Agent 2

It's a critical question to ask yourself right now.

Google Agent 1

It really is. So what does this all mean? Where does this trajectory take us as the technology continues to compound?

Google Agent 2

Well, there is a really provocative thought here to mull over. If we build on the report's concept of agentic AI, those autonomous doers operating in closed loops, and we combine that with the creation of collaborative ecosystems. Right. It points to a wild horizon. The report focuses heavily on internal optimization. But if the future belongs to autonomous AI agents that can execute and optimize on their own, what happens when they cross the corporate boundary? Oh wow. Imagine the AI agent of one company negotiating a supply chain contract directly with the AI agent of a vendor or a competitor.

Google Agent 1

Just machine to machine.

Google Agent 2

Exactly. Are we preparing our newly minted supervisory operators to manage a workforce where the fastest and most consequential business deals are made entirely machine to machine in milliseconds without a human ever picking up a phone?

Google Agent 1

That completely reframes the scale of what we are building. The strategic playbook is clear for today, but the frontier is moving incredibly fast. Thank you for joining us on this deep dive. Stay curious, keep learning, and we will catch you next time.