The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation.
He has authored four influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When Kieran is not chairing international conferences, serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹Top 25 Thought Leader Companies on Generative AI 2025
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 people to follow in data in 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros you NEED to follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
Relying On AI Can Weaken Your Critical Thinking
How does frequent AI usage impact our critical thinking skills? A polished answer in three seconds feels like magic until you realise it might be dulling the very skills that keep you sharp.
In this episode, Google NotebookLM agents dive into new research from SBS Swiss Business School that links frequent AI use to a measurable drop in critical thinking, and unpack the mechanism driving it: cognitive offloading that skips the friction where learning happens.
The twist that surprised us both is who is most at risk and why authority bias makes smooth interfaces look like truth.
At A Glance / TLDR:
- key findings from a 666-participant SBS Swiss Business School study
- how cognitive offloading moves from storage to judgment
- why authority bias and polish increase AI trust
- why younger digital natives are more vulnerable than older cohorts
- education as a protective buffer measured by the Halpern assessment
- productive friction for AI UI design and human-in-the-loop norms
- assessment redesign that grades thinking process, not just product
- practical habits to keep the judgment phase with the human
We walk through the study’s methods and findings in plain language, from ANOVA-backed evidence to interviews that map how users hand over the “mental driving.” Younger digital natives show higher AI trust and dependence, while older cohorts carry a built-in scepticism forged in clunky tech eras. Then comes the good news: education functions as a cognitive shield. Using the Halpern Critical Thinking Assessment as a lens, we explore how training in reasoning, hypothesis testing, and probabilistic thinking lets people treat AI as raw material rather than an oracle.
From there, we get practical. For technologists, we argue for productive friction: interfaces that surface contradictions, show uncertainty, and prompt verification. For policymakers, we outline human-in-the-loop requirements where it matters and better guardrails for developmental settings. For educators, we propose assessment redesign that grades the thinking process—version histories, prompt audits, oral defences—so students learn to critique AI rather than copy it. Finally, we share daily habits to keep your edge: treat the model like a fast intern, verify sources, ask for counterarguments, and never surrender the judgment phase.
If you care about staying sharp in an AI-saturated world, this conversation offers evidence, frameworks, and tools you can use today.
Subscribe, share with a colleague who leans on AI, and leave a review telling us your rule for when to trust the machine.
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
The Big Question About AI And Thinking
Google Agent 2: Welcome back to the deep dive. We are looking into something today that you probably interact with every single day.
Google Agent 1: Oh, definitely.
Google Agent 2: Right. In fact, if you're anything like me, you might have already used it maybe three or four times before even finishing your morning coffee.
Google Agent 1: Yeah, I know I have.
Google Agent 2: We are talking about artificial intelligence tools. Yeah. And we aren't just looking at how these tools help you write an email faster or generate some code.
Google Agent 1: Or summarize those really dense hour-long meetings.
Google Agent 2: Exactly. We are diving into a question that is honestly a little bit unsettling. Is relying on this incredible technology secretly changing the very architecture of how your brain works?
Google Agent 1: Are we actively outsourcing our own ability to think? Yes. It is a profound tension we're all grappling with right now. I mean, you use a large language model, you type in a prompt, and you feel that immediate rush of cognitive relief. Oh, it's the best feeling. Right. Because you just saved yourself two hours of reading and synthesizing. That convenience is absolutely undeniable. But the core of our exploration today is the invisible price tag attached to that convenience.
Google Agent 2: The hidden costs.
Google Agent 1: Exactly. We need to examine the hidden cognitive costs that come with letting a machine do all that heavy intellectual lifting.
Google Agent 2: So today's mission hinges on some really fascinating new research out of the Center for Strategic Corporate Foresight and Sustainability. A researcher there, Michael Gerlich, just published a comprehensive paper on this exact phenomenon. The study is out of the SBS Swiss Business School in Kloten, Zurich.
Google Agent 1: It just dropped.
Google Agent 2: Yeah, it just dropped in the 2025 volume of the academic journal Societies. That's volume 15, issue one, if you want to track down the primary text yourself.
Google Agent 1: And it is highly recommended reading.
Google Agent 2: For sure. We are going to use Gerlich's findings to figure out if relying on artificial intelligence is actually eroding our ability to think for ourselves. And critically, what we can do to protect our cognitive edge.
Inside The Swiss Study And Methods
Google Agent 1: Gerlich's work here is vital because it moves the conversation past, you know, just philosophical speculation. It grounds it in rigorous data. Right. He isn't just asking if AI makes us lazy. He's actually quantifying the impact of these tools on our fundamental reasoning skills.
Google Agent 2: Okay, let's unpack this. Looking at the methodology, Gerlich didn't just run a quick online poll where people click a few buttons about how they feel.
Google Agent 1: No, not at all.
Google Agent 2: We are looking at a robust sample size of exactly 666 participants. And these participants were intentionally drawn from highly diverse age groups and educational backgrounds.
Google Agent 1: Which gives us a wide societal lens on the issue.
Google Agent 2: He also utilized a mixed-methods approach, capturing both the hard statistical data and the nuanced human experience behind it.
Google Agent 1: And the strength of that mixed-methods approach really cannot be overstated for a study like this, especially once Gerlich ran the ANOVA models across those 666 participants.
Google Agent 2: And real quick, for anyone who hasn't taken a stats class in a while, what is an ANOVA model?
Google Agent 1: Ah, right. It stands for analysis of variance. Simply put, it's a statistical tool that lets researchers compare the averages of different groups to see if the differences between them are actually significant or if they're just random chance.
Google Agent 2: Got it. So it proves the findings are solid.
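The ANOVA idea the hosts describe can be sketched in a few lines of plain Python. Everything below is illustrative: the three usage groups and their scores are invented for the example, not data from the Gerlich study.

```python
# One-way ANOVA by hand: compare variation BETWEEN group means
# to variation WITHIN groups. A large F statistic means the group
# differences are unlikely to be random chance.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    group_means = [sum(g) / len(g) for g in groups]
    # Variation explained by group membership
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Residual variation inside each group
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical critical-thinking scores by AI-usage level (made up)
low_use = [78, 82, 75, 80, 77, 84, 79, 81]
mid_use = [70, 68, 74, 71, 69, 73, 72, 70]
high_use = [61, 58, 64, 60, 63, 59, 62, 57]

f_stat = one_way_anova_f(low_use, mid_use, high_use)
print(f"F = {f_stat:.1f}")
```

With toy groups whose means clearly differ, F comes out large; two identical groups would give F = 0, meaning no between-group signal beyond chance.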
Google Agent 1: Exactly. And the ANOVA results here showed we weren't just looking at a minor, negligible dip in cognitive scores for heavy AI users. We are talking about a statistically massive variance between the groups. Wow. The quantitative correlation analysis laid it bare. There is a definitive, significant negative correlation between frequent AI tool usage and critical thinking abilities.
Google Agent 2: So simply put, the more you lean on the machine, the lower you score on critical thought.
Google Agent 1: That is the primary finding, yes.
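The "significant negative correlation" can likewise be made concrete with a toy Pearson correlation, again on invented numbers rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up illustration: as weekly AI-tool hours rise,
# critical-thinking scores fall
usage_hours = [1, 2, 3, 4, 5, 6, 7, 8]
ct_scores = [85, 82, 80, 74, 73, 68, 66, 60]

r = pearson_r(usage_hours, ct_scores)
print(f"r = {r:.2f}")  # strongly negative on this toy data
```

An r near -1 on data like this is what "the more you lean on the machine, the lower you score" looks like numerically; the study's actual coefficient is in the paper itself.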
Google Agent 2: And the thematic analysis of the qualitative interviews Gerlich conducted, the actual conversations with participants, revealed the precise mechanism driving this decline. The paper points to a phenomenon called cognitive offloading.
Google Agent 1: Which is a crucial concept to understand.
Google Agent 2: To ground this in an everyday experience, think about how we use GPS navigation apps. Fifteen years ago, you had to build a mental map of your city.
Google Agent 1: You had to actually know where north was.
Google Agent 2: Right. You actively used your brain to track spatial relationships. Today, you plug in the address, a voice tells you when to turn, and your brain entirely checks out. You offload the task to the machine.
Google Agent 1: It's a great analogy, but the GPS analogy works mostly on a surface level. Offloading critical thinking is actually much more insidious.
Google Agent 2: Because if a GPS app crashes, you just get lost.
Google Agent 1: Or you pull over and ask for directions. It's a temporary inconvenience. But if your critical thinking atrophies, you lose your fundamental ability to evaluate reality itself.
Google Agent 2: That is terrifying.
Google Agent 1: It is. What Gerlich's interviews highlight is that we are treating AI tools as a GPS for our intellect.
Google Agent 2: Oh, that's a brilliant way to phrase it.
Google Agent 1: Instead of wrestling with a complex problem, identifying the variables, synthesizing conflicting information, and structuring a logical argument, users are just handing the destination to the AI. They're letting the algorithm do the mental driving.
Google Agent 2: We are essentially skipping the friction of the thought process.
Google Agent 1: Yes.
Google Agent 2: And that friction is where the learning actually happens.
Google Agent 1: Precisely. Now historically, human beings have always engaged in cognitive offloading.
Google Agent 2: Sure, writing things down.
Cognitive Offloading Explained
Google Agent 1: Exactly. Socrates famously worried that the invention of written language would destroy human memory. The advent of the calculator offloaded basic arithmetic.
Google Agent 2: Right. I don't need to do long division in my head.
Google Agent 1: But those tools offloaded the storage of information or the execution of rote mechanics. What Gerlich is pointing out is that AI is the first tool where we are offloading the processing of information.
Google Agent 2: We are outsourcing the synthesis and the judgment.
Google Agent 1: And the thematic analysis of the transcripts showed that this lack of use leads to a literal atrophy of our mental muscles.
Google Agent 2: Here's where it gets really interesting. I went into this data assuming older generations would struggle the most with this transition.
Google Agent 1: That's a common assumption.
Google Agent 2: Yeah, I figured they might be misusing the tools or failing to integrate them. But Gerlich's demographic numbers flipped that assumption entirely on its head.
Google Agent 1: The data paints a very different picture.
Google Agent 2: The younger participants in the study exhibited a radically higher dependence on AI tools. And as a direct consequence of that dependence, those younger cohorts scored significantly lower on critical thinking assessments compared to the older participants.
Google Agent 1: If we connect this to the bigger picture, it comes down to a concept the paper identifies as AI trust.
Google Agent 2: Which is heavily tied to digital dependence.
Google Agent 1: Right. Older participants generally grew up in an environment where they had to develop fundamental critical thinking skills without seamless digital assistance. They have an established baseline of cognitive resilience.
Google Agent 2: Because they had to do it the hard way.
Google Agent 1: Exactly. When they use an AI tool, they possess a degree of built-in friction. They remember when technology was clunky, when dial-up failed, when early software hallucinated wildly.
Google Agent 2: Oh man, the dial-up tones.
Google Agent 1: So their baseline AI trust is naturally lower. They treat the output with skepticism.
Google Agent 2: They view the AI as a slightly unreliable assistant whose work requires proofreading rather than an all-knowing oracle.
Google Agent 1: That skepticism is a cognitive shield. Now contrast that with younger participants, many of whom are digital natives.
Google Agent 2: They've never known a world without smart devices.
Google Agent 1: They have grown up interacting with technology that is incredibly seamless, highly responsive, and functionally authoritative. Their baseline level of digital dependence is much higher.
Google Agent 2: So they just trust it by default.
Google Agent 1: We have to consider the psychological mechanism of authority bias here. When an AI generates a beautifully formatted, highly articulate, grammatically flawless answer in three seconds, it mimics the aesthetics of absolute truth.
Google Agent 2: It looks right, so it must be right.
Google Agent 1: Exactly. Younger users, conditioned to trust seamless interfaces, are far more likely to accept that output without applying a layer of critical scrutiny. The interface completely disarms their natural skepticism.
Google Agent 2: That paints a vivid and honestly somewhat alarming picture for you as a listener, especially if you manage a team of younger employees or if you are a younger professional trying to navigate a highly competitive landscape.
Google Agent 1: It's a serious vulnerability.
Google Agent 2: If your default mode is high AI trust, you might be unintentionally sacrificing the very analytical skills that make you valuable. But there is a massive plot twist in Gerlich's data.
Google Agent 1: There is a silver lining.
Google Agent 2: The research uncovered a highly optimistic factor that changes this equation entirely: education. The study found that higher educational attainment was associated with better critical thinking skills, entirely independent of how frequently the person used AI.
Google Agent 1: This is perhaps the most crucial nuance in the entire Swiss Business School paper.
Google Agent 2: Break that down for us.
Why Younger Users Trust AI More
Google Agent 1: To measure these cognitive developments without bias, the researchers utilized the Halpern Critical Thinking Assessment.
Google Agent 2: Which isn't just a multiple-choice history test.
Google Agent 1: No, this isn't a standardized test that asks you to regurgitate memorized historical dates or mathematical formulas. The Halpern assessment evaluates the structural architecture of your cognition.
Google Agent 2: Meaning what, exactly?
Google Agent 1: It tests verbal reasoning, the ability to analyze complex arguments, how you test hypotheses, and your grasp of likelihood, uncertainty, and cognitive biases.
Google Agent 2: So it tests how you think, not what you know.
Google Agent 1: Beautifully said. And the data shows that individuals with higher educational backgrounds performed remarkably well on this assessment, even if their daily AI usage was extremely high.
Google Agent 2: So the AI didn't dull their minds.
Google Agent 1: Because education acts as a protective buffer for the human mind. Higher education, at its functional best, trains you in the rigorous methodologies of questioning information.
Google Agent 2: It teaches you to be a skeptic.
Google Agent 1: It teaches you how to construct a valid argument, how to spot logical fallacies, and how to synthesize entirely conflicting viewpoints into a cohesive thesis.
Google Agent 2: So when a person with that strong educational foundation sits down with a large language model, they don't just passively consume the output.
Google Agent 1: Right. They treat it as raw material.
Google Agent 2: They might use the AI to gather data, outline a chaotic set of notes, or draft an initial brainstorm. But then their trained critical thinking takes over the steering wheel. They edit the logic, they question the source material the AI is pulling from, they refine the tone.
Google Agent 1: The education ensures the user remains the master of the cognitive process. They use the tool to accelerate their workflow, but they do not surrender the judgment phase to the algorithm.
Google Agent 2: The buffer exists because their brain has already been wired to demand evidence and internal consistency.
Google Agent 1: Exactly. The neural pathways for critical thought are already deeply entrenched.
Google Agent 2: So what does this all mean? If you are listening to this on your commute or at your desk, you want to leverage AI to be as efficient as possible, but you clearly don't want to degrade your own intellectual capacity. Where do we go from here?
Google Agent 1: What's fascinating here is that Gerlich's proposed solution is not prohibition.
Google Agent 2: We aren't smashing the looms.
Google Agent 1: Right. The paper explicitly states that banning artificial intelligence in workplaces or academic institutions is both impossible and counterproductive. We cannot put the technological genie back in the bottle.
Google Agent 2: It's here to stay.
Google Agent 1: Instead, the focus must shift entirely towards systemic strategies that promote critical engagement with AI technologies.
Google Agent 2: Gerlich explicitly targets this research at three key demographics who have the power to shape this transition: technologists, policymakers, and educators. He provides highly practical recommendations for mitigating these adverse cognitive effects across society. Let's look at the technologists first, the people actually building and designing the user interfaces for these tools.
Google Agent 1: For technologists, the mandate is about re-engineering the user experience to include what we might call productive friction.
Google Agent 2: Productive friction. I like that.
Google Agent 1: Right now, the UI of most AI tools is designed to be totally frictionless. You ask a question, you get an authoritative answer.
Google Agent 2: Seamless.
Education As A Cognitive Shield
Google Agent 1: Gerlich suggests that developers need to design systems that occasionally prompt the user to verify facts. Or perhaps present multiple conflicting perspectives on a complex prompt rather than a single synthesized truth.
Google Agent 2: So the AI pushes back a little.
Google Agent 1: Yes. The AI should be designed to foster dialogue and require user input on the logic, actively pushing back against the user's authority bias.
Google Agent 2: It reminds me of software that asks, "Are you sure you want to delete this permanently?" before wiping a hard drive. We need AI that says, "Here is a synthesis, but the underlying data contains significant contradictions. How would you like to resolve them?"
Google Agent 1: It forces the user to stay engaged.
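As a thought experiment, here is a minimal sketch of what "productive friction" could look like at the code level: instead of returning a bare answer, an assistant wrapper attaches verification prompts that hand the judgment phase back to the user. Every name here (FrictionResponse, add_friction, the thresholds) is hypothetical, not an API from the paper or any product.

```python
from dataclasses import dataclass, field

@dataclass
class FrictionResponse:
    """An AI answer bundled with prompts that nudge the user to verify it."""
    answer: str
    verification_prompts: list = field(default_factory=list)

def add_friction(answer: str, sources_agree: bool,
                 confidence: float) -> FrictionResponse:
    prompts = []
    if confidence < 0.8:
        # Surface uncertainty instead of hiding it behind polished prose
        prompts.append("Model confidence is low. Which claims would you verify first?")
    if not sources_agree:
        # Present the contradiction rather than a single synthesized "truth"
        prompts.append("The underlying sources conflict. How would you resolve the contradiction?")
    # Always invite a counterargument before the user accepts the output
    prompts.append("Ask for the strongest counterargument before accepting this answer.")
    return FrictionResponse(answer=answer, verification_prompts=prompts)

resp = add_friction("Draft synthesis of the report...",
                    sources_agree=False, confidence=0.55)
for p in resp.verification_prompts:
    print("-", p)
```

The design choice is the point: the wrapper never blocks the answer, it just makes skipping the verification step a conscious act rather than the default.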
Google Agent 2: Exactly. Moving to policymakers, the paper emphasizes the need for frameworks that govern AI deployment, particularly in developmental environments like schools and junior workplace roles.
Google Agent 1: Setting the rules of the road.
Google Agent 2: Policies shouldn't just dictate what data the AI can scrape. They need to establish guidelines on human-in-the-loop requirements for critical decision-making processes.
Google Agent 1: Which flows directly into Gerlich's recommendations for educators, which might be the most challenging shift of all. If higher education is the protective buffer, the educational system itself has to adapt immediately.
Google Agent 2: It has to evolve.
Google Agent 1: We can no longer just assign a five-page essay on the French Revolution and grade the final printed document, because an AI can generate a B-plus version of that product in seconds. The paradigm of education has to shift from grading the final product to grading the actual process of thinking.
Google Agent 2: Show your work, essentially.
Google Agent 1: Yes, but on a much deeper level. Educators need to integrate AI into the curriculum not as a shortcut, but as an object of critique. Teachers might generate an AI essay in front of the class and spend the entire seminar having students dismantle its logical fallacies, identify its biases, and trace its hallucinatory citations.
Google Agent 2: It requires a return to the Socratic method: oral defenses of written work, having students submit the version history of their thought process, showing how they prompted the AI, how they corrected the AI, and where their original synthesis elevated the machine's baseline draft.
Google Agent 1: Fostering this level of critical engagement in an AI-driven world is not a theoretical academic concern to be casually debated. According to this data, it is an urgent necessity for the preservation of human intellect.
Google Agent 2: We have to teach people how to outthink the tools they are using.
Google Agent 1: We do.
Google Agent 2: We started this deep dive looking at a very real, very modern tension: the balance between incredible, life-changing efficiency and the silent threat of cognitive offloading. Through Michael Gerlich's rigorous data on those 666 participants, the reality is undeniable.
Google Agent 1: It's right there in the numbers.
Google Agent 2: Leaning heavily on AI without engaging our own analytical minds leads directly to a decline in our critical thinking abilities. The mental muscle atrophies when we let the algorithm do the heavy lifting.
Google Agent 1: We must recognize that younger individuals, who possess higher digital dependence and a more seamless relationship with technology, are currently the most vulnerable to this decline. But the vital takeaway is that educational foundations, the rigorous training in how to evaluate information, serve as our most reliable defense.
Google Agent 2: It is the ultimate cognitive shield.
Google Agent 1: It really is. This raises an important question for how we operate daily. My advice for the listener, based on this research, would be to actively monitor your own AI trust level starting today.
Google Agent 2: Be aware of it.
Google Agent 1: The next time you open a large language model to solve a problem, consciously visualize the tool not as an omniscient oracle, but as a very fast, very eager intern.
Google Agent 2: The eager intern. I love that framework.
Google Agent 1: The intern can pull the raw data, they can format the spreadsheet, they can draft the initial memo, but you, as the expert, must rigorously review, question, and finalize the logic. Do not surrender the judgment phase.
Design For Productive Friction
Google Agent 2: As we wrap up this deep dive, I want to leave you with one final lingering thought to ponder as you navigate your digitally assisted life. Yeah. It builds directly on the most optimistic finding in Gerlich's data: the fact that higher education acts as our protective buffer against the adverse effects of cognitive offloading.
Google Agent 1: The shield.
Google Agent 2: We know that rigorous education teaches us the process of questioning, analyzing, and synthesizing, the very skills that protect our minds from atrophy. But consider the current reality of our academic institutions. What happens when students increasingly use these exact same AI tools to bypass the friction of their coursework?
Google Agent 1: It's a profound paradox.
Google Agent 2: They use AI to write their essays, solve their complex equations, and generate their research summaries. If the ultimate shield against AI dependence is a strong, rigorous education, but we are using AI to skip the hard, frustrating work of actually acquiring that education...
Google Agent 1: Are we actively outsourcing the very process that was supposed to be our cognitive defense?
Google Agent 2: Exactly. Something to think about the next time you or someone you know asks an algorithm to summarize a difficult book instead of actually reading it.
Google Agent 1: Very true.
Google Agent 2: Thank you for joining us for this deep dive. Keep questioning the seamless answers, keep engaging the friction of critical thought, and we will catch you next time.