Human × Intelligent

Cognitive debt: are AI tools making you worse at thinking?

Madalena Costa Season 2 Episode 20


Duration: 17:23

Cognitive debt is what happens when AI tools do your thinking instead of supporting it and the research is now proving it costs you more than you realise.

In this episode of Human × Intelligent, I walk through what the science actually says about AI tools and critical thinking, share a practical framework for structuring your thinking before you open any tool and give you three concrete practices for keeping your judgment intact.

MIT Media Lab's 2025 study 'Your Brain on ChatGPT' found that people who relied on AI for cognitive tasks showed up to 55% weaker brain connectivity and 83% were unable to recall what they had just produced. Harvard Business School and BCG's study of 758 knowledge workers found that using AI on the wrong type of task makes your output 19% worse and highly skilled professionals couldn't tell which tasks those were. Microsoft Research (CHI 2025) found that higher confidence in AI is directly associated with less critical thinking.

This is not an argument against AI tools. It is a framework for using them without losing your judgment.

What's covered in this episode: 
→ The cognitive debt research (MIT, Harvard/BCG, Microsoft) and what it means for product people 
→ The goal, problem, process framework for structuring your thinking before you open any tool 
→ The three thinking modes: capture, synthesise, decide and which tool belongs in each 
→ Think first, mode check, own your conclusion: 3 daily practices for keeping your thinking sharp 
→ Why the tools will keep changing and the process is what stays with you

The tools will change but the process is yours.

Connect with Madalena:
🌐 humanxintelligent.com
📸 Instagram: @designwithmaddie
📸 Instagram: @humanxintelligent
💼 linkedin.com/in/madalenafigueirasdacosta
💼 linkedin.com/company/human-x-intelligent

Support the show

🎙️ Human × Intelligent - a podcast about trust, transparency and human agency in AI systems, for product designers, PMs and founders building with AI. 

🔔 Subscribe so you don't miss the next episode 

🌐 humanxintelligent.com 

Hosted by Madalena Costa · Senior product designer and AI systems strategist 

Madalena:

Hey, welcome back to Human × Intelligent. I am Madalena, and I want to start today with something that I believe you will recognize, at least in some way. Have you ever been in a situation where you had to make a product decision very quickly? There was a deadline, and you had multiple tools open. Let's say one was for research, another one was for brainstorming, and another one was for meeting notes. This happened to me: I was moving between tabs, generating outputs, getting summaries, and I felt quite productive. But I had absolutely no idea what I actually thought.

And that is the problem I want to talk to you about today. Not which AI is best for product people. I want to talk about the problem underneath all of that, which is that most of us have built a very impressive system for producing output and a very, very weak system for actually thinking. And here's the part that stops me: the research is starting to confirm what many of us are quietly feeling. The tools we are using to think faster may be making our thinking much, much worse.

So today I want to give you something that doesn't go out of date when the next tool drops. A process, a way of structuring your thinking, that works regardless of what's in your stack, and that protects the thing no tool can replace, which is your judgment. Let's get into it.

I think it's very important to start here, because it is the base of what we do and how we behave. So let's do science for a little bit, because this is actually a way of reframing what we will talk about today. In 2025, the MIT Media Lab published a study called 'Your Brain on ChatGPT'. Researchers divided 54 participants into three groups over four months. One group wrote essays using AI, one used search engines, and one used no tool at all.
They measured brain activity using EEG during every session, so they could monitor how the brain was working and reacting in these situations. And the results were very striking, I would say, because participants who relied on the LLMs showed what the researchers called cognitive debt: the accumulation of long-term cognitive costs from over-reliance on AI, including diminished critical thinking, reduced creativity, and shallower information processing. The brain connectivity of AI users was up to 55% weaker compared to the group who wrote without any tools. And 83% of the LLM users were unable to quote from essays they had just written. So they basically lost ownership of their own thinking.

Now, I want to be careful here, because this study is a preprint and it has real limitations: a small sample size and a very specific context. The researchers themselves asked people to avoid using alarmist language about it. But here's what I think is important. It's not about whether AI makes you dumb. It's about what happens when you use AI instead of your own reasoning, rather than alongside it, like a co-worker, like what we're doing here on Human × Intelligent.

There's a larger, more rigorous study I want to share with you that backs this up from a different angle. In 2023, Harvard Business School and Boston Consulting Group ran a field experiment with around 758 knowledge workers, so real consultants, real tasks. They found that AI assistance improves performance for some tasks but actually worsens it for others, even within the same workflow at a similar level of difficulty. They called this the jagged technological frontier. Consultants using AI for tasks outside their frontier were around 19% less likely to produce correct solutions than those without AI access at all.
The problem with this is that it was not obvious to highly skilled knowledge workers which of their everyday tasks could be done well by AI and which required a different approach. And that's the core issue here, because most of us are using AI on tasks it's not suited for without realizing it.

Another study, from Microsoft Research in 2025, found something that connects directly to how we use these tools day to day: higher confidence in AI is associated with less critical thinking, while higher self-confidence is associated with more. So the more you trust a tool, the less you interrogate its output. And the less you interrogate it, the more you outsource the thinking that should be staying with you.

So before we talk about any specific tool, we need to talk about process, a way of structuring your thinking. Because without this, you can't even see where your thinking stops and a tool's output begins. So let's do this. Here's what I found working with product teams building AI products: the overwhelm people are feeling isn't about too many tools. There are a lot of studies coming out about how many tools you use, how dumb or not dumb they make you feel, and how much they help versus how much they don't. But let's focus today on this, because usually it's about missing process. When you don't have a clear process, every tool looks like the solution. And when you let the tool define the process, when the prompt becomes the problem statement and the output becomes the goal, you have already lost the thread of your own thinking.

So before you open anything, you need three things: the goal, the problem, and the process. Let's talk a little bit about each concept and what they mean. Let's start with the goal, because this is the outcome you're actually trying to reach, not the task. The actual reason the task matters.
For example, 'I need to research competitors' is not the goal. The goal is 'I need to decide whether to build this feature in Q3'. The distinction matters because it changes how you evaluate everything the tool gives you. When you know the goal, you can tell whether an output is moving toward it or just generating activity.

And that leads straight into the problem, which is the specific obstacle between you and the goal right now. This is the part people skip most. Are you missing information? Drowning in information you can't make sense of? Stuck between two equally valid options? Suspicious your own reasoning might be biased? Each of these is a different problem, and each requires a different kind of support. If you name the problem before you open anything, it will work, because you actually know what you're doing. So you have the goal, and you have the problem you're trying to solve.

And then you have the process, which is the sequence of thinking moves that gets you from the problem to the goal. And here I do have a structure that maps onto almost every knowledge work situation. It has three modes, and you almost always need to move through all three in order.

The first one is capture. Capture is getting raw material out of your head and into a form you can work with. The job here is fidelity: not to interpret yet, just to collect.

The second one is synthesis: finding the pattern across everything you've captured. This is actually the hardest mode, because it's the one we most often skip. It's where you go from 'I have a lot of information' to 'I understand what it means'. And this is the part we cannot rush, because the quality of your decision is almost entirely determined by the quality of your synthesis.
And the third one, the last one, is decide: committing to a direction and stress-testing it before you act on it. And this means, specifically, actively looking for the reasons you might be wrong. This is where most people use artificial intelligence most badly, because they use it to confirm what they already think rather than challenge it, and you need to challenge it. But we'll come back to that.

So the three are goal, problem, and process. You capture, you synthesize, and you decide. Now, and only now, do the tools become relevant.

Before I name anything specific, I want to say this clearly: the tools I use right now will change. I'm 100% sure. Some will get better, some will disappear, some will get absorbed into the things you already use because they will be bought. But what won't change is the process, your structured thinking. So what I'm really describing here is the job to be done in each mode, and what's currently doing that job well for me.

For capture, the job is to record accurately and fast, without pulling you out of the moment. Right now I use Granola for meetings and for recordings. It writes quietly in the background and gives me a structured summary after, so I can stay present in the conversation instead of transcribing, which is something that I love because I'm very curious and I love to ask questions. But any tool doing this job well should support collection, not interpretation. If it's pushing you to summarize or decide before you've even finished gathering, it's the wrong tool for this mode.

The next one is synthesis, where the job is to find the pattern across all your material. Not across the internet, not across everything AI was trained on, but across the specific documents, notes, and data you've actually collected.
The one I use for this right now is NotebookLM. It works only with what you give it. You upload your research, your interview transcripts, your brief, and you can ask things like 'What are the contradictions here?' or 'What is the user data telling us that the brief doesn't address?' The tool doing this job will change, of course, like we were saying. The job is making sense of your own material before you decide anything.

Here's why this mode matters so much. We can go back to the Microsoft Research study, which found that AI changes the nature of critical thinking for knowledge workers, shifting it toward information verification and response integration rather than original analysis. Synthesis is where the original analysis lives. So don't skip it, and don't outsource it entirely, because it is a very important step.

The next one is deciding, and here the job splits into two things. First, scanning the landscape: what's actually true out there? What are you missing? A good one for this is Perplexity. It does this very well because it cites its sources, so you can follow the thread rather than just accepting something. And this matters enormously, given what the research shows about AI and confirmation bias.

The second, and this is the one I want to stay on for a moment, is stress-testing your own reasoning. This is the most misused mode of all. Most people use AI in the decision phase to confirm what they've already concluded. They've made the decision, consciously or not, and they use the tool to validate it. Higher confidence in AI is associated with less critical thinking, like I said above. The more you trust the output, the less you push back on it. The fix needs to be very intentional, because when you're in deciding mode, your prompt is not 'Is this a good idea?'
Your prompt is 'What are the three strongest arguments against this?' Or 'What am I assuming that might not be true?' Or 'Steelman the opposition.' You are using the tool to find the holes in your thinking before you commit. A great one is Claude; Claude is what I use for this right now. But because the job is stress-testing, any tool that can genuinely challenge your reasoning will serve this mode.

And the Harvard/BCG research gives you a practical way to think about when to trust AI and when to be more careful. Understanding which of your tasks are within AI's capability and which aren't is the key to using it well. Creativity, synthesis, nuanced judgment, decisions with high stakes and ambiguity: these are often outside the frontier. We use these tools to support our thinking; they are not supposed to replace it.

Another job is documenting decisions. We now have all of this: how to structure our thinking, how to structure our work, and which tools to use in each part of it. But when it comes to documenting decisions, most of us skip it, and we pay for it. Because when a decision gets relitigated, what do we do? The job here is to turn your messy reasoning into a clear record: what we decided, why, what we considered and rejected, and when we will revisit it. A good one for this, and one I like to use, is Notion AI, because it does this very well right now. This tool will change too, but the practice of documenting decisions won't.

And I want to close with something practical, something we can actually do. Three concrete things to keep your thinking intact while using these tools every day. The first is what I like to call the think-first rule. Before you open an AI tool on a problem, spend five minutes with your own thoughts first.
Write down what you actually think. I always write in my notebook. What's your hypothesis? What's your instinct? This is basically to see what you already know versus what you don't. And this is very important, because it gives you something to compare the tool's output against. The MIT research found that people who started from their own reasoning before introducing AI were far more able to use it productively and critically. The Brain-to-LLM group in that study, the ones who built their own thinking first, demonstrated higher memory recall and re-engagement of widespread prefrontal areas when they did use AI. Starting from your own thinking gives you an anchor. Without it, you're just following the tool.

The second is the mode check. Before you prompt, ask yourself: am I capturing, synthesizing, or deciding? Because as the Harvard research makes clear, using AI on the wrong type of task doesn't just fail to help. It actually makes your output worse. A decision task done with a synthesis tool, or a synthesis task done with a confirmation-seeking prompt, these don't just produce bad outputs, they produce confidently wrong outputs. And this is the key: confidently wrong outputs.

The third one is what I like to call 'own your conclusion'. After an AI-assisted thinking session, before you act on anything, ask yourself: can I explain this in my own words, without referencing the tools? Not to be anti-AI, which you know I'm not, but because if you can't do this, you haven't actually decided anything. You've deferred. And this happens to the best of us. But we need to take it into consideration to do a better job, now and in the future. The research consistently shows a significant negative correlation between frequent AI tool usage and critical thinking, mediated by cognitive offloading. There's a lot happening and we don't want to feel burned out, right? And offloading isn't the problem, I would say.
I don't think it is, because it helps us do a faster job, a better job. Offloading the wrong thing is the problem. We need to keep the judgment, we need to keep the conclusion, and we need to keep the ownership of our own thinking.

So, to bring it back to where we started: the problem isn't too many tools. Or at least not right now. There are studies being done on this, so that might change, but I don't think it will, because the real problem is using them without a defined process, and without staying present as the person who is actually thinking. Before you open anything, remember: define your goal, name your problem, identify which mode you are in. Capture, synthesize, decide. Think first, check which mode you're in, and own your conclusion. And don't forget to hold the tools lightly. The ones I mentioned today, Granola, NotebookLM, Perplexity, Claude, and Notion AI, are what's working for me right now. In a year, or even less, some of them will have changed and some of them will be replaced. The process is what travels with you. That's the shift, and once you make it, the overwhelm starts to lift. Not because there are fewer tools, but because you know exactly what you're doing with them, and you know what you're keeping for yourself.

If this landed with you, share it with one person on your team who's drowning in tabs right now. That's the best thing you can do with an episode like this. I am Madalena, this is Human × Intelligent. See you in the next one.