The Deepdive
Join Allen and Ida as they dive deep into the world of tech, unpacking the latest trends, innovations, and disruptions in an engaging, thought-provoking conversation. Whether you’re a tech enthusiast or just curious about how technology shapes our world, The Deepdive is your go-to podcast for insightful analysis and passionate discussion.
Tune in for fresh perspectives, dynamic debates, and the tech talk you didn’t know you needed!
AI Brain Fry: When Bad Management Meets GenAI
Your company didn’t hit an “AI limit.” It hit a human limit. We walk through the real-world generative AI workplace: sales teams quietly building rogue features, HR teams dealing with a new kind of cognitive exhaustion, and executives sending polished messages that sound empathetic but create distance from reality. The big twist is that the AI tools are often working exactly as designed, and that’s the problem. They amplify whatever leadership system they get plugged into.
We dig into research on AI productivity and why so many gains vanish into rework, editing, and verification. Then we unpack Boston Consulting Group’s term “AI brain fry,” a measurable cognitive overload state tied to decision fatigue and major mistakes, hitting hardest in text-heavy functions like marketing and HR. If you’ve been stuck in a loop of prompting, checking, and re-prompting, you’ll recognize the pattern instantly.
From there, we zoom out to leadership: the taxes of bad leadership, the trust tax that turns curiosity into threats, the alignment tax that fuels vibe coding, and the product slop that appears when teams skip discovery because AI makes delivery feel instant. We also confront the collapse of middle management, the loss of the translation layer, and what disasters like Zillow’s algorithmic overreach reveal about context and accountability. Finally, we explore a hopeful counterintuitive idea: AI as executive coach, “algorithmic humility,” and why taste and judgment may become the most valuable professional skills in the AI era. If this made you rethink how generative AI should be deployed, subscribe, share with a leader on your team, and leave a review. What part of AI adoption is causing the most friction where you work?
Leave your thoughts in the comments and subscribe for more tech updates and reviews.
The AI Utopia Meets Reality
AllanSo picture this. You are uh walking through a modern corporate office right now.
IdaOkay, I'm picturing it.
AllanAnd on one floor, the sales team is secretly using generative AI to build their own software features entirely behind the engineering team's back.
IdaWhich is just a nightmare waiting to happen.
AllanRight. A total nightmare. And then you walk down the hall, and the HR department is just staring blankly at their screens, suffering from this completely new, like mathematically measurable form of cognitive exhaustion.
IdaOh, absolutely.
AllanAnd upstairs, the CEO is sending out these beautifully written, highly empathetic, just deeply moving emails to the whole company.
IdaLet me guess.
AllanYep. Generated entirely by a robot.
IdaOf course they are.
AllanI mean, uh, we were promised this absolute AI utopia, right? A frictionless world of hyperproductivity where nobody does grunt work ever again.
IdaThat was the pitch. Yeah.
AllanBut instead, it seems like we're getting what researchers are now calling AI brain fry and just a massive system-wide amplification of bad management.
IdaIt's the grand paradox we really find ourselves in right now. I mean, we treated AI like it was this magical solution to human problems, but it's not. It's really just an amplifier.
AllanAn amplifier, really?
IdaIt's like think of it like plugging a high voltage wire into a crumbling circuit board.
AllanWell, that's a good way to put it.
IdaRight. It's not the electricity's fault that the house is catching on fire. The wiring was already shot. The technology is actually working perfectly, but it's exposing the fact that our human organizational structures are, frankly, incredibly fragile.
AllanThat is such a fascinating way to look at it. So welcome to the deep dive. Today we are unpacking a huge stack of recent research from BCG to the California Management Review to explore why the AI revolution is hitting this massive human bottleneck.
IdaIt's a huge bottleneck.
AllanIt really is. So we're going to dissect the hilarious, sometimes terrifying taxes of bad leadership, and we'll discover why the most valuable professional skill in the future might simply be uh having good taste.
IdaWhich is a profound shift, honestly, in how we think about work. I mean, for the last two years, everyone has been just obsessively focused on the capability of the AI models themselves.
AllanYeah, exactly. Like how many parameters does it have?
Rework Eats Productivity Gains
IdaRight. How fast is the generation? But the thing is, the bottleneck isn't the processing power of the computer anymore. The bottleneck is the processing power of the human being who's actually sitting in front of the screen.
AllanLet's start right there, actually, with the immediate reality of AI implementation versus the hype we've all been sold. Because the prevailing narrative has been, you know, AI will instantly double your output. Right? But according to this recent future of work analysis, almost 50% of the productivity gains from AI are currently just being lost to rework.
Ida50%. That's huge.
AllanHalf of it. People are generating this massive amount of text or code, but then they have to go back and painstakingly fix it or edit it or, you know, verify the AI's output. We are basically spending half our time just babysitting algorithms.
IdaAnd that babysitting, it has a really profound physiological and psychological cost. So Boston Consulting Group recently completed this major study on this dynamic, and they identified a phenomenon they are officially calling AI brain fry.
AllanBrain fry. I mean, it sounds like a novelty fast food menu item, but the data here is actually wild.
IdaIt's a very real cognitive state. BCG surveyed this massive pool of workers and found that 14% of them are actively suffering from it right now.
AllanWow. 14%. Yeah.
AI Brain Fry Explained
IdaAnd they define brain fry as the specific mental fatigue that results from the excessive use of or interaction with AI tools well beyond your natural cognitive capacity.
AllanAnd the downstream effects are pretty alarming, right?
IdaOh, definitely. People experiencing brain fry show 33% more decision fatigue, and there is a 39% spike in major errors in their actual work.
AllanOkay, but here's the thing: is this just the new Zoom fatigue or something worse? Because it sounds like we gave everyone a jetpack but totally forgot to teach them how to land. Are people just, you know, tired of looking at screens, or is there something mechanically different happening in the brain here?
IdaIt's a really critical distinction to make. The research is very clear that this is fundamentally different from traditional burnout.
AllanHow so?
IdaWell, if you think about traditional workplace burnout, historically it's a measure of emotional and physical exhaustion. It comes from interpersonal conflict, managing difficult stakeholders, uh emotional labor.
AllanRight. Office politics, that kind of thing.
IdaExactly. Brain fry, on the other hand, is pure cognitive overload. It's literally a working memory problem. We are treating human brains like unlimited hard drives.
AllanBut what does that actually look like, like on a random Tuesday afternoon for a normal employee?
IdaOkay, so imagine you ask a chatbot to generate a marketing strategy. In three seconds, it spits out a highly confident, incredibly dense 40-page document with five different strategic options.
AllanRight, it's instantaneous.
IdaInstantaneous. The machine did the work instantly, but now the human is forced into this rapid-fire, relentless decision-making loop. You have to read it.
AllanYou have to evaluate it.
IdaEvaluate it, fact-check it, synthesize it. Your brain just wasn't built to process that volume of synthetic information that quickly. Eventually the human hard drive just crashes.
AllanSo if you're listening to this and you've spent the last three hours prompting a chatbot just to write a single strategy brief, you are exactly who BCG is talking about.
IdaYep. You've got brain fry.
AllanAnd the irony of who is suffering the most from this is staggering to me. You'd assume it would be the software engineers, right? The coders who were deeply embedded in these systems all day.
IdaYou would think so.
AllanBut the BCG data shows the highest rates of brain fry are actually in human resources at 19.3% and marketing at nearly 26%.
IdaWhich logically follows, honestly, when you consider the nature of those jobs.
AllanReally? How do you mean?
IdaWell, those departments are dealing with massive, unstructured volumes of text, communication, and human synthesis. They are often pushed to the bleeding edge of adopting these generative tools to handle, like hiring pipelines or mass content creation.
AllanOh, that makes sense.
IdaYeah. They are essentially the canaries in the coal mine for what happens when a biological brain tries to match the cadence of a generative machine.
Who Gets Brain Fry Most
AllanThat individual overload is wild, but it's really just the first symptom, I think. Things get so much more interesting when you look at what happens when you drop this highly disruptive technology into a leadership system that was already fundamentally broken.
IdaOh, yes.
AllanThere's this brilliant framework developed by product leader Stephanie Liu about what she calls the taxes of bad leadership.
IdaYes. Her central thesis is that AI does not fix a broken leadership system. It merely accelerates it in the exact wrong direction.
AllanRight.
IdaA weak leadership structure has always exacted a tax on an organization. You know, it makes everything slower, more painful. But AI has basically added a massive multiplier to all of those existing taxes.
AllanI was reading through her framework and her breakdown of the trust tax is incredibly revealing. This tax focuses on the fragile relationship between the CEO and their product leaders. Stephanie points out that executives are seeing their competitors prototyping new features at record speeds because of AI. So the CEO turns to their own product teams and asks, why is it taking us so long?
IdaAnd if the foundational trust in that relationship is already low, that question doesn't land as genuine curiosity.
AllanNot at all.
IdaIt lands as a direct threat. It lands as doubt.
AllanThe product leader instantly goes on the defensive?
IdaExactly. The whole mechanism here is just self-preservation. Instead of focusing on leading the team to build better products, the product leader is forced into these endless, dragged-out status update loops.
AllanJust making slide decks all day.
IdaLiterally. It really does.
Bad Leadership Taxes Multiply
AllanSo because generative AI has essentially lowered the barrier to entry for coding to zero, non-engineers are getting restless. People in sales or marketing are seeing a cool AI demo on social media, and instead of waiting for the engineering team, they just decide to independently prototype their own software features using AI.
IdaJust going totally rogue.
AllanYes. That's vibe coding. They're just building rogue software based on vibes. I love that this exists, but also why? It's like an orchestra where the violins are playing Mozart, the brass section is playing jazz, and the conductor is just nodding politely. How does anything actually get built?
IdaIt doesn't. Or rather, a lot of disjointed things get built, but absolutely nothing cohesive survives. That is the essence of the alignment tax. Everyone is moving at 100 miles an hour, but nobody's moving together. Right. The CTO wants to rebuild the entire architecture. Marketing is spinning up rogue AI tools because they vibe coded a new dashboard, and the poor product leader is just caught in the middle. What this exposes is the massive difference between real alignment and fake alignment in corporate culture.
AllanI've definitely been in meetings that had fake alignment.
IdaWe all have. Real alignment is extremely difficult. It involves genuinely uncomfortable, behind closed doors conflict. Yeah. It's two leaders arguing over resources until they forge a single unified direction that they both actually commit to. Fake alignment is what usually happens when leaders want to avoid conflict entirely.
AllanRight. Everyone's just being polite.
IdaExactly. Everyone nods in the meeting, says, great idea, but then everyone goes back to their desks and does exactly what they're going to do anyway. It leaves the teams underneath them completely guessing about what the actual priorities are.
AllanAnd when you pour the accelerant of AI onto fake alignment, you get what the industry is now calling product slop. Teams are moving so fast, skipping crucial steps, just because generating code feels faster than actually researching a problem.
IdaThis really gets into the mechanics of how software is supposed to be built. Product managers often use a framework called the double diamond. So the first diamond, the left side, is the discovery phase. You are researching the market, talking to users, and asking the fundamental questions: who actually buys this? What problem are we even solving?
AllanThe important questions.
IdaRight. Then the second diamond, the right side, is the delivery phase, where you actually write the code and build the thing.
AllanBut AI makes the right side of the diamond practically instantaneous.
IdaPrecisely. AI makes building so cheap and so fast that teams are completely skipping the left side of the diamond. They skip the discovery phase entirely. So you end up shipping products at lightning speed that absolutely nobody wants to buy. That is product slop.
AllanWhen teams are churning out this product slop because of fake alignment, it inherently breeds a massive amount of insecurity among the people actually doing the work, which leads to this terrifying psychological toll, what Stephanie Liu calls the therapy tax.
Vibe Coding And Alignment Breakdown
IdaYeah, this is the hidden emotional labor cost of the whole AI transition. Product managers and knowledge workers right now are dealing with profound anxiety about their own relevance.
AllanYeah, I can imagine.
IdaImagine spending 10 years mastering a highly specific skill, like writing technical user stories, and suddenly an intern can do it in four seconds with a prompt. Their entire professional identity is shattering.
AllanAnd the leaders are just stuck absorbing this grief in their weekly one-on-ones. One of the articles uses this amazing phrase saying leaders feel like they are herding cats who think they're lions.
IdaIt's so accurate.
AllanWait, what? Seriously.
IdaYeah.
AllanThe employees are terrified, but also wildly overconfident because of the AI tools. If the leaders are paying a confidence tax and the team is paying a therapy tax, aren't we just automating our own professional midlife crises?
IdaWe are absolutely automating an existential crisis. Let's look at that confidence tax you just mentioned. This affects the leaders themselves. They are experiencing deep, deep uncertainty about what their role even is anymore. And when leaders lack internal stability and don't know what the future looks like, they default to the worst possible human behaviors. The mechanism here is risk aversion. When the company actually needs bold, decisive vision, insecure leaders freeze up.
AllanThey don't want to make the wrong call.
IdaExactly. They start deferring to whoever sounds the most certain in the room, even if that person is completely wrong.
AllanSo the system is emotionally fragile, structurally misaligned, and running at 100 miles an hour.
IdaWhich is why adding AI to this mix is like putting a massive high-performance jet engine into a car with a cracked chassis. It doesn't help the car win the race. The torque just rips the car apart much, much faster.
AllanAnd we are seeing the structural fallout of that cracked chassis breaking apart in real time. Let's talk about the collapse of the middle. According to labor data, middle management is essentially evaporating.
IdaIt really is.
Product Slop From Skipping Discovery
AllanJob postings for middle managers are down 42%. And Gartner predicts that 20% of organizations will eliminate more than half of their middle management roles in the near future, replacing those functions with AI.
IdaThis represents a massive hollowing out of the modern organization. The senior executive assumption is that, well, AI can handle the reporting, the scheduling, the status updates, all the administrative tasks they associate with middle management.
AllanBut that's not all they do.
IdaExactly. That fundamentally misunderstands what a good middle manager actually does. They aren't just bureaucrats, they are the translation layer. They provide the context, the coaching, the emotional steadiness, and the continuity between the grand vision of the CEO and the daily reality of the frontline worker.
AllanBut the senior leaders don't seem to realize they are losing that translation layer because they are using AI to create the appearance of leadership.
IdaYes.
AllanThey're using algorithms to generate these perfectly polished, empathetic-sounding company-wide emails or to instantly summarize hundred-page reports. They feel incredibly productive, but mechanically they're becoming more distant and disconnected from the reality on the ground than ever before, which leads directly to disasters like the Zillow Offers situation.
IdaThe Zillow case study is the perfect, terrifying example of what happens when you remove human context and rely entirely on algorithmic oversight.
Therapy Tax And Confidence Tax
AllanFor those of you listening who might not remember the details of this, Zillow had an algorithmic pricing model for buying and flipping homes. It was supposed to be completely automated. And it ended up overestimating home values so badly that the company had to take a $304 million write-down in a single quarter.
IdaOuch.
Allan$304 million. And the underlying reason why is just fascinating. The algorithm was looking at historical data, but it couldn't understand the unprecedented real-world human context of the COVID-19 pandemic.
IdaExactly. Automated valuation models assume the future will look roughly like the past. When COVID hit, the housing market exhibited bizarre, emotionally driven human behaviors, panic buying, massive migrations, that the algorithm simply couldn't contextualize.
AllanIt didn't know there was a pandemic going on.
IdaRight. It kept buying houses at inflated prices because it lacked the capacity to watch the evening news or understand human anxiety. It was an anomaly that any competent human analyst or middle manager would have caught immediately.
AllanWhat does this say about us as a society? If we willingly rip out the middle managers, who are the glue that understands human context, and the top executives are hiding behind AI-generated summary emails, who is left in the building to catch the $300 million mistakes?
IdaIt creates a profound systemic vulnerability. We are confusing the ability to generate information with the ability to make a decision.
AllanOh, that's good.
IdaAI generates infinite complexity and endless options. But leadership, true leadership, still requires a human being to look at the context, choose one direction, and own the consequences of that choice. More information floating in a broken, hollowed-out system is just creating hesitation, not conviction.
AllanOkay, we've talked about brain fry, we've talked about the taxes of bad leadership, vibe coding, and$300 million algorithms gone completely off the rails. I feel like we need to find the silver lining here.
IdaLet's find one.
AllanBecause the research actually suggests that AI might be the exact tool we need to train better human leaders.
IdaIt is a deeply ironic twist, but the data actually supports it. There is a fascinating study published recently in the California Management Review that looked at using AI not to replace workers, but as an executive coach for leaders.
AllanOkay, interesting.
IdaYeah, they ran a comprehensive 12-week experiment comparing highly trained human executive coaches against AI coaching agents.
Middle Management Collapse And Zillow
AllanAnd the results kind of blew my mind. The AI coaching agents improved the leaders' cognitive flexibility by 28%. And this is the crazy part. They reduced implicit bias by 35%. The AI significantly outperformed the human experts. But how? How is a chatbot better at teaching leadership and reducing bias than a human being?
IdaIt comes down to a psychological mechanism the researchers coined algorithmic humility.
AllanAlgorithmic humility.
IdaThe mechanism is actually rooted in human biology. When a human coach gives you critical feedback, your brain's social threat response naturally activates. You get defensive.
AllanOh, sure. Nobody likes being criticized.
IdaRight. The human coach knows this. So they naturally deploy empathy. They might soften the blow of the critique or subtly validate your excuses to protect the relationship, but an AI simply does not care about your feelings.
AllanIt really doesn't.
IdaIt does not possess empathy. And crucially, your brain knows it's talking to a machine, so the social threat response doesn't trigger in the same way. The AI is relentlessly honest. It just holds up a mirror of unfiltered objective data. It forces the leader to confront their blind spots and their biases logically without the comfortable cushion of human empathy to protect their ego.
AllanThis is simultaneously impressive and completely ridiculous. Wait, it gets better. We are literally using emotionless robots to teach human managers how to be better humans, specifically because the robots don't politely coddle their egos.
IdaIt is absurd, but it is highly effective. And it points toward what the midterm future of leadership actually looks like. The leaders who survive and thrive in this new era won't be the ones with the deepest functional domain expertise. They won't be the fastest coders or the most technically proficient marketers.
AllanWho will they be then?
IdaThey will have to evolve into orchestrators. Their primary job will be ensuring the best possible collaboration between human employees and generative AI.
AllanWhich completely shifts how we define valuable human work. There's an analysis from Workday that highlighted this beautifully. They asked the fundamental question: what is the next artisan in the AI era?
IdaYeah.
AllanIn a world where a machine can predict the next word perfectly, or write a flawless block of code, or generate a 50-page marketing strategy in seconds, what is left for us to do? And the answer they came to is taste.
IdaTaste and judgment. It is a profound realization. When the execution of a task becomes basically free and instantaneous, the value no longer lies in the execution. The value lies entirely in deciding what to execute. Right. Is this good? Is this right for our brand? Does this solve the actual messy human problem we are facing?
AllanIt's like having a million master painters standing behind you who can perfectly execute any brushstroke you ask for.
IdaYeah.
AllanThe value isn't knowing how to mix the paint or hold the brush anymore. The value is knowing what the painting should actually look like.
IdaExactly. The human advantage isn't coding or processing data or generating reports anymore. The human advantage is emotional steadiness. It's the ability to navigate ambiguity, to build actual trust instead of fake alignment, and to have refined taste. AI actually helps us by stripping away our biases and taking over the administrative burdens so we can finally focus entirely on those uniquely human traits.
AI Coaching Algorithmic Humility
AllanThat is a surprisingly hopeful place to land. Let's wrap this up. We started this deep dive looking at a corporate landscape that felt like an AI-induced fever dream: sales teams vibe coding, HR departments suffering from brain fry, and the complete hollowing out of middle management.
IdaIt's a lot.
AllanIt is a lot. Yeah. But what we've discovered is that AI is incredibly powerful, yes, but ultimately it functions as a mirror. It is just reflecting and accelerating our own organizational and leadership flaws.
IdaIt exposes the cracks we've been ignoring for decades. The fake alignment, the lack of trust, the fear of conflict.
AllanThe technology isn't the problem, and it isn't the savior. To fix the AI strategy, we first have to fix the human strategy. We have to stop paying the taxes of bad leadership, lean into algorithmic humility, and focus on cultivating the uniquely human skills of taste, judgment, and connection.
IdaIt's a powerful reminder that no matter how advanced our computational tools become, the foundation of any successful endeavor is still deeply unavoidably human.
Taste Judgment And The Final Question
AllanBut as we leave you to ponder all of this, I want to leave you with one final lingering thought. We just talked about how AI is proving to be relentlessly honest. It's highly effective at inducing algorithmic humility by pointing out the biases of middle managers and product leads because it strips away the ego and just looks at the data. Well, if it's that good at evaluating performance without bias, how long until corporate boards of directors start using AI to evaluate the CEO's performance? Right. And when that happens, when the machine turns its unfiltered gaze to the very top of the food chain, will the C-suite be ready for the raw truth from an algorithm that doesn't care about their title? Or will those beautifully written, robot-generated, empathetic emails suddenly sound a lot like an automated pink slip? Thanks for joining us on this deep dive. We'll see you next time.