Humanergy Leadership Podcast

Ep236 Beyond the Algorithm: What Leaders Bring That AI Cannot

David Wheatley

AI can process data and optimize outcomes, but it can’t provide purpose, empathy, or moral judgment.

In this episode, Humanergy coach David Wheatley explores why the rise of AI elevates the role of human leadership. The conversation focuses on meaning, emotion, ethics, and the practical steps leaders can take to stay firmly in the loop as algorithms play a bigger role at work.

This episode is a recording from Humanergy’s free monthly leadership workshop series. Find upcoming workshops at humanergy.com.

Learn more about Humanergy's work: https://www.humanergy.com

Join the Humanergy community on LinkedIn.

Sign up for our FREE leadership workshops.

Humanergy (00:11)
Hey, everybody. I’m Mimi with Humanergy.

AI is no longer a future concern. It’s already shaping how work gets done and how decisions get made. In this session, Beyond the Algorithm: What Leaders Bring That AI Cannot, Humanergy coach David Wheatley explores what becomes more important, not less, as AI accelerates: meaning, emotion, and ethical judgment, and where leaders must stay firmly in the loop.

This recording comes from Humanergy’s free monthly leadership workshop series. We offer these sessions every month, and you can find upcoming topics and dates on our website at humanergy.com.

Here’s David.

David Wheatley (00:49)
It feels like we’ve been standing at an inflection point for a while. Just a few years ago, AI felt like a novelty, and today it seems to be integrated into nearly every aspect of business.

We’re all chasing efficiency, scalability, and optimization, and AI delivers, or at least it can deliver. Now, I’m not proclaiming to be an expert in AI this afternoon. I’m just a keen student of how it’s impacting the leadership conversation.

But here’s the million-dollar question. I should probably raise that number these days, shouldn’t I? That seems quite low. If AI can optimize everything, then what’s left for the human leader? A common fear seems to be replacement.

My argument is the opposite. The rise of the algorithm elevates the importance of the human leader. Our role isn’t to be the fastest calculator. It’s to be the conscience, the connector, and the creator of meaning, to lead boldly and stay human.

Today, we’re going to look at the danger of over-reliance on the algorithm and explore the indispensable human edge, specifically meaning, emotion, and ethics, and identify three actions you can take immediately to future-proof your leadership.

AI is not a god. It’s a tool. It’s brilliant at finding the shortest path between two points. It’s utterly blind, although it’s getting better, to why we are traveling that path or the damage we might cause along the way.

When we delegate judgment to the algorithm, we outsource our humanity. I want you to think about that for a minute. When we delegate judgment to the algorithm, we outsource our humanity.

Consider a classic example of efficiency algorithms. A few years ago, a massive global company used an AI system to review and score job applicants. The system was trained on decades of hiring data. The result was that it learned to penalize résumés that included words like “women’s” or indicated attendance at a women’s college, because historically the majority of successful applicants were male.

The algorithm was perfectly optimized for the past, but catastrophically biased for the future. The data was clean. The code was flawless. The decision was deeply and fundamentally flawed. The AI delivered a result that was efficient but unethical.

It even happens to us. With an early version of ChatGPT, we asked it to provide a solution referencing our book, The 50 DOs for Everyday Leadership. It gave a great answer. We were really surprised. It quoted a number of DOs very confidently. Then we checked it against the book, and it had completely made them up, with no connection to what we had written.

So AI lacks a moral compass. This isn’t just about bias, it’s about context. AI doesn’t understand trade-offs between profit and people. It can maximize shareholder value, but it can’t explain why that value matters to the employee on the factory floor or the customer it just alienated.

That blind spot is where human leadership steps in.

The first and most critical element AI struggles to replicate, in my mind, is the creation of meaning. AI provides information. Leaders provide purpose. People don’t follow an algorithm, they follow a vision.

When a tough quarter hits, AI can only tell your team the numbers are down. The human leader must stand up and explain why the struggle is worth it, recalibrating the mission and re-energizing the team.

This is the power of narrative. We need leaders who can draft and craft a story, a story of commitment, service, and impact that makes the work matter beyond the quarterly spreadsheet.

AI also lacks emotion, and crucially, empathy. You can code AI to use comforting words. After all, I’ve seemingly never asked it anything other than fantastic questions. You’ve probably had the same experience. Great question.

But you cannot code it to feel the tension in a negotiation, the fear of an employee during a restructuring, or the pride in a major team accomplishment. Leadership is a contact sport. It requires the ability to look a colleague in the eye and understand their nonverbal cues.

Empathy is the operating system of motivation. An algorithm can assign tasks, but only a human leader can inspire loyalty and navigate complex interpersonal conflict with compassion.

When trust is broken, you don’t call a bot. You call a leader with emotional intelligence and judgment. AI may detect words, but it often misses the subtle cues that define how we truly feel.

Finally, let’s talk about ethics. AI is a mirror reflecting the data we feed it, biases and all. It operates on rules. Leaders operate on values.

We need leaders who don’t just ask, “Can we do this?” which AI answers with a resounding yes. We need leaders who ask, “Should we do this?”

This is the ultimate human check. It means establishing ethical guardrails before the model starts running, taking ownership of the outputs, and having the courage to stop an algorithm that is technically optimized but morally corrosive.

So how do you operationalize the irreplaceable human edge? First, stop delegating judgment. AI can handle data processing, but the decision must remain yours. Implement a human-in-the-loop checkpoint for any major AI-driven decisions, especially those involving hiring, firing, marketing to vulnerable populations, or resource allocation.

In the book The AI-Driven Leader by Geoff Woods, which I highly recommend, we’re encouraged to use AI as a thought partner, but the judgment still remains with you.

Second, invest in EQ, not just IQ. Dedicate meaningful budget to training yourself and your managers in advanced emotional intelligence, conflict resolution, and complex communication.

The harder the soft skills, the more valuable they become. Pay attention. Watch. Listen. Observe. Sense. Think about how any action or decision plays out for the people who have a stake in it. This is a higher level of thinking.

Third, define your ethical North Star. Articulate your organization’s ethical line in clear, accessible language. This ethical North Star is your core, uncompromisable value system. It dictates what you should do, not just what you can do.

It moves beyond compliance or rules to define moral boundaries, serving as the ultimate human veto to audit algorithmic outputs and stop any solution that is technically efficient but morally corrosive or biased.

What biases are you actively guarding against? Use the North Star to audit your algorithms annually. If the code conflicts with your conscience, the code changes.

The age of the algorithm is here, but it doesn’t define our role. It clarifies it. Our power as leaders is not in our ability to process information, but in our capacity to shape it.

We are the architects of meaning, the navigators of emotion, and the keepers of ethics. AI gives us data, but we must provide the wisdom.

Wisdom is the ability to apply knowledge and experience, both personally and collectively, to make decisions that align with long-term human values and purpose, especially when data alone is ambiguous or potentially unethical.

Wisdom, in short, is the indispensable human capacity to apply an ethical North Star and empathy to the knowledge AI generates. It’s the bridge between what is measurable and what matters.

So a final call to action. The machine, at least for now, can only do what it’s told. You’re the one who decides what to tell it, and more importantly, what to keep for yourself.