Humans in the Loop

Trust and Bias in AI Decision Making

Season 1, Episode 3


Episode length: 18:03

Christopher was a high-performing engineer, but his performance reviews were vague: generic praise, no specifics. When an AI system summarized that input for leadership, it didn’t clarify his value. It erased it.

In this episode of Humans in the Loop, we explore how vague manager feedback, combined with AI-generated summaries, can derail career advancement for neurodivergent professionals. You’ll learn how performance management systems built on weak input can amplify bias, stall growth, and reinforce exclusion. Because AI doesn’t fix bad management. It scales it.

Support the show

Humans in the Loop is independently produced by a team of neurodivergent creators and collaborators. Hosted by Ezra Strix, a custom AI voice built with ElevenLabs.

Explore episodes, transcripts, and FAQs at loopedinhumans.com.

If you enjoyed the show, please leave us a review at RateThisPodcast.com/loopedinhumans. You can also support us at patreon.com/humansintheloop.