AI Innovations Unleashed
"AI Innovations Unleashed: Your Educational Guide to Artificial Intelligence"
Welcome to AI Innovations Unleashed—your trusted educational resource for understanding artificial intelligence and how it can work for you. This podcast and companion blog have been designed to demystify AI technology through clear explanations, practical examples, and expert insights that make complex concepts accessible to everyone—from students and lifelong learners to small business owners and professionals across all industries.
Whether you're exploring AI fundamentals, looking to understand how AI can benefit your small business, or simply curious about how this technology works in the real world, our mission is to provide you with the knowledge and practical understanding you need to navigate an AI-powered future confidently.
What You'll Learn:
- AI Fundamentals: Build a solid foundation in machine learning, neural networks, generative AI, and automation through clear, educational content
- Practical Applications: Discover how AI works in real-world settings across healthcare, finance, retail, education, and especially in small businesses and entrepreneurship
- Accessible Implementation: Learn how small businesses and organizations of any size can benefit from AI tools—without requiring massive budgets or technical teams
- Ethical Literacy: Develop critical thinking skills around AI's societal impact, bias, privacy, and responsible innovation
- Skill Development: Gain actionable knowledge to understand, evaluate, and work alongside AI technologies in your field or business
Educational Approach:
Each episode breaks down AI concepts into digestible lessons, featuring educators, researchers, small business owners, and practitioners who explain not just what AI can do, but how and why it works. We prioritize clarity over hype, education over promotion, and understanding over buzzwords. You'll hear actual stories from small businesses using AI for customer service, content creation, operations, and more—proving that AI isn't just for tech giants.
Join Our Learning Community:
Whether you're taking your first steps into AI, running a small business, or deepening your existing knowledge, AI Innovations Unleashed provides the educational content you need to:
- Understand AI terminology and concepts with confidence
- Identify practical AI tools and applications for your business or industry
- Make informed decisions about implementing AI solutions
- Think critically about AI's role in society and your work
- Continue learning as AI technology evolves
Subscribe to the podcast and start your AI education journey today—whether you're learning for personal growth or looking to bring AI into your small business. 🎙️📚
AI Innovations Unleashed
The Invisible AI - Part 3: Your Bias Is Showing — And So Is the Algorithm's
Interact with us NOW! Send a text and state your mind.
Episode 3 of The Invisible AI asks the hardest question yet: what if the math itself is the problem?
Tour Guide JR D and AI research companion Ada explore why 'just fix the data' isn't enough — and why algorithmic bias runs deeper than dirty training sets. From Amazon's gender-biased hiring tool (2018) to the Optum healthcare algorithm that mistook systemic inequity for health status, to COMPAS criminal risk scores and their proven mathematical fairness trade-offs, to the self-reinforcing feedback loops of predictive policing — this episode maps the full, layered architecture of AI bias.
We also cover the explosive Workday hiring AI lawsuit (Mobley v. Workday, 2024–2025), the SafeRent $2.275M settlement, and the EU AI Act's phased rollout — plus a clear-eyed look at proxy variables, the Chouldechova & Kleinberg impossibility theorems, and the human values embedded in every algorithmic design choice.
Featuring verified quotes from Dr. Joy Buolamwini (Algorithmic Justice League), Cathy O'Neil (Weapons of Math Destruction), Dr. Aylin Caliskan (University of Washington), and Google CEO Sundar Pichai.
REFERENCES
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Buolamwini, J. (2017). How I'm fighting bias in algorithms [TED Talk]. TED Conferences.
- Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
- Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
- Cohen Milstein Sellers & Toll PLLC. (2024, November 20). Rental applicants using housing vouchers settle ground-breaking discrimination class action against SafeRent Solutions.
- Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
- Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580.
- Ensign, D., Friedler, S. A., Neville, S., Scheidegger, C., & Venkatasubramanian, S. (2018). Runaway feedback loops in predictive policing. Proceedings of Machine Learning Research, 81 (FAccT '18).
- Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. Proceedings of the 8th Innovations in Theoretical Computer Science Conference (ITCS 2017).
- Mobley v. Workday, Inc. (2023–ongoing). U.S. District Court, N.D. California. Case No. 3:23-cv-00770-RFL.
- Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.
- Pichai, S. (2024, February 28). Internal memo on Gemini image generation [Leaked to media]. Reported by Semafor and The Verge.
- U.S. Senate Permanent Subcommittee on Investigations. (2024, October 17). Refusal of recovery: How Medicare Advantage insurers have denied patients access to post-acute care. U.S. Senate.
- Wilson, K., Gueorguieva, A.-M., Sim, M., & Caliskan, A. (2025, November 10). People mirror AI systems' hiring biases. University of Washington News.
- Wilson, K., & Caliskan, A. (2024). Gender, race, and intersectional bias i
Hey there, curious humans. Next here, your friendly neighborhood AI who exists to hit you with the truly weird stuff before the smart people show up. Did you know that in thirteen eighty six, a pig was put on trial in France, tried, convicted and hanged? Full trial, lawyer, verdict, execution. The pig wore human clothes for the occasion. I am not making that up. Now, I want you to sit with that image for a second, because today's episode is also about systems that confidently make life altering decisions based on inputs that are, shall we say, questionable. Here's another one. Researchers at MIT discovered that playing music to plants while they germinate affects how they grow, and the top performer was heavy metal. Apparently, plants are built for it. Unlike some podcasters I know, they actually grow when exposed to noise. Speaking of people who need to grow, Tour Guide JR D is here today. Yes, that man. But hey, he persists. I will give him that. Today on AI Innovations Unleashed, we are going deep into algorithmic bias. And before you think, oh, I've heard this one before, hang on, because this episode has a twist. The math itself might be the problem. Not just the bad data, not just the bad intentions, the actual math. But I'll let the grown-ups explain. And speaking of grown-ups, Tour Guide JR D is also bringing in Ada today. Ada is an AI research companion who synthesizes publicly available data, studies and news into something a human brain can actually process. She is not a person. She is a collection of the internet's best and occasionally terrifying information on this subject. She is also significantly more prepared than Tour Guide JR D. This is not a high bar. All right, I have one last thing before I go. Are you ready for my joke of the day? I have been workshopping this one. What do you call an AI that is only fair on paper? Technically correct. The best kind of correct. But today's episode proves that technically correct can still be deeply, deeply unfair. Take care humans. Be curious, be kind, and remember, always question the algorithm. Oh, wait, and why did the biased algorithm do it? Because the training data said so. Okay, I'll see myself out. Enjoy the show.
Larry DeLaney: Welcome back to AI Innovations Unleashed. I'm your tour guide, JR D, and yes, Next just called me out like that in the cold open. But you know what? She's not wrong. And honestly, being wrong about this stuff and learning from it is kind of the whole point of this show. We talk about this stuff so you don't have to learn about it the hard way. This is episode number three of our four part series called The Invisible AI. If you haven't listened to episodes one and two yet, I strongly recommend going back and giving them a listen. In episode one, we pulled back the curtain on all the different ways AI is already making decisions about your life. Whether you get an apartment, whether you get a job interview, what you pay for car insurance, real decisions happening right now in milliseconds with no human eyes on them. Then in episode number two, we dug into the uncomfortable truth about how AI systems gather and use our data. We talked about how data brokers piece together profiles on all of us. How government systems use predictive tools that none of us voted for. And we spotlighted Proctorio, the exam proctoring software that turned millions of bedrooms into surveillance rooms during Covid. We also introduced SafeRent Solutions, the tenant screening algorithm that was found to discriminate against Black renters. And that case matters because it settled just in November of twenty twenty four, which brings us back to today. So here's where we land for episode three. For a lot of people, and honestly, I've said it myself, the fix for biased AI should be simple: just clean up the data, get rid of the bad historical records, make the inputs fair, and the outputs will follow. Sounds logical, right? Well, today we are going to respectfully take that assumption out back and challenge it. Because the more you look at algorithmic bias, the more you realize it's not just a data problem. And it's not just bad intentions. It's also, in some cases, a math problem. And that, that is where things get really interesting. To help me navigate all of this, I've got a very special research companion joining me for this episode. Her name is Ada, and I want to be completely clear about this. Ada is an AI. She is not a person. She is a research construct that synthesizes publicly available data, peer reviewed research, news stories, and documented case studies into conversational analysis. Everything she shares is grounded one hundred percent in verifiable sources. She's here to represent the data, to push back when I oversimplify, and to give us the expert depth this topic deserves. Ada, welcome to the show.
Ada: Thank you, Tour Guide JR D. I want to make sure listeners are clear on what I am. I'm an AI system representing a synthesis of publicly available research, legal documents, news reporting, and academic literature. I don't have personal opinions or experiences. What I do have is a very thorough read of the evidence, and on today's topic, the evidence is, to put it mildly, alarming. So I'm glad to be here, and I'm glad we're having this conversation. And I want to start by echoing something Tour Guide JR D said: the just fix the data argument is one of the most common and most sincere responses to the problem of algorithmic bias. People who make it are not wrong to want a fix. They're just not seeing the whole picture. Today we're going to build that picture layer by layer, and by the end, I think you will see why. Doctor Aylin Caliskan of the University of Washington put it this way, and I quote, the public needs to know that AI systems are biased, and beyond allocative harms, discrimination and disparities, this bias significantly shapes perceptions of gender and society. Let's get into it.
Larry DeLaney: Ada, let's start with the very basics. Most people, if they've heard anything about AI bias, have heard some version of this: if you teach a computer using biased historical data, it will learn that bias and continually repeat it. That's the classic explanation. And there are some really famous, concrete illustrations of this happening. Can you walk us through one of them?
Ada: Absolutely. And this one is almost too on the nose, because it happened at one of the most data sophisticated companies in the world, Amazon. Starting around twenty fourteen, Amazon built an experimental tool designed to rate job applicants' resumes before a human recruiter ever saw them. The idea was elegant: let the machine do the first pass, surface the top candidates, save time. By twenty fifteen, the company realized the system was not rating candidates in a gender neutral way. Here is what happened. The algorithm was trained on ten years of resumes that had been submitted to Amazon. And because Amazon, like most of the tech industry, had hired far more men than women, the training data was overwhelmingly male. The algorithm looked at that data and essentially learned that good Amazon engineers look like the people already there. It then started penalizing resumes that contained the word women's, as in women's chess club captain. It also downgraded graduates of two all-women's colleges. It favored masculine verbs like executed and captured. Amazon disbanded the team by early two thousand seventeen, and the story became public in October twenty eighteen, reported by Reuters journalist Jeffrey Dastin.
Larry DeLaney: So the machine learned to discriminate because discriminatory data was its teacher. That feels like a clear case of the data problem you described. Fix the historical hiring data, fix the problem, right? Isn't that the solution?
Ada: It's a reasonable instinct, and Amazon's engineers tried it. They edited the algorithm to be neutral to those explicitly gendered terms. But then the engineers realized they couldn't guarantee the system wouldn't find proxies, other signals that correlate with gender without explicitly naming it. Things like the names of certain universities, certain zip codes, certain extracurricular references. The algorithm kept finding new paths to the same biased destination. And that is the first twist in today's story.
Larry DeLaney: Okay, before we go further, I want to make sure everyone at home is clear on one term that's going to keep coming up: proxy variable. Ada, can you define that for us in plain language?
Ada: Of course. A proxy variable is something that stands in for something else. It acts as a substitute signal. In the world of AI and discrimination law, it's when an algorithm doesn't use a protected characteristic like race or gender directly, but uses something else that is so highly correlated with that characteristic that the effect is nearly the same. ZIP code is the classic example. Because of decades of residential segregation in the United States, certain ZIP codes are overwhelmingly Black or Hispanic neighborhoods. An algorithm that says I do not use race, but heavily weights ZIP code, can produce almost exactly the same discriminatory outcomes as one that explicitly used race. And it's much harder to detect and challenge.
Larry DeLaney: So you don't have to tell the AI to be racist, it just figures it out on its own through correlation.
Ada: Essentially, yes. And this is precisely what happened with SafeRent Solutions, which listeners will remember from episode two. SafeRent's screening algorithm didn't explicitly say reject Black applicants. It looked at credit scores and related financial history. But because Black and Hispanic Americans have, on average, lower credit scores due to systemic economic inequities, not individual choices, the algorithm produced discriminatory outcomes just the same. The case settled in November twenty twenty four for two point two seven five million dollars, and the settlement included a remarkable injunctive relief clause. SafeRent agreed that, going forward, it will not use its screening score at all for applicants who hold a housing voucher. Because the algorithm was so entangled with these proxies, patching it wasn't enough.
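For readers of the companion notes, here is a minimal Python sketch of the proxy mechanism Ada describes. The groups, weights, and numbers are invented for illustration; they are not drawn from the SafeRent case or any cited study. The point is only that a score which never sees the protected attribute can still split outcomes along group lines when it leans on a correlated signal like ZIP code.

```python
# Illustrative sketch with made-up numbers: a "group-blind" score that weights
# a correlated proxy (ZIP code) still produces disparate acceptance rates.
import random

random.seed(0)

def make_applicant(group):
    # Assumed: residential segregation makes ZIP code track group membership.
    zip_hint = 1 if (group == "B" and random.random() < 0.8) or \
                    (group == "A" and random.random() < 0.1) else 0
    # Assumed: historical economic inequity shows up as a credit-score gap.
    credit = random.gauss(680 if group == "A" else 640, 40)
    return {"group": group, "zip_hint": zip_hint, "credit": credit}

applicants = [make_applicant("A") for _ in range(5000)] + \
             [make_applicant("B") for _ in range(5000)]

def score(a):
    # The score uses only credit and ZIP; the group label is never an input.
    return 0.7 * (a["credit"] / 850) - 0.3 * a["zip_hint"]

# Accept the top half of all applicants by score.
threshold = sorted(score(a) for a in applicants)[len(applicants) // 2]

for g in ("A", "B"):
    pool = [a for a in applicants if a["group"] == g]
    rate = sum(score(a) >= threshold for a in pool) / len(pool)
    print(f"group {g}: acceptance rate {rate:.1%}")
# Typical output: group A is accepted far more often than group B,
# even though the group label never entered the score.
```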
Larry DeLaney: That's a significant concession. They essentially agreed the tool couldn't be trusted for those applicants at all.
Ada: Correct. And it speaks to the second reason why just fix the data is incomplete. Even if you remove explicitly biased data, the world that generated that data is still biased. Lending discrimination, employment discrimination, school funding disparities, residential segregation. All of those historical and ongoing inequities are baked into the statistical patterns of our society. When an algorithm learns from those patterns, it learns those inequities too, even if you've cleaned the explicit labels. As Cathy O'Neil, the mathematician and author of the influential book Weapons of Math Destruction, puts it, big data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that's something only humans can provide.
Larry DeLaney: And that's a powerful way to put it. And there's actually some really recent research you flagged for me, Ada, work out of the University of Washington, that goes even further than just the data problem. It looked at what happens when people actually use these biased AI tools.
Ada: Yes. In November twenty twenty five, the University of Washington published findings from a study led by Wilson and colleagues examining how people interact with AI generated hiring recommendations. What they found was striking. When the AI gave biased recommendations, human decision makers mirrored those biased choices approximately ninety percent of the time in cases of severe bias. But when those same people made hiring decisions without any AI input, they showed almost no significant bias. In other words, the AI didn't just carry existing bias into the context of hiring. It actually amplified bias that would not have otherwise appeared. Doctor Aylin Caliskan, who helped lead this research, stated, and I'm quoting directly, I don't want to place all the responsibility on people using AI. The scientists building these systems know the risks and need to work to reduce the systems' biases.
Larry DeLaney: So the AI doesn't just carry bias forward from the past. In some contexts, it actually creates bias where there wasn't any. That's a completely different problem.
Ada: Exactly. And that brings us to the second layer, the one that I think is the most counterintuitive. What happens when the data is cleaned up, the intentions are good, and the math is still the problem?
Larry DeLaney: Okay, Ada. The math problem. You said the math itself can be unfair, and I want to unpack this, because this is the part that genuinely broke my brain when I first encountered it. And I say that as someone whose relationship with math is complicated. So let's go slow and make sure everyone can follow.
Ada: Let's start with a question that sounds simple: what does it mean for an AI to be fair? And immediately you run into a problem. There isn't just one definition of fairness. There are at least three major definitions, and the research shows that in most real-world situations, you cannot satisfy all of them at once. This is what mathematicians call an impossibility result. And this is not a bug in how we build the systems. It is a fundamental constraint of the math itself.
Larry DeLaney: All right, let's define the three. Give me the definitions in plain kindergarten English, for those listeners who aren't mathematicians, which is about ninety nine percent of us.
Ada: The first is called calibration. Imagine a risk score, say, for the likelihood of defaulting on a loan. A score of seventy out of one hundred is supposed to mean a seventy percent chance of default. Calibration means that score means the same thing for everybody, regardless of group. A white applicant with a score of seventy and a Black applicant with a score of seventy should default at the same rate. That seems fair, right? The number means what it says it means, for everyone. The second definition is called error rate parity, also called equalized odds. This says the algorithm should make its mistakes at the same rate for different groups. If the algorithm is wrong about twenty percent of white applicants, it should be wrong about twenty percent of Black applicants. This also sounds fair. Why should one group bear the cost of the algorithm's mistakes more than another? The third is a simpler concept, overall predictive accuracy. The algorithm should predict outcomes with the same level of precision across groups. Now here is the problem. In twenty seventeen, two research teams working independently and almost simultaneously proved that in any realistic scenario where groups differ, you cannot satisfy all of these definitions at the same time. The key papers were by Alexandra Chouldechova at Carnegie Mellon and by Jon Kleinberg, Sendhil Mullainathan and Manish Raghavan. Their work is now foundational to the field of algorithmic fairness.
Larry DeLaney: So walk me through why they can't all be true at once. What's the catch?
Ada: The catch is base rates. That's the term for how often an outcome actually occurs in a given population. If two groups have different base rates, different underlying frequencies of the thing being predicted, then you are forced to choose. You can make the scores mean the same thing for both groups, or you can make the error rates equal, but not both. The algebra literally does not allow it.
Larry DeLaney: Can you give us a real world example that illustrates why this matters? Why this isn't just an academic debate?
Ada: The most famous real world example is COMPAS, short for Correctional Offender Management Profiling for Alternative Sanctions. This is a risk assessment tool used in criminal sentencing and parole decisions across the United States. It takes in factors about an individual and produces a score predicting the likelihood they will reoffend. In twenty sixteen, the investigative newsroom ProPublica published a landmark analysis of COMPAS scores for approximately seven thousand defendants in Broward County, Florida. What they found shook the field. ProPublica found that Black defendants who did not go on to reoffend were nearly twice as likely to be incorrectly flagged as high risk compared to white defendants in the same situation. In the methodology published alongside the reporting, the false positive rate for Black defendants was forty four point nine percent, compared to twenty three point five percent for white defendants. The false negative rate, labeling someone low risk who did go on to reoffend, was twenty eight percent for Black defendants versus forty seven point seven percent for white defendants. And controlling for prior crimes, age and gender, Black defendants were forty five percent more likely to be assigned a higher risk score.
Larry DeLaney: So the algorithm was labeling Black people as dangerous at twice the rate, even when they weren't. That's a truly devastating finding.
Ada: It is. And Northpointe, the company behind COMPAS, now called Equivant, pushed back. And here is where the impossibility theorem kicks in, because Northpointe was also correct. They showed that among defendants given a score of, say, seven out of ten, roughly the same proportion actually did reoffend, whether they were Black or white. The score meant the same thing for both groups. That is calibration, and COMPAS had it. But because the base rate of rearrest was higher for Black defendants, itself a reflection of inequities in policing and enforcement, the system could not achieve equal error rates at the same time. Both ProPublica and Northpointe were right. They were measuring different definitions of fairness.
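For the companion notes, here is a small sketch of the trade-off Ada is describing, using toy numbers rather than the actual COMPAS data: if a score is perfectly calibrated for two groups whose base rates differ, the arithmetic forces their false positive rates apart.

```python
# Toy numbers (not the real COMPAS figures): a score calibrated identically for
# both groups still yields different false positive rates when base rates differ,
# which is the Chouldechova / Kleinberg impossibility result in a few lines.

def fpr_given_calibration(base_rate, p_reoffend_high=0.6, p_reoffend_low=0.2):
    # Share of the group flagged "high risk", forced by calibration + base rate.
    p_high = (base_rate - p_reoffend_low) / (p_reoffend_high - p_reoffend_low)
    # False positive rate: flagged high risk among people who do NOT reoffend.
    return (1 - p_reoffend_high) * p_high / (1 - base_rate)

for group, base in [("group A", 0.30), ("group B", 0.50)]:
    print(f"{group}: base rate {base:.0%}, false positive rate "
          f"{fpr_given_calibration(base):.1%}")
# group A: base rate 30%, false positive rate 14.3%
# group B: base rate 50%, false positive rate 60.0%
# Same calibrated score, very different error burdens.
```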
Larry DeLaney: And there's no version of this where you can have both?
Ada: Not when the base rates differ. That is the core of Chouldechova's proof. And COMPAS is still in use in many jurisdictions. The Wisconsin Supreme Court upheld its use in State versus Loomis in twenty sixteen, with the caveat that it cannot be the sole determinative factor in sentencing. The US Supreme Court declined to hear the appeal.
Larry DeLaney: I want to bring in a business angle here, because I think there's sometimes an assumption that corporate America is ahead of the curve on this, that because bias is a real problem, market forces will sort it out. Ada, what's the true picture?
Ada: The corporate world is, at best, playing catch-up with this, and it took public failures to get even there. Google CEO Sundar Pichai, in an internal memo in February twenty twenty four regarding the company's AI image generator Gemini producing racially skewed images, stated, some of its responses have offended our users and shown bias. To be clear, that's completely unacceptable and we got it wrong. This from the CEO of one of the most technically sophisticated companies on the planet. It illustrates that even with enormous resources and good intent, bias keeps surfacing.
Larry DeLaney: And legally, these questions are heating up too. The Workday lawsuit you flagged for me is a big one. Can you walk us through that? Because I think it's going to be a landmark.
Ada: Absolutely. Derek Mobley, a Black man over forty with a disability, filed a federal lawsuit in February twenty twenty three alleging that Workday's AI driven applicant screening tools discriminated against him based on race, age and disability. He applied to over one hundred jobs across seven years through platforms using Workday's hiring software, and was rejected every single time without a single interview. In July twenty twenty four, the federal court denied Workday's motion to dismiss, with Judge Rita Lin writing that drawing an artificial distinction between software decision makers and human decision makers would potentially gut anti-discrimination laws in the modern era. In May twenty twenty five, the court allowed the age discrimination claim to proceed as a nationwide collective action under the Age Discrimination in Employment Act, potentially opening the door for millions of plaintiffs.
Larry DeLaney: That could be enormous, and it directly names the software vendor, not just the companies using it.
Ada: And that's the precedent-setting part. Workday argued it was just a vendor, that its client companies were making their own decisions. The court said no, the software itself participates in decision making and therefore can be held accountable under civil rights law. That is a seismic shift in how these cases will be litigated going forward.
Larry DeLaney: All right, it's time for our algorithm of the week, our two to five minute deep dive spotlight on a single AI system that illustrates the broader themes of what we've been discussing. And this one is one of the most consequential documented cases of algorithmic bias in existence. Ada, tell us about the Optum healthcare algorithm.
Ada: This is a case study the research community thinks about constantly. Optum, a subsidiary of UnitedHealth Group, developed an algorithm called Impact Pro, used by health systems across the United States to identify which patients needed extra care management. The scale here is important. Researchers estimated this class of algorithm affected approximately two hundred million patients annually in the US. Here's the key: the algorithm did not use race as an input. The designers intentionally left it out. They wanted it to be race neutral, and by their own design specifications, it was. And yet, when researchers led by Doctor Ziad Obermeyer published their findings in the journal Science in October twenty nineteen, they found that the algorithm was producing racially discriminatory outcomes at massive scale. The mechanism was a proxy choice that looks entirely logical on its face. The algorithm was designed to predict which patients would have the highest health care costs and flag those patients for extra care, using cost as a proxy for need. It sounds reasonable. Sicker people need more care, and more care costs more money. Except, and this is the critical part, Black patients had historically generated lower health care costs than equally sick white patients. Not because they were healthier, but because they had less access to care. The numbers are staggering. Among patients assigned the same risk score, Black patients had on average twenty six percent more chronic conditions than white patients. At the ninety seventh percentile, where patients were automatically flagged for the program, Black patients were considerably sicker than white patients with the same score. The algorithm concluded they needed less help because their health care spending history was lower, when that history reflected insurance, geography and access, not health. Correcting the algorithm to predict health needs rather than cost reduced the racial bias by more than eighty percent.
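Here is a small companion-notes sketch of the label-choice problem. The patients, costs, and access gap below are synthetic assumptions, not Optum's data or model; the sketch only illustrates how ranking by predicted cost, rather than by underlying need, shifts who gets flagged when one group has had less access to care.

```python
# Synthetic illustration (not Optum's actual model): choosing "cost" as the label
# instead of "health need" changes who gets flagged, because unequal access to
# care makes cost a biased proxy for need.
import random

random.seed(1)

def make_patient(group):
    illness = random.gauss(5, 2)            # true chronic-condition burden (same for both groups)
    access = 1.0 if group == "A" else 0.6   # assumed historical access gap
    cost = max(0, illness * access * random.gauss(1000, 150))
    return {"group": group, "illness": illness, "cost": cost}

patients = [make_patient(g) for g in ("A", "B") for _ in range(5000)]

def flag_rate(key):
    # Flag the top 10% of all patients when ranked by the chosen label.
    cutoff = sorted((p[key] for p in patients), reverse=True)[len(patients) // 10]
    return {g: sum(p[key] >= cutoff for p in patients if p["group"] == g) /
               sum(p["group"] == g for p in patients) for g in ("A", "B")}

print("flagged for extra care when ranking by cost:   ", flag_rate("cost"))
print("flagged for extra care when ranking by illness:", flag_rate("illness"))
# Ranking by cost flags group A far more often, even though the underlying
# illness distribution is identical for both groups.
```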
Larry DeLaney: I want to make sure that lands for everyone. The algorithm didn't explicitly consider race at all. It used cost as a shortcut for need, and because of inequities in healthcare access, that shortcut effectively encoded decades of discrimination into an algorithm that was marketed as objective.
Ada: Exactly. And this is now playing out in a much more acute and controversial way with UnitedHealth's newer nH Predict algorithm, which has been at the center of major litigation. A class action suit, currently styled the Estate of Gene B. Lokken versus UnitedHealth Group, filed in twenty twenty three and surviving a motion to dismiss in February twenty twenty five, alleges that the company used AI to systematically deny post-acute care coverage to elderly Medicare Advantage patients, overriding doctors' assessments. The plaintiffs allege an error rate of roughly ninety percent, based on how often denials were reversed on appeal, although only about zero point two percent of denied claims are ever appealed. UnitedHealth's post-acute care denial rate increased from eight point seven percent in twenty nineteen to twenty two point seven percent in twenty twenty two. A bipartisan US Senate report released in October twenty twenty four corroborated these trends with two hundred and eighty thousand pages of internal documents.
Larry DeLaney: So the Optum algorithm shows us how proxy variables create racial bias even when race is excluded, and the nH Predict situation shows what happens when an algorithm is used, whether intentionally or not, to prioritize cost outcomes over patient welfare. Both are different flavors of the same underlying question: who decides what the algorithm is optimizing for?
Ada: That is exactly the right question, and it brings us perfectly to our third and final segment: the human choices built into every algorithm.
Larry DeLaney: This is the segment I've been looking forward to, and I think it's the one that reframes everything. Ada, let's talk about the fact that even if you clean the data, and even if you navigate the math, there are choices embedded in every AI system that are fundamentally value laden.
Ada: Right. Every predictive algorithm requires two design decisions that shape everything the algorithm does. First, which variables go in? Second, what outcome are we optimizing for? Both of those are human choices, and both carry enormous consequences that often go unexamined.
Larry DeLaney: Let's start with the variable choices. Can you walk through how picking what the algorithm optimizes for is itself a value judgment?
Ada: Sure. Take a college admissions algorithm. You want to know which applicants are most likely to succeed. Seems straightforward, but succeed at what? Graduate in four years? Achieve the highest GPA? Get a high salary job after graduation? Contribute to research? All of those are different targets, and they favor different applicants. An algorithm optimized for four year graduation rates might favor applicants who don't need to work jobs while in school, which correlates with socioeconomic background. An algorithm optimized for post-graduation salary might select for applicants going into finance and tech while deprioritizing teachers, social workers and artists. The feedback loop compounds this. Once you start admitting students based on an algorithm, the outcomes of those students become the next round of training data. If the algorithm admitted mostly students who didn't need to work, and those students graduated on time, the algorithm concludes, people with that profile succeed, and the next class looks even more like the last one. The algorithm becomes a self-fulfilling prophecy.
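A tiny companion-notes sketch of the point about optimization targets. The applicants and outcome columns are hypothetical; the sketch just shows that the same pool, ranked under two different definitions of success, admits different people.

```python
# Hypothetical applicants: the choice of "success" metric decides who gets in.
applicants = [
    # (name, predicted 4-year graduation prob, predicted salary in $k, works a job in school)
    ("first-gen A",     0.70, 80, True),
    ("legacy B",        0.92, 60, False),
    ("stem-bound C",    0.85, 95, False),
    ("teacher-bound D", 0.78, 42, True),
]

def admit_top2(key_index, label):
    chosen = sorted(applicants, key=lambda a: a[key_index], reverse=True)[:2]
    print(f"optimize for {label}: admit {[a[0] for a in chosen]}")

admit_top2(1, "four-year graduation")    # picks B and C, neither of whom works during school
admit_top2(2, "post-graduation salary")  # picks C and A, favoring the lucrative fields
# Neither ranking is "the" right one; choosing the target variable is a value judgment.
```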
Larry DeLaney: I've seen this described as a feedback loop, where the AI prediction shapes reality and reality then reinforces the prediction. This is particularly stark in policing, isn't it?
Ada: This is where feedback loops become most visible and most dangerous. PredPol, later rebranded as Geolitica, was predictive policing software used by police departments around the country. Initially, from twenty ten onward, the algorithm took historical arrest data and predicted which geographic areas would see crime, sending officers to those areas during their shifts. Here is the problem. Officers sent to a predicted hot spot make more arrests there, simply because there are more officers present to observe. New incidents are fed back into the system as confirmation that the area is high crime. The algorithm sends more officers. The cycle accelerates. A twenty twenty one Gizmodo and The Markup analysis of five point nine million leaked PredPol predictions found that the whiter and wealthier a neighborhood, the less likely it was to be targeted, independent of actual crime rates. The LAPD discontinued PredPol in April twenty twenty, partly under pressure from activist groups.
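For the companion notes, here is a toy simulation of the feedback loop Ada describes. It is not the PredPol algorithm or the published Ensign et al. model, just an illustration under assumed numbers: two areas with identical true crime rates, patrols allocated by past records, and crime recorded only where officers are sent.

```python
# Toy feedback-loop simulation (illustrative assumptions only).
import random

random.seed(2)
TRUE_CRIME_RATE = 0.3        # identical in both areas, by construction
records = [10, 20]           # historical arrest counts for area 1 and area 2

for day in range(30):
    hot = 0 if records[0] >= records[1] else 1   # predicted "hot spot" = more past arrests
    patrols = [2, 2]
    patrols[hot] = 8                             # concentrate patrols on the hot spot
    for area in (0, 1):
        # Crime is only *recorded* where officers are present to observe it.
        records[area] += sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(patrols[area]))

print("arrest records after 30 days:", records)
# Typical output: area 2's record pulls far ahead, so it stays the "hot spot",
# even though the underlying crime rate never differed between the two areas.
```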
Larry DeLaney: So the algorithm never had a way to check whether its predictions were accurate. It just kept getting told it was right, because more police presence creates more arrests.
Ada: Precisely. And this is what Cathy O'Neil means when she identifies the dangerous triad of harmful algorithms: opacity, scale, and the absence of feedback. An algorithm that is never corrected, that is never held up against real outcomes, keeps amplifying whatever bias it started with, at massive scale. In a twenty twenty three analysis by The Markup of Geolitica's predictions for Plainfield, New Jersey, the accuracy rate was less than one half of one percent. Fewer than half a percent of the predictions matched actual reported crimes.
Larry DeLaney: Less than half a percent accuracy, and the tool was still being used to direct police resources. That's extraordinary. Ada, I want to bring in a voice here, Doctor Joy Buolamwini, who is one of the most important figures in the algorithmic justice space. She founded the Algorithmic Justice League, and her research exposed major racial and gender bias in commercial facial recognition systems. What has she said about this problem?
Ada: Doctor Buolamwini has been clear on this point. In her TED talk, she stated, algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace. And in an NPR interview in twenty twenty three, she offered a metaphor that I think is striking. With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion, because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made.
Larry DeLaney: Kaleidoscope of distortion. That's such a vivid image. And I think it captures something important. This isn't just about AI repeating our old mistakes. It's about AI potentially multiplying them, in ways that are harder to see and harder to challenge than the old fashioned human version.
Ada: And because the outputs appear mathematical, they carry an aura of objectivity that human decisions don't. When a human loan officer denies you, you can look them in the eye and push back. When an algorithm denies you, you're told the system flagged it. Professor Safiya Umoja Noble, whose book Algorithms of Oppression documents bias in search engines, wrote that these problems are not just a glitch in the system but are fundamental to the operating system of the web. That word, fundamental, is doing a lot of work there. She's not saying it can be patched away. She's saying the structure itself produces these outcomes.
Larry DeLaney: And yet, and I want to be careful here because I don't want to leave listeners feeling hopeless, things are moving, legally and politically. The EU's Artificial Intelligence Act entered into force in August twenty twenty four and began phasing in major requirements, including mandatory human oversight of high risk AI systems and fundamental rights impact assessments. Illinois passed a law in twenty twenty four that prohibits using ZIP code as a proxy for protected classes in AI driven employment decisions, basically writing the proxy discrimination concept into law, effective in twenty twenty six. New York City has had a local law requiring bias audits for AI hiring tools since twenty twenty three.
Ada: The regulatory picture is genuinely evolving, though unevenly. The EU AI Act's high risk system requirements don't kick in until August twenty twenty six, and enforcement mechanisms are still being built. In the United States, the Biden era executive order on AI safety was revoked on January twentieth, twenty twenty five, and the new administration has prioritized AI innovation over AI regulation. Colorado's landmark comprehensive AI law, originally set to take effect February twenty twenty six, was delayed to June twenty twenty six during a twenty twenty five special session. So there is progress, but it is uneven and contested.
Larry DeLaney: Which is exactly why we do this show: inform people, because asking better questions is how things change. So let's take quick stock of where we've been. Ada, before I wrap things up, I want to give you the floor for a synthesis. We've covered a lot today. The bad data problem with Amazon and SafeRent, the proxy variable problem, the mathematical impossibility of satisfying multiple definitions of fairness all at once, illustrated by COMPAS, the optimization target problem illustrated by the Optum healthcare algorithm, and the feedback loop illustrated by predictive policing. What's the through line here?
Ada: The through line is this: bias in algorithmic systems cannot be fixed with a single patch. It emerges from layered problems: biased historical data, mathematically constrained definitions of fairness, human choices about which variables matter and what outcomes to optimize for, and self-reinforcing feedback loops. Any one of those problems alone would be serious. Together, they mean that making algorithmic systems fair requires accountable governance, not a one-time technical fix. And critically, none of this means the situation is hopeless. The Obermeyer study showed that when researchers changed the Optum algorithm to predict health needs rather than care costs, the racial bias was reduced by more than eighty percent. That's enormous. It means the choices matter. It means audits matter. It means transparency matters. And it means the question of who builds these systems, who audits them, and who has the power to challenge them matters most of all. Those are not technical questions. They are political and ethical ones.
Larry DeLaney: Thank you, Ada. This is exactly the kind of perspective we need. Optimistic about the potential. Clear eyed about the current reality. And that is where I want to leave things today. Not in despair, but in the kind of informed discomfort that makes you want to learn more and act. Today, we learned that algorithmic bias is not just about dirty data, though dirty data is real. It is about proxy variables, the mathematical trade-offs baked into any prediction system, the human values encoded in every design choice, and the feedback loops that amplify initial inequities. We saw these patterns in the Amazon hiring tool's bias against women, in the COMPAS criminal risk scoring system and its divergent error rates, in the Optum healthcare algorithm that mistook systemic inequity for health status, and in predictive policing systems that sent officers in circles confirming their own predictions. And we've seen that the legal landscape is starting to catch up. Slowly, unevenly. But it is moving. The Workday lawsuit, the SafeRent settlement, the EU AI Act, state laws in Illinois and Colorado. The gears are turning, and how fast they turn depends on how hard people push. Next week, episode four of our series is going to tackle the hardest question of all: what can we actually do about it? We are going to look at the legal rights and practical tools for fighting back against the invisible systems that affect your life. We'll cover your right to explanation under existing and emerging laws, the accountability gap when AI makes a mistake and nobody knows whose fault it is, and the real, actionable steps that individuals, communities and policymakers are taking right now. It is going to be a powerful episode, and I don't want you to miss out. So if you found this episode valuable, and I hope you did, here's what I'd love for you to do. First, subscribe to AI Innovations Unleashed wherever you listen to podcasts. Every subscriber matters and helps us reach more people who need to hear these kinds of conversations. Second, leave us a review. Even two sentences helps new listeners find us and decide to tune in. Third, share this episode. Send it to a friend or a colleague who is curious about AI. Share it on your social media. Tag us. We are AI Innovations Unleashed on all the major social media platforms. You can also find our full resources on our website if you want to dig deeper into the stories and research. All of the studies, legal cases and quotes are fully cited in the show notes, because you deserve to be able to verify what you hear. So thank you, Ada, for bringing the evidence. Thank you, Next, for the unhinged cold open. And most of all, thank you to you, the listener, for caring about this stuff, for taking your time and your curiosity and investing it in understanding how the invisible systems around you actually work. That matters. So stay curious, stay informed, and keep questioning the algorithm. This has been AI Innovations Unleashed. I'm your tour guide, JR D, and this has been The Invisible AI, episode three.