Heliox: Where Evidence Meets Empathy 🇨🇦
We make rigorous science accessible, accurate, and unforgettable.
Produced by Michelle Bruecker and Scott Bleackley, it features reviews of emerging research and ideas from leading thinkers, curated under our creative direction with AI assistance for voice, imagery, and composition. Synthetic voices and illustrative images of people are representative tools, not depictions of specific individuals.
We dive deep into peer-reviewed research, pre-prints, and major scientific worksβthen bring them to life through the stories of the researchers themselves. Complex ideas become clear. Obscure discoveries become conversation starters. And you walk away understanding not just what scientists discovered, but why it matters and how they got there.
Independent, moderated, timely, deep, gentle, clinical, global, and community conversations about things that matter. Breathe Easy, we go deep and lightly surface the big ideas.
Heliox: Where Evidence Meets Empathy 🇨🇦
βοΈ When the Math Decides: Algorithms, Liberty, and the Fight to Stay Human
Read: Available for Broadcast, Apple, Spotify, YouTube and much more.
Your childcare just got cancelled by math. No human saw your file. Welcome to the episode that explains why β and what's fighting back.
A deep dive into artificial intelligence, human rights law, and the invisible architecture shaping your daily life.
What the SyRI ruling gave us, beyond justice for the people it harmed, was a mirror. Held up to every government and corporation in the world, it showed us what happens when we hand moral authority to an algorithm without legal constraint.
For years, the tech industry managed this mirror by placing a fig leaf in front of it called ethics. And the ethics documents were beautiful β glossy, thoughtful, full of sincere-sounding language about dignity and fairness. They just weren't enforceable.
HANDBOOK ON HUMAN RIGHTS AND ARTIFICIAL INTELLIGENCE
#AIandHumanRights #AlgorithmicJustice #SurveillanceState #DigitalRights
This is Heliox: Where Evidence Meets Empathy
Disclosure: This podcast uses AI-generated synthetic voices for a material portion of the audio content, in line with Apple Podcasts guidelines.
Spoken word, short and sweet, with rhythm and a catchy beat.
http://tinyurl.com/stonefolksongs
Imagine waking up on a Tuesday morning. You go to your mailbox, expecting the usual clutter of bills and flyers. Right, just a totally normal day. Exactly. But instead, there's this formal letter from your local government. So you open it, and the text is just stark. Oh no! Yeah, it says your childcare subsidy, you know, the money you rely on to keep your kids in daycare while you work, has been suspended. Wow. And not only that, but you are now under official investigation for welfare fraud. That is terrifying. Right. So you sit down at your kitchen table just completely bewildered. I mean, you review your tax returns, your pay stubs, all your application forms. Looking for the mistake. Yeah. But you did everything right. There's no missing income. There's no, like, hidden bank account anywhere. But what you don't know, and what that letter definitely doesn't explain, is that your actual paperwork didn't trigger this. Right. The trigger was entirely invisible. Silently, just in the background of some municipal server somewhere, a machine flagged you. And the truly terrifying part is the criteria that the machine actually used. Yeah, because you weren't flagged because of an action you took. You were flagged based entirely on, like... the neighborhood you live in. Combined with maybe a late utility bill from three years ago? Exactly. Or the fact that your roommate once had a dispute over a parking ticket. Right. So the algorithm just clusters those totally disparate data points, runs them through a predictive model, and basically decides your profile mathematically matches the behavioral patterns of a fraudster. And perhaps the most chilling aspect of this entire scenario is the human element or, well, the complete lack of it. Exactly. Because no human being ever looked at your file before that letter was sent. Never. A black box algorithm just ingested your digital footprint, assigned you a risk score, and totally upended your life. It sounds like a movie. It really does. Now, if this sounds to you like a paranoid plot from some dystopian science fiction novel, I have some sobering news. This isn't fiction. No, it really isn't. This exact scenario played out in the real world. Real people lost their livelihoods, their peace of mind, and their dignity just because a machine categorized them as a threat. It is the ultimate nightmare of the modern administrative state. How do these two forces, the cold, optimizing logic of AI and the foundational rights of human beings, how do they coexist? It's the central tension of our era, really. I mean, on one side, you have this relentless push for technological efficiency and profit. And on the other, you have these fragile legal frameworks that we've designed over centuries just to protect individual dignity. So we are going to map out this new algorithmic frontier today. We have a massive stack of landmark sources to get through. We really do. We're looking at the sweeping new Council of Europe Framework Convention on AI. Also the highly anticipated European Union AI Act. Our mission today is to trace the journey from vague, feel-good tech ethics to hard, enforceable global law. By the end of this deep dive, you'll understand exactly how these invisible frameworks operate, and more importantly, what this means for your privacy, your autonomy, and your daily life. Because every single time you apply for a job, cross a border, or visit a doctor, this invisible architecture is assessing you.
Let's start by unpacking that inciting incident I mentioned earlier. Let's look at this very real digital welfare dystopia. Yes, let's go there. Because to understand why global lawmakers are suddenly rushing to regulate AI, we have to look at the collateral damage that forced their hands. We have to look at the Netherlands in the year 2020. Right. And a system known as SyRI, S-Y-R-I, the System Risk Indication case. If there is a patient zero for why human rights law most aggressively intersects with artificial intelligence, it is absolutely this case. SyRI was, at its core, a digital welfare fraud detection algorithm deployed by the Dutch government. But calling it a fraud detection tool wildly understates the scope of what it was actually doing in practice. So let's get into the mechanics of it. How did this thing actually function? Well, think about the sheer volume of data a modern government holds on you. I mean, they know where you live, who you live with, where you work. How much you pay in taxes, your debts, what subsidies you receive. Exactly. And historically, this data lived in separate silos. The tax department had their files. The housing authority had theirs. They didn't really talk. Right, which was kind of a natural privacy barrier. Right. But SyRI was designed to explicitly break down those walls. It was an automated pipeline that just vacuumed up vast amounts of disparate citizen data from multiple government agencies and pooled it all together. So they basically created a massive centralized data lake of human lives. Precisely. The system then searched that pool for patterns that statistically resembled known fraud cases, and once it found those patterns, it applied them to the general population. It was a predictive risk scoring machine. So it wasn't looking for people who had already committed a crime. No, not at all. It was scoring people on their statistical likelihood of committing one in the future. And it was deployed in a highly specific way, wasn't it? They weren't just running this on everyone in the country. No. And this is where the human rights violation becomes truly egregious. The government deployed SyRI in specific targeted geographical areas. Let me guess, the poorest neighborhoods. Yes, predominantly low-income neighborhoods. Civil rights groups took the Dutch government to court. And in 2020, the district court of The Hague actually stepped in and halted the entire system. The Hague's ruling was just a watershed moment in global jurisprudence. I mean, it wasn't just a minor administrative reprimand. They really dropped the hammer. They did. The court looked at this automated risk scoring system and unequivocally ruled that it violated international human rights law. And the case exposed something deeper. When a welfare state is automated this way, the core philosophy of the safety net completely changes. The system becomes entirely obsessed with fraud elimination. Budget cutting, market driven efficiency. Exactly. The algorithm doesn't see a struggling single mother. It just sees a mathematical risk factor threatening the municipal budget. Now, I have to stop here and play devil's advocate for a second. OK, go for it. Because I can hear a certain segment of the audience asking a very logical question. Wait a minute. Isn't finding fraud a good thing? Sure. If people are stealing taxpayer money, the government should try to stop it. Right. And using computers to crunch numbers is just efficient. So how is SyRI fundamentally different from, say, my credit score? That's the classic comparison. Right. Because a private bank looks at my financial history, they run it through an algorithm, and they give me a three-digit number that predicts if I'm going to default on a mortgage.
We accept credit scores as a normal part of life. Why is a government risk score suddenly a human rights dystopia? It's a crucial pushback. And grappling with that exact comparison is honestly what took regulators years to figure out. So what's the difference? When you look at the mechanics, the difference between a credit score and a system like SyRI is the difference between a regulated tool and an arbitrary weapon. Okay, break that down for me. First, consider transparency. A credit score, while complex, is essentially transparent in its inputs. You know exactly what builds or destroys your credit. Right. It's payment history, credit utilization, length of credit history. I can pull my credit report any time. Exactly. And if there is a mistake, if they think you missed a payment but you didn't, you have a clear legal avenue to contest it and force a correction. I have agency in that system. You do. Now look at SyRI. It was entirely covert. Citizens didn't know they were being scored. They didn't even know what data the algorithm was pulling. Right. And because it was a proprietary predictive model, they couldn't contest the logic. How do you argue against a machine that says people who live on your street and pay their water bills three days late have a 78 percent higher chance of committing tax fraud? You just can't. There's no mechanism to prove your own future innocence against a statistical probability. The second and perhaps more profound difference lies in international human rights law itself. In law, any state interference with a fundamental right, like your right to privacy, has to pass what is known as the test of necessity and proportionality. Let's define that for a moment, because proportionality means the government's action has to be the least intrusive way to achieve a legitimate goal. Right. You don't use a sledgehammer to kill a fly. Exactly. So preventing fraud is a legitimate state aim, but did SyRI pass that test? It failed spectacularly. It is the absolute definition of a sledgehammer. It treats entire demographics as inherent suspects just based on their socioeconomic status. Exactly. So bringing this back to you, the listener, what the SyRI case really revealed is that the traditional ways we interact with authority are changing fast. Very fast. If a government algorithm can categorize you as a criminal risk without a human trial, without probable cause and without you even knowing it, the baseline protections of democracy just... start to crumble. SyRI wasn't a warning about the future. It was proof of present real-world harm. And it triggered a massive realization across the globe. The realization that we could no longer rely on good intentions. I mean, for years, the tech industry had been operating under this assumption that we could manage the risks of AI through voluntary ethics. Yeah, tech ethics. But SyRI proved that ethics are just a flimsy shield against algorithmic harm. We needed a fundamental shift. And that brings us to the global regulatory response. We are officially moving away from the soft world of tech ethics and crashing into the hard, enforceable world of human rights law. There is a brilliant analysis in our sources by Alison Berthet, writing for the OpenGlobalRights platform, that perfectly captures this transition. She points out how incredibly fond technology companies are of the word ethics. Oh, they love it. I mean, if you look at any major tech firm in Silicon Valley or globally, really, they all have these beautifully designed AI ethics charters. Right.
They hire ethicists, they form internal review boards, they publish these sweeping statements about their commitment to AI for good. But Berthet argues that this obsession with ethics is actually a calculated distraction. It is, because ethics, in a corporate context, are entirely voluntary and highly subjective. Right. There is no universal, legally binding definition of what constitutes an ethical algorithm. But the moment those ethical guidelines delay a product launch... Or threaten a billion-dollar revenue stream... The internal ethics board is often just sidelined or completely disbanded. There's zero enforcement mechanism. They're essentially writing their own rules, grading their own homework, and just giving themselves an A plus. Which is why the first real answer had to come from hard law: the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. It's a mouthful of a title, but its significance just cannot be overstated. The Council of Europe includes 46 member states, and this convention is the very first legally binding international treaty designed specifically to ensure that the life cycle of AI systems aligns with human rights. It essentially establishes the floor. It sets out seven core principles that signatories have to build into their domestic law. Right. And those principles are human dignity, individual autonomy, transparency and oversight, accountability, non-discrimination, privacy and reliability. Now, on the surface, those principles sound universally agreeable. I mean, nobody is going to stand up and argue against human dignity. Nobody. But when you look at the history of how this treaty was drafted, the negotiations were an absolute battlefield. Let's dig into that friction. Where did the countries actually disagree? The most intense legal tension revolved around scope. Specifically, who should this treaty apply to? Because historically, international human rights treaties are designed to restrain state actors, not private companies. But today, a single social media algorithm run by a tech giant can impact the democratic process or the mental health of millions far more rapidly than a traditional government policy. So during the drafting of the convention, this massive fight broke out. Mm-hmm. Some nations argued the treaty must strictly regulate private tech companies. And the other side. Other nations, terrified of stifling innovation and losing the global tech race, argued the treaty should only apply to public authorities. Basically protecting the tech giants. So where did they land? Did they find a compromise? They engineered a very delicate legal compromise. The treaty absolutely obligates states to apply these principles to public authorities. So no more SyRI-style systems. That's a hard rule. Good. But what about the private companies? Regarding private companies, the treaty says states must take measures to ensure private actors respect these rights, which gives individual countries significant flexibility in how they actually enforce that. Ah, so it leaves the door open for interpretation. A lot of interpretation. But while the Council of Europe was building this broad philosophical framework... The European Union itself was taking a completely different approach. Oh, completely different. Yeah. They weren't just writing principles. They were writing a technical manual. Enter the landmark EU AI Act. The EU AI Act is arguably the most complex piece of technology legislation drafted since the General Data Protection Regulation, the GDPR. It is just exhaustive.
The way I understand the EU AI Act is to look at it not as a human rights document, but as a product safety law. That's a great way to look at it. Think of it like a nutritional food pyramid or the warning labels on a prescription drug. At the lower tiers of that pyramid, the primary legal obligation is just transparency. Meaning the AI cannot pretend to be human. Correct. If you are interacting with a chatbot, the system must clearly inform you that you are speaking to a machine. If you are looking at a deepfake image, it must be watermarked or labeled as synthetically generated. It's about protecting human cognitive autonomy. You have the fundamental right to know what is real. Exactly. But the real teeth of the AI Act, you know, the parts causing tech executives to lose sleep right now are the top two tiers. Yes. Let's look at the absolute peak of the pyramid. Unacceptable risk. This is the prohibited list. These are AI applications that the European Union has decided are so fundamentally toxic to human rights that they're explicitly banned from being deployed anywhere in the EU. And looking at this prohibited list is fascinating because it reads like a direct response to the specific abuses we've seen over the last decade. It's basically a history of what went wrong. Right. So first, the act bans any AI that deploys subliminal, manipulative, or deceptive techniques. Think of a platform that has logged your biometric reactions, your scrolling rhythms, and your historical viewing habits. Okay, so it knows a lot about my physical state. It knows exactly what microstimuli trigger anxiety or compulsion in your specific brain. If an algorithm uses that data to feed you highly targeted rapid-fire content designed to subconsciously push you into a destructive financial decision or self-harm, that is a subliminal distortion of your autonomy. And the EU says absolutely not. Banned. Banned. They also ban AI systems that exploit the vulnerabilities of specific groups. So an algorithm designed to identify people suffering from dementia and specifically target them with deceptive telemarketing. That would be totally illegal. Rightfully so. Another major prohibition on this list is social scoring. This is a big one. It really is. The act bans the evaluation or classification of natural persons based on their social behavior or known or predicted personal traits if that score leads to unjustified or detrimental treatment. Oh, so this directly outlaws the exact mechanism used in the SyRI case. Yes, directly. You cannot arrest, investigate, or harass someone just because a statistical model says their profile looks like a criminal. Here is one that really shocked me when I read the text of the act. The ban on emotion recognition in the workplace and in educational institutions. This is a profound protection of cognitive privacy. Let's talk about how this actually works, because how does a camera know what you are feeling? Well, companies have developed AI systems that use high-resolution cameras to track your facial microexpressions. Like really tiny movements. Yeah. They track pupillary dilation, the slight contraction of your zygomatic muscles, your blink rate, your vocal inflections. Wow. And then the algorithm takes these physical biometrics and assigns a statistical probability to your internal mental state. It categorizes you as engaged, frustrated, bored, or angry.
The idea that an employer could point an AI camera at your cubicle, monitor the microscopic movements of your face for eight hours a day, and use an algorithm to report to your boss that you were 14% less engaged on a Tuesday afternoon, it is fundamentally dehumanizing. It is. And the science behind it is highly contested anyway. Expressions of emotion vary wildly across cultures and neurodivergent populations. Right. Not everyone shows frustration the same way. Exactly. The EU recognized that using this pseudoscience to make employment or educational decisions is just an unacceptable violation of privacy and dignity. So it's banned. It is banned, except for very strict medical or safety reasons, like, say, a camera monitoring a long-haul truck driver specifically to detect if they are falling asleep at the wheel. That's a safety exception. Okay, that makes sense. Finally, the act prohibits biometric categorization systems that infer sensitive attributes. You basically cannot use AI to scan a crowd and categorize people based on race, political opinions, trade union membership, religious beliefs, or sexual orientation. Which brings us to the most fiercely debated, incredibly nuanced section of the entire prohibited list. Here we go. The use of real-time, remote biometric identification in public spaces. Or in plain English, live facial recognition cameras operated by the police. This was the absolute lightning rod of the AI Act negotiation. The default is a ban, with only a few narrow exceptions. Police can use it for the targeted search for victims of abduction or human trafficking, or to prevent a specific and imminent threat to life, such as a foreseeable terrorist attack. That makes sense. And lastly, they can use it to identify or locate a suspect who has committed a serious crime. But the act explicitly lists what constitutes a serious crime. We are talking about murder, rape, armed robbery, or trafficking weapons. Right. They cannot use live facial recognition to track down someone who shoplifted a candy bar or dodged a subway fare. Exactly. And even if a situation meets one of those extreme criteria, the police can't just flip a switch in the precinct and activate the city's cameras. No. Even in those exceptional cases, the procedural burden is incredibly high. Law enforcement must conduct a prior fundamental rights impact assessment. One tier down from the prohibited list sits the high-risk category, covering things like hiring, credit, and medical tools, with strict documentation and human oversight duties. So we know the rules for a resume sorting algorithm, but there is a massive elephant in the room here. Yes, there is. What happens when the AI isn't built for just one specific task? You are talking about the massive paradigm shift in the technology itself. Exactly. I mean, in the last few years, the tech industry has completely pivoted toward massive foundational models, systems that can write poetry, generate photorealistic images, write functioning software code, and pass the bar exam, all at the same time. The technology that is completely reshaping the global economy as we speak. Right. So the legal terminology for this is general purpose AI, or GPAI. How on earth does a law designed around specific use cases handle a machine that can do almost anything? It was a massive headache for the drafters of the EU AI Act because these models didn't really exist in their current form when the law was first proposed. They had to pivot mid-draft. They did. The act defines general purpose AI as models trained on massive amounts of data, using self-supervision at scale, capable of performing a wide range of distinct tasks, and that can be integrated into an infinite variety of downstream systems. Think of the underlying engines that power the most famous chatbots today. Exactly. But if they can be used for anything, how do you regulate them?
The EU decided they couldn't regulate them based on their application. So instead, they decided to regulate them based on their raw power. They introduced this fascinating, highly technical metric to determine if a GPAI model poses a systemic risk to society. This is one of the most interesting parts of the act to me. Explain this threshold. The act states that a general purpose AI model is presumed to present a systemic risk if the cumulative amount of computational power used to train it exceeds 10 to the power of 25 floating point operations. And stop and translate that, because floating point operations, or FLOPs, is heavy computer science jargon. What exactly is a FLOP? A floating point operation is basically a single calculation involving decimal numbers. It is the fundamental unit of math that a computer processor performs. Okay. When an AI is learning, when it is training its neural network, it has to perform trillions upon trillions of these tiny calculations to adjust the weights and biases of its internal logic. So 10 to the 25th power FLOPs. That is a 1 with 25 zeros behind it. It is an unfathomable amount of math. To give some context to the physical reality of that number, achieving that level of compute requires warehouses full of highly advanced microchips, running at full capacity for months, consuming enough electricity to power a small city, and requiring massive industrial cooling systems just to keep the servers from literally melting. Exactly. It's an infrastructure project on the scale of building a dam or a power plant. The EU's logic is brilliant in its pragmatism. They're basically saying, we don't know exactly what your software is going to do. But if you possess the financial and physical resources to build a computational engine that massive, the engine inherently possesses the potential to cause systemic damage across society. It's like regulating a vehicle not by asking the driver where they plan to go, but simply by measuring the horsepower of the engine. Right. If you build an engine with 50,000 horsepower, you are subject to federal oversight, whether you plan to drive it to the grocery store or race it in the desert. That is the perfect analogy. If your model crosses the 10 to the 25th FLOP threshold, you are no longer viewed by the law as just a scrappy software developer. You are operating a system of systemic societal importance. So what happens when a tech giant crosses that line? What are their new legal obligations? Well, all providers of general purpose AI have basic transparency rules, right? Yes, they have to maintain technical documentation. And crucially, they must publish a sufficiently detailed summary of the data they used to train the model so that authors and artists can exercise their rights under copyright law. But if you cross that systemic risk threshold, the requirements escalate dramatically. They enter a whole new regulatory regime. Yes. Providers of systemic models must perform rigorous model evaluations. This includes mandatory adversarial testing, commonly known in the industry as red teaming. Red teaming is such a fascinating concept to me. It's essentially paying brilliant engineers to attack your own product. Exactly. Before the model is released to the public, you have to actively try to break it. You have to try to trick the AI into generating instructions for a biological weapon, or outputting malicious code, or generating discriminatory propaganda. You must find the vulnerabilities and patch them yourself.
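To make the scale of that threshold a little more concrete, here is a minimal sketch in Python. It leans on the common rule-of-thumb estimate that training a large transformer costs roughly six FLOPs per parameter per training token; the parameter and token counts below are invented purely for illustration and are not figures from the Act, from the episode's sources, or from any real model.

# Minimal sketch: checking the systemic-risk compute presumption.
# The "6 * parameters * training tokens" figure is a widely used rule of thumb
# for transformer training compute; all example numbers below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # compute level at which systemic risk is presumed

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: about 6 FLOPs per parameter per token seen."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples, purely for scale:
for name, params, tokens in [
    ("mid-size model (7B parameters, 2T tokens)", 7e9, 2e12),
    ("frontier-scale run (1.5T parameters, 15T tokens)", 1.5e12, 15e12),
]:
    flops = estimate_training_flops(params, tokens)
    flagged = presumed_systemic_risk(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs, systemic risk presumed: {flagged}")

With those made-up numbers, the first model lands around 8 x 10^22 FLOPs and stays well under the line, while the second clears 10^25 and would be presumed systemically risky. The point of the sketch is the design choice itself: the trigger is raw training compute, not what anyone says the model will be used for.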
Furthermore, these providers must track and report any serious incidents to the newly formed European AI Office without undue delay. And they must ensure an adequate level of cybersecurity protection for the model itself. On that note about cybersecurity, one of our sources, an analysis from the International Association of Privacy Professionals, highlights a critical shift in thinking. The article discusses rethinking cybersecurity and AI governance, because it points out that securing a massive, systemic AI model isn't just about traditional IT security. It's not just about preventing a hacker from stealing user passwords. No, it's far more complex than that. It's about ensuring the integrity of the model's weights and biases. If a bad actor gains access to a 10 to the 25th FLOP model, they could subtly alter its logic. They could weaponize it to autonomously launch cyber attacks on critical infrastructure at a speed and scale that human security teams couldn't possibly counter. The model itself becomes a vector for systemic vulnerability. Okay, we've covered a massive amount of ground here. We've mapped out the grand legal frameworks, the EU AI Act, the compute thresholds, the prohibited lists. We have. But laws are written on paper by politicians in Geneva and Brussels. Artificial intelligence is actually built, deployed, and sold by private corporations sitting in boardrooms in Silicon Valley, London, and Shenzhen. That's the real disconnect. Exactly. So how do we bridge that gap? How do we ensure that these massive profit-driven businesses actually operationalize these human rights standards internally? That is the multi-billion dollar question. And to find the answer, we turn to some incredible deep dive academic research from Isabel Ebert at the Harvard Kennedy School. Her work focuses exactly on this intersection, how to force corporate compliance to align with international human rights. Ebert's journey in researching this is a story in itself. She didn't just sit in a university library reading legal theory. She went out into the trenches. Her methodology, which is detailed in the paper, is staggering. She conducted 25 extensive virtual interviews with global policymakers, tech executives, and human rights representatives. It's a great story. She organized in-depth focus groups in Oxford. She held massive multi-stakeholder consultations in Geneva. And then she traveled to run closed-door business roundtables with executives in Brussels and San Francisco. What she was desperately searching for was the reality on the ground. She wanted to know how the people writing the laws and the people writing the code actually interacted when it came to human rights. And what she discovered through all these interviews was a critical systemic problem. The landscape was a mess. It was incredibly fragmented. The experts she interviewed widely agreed that existing tech regulations were completely inconsistent. Like one jurisdiction might demand strict algorithmic audits while a neighboring country required absolutely nothing. And this lack of consistency is dangerous. It dilutes corporate responsibility and creates a highly uneven playing field. If you are a multinational tech company and the rules change every time your data crosses a digital border, it creates massive compliance confusion. Or, honestly worse than confusion, it creates an opportunity for exploitation. Companies will naturally gravitate toward the jurisdiction with the weakest laws. It's a race to the bottom for human rights.
Exactly. Furthermore, Ebert noted that policymakers themselves were struggling. They often lacked the time, the budget, and the highly specialized technical resources required to truly grasp the deep complexities of how machine learning impacts human rights. So to solve this massive disconnect, Ebert and her team developed a practical framework for policymakers. They called it the UNGPs Compass. The UNGPs Compass. It is a strategic tool designed to explicitly align domestic tech regulation with the United Nations Guiding Principles on Business and Human Rights, the UNGPs. These principles are widely considered the gold standard for corporate responsibility. Let's walk through the four steps of Ebert's compass. If I'm a regulator trying to rein in a tech giant, how do I actually use this? Step one is scoping and identification of the objective. Before you write a law, you need to clearly define the specific human rights problem the technology is causing. Are we dealing with privacy violations? Labor rights? Discrimination? You also have to identify the stakeholders and map out the exact business model of the tech company, right? Right. How are they making their money and how does that revenue stream incentivize human rights abuses? Okay, so step two is the regulatory gap analysis. This is where regulators look at their existing domestic laws like consumer protection, data privacy, anti-discrimination, and ask what is missing? Where are the specific AI risks slipping through the cracks of 20th century legislation? Step three is crucial. It's the identification of avenues for regulation. Ebert argues that policymakers need to find a smart mix of responsive regulations. You can't just rely on traditional command and control penalties like slapping a fine on a company after the damage is already done. Right, because by then, lives are ruined. The regulation might need to include mandated transparency reports, continuous iterative consultations between the company and civil society, and other responsive tools.
But corporations exist for one fundamental reason: to generate profit for their shareholders. That is their fiduciary mandate. That's true. So if a piece of AI software is generating billions of dollars in revenue, and adopting this human rights framework requires them to slow down, alter their code, and limit their market reach, why would a board of directors ever voluntarily agree to it? Without the threat of severe existential financial penalties, isn't this UNGPs framework just asking massive corporations to politely regulate themselves? That is a very cynical and highly accurate critique of voluntary corporate ethics. But this is exactly where the UN Guiding Principles introduce a legal mechanism that completely changes the dynamic. The goal of integrating the UNGPs into hard law is to make a concept called human rights due diligence, or HRDD, legally mandatory. Mandatory human rights due diligence. Break down how that actually changes a company's behavior in practice. Exactly. Because the liability shifts. Right. If a tech company deploys a massive AI system, and they fail to conduct rigorous documented due diligence, and that AI subsequently causes harm, say it systematically denies loans to minorities, the company faces immense legal liability. They can be sued. They can face massive regulatory fines that impact their global revenue, and their executives can be held personally accountable. Mandatory HRDD forces the board of directors to align the financial risk of a massive lawsuit with the risk of human rights abuse. Yeah. It builds the cost of human rights violations directly into the business model. Okay, that makes sense. If you make ignoring human rights more expensive than respecting them, the math of corporate greed actually starts to work in favor of the public. That's the goal. So we've looked at the grand frameworks, the EU AI Act, the massive compute thresholds, the corporate compliance models. But let's bring this down to earth. Let's look at how this invisible architecture actually impacts you, the listener, across the different sectors of society. We are drawing here from the deeply detailed Council of Europe handbook on AI and various reports from the Office of the High Commissioner for Human Rights.
Let's start with a sector where the stakes are literal life and death: healthcare. Healthcare perfectly encapsulates the dual-use nature of AI. On one hand, the benefits are practically miraculous. AI models can process complex medical imaging like MRIs or CT scans and detect early-stage tumors far more rapidly and accurately than a human radiologist. It can synthesize millions of medical journal articles to suggest personalized treatment plans. But the integration of these models introduces massive, life-altering risks. The sources detail some truly disturbing examples of what happens when the math goes wrong. For instance, there are documented cases where AI algorithms used in hospitals have systematically blocked kidney transplants for black patients. Yes. The algorithms were relying on outdated, racially biased metrics embedded deep within their historical training data. We see the same systemic bias regarding ageism. The World Health Organization released reports noting that AI diagnostic systems often perform poorly on older patients. Why? Because the historical data sets used to train the models heavily favored younger, healthier populations. If the machine's entire universe of data suggests that a healthy body is a 30-year-old body, it is statistically prone to misdiagnosing the natural aging process of a 75-year-old as an anomaly, or just failing to recognize geriatric-specific symptoms. But beyond the technical issue of biased data, there is a fundamental philosophical and legal crisis occurring in doctors' offices right now regarding the concept of consent. Yes. This strikes at the heart of Article 8 of the European Convention on Human Rights, which guarantees the right to a private life, encompassing personal autonomy and the absolute necessity of informed medical consent. Let me try to wrap my head around this. Imagine you are a patient facing a life-threatening illness. Your doctor sits down and says, based on your charts, the hospital's new AI diagnostic system recommends we proceed with this highly aggressive surgery. Naturally, you ask why. Why that surgery and not the medication? Right. Now, if that AI is a deep learning neural network, it is essentially a black box of probabilistic math. The doctor might honestly have to look at you and say, "I don't know exactly why the machine chose this route, it synthesized three million variables, and I can't explain its causal logic." How can consent be truly informed if nobody can explain the recommendation? Now carry that same problem into the classroom. AI is entering the classroom at lightning speed, often marketed as a way to personalize learning. It's being used to track student attendance, monitor behavioral engagement, and analyze language processing to predict academic success. The human rights risks here involve children's cognitive autonomy, privacy, and the right to an equal education. AI systems that categorize students early in their lives based on predictive analytics can permanently reinforce structural inequalities. Think about the mechanism of this. Imagine an AI system monitors a seven-year-old's keystrokes, their focus time on a screen, and their behavioral microdata, and then quietly sorts that child into a lower academic track before they have even had a chance to grow. Moving on to the workplace. The stakes shift here from cognitive development to economic survival. The sources highlight the increasing deployment of smart glasses in warehouses to monitor worker efficiency and AI screening tools used by HR departments to filter resumes and conduct automated video interviews. It's exactly why the EU AI Act classifies AI systems used for recruitment, promotion, and termination as high risk.
The threat to a worker's right to privacy and non-discrimination is immense. If an AI is scanning your resume, it might penalize you for the length of a gap in your employment, or subtly discriminate based on the zip code of your high school. If you are wearing smart glasses that track your eye movement to measure your productivity rate, your employer is using an invisible algorithm to make critical economic decisions about your livelihood based on parameters you cannot see and cannot negotiate with. Exactly. The asymmetry of power between the worker and the algorithm is absolute. Now, let's examine the macro level. Democratic processes and borders. This is where AI intersects with raw state power. The sources highlight the profound threat AI poses to democratic elections, specifically the proliferation of hyper-realistic deepfakes and the algorithmic manipulation of voters through hyper-targeted social media feeds. The new regulations are attempting to mandate that massive platforms maintain complete tamper-proof audit logs to trace exactly how and why specific political information is disseminated to specific users. But there is an even more politically sensitive topic detailed in the reports regarding the intersection of law enforcement, automated security, and civil protests. Yes, and we want to be very clear that we are impartially reporting the findings and concerns raised by the UN Special Rapporteur and the Office of the High Commissioner for Human Rights here. Right, because there is a massive ongoing debate between national security agencies, who argue they need advanced tools to maintain public order, and civil society organizations, who warn of authoritarian overreach. The UN reports detailed deep concerns about the use of AI-based surveillance, specifically the deployment of facial recognition cameras and biometric tracking on the organizers and participants of peaceful assemblies and protests. The OHCHR has expressed such alarm over this that they have officially called for a moratorium on the use of facial recognition in the context of peaceful protest. Their argument is that even if the police don't arrest anyone, the mere presence of an AI system quietly logging the biometric identity of every single citizen attending a political rally creates a massive chilling effect on the freedom of assembly and freedom of expression. People will simply stop protesting if they know their face is being permanently fed into a state database. And there are similar highly debated concerns reported regarding border enforcement. Yes. The UN and various human rights bodies suggest that the deployment of digital technologies in border management is inadvertently creating racially discriminatory feedback loops. They point to complex automated screening systems, like the European Travel Information and Authorization System, or ETIAS. ETIAS uses AI-driven predictive models to screen travelers before they reach the border, assigning risk profiles to determine who gets a visa. The concern raised by human rights advocates is that these predictive models are trained on historical border enforcement data, which is heavily skewed by decades of existing geopolitical and xenophobic biases. If the AI learns from a history of discrimination, it will mathematically reproduce and amplify that discrimination at scale, hiding it behind a veneer of objective technology.
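To see how that feedback loop can emerge mechanically, here is a minimal, self-contained sketch in Python. The data is entirely synthetic and invented for this illustration; it is not drawn from SyRI, ETIAS, or any of the episode's sources. It simply shows that a model trained on skewed enforcement records will assign higher risk scores to the historically targeted group even when the underlying rate of wrongdoing is identical.

# Minimal sketch: label bias in "risk" models trained on enforcement data.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two neighbourhoods with an identical true rate of wrongdoing (3%).
neighbourhood = rng.integers(0, 2, n)      # 1 = historically targeted for enforcement
true_fraud = rng.random(n) < 0.03

# Historical labels reflect who was investigated, not ground truth:
# the targeted neighbourhood was checked far more often.
investigated = rng.random(n) < np.where(neighbourhood == 1, 0.80, 0.10)
label = true_fraud & investigated

# Train a simple classifier on the biased labels (second feature is pure noise).
X = np.column_stack([neighbourhood, rng.normal(size=n)])
model = LogisticRegression().fit(X, label)

for group in (0, 1):
    scores = model.predict_proba(X[neighbourhood == group])[:, 1]
    print(f"neighbourhood {group}: mean predicted risk = {scores.mean():.4f}")
# The targeted neighbourhood ends up with a much higher average "risk" score,
# even though both groups misbehave at exactly the same underlying rate.

Nothing in that sketch hard-codes discrimination; the skew comes entirely from who was investigated in the past, which is exactly how historical bias gets laundered into seemingly objective mathematics.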
Again, we are simply relaying the profound concerns published by international human rights bodies regarding how automated security systems can codify bias. It is perhaps the most heavily contested battleground on the algorithmic frontier. So across all these sectors we've discussed, healthcare, education, work, the justice system, and our borders, when a human right is violated by an AI, we inevitably hit a massive brick wall: the black box. The Council of Europe handbook breaks down three critical technical concepts that are constantly and wrongly used interchangeably by politicians and the media. Transparency, explainability, and interpretability. Understanding the distinct difference between these three words is the key to surviving the AI revolution. Let's use an analogy to separate them. Imagine you are an inspector at a high-end bakery, and you are trying to figure out why a massive wedding cake collapsed. I like where this is going. Transparency is the ingredients list. Transparency means you know exactly who the baker is, where they source their flour, and what type of oven they bought. Explainability is the recipe: you broadly understand the steps the baker followed, even if you didn't watch them bake. In AI, explainability means we can translate the complex mathematical architecture of the system, the number of parameters, the general function of the neural network layers, into terms a human can broadly comprehend. We understand the general physics of the model. But even explainability isn't enough if you've been personally harmed by a specific decision.
You need the ultimate standard: interpretability. Interpretability is knowing exactly what happened inside the oven at minute 42. It is understanding that the baker accidentally swapped a teaspoon of salt for a cup of sugar, fundamentally altering the chemical structure of that specific tier of the cake, causing
the collapse. Interpretability is the direct cause and effect. In AI, it answers the question: why did input A lead specifically to output B? Why did the algorithm weigh my specific zip code against my specific age to output a decision to deny my specific loan application? And here is the terrifying truth about modern artificial intelligence. That level of true interpretability is incredibly difficult, and in some of the most advanced deep learning neural networks, practically impossible to achieve. Because the models have billions, sometimes trillions, of parameters adjusting weights and biases across hidden layers of math in ways that even the engineers who designed them cannot fully trace, which creates a massive, unprecedented legal problem known as the algorithmic injury. We have to unpack this because the sources point out a crucial, radical legal evolution happening right now to address this exact impossibility. Historically, for hundreds of years of common law, if you were harmed, say, a defective lawnmower injured you, the burden of proof was entirely on you, the plaintiff. You had to go to court and prove exactly how the manufacturer's negligence caused your specific injury. But how can we apply that standard to AI? How can an average citizen prove the internal logic of a proprietary trillion parameter black box algorithm? They can't. You cannot ask a fired warehouse worker to reverse engineer a neural network to prove to a judge why the AI's productivity metric was biased against them. It's mathematically impossible. Precisely. The legal system recognized that if they maintained the old burden of proof, tech companies would have complete impunity. Therefore, regulations are actively shifting the burden of proof. We are seeing a monumental move toward establishing a presumption of AI malfunction in certain cases of harm. That is a total paradigm shift. It is. The law is starting to say to the operators of high-risk AI systems, if a person was harmed by your algorithm, if they were falsely arrested due to facial recognition or unfairly denied a life-saving surgery, we, the court, legally presume that your AI malfunctioned. The burden is no longer on the citizen. The burden is now on you, the multi-billion dollar tech corporation, to open up your black box and explicitly prove that the machine acted fairly. Wow. If you operate the black box, you are absolutely responsible for the consequences of what comes out of it. It forces corporations to either ensure their systems are interpretable or face the devastating legal consequences of blind trust. Okay, let's take a breath and recap this incredible, dense journey we've been on today. We started with the visceral, real-world harms of the SyRI algorithm in the Netherlands, watching a cold digital welfare system categorize vulnerable people as criminal risks based purely on their zip codes and utility bills. We saw how that dystopia served as a global wake-up call, triggering a massive legislative response. It forced society to move away from the empty promises of voluntary corporate ethics and toward hard, legally enforceable frameworks. We explored the foundational principles of the Council of Europe Convention and the granular, uncompromising risk pyramid of the EU AI Act. We looked at how regulators are grappling with the raw scale of general-purpose AI, treating massive models like high-horsepower engines, and using metrics like 10 to the 25th power FLOPs to enforce systemic risk management and red teaming.
We rode along with Isabel Ebert as she designed the UNGPs Compass, a tool designed to force human rights due diligence out of the PR department and into the corporate boardroom, shifting financial liability. And we grounded all of this immense legal theory in the reality of your daily life, how these algorithms are silently operating in hospitals, elementary schools, warehouses, and at our borders. We learned that technical concepts like explainability, interpretability, and the radical shifting of the burden of proof are rapidly becoming the primary legal shields of the 21st century. Every single time you apply for a job, every time you sit in the doctor's office, every time you cross an international border or simply post a photo online, these invisible legal frameworks are the active, hidden battleground. They are fighting to preserve your autonomy, your privacy, and your fundamental dignity in a world increasingly run by code. It is a profound, irreversible shift in human history. But as we wrap up this deep dive, I want to leave you with a final thought to ponder, a concept that pushes beyond the regulations we've discussed today. We've talked a lot about holding AI accountable to human logic, but as these models become increasingly complex, we are witnessing a new phenomenon. AIs are now training themselves on synthetic data generated not by humans, but by other AIs. We are entering a recursive loop of machine logic, an ouroboros of code. If the mathematics become so deeply layered, so completely alien to human cognition, that true interpretability becomes fundamentally impossible, even for the most brilliant engineers who built the system, can a human rights legal framework designed for human logic actually survive? If we fundamentally cannot interpret the machine, does the entire concept of legal accountability break down? That is the ultimate existential question of the algorithmic frontier. If the black box becomes utterly impenetrable, how do we prove a human right was ever violated at all? A chilling thought, and one that regulators, engineers, and all of us will have to confront sooner rather than later. You've been listening to The Deep Dive. I want to thank you for joining us as we navigated the complex collision between artificial intelligence and human rights. Until next time, stay curious, stay vigilant, and always keep asking why the machine made its choice.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Hidden Brain
Hidden Brain, Shankar Vedantam
All In The Mind
ABC Australia
What Now? with Trevor Noah
Trevor Noah
No Stupid Questions
Freakonomics Radio + Stitcher
Entrepreneurial Thought Leaders (ETL)
Stanford eCorner
This Is That
CBC
Future Tense
ABC Australia
The Naked Scientists Podcast
The Naked Scientists
Naked Neuroscience, from the Naked Scientists
James Tytko
The TED AI Show
TED
Ologies with Alie Ward
Alie Ward
The Daily
The New York Times
Savage Lovecast
Dan Savage
Huberman Lab
Scicomm Media
Freakonomics Radio
Freakonomics Radio + Stitcher
Ideas
CBC
Ladies, We Need To Talk
ABC Australia