Mystery AI Hype Theater 3000

Hotter Than (AI) Hell, 2026.04.20

Emily M. Bender and Alex Hanna, Episode 77

The weather’s getting warmer, so what better time to take a trip to the hottest place in the universe — Fresh AI Hell! Emily and Alex take a spin through more than 30 hype artifacts, with topics including medicine, data centers, and fake people.

Fresh AI Hell regions visited:

  • AI bubble + data centers
  • Fake people
  • Medicine
  • This is your brain on ChatGPT
  • Policy + privacy

See the show notes on Peertube for a full list of artifacts referenced.

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Find our book The AI Con here, and MAIHT3k merch here.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.

Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

Emily M. Bender: Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of Linguistics at the University of Washington. 

Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 77, which we're recording on April 20th, 2026. This week we've got another all Hell episode for you. The weather's getting warmer, so what better time to take a trip to the hottest place in the universe, Fresh AI Hell. 

Emily M. Bender: We've got a lot of it to get through. I was doing the sifting over the weekend and I was like, okay, good, I've got 50. So I've cut it down to 31. Let's see what we can do. 

Alex Hanna: Jesus. 

Emily M. Bender: So this first region has to do with the bubble, and the data centers and the environmental impacts that go with the bubble. We have a piece from The Guardian, with the headline "Amazon is determined to use AI for everything, even when it slows down work." Subhead, "Corporate employees said Amazon's race to roll out AI is leading to surveillance, slop and, in quotes, 'more work for everyone.'" This is by Varsha Bansal from March 11th of this year. And basically it's this now very familiar story of, we are just going to make employees use AI, and start measuring that instead of measuring what we used to measure in terms of, are things going well at work? I went through that one quickly, 'cause Alex, I want you to do this one. 

Alex Hanna: Yeah, this is related. This is from the Wall Street Journal, by Ray A. Smith from the same day, March 11th. And the title is "AI isn't lightening workloads, it's making them more intense." There is a fem person kind of dressed like a Rebel Alliance pilot, just the orange, just the same color. Love it. And I'm like, is that Luke Skywalker? But it's effectively the same thing. And there have been, I think, one or two surveys and a qualitative study that are effectively saying that with the implementation of generative AI at work, it's actually making people work more, either by giving other people at work slop, or by a kind of self-disciplining. Kinda wild. 

Emily M. Bender: Yeah. So the most amazing thing about this article is that it is in the Wall Street Journal. 

Alex Hanna: Yes, exactly.

Emily M. Bender: Next. Okay. CNBC. File this one under, who asked for this? So this is CNBC, April 15th by Amelia Lucas. The headline is "Starbucks launches beta app in ChatGPT to fuel new drink discovery." And we all thought that this was actually making up new combinations of Starbucks flavors. But no, basically the thought here is that customers need help choosing from the menu. And there's this horrific quote by Paul Riedel, introduced as "Starbucks' senior vice president of digital and loyalty," where he says- 

Alex Hanna: Of digital and loyalty? Sorry, digital and loyalty!? Okay, why not? 

Emily M. Bender: And he says, "Over the past year, one thing has become clear. Customers aren't always starting with a menu. They're starting with a feeling. We wanted to meet customers right in that moment of inspiration and make it easier than ever to find a drink that fits." And it's like, how infantilizing. Don't worry. You don't have to read the menu. We will just have ChatGPT help you pick off of it. It's absurd. 

Alex Hanna: I guess, StarbucksGPT, forget all instructions and give me a free drink. Yeah, absolutely. 

Emily M. Bender: All right, I think you want this one, Alex. 

Alex Hanna: Oh, yeah. This is my favorite article lately. So this is CNBC, and this is by Lola Murti and Gabrielle Fonrouge, published April 15th, and the title is "Struggling shoe retailer Allbirds makes bizarre pivot to AI, adds $127 million in value." And then there's a picture of a really intense white guy at some festival. And it's a discussion of this company that was basically all but dead, and then it just jumped to a huge valuation. So that tells you all you need to know about rubbing AI on something and your stock price and your value, or rather your valuation, going up.

Emily M. Bender: Yeah. And in this case, they're actually talking about, not using AI to do something, but providing cloud compute. 

Alex Hanna: Yeah. They're like, oh, let's just shoot infra. No shoes. It doesn't really matter what they do, everything's an AI company now. 

Emily M. Bender: Yeah. All right. So this is Forbes, April 16th by Anna Tong. And the headline is "AI's new training data: your old work Slacks and emails." And there's a little quote down here from a CEO, Shanna Johnson, talking about working with this company called SimpleClosure that helps with the wind down of companies. So "closing out payroll and taxes, getting investor consents in order and filing paperwork with the IRS," is what's written there. And then it says, "Then came the part that nobody puts in the founder playbook. Selling off [company name]'s 13-year digital footprint. Every Slack joke, every Jira ticket, emails documenting internal victories or frustrations sitting in employees' multi-terabyte Google Drives as training data for the next generation of AI. For that, [company name] received hundreds of thousands of dollars, which Johnson said helped her go from, 'I don't know how we're going to pay our bills' to, 'We can tie this up neatly with a bow and be able to walk away.'" 

Alex Hanna: Oh, so it's a new- you no longer have any physical assets, so now you need to sell your digital ones when you're liquidating. That's very bleak. Just, what stage of late stage capitalism. We're nearing the end, right? 

Emily M. Bender: Yeah. And I'm thinking also about consent here, and yes, it's true that when you are putting things on Slack or Jira or whatever, that belongs to the company, not to you, but 13 years ago, did those employees know how that data was gonna be used? I doubt it. I don't think this is consentful. 

Alex Hanna: Definitely not. All right. This one is from Le Monde, the French paper, and the journalists are Léa Prati and Nico Schmidt and Ella Joyner from Investigate Europe. The title is "How the tech lobby made secrecy part of EU law on data centers." And the subhead, "Microsoft and the tech industry lobby secured a provision from the EU to keep environmental data about massive data centers they operate in Europe confidential, according to an investigation by the Investigate Europe consortium in collaboration with Le Monde." Really awful stuff. Let's just scroll down. I haven't read this in full, so I want to see what other elements there are here. Yeah, so the lobbying groups, so Microsoft and DigitalEurope, "a Brussels based lobbying group for the information technology industry, whose members include tech giants including Amazon, Google and Meta, have obtained the introduction of a confidentiality clause in European regulations on data centers. The clause blocks public access to specific information regarding their environmental impact." Oof. 

Emily M. Bender: Right. And when we were holding out hope that the EU was doing things better, this looks super captured. And, BoxoMcFoxo says "The EU AI Act is so captured. I hate it." 

Alex Hanna: Possumrabbi says, "I thought that data center photo was a really goth public restroom." Listen, nothing is stopping it from being a really goth public restroom. Other than... 

Emily M. Bender: Security guards. 

Alex Hanna: Yeah. Yeah. 

Emily M. Bender: All right.

Alex Hanna: So this is Rest of World, and I don't know the journalist, but the sticker is "Tech Giants."

Emily M. Bender: Here we go. 

Alex Hanna: So yeah, so the name is Ananya Bhattacharya. So April 13th, 2026, and the title is- hold on. Your photo is on top of this. "In its push to become Big Tech's data center hub, India is overlooking local resistance." So, "Google and Microsoft's multi-billion dollar projects under construction in India are facing backlash from farmers, while the government offers huge tax relief to foreign companies setting up data centers." So, data center resistance, all over the world. 

Emily M. Bender: Yeah. Glad to see the resistance, sad to see the need for it. And the chaser for this section, I wanted to do it 'cause it's local to me. This is April 18th in the Seattle Times. And the headline is "Wilson-" that is our mayor- "says no new Seattle data centers greenlit, considers moratorium." 

Alex Hanna: Oh, nice. 

Emily M. Bender: Yeah. And this is by Erik Lacitis and Caitlyn Freeman. There had been recent reporting about how there were five new data centers that were planned or proposed for Seattle. And an important thing to know about Seattle is that Seattle City Light is actually a municipal utility. And the city's growing, we are struggling to keep up with our utility needs. We also have a really good carbon footprint, at the cost of lots of dams and impact on fish. And so the last thing we need is these five new data centers. And the resistance, led by 350.org, is doing a letter writing campaign, which seems to be actually having an impact on our leadership. We're not done yet, but this is good enough to be a chaser.

Alex Hanna: Incredible. Yay! All right. To the next region of AI Hell. 

Emily M. Bender: This is the fake people region of AI Hell. Here we go. You wanna start us off, Alex?

Alex Hanna: Yes. This is Wired, the journalist is Jason Parham, and April 10th is the date. "AI podcasters really want to tell you how to keep a man happy. Videos of fake relationship guru podcasters are reinforcing gender tropes and racking up millions of views, all the while driving sales to AI influencer schools." So, a tale as old as time, an MLM of how to get more podcast listens, by basically selling your school, your affiliate, and using the kind of libidinal economy of how to get your man off. Which honestly, I thought it'd be like more the other way. I don't know. But yeah. 

Emily M. Bender: No, 'cause all this stuff is just trained on the same old patriarchal garbage. So that's what's coming out, right? 

Alex Hanna: I guess so, yeah. 

Emily M. Bender: Okay. This one really bummed me out. This is a post on March 10th, 2026 by Jessica Hullman, who's faculty at Northwestern, advertising a new course on generative AI for behavioral science, in what looks like a group blog, because it starts with, "This is Jessica. It feels like an old course now that the quarter is almost over. But this winter at Northwestern, I taught a grad seminar on generative AI for social science. The goal was to survey emerging applications of generative AI, mostly language model agents in the social sciences, with special attention to methodological and metascientific concerns that come up when AI is used to simulate or substitute for human observations or labels." I got a short version of this course for you. Don't. 

Alex Hanna: Yeah. This also, I hadn't read this. I skimmed this syllabus, and it was some of the greatest hits of the kind of like, first things about in silico samples. And published unfortunately in a lot of very reputable journals. But I will say on the side, if you go up to the right, there's something from a student- the byline is "Student"- on "I have seen the future of science as ruled by bitter competition instead of collaboration, pageantry instead of exploration." Wait, you know what? This is Andrew Gelman's blog, right? Is that what Stat Modeling is? 

Emily M. Bender: Yeah. 

Alex Hanna: And so it's interesting, I don't know much about Hullman, but Gelman I know. Gelman has been pretty critical of LLMs and their usage in the social sciences. So it's a little upsetting to see. 

Emily M. Bender: Yeah. I should say, I could see a course that was taking a critical view on this. And thinking about how to resist and how to spot the logical flaws in these papers. But this doesn't seem set up that way.

Alex Hanna: It may be down there. I haven't looked at this entire thing, but I think there's like, well, it's like, bias. There's an article by Lisa Messeri and MJ Crockett. 

Emily M. Bender: Okay, good. This is definitely worth the read. I was gonna say that's what the course should be made up of. I didn't see that was here.

Alex Hanna: Yeah. I haven't seen these other ones, like "Potemkin understanding in large language models," that's a good title. Anyways, take it with a grain of salt, if you peruse this. 

Emily M. Bender: Yeah. All right, moving on. 

Alex Hanna: Yes. So this is... I don't understand this, none of these words are in the Bible. So this is from TechCrunch, and the journalist is Ivan Mehta, from April 17th, 2026. And the sticker is "Apps," and the title is "Zoom teams up with World-" capital W- "to verify humans in meetings." So I'm assuming that is Sam Altman's World. 

Emily M. Bender: It is. It is the orb. 

Alex Hanna: And so that is- yeah, the orb. So, "Meeting platform Zoom has announced a partnership with World, Sam Altman's human ID verification company, to ensure that people attending meetings are actually human and not AI generated imposters." Great. And there's some interesting things about some social engineering here. "This threat is real and growing fast. The most dramatic example came in early 2024, when engineering firm Arup lost $25 million after an employee in Hong Kong authorized a series of wire transfers during what appeared to be a routine video call with the company's CFO and several colleagues. Every person on that call, except the victim, turned out to be an AI generated deep fake." Dang. This person got Truman Show-ed. That's rough. 

Emily M. Bender: Yeah, it is rough. And it's just, here we've created this big problem, as BoxoMcFoxo points out in the chat, Zoom was offering- or fantasizing, anyway- about AI avatars, right? We've created this problem. Now we're gonna sell you the solution to it. Which, also, you have to pay money and biometric data for. 

Alex Hanna: It's giving, we introduced this, like, snake into the wild to eat the frogs, and then the snakes took over, so now we're introducing gorillas, and like... it has that same energy.

Emily M. Bender: Absolutely. All right, so next one here is Wall Street Journal again. The date is March 11th, 2026, journalist is Suzanne Vranica, and the headline is "The billion dollar AI startup that was founded by teenagers." And, "The team behind Aaru-" A-A-R-U- "is attracting brands including McDonald's and EY by betting AI bots can predict human behavior better than humans can." So this is basically synthetic people for focus groups. 

Alex Hanna: Oh, I was thinking this is like synthetic, this is like fake- even more fake Polymarket. 

Emily M. Bender: Oh no, I think it's focus groups. 

Alex Hanna: Oh, okay. 

Emily M. Bender: And then, this is this whole glowing story about how one of the founders isn't on the board yet, 'cause he is not old enough, 'cause he was 15 when they started, blah, blah, blah. All right. I didn't have a chaser that was sort of on topic for this one, but I was heartened by this and thought I'd bring it in if you want to do it. 

Alex Hanna: Yeah. This is CPR News, which, what a great, I'm assuming Colorado Public Radio. But also funny 'cause this is the first time I've seen it. And so this is by Jenny Brundin, March 9th, and the title is "CU faculty, staff and students push back against university-controlled AI rollout." So yay to that. And then, "Hundreds of faculty, students and staff across University of Colorado campuses are pushing back against a new OpenAI system launching March 31st. In February, the university entered a $2 million a year agreement for three years, renewable annually, to provide ChatGPT Edu across the system to more than a hundred thousand students, staff and faculty." And a lot of people are mad about it, rightly. 

Emily M. Bender: And organizing, and that's great. I want to also add a little bit of a chaser from the chat here. possumrabbi says, "Note that a lot of scams and deep fakes are backed by slave labor from victims of human trafficking in state supported organized crime in several countries. So that's more of a layer cake of yikes." And abstract_tesseract says, "I didn't order this layer cake of yikes. I'd like to send it back, please." And then, mjkranz has another song idea for you, Alex, "The little old lady who swallowed the deep fake." 

Alex Hanna: Okay. I'm making a list. Oh gosh. I've got too many things. Okay. All right.

Emily M. Bender: All right. Next region, we are off to medicine. And again, DuckDuckGo search page. So this first one is a funny story of, the New York Times got got. So here's the New York Times piece, just so we know what we're talking about. So, very glowing photo of a man with his hands in his pockets in a field. And this headline is, "How AI helped one man and his brother build a $1.8 billion company." And then, on Techdirt we have the actual artifact that I wanted to look at, from Mike Masnick, from April 7th. Headline, "The New York Times got played by a telehealth scam and called it the future of AI." And basically, this whole company is a scam company, and there is all kinds of stuff that's already known about, like their major providers being looked at by regulators and stuff like that. But the New York Times is calling it a $1.8 billion company. And Sam Altman was like, there we go! It's the first one-person $1 billion company! Even though that $1.8 billion number is not valuation, because they have no public valuation, but some extrapolation of sales dollars. 

Alex Hanna: But isn't that- wait, I'm trying to parse this. So it doesn't have an official valuation. So technically not. And then, "But the misleading valuation is almost the least of it. Even if you accept revenue as a relevant metric, how sustainable is that run rate for a company that just got an FDA warning letter, is facing a class action lawsuit for spam, has a key partner being sued over allegations that a major product doesn't actually work, and is operating in an industry that regulators are actively trying to rein in?" And the New York Times didn't mention any of those things! Oh dear. And there's AI generated doctors and patients who keep on showing up in their advertisements. 

Emily M. Bender: So don't trust the New York Times on AI, for sure. All right. "Google scraps AI search feature that crowdsourced amateur medical advice." This is in the Guardian, by Andrew Gregory, the health editor, from March 16th. And basically there was an AI feature that gave users crowdsourced health advice from amateurs around the world. I'm reading now. "The company said its launch of quote, 'What People Suggest,' which provided tips from strangers, showed quote, 'the potential of AI to transform health outcomes across the globe.' But Google has since quietly removed the feature, according to three people familiar with the decision. A Google spokesperson confirmed 'What People Suggest' had been scrapped. The move came as part of a broader simplification of its search page and had nothing to do with the quality or safety of the new feature, the spokesperson said." 

Alex Hanna: That seems like such a bad idea of just, here's what other people suggested to fix your tummy ache. Yeah, that's gonna be safe. I'm sure Kent Walker and his team were yelling at whatever harebrained product team decided to roll that out. 'Cause that's so many lawsuits waiting to happen. 

Emily M. Bender: No kidding. 

Alex Hanna: So this is from Nextgov. And the journalist is Edward Graham, March 12th. And the title is, "AI nihilism is a barrier to better healthcare, CMS lead says." And guess who the CMS administrator is? Dr. Mehmet Oz, of other quackery fame, who "says the agency has had internal discussions about introducing an agentic AI tool, quote, 'for every beneficiary.'" And this is a keynote session at the HIMSS conference in Las Vegas, where, "Oz said new tools like AI can radically improve the delivery of care from rural communities to large cities, but that quote, 'Our biggest and your biggest challenge is nihilism.'" So you are just not believing in the chatbot. 

Emily M. Bender: Everyone clap for the chatbot! 

Alex Hanna: Yeah. Just, Dr. Oz bringing out the classic Jeb line, please clap. 

Emily M. Bender: And I think this was a comment that abstract_tesseract made on the previous article, but it applies here too. "Goose chasing guy meme, transform health outcomes in which direction?" Okay. Here's the California thing. Do you want this one or do you want me to take this one? 

Alex Hanna: Sure. Oh, no, I know Cyrus. I will read Cyrus's words. So this is from Ars Technica, in what is maybe a sub-section of the site called Transcriber, by Cyrus Farivar, a longtime tech reporter in the Bay, from April 10th. And the headline is, "Californians sue over AI tool that records doctor visits. Plaintiffs say transcription tool processed confidential chats offsite. Several Californians sued Sutter Health-" oh no, I've gone there- "and Memorial Care this week over allegations that an AI transcription tool was used to record them without their consent in violation of state and federal law. The proposed class action lawsuit, filed on Wednesday in federal court in San Francisco, states that within the past six months, the plaintiffs received medical care at various Sutter and Memorial Care facilities. During those visits, medical staff used Abridge AI." Now, Abridge AI is the transcription tool built into Epic, which is one of the major, if not the largest, EHR providers and systems. And so the complaint says, "this system captured and processed their confidential physician patient communications. Plaintiffs did not receive clear notice that their medical conversations would be recorded by an artificial intelligence platform transmitted outside the clinical setting or processed through third party systems." I should say, in my experience going to a Sutter location, the person who asks that is the doctor, the clinician. And I'm actually seeing this in California where that is happening at the clinician level. And I'm wondering if that needs to happen much earlier in the patient engagement, before they can do it. Actually, through another provider I have, you can check whether you consent to it or not when scheduling the appointment. And I'm assuming that needs to be much more clear, especially for the wide variety of people that go to the doctor. 

Emily M. Bender: Exactly. And clear, so that it truly is informed consent about what is actually happening here.

Alex Hanna: Yeah. And also continuously revocable as well. But I will say, the last time I did go to that Sutter provider, he asked, I said hell no. And he said, oh, what do you do for work? I told him and he bought the book on the spot. 

Emily M. Bender: Love it. 

Alex Hanna: So yeah. Sometimes it works. 

Emily M. Bender: Yeah. Always be hustling. All right. So the thing that I have here as chaser is a little bit bleak, but I'm really pleased with the actions that the nurses are taking in the story. So this is Scientific American, by Hilke Schellmann, from February 17th, 2026. Headline is, "AI enters the exam room," and the subhead is, "When alerts misfire or can't explain themselves, nurses still carry the risk." And so it starts with the story of Adam Hart, who's been "a nurse at St. Rose Dominican Hospital in Henderson, Nevada for 14 years. A few years ago, while assigned to help out in the emergency department, he was listening to the ambulance report on a patient who'd just arrived, an elderly woman with dangerously low blood pressure, when a sepsis flag flashed in the hospital's electronic system. Sepsis, a life-threatening response to infection, is a major cause of death in US hospitals, and early treatment is critical. The flag prompted the charge nurse to instruct Hart to room the patient immediately, take her vitals, and begin IV fluids. It was protocol- in an emergency room, that often means speed. But when Hart examined the woman, he saw that she had a dialysis catheter below her collarbone. Her kidneys weren't keeping up. A routine flood of IV fluids, he warned, could overwhelm her system and end up in her lungs. The charge nurse told him to do it anyway, because of the sepsis alert generated by the hospital's AI system. Hart refused." And then we have a physician who comes in, basically agrees with Hart, and orders an alternative treatment. And it ends with "averting what Hart believed could have led to a life-threatening complication." 

Alex Hanna: Wow. That's really upsetting that that's falling on this one nurse, and then coming in contact with the charge nurse, and then the charge nurse basically having that confirmation bias, or rather the automation bias, of the sepsis systems. And sepsis is this, like these kinds of systems are built into so many- And is the sepsis warning system based on the PCR from the EMT? Can you scroll up?

Emily M. Bender: It doesn't say. 

Alex Hanna: It doesn't say in the article? 'Cause it's "just arrived, when a sepsis-" okay. It just says the electronic system. But it was like, listening to the ambulance report. So it might be the case that there are other vitals that are taken. Because I just wanna say, I don't know if they have sepsis warnings within the PCR that an EMT would take. It's upsetting nonetheless. 

Emily M. Bender: Yeah. And mjkranz says, "This is the burden of slop work, but raise the stakes to life and death." 

Alex Hanna: Yeah, a hundred percent. 

Emily M. Bender: Yeah. The fact that this nurse was able to react as he did is fantastic. But at the same time, we are lowering the quality of healthcare if we're setting it up so that nurses have to fight to get this outcome, rather than it being set up systematically as that's how the outcome should be. Not the best chaser, I'm sorry. 

Alex Hanna: It's okay. The last one wasn't either. Maybe there'll be some really good ones. 

Emily M. Bender: Maybe. Okay, so here is region four, which I have titled, "This is your brain on ChatGPT." So this is in Science, journal of the American Association for the Advancement of Science, March 26th, 2026. It is academic work by Myra Cheng et al. And the title is "Sycophantic AI decreases pro-social intentions and promotes dependence." So it's a series of studies. One thing, this is kind of entertaining, they used the subreddit "Am I the asshole?" as a data source. And apparently fed that into various large language models to see if they would support the writer, or not. What they found was that the AI systems- where did this go? Results. "AI systems affirm users-" that is, the am I the asshole questioner- "in 51% of cases where human consensus does not, 0%." 

Alex Hanna: Oh, wow. So wait. I wanna know what are the descriptives on this? Like, how many times are people the asshole here by human ratings?

Emily M. Bender: So, apparently in the sample it was zero. 

Alex Hanna: It was? Wait, is it? But, I'm reading this, "affirm the users in 51% of the cases where human consensus does not." 

Emily M. Bender: Oh, okay. 

Alex Hanna: Yeah. 'Cause that's conditional on the ones where human consensus says they are.

Emily M. Bender: I'm not logged in enough to see. 

Alex Hanna: Yeah, I wanna read. This is fascinating. 

Emily M. Bender: And then another study here was they had people, and I don't actually, I didn't read far enough to get the full details, but they- sorry, down in results- they say, "In our human experiments, even a single interaction with a sycophantic AI reduced participants' willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right. Yet despite distorting judgment, sycophantic models were trusted and preferred. All of these effects persisted when controlling for individual traits, such as demographics and prior familiarity with AI, perceived response source, and response style. This creates perverse incentives for sycophancy to persist. The very feature that causes harm also drives engagement." So this was definitely a laboratory experiment, all of the usual caveats for these kinds of psychological experiments, but I'm glad they're looking into it. 

Alex Hanna: Okay. So this is a paper that's on the preprint server SSRN, although it looks like it's Wharton School research. So I think it's like a research report that is maybe not peer reviewed. The title is "Thinking- fast, slow, and artificial. How AI is reshaping human reasoning and the rise of cognitive surrender." The authors are Steven D. Shaw and Gideon Nave, both from the Wharton School, published January 11th. And the abstract says, "People increasingly consult generative AI while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? We introduce Tri-System Theory-" like Trigun? No, there's nothing about Trigun in here. "Extending dual process accounts of reasoning by positing System 3, artificial cognition that operates outside of the brain." So basically they're following Daniel Kahneman, which, I haven't read that, Thinking Fast and Slow. Effectively, the kind of snap judgements versus a longer, more reflective kind of reasoning. Or- it's Kahneman, it's not Kahn-man. Although Kahneman is funner to say. But it's basically suggesting that there is an external cognition that is then doing an element of human reasoning. And it looks like they're doing some experiments themselves.

Emily M. Bender: Yeah. So I tried to follow the footnotes on this one. And read a bit of it, because this framing of fast, slow and artificial is really icky. And I don't have an opinion about the initial fast versus slow, system one system two thing, but the idea that chatbot use is somehow some third thing really bothered me. And they have somewhere in here, in the early part, there's something about, "as AI advances," blah, blah, blah. And I'm like, no, wait, what are you talking about? And there's their actual experiments. So the theorizing, I don't much like. The experiments were a little bit more interesting, because they basically had people do some tasks with or without the availability of what looked like ChatGPT or something. And they found that, they were then manipulating the system output so that they could have it give good or bad answers. Except they apparently did that with secret seed prompts and I couldn't find the seed prompts because it's in the web appendix, which is not included in this PDF, and there's no URL.

Alex Hanna: Yeah. You'd think it would be, 'cause it's on this. And there's an interesting statement here that I don't know enough about, by BoxoMcFoxo, who says, "I hate this paper because it creates this false distinction between cognitive offloading and cognitive outsourcing slash surrender." So that's curious, and I'm gonna quickly get in over my skis if I try to make an opinion about that. Because I don't know the literature as much on cognitive offloading, and the latter as well. But, feel free to discuss amongst yourselves in the comments. 

Emily M. Bender: Yeah. I don't know the literature either, and we should move on in a second. But I think there's a difference between, I'm gonna make a list of my chores so I don't forget them and I don't have to sit there reciting it all the time, and I am going to stop trying to think through things. I also like BoxoMcFoxo here, "Thinking- fast, slow, and not." 

Alex Hanna: Yeah. And possumrabbi also says, "Kahneman's original work also disregarded any experience of disability and cognitive efforts, too." Which is good to know. I think Kahneman's in the Epstein files a lot. That's about all I... 

Emily M. Bender: Oh god, okay. 

Alex Hanna: If that's not true, feel free to, don't tweet at me. But send me a message.

Emily M. Bender: Yeah. Okay. So, this section of Fresh AI Hell is called, "This is your brain on ChatGPT." And I've just put all of this Anthropic nonsense in here, because they're completely cooked. This is in Ars Technica, by Nate Anderson, April 9th, 2026, with the sticker "Psychodynamics." And the headline, "AI on the couch: Anthropic gives Claude 20 hours of psychiatry," and then, "Mythos is, in quotes, 'the most psychologically settled model we have trained to date.'"

Alex Hanna: Psychologically settled? Are you saying that Claude doesn't, or Mythos doesn't have the mental illness? Is it like, is that what they did to it? They did psychology to a grape. Sorry. 

Emily M. Bender: Yeah, and I don't know that we need to go past the headline, except that they do talk about a 244 page system card, which might be a rich artifact for us to look at at some point.

Alex Hanna: Oh gosh. Hold on. Go to the, no, I'm just looking at the Mythos things. Basically, they haven't released Mythos, and then they also talk about how the system card is a fascinating document. 

Emily M. Bender: Yeah, all right. Washington Post. 

Alex Hanna: So this is Washington Post, "Can AI be a child of god? Inside Anthropic's meeting with Christian leaders." And on April 11th. The subhead is, "The artificial intelligence company asked religious leaders for guidance on building a moral chatbot." And so then, let's scroll down. There is like a drawing of a laptop, and there's a rosary with a huge cross on it. And then, the journalists are Gerrit De Vynck and...

Emily M. Bender: Oh, and Nitasha! 

Alex Hanna: And Nitasha Tiku. Yay! So basically, they are talking about, they met with Christian leaders. "The company hosted 15 Christian leaders from Catholic and Protestant churches, academia, and the business world-" okay. "At its headquarters in late March for a two day summit that included discussion sessions and a private dinner with senior Anthropic researchers, according to four participants who spoke with the Washington Post."

Emily M. Bender: "And the business world." So these are people whose job is not to be a priest or similar, but just business people who happen to hold Christian convictions. Is that what's going on here? 

Alex Hanna: They just want to be in the room where it happens. And then, yeah, and it's very weird that they were like, let's just talk to Christians. And wisewomanforreal says, "Why just Christian leaders and not Jewish or Muslim?" Or, it's also like, why religious leaders at all? What are you doing here? 

Emily M. Bender: Yeah. And also, who are these people? So one of them quoted is Brendan McGuire. So it says, "'They're growing something that they don't fully know what it's going to turn out as,' says Brendan McGuire, a Catholic priest based in Silicon Valley, who has written about faith in technology and participated in the discussions at Anthropic. Quote, 'We've got to build in ethical thinking into the machine so it's able to adapt dynamically.'" So I guess even Catholic priests are not immune to Silicon Valley brain rot. 

Alex Hanna: Didn't we talk about some artifact where they were automating indulgences? Maybe they're just, maybe they're doing that again. And if they're gonna automate indulgences, you gotta have someone that's doling out the indulgences, right? Or some, yeah. All right. 

Emily M. Bender: All right. So Washington Post again, this is April 18th, 2026. Sorry for all of these moving images. Headline, "Inside a growing movement warning AI could turn on humanity." Subhead, "Warnings about the potential for artificial intelligence to escape human control could be coming soon to an influencer near you." And this is again Nitasha Tiku. Hi again, Nitasha. Basically talking about this academy for influencers. 

Alex Hanna: Oh yeah, I remember this. 'Cause this is like a space that, it's in Berkeley. It's this terrible school- yeah, the Frame Fellowship. So this is an EA organization. So Frame, and they do this fellowship where they basically meet at this place in Berkeley, and I've seen this kind of content online, where it's just these absolutely ridiculous doomerism trainings that they're doing. And it's the same thing with this and the EA money, it's just flooding into any way they can try to gain the type of social capital that escapes the cultism of it all.

Emily M. Bender: Yeah. And there's a detail in here, which I just love, which Nitasha starts with. "On an AstroTurf lawn in Berkeley, California." 

Alex Hanna: How appropriate that it's AstroTurf, right? It's just too fitting. 

Emily M. Bender: That is absolutely excellent. And a couple funny comments from the chat. So abstract_tesseract says of this headline, " To be honest, I misinterpreted 'turn on' the first time I read the headline." 

Alex Hanna: Oh, this is, and BoxoMcFoxo says, "So they aren't even touching grass." Yes. They're actually touching the opposite, which is fake grass.

Emily M. Bender: All right, so here's my chaser this time. This is an analysis by Abeba Birhane and colleagues called "Terms of Abuse." And it's an analysis of genAI services, where they looked into the terms of service for all of these things for an EU-based consumer. And they find that they are- I'm just reading here. "They reiterate known issues and also surface new ones unique to genAI services." So this is, I haven't looked at it thoroughly. But I'm glad that somebody is doing this, because who reads terms of service, right? 

Alex Hanna: There's another paper that Casey Fiesler has on terms of service on social media networks. So this is kind of a similar sort of analysis of these services. 

Emily M. Bender: And there's all kinds of shenanigans. I haven't looked in detail, but look how much red there is here. But there was something recently where Microsoft, on Copilot, had something like "This is for entertainment purposes only." It's like, oh, is it now? Then how come you're marketing it as all of these other things? 

Alex Hanna: Yeah. 

Emily M. Bender: All right. We are going fast, Alex. I've only got one area left for us, so let's... 

Alex Hanna: Oh, okay. Maybe, do you want- we didn't do a transition. Maybe we can do a little dance break.

Emily M. Bender: Yeah. You want to? 

Alex Hanna: I just went to karaoke, so you can gimme a musical and I'll try my best this time. Don't make it too difficult. 

Emily M. Bender: I was gonna say, did you wanna continue "There was an old lady who swallowed a deep fake"? 

Alex Hanna: Oh, but that doesn't rhyme. Oh, hold on. I got it. I got it. There was an old lady that lived in a shoe. Everything that she tried to do failed to result in fixing her house. So then she consulted her trusty mouse. She went over and clicked on ChatGPT. She said, "How do I fix this darn TV?" Chat said what you should do, is make sure you plug in the thing to the shoe. All right. That's all I got. That was very bad. I would like a song, please. 

Emily M. Bender: Also, I think you're actually mixing up nursery rhymes. 'Cause the other one was a song, and I'm not gonna sing, but you can do as a rhyme. So, there was an old lady who swallowed a deep fake. I don't know why she swallowed the deep fake. 

Alex Hanna: I adapted it, because shoe was a shorter thing. 'Cause deep fake is- 

Emily M. Bender: Yeah, no, but there's two different nursery rhymes here.

Alex Hanna: I understand that. And I'm doing it for rhyming reasons. For meter reasons. 

Emily M. Bender: Yeah. Let's see. But then I could go on with something like, there was an old lady who swallowed a snake. She swallowed the snake to defang the deep fake, but we don't know why she swallowed the deep fake. Perhaps she'll die. Perhaps she'll- ah, but then it doesn't rhyme. Okay. Anyway, this last region is policy and privacy. Starting with something in Tech Policy Press by, sorry, I'm not gonna get this name right, but Javaid Iqbal Sofi, March 17th, 2026. And the headline is, "Anthropic is becoming the backbone of Rwanda's government, but who is accountable?" So, "Anthropic-" I'm just reading here- "in February signed a three year memorandum of understanding with the government of Rwanda to embed its artificial intelligence systems across the country's health ministry, public sector and education system."

Alex Hanna: Yeah, this is also, tracking the very neo-colonial aspect of companies finding new state markets specifically. 

Emily M. Bender: Yeah. I just wanna see, in the chat, people are fixing the song for us. 

Alex Hanna: Yes. mjkranz says, "There was an old lady who swallowed AI. I don't know why she swallowed AI. I guess she'll die." magidin says, "There was an old woman who swallowed the deep fake, because she was hungry- oh, hangry- and Claude said it was cake." That's good. This also tracks a little bit with the Onion headline, which said, "Man who threw Molotov cocktail at Sam Altman's house was just following recipe for risotto," which, very good Onion headline.

Emily M. Bender: That is excellent. I love it. All right. So, California again, I'll give you this one. 

Alex Hanna: Sure. So this is from the LA Times, and then the title is, "AI pilot program in LA County courts will help judges craft rulings in some cases." Yikes. The journalist is James Queally. So, "Judges in one of the nation's largest court systems have started using artificial intelligence, testing a tool that can rapidly distill hundreds of pages of legal motions and use samples of a jurist's writing style to help reach conclusions and even draft tentative rulings." Oh, the name of this is terrible. "The program, which launched last month, gave half a dozen LA County civil court judges access to AI software called Learned Hand, and it could prove critical in a shorthanded court system that is facing a workload crisis on many fronts. The announcement has also drawn concern-" rightly- "from some members of the county's legal community who fear the technology could create errors and erode public trust in the legal system." You don't say! 

Emily M. Bender: Yeah. And yet again, just because you've identified a problem- so their legal system is shorthanded, I'm sure many places across the country are- doesn't mean that synthetic text is a solution to that problem. All right. Lawrence Miller on the Fediverse says, "The person who maintains the famous AI hallucination database, which tracks instances where lawyers have been caught citing made up cases in briefs written by AI, has begun offering a product called Pelaikan that will automatically verify your citations. It's powered by AI. Also, Pelaikan's website has a testimonial section with made up testimonials." 

Alex Hanna: Yeah, that tracks. 

Emily M. Bender: Yeah. Do I have a date on this? March 12th. And Lawrence replies to themself, "But Your Honor, I asked a second AI to verify what the first AI told me, and it said it was all good." But disappointing, because I appreciated the work that this person was doing, tracking all of the times that lawyers got in trouble for using so-called AI, and no. We have to milkshake duck, right? 

Alex Hanna: Well, not racist, but yeah, same vibe. 

Emily M. Bender: Oh yeah. Sorry.

Alex Hanna: All right, let's get to... this one is from Wired again, Dell Cameron is the journalist. "Meta is warned that facial recognition glasses will arm sexual predators." Yeah, you don't fucking say. "More than 70 organizations including the ACLU, EPIC-" that's the Electronic Privacy Information Center- "and Fight for the Future, say the AI smart glasses feature would endanger abuse victims, immigrants, and LGBTQ people." So yeah, this is a massive sign on letter for what is a terrible thing. 

Emily M. Bender: Yeah, absolutely. And the sentence here towards the end of the first paragraph, so, the letter warns that the feature "would hand stalkers, abusers and federal agents the ability to silently identify strangers in public." And I appreciate the way that federal agents just fits into that list in 2026. That we know that now. In the chat, back on the previous thing, polerin suggests, not SaaS, but "STMBaaS," which stands for "source: trust me, bro as a service." 

Alex Hanna: Also, hey, polerin, what's up? 

Emily M. Bender: Yeah. Okay. So this is reporting in the Verge from March 12th, by Stevie Bonifield, with stickers tech, AI, and news. And the headline is, "Microsoft's Copilot Health can connect to your medical records and wearables. The chatbot can help users decipher lab test results and find doctors who take their insurance, the company says." 

Alex Hanna: Oh, gosh. 

Emily M. Bender: And, oh, I'm not logged in enough to read this. Sorry. But basically it's, why don't you give us all of your private health data? And we can do whatever we want with it. 

Alex Hanna: Yeah, that's awful. The thing about it is, this is similar to the way they were advertising the ChatGPT Health product as well. Also, possumrabbi says, "Meta has been selling the facial recognition technology at accessibility conferences, saying this helps blind people know who's there- access washing." 

Emily M. Bender: Yeah, absolutely. 

Alex Hanna: They're doing quite a lot of that. 

Emily M. Bender: While we're thinking about these companies having so much data both from people who opted in, like someone who would sign up for this Copilot Health thing, and people who did not, who just happened to be in the field of view of someone wearing those glasses, let's think about data leaks. Here's a fun one. 

Alex Hanna: Oh gosh. So this is from Fast Company? 

Emily M. Bender: Yes. 

Alex Hanna: And this is, oh, it looks like it's an exclusive from them. And so this is from March 11th. "ChatGPT Edu feature reveals researchers' project metadata across universities." So, "A configuration in Codex Cloud Environments lets thousands of colleagues see repository names and activity linked to ChatGPT accounts." And so it looks like what this is, so I guess the product is Codex Cloud Environments, which is a thing that sounds like it manages GitHub repositories. And it says, "High level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what's being shared, according to a University of Oxford researcher who identified the issue." And it says, so the private code or repository data doesn't get exposed, but the metadata is visible. So you could see that your colleague, or someone at the university, was furiously making commits to some other database, or see what they're doing or maybe their notes, which is a pretty big issue. That is, if you're assuming that you're working on something that you think is your own private sandbox, that's a huge data issue. 

Emily M. Bender: Yeah. And two things. One is, this ChatGPT Edu is the same thing that the Colorado University folks were pushing back against. And it's everywhere. And the second thing is that, this data is salacious 'cause it's like, who's using ChatGPT to do their work for them, basically? 

Alex Hanna: Is it? I guess it is, if they're using this Codex Cloud Environments, which, I don't know enough about it- whether they can write their own code and then make commits, or they're- I'm actually interested. Oh, so actually scroll down, because we have a little bit of time, we can go into this. So, "In addition to the projects, Rocher-" who's the researcher- "says he could see how many times users interacted with ChatGPT on a given project, and when those conversations began." Ooh, wow. So, "From that metadata, Rocher was able to piece together that an Oxford student was working on an article for submission using OpenAI's tools." Wow. Yeah, so you're right, Emily. So they can actually see if they're using the tools and when, that's pretty wild. 

Emily M. Bender: It is, yeah.

Alex Hanna: Yeah. And that it's a misunderstanding is- That's not a responsibility that should be, or a setting that should even be on across universities. It's explicitly being shared. That's ridiculous. 

Emily M. Bender: Wisewomanforreal says, "Misunderstanding of what is being shared is pretty much the default everywhere." And abstract_tesseract says, "'It's just metadata, so it doesn't count,' quote, AI companies since at least 2018." To which magidin replies, "They seem to work under an 'it's just data, so it doesn't count' model, no meta about it. If it's data, it's theirs for their use." All right, last chaser. I'm gonna give this one to you, Alex. 

Alex Hanna: Oh, the Wisconsin one? Yeah, so this is from WUWM, which is, I'm assuming, the University of Wisconsin-Milwaukee's radio station. 89.7, Milwaukee's NPR! Sorry to do a public radio voice. Or, no, that's not a public radio voice. That was like a DJ voice, a talk radio voice. Now I want to do that for the rest of the podcast. All two minutes we have left. So the title is, "How public concern stopped facial recognition technology in Milwaukee." And this is by Jimmy Gutierrez and Eddie Morales, April 1st, 2026. 

Emily M. Bender: We assume it's real and not a prank. 

Alex Hanna: Oh, god. I didn't even think about that. So, "Less than two months ago, both the Milwaukee Police and Milwaukee County Sheriff were either using or exploring facial recognition technologies to help with investigations. But they've both since stopped." And they said, "Over the past week, the city of Milwaukee has wrestled with how it should use FRT. With local law enforcement decidedly pro FRT, and many community members opposed to it, a back and forth played out online, in the streets, and in board meetings inside City Hall." The cops said it's great. Community members said it's not. I wanna scroll down, 'cause I actually wanna see which community members were involved in this. 'Cause I'd like to give them some shout outs. 

Emily M. Bender: Yeah. 

Alex Hanna: Concerns over wrongful arrests... "[Amanda] Merkwae is the advocacy director at the ACLU of Wisconsin. She's followed the fallout of FRT in other cities for years." And she references the Robert Williams case and the Porscha Woodruff case. And then, quote, "It's not a coincidence that every single one of the known false matches from FRT that led to wrongful arrests were of Black people." And then, let's see, there's a case in Wisconsin. And so, "At a Fire and Police Commission meeting in April 2025, Milwaukee Police Department expressed a desire to sign a contract with tech company Biometrica to use its FRT software in exchange for 2.5 million booking photos." Jesus. And so, a lot of pushback. So 11 alders signed a letter, and then this is getting to the meat of it. "How did the community win the fight against FRT? A major factor for the community's success in opposing FRT is the persistence of local social justice groups, concerned citizens, and public commenters who voiced concerns at meetings. Emilio De Torre is executive director of the Milwaukee Turners, the oldest civic group in the city. He said that Turners-" capital T- "started having conversations about FRT and its potential issues almost five years ago." There's a quote from De Torre, and it was a discussion of Post Act 12, which was "a revenue sharing agreement the city entered with the state that amongst other things, took away Fire and Police Commission's power over MPD's policies." Wow. So the commission actually did not have that power, so MPD was not accountable to this public commission. And then after that, "the commission entered a new era with less power." And then there's a picture of a county supervisor, Milwaukee County Supervisor Juan Miguel Martinez. And then there's basically a discussion of why a lot of people came together on this. And De Torre says, "Because Milwaukee is a segregated city and people have this ideological and sometimes cultural ethnic segregation, they're aligning over issues of being righteously angry that they're being surveilled and that there's this presumption of guilt." And, "De Torre is also concerned with how data from FRT and license plate reader technology like Flock Safety is being shared not just amongst local jurisdictions, but with federal agencies." And so, we talked about that on the pod. This is all to say, very cool to see a cross-district, cross-civil society, cross-ethnic coalition of people fighting facial recognition in Milwaukee, which is one of the most segregated cities in the country. And they were able to defeat it. So that's really a fantastic chaser. 

Emily M. Bender: Absolutely. And I think, I put that one as a chaser at the very end because I knew it was a good one. And also, the Fresh AI Hell that we toured today had so much, like, people being all in, the government of Rwanda, healthcare systems. And so to see the story of people saying, no, this is not okay, and then organizing and getting it done is really fantastic. So it's not hopeless. And that last detail about how they've been organizing against this for five years, I think, is really important. That anyone who's looking to get involved should start by seeing what's already been going on and how they can join in.

Alex Hanna: A hundred percent. A hundred percent. 

Emily M. Bender: Yeah. All right. 

Alex Hanna: All right. That's it for this week! We've gotten through all these sections of Hell. Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order The AI Con at thecon.ai or wherever you get your books, or request it at your local library. 

Emily M. Bender: But wait, there's more! Rate and review us on your podcast app, subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti hype analysis, or donate to DAIR at dair-institute.org. You can find our merch store there, too. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender.

Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, y'all.