Mystery AI Hype Theater 3000
Drag It All To Hell, 2025.10.27
It's been six months since our last all-Hell episode! In honor of Halloween season, we take a long journey into the very scary Fresh AI Hell mines. Topics include terrifying uses of AI in education, scientific research, and politics — plus, some delicious palate cleansers along the way.
AI bubble: bigger than dot-com bust?
No one wants to pay for ChatGPT
Meta lays off 600 from AI unit
AI data centers: an even bigger disaster than we thought
Public universities anticipate data center-driven power outages
Chaser: Deloitte has to pay back Albanese government after using AI in report
"AI" schools are "dead classrooms"
Fake sources in "ethical AI" education report
Parents letting kids play with AI
Startup sells 'synthetic influencers'
AI-powered textbooks fail to make the grade
Chaser: "High-reliability" AI slop
Nature offers "AI-powered research assistant"
AI bots wrote all papers at this conference
AI medical tools downplay symptoms in women and POC
Therapists are secretly using ChatGPT
Chaser: Microsoft blocks Israel's use of its technology
German initiative uses "AI" for voter education
Police gunshot detection mics will listen for human voices
SF's AI chatbot for RV dwellers
Cuomo campaign posts racist AI slop
DHS ordered OpenAI to share user data
Chaser: LA County moves to limit license plate tracking
"AI Superintelligence" prohibition letter
Prizes must recognize machine contributions to discovery
Chaser: The hot new trend in marketing: hating on AI
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' is out now! Get your copy now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.
Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 66, which we're recording on October 27th of 2025. And it's almost Halloween, which means it's the perfect time to embrace the scariness of another All Hell episode.
Emily M. Bender: The hell has really been piling up over here. So we're excited to dig through a bunch of the news that's been making our heads spin around on our necks, Exorcist style. And I wish I could demo that, but I can't.
Alex Hanna: It would be terrifying if you could. And as always, we have plenty of palate cleansers along the way to keep things from getting too scary. Muahahahaha.
Emily M. Bender: Yeah. All right. So, here we go. We've got a bunch of windows full of this nonsense, and I very cleverly labeled all of them so that I can find them.
Alex Hanna: You're wearing a really interesting outfit. What is your costume?
Emily M. Bender: It's not actually a costume, it's just me getting into the Halloween spirit.
Alex Hanna: Okay.
Emily M. Bender: With a very ruffly collar, but orange and black. And book-colored nails, but you know, they look orange.
Alex Hanna: Yeah. Emily's giving a real Ms. Frizzle.
Emily M. Bender: Yeah.
Alex Hanna: Very loud color. Loving it.
Emily M. Bender: Yeah. I could also tease my hair out to do more of a Ms. Frizzle, but, maybe one year. All right, so we're starting with bubble, bubble, toil and trouble. Alex, I'm gonna give you this one actually.
Alex Hanna: Yeah. So this is from CNN, analysis by Allison Morrow, who's been doing a lot of reporting on the AI bubble and tracking the market. And so the title is "Why This Analyst Says The AI Bubble Is 17 Times Bigger Than The Dot-Com Bust." This is a picture of Wall Street and the lede says, "At this point, even the concept of an AI bubble seems to be a bubble. In fact, Deutsche Bank analysts said last month that the AI bubble bubble had already burst. Perhaps some corners of the internet are bored of the bubble talk. That's not making the market any less bubbly." And then, "Last week, the Financial Times wrote that 10 AI startups, not a dollar in profit among them, have gained nearly $1 trillion in market value over the past 12 months. That is, to use a technical term, bananas."
Emily M. Bender: Yeah.
Alex Hanna: Yeah. And so, just to name who the analyst is, so we know who it is: it's an interview here with Julien Garran, from the UK firm MacroStrategy Partnership. And so, yeah, very fun interview with this person.
Emily M. Bender: And there was a wonderful piece on Tech Policy Press this morning that was talking about how a lot of this is driven by privately held companies that are basically slipping through a loophole that prevents them from having to do the kind of reporting that, usually, you know, public firms have to do.
Alex Hanna: Yeah.
Emily M. Bender: And, well the critique from this article said basically, look, if you are raising money from what's effectively pension funds and similar, then, you know, the public deserves to know what's going on in these companies. So, appreciate that.
Alex Hanna: Yeah. Yeah. That's a helpful analysis. So we should find that and drop it in the chat.
Emily M. Bender: Drop the link, yes. All right, so, next is from The Register. This is by Thomas Claburn, October 15th, 2025. Headline is "OpenAI's ChatGPT Is So Popular That Almost No One Will Pay For It." Subhead, "If you build it, they will come and expect the service to be free." So, you know, more on the bubble here. Opening paragraph, "OpenAI is losing about three times more money than it's earning, and 95% of those using ChatGPT, which generates roughly 70% of the company's recurring revenue, aren't paying a dime to help stem the losses. For this level of success, the company is reportedly valued at about 500 billion, even as it commits to spending more than $1 trillion that it doesn't have in partnership deals over the next five years."
Alex Hanna: And the next line is really indicative, because they cite a report in The Information. The Information is a paywalled publication, but I recently paid to get behind the paywall, and it's supposed to be kind of like an industry mag, but let me tell you, it's worth the money, because I'm learning so much about the internals of these things. So they cite that report, and they say that "OpenAI during the first half of 2025 collected only $4.3 billion in revenue while still posting a net loss of 13.5 billion during that six month period." And then, you know, as the article says, they committed to all this spend in capital expenditure and data center buildout. So, yikes.
Emily M. Bender: Yeah. So this is not looking good for a stable economy.
Alex Hanna: Yeah.
Emily M. Bender: All right, next.
Alex Hanna: So this one's from CNBC. The headline is, "Meta Lays Off 600 From Bloated AI Unit As Wang Cements Leadership." This was published on October 22nd of this year, by Ashley Capoot and Jonathan Vanian. And there are key points at the top, although I don't like to read these key points 'cause I assume that they're LLM generated. They might not be, I don't know, but I don't trust it, because a lot of people don't disclose this.
Emily M. Bender: That's a good point. Yeah.
Alex Hanna: And so the article, so the lede says, scroll down here.
Emily M. Bender: Horrible picture of Zuck.
Alex Hanna: Really terrible picture of Zuckerberg. Really bad lighting for him. "Meta will lay off 600 employees within its artificial intelligence unit, as the company looks to reduce layers and operate more nimbly, a spokesperson confirmed to CNBC on Wednesday. The company announced the cuts in a memo from its chief AI officer, Alexandr Wang, who was hired in June as part of Meta's $14.3 billion investment in Scale AI." And it said, "Workers from Meta's AI infrastructure units, Fundamental AI Research unit or FAIR, and other product related positions will be impacted." And to me what this is kind of saying is, so Meta acquired Scale AI, I think I mentioned this on the last podcast. What Scale was effectively doing was doing crowd work and controlling crowd work, and then saying that they had these amazing products. And so Wang, this really terrible figure in all this, got this high-up position, and something about this deal tells me that a lot of those 600 jobs that they shed are part of that deal, like that acqui-hire. But you know, I can't prove it, 'cause they're not saying who got laid off.
Emily M. Bender: Yeah. And it's just, I think it's really on the nose about the bubble that we've had all of these layoffs in the guise of, well, AI can do things now so we don't have to have workers. And now they're laying off 600 people who are supposedly building the artificial intelligence.
Alex Hanna: Yeah. Okay, next.
Emily M. Bender: All right, next. So this is bubble still, but on the environmental side. This is from Futurism, by Joe Wilkins on October 10th of 2025. Sticker is "High Latency," and the headline is "AI data centers are an even bigger disaster than previously thought." Subhead, "'No wonder my new contacts in the industry shoulder a heavy burden, heavier than I could ever imagine. They know the truth.'" And basically what's going on in this article is that, turns out that all of this buildup that's happening with data centers, seems like it's, you know, durable infrastructure, and it's not. So, second paragraph here says, "In short, this is because data center components age rapidly, either made obsolete through rapid advances in technology, or broken down over years of constant high-powered usage."
Alex Hanna: Yeah. And so that was the thing where I think the forecast for data centers and their depreciation had been something like 10 years. And now this is basically, it's closer to three to five years. So yeah, just really awful, especially as, you know, the things that are going to depreciate more quickly are the hardware. And if you completely expect to be building newer and newer, you know, GPUs or whatever, then there's just gonna be more and more churn, and then, you know, you're gonna have to put more and more capital expenditure in to keep up with the costs. Not to mention all the excessive mining of cobalt and the three Ts and everything, as well.
Emily M. Bender: Exactly. And then the e-waste at the other end of it. Yeah. Backing up to the previous article, we have a contribution here from abstract_tesseract: "Heavy is the head that wears the AR headset."
Alex Hanna: Very good.
Emily M. Bender: Pretty funny. I guess it applies here too, but.
Alex Hanna: magidin also has a good, post here, which is, "What do you call Facebook laying off 600 AI workers? A good start."
Emily M. Bender: Yeah. Although I also feel bad for those workers.
Alex Hanna: I feel bad for the workers. It's also...
Emily M. Bender: Yeah. Unless they're the ones who were getting like, multi-million dollar annual salaries, and then it's like, yeah, have a nice vacation.
Alex Hanna: Oh, yeah. It's also like, ah, those are so ridiculous.
Emily M. Bender: Yeah. And thx_it_has_pockets says, "AI isn't profitable or sustainable? I wish someone could have warned us! /s," for sarcasm. Okay.
Alex Hanna: This one is pretty sad. So this is from Kate Clancy on Bluesky, and they say, "I'm being asked to tell my department which pieces of lab equipment should be supported with backup power, because they expect rolling blackouts and brownouts in 2026 due to increased energy demands from AI. In case you're wondering how my day is going." And, wait, how do you get this threaded view on Bluesky? Is this new? Sorry, I haven't had this yet.
Emily M. Bender: I think it's in settings.
Alex Hanna: Oh, wow. I need to change that. You're blowing my mind here. This person is at, I think, UIUC in Urbana-Champaign?
Emily M. Bender: I think so.
Alex Hanna: Yeah. And they're in-
Emily M. Bender: University of Illinois, so that would be UIUC. Yeah.
Alex Hanna: Yeah. So really sad that that is, you know, that is something that is coming. Not only increases in price of electricity for rate payers, but also you're not even gonna get service at certain times, because of redirecting energy from the grid to these data centers.
Emily M. Bender: And it seems like, you know, electrical utilities ought to have some collective decision making around them, and not just the tech companies getting to grab the power, while things like public universities have to figure out their backup power. Which by the way is also going to be, you know, generators, probably, and something else that is not super sustainable. So let's add that to the environmental cost of AI.
Alex Hanna: Yep.
Emily M. Bender: Okay. The end of each of these sets does have a little bit of a palate cleanser, pick me up, something good happened in the world. Here we have Deloitte having to pay money back to the Albanese government after using quote "AI," quotes mine, in a $440,000 report. This is reporting in The Guardian. Sticker is "Australian Politics," and subhead is, "Partial refund to be issued after several errors were found in a report into a department's compliance framework." Where's the journalist's name? Oh, here. Krishani Dhanji, yeah. So, "Deloitte will provide a partial refund to the federal government over a $440,000 report that contains several errors, after admitting it used generative artificial intelligence to help produce it."
Alex Hanna: Incredible.
Emily M. Bender: Yeah. So, you know, sad to see that they did that, but wonderful to see them being held accountable. And, you know, play stupid games, get stupid prizes, right?
Alex Hanna: Yeah, absolutely. And people telling me how to change my settings, thank you, I did it just now.
Emily M. Bender: Excellent. Okay. Moving on to the education and also fake people region of Fresh AI Hell. I'm gonna start this one. Constant Scream, aka Ajax Singer on Bluesky, is posting about a New York Times piece. I'm gonna start it, Alex, you get to pick it up. Constant Scream says, "This is a dead classroom. AI created lessons and assignments shown to students, who then use AI to complete the assignments, which are then assessed and graded by AI. It is educational kabuki." But the actual piece, Alex, would you like to describe it there?
Alex Hanna: Yeah. So this is from the New York Times. "AI is changing classrooms," and I'm always annoyed with the New York Times stylization because it's A, dot, I, dot, and I don't love the periods. Anyways, "We spoke to the co-founder of Alpha Schools about how her private K through 12 schools are using AI to generate personalized lesson plans, and enabling teachers to spend their time motivating rather than teaching students." Motivate. Okay. And in the picture, there's an absolutely batshit quote from this person, where she's sitting in the classroom and has on, you know, a smart blouse and some red pants, and is saying, "Our kids are crushing their academics, and they're doing it in a fraction of the time." And that's just, what a wild thing to say about education, as if time savings or time optimization needs to be the metric that education is optimized for. And I'm particularly curious about this because Alpha Schools is this thing, I mean, one of the schools is in San Francisco and, you know, like many of these things, is all the rage. And it's just sending me up the wall. I mean, to me it's just this broader trend of the push towards privatization of education, which has been going on for quite some time, but now has just really accelerated with, you know, this stuff in schools.
Emily M. Bender: And we just saw some further reporting today, about a kid who was going to one of these schools and had lost a bunch of weight. So they were being advised by their doctor to eat multiple snacks during the day. Parents send those snacks in, it goes well for a couple of days, and then the kid brings the snacks home at the end of the day and reports to their parents that they were told not to eat until they got some academic metrics up, or something. Something horrific like that. And so, you just know that the students are, you know, data-fied to hell and back in these schools. And yeah, not treated as people.
Alex Hanna: Yeah.
Emily M. Bender: Ahh. Okay. Maybe we can do this one more quickly. This is by Benj Edwards, September 12th, 2025 in Ars Technica. Sticker is "Citation Needed," which I love. And the headline is, "Education Report Calling For Ethical AI Use Contains Over 15 Fake Sources." And the subhead, "Experts find fake sources in Canadian government report that took 18 months to complete." You know, I don't think there is ethical use of these synthetic text extruding machines, and this just really points that out nicely.
Alex Hanna: Yeah. It's just hilarious when it's just, it's the meta kind of bad usage. Yeah. And one of the subheads of this piece is, "The irony runs deep." Yes, yes it does.
Emily M. Bender: Yes it does. All right. This one is really sad. Yeah. So The Guardian again. So this is by Julia Carrie Wong on October 2nd of 2025, and the headline starts with a quote: "'My son genuinely believed it was real,' end quote. Parents are letting little kids play with AI. Are they wrong?" Yes! And then the subhead, "Some believe AI can spark their child's imagination through personalized stories and generative images. Scientists are wary of its effect on creativity." I just wanna read this first story here, because it's heart wrenching. "Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his super loquacious four year old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed. 'He was not done telling the story that he wanted to tell, and I needed to do my chores. So I let him have the phone,' recalls Josh, who lives in northwest Ohio. 'I thought he would finish the story and the phone would turn off.' But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. 'The transcript is over 10K words long,' he confessed in a sheepish Reddit post. 'My son thinks ChatGPT is the coolest train-loving person in the world. The bar is set so high now. I'm never going to be able to compete with that.'"
Alex Hanna: Ugh. Yeah. So just-
Emily M. Bender: Yeah. I've been there with chatty kids. I was that chatty kid. My parents will tell you. And, you know, it's okay to say, "You know what, I wanna hear the end of the story when I'm done with the chores," or something, right. This is just another case of what would you have done three years ago?
Alex Hanna: Yeah, I mean, possibly. I guess the thing about that is, even three years ago, you still had, you know, the ways in which you're giving a kid an iPad or something. Even that is not a great solution, right?
Emily M. Bender: Yeah.
Alex Hanna: The thing about some of these stories with kids that really is hard is that, you know, parents are stressed and having to deal with this, and so, I mean, tech is a possible distraction, but there's just so many things that can go wrong.
Emily M. Bender: Yeah.
Alex Hanna: And you're just like, I do not want kids to be watching whatever, even without ChatGPT, if it's not that it's YouTube, or it's Roblox, or it's just any of these platforms that are just doing really terrible things for kids.
Emily M. Bender: Yeah. And there are lots and lots of terrible problems. I think there's an additional level of terrible when it is this, the synthetic text extruding machine, with no accountability. Not saying that YouTube is good, not saying that Roblox, you know, the platform is good. But just, so here, "'My kids are the guinea pigs.'" Quote, "For Saral Kaushik, a 36-year-old software engineer and father of two in North Yorkshire, a packet of freeze dried astronaut ice cream in the cupboard provided the inspiration for a novel use of ChatGPT with his four year old son. 'I literally just said something like, I'm going to do a voice call with my son, and I want you to pretend that you're an astronaut on the ISS.'" It's like, why? Like you could do that, pretend with your kid, just as a parent and child together pretending to be astronauts, right? The message that we aren't good enough with our imagination, I think, is one of the insidious things here.
Alex Hanna: Yeah.
Emily M. Bender: Okay. We could stay on that one forever, but...
Alex Hanna: Yeah. Let's go to... okay. So this one is, "a16z-" so Andreessen Horowitz- "backed startup sells thousands of 'synthetic influencers' to manipulate social media as a service." And this is by Emanuel Maiberg, from October 24th. So, "A new startup backed by one of the biggest venture capital firms in Silicon Valley, Andreessen Horowitz, is building a service that allows clients to 'orchestrate actions on thousands of social accounts, through both bulk content creation and deployment.' Essentially, the startup, called Doublespeed, is pitching an astroturfing AI powered bot service, which is in clear violation of policies for all social media platforms." Yeah, and so there's an interesting clip from the click farm. And so there's a picture of some masc-looking people on a call, and there's an image of a phone rack and they've just got them kind of all networked, running these content mills. And then they've got these AI influencers, and there's a picture of this femme person with black hair, doing like one of those confessional type of things, speaking to a camera. And then they're basically doing this kind of A/B testing to find out what's the most monetizable. Yeah, just absolute slop at scale.
Emily M. Bender: And one of the interesting points in here to me is that this is clearly against the terms of service, as you said, for these platforms. And Andreessen is on Meta's board, and also investing in this. Which is, yeah. All right, this is from Rest of World, with sticker, "Innovation," ironically. Headline, "AI-powered Textbooks Fail To Make The Grade In South Korea. South Korea's AI learning program was rolled back after just four months following a backlash from teachers, students, and parents, underlining the challenges in embedding the technology in education." I would say underlining the pointlessness, but, okay. And this is by Junhyup Kwon, from October 15th. Do we trust Rest of World's bullet points at the top, Alex?
Alex Hanna: Rest of World has been better than many of them, but I mean, I still would skip them.
Emily M. Bender: Yeah. Okay. So, but basically the gist of it here is that it was, you know, all the usual promises. This is going to make things personal for every student, it's going to democratize, it's going to allow teachers to focus on what really matters. But in fact, inaccuracies, data privacy, increased workload, of course. And then this third bullet point, I'm gonna read it 'cause I'm amused, even if it's possibly fake. "The program suffered from a lack of testing, hurried implementation, and a change of government."
Alex Hanna: A change of government is really putting it softly. Wasn't there an attempted coup?
Emily M. Bender: Yeah. So, okay, then the actual chaser was this one, which you can have, Alex.
Alex Hanna: Oh, thank you. So the original post is from LinkedIn, from someone named Ryan Kelly, who is a manufacturing and supply chain technologist, and who says, "Hey LinkedIn, Paul Day and I are looking to form a 'high reliability,' in quotes, manufacturing peer group. If you own production slash quality slash ops where failure isn't an option..." And then it's like an AI-generated image of a patch, and they're-
Emily M. Bender: Like you would put on a uniform or something, right?
Alex Hanna: Yeah, like a uniform patch. And it's a picture of a rocket. And then the text around it is this terrible AI-generated text, where it's supposed to say, "High reliability manufacturing group: failure is not an option," but it reads more like, "Ha releb mafacshing, psi-" there's like the Greek letter psi as a U- "failure is is not a blurred N N, option." It's just incredible AI slop. And then the quoting person is Nosferatu RBMK, who says, "LMAO, god I love LinkedIn." So really just, it's incredible stuff. Like, you can't make this stuff up.
Emily M. Bender: And, two things here. abstract_tesseract says, "Relibbity boppity boo!" And that's not just any rocket, Alex, that looks like the space shuttle, which is ironic.
Alex Hanna: Yeah. Oof. Well, also the point of that space shuttle looks really Kremlin-like.
Emily M. Bender: It does, yes.
Alex Hanna: It's actually very funny.
Emily M. Bender: Yeah. Okay, so now I'm gonna get our third set of these things. And so, we've left the education and fake people part of Fresh AI Hell, and we're headed to science, medicine, and psychotherapy. So we are seeing all kinds of just really terrible applications of so-called artificial intelligence in scholarship and the production of science, and the call is coming from inside the house. Here is something from Nature, as in the highly prestigious scientific publication Nature, called Nature Research Assistant. And the URL is natureresearchassistant.com. But you can tell by the logo this is the same Nature, right? I don't have a date on this. I think it was announced this month. And it says, "Your new AI-powered research assistant. Helping you to save time reading, understanding, and writing research papers." And there's a login button. And below that it says, "Don't have an account? Join the waiting list!"
Alex Hanna: Oh, geez. Yeah. Just really terrible stuff. And so then it's got things that you can do. "Summarize papers: AI-generated summaries will help you to swiftly comprehend even the most complex papers." Somehow I doubt that. And then-
Emily M. Bender: That's not how this works.
Alex Hanna: And then there's other things: if you click the related articles, it says, "See how any paper fits within the academic landscape. Find similar studies and discover key review articles, not just from Springer Nature, to support your literature reviews." Can't you just check the citation list? Okay. And then chat with the paper, and I'd love for someone to write something about this desire to have a chat interface for a static artifact. 'Cause that to me is this really fascinating and kind of absurd way to consume a piece of media. But yeah, there's something there that's weird and perverse and I don't know why so many people who are developing these things are offering that.
Emily M. Bender: Yeah, that would be a fun study to get into. So we also have "Manuscript advisor," and then, oh, it's a "responsible AI tool." Okay. More Nature.
Alex Hanna: So this is reporting on an event. So this is from October 24th.
Emily M. Bender: 14th, October 14th.
Alex Hanna: Sorry. Yes, 14th. Thank you. "AI Bots Wrote And Reviewed All Papers At This Conference." Elizabeth Gibney, and the subhead is, "Event will assess how reviews by models compare with those written by humans." There's hope, luckily. And this is a stock photo, it is not extruded.
Emily M. Bender: Yeah, it's cute. Also, you can tell it's a stock photo because the Mac laptop involved is about a decade old.
Alex Hanna: There you go. So this event, so, "The event, known as Agents4Science 2025, will be held online on October 22nd. The attendees will still be humans. It will feature presentations of the submitted papers, given either by the artificial intelligence agents themselves, or by the humans who ran the experiments, and panel discussions by academics." They call it "'a relatively safe sandbox where we can sort of experiment with different submission processes, different kinds of review processes.'" Says James Zou, an AI researcher at Stanford.
Emily M. Bender: At where now?
Alex Hanna: Oh, you know where. Oh, you know where!
Emily M. Bender: So much of this terrible stuff is coming out of Stanford, I just have to say. And it is embarrassing, as an alumna of the Stanford program.
Alex Hanna: I am proudly not an alumna of Stanford.
Emily M. Bender: What can I say? I did not know in the 1990s how probably awful it was then, and how awful it was going to become. And there's still good stuff happening at Stanford, too. Like, you know, Haley Lepp, previous guest on the show, is doing some really cool work there. And Charity Hudley is doing amazing work there, so it's not all bad. But, okay. So yeah, "It is designed to capture a quote, 'paradigm shift' in how AI is used in science that has taken place over the past year, says Zou. Rather than using large language models or other tools designed for specific tasks, researchers are now building coordinated groups of models known as agents to act as 'scientists working across the research endeavor,' he says." This is so much nonsense.
Alex Hanna: Yeah.
Emily M. Bender: And I just wanted to take us to the bottom of this one, where we have a quote from Matthew Gombolay, who is "a computer scientist at the Georgia Institute of Technology in Atlanta, who is also ethics co-chair of the 2026 AAAI conference."
Alex Hanna: Oof. Okay.
Emily M. Bender: And he says, "A more rigorous experiment than the Agents4Science one would be for an existing major conference to assign papers at random to human or LLM review, and then monitor which stream leads to more consequential breakthroughs, he says."
Alex Hanna: That's alarming.
Emily M. Bender: It is really alarming.
Alex Hanna: That's very alarming for this person also to be an ethics chair. Because, I mean, what does it even mean to monitor for more consequential breakthroughs? I mean, first off, if you're doing LLM review, you're giving short shrift to the authors. And so there's an opportunity cost of what gets basically shut down from LLM review. And then what does it mean to be consequential in that sense? I mean, it's kind of a misunderstanding of what it means to do science in particular areas.
Emily M. Bender: Yes, and again we point to the work of MJ Crockett and Lisa Messeri. And see, I think it was episode 31, I wanna say. Somewhere in the past we had them on. It was fantastic.
Alex Hanna: Yeah. And then there's a good comment from sjaylett in the chat, who says, "Aren't paradigm shifts supposed to be when we come up with a better understanding of a field, not throw away some tools that were fine in favor of lazy convenience?" And so like, thinking about the way that we do paradigm shifts in science, you know, this is not things that LLMs are going to be adept at doing.
Emily M. Bender: Yeah. And sjaylett also says, "Wait, how is the cause and effect supposed to be working there? Quality of review causes importance?" No, I think the thing is, the conference wants to publish the most consequential papers. And so, which way of reviewing leads to papers with more impact showing up in this conference, I'm guessing is the logic. But anyway, AAAI.
Alex Hanna: Yes.
Emily M. Bender: So this is a couple of things on LinkedIn. So Omer Ben-Porat, two weeks ago: "#AAAI26. This year, in addition to five human reviewers, we also received an AI review. The humans sort of like the paper, so there's hope. The AI, however, took a bolder approach. It confidently declared that our paper has, quote, 'technical errors,' and provided an entire A4 page detailing a counterexample construction to one of our proofs. The construction is beautifully written, impressively detailed, and utterly wrong. So now we're facing a new 2025 era dilemma. Do we spend part of our precious 2,500 character rebuttal explaining why the AI's hallucinated counterexample doesn't actually exist, just to make sure the human reviewers don't believe it does? The future of peer review is here and it's... creative." And then we have a wonderful rant by Dagmar Monett about this whole thing. Basically saying, this is not okay, AAAI should not be doing this, and pointing out that AAAI has an in-kind sponsorship to do this from OpenAI.
Alex Hanna: Oof. Woof.
Emily M. Bender: Blegh. All right, I'm gonna keep us moving because we're actually running a little bit behind now.
Alex Hanna: So now into medicine. So this one's from the Financial Times, and filed in medical errors that everyone can predict. "AI Medical Tools Downplay Symptoms In Women And Ethnic Minorities." Subhead, "Large language models reflect biases that can lead to inferior healthcare advice to female, Black, and Asian patients." Reporting here by Melissa Heikkilä. This has got the umlauts on the final A. Published September 19th, 2025. And so, "A series of recent studies have found that the uptake of AI models across the healthcare sector could lead to biased medical decisions, reinforcing patterns of undertreatment that already exist across different groups in Western societies."
Emily M. Bender: Really nothing surprising here, but glad that it's being reported on.
Alex Hanna: Yeah.
Emily M. Bender: And also, how dare people be actually implementing this, given that this was entirely predictable?
Alex Hanna: Yeah. And so this is research by MIT's Jameel Clinic. Yeah, and so we had a prior discussion of this, when we had...
Emily M. Bender: Roxana Daneshjou, yeah.
Alex Hanna: Roxana Daneshjou, yeah, who has published a lot on this in medical models as well. So-
Emily M. Bender: Doing, I should say, great work at Stanford. So, just to, yeah.
Alex Hanna: Yeah.
Emily M. Bender: Okay. This one made me so mad. "Therapists are secretly using ChatGPT. Clients are triggered." This is reporting by Laurie Clarke in MIT Tech Review, September 2nd. And the subhead is, "Some therapists are using AI during therapy sessions. They're risking their clients' trust and privacy in the process." And, again, it's one of these places where people are overworked, you can understand why someone would be reaching for something to help them out. And I'm just so mad that this is so easily in reach, because it is infuriating.
Alex Hanna: Yeah.
Emily M. Bender: So, just to read a little bit of it, "Declan would've never found out his therapist was using ChatGPT had it not been for a technical mishap. The connection was patchy during one of their online sessions, so Declan suggested they turn off their video feeds. Instead, his therapist began inadvertently sharing his screen. 'Suddenly I was watching him use ChatGPT,' says Declan-
Alex Hanna: That's so, that's horrifying. I would just, ugh.
Emily M. Bender: Yeah. And it's full of examples here, and I wanna point out there's some work by, and now I'm going to make sure that I actually have this person's full name. Someone that I got to meet at the Mind & Life thing that I was at, last week... last week? Two weeks ago, in India. So research by Anat Perry and colleagues, "Comparing the value of perceived human versus AI-generated empathy." Basically finding very clearly that as soon as you know it's artificial, it is not perceived as empathetic. Which makes sense. So, lemme get back to this. Hey, you get the chaser again, if you want it.
Alex Hanna: All right. I'll take it, and then you get the last two.
Emily M. Bender: Okay.
Alex Hanna: So this is reporting from the, it's in The Guardian. So, "Microsoft Blocks Israel's Use Of Its Technology In Mass Surveillance Of Palestinians. Tech firm ends military unit's access to AI and data services after Guardian reveals secret spy program." And the journalists here are Harry Davies and Yuval Abraham. Yuval is also a reporter at +972 Magazine, who's done a lot of reporting on Israel's use of quote unquote "AI systems." And so, "Microsoft has terminated the Israeli military's access to technology it used to operate a powerful surveillance system that collected millions of Palestinian civilian phone calls made each day in Gaza and the West Bank, the Guardian can reveal." And so, basically they had been storing some of this stuff on Azure, and Microsoft effectively said that they couldn't do some of that storage anymore. So, good news here. There's still a lot to go here. I had the good fortune to catch some of the No Azure For Apartheid activists speaking at a conference I was at recently. And they were talking about the long term campaign to push Microsoft to break its deals with Israel, especially in IDF usages of its technology. So, good job for these activists, but there's still a lot to do.
Emily M. Bender: Yeah, exactly. So this is the palate cleanser to celebrate the win, while recognizing that there's a lot to do.
Alex Hanna: Yeah.
Emily M. Bender: All right, we are now on to batch number four.
Alex Hanna: Oof. Okay.
Emily M. Bender: Yeah. Wow.
Alex Hanna: This really is a slog. I forgot, I gotta write songs for next time. I remember why we did that.
Emily M. Bender: Yeah, it helps. Okay, so we are in policy, law, politics, and policing, I think is, that's the section of Fresh AI Hell that we're in. And this is a post by Roland Meyer on Bluesky, saying, "There's a growing and deeply problematic tendency to use #genAI for political and historical education. Take this example from Germany, which translates as, 'Hey, AI, who should I vote for?'"
Alex Hanna: Geez.
Emily M. Bender: "What's typical here is that AI is not only addressed as a pseudo-person, but also as a kind of impartial judge," to which I say, nein, danke. But this basically, to take the next skeet here, he writes, "The project assembles AI images of cityscapes supposedly representing the political programs of various parties. With their otherworldly glow and cartoonish aesthetics, all of these images have the same generic AI slop look familiar from countless LinkedIn posts and PowerPoint presentations." So it's like, "Here, just, you know, get a vibe of what their party platform translated into AI slop imagery tells you." Like, no, sit down and read the policies, damn it!
Alex Hanna: Although, I will say when you go to the AfD, the sort of neo-Nazi party, it looks pretty horrifying. It's got this picture of this huge hall, and like...
Emily M. Bender: Oh, this isn't the system, though. If we read the text here, Roland Meyer says, "It's no coincidence that right-wing parties have used exactly this kind of image before in a similar manner. If your politics is all about offering imaginary solutions to mostly phantasmatic problems, generative AI is the perfect tool for propaganda."
Alex Hanna: So who, so it was the parties generating this, or was it this program?
Emily M. Bender: So in general it's the program, but this one is-
Alex Hanna: This particular one the AfD was like, it's their own fascist propaganda.
Emily M. Bender: Yeah.
Alex Hanna: Where it's got like, flying cars- so it's like, explicitly fascist. It's like everybody flying German flags and these flying cars and these weird- it's giving like really strong Starship Troopers vibes.
Emily M. Bender: Yeah.
Alex Hanna: Yeah, it's... damn, like, what the fuck?
Emily M. Bender: Yeah. Okay. Yeah, we're definitely in the what the fuck region of things here, too.
Alex Hanna: Yeah. So this is real what the fuck. This is actually from the EFF, the Electronic Frontier Foundation. And the headline is, "Flock's Gunshot Detection Microphones Will Start Listening For Human Voices." And this is by Matthew Guariglia. And it's October 2nd, 2025. So, "Flock Safety, the police technology company most notable for their extensive networks of automated license plate readers spread throughout the United States, is rolling out a new and troubling product that may cause headaches for cities that adopt it- detection of 'human distress,' in quotes, via audio. As part of their suite of technologies, Flock has been pushing Raven, their version of acoustic gunshot detection. These devices capture sounds in public places and use machine learning to try to identify gunshots and then alert police. But EFF has long warned that they are also high-powered microphones parked above densely-populated city streets. Cities now have one more reason to follow the lead of many other municipalities, and cancel their Flock contracts before this new feature causes civil liberties harms to residents and headaches for cities." Ugh, yeah. So this is just like, yeah, this is... I couldn't even imagine the training data set that they have for people in quote unquote "distress."
Emily M. Bender: Yeah. And I'm sure they're not even, you know, disclosing that. So, yeah, this is nonsense. More nonsense. Okay, so Alex, I'm gonna give you this one 'cause it's California.
Alex Hanna: Yeah. So this is really troubling. I mean, this is reporting from Mission Local. So this is an independent publication for the San Francisco region. And so, the title is, "SF Launched An AI Chatbot For RV Dwellers. It's Got Errors." So, "San Francisco's RV ban is less than two weeks away. RV dwellers will be a testing ground for the city's first public chatbot." This is by Marina Newman, October 17th. And so it gives a story about this person named Armando, who, you know, basically is driving his RV around to his job, but also going to city agencies and human services locations. And then San Francisco is gonna have an RV parking ban, which is gonna give authorities the right to tow the RVs that people live in, which is, first, awful policy. And I mean, similar policies have been debated, I think, in Berkeley and Oakland, and I think there's been a lot of pushback. Yeah, so basically, there's a chatbot that's been deployed by our new mayor, Daniel Lurie, through his Office of Innovation, and it's supposed to explain the RV parking ban. It was developed by a company called Polimorphic, and it's supposed to answer questions. And, you know, as one would expect, it gives the wrong answers. It gives conflicting information. It doesn't even give the same information if you ask something twice. When Armando asked it, versus when Mission Local provided prompts for it, it gave different responses. So, yeah. And this reminds us of the NYC chatbot that they deployed, and that The Markup and The City and Documented had talked about, about a year or two ago.
Emily M. Bender: Yeah. And it also answers the question, sort of, of when Governor Newsom said, "Yeah, we're gonna use AI to address homelessness." Uh, no.
Alex Hanna: This is what he meant. I mean, what he was actually referring to were kind of bed allocation algorithms. Which didn't really make sense. And now this is just one of these other places where the city is being, absolutely just, you know, batshit here.
Emily M. Bender: All right, I'm gonna keep us moving. Sorry for the synthetic video playing on the screen, folks. I'm gonna read the text about it. This is from Prem Thakker on Bluesky, posted October 22nd. "Andrew Cuomo's campaign just posted and quickly deleted this AI-generated ad depicting quote, 'criminals for Zohran Mamdani.' Features a Black man in a keffiyeh shoplifting, an abuser, a trespasser, a trafficker, a drug dealer, and a drunk driver all declaring support for Mamdani." And this is horrific, racist AI slop. And the only reason I decided that we should talk about it is that people should know what Cuomo's campaign did. And it's like, posting it and then taking it back down is still posting it. I can take it off the screen now. Oof, sorry.
Alex Hanna: Yeah. Hopefully by the time you listen to this podcast, Mamdani will have been elected, and we can laugh Cuomo into being the abuser that he is.
Emily M. Bender: Yeah.
Alex Hanna: All right, so this is from Forbes. So, "DHS Ordered OpenAI To Share User Data In First Known Warrant For ChatGPT Prompts." The subhead, "Filed by child exploitation investigators with the DHS, the warrant reveals the government can ask OpenAI to provide information on anyone who enters specific prompts." By Thomas Brewster.
Emily M. Bender: October 20th. Yeah.
Alex Hanna: October 20th. So, yeah. I mean, I think we might have talked about this before: effectively, OpenAI is promising that it, you know, doesn't store prompts. Yeah, of course it does store prompts, and it's because they are gonna get subpoenaed by the federal government.
Emily M. Bender: So Forbes claims this is the first known federal search warrant asking OpenAI for user data. So, privacy nightmare. Chaser, some good, effective reporting here. This is by Phoebe Huss and Khari Johnson, October 23rd, 2025 in CalMatters. Sticker is "Technology," and the headline is, "LA County Moves To Limit License Plate Tracking, Citing CalMatters Report." And basically, there's some earlier reporting by CalMatters showing, that says here, quote, "Roughly a dozen police and sheriff's departments throughout Southern California shared data from the high tech camera systems, mounted on patrol cars and above roads, with federal immigration agencies." And so, "The September motion requests that the LA County Sheriff's Department, which operates independently from the supervisors, conduct yearly privacy training for deputies with access to license plate cameras, and that the data not be used for non-criminal immigration enforcement." So this is like down in the weeds, down in the details, but we see reporting on misuse of this data, and in particular misuse in conjunction with the horrific things that ICE is doing right now, leading to at least some movement on the part of local government to do less of it, which I appreciate.
Alex Hanna: Yeah, and I think there has been some movement in different locales in California. I mean, I was talking with a friend this morning about the movement and the organizing against all the automatic license plate readers, ALPRs, in San Diego. And places that have stronger privacy commissions have been able to push back on a bit of this. But yeah, I mean, it's definitely not a surprise that it's going to be shared across agencies in places where you'd expect supervisors not to do so, or law enforcement in those particular places not to do so.
Emily M. Bender: All right. And just to save maybe the most hellish hell for last, section five here I've got titled TESCREALism.
Alex Hanna: Oh lord, this one. So this is from someone named Spencer Moore on the website formerly known as Twitter. "Today we reveal CogPGT-" not GPT- "the world's most powerful genetic predictor of IQ. We achieve a correlation with IQ of 0.51, or 0.45 within family. Herasight customers can boost the expected IQ of their children by up to nine points by selecting the embryo with the highest CogPGT score." And this is just like the most, I mean, I don't even know what to say to this. This is just out and out nightmare eugenics. This is...
Emily M. Bender: This is horrific.
Alex Hanna: This is really horrific stuff. And the thing is, this is not particularly new either. I mean, better predictive modeling for, like, large genetic datasets has been around for a while. And also there's a pretty horrifying graph down here too about the IQ spread, where it's separating out between European, East Asian, and African. And they're ordered in that descending order of the IQ spread. So really, really awful stuff. And some discourse in the chat, where thx_it_has_pockets says, "We've entered the eugenics phase of innovation," and mjkranz saying, "Did we ever leave the eugenics phase?" And I agree, I don't think we ever have.
Emily M. Bender: No, we have not ever left the eugenics phase. And I also wanna say about this, that I was curious why it's PGT and not GPT, but also unwilling to click through to find out. So, do not know.
Alex Hanna: Yeah.
Emily M. Bender: So here is something on Futurism from October 22nd. Journalist is Maggie Harrison Dupré. The sticker is "Steves Unite." And the headline is, "Hundreds Of Power Players, From Steve Wozniak to Steve Bannon, Just Signed A Letter Calling For Prohibition On Development Of AI Superintelligence." Which is to say there's another new pause letter slash statement, and it's super short. The article says it "calls for a 'prohibition on the development of superintelligence,'" which it says should not be quote, "lifted before there is broad scientific consensus that it will be done safely and controllably," as well as with quote, "strong public buy-in." And basically, this is in fact put forward by the Future of Life Institute, who I think are probably feeling a little bit neglected by the media and needed some attention.
Alex Hanna: Yeah, they're just doing, I mean, they have another thing which is maybe funded by them, but we could talk about that another time.
Emily M. Bender: Another time. Yes. All right. I think I want this one.
Alex Hanna: Yeah, this is your tweet, so you might as well.
Emily M. Bender: So this is me on Bluesky, in September. And the inner one is a screen cap of something from Emad Mostaque's webpage for his book. And I wrote, "This is freaking hilarious. Mostaque wrote a book, I guess, or made a website for one anyway, with blurbs from LLMs." And so he's got this "Start reading" section, and then you can download the PDF, download the EPUB, Kindle, NotebookLM- so have it fake read to you. Also GPT-5, Claude, Grok and GitHub as ways of accessing this book. And then he's got blurbs from Claude Opus 4.1: "5 Stars. Mostaque has written the definitive guide to economics in the age of AI." And also GPT-5 Pro and Gemini 2.5 Pro. And my quote post says, "The longer I look at this, the funnier it gets, honestly. The 'start reading' options with LLMs? What's that gonna do, give a fake summary of the book?"
Alex Hanna: Yes.
Emily M. Bender: Just, yeah.
Alex Hanna: Yeah. Absolute delusional stuff from Emad. Yeah.
Emily M. Bender: Yeah, I was amused. Okay. Less amusing.
Alex Hanna: Yeah. This one is an op-ed in Nature, by Dashun Wang, and the title is, "Prizes Must Recognize Machine Contributions To Discovery. The future of science will be written by humans and machines together. Awards should reflect that reality." So this is October 7th, 2025. Let's see. Yeah, so, "Today, human ingenuity is expressed through the machines we create. Almost every big scientific breakthrough of the past 50 years, from detecting gravitational waves to sequencing the human genome and mapping the structure of proteins with artificial intelligence tools, has depended on a machine that could sense more, measure more precisely, or calculate faster than any human can. Yet prestigious scientific prizes still frame achievements mainly as a human endeavor. Nobel prizes have repeatedly gone to people whose discoveries were made possible only by extraordinary technologies, but the machines and communities that built them are rarely acknowledged as co-creators." So really absurd argumentation here. Let's honor these partnerships. I mean, I would like to thank my pencils, I guess. And also, there's something different in saying, "I would like to thank the people who made scientific infrastructure possible." Sure, I think that's great. You know, like, if you build great tooling, I actually think that should be recognized. And I was actually briefly an editor for the Journal of Open Source Software in their social sciences category, and we had an open letter that was like, we should give credit to people who write open source software for scientific creation. And I agree with that wholeheartedly. But if you're saying that we need to reward like AlphaGo or- No, what are we doing here?
Emily M. Bender: Yeah. Yeah, exactly. So, reward the people, sure. So some stuff towards the end, he says, "It will future-proof our scientific honors. As more important discoveries are made using machines, prize systems that exclude them will become irrelevant." I don't think so. And, "This is not about replacing people with machines or diminishing human achievement. We can still celebrate human genius, but we must also admit what provided the wind in our sails. To deny that partnership is to cling to a romantic but incomplete vision of discovery. To embrace it is to honor the full constellation of creativity, human and machine, that drives science." No. There's no machine creativity here. There's human work that goes into building machines, and that work should be recognized, sure. But not the machines.
Alex Hanna: And it's really weird, I don't know where this kind of argument's coming from, unless it's indicating that you want to just grant more agency to a quote unquote "AI" or something. But I suppose what it does seem to do is obscure all the human labor that is invested in those technologies. And also, I mean, we should be granting broader prizes to larger teams, and dispelling kind of the great man of science thesis. And, you know, I think granting that honor to machines would really obscure that.
Emily M. Bender: So he wants the great man and the great machine view of science.
Alex Hanna: Yeah, apparently. I guess that's the vibe here.
Emily M. Bender: All right. One last palate cleanser.
Alex Hanna: All right, you get the last one, 'cause I had two palate cleansers in a row.
Emily M. Bender: Okay. I'll read it, but I think you'll have opinions, too.
Alex Hanna: Okay.
Emily M. Bender: This is from Bloomberg, journalists are Lara O'Reilly and Jordan Hart. And it was published on October 23rd, 2025. And the headline is, "The Hot New Trend In Marketing: Hating On AI." Which is pretty great. So, "Within throwing distance of Apple stores around Manhattan and Google's New York HQ, bus stop posters tease the Big Tech giants. 'AI can't generate sand between your toes,' one read. 'No one on their deathbed ever said, I wish I'd spent more time on my phone,' said another. The ribbing came from Polaroid, promoting its point-and-click Flip camera. 'We are such an analog brand that basically gave us permission. We can own that conversation,' said Patricia Varella, Polaroid's creative director." And so the brands here are something called Aerie, which I guess is clothing, Polaroid, and Heineken. And they are among the latest to join the anti-AI marketing trend.
Alex Hanna: Yeah. abstract_tesseract says, "Cheers to us for hating on AI before it was cool." And sjaylett says, "We are the hipsters of AI hate." And magell-
Emily M. Bender: magidin says-
Alex Hanna: magidin says, "I'd never been a trendsetter before. Thanks AI." So kudos to everybody for being ahead of the brands. You made it all possible.
Emily M. Bender: Yeah. Oh, all right.
Alex Hanna: All right. With that, we've navigated our way out of hell, and that was barely scratching the surface.
Emily M. Bender: Oh man, I had so many links. It was so hard to narrow it down to those. I think there were, what, 25 to 30 total in there that we did this hour.
Alex Hanna: Yeah, there was a lot of stuff. And I mean, I think even like, we had three to five things in the chat today. So it comes hot and heavy.
Emily M. Bender: It does.
Alex Hanna: Yeah. That said, that's it for this week! Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order The AI Con at thecon.ai, or wherever you get your books, or request it at your local library.
Emily M. Bender: But wait, there's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at dair-institute.org. That's dair-institute.org. You can find video versions of our podcast episodes on PeerTube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender.
Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.