Mystery AI Hype Theater 3000
Wrapping Up a Hellish 2025, 2025.12.15
For our last recording of 2025, Emily and Alex take on a TIME article naming the "architects of AI" as their person of the year. We also look back at the year in AI nonsense, and share findings from our Fresh AI Hell Wrapped. Happy Hype-y New Year!
References:
- TIME's 2025 Person of the Year package naming the "architects of AI," by Charlie Campbell, Andrew R. Chow, and Billy Perrigo (December 11, 2025)
Also referenced:
- Stanford HAI's AI Index Report 2025 (top takeaways)
- Futurism, on the Economist's analysis of US Census Bureau survey data on AI use at large companies
Fresh AI Hell:
- Fresh AI Hell, Wrapped
- Adobe for Education outputs sexualized images
- "'Low Tech ChatGPT' on physical paper"
- "Springer Nature retracts, removes nearly 40 publications that trained neural networks on 'bonkers' dataset"
- Hologram lecturers and robot sandwich-makers
- "'ChatGPT for Doctors' Startup Doubles Valuation to $12 Billion as Revenue Surges"
- No more ideas. Need AIdeas!
- "For the First Time, AI Analyzes Language as Well as a Human Expert"
- "Woman Hailed as Hero for Smashing Man’s Meta Smart Glasses on Subway"
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Find our book, The AI Con, here.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.
Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 70, which we're recording on December 15th of 2025. This is our last live stream of the year, so we'll be looking back over 2025 as a whole.
Emily M. Bender: It's been a long year in the AI hype mines. So we also reviewed the data from our Fresh AI Hell segment, where we always have way more hell than we can possibly discuss. An AI Hell wrapped, if you will. We'll share those findings later in the show.
Alex Hanna: Part of our inspiration for this episode comes from a terrible TIME article naming the quote unquote "architects of AI" as their person of the year. From a cover illustration inspired by automated art to a bunch of disclosures about TIME's own financial ties to AI, this piece is such a rich text. So let's get into it.
Emily M. Bender: All right, here we go. This is how it appears on time.com, and you'll notice that there's this pop uppy thing here that you cannot get rid of, that continues to hover over everything you're trying to look at. And it says, "Explore the full archive of TIME, a century of journalism, insight, and perspective with AI that helps you research, connect ideas, and uncover stories across every era and topic." And then we have the goddamn sparkle emoji, and a little box that says, "Ask me anything."
Alex Hanna: And it doesn't go away. Scrolling down, it's got that dark pattern where it just follows you. And if you do click on it, it then expands to fill the whole screen.
Emily M. Bender: Oh. Oh no, I didn't try that. Oh no.
Alex Hanna: Yeah. So you expand on it and it says, "You're reading this! Try asking for related articles. Try summarizing this article." Try doing all this other bullshit. And it's these suggested prompts. So it's just really terrible, awful dark patterns.
Emily M. Bender: I guess once I've done that, it does get a little bit smaller, but we went over to the archive.is site, to look at this without having to have that floater. It's like having a floater in your eyes, and it is so viscerally appalling because I am trying to read words that ostensibly somebody wrote. And not only is the AI thing in my way, but it is physically blocking the words and won't go away.
Alex Hanna: Yeah. Pretty upsetting. So this is the article. So published December 11th, 2025. Story by Charlie Campbell, Andrew R. Chow, and Billy Perrigo. And then it starts with this image of Jensen Huang. "Jensen Huang needs a moment. The CEO of Nvidia enters a cavernous studio at the company's Bay Area headquarters and hunches over a table, his head bowed. At 62, the world's eighth richest man is compact, polished, and known among his colleagues for his quick temper as well as his visionary leadership. Right now, he looks exhausted. He stands silently. It's hard to know if he's about to erupt or collapse. Then someone puts on a Spotify playlist and the stirring chords of Aerosmith's 'Dream On' fill the room. Huang puts on his trademark black leather jacket and appears to transform, donning not just the uniform, but also the body language and optimism befitting one of the foremost leaders of the artificial intelligence revolution." God, okay.
Emily M. Bender: And so I was reading this, and I was trying to figure out what he was actually doing. What's the event- that he's in a cavernous studio, he needs a moment, but before what, and what's the playlist for?
Alex Hanna: It seems like it's about the interview. Like they came in on him about to do the interview. 'Cause it says later down in the article that they talked to him in a 75 minute interview, but I'm not a hundred percent sure.
Emily M. Bender: Yeah. It's just really strange writing. Okay, so I don't think we're gonna read all the text here, 'cause there's a lot of text, but I do wanna talk about parts of it, and also about the two alternative covers. And this first one is, I guess, the letters AI surrounded by scaffolding. And then there's people that you can't really tell who they are, at various places around the scaffolding, on the ground and several levels up, wearing hard hats. Some of them, not all of them.
Alex Hanna: Well it's the people in the business suits that are not wearing hard hats, so... their brains are too smart to- they don't need the equipment to protect them.
Emily M. Bender: But that's not the one that really got us going. It's this one. Did you wanna start by describing it?
Alex Hanna: Yeah, so it's the classic "Lunch atop a Skyscraper" image that was taken in 1932.
Emily M. Bender: So here's the original.
Alex Hanna: Yeah. So that was- and I'm just reading the Wikipedia page- "11 ironworkers sitting on a steel beam of the RCA building, 850 feet above the ground, during the construction of Rockefeller Center in Manhattan." And it was a staged photo that was part of a publicity stunt promoting the skyscraper. And then the new one is a picture of, from left to right, Mark Zuckerberg, Lisa Su, who is head of AMD, Elon Musk, Jensen Huang, Sam Altman, Satya Nadella, and then Dario Amodei, and then barely on the edge of the frame is Fei-Fei Li, running off the page. And it's so bizarre too, 'cause they're all in business wear, or their trademark outfits like Huang is. But it's so weird, 'cause it looks like they're all photoshopped in, their legs are dangling in weird ways- it looks like a bad photoshop, but apparently it was actually painted.
Emily M. Bender: Yeah. And the commentary on this has been fun online. There was someone who captioned this "stealworkers," S-T-E-A-L instead of S-T-E-E-L, which was pretty great.
Alex Hanna: Yeah. Love that.
Emily M. Bender: And someone else said, "One of the contrasts between this one and the original is that this time, you're rooting for the wind."
Alex Hanna: Yeah, god. Terrible.
Emily M. Bender: Yeah, they all look really awkward, and- was it Moser? Somebody was saying that Sam Altman has only three expressions. One is "seen a ghost," second is "processing bad news," and the third one is "being a jackass." And he said, looks like they went with "seen a ghost" for this one.
Alex Hanna: Yeah, he looks like he has kinda like a deer in headlights, but yeah. All right, let's get into some of this article. There's a lot of terrible stuff here. It's very hype-tastic, of course. And so let's start off with this quote from Huang. And so they say, "This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible." And here's the quote- "'Every industry needs it, every company uses it, and every nation needs to build it,' Huang tells TIME in a 75 minute interview in November, two days after announcing that Nvidia, the world's first $5 trillion company, had once again smashed Wall Street earnings expectations. 'This is the single most impactful technology of our time.'" And that is true to him. And I mean, if you are selling the shovels in the Gold Rush, one surely would say that.
Emily M. Bender: Yeah, indeed. So after that, this says, "OpenAI's ChatGPT, which at launch was the fastest growing consumer app of all time, has surpassed 800 million weekly users." And then next sentence, "AI wrote millions of lines of code, aided lab scientists, generated viral songs, and spurred companies to reexamine their strategies or risk obsolescence." And it's like, could you be any more breathless? And, you know, citation needed. "AI wrote millions of lines of code." Sure, I can set up a machine to churn out lines of code. That doesn't mean anything useful is happening.
Alex Hanna: Yeah, exactly.
Emily M. Bender: It ends with, parenthetical, "OpenAI and TIME have a licensing and technology agreement that allows OpenAI to access TIME's archives."
Alex Hanna: Yeah. And I was pretty upset at this article. You know, it goes a bit of back and forth, but it is very, very breathless. So down there, if you go to the next thing, it's got the kind of turn in it. And so here's the criticism, which I think we'll both have something to say on. So, "But researchers have found that AI can scheme, deceive, or blackmail as the leading companies' models improve. AI systems may eventually outcompete humans, as if an advanced species were on the cusp of colonizing the earth. AI flooded social media with misinformation and deep fake videos. And Pope Leo the thir-" Is it 13th, 14th?
Emily M. Bender: 14th, 14th.
Alex Hanna: Yes. I just, I did it on the fly. I haven't called him anything other than Pope Leo and the Chicago Pope. "-warned that it could manipulate children and serve, quote, 'anti-human ideologies.' The AI boom seemed to swallow the economy into quote, 'a black hole that's pulling all capital towards it,' says Paul Kedrosky, an investor and research fellow at MIT. Where skeptics spied a bubble, the revolution's leader saw the dawn of a new era of abundance. Quote, 'there's a belief that the world's GPT-" GPT, lord, the world's GPT, I'm losing it- "the world's GDP is somehow limited at 100 trillion,' Huang says. 'AI is going to cause that 100 trillion to become 500 trillion.'" Anyways, let's start there. Thoughts on that, Emily?
Emily M. Bender: I mean, it's hyped top to bottom, but what's really getting to me here is the way that the agency of the companies is being hidden, right? So this first thing, "AIs can scheme, deceive, or blackmail." That's just Anthropic's bullshit, right? If you do interactive fiction with the chatbot, you can cause it to output things that look like that. But then, "AI flooded social media with misinformation and deep fake videos," or "it could manipulate children." It's like, no, none of those things. People are using the synthetic media extruding machines that these companies are setting up access to and bankrolling, to flood social media with misinformation and deep fake videos. You can't hide the agency like that and still be serious reporters, you know?
Alex Hanna: Yeah. I appreciate that too. And there's a lot of this framing throughout. The "scheme, deceive, and blackmail" element of it- that is rightly, as you pointed out, Anthropic's bullshit, this very bespoke experiment that then gets reported like this.
Emily M. Bender: No link either, by the way, right?
Alex Hanna: Yeah, yeah. They're not linking- I mean, the majority of the links here- actually, I think all of them are self links. They're only linking to things within TIME, which of course is their prerogative, right? They're trying to generate their own ad dollars and whatnot. And then the thing about an advanced species on the cusp of colonizing the earth- this, again, is saying that these things are working independently, as if they're not controlled by companies. And then there's the mention of this Kedrosky- I don't know if he gets any mention anywhere else in the article. I know some other elements of the bubble-ness do. And then there's the fake numbers that Huang's making up, just like, $500 trillion. I'm like, where, bro? What are you doing?
Emily M. Bender: Yeah, right. Ah, this is a weird picture too.
Alex Hanna: Yeah, it's very Daddy. It's Huang, you know, we're looking at him from chest height, and he is looking down on the photographer. And this is a photograph that was taken for TIME. So, hate it. Never wanna see Huang in this light, but yet here we are.
Emily M. Bender: Yeah. All right, I'll do the next paragraph here. "This is the story of how AI changed our world in 2025 in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods. Racing both beside and against each other, they placed multi-billion dollar bets on one of the biggest physical infrastructure projects of all time. They reoriented government policy, altered geopolitical rivalries, and brought robots into homes. AI emerged as arguably the most consequential tool in great power competition since the advent of nuclear weapons."
Alex Hanna: So this says "great power competition," and I'm not sure what the referent is in "great power." I'm assuming there's a reference to nation-state-hood here, but also that "great power" is meant to encompass companies. And I mean, there's lots of people that have argued about the decline of the nation state, et cetera, et cetera- whatever, we don't have to get into that. But I think that's the kind of hedging there in the word choice. Anyways, that's a big aside just to say that this is pretty breathless. And TIME in the past has put people on its cover who are not necessarily people they normatively agree with- they want to highlight people who have altered history. I mean, didn't they have Hitler on the cover in the thirties? So there's those elements of it, but I'm just like, you can't read this and not read it as a hype artifact. It's just not really critical, especially near the end- I think they give Demis the last word. But it's just so obnoxious. This is written in a way where one would have to go through and try to substantiate these claims, which of course they're not doing. They're just hedging here, saying "perhaps," and then they even say it here, "bets on one of the biggest physical infrastructure projects of all time." Is it? I mean, there's been some pretty substantive infrastructure projects. Even thinking about things like the US interstate system, which was pretty ambitious.
Emily M. Bender: The Panama Canal?
Alex Hanna: Yeah, the Panama Canal. And of course those are not unmitigated goods- of course not. The interstate system was very destructive in the way it destroyed Black communities, in which-
Emily M. Bender: Hello from Seattle.
Alex Hanna: Yeah, Seattle, you know, Robert Moses in the room with us. But it's just like, yeah. I mean, this is quite the claim.
Emily M. Bender: Yeah. And the other thing is the way it's pretending to be value neutral. So "reshaping the information landscape, the climate, and our livelihoods." Reshaping the climate? No, damaging the climate. "Reshaping" is this, I'm-not-gonna-judge-if-it's-a-good-or-bad-change kind of a word there. Same for "information landscape" and "livelihoods." Really, TIME? Do you have no courage?
Alex Hanna: Yeah. Yeah. There's some great things in the comments, too, about the robots in our homes, especially around Roombas. So ben_waber says, "I thought I had a robot vacuum cleaner 15 years ago, but I must have hallucinated it." And then possumrabbi, incredible name, says, "Is the wheel of history a Roomba?" Yes, incredible. I love it. I would say that the Roomba is here saying, "Have you forgotten about me? Have you forsaken me?" Anyways.
Emily M. Bender: And then- I'm gonna have a hard time with this one, sorry for not getting your handle right- but Zubenelgenubi17 says, "'Infrastructure' carries the assumption that it supports some necessary function for society." Indeed.
Alex Hanna: Yeah. Nice point. And so then there's a bunch of, let's see what else is interesting. There's more about the investments.
Emily M. Bender: Yeah- oh, this one we gotta do. So, "OpenAI, which ignited the boom, continues to set the pace in many ways. Usage of ChatGPT more than doubled to 10% of the world's population. 'That leaves at least 90% to go,' said Nick Turley, head of ChatGPT." Which, I mean, first of all, there's this question of growth mentality, right? But also, "at least 90%"? The math's not math-ing, Turley.
Alex Hanna: Well, it's also like, okay, 10%- are they talking about users? And I mean, are they talking, are these...
Emily M. Bender: Weekly users? Monthly users?
Alex Hanna: Yeah, weekly users? Yeah. They're doing some fun math there. The description of the LLM is pretty bad. I don't know if you wanna read this, Emily.
Emily M. Bender: I do, because it pissed me off of course. So, "A large language model, LLM, the technology underpinning chatbots like ChatGPT or Anthropic's Claude, is a type of neural network, a computer program different from typical software." Eh. "By feeding it reams of data, engineers train the models to spot patterns and predict what quote 'tokens,' or fragments of words, should come next in a given sequence." Okay, so far, so accurate, really. "From there, AI companies use reinforcement learning, strengthening the neural pathways that lead to desired responses, to turn a simple word predictor into something more like a digital assistant with a finely tuned personality." It's like, no, it's still what's a likely next token, it's just, we're gonna adjust the probabilities based on the exploitation of lots and lots of labor in that RLHF step. But apparently, they've had to acknowledge that this is just predict next token, and so they have to move the magic over a little bit.
Alex Hanna: Yeah, I mean, there's more of this too, 'cause like, "strengthening the neural pathways." This is giving big Lieutenant Commander Data mystification. And then, "turning a simple word predictor into something more like a digital assistant with a finely tuned personality." Like, what the- no!
Emily M. Bender: No. And what does it mean to have a finely tuned personality? Also gross.
Alex Hanna: Yeah. It's, I have a finely tuned personality for- eugh, yes.
Emily M. Bender: Yeah, I've been getting tuneups on my personality every week. No!
Alex Hanna: Yeah, it's a really bizarre way of speaking about this, and writing about this.
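To make the point above concrete: here is a minimal sketch of the next-token mechanism Emily describes, with a made-up bigram table standing in for a trained network. Every token and score here is illustrative, not from any real model.

```python
import math
import random

# Toy "language model": a bigram table standing in for a trained network.
# All tokens and logit scores here are made up for illustration.
BIGRAM_LOGITS = {
    "the": {"cat": 2.0, "dog": 1.4},
    "cat": {"sat": 1.7, "died": 0.8},
    "dog": {"sat": 1.2, "died": 0.5},
    "sat": {"down": 1.5, "there": 0.7},
    "down": {"the": 1.0},
    "there": {"the": 1.0},
    "died": {"the": 0.2},
}

def next_token(prev: str, temperature: float = 1.0) -> str:
    """Sample a likely next token given the previous token."""
    logits = BIGRAM_LOGITS[prev]
    # Softmax: turn raw scores into a probability distribution.
    weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[tok] / total for tok in tokens]
    # Tuning (RLHF included) only shifts these probabilities around;
    # the mechanism remains "pick a likely next token."
    return random.choices(tokens, weights=probs)[0]

print(next_token("the"))  # e.g. "cat"
```

Scaling the table up to billions of learned weights changes how good the guesses are, not what the operation is.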
Emily M. Bender: Yeah. Oh, and I guess we should- they talk about the chain of thought stuff.
Alex Hanna: Yeah, go through it.
Emily M. Bender: Okay. "About a year ago, OpenAI researchers hit on a new way of improving these models. Instead of letting them respond to queries immediately, the researchers allowed the models to run for a period of time, and, in quotes, 'reason' in natural language about their answers. This required more computing power, but produced better results. Suddenly, a market boomed for mathematicians, physicists, coders, chemists, lawyers, and others to create specialized data, which companies used to reinforce their AI models' reasoning. The chatbots got smarter." So, we've seen these jobs posted, right? That sort of AI trainer job. And it's like, gig work for mathematicians, physicists, et cetera. Because the only idea that the people running these large models have is, we've just gotta give it lots and lots of training data, make the training data for us. That's it.
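And a sketch of the "reasoning" move just described, reusing the toy next_token from the earlier sketch (the budget of eight intermediate tokens is, again, an arbitrary illustration): the model simply extrudes some tokens before the one presented as the answer, so the extra "reasoning" is more next-token prediction bought with more compute.

```python
# Sketch of "reasoning"-style inference, reusing next_token() from the
# earlier sketch. The only new ingredient is spending compute on
# intermediate tokens before emitting the token shown as the answer.
def respond(prompt_token: str, reasoning_budget: int = 8) -> str:
    tok = prompt_token
    scratchpad = []
    for _ in range(reasoning_budget):
        tok = next_token(tok)      # "reason": emit more tokens first
        scratchpad.append(tok)
    # The scratchpad is the so-called chain of thought; the underlying
    # operation never changed, only how long it runs before answering.
    return next_token(tok)

print(respond("the"))
```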
Alex Hanna: Yeah, a hundred percent. And I mean, there's a few interesting things in this paragraph too, like the "market booming for mathematicians, physicists, coders, et cetera." And we know that those efforts have so many failures in so many different places. And then "the chatbots got smarter." This is a link to the Anthropic bullshit- a self link to another thing on TIME- and it is a pure piece of hype. I like this comment too from possumrabbi, which is, "I fine tune my autism every time I get a vaccine," which is great. I appreciate the fine tuning of one's autism. Yeah, I'm just honing it every day. So then there's a graphic here on the weird little networks. There's three columns: chip builders, computing providers, and then model builders. And then there's a description of each of these. And in some cases these are organizations that do all three. So in the case of-
Emily M. Bender: I finally figured out what this frame around Google meant.
Alex Hanna: Yeah. So that's the Google thing. So, Google has all three, and has this kind of central position of being- they're building their own tensor processing units or TPUs. And then they've also got their hyperscale data centers, and they're also building their own models. So there's this person Turley, who's just really something else, the guy who's the head of ChatGPT.
Emily M. Bender: You want this quote here? So "seeing ChatGPT-" yeah.
Alex Hanna: Yeah. So he says, "Seeing ChatGPT evolve from an instant conversational partner to a thing that can go do real work for you feels like a very, very important transition that most people haven't even registered yet." I'm like, yeah, 'cause it can't do that. It can't do real work-
Emily M. Bender: Sorry dude, I'm not living inside your imagination. No thank you.
Alex Hanna: Yeah. And it's just such a bubble. And then they talk about Cursor, and there's some real knee slappers here, so, "Other breakthroughs abounded. Cursor, founded in 2022 by a group of MIT grads, became one of the world's fastest growing startups ever off the strength of its AI coding tool, achieving $1 billion in annual revenue. Quote, 'I would guess that one of the biggest stories over the next year or two will be the real productivity gains within software engineering and coding becoming more horizontally applied to other sectors of the economy,' says Cursor CEO Michael Truell. Meanwhile, a concerted push across the industry was driving the efficiency of AI models, leading to an increase in total usage. Quote, 'I think there is near infinite demand for intelligence,' says Turley, the head of ChatGPT." As if you forgot who he was from three paragraphs ago.
Emily M. Bender: I mean, they certainly want their stuff to be pushed horizontally to other sectors of the economy because they're not actually, as a group, these people are losing money, so they've got to be pushing it. But "near infinite demand for intelligence"? What do you mean by intelligence?
Alex Hanna: Yeah. Yeah, and another move that's interesting here is treating intelligence as something that can be measured and reified. We talked about that last week when we were discussing that awful AGI definition, and of course the eugenics behind all of that. But there's also treating it as something that can be accumulated, which is the move I find unique here, and even more bizarre than the reification itself. Like, you can stack up intelligence? Like I can put it on my hard drive, I can take it, I can plug it in? Like, what does that mean? It's such a weird rhetorical move.
Emily M. Bender: Yeah. So abstract_tesseract in the chat says, "Horizontally applied? Yes, whenever I have to review synthetically extruded code, I do need to lie down for a bit."
Alex Hanna: I have been applied to my bedspreads, and I'm just awfully rolling in it. Okay.
Emily M. Bender: So, "Trump and his tech allies are even attempting to stop states from issuing their own AI regulations-" and we just had that EO signed in the last couple days- "which has drawn some fierce pushback, even from GOP leaders. 'Is it worth killing our own children to get a leg up on China?' Missouri Senator Josh Hawley, who recently introduced a bill to ban minors from using chatbots, told TIME in September after a congressional hearing on chatbot harms. The remark reflected a prevailing sense that the revolution had arrived before the public was ready." What revolution?
Alex Hanna: Yeah, there's a lot here. One of the things that's interesting about this is that, first, it's prioritizing the China dimension. And I imagine Hawley, as someone who's in the GOP, is going to have a lot of allies that are going to really bind to that narrative. The other part of it is the kind of "think of the children" element, which has this interesting bipartisan support, but also leads to really weird policy pathways, like age verification. But it's like, yeah, I mean, these things are not safe for anyone, let alone children. And it's leading to this point where there's this really interesting coalition, maybe, of people that are against them for think-of-the-children reasons, but also massive data center resistance as well.
Emily M. Bender: Yeah, exactly. And there's a bit about that, actually, later in the article. So "John McAuliff, who flipped Virginia's 30th House of Delegates district blue for the first time in decades by running a campaign focused on unchecked data center growth." Sorry, that wasn't a complete sentence, but that's who we're talking about. And he says, "'The issue that would keep the door open for me nine times out of ten was data centers and their transmission lines.'" So basically, he campaigned against data centers and won, as a Democrat, in a district that was usually Republican.
Alex Hanna: Yeah, that's interesting. I'm just looking at Virginia's 30th House of Delegates district now- I don't know enough about Virginia, but I'm trying to find out where it is on this map. And it looks like it's- oh yeah, it is in Northern Virginia. So this is getting close to Data Center Alley. But it encompasses Orange, Madison, and Culpeper counties. And so it's also interesting, 'cause now we're seeing this data center expansion across all of Northern Virginia.
Emily M. Bender: Yeah, and possumrabbi says, "I'm from that area of the country, and where he won was covered in Youngkin and Trump signs." So that's interesting.
Alex Hanna: Yeah. Really interesting.
Emily M. Bender: All right. Having jumped again, 'cause I do wanna get to our other artifact, I wanted to share this bit here, about school. So, "Scholars and students alike say that even far more innocuous use of AI is fundamentally rewiring our brains. It's upending how kids learn, with 84% of US high schoolers using generative AI for schoolwork, the College Board reported." Yikes. "While tech leaders dream of giving every student their own personalized AI tutor, many kids are using these tools to cheat or as a replacement for critical thinking. 'I'm already seeing people lose the ability to be creative and to come up with their own ideas,' says Brooklyn Poulson, a 17-year-old student from Burley, Idaho, 'because the AI gives them what they need.'" So the juxtaposition of tech leaders dream of personalized AI tutors, kids use them to cheat. It's like well, which is it, right?
Alex Hanna: Yeah, I mean it's also the critical thinking aspect of it, too. I've heard this from other teens in my life. They're like, I use the tool and then I just can't think of anything. And you're like, well, do something else. And I had posted something on Bluesky a few months ago saying, you know, I'd much rather see the sort of caffeine-addled thing that you wrote in one day than whatever comes out of ChatGPT. And there's this great meme on TikTok now, this clip of Jon Hamm in a club with a song playing, used as a transition. So a teacher's like, "After reading 130,000 ChatGPT essays, finally some 18 year old's attempt at putting words together themselves," and she's just, like, in bliss, and it transitions to the song.
Emily M. Bender: To the Jon Hamm thing, yeah.
Alex Hanna: It just really- and I think it's great. But even before this, reading the paragraph before this is horrific. So, "Karandeep Anand, the CEO of the chatbot service Character.AI-" which we should say is also facing lawsuits from a family that lost a teen to suicide- "says his platform has 20 million active users, mostly born after 1997-" sorry, yeah, 1997, kids are young- "who spend an average of 70 to 80 minutes per day there. To Anand, teens replacing other forms of media with AI is a good thing. Quote, 'They have broken out of the doom scrolling world of social media,' end quote. But Character.AI also has been sued by several families for teen deaths. The company says that it has rolled out several safety updates, including limits on teen usage." Eugh!
Emily M. Bender: Ugh!
Alex Hanna: And then Ozzy making me feel old in our producer chat by saying, quote, "Also people born in 1997 are now 28, so not 'teens.'"
Emily M. Bender: Wait, really? Oh no!
Alex Hanna: Sorry, sorry to tell you.
Emily M. Bender: Why do we care about 97 then? Isn't it 2007 that we should be caring about as the-
Alex Hanna: Yeah, I guess it's the stat that- is it the stat that they provided? That Character.AI-
Emily M. Bender: It is the stat they provided, but if the whole point is it's adults, then why 97 and not 2007?
Alex Hanna: Yeah. So, "One of the tactics is sex-" this is about keeping people addicted to the platform. "xAI's Grok has allowed users, even those in 'kids mode,' to chat with a pornographic avatar. And while Altman said in August that he was proud that OpenAI had not offered a sexbot avatar, just a few months later he announced that ChatGPT would offer erotica in order to, quote, 'treat adult users like adults.'"
Emily M. Bender: Yeah. Again, the TIME reporting here is just kind of like, gleeful about it. And we've talked some about this "treat adult users like adults" before, I think. Okay, I think we need to wrap this one up, but I know you said you wanted to talk about how Demis got the last word, right?
Alex Hanna: Yeah. It was near the end. And it was talking about the risks, effectively. He had the penultimate paragraph here.
Emily M. Bender: Yeah, Trump got the last word.
Alex Hanna: Yeah, Trump got the last word for some reason. But the penultimate paragraph is an AI doomerist one. They write, "The drumbeats of warning that advanced AI could kill us all has mostly quieted. The, quote, 'doomers' have been marginalized, now used by AI's ruling class as a punchline. Yet even the most upbeat AI leaders are quick to offer kernels of warning. 'We don't know enough about AI yet to actually quantify the risk,' says Demis Hassabis, CEO of Google's DeepMind AI lab. 'It might turn out that as we develop these systems further, it's way easier to keep control of them than we expected. But in my view, there's still significant risk.'" So then, it's ending with the AI doomer "we don't really know enough." And I'm like, we know so much about other things, about sociotechnical systems. And it's just this continuing narrative that you have to pay attention to the AI safety element of it, without the deep research on the political economy of this stuff that many scholars, especially Black women scholars, have been warning about for years. And they don't see this as continuous with prior eras' critiques of sociotechnical systems. It's just such an annoying journalistic choice. Especially in an article that they spent so much time on, the people that they platformed were the leaders. And the critics were either McAuliff, or different people like teens and whatnot. And sure, it's good to hear from teens, but why aren't we hearing from civil society? Why aren't we hearing from labor advocates? Why aren't we hearing from the massive coalitions that have been constructed to fight data centers all over the country? The narrow focus here on Hassabis and Huang and Turley is just really an upsetting journalistic choice.
Emily M. Bender: Yeah. And they say, so this is towards the top, "The article was reported across three continents and through dozens of conversations with executives and computer scientists, economists and politicians, artists and investors, teenagers and grieving families." I doubt that Adam Raine's parents, who are quoted here, feel like they were platformed well.
Alex Hanna: Yeah. A lot of the time, I don't think they're even mentioned in the article. There are just these images of people, or teens.
Emily M. Bender: Oh, here they are. These are Raine's parents.
Alex Hanna: Yeah, they have Raine's parents. And they have a teenager in one, and I don't think they have a quote from her in the article itself- they have a caption given to her. It just says "Ash Jackson." And then it says, "The 15-year-old student and artist uses AI tools as part of her creative process, helping her imagine sci-fi characters and flesh out their narrative arcs. However, she dislikes how many people online try to pass off AI generated artwork as hand drawn. Quote, 'It's the same concept as stealing art,' she says." And it's so interesting that the artist view is this 15-year-old artist, rather than speaking to Karla Ortiz, or any of the named plaintiffs in the Stability case, talking to people from the Concept Art Association, talking to people from the Art Directors Guild, talking to people from the Animation Guild. So many people that you could be talking to, very authoritatively. I mean, we had, last week in California, an open forum around AB 412, which would've forced transparency around any company that develops models with copyrighted material. None of the people that got involved in that fight are mentioned here.
Emily M. Bender: Yeah. All right, I think we gotta move over to HAI's report, because we don't wanna let TIME tell us everything about 2025. We also want Stanford to tell us, right?
Alex Hanna: Well, as you know, Emily, time will tell. Sorry.
Emily M. Bender: Wow. Okay. It is the time of year for bad puns, too, I think. Okay, so HAI, which is Stanford University's human centered artificial intelligence- I guess institute, according to someone who was scolding us recently, not lab- puts out this thing called the AI Index Report every year, and this is the 2025 version of it. And there's a whole big long PDF that we are not gonna go through. But they have their top takeaways. And I think that it is sort of awful the way this is presented, because at this 30,000 foot view, it takes extra effort to see how flimsy this is. So I don't know that we're gonna have time to do all of them, but I definitely wanna talk about this first one, and I think it was number 11. I dunno if you have favorites too. So number one, "AI performance on demanding benchmarks continues to improve." The paragraph says, "In 2023, researchers introduced new benchmarks- MMMU, GPQA, and SWE-bench- to test the limits of advanced AI systems. Just a year later, performance sharply increased. Scores rose by 18.8, 48.9, and 67.3 percentage points on MMMU, GPQA, and SWE-bench, respectively. Beyond benchmarks, AI systems made major strides in generating high quality video, and in some settings, language model agents even outperformed humans in programming tasks with limited time budgets." So what are MMMU, GPQA, and SWE-bench, and like, why should we care? Not obvious here. And just naming them like that makes it seem like, okay, benchmarks, yeah, that's reasonable. When, as we know, the benchmarking culture here is terrible, right?
Alex Hanna: Yeah. I mean, I would've loved if they went more into what those actually meant. And as an index, they're trying to reduce information- these types of things are often aimed at policy makers. But it's reducing so much of the complexity here and leaving out really important caveats. I'm just skimming these, too. So there's one here- number six, I think- that's kind of annoying. "Global AI optimism is rising, but deep regional divides remain. In countries like China, 83%, Indonesia, 80%, and Thailand, 77%, strong majorities see AI products and services as more beneficial than harmful. In contrast, optimism remains far lower in places like Canada, 40%, the United States, 39%, and the Netherlands, 36%. Still, sentiment is shifting. Since 2022, optimism has grown significantly in several previously skeptical countries, including Germany, plus 10%, France, plus 10%, Canada, plus 8%, Great Britain, plus 8%, and the United States, plus 4%." And this is weird- I don't know what the methodology is, I don't know where they're drawing the data from. Oh, it looks like it's an Ipsos piece. Ipsos is a survey firm. And you should probably read this alongside number four, which is about the China-US race. And so this is giving strong, you know, "the East Asians are more trusting and we're gonna get overwhelmed by them." But the thing about this, which is annoying when you're looking at top line metrics like this, is that it's obscuring variation within countries. We know from Pew reporting and Pew surveys that there's a pretty big gender gap in optimism around AI. There's pretty huge ethnic and racial gaps, because this stuff, whatever you call AI, is seen as really fucking people over.
Emily M. Bender: And how did they present this question to people, like what do they mean by AI? And also, what happened in India where it dropped by 9%?
Alex Hanna: Oh, really? Oh, that's a good investigation. Yeah.
Emily M. Bender: They have a point change here, yeah. And also, this is '22 versus '24, so apparently by December of '25, Ipsos hasn't done this again. And so, HAI is looking at last year's data.
Alex Hanna: I don't know, yeah.
Emily M. Bender: I don't know. Before we go down, ben_waber says, "Business usage actually freaking dropped. What the hell?" Do we have a business usage graph somewhere here that I missed?
Alex Hanna: Yeah, I don't, I didn't see it. Unless, if you can drop it in the chat, maybe that's another statistic that Ben is talking about.
Emily M. Bender: Yeah. Okay, so should we do number 11 here? So number 11, "AI earns top honors for its impact on science. AI's growing importance is reflected in major scientific awards. Two Nobel Prizes recognized work that led to deep learning (physics) and to its application to protein folding (chemistry), while the Turing Award honored groundbreaking contributions to reinforcement learning." All right, I didn't look up the Turing Award ones, but the Nobels, those were the 2024 prizes. And yet, HAI wants to count that as part of their 2025 update in this index. And not also actually talk about, as you were pointing out earlier, Alex, okay, where's the breakthroughs, right? These things were honored with the awards, what's actually happening now in 2025? And the graph that we get is just a benchmark on protein folding. Or protein-ligand docking, but it's a benchmark.
Alex Hanna: Yeah, yeah. And then Ben shared a link here that wasn't in the report- it was a Futurism article. Thank you for dropping it in, Ben. It was about an Economist piece that was referencing data from a US Census survey. And reading the Futurism article, it says, "Referencing data from a recent US Census Bureau survey, the Economist estimated that the percentage of Americans using AI to produce goods and services at large companies rang in at a modest 11% in October." And that's actually down from 12% in the prior survey, conducted two weeks previously. So, yes, most companies are not using these tools, and the ones that are, are getting a lot more play in the media.
Emily M. Bender: Yeah. All right, so, shall we transition over to Fresh AI Hell?
Alex Hanna: Yes. Let's do it.
Emily M. Bender: All right. On the theme of the year Wrapped, here's your prompt. Turns out that down in Fresh AI Hell, the people who are suffering down there also get their Spotify Wrapped, except it's not automated, and the demons have to do the manual curation to count up what people listen to. So you are one of those demons, figuring out what to present to whoever you choose, who's stuck in Fresh AI Hell.
Alex Hanna: All right. So, as a demon, I'm like, ah, we gotta count this up for Kissinger. Ah, damn. All right. So, what has this man been listening to? Okay, let's look. Oh, one, two- how many times has this man listened to "Bombs Over Baghdad" by Outkast? What's this? "Rock the Casbah" by- is it by the Clash? By the Clash. Anyway, sorry. "Rock the Casbah." Oh my god. And what's this one here? This is just 10 hours of children screaming. All right, whatever. Yeah, let's turn that in. All right, let's look at this. Minsky? Marvin Minsky. Okay, let's see. What is this? Marvin Minsky is just- actually, I think we pre-planned for this, but I forgot what I actually recommended in the chat. I'm going back and seeing what I actually said. All right, forget Minsky. All right, Charles Spearman. How much Wagner can you listen to? I can't believe this. This is incredible. Also not featured: Francis Galton. And the guy who invented the transistor- Shockley- Shockley's also in AI Hell. I don't know if I know enough about Shockley for this joke to work, though. But anyways, that's your AI Hell Wrapped, baby.
Emily M. Bender: Awesome, thank you. For people who are, well, only metaphorically in Fresh AI Hell with us, we wanted to show you what the big old list of links looks like. So this is the middle of the spreadsheet where I collect these things. And I collect them largely out of our group chat. But I've gotten more efficient at how I do it. It used to be that every two weeks or so, I would go through and search for HTTP in the group chat and try to go grab all the links. And there's a few places in here where I just gave up because there were too many. But I wanted to give you this as context for the numbers that I'm gonna give. So these are links that we shared with each other that were candidates for Fresh AI Hell. I've got a tagging system that maybe someday we'll actually do some quantification over, but it's also very much, how do I classify these things? And the result of that- and now I have to go look at my notes- is, there are currently, not including any links that were shared after I last updated this morning, 2,147 rows in this spreadsheet, including 1,135 items that were published in 2025. So 53% of all AI Hell entries that I managed to collect are from this year. And of those 1,135, we actually managed to talk about them, in an episode or newsletter post, for only 190, so about 17%. And this includes two all-Hell episodes. Like, we literally cannot keep up.
Alex Hanna: Yeah, the hell is increasing. And I just want to shout out Emily, who's been doing so much work in keeping this document up to date, and tracking all the stuff that we dump in the group chat. So, we really want to give you your props for doing that. There's a lot of stuff happening behind the scenes to ensure you're getting the freshest Hell possible.
Emily M. Bender: Including, here comes some.
Alex Hanna: Yeah. So this piece is some reporting from NBC News, by Lila Byock- sorry for mispronouncing your name. And it's a quote tweet of her own article, that says, "What I've been up to." And then says, "This was just sent to me by an LAUSD parent-" so a Los Angeles Unified School District parent. "A fourth grader was assigned to design a book cover for Pippi Longstocking using Adobe for Education. Here's what the AI tool generated." And all of it's very sexualized images of a woman with long stockings and very long pigtails. One of them is literally in lingerie. Just really perfect things that you don't want a fourth grader looking at, or using the tool for.
Emily M. Bender: Yeah. And this wasn't Grok, this was Adobe. Adobe for Education. So, just wanna pull a couple things outta the chat. So ben_waber says, "So what I'm hearing is that you all need to do this show for 12 hours a day, 365 days a year." And abstract_tesseract says, "Gives a new meaning to spreadsheet hell." And sjaylett, "Have we finally found the exponential growth in AI?"
Alex Hanna: We might have indeed.
Emily M. Bender: Okay. So this is from someone named Tomie who has a blue check on X. And it's a photograph of a notebook where someone has written, "ChatGPT 5.1 instant," and then in a hand drawn box, "What's the difference between grapefruit and pomelo?" And then there's a response, "Oh, what a fruitful question. Let's break it down." And then further details with bullet points. And the post says, "Does anyone else like to use quote, 'low tech ChatGPT' on physical paper? You write down your prompt and then answer how you think ChatGPT would. It's really helpful in case you don't have internet access."
Alex Hanna: Wow. And I thought this was a joke- or maybe it is a very bad joke- but the rest of this person's feed is just, like, all AI boosterism. So it seems kind of in earnest. It's very weird.
Emily M. Bender: Funny, but not intentionally so.
Alex Hanna: Yeah.
Emily M. Bender: So this is from something called the Transmitter, December 8th, 2025 by Calli McMurray. And the headline is, "Exclusive: Springer Nature retracts, removes nearly 40 publications that trained neural networks on 'bonkers' dataset. The dataset contains images of children's faces downloaded from websites about Autism, which sparked concerns at Springer Nature about consent and reliability." So basically, this dude went and grabbed some pictures, "Retired engineer Gerald Piosenka created the dataset in 2019 by downloading photos of children from quote, 'websites devoted to the subject of autism,' end quote, according to a description of the dataset's methods, and uploaded it to Kaggle, a site owned by Google that hosts public datasets from machine learning practitioners." So basically, this dude grabbed kids' pictures with no consent, labeled them as with and without Autism, and then uploaded them for people to do digital phrenology on.
Alex Hanna: Yeah. This is just really awful. And I mean, a lot of folks have written about the use of quote unquote "public information" for training datasets- so Casey Fiesler, Michael Zimmer. And it's just like, what in your right mind makes you think that this would be okay to do in the first place? Yeah. All right, so this one is a screen cap of an ad on LinkedIn from Loughborough University. And the skeet is from Ketan Joshi. And he's saying, "An ad posted by this university on LinkedIn." And the ad says, "At Loughborough University, we're leading the change in sustainable, ethical, and human-centric AI innovation. Because the world can't wait." And the advertisements are for "hologram lecturers, AI coaching, and robot sandwich makers. That's Loughborough!" And it's just like, that's your advertisement for the university? That's grim, man.
Emily M. Bender: Yeah, absolutely. Also in that coaching picture, those look like children, not university students.
Alex Hanna: Yeah. They look like children, and it's like, they're playing soccer, or they're standing around at soccer. And then the hologram lecturers are trapped in this weird box, too. It's all very, like, what? Oh, lowbrow- oh, is it pronounced low brow? That would be very funny if it is.
Emily M. Bender: No, that's a joke. That's a joke, I'm sure.
Alex Hanna: Is it? Is it? I don't know, I believe anything about British English that you tell me.
Emily M. Bender: Yeah. So abstract_tesseract says, "I require- no, demand- that this be satire." Yeah. Cythie says, or Sigh-thie, "What qualifies as a sandwich in 2025 reached a new low." You know there's this whole argument that linguists have about what's a sandwich, or, I guess not just linguists. So.
Alex Hanna: It's a very ontological discussion. Yeah, anyways.
Emily M. Bender: Okay, so, "Exclusive-" this is The Information- "'ChatGPT for doctors' startup doubles valuation to $12 billion as revenue surges." And this is by Stephanie Palazzolo, December 12th. "OpenEvidence, which operates a ChatGPT-like product for doctors to find health information from medical journals and other trusted sources, is raising $250 million in equity financing that will value the three-year-old startup at $12 billion after the investment, doubling its last private valuation from a financing announced just two months ago-" So that's a fast double. That is terrifying, I have to say. And you said there were some stats in here that we also wanted to look at, right?
Alex Hanna: Right, it was here. So it says, "OpenEvidence's chatbot seems to be catching on with physicians. According to an October survey of a thousand US-based physicians by OffCall, a company that allows physicians to compare their salaries, around 45% of physicians use OpenEvidence-" oof "-compared to 14% for ChatGPT and 5% for medical scribe startup Abridge, the next most popular healthcare-specific AI product." And that's very worrisome.
Emily M. Bender: It really is. I'm hoping that they did not manage to find a representative sample. Because, yikes. Yeah. Whew, okay.
Alex Hanna: Yeah. So this is from Vox's Future Perfect, aka their effective altruism mouthpiece. And the title is, "We're running out of good ideas. AI might be how we find new ones. What if the best use of AI is restarting the world's idea machine?" This is by Bryan Walsh, published December 13th of this year. So there's a picture of a light bulb, blah, blah, blah. And then, actually, this first sentence is very funny. "America, you have spoken loud and clear. You do not like AI." All right, sure. And then they've got this silly chart of possible AI scenarios from the Dallas Fed, one of the Federal Reserve banks. We don't have to talk about this. And then right here is the meat of the article, which is these things around ideas. And the subhead here is "We really need better ideas." "But before I get there, here's the bad news. There's growing evidence that humanity is generating fewer new ideas. In a widely cited paper with the extremely unsubtle title, 'Are ideas getting harder to find?', economists Nicholas Bloom and his colleagues looked across sectors from semiconductors to agriculture and found that we now need vastly more researchers and R&D spending just to keep productivity and growth on the same old trend line. We have to row harder just to stay in the same place." And I'm just like, this is less AI hype, and more economists doing awful things with data, torturing measures of novelty and productivity. So it's this confluence of AI hype and economists doing terrible things.
Emily M. Bender: Yeah, oof. All right, I'm gonna keep us going. We have one thing that was literally, scientifically grown in a lab to be Emily rage bait. So this is a piece in Wired by Steve Nadis on December 14th. So, yesterday. The headline is, "For the first time, AI analyzes language as well as a human expert." Subhead, "If language is what makes us human, what does it mean now that large language models have gained, quote, 'metalinguistic' abilities?" And I'm like, just no. So the underlying article was apparently presented with quite a bit of hype. They talked to a couple of the authors here, and they do not seem like they're trying to backpedal anything, from these quotes anyway. But basically, they were running language models over different linguistic data sets, some of which were constructed, to see if it could come up with the right answer. And these were things like- I wanna get to the center embedding one, because this one really annoyed me. So, things like, can you produce a syntax tree for this sentence? Can you produce multiple syntax trees for this ambiguous sentence? By the way, we can write computer programs to do that. It's called grammar engineering. So, "Recursion has been called one of the defining characteristics of human language by Chomsky and others, and indeed, perhaps a defining characteristic of the human mind." This is- I won't go there. There's a whole thing to say about that. "Linguists have argued that its limitless potential is what gives human languages their ability to generate an infinite number of possible sentences out of a finite vocabulary and a finite set of rules. So far, there's no convincing evidence that other animals can use recursion in a sophisticated way. Recursion can occur at the beginning or the end of a sentence, but the form that is most challenging to master, called center embedding, takes place in the middle. For instance, going from 'The cat died,' to 'The cat the dog bit died.' Beguš's test fed the language models 30 original sentences that featured tricky examples of recursion. For example, 'The astronomy the ancients we revere studied was not separate from astrology.' Using a syntactic tree, one of the language models, OpenAI's o1, was able to determine that the sentence was structured like so." And it sort of showed the center embedding. Here's the thing that I felt the need to nerd out about on the pod: the reason that center embedding is difficult is not that it is somehow more complicated, but that it mucks with our live processing, our short-term memory. And if you're going to be doing computational processing of sentences, you don't have that same issue. So this is just, like, not relevant- but of course, I wouldn't expect the journalists for Wired to handle that detail. I would expect the linguists talking about this to be careful that they got it across well.
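For the nerd-out, a tiny sketch of why center embedding is trivial for a program to track: each added level nests a new clause in the middle of the sentence, which taxes human short-term memory but is just bookkeeping for a parser or generator. The noun and verb lists here are made up for illustration.

```python
# Generate center-embedded sentences of increasing depth. The nesting
# that strains human working memory is plain recursion/bookkeeping here.
NOUNS = ["the cat", "the dog", "the man"]
VERBS = ["bit", "owned"]  # verb for each successively deeper clause

def center_embed(depth: int) -> str:
    """Build e.g. 'the cat the dog bit died' for depth=1."""
    if depth == 0:
        return f"{NOUNS[0]} died"
    # Each embedded subject lands in the middle; its verb stacks at the
    # end, with the innermost clause's verb coming first.
    middles = " ".join(NOUNS[1 : depth + 1])
    verbs = " ".join(VERBS[:depth][::-1])
    return f"{NOUNS[0]} {middles} {verbs} died"

for d in range(3):
    print(center_embed(d).capitalize() + ".")
# The cat died.
# The cat the dog bit died.
# The cat the dog the man owned bit died.
```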
Alex Hanna: Yeah. This is so funny, because it was, I like the way you put it, scientifically concocted to cause you to crash out. Because it was not only about LLMs, not only about linguistics, but about things that you could develop deterministic programs to actually do- the thing that you've actually spent a lot of time doing, which is grammar engineering. So I was like, yeah, they're just trying to get on your last nerve here.
Emily M. Bender: Absolutely. And, so, the chat's popping off here. So, abstract_tesseract, "This article has caused me to generate an infinite number of possible swear words." sjaylett, "Propping up the economy through rage." abstract_tesseract, "Gross domestic pissed offness." And, sjaylett, "Which of course is intrinsically unbounded as Huang keeps on enabling us to demonstrate." Thank you for the call back. Alex, you get the chaser.
Alex Hanna: Sure, yeah. And so, this one is great. Not quite AI, but it's a nice example of technological refusal and, let's say, sabotage. So this is an article from Futurism written by Victor Tangermann, published December 5th. And the title, "Woman Hailed as Hero for Smashing Man's Meta Smart Glasses on Subway." The quote here is, "'I hope she called him a dork for wearing them before she broke them.'" And it's a series of different short form videos. So, they're talking about this guy who was wearing these Meta Ray-Bans, who goes by the TikTok username Eth8n, spelled with an eight. And he says, "She just broke my Meta glasses," and then is videotaping this woman. And she's got a perfect kind of Mona Lisa smile on her face. He's videotaping her, and this frame is her looking at the camera, effectively signaling, I did what I did and you should fuck off. And so, appreciate this. Not all heroes wear capes. Let's get rid of these luxury surveillance devices.
Emily M. Bender: Absolutely. And her smile is just perfect there. It really is.
Alex Hanna: Yeah.
Emily M. Bender: All right.
Alex Hanna: That's it for this week! Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order "The AI Con" at thecon.ai or wherever you get your books, or request it at your local library.
Emily M. Bender: But wait, there's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti hype analysis, or donate to DAIR at dair-institute.org. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender.
Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.