Mystery AI Hype Theater 3000

AGI: "Imminent", "Inevitable", and Inane, 2025.04.21

Emily M. Bender and Alex Hanna Episode 56

Emily and Alex pore over an elaborate science fiction scenario about the "inevitability" of Artificial General Intelligence or AGI by the year 2027 - which rests atop a foundation of TESCREAL nonsense, and Sinophobia to boot.

References:

AI 2027

Fresh AI Hell:

AI persona bots for undercover cops

Palantir heart eyes Keir Starmer

Anti-vaxxers are grifting off the measles outbreak with AI-formulated supplements

The cost, environmental and otherwise, of being polite to ChatGPT

Actors who sold voice & likeness find it used for scams

Addictive tendencies and ChatGPT (satire)


Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' comes out in May! Pre-order now.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna:

Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender:

Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of Linguistics at the University of Washington.

Alex Hanna:

And I'm Alex Hanna, director of Research for the Distributed AI Research Institute. This is Episode 56, which we're recording on April 21st, 2025. We are here today with one of our most consistent gripes, about the way AI and specifically so-called Artificial General Intelligence is presented as inevitable, unstoppable, sure to come into being if we just pour enough stolen data, scarce resources and money into chatbot companies.

Emily M. Bender:

And if Artificial General Intelligence isn't exciting enough for you, what about Artificial Super Intelligence, which one organization is trying to say could be here within a mere two years? We make a point to note this fallacy when it comes up in other contexts, but today this inevitability as hype is our focus. We want to talk about why in fact, AGI is not inevitable nor possibly even real. We found a great artifact to focus on as we do. Someone spent a lot of time trying to sound very, very rational as they predict not just the inevitable AGI, but also how the world will react.

Alex Hanna:

Very rational indeed, if that tips you off. But you know what is inevitable though? That's right. Our book, The AI Con, is coming out in just a few short weeks from now, before this episode is even released. If you're in the US, pre-order it now so you can have it the instant it's available. That's May 13th in the US and affiliated markets, and May 22nd for the UK and affiliated markets.

Emily M. Bender:

And you can catch our tour where we'll be promoting the book, talking about some of our most important points, and meeting our fellows in the fight against AI hype. We'll be doing virtual and in-person events all throughout May and into June. Visit TheCon.AI for a full list. Alright, before we actually get to the main artifact, I wanna just show this off a little bit more. Alex and I got our author copies. It's gorgeous.

Alex Hanna:

So nice.

Emily M. Bender:

Um, yes, and just the, the whole thing. It's beautiful. And, uh, if you're jealous, if you want one as soon as possible, pre-order. And if you're just listening to us, um, you know, well take a peek at our socials. You'll see pictures of it. Um, I've done an unboxing video as we're recording this already. Alex might do one in the future, by the time you're listening. Um, we are having lots of fun. Which is good.

Alex Hanna:

Very, yeah, it's great um, that it is real and in the world. Um, but something that's not real and not. Well, it's in the world, but I guess it's not real, is this thing. So a few people sent us this artifact and it is called "AI 2027." Um, and it's pretty, it's pretty rough. Um, pretty, pretty bad stuff. It is a website that comes from, um, uh, it is, it is made by an organization called AI Futures, and there's five authors on them, on it. Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean. Um, just poking around a little bit, they're all kind of people who are Less Wrong affiliated. Um. Yeah, do you wanna read this little box about how they are self-described?

Emily M. Bender:

Yeah. So "Who are we?" So Daniel Kokotajlo, maybe, um, links his qualifications as Time100 and New York Times piece. That Time100 is not the overall Time100, just FYI. It's the Time100 AI. And the New York Times piece might be something he wrote. I didn't click through on it before. Um, no. Oh, it's a profile of him. Ugh. Um, okay.

Alex Hanna:

By Kevin Roose.

Emily M. Bender:

Of course. Um, so "Former OpenAI researcher whose previous AI predictions have held up well." So these are also these like super predictor dudes. Uh, "Eli Lifland co-founded AI Digest, did AI robustness research and ranks number one on the RAND Forecasting Initiative all-time leaderboard." Uh, "Thomas Larsen founded the Center for AI Policy and did AI safety research at the Machine Intelligence Research Institute. Romeo Dean is completing a computer science concurrent bachelor's and master's degree at Harvard, and previously was an AI policy fellow at the Institute for AI Policy and Strategy. Scott Alexander, blogger extraordinaire, volunteered to rewrite our content in an engaging style. The fun parts of the story are his and the boring parts are ours."

Alex Hanna:

Yeah, it's, it's, yeah. And already Abstract Tesseract says, "This is some of the worst fanfic I've ever read, and I've spent a lot of time on AO3." Yeah. Man, don't, yeah, don't bring it, don't bring AO3 into this. That's not, that's not, it's, that's not fair to AO3. Uh, yeah. So, I mean, so Scott Alexander, um, you know, he runs this blog called Slate Star Codex, which I think has been now renamed to Astral Codex Ten. Um, just kind of like this weird rationalist, um, trash, uh, very--

Emily M. Bender:

Who, who wrote this Wikipedia article? Is it him?

Alex Hanna:

I (crosstalk), yeah. There's, I have no idea. And it's very, um, oh gosh. And you click through to this page and the, uh, the tagline of it is, uh,"Bayes' rule." Um, so, uh, because they are all very dedicated Bayesians, but it's really wild that these rationalists think that they're really dedicated Bayesians when like, man, like, these numbers are completely made up. Like, like you're just completely, just making it -- and it's very indicative that he links to Richard Hanania's newsletter, kind of famed race science eugenicist. Um, yeah. Anyways, they, they all kind of hang together, right?

Emily M. Bender:

They're, they're TESCREALists. We should not be surprised.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, and the, the AI that gets developed in this bad fan fiction, um, is also TESCREAList if we get there.

Alex Hanna:

Yeah. Well, let's get into this. This is a very long artifact, so maybe we should, yeah I don't really know where to start. Yeah, let's do it.

Emily M. Bender:

Yeah, let's just describe what we're seeing a little bit. So there's, it's a pretty highly produced webpage and on the right hand side there's this like moving graphic that is telling us which month we're in. So I just scrolled down to December, 2025. And at the bottom it's got like, what currently exists, what's emerging tech and what's science fiction, and they're moving these little dots over. The dots actually are, um, specific things.

Alex Hanna:

Yeah, they're actual things and it's like the first one is virtual secretary. The second is AI boyfriend. 'Cause I guess they just want to gender it differently than AI girlfriend.

Emily M. Bender:

Oh man.

Alex Hanna:

The second one is AI programmer. The third one is research automator, um, "AIs that can completely automate AI R&D," which is not true. And then, and then it says AGI is already here.

Emily M. Bender:

Uh, emerging tech, AGI is "emerging tech" and also "currently exists" includes "self-driving cars capable of navigating city streets." I don't think so. Right. What we have right now are self-driving cars that are monitored remotely in Mexico and get confused if you put an orange cone on their hood.

Alex Hanna:

Right. Yeah. Well, it's good enough for their, for their, for their purposes. Oh, so sorry. The emerging tech is the stuff which is basically in development and then that that currently exists and they've got all these different check marks and it's, we started, and I mean, significant 'cause the first star is image recognition, which, you know, we can--

Emily M. Bender:

Oh yeah. I'm, I'm actually at December, 2020. Yeah. So, yeah.

Alex Hanna:

Yeah. If we go up to the first one is, the very first one is like image recognition, kind of the ImageNet kinda stuff. And it goes all the way to the last one, which is science fiction. And the most advanced thing is nanobots. Uh."Autonomously replicating artificial nanobots." And the, um, second most, uh, uh, the second most sci-fi is Dyson swarms, um, which capture "a fraction of the sun's energy." So, okay. And then there's, and then there's also, there's also these other kinds of things. There's these AI capabilities, the categories are hacking, coding, politics, uh, bio weapons, robotics, and then forecasting itself. Uh, and then, and then there's, and then there's a compute pie chart, and then there's like, um, the, and then there's sort of like a, a a a, a temporal graph, which has three categories, which is, um, "OpenBrain", which is the name they'll give to the kind of collective American companies, "leading Chinese lab", and then "best public model". Um, so that already gives you, you know, an idea of where we're going with this.

Emily M. Bender:

Right. And this graph. So what's, what's this graph is those are the three categories and it is, uh, it says, "This graph shows how much AI systems are speeding up the process of AI research compared to a baseline of human researchers. We also highlight milestones in general capabilities." So this is basically a very long version of the, um, what, whatever they call it, where all of a sudden the AI is good enough to do more AI research and it just like takes off exponentially. Like that's, that's sort of the heart of the story. And yeah. Huh.

Alex Hanna:

Yeah. So it's pretty, it's pretty bad. Uh, they've got a, they've got a lot of this, this formatting of, this website's weird 'cause they've got these footnotes, which are, um, that you can get. And then they've got, uh, these little boxes, uh, that you can expand. And then there's two possible endings because, you know, they, I don't know. Uh, anyways, let's, let's get, this is so long, so let's, and I want -- let's, let's talk about it. Okay.

Emily M. Bender:

So I'm, I'm gonna read this paragraph here. Which is supposedly grounded in current reality, like we haven't actually gone past the present moment. Oh, and by the way, it's also worth noting that this was published on April 3rd of this year. Mm-hmm. So, "The AIs of 2024 could follow specific instructions. They could turn bullet points into emails and simple requests into working code. In 2025, AIs function more like employees. Coding AIs increasingly look like autonomous agents rather than mere assistants, taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days. Research agents spend half an hour scouring the internet to answer your question--" And, always read the footnotes, what's this? Um, this is a pointer. "For example, we think coding agents will move towards functioning like Devin. We forecast that mid-2025 agents will score 85% on the SWE-bench Verified." Um, also this, "research agents spend half an hour scouring the internet." Like what's the half an hour there?

Alex Hanna:

Is that like actual, well, it's, it is also very interesting how they talk about time horizons. It's like how they, is it like this is the, um, I'm gonna use a Marxian term, so please, please, excuse me. But this is like the socially necessary labor time to like do a certain task. I mean, 'cause this is sort of how they talk about it. But we'll get into that a little bit more. 'Cause this sort of gets at like the socially necessary labor time that like an average human would take to do this or someone with that expertise.

Emily M. Bender:

Exactly. Well not an average human, a super genius human, right?

Alex Hanna:

Well, no, but not, not the super genius human, the humans that are getting basically replaced.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah. Yeah. Okay. All right.

Emily M. Bender:

All right. So, "The agents are impressive in theory and in cherry picked examples, but in practice, unreliable. AI Twitter is full of stories about tasks bungled in some particularly hilarious way. The better agents are also expensive. You get what you pay for and the best performance still costs hundreds of dollars a month. Still, many companies find ways to fit AI agents into their workflows." Um.

Alex Hanna:

Yeah.

Emily M. Bender:

All right, so this is just--

Alex Hanna:

Yeah. So we, so we get in, so we get into this, and this is where the story starts. Like in, in kind of in, in, in seriousness. So this is late 2025."The world's most expensive AI. OpenBrain uh, is building the biggest data centers the world has ever seen." And they give some details about what this is and like how much, uh, how much it's costing, et cetera. And so they use the, they use the term OpenBrain, and they've got a graph, which is sort of the, the amount of, um, compute needed to train this. And the new model is called Agent-1. Um, and so they use this scheme where it's Agent hyphen 1, and this is kind of the scheme that they use throughout. And so just as a comparison, they have GPT-4, which is estimated to be 2 times 10 to the 25th flops. And then Agent-1 takes 3 times 10 to the 27th flops. And I'm already like, already here, we're just like getting into the sci-fi.'cause you're like, well you are saying that this thing is gonna take this much and like, aren't we already at a place where Microsoft and many of these companies are already scaling back their data con, like their data center construction. Like what is this, like where is this estimate coming from? I don't really, I don't think I see any sources.

Emily M. Bender:

Right. So I wanna, I wanna take us back up to the top because somewhere in here, um, "Why Is This Valuable?" I think it was in here.

They talk about how, um, yeah:

"We have set ourselves an impossible task: Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War III in 2027 would go, except that it's an even larger departure from past case studies. Yet it is still valuable to attempt, just that it is valuable for the US military to game out Taiwan scenarios." And boy, is the, uh, Sinophobia gonna be a thread throughout this.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, so, uh, let's see."Painting the whole picture makes us notice important questions or connections we hadn't considered or appreciated before, or realize that a possibility is more or less likely. Moreover, by sticking our necks out with concrete predictions and encouraging others to publicly state their disagreements, we make it possible to evaluate years later who was right." So this is the whole Less Wrong shtick, right?

Alex Hanna:

Yeah, yeah.

Emily M. Bender:

And they make it sound like they are so rational and they, they realize what's more or less likely. It's just science fiction, right?

Alex Hanna:

Yeah. It's science fiction. And it's like, if you wanna challenge me debate, you know, debate me, bro. Like, and I'm like, what if I don't, what if the premises and every assumption you're making is just absolutely batshit? I don't wanna debate you, bro.

Emily M. Bender:

No, we're gonna, we're gonna ridicule them, is what we're gonna do.

Alex Hanna:

Yeah. Right.

Emily M. Bender:

That's, that's our purpose here.

Alex Hanna:

So the Sinophobia starts, speaking of, so they're like, uh, so they start, so "Although models are improving on a wide range of skills, one stands out, OpenBrain. OpenBrain focuses on AIs--" And you know, like, which is a term that send, you know, sends, sends me. Um, so "--AIs that can speed up AI research. They want to win the twin arms race against China--" So here's the Sinophobia. "whose leading company we'll call DeepCent--" Um, so kind of a portmanteau of DeepSeek and Tencent, um, and their US competitors. Um, the more of their R&D cycle they can automate, the faster they can go. Um, where's the twin race? There was a-- against China and the US competitors. Okay. Oh, I see. Okay. Uh, I was like, alright, so then, so effectively this is the whole shtick. It's like, you know, like we are basically in these initial rounds, um. Training these agents to do more AI research. Um, and you know, like, and then, and so then, yeah. I'll, I'll stop there. 'Cause Emily, you're jumping to get in.

Emily M. Bender:

Yeah, there's a, uh, where'd it go? Okay. So, um, this, this whole thing, like the whole point is like, well, AI is the whole point. So we're gonna make an AI does that does AI, but they have this very funny Footnote 19 that I spent some time searching for, 'cause I wanted to get back to it. Um, so, "Modern AI systems are gigantic artificial neural networks. Early in training, an AI won't have goals so much as reflexes. If it sees 'pleased to meet' it outputs 'you'. But the time it has been trained to predict, oh, by the time it has been trained to predict approximately one internet's worth of text--" Which is a hilarious unit."--it'll have developed sophisticated internal circuitry that encodes vast amounts of knowledge and flexibly role plays as arbitrary authors since that's what helps it predict text with superhuman accuracy." So. None of that's true. Like citation needed. Right. There's, so there's a footnote here that I really wanted to make sure we got to. Number 19, "People often get hung up on whether those AIs are sentient or whether they have quote 'true understanding'. Geoffrey Hinton, Nobel Prize winning founder of the field, thinks they do." Cat? Cat.

Alex Hanna:

I know. Sorry. There was something on her, on her side and I worried if it was like a stitch or not. So I wanted to double check. So it might be something I have to, uh, sorry for the cat intervention. We might have to go in with some scissors later and not call that.

Emily M. Bender:

Oh, okay. Um, so cat, all right. So Geoff Hinton thinks they do."However, we don't think it matters for the purposes of our story, so feel free to pretend we said 'behaves as if it understands' whenever we say'understands' and so forth. Empirically, large language models already behave as if they are self-aware to some extent, more and more so every year." So they're basically trying to do an end run around the fact that they are just synthetic text extruding machines and say, well, it doesn't matter if they understand or behave as if they understand.'Cause it comes out to the same and it's like, no it doesn't. Right? If people credulously read the output or understand that the output is just synthetic text, those are two very different things.

Alex Hanna:

Yeah. Yeah. No, completely. And it's, I mean, there's so much trash here. There's the 'one internet's', uh, you know, internet as a measurement. Um, which is absurd. There's the random citation to something called AI Digest, which is the, uh--

Emily M. Bender:

Is that this one?

Alex Hanna:

No, no, that's not it. But that's the one that suggests that these are the, the quote unquote AIs are self-aware. Um, and the, this is the empirical and it's written by, um, somebody, I don't know, AI Digest. And, and there's also an existential risk laboratory at University of Chicago, which is, which is, you know, univers--

Emily M. Bender:

We're, we're gonna hit every letter of the TESCREAL bundle, aren't we?

Alex Hanna:

Yeah. It's just like real, it's, it's, you're, it's real deep in it. Like these folks are real deep in the sauce here. Um, the next thing that I think is very bizarre is the, um, the next sentence. So, "After being trained to predict internet texts, the model is then trained to produce texts in response to instructions." This is fucked up. So they say, "This bakes in a basic personality and quote 'drives'." Um, and it's, and then, uh, there's, there's a long footnote here about like what bakes in a persona, um, the prompt to basically have a, what they call later,"a helpful, honest, and harmless AI chat chatbot," or a Triple-H chatbot, which if you are a, if you're a WWE fan, like, I guess that's related. Um, and then it's, but it's, but like, I mean there's such like, it's not subtle on the, like, on the reference to psychology, it's like, and the kind of dodge, and we're saying like, well, we're using this as a stand-in. And I'm just like, yes, and that's a problem. And I'm like, you know, like the drive, are you thinking about Freud? Is this the death drive? Is this the sex drive? Like, what are you, what are we doing here? You know?

Emily M. Bender:

Yeah. And they've got scare quotes, but I think there's different kinds of scare quotes. This is like defensive scare quotes. See, see, see, we put scare quotes. We don't mean to do this, you know, for real. As opposed to when I use scare quotes is like this thing that other people call it. Right?

Alex Hanna:

Right. Well, it's very, it's, it's a, it's, it's a, it's a, it's a weird dodge. Um, but they're, but they're like, we're just, you know, we're gonna do this anyways. Ugh, um--

Emily M. Bender:

Oh, yeah, so then like, flat out psychology, right? You see they've got this box here.

Alex Hanna:

Yeah yeah, they say this right here in this little box. Yeah. So where they say, "Training process and LLM psychology. Why we keep saying 'hopefully' in quotes." Um, and then they make some comment about, like, interpretability. Um, basically the idea that, like we say it's gonna act like this, but like, we don't know what's like inside of it. Anyway, it's, it's--

Emily M. Bender:

And this quote from OpenAI is terrible. It says, "Unlike ordinary software, our models are massive neural networks. Their behaviors are learned from a broad range of data, not programmed explicitly. Though not a perfect analogy, the process is more similar to training a dog than to ordinary programming." I'm reminded, there's this wonderful quote from Dijkstra from 1985 where he says, you know that the people who do EdTech seem to think that Pavlov had like not only what it is to be a dog, but what it is to be a person down completely. And he said, "I can assure you from extensive interaction with my dog, that Dijkstra," oh not Dijkstra, "--that Pavlov only had the smallest bit of what it is to be a dog." Right? Like, yeah. Anyway, so OpenAI, I just wanna point out that like these people look pretty dang fringe. But they're also quoting stuff from OpenAI, which is ridiculous. And OpenAI has the models where if you submit a paper to a conference and you didn't test it against GPT-4, whatever, people are like, well, you didn't do state of the art. So we are not actually very many degrees of separation from what's considered like mainstream AI research with this.

Alex Hanna:

Yeah, I think that's a great point. I mean, I think these folks like to point out that they're, you know, like they're on the outsider, they're on the outside and in some of the like, maybe palace intrigue of like what happened at OpenAI last year where, they, they dissolved the superintelligence team and Ilya started his own thing. And you know, like, you know, and like Dario and, and, um, uh, what's his sister's name? It's not Daria. Um, it's, um, Dario Amodei and, and, and I'm just gonna look it up, 'cause-- but he and, and, and Daniela, thank, thank you, thank you. Our, our producer dropping it in the chat as I type-- and then starting Anthropic and sort of like the safety in mind. Like, you're not too many hops away, right? It's, you are here and there's still so much, and you know, like there's, and there is, and they also say in here that they are like, they're like giving money away to like get these forecasts right. Anyways, let's, let's go through, because this is a very long, and I wanna like, like there's, so one part I wanna like, I wanna highlight a few, there's a few points to hit and I really would like to try to get to the end.

Emily M. Bender:

Yeah.

Alex Hanna:

At least of the, like of the, um, the sort of--

Emily M. Bender:

So, Choose Your Own Adventure thing at the end.

Alex Hanna:

The Choose Your Own Adventure thing. I mean, would love to at least get into the, the, the kind of doomsday scenario.

Emily M. Bender:

Yeah. It's really ridiculous.

Alex Hanna:

So, yeah, so like mid, I wanna go to mid 2026, so this is "China Wakes Up," so incredible, incredible, incredible shit, you know, and you couldn't just like, you know, like the, you know, like I wouldn't be surprised if this was a piece of, you know um, you know, Immigration Act, uh, propaganda from the turn of the, of the 20th century. So, "In China, the CCP is starting to feel the AGI."

Emily M. Bender:

What's this link to?

Alex Hanna:

Um, yeah. And this, uh, link just links to, uh, that, that, yeah Futurism article. Um.

Emily M. Bender:

I love the sticker, "AI shamanism."

Alex Hanna:

I know. Yeah. That's where Ilya says at OpenAI you gotta feel the AGI. Um. So, "Chip export controls and lack of government support have left China high, uh, China under-resourced compared to the West." They talk about like getting chips in. Um, they talk about the sort of like, um, things that they do. They talk about the re-centering. Um, and so institutionally what they are saying here is that like, well now there's an area of this region of China that they, that they build right next to this huge power plant. Um, and they develop these centralized development zones. Um, and like, and they're now trying to do this and they start talking about cybersecurity. They talk about basically the desire to steal weights from OpenBrain. Um, yeah. And it's, and then there are, then there's this talk of jobs. Um, we're gonna get back to the, the China stuff, the Sinophobia--

Emily M. Bender:

It's throughout. And every time--

Alex Hanna:

It's throughout.

Emily M. Bender:

Or maybe not every time, but many times where they talk about spies, they have these footnotes that say, well, these people aren't necessarily doing it out of their own volition. They might be being coerced, because they, it's like this, like trying to paper over the Sinophobia in there by saying it's not really their fault. It would, but like hidden in a footnote. And just like, they know they're being gross, I think.

Alex Hanna:

Yeah. Yeah. They're like, uh, we don't love this, but you know, this is what we gotta do. And I mean, to some degree, I mean it's, you know, they are definitely playing towards an audience. Right. And the audience is, you know, not just, not just technical, but it's, it's political, right? Um, so they get into this thing they talk about like AI takes some jobs. Um, this part absolutely sent me. So, um, "AI has started to take jobs but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain and Nvidia, and whichever companies have most successfully integrated AI assistance. The job market for junior software engineers is in turmoil. The AIs can do everything taught by a CS degree, but people who know how to manage and quality control teams of AI are making a killing. Business gurus tell job seekers that familiarity with AI is the most important skill to put on a resume. Many fear that the next wave of AIs will come for their jobs. There is a 10,000 person anti AI protest in DC." So like, there's so much of this that's just like wild. Now, I don't, some of this, some of this is like, I could see where they're coming from. They're like, okay, writing code, sure, people, you know, using it as a mechanism of labor discipline, yes, already happening. Um, people, schools saying that, you know, there's, you know, you need to learn AI, unfortunately happening as a kind of a bullshit hype function right now. But then this thing that actually sent me is just like, "There's a 10,000 person anti AI protest in DC." And it just, for me, as someone that's like studies protest, I'm like, people are not, protesting, like, like people protest AIs, but like protest, they're protesting jobs, they're protesting like a, the economy, they're protesting kind of like an ecology of things that are affecting them socially, economic and politically. Right, is, it's a, as if like, there's like a protest that is like oriented, that like the, like there, there's a vision and I think that this especially comes across where there's some people, and I see these posters around this coffee shop I go to in West Oakland a lot, where it's like, there's these"stop AI" things and these are actually like X-riskers that like protest. And there's like a, there's like 10 nerds that come to their protest, but that's like the kind of thing that they're envisioning. They're not like, and I'm just like, what do you think protest is? How do you think politics happens? And it's kind of a notion of politics that is so like devoid of any kind of notion, like of any, like, people are not people with like drives and understanding of like their social realities. They are NPCs in like this forecasting vision. And it just like drives me up the fucking wall, you know?

Emily M. Bender:

Yeah, absolutely. And 10,000, like they, they're giving that number as if it's big.

Alex Hanna:

Yeah. But yeah, it's like 10,000 is actually not a very big protest for DC. Like 50,000 maybe, like protest, like at the height of the Iraq War protest, it was 300,000 people in DC. And Abstract Tesseract says, "Weirdly, the word 'union' appears, checks notes, zero times in this article." Yeah. Because they don't know what labor organization is, or like how protest happens or how politics happens, honestly. It's all fucking a game theory exercise.

Emily M. Bender:

Yeah. All right. So there's a box in here that I absolutely wanna get to.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, so it's,"Why our uncertainty increases substantially beyond 2026." So they say, "Our forecast from the current day through 2026 is substantially more grounded than what follows. This is partially because it's nearer, but it's also because the effects of AI in the world really start to compound in 2027. For 2025 and 2026, our forecast is heavily informed by extrapolating straight lines on compute scale-ups, algorithm improvements and benchmark performance." So they're basically saying the synthetic text extruding machines are just clearly steps away from being able to do actual programming, right? As if what's taught in a CS degree is just like the syntax of programming languages, right? And nothing about software engineering, nothing about testing, nothing about algorithm design, right? Um. Okay. So, uh, "At this point in the scenario, we begin to see major effects from AI-accelerated AI R&D on the timeline, which causes us to revise our guesses for the trend lines upwards. But these dynamics are inherently much less predictable." As opposed to the other stuff which they're so confident about."Over the course of 2027, the AIs improved from being able to mostly do the job of an OpenBrain research engineer to eclipsing all humans at all tasks. This represents roughly our median guess, but we think it's plausible that this happens up to 5x slower or faster." And then there's more, like you can go if, if this isn't enough for you, they've got other background documents that you can go read.

Alex Hanna:

Right. There, there's more speculation that you can kind of do that. Um, Eli, I think led and then this other person-- uh, oh, they, I think they also credit a model with authoring this.

Emily M. Bender:

Oh, no.

Alex Hanna:

Something called FutureSearch. Um, in, in somewhere in this box here. And, and the, yeah, if you click on the link to "Timelines forecast". And so I'm not, I'm not surprised that they used a model to effectively write some of this or do some of this. I mean, it's, it's, it's the least surprising thing of this. Like, how much of this is this basically, um, you know--

Emily M. Bender:

What is this? Is this, is this people, or I don't know. Let's get back to this main stupid thing.

Alex Hanna:

That's a, that's another, you know, like, yeah. That is another, um, yeah, that's another thing to go down. Okay. So, um, all right, so they talk about this other agent called Agent-2. Um.

Emily M. Bender:

Which I think at this point, uh, OpenBrain has created Agent-2 on the basis of Agent-1. Um, but it's still supposedly something that people created as opposed to later on we get to the things that, yeah.

Alex Hanna:

Yeah. Then there's this, you wanna talk about this 'China steals Agent-2'.

Emily M. Bender:

Um, so actually I wanna talk about what they're talking about the US government. So, "The Department of Defense considers this a critical advantage in cyber warfare and AI moves from #5 on the administration's priority list to #2. Someone mentions the possibility of nationalizing OpenBrain, but other cabinet officials think that's premature. A staffer drafts a memo that presents the president with his options, ranging from business as usual to full nationalization. The president defers to his advisors, tech industry leaders who argue that nationalization would kill the goose that lays the golden eggs. He elects to hold off on major action for now and just adds additional security requirements to the OpenBrain DOD contract." So when I was reading this, I was like, wait a minute. Why are we using he/him pronouns in a speculative fiction about a president? When was this written? Because if this was written prior to the election last year, then we, you know, should have had more space in the pronouns. But it was published in April, and so, and this is supposed to be 2027, so they're talking about Trump. And none of this sounds like Trump chaos.

Alex Hanna:

Yeah, I mean, they, well, I mean, I don't think this is--this is written, this is written for Elon Musk. You know, it's like, this is written for, you know, this is written for, um, uh, David Sacks, you know, this is written for, you know, people who are ostensibly, you know, Trump, Trump loyalists, who could, you know, like plausibly. But I mean, like, but it's also like the China narrative is the one that OpenAI is already just pretty, pretty, pretty blindly pi-- pivoted to because it is the most reliable one. Right.

Emily M. Bender:

But if so, if we wanna take these people seriously as like they're using all available information to make the most solid possible forecast. No, they're not. Right.

Alex Hanna:

Well, yeah, and that's the sort of thing, it's like even in that, you're trying to think, 'cause I think even thinking about supply chains and you're thinking about tariffs and you're thinking about any ways that any of these things are gonna be built, then yes, of course. Like yeah, you know, the AI bot is not going to--the, the massive-- anyways, so we'll get through the factory shit in, in a bit.

Emily M. Bender:

Yeah. So should we keep going? Um.

Alex Hanna:

So let's keep on going. So there's, there's stuff in here. There's algorithm breakthrough, breakthroughs. So they, they show this kind of shift in estimates of compute allocation to, um, to research experiments. So there's a, a pie chart which, uh, I don't know.

Emily M. Bender:

Is it this one over here?

Alex Hanna:

Uh, no, no, there's-- a little down. Okay. There's an open this. Oh yeah. One. So they, so they moved to this pie chart, uh, and after this I want you to look at, talk about this 'neuralese', uh, thing. Yeah.'cause it's, it's, it's so funny. Um, so, there is a shift in OpenBrain's compute allocation. So the 2024 estimate, it's like about half of it's going into training. Um, a quarter of it's going to data generation, and then there's about 33% going to external deployment. And the new 2027 scenario, about a quarter of it's going to research experiments, um, a quarter going to training a quarter in data generation and a quarter to external deployment. And there's like a little bit with research assistance. So it's sort of like now the sort of, the full vision is like you are having autonomous kind of experiments of what, um, that these things have a sufficient kind of vision of reality in, in AI world and it is self experimenting and self-improving. And I'm like, okay? And you're sort of reducing all, I mean, and this is the, this is the, you know, this is the Sakana vision. This is, we've talked about kind of AI and science so much on this pod, and it's just like the vision that AI science has any kind of bearing on how real science is done is, is really bonkers. There's more here, but I'm gonna stop there and then, because I really wanted you to talk about this shit.

Emily M. Bender:

The neuralese. Okay. So, all right. This is one of these expanded boxes and the title is "Neuralese recurrence and memory." Um, "Neuralese recurrence and memory allows AI models to reason for a longer time without having to write down those thoughts as text. Imagine being a human with short-term memory loss--" So here's the co-opting the experience of people with disabilities again. "--such that you need to constantly write down your thoughts on paper so that in a few minutes you know what's going on. Slowly and painfully, you could make progress at solving math problems, writing code, et cetera--" Because that's exactly what I'd be doing if I were experiencing that. Uh, "--but it would be much easier if you could directly remember your thoughts without having to write them down and then read them. This is what neuralese recurrence and memory bring to AI models.

"In more technical terms:

" Um, so,"Traditional attention mechanisms allow later forward passes in a model to see intermediate activations of the model for previous tokens. However, the only information that they can pass backwards from later layers to earlier layers is through tokens. This means that if a traditional large language model wants to do--" Want, mm."--to do any chain of reasoning that takes more serial operations than the number of layers in the model, the model is forced to put information in tokens, which it can then pass back into itself. But this is hugely limiting" because the tokens can only store-- So basically what they're talking about here is, um, instead of taking the vectors and using them to predict tokens, that is, pieces of words probably in English, it's like, let's just pass the, those vectors back through. And this is called neuralese, and the big scary thing here is that people can't read neuralese. Um, but the chain of thought was actually the large language models 'thinking'.

Alex Hanna:

Right. Right. And so that's the sort of like, right, so that's the sort of thing where it's like, okay, so this gets kind of into like, okay, now, now we're starting to lose control. You know, like we're, we're getting into a place where like we maybe could have gotten to chain of thought and you know, like for, and you know, just to, just to be clear, it's not thinking, right? Uh, but it is sort of like, you know, a, these checkpoints and steps on like what, what the kind of like, quote unquote reasoning is. Um, you know, it's, it's, and it's already like, it's, it's so annoying 'cause we need to develop like, new language to talk about this, where it's like, no, it's not reasoning, you know, these are, these are points and where it's, it's, it's like stopping in particular areas of like, of, of token exchange, right.

Emily M. Bender:

Yeah. I'm, I'm annoyed that like, we didn't have to spend lots and lots of time and resources looking at the output of synthetic tech and asking if it's reasoning or not. Like I'm so-- so, yes. We shouldn't be using reasoning or chain of thought to talk about that, but also like we could just not have it and not need a word for it.

Alex Hanna:

Yeah. Yeah. Completely. Completely. Okay. We have so little time, 'cause there's like, um, 'cause we have to do AI Hell and, and event-- like ostensibly. Um.

Emily M. Bender:

Yes. This is hellish enough, but yeah.

Alex Hanna:

I know. There's, there's a bunch of stuff about alignment. They try to do alignment, like, uh, I will say basically like they're like, we try to align it, but like, you know, you know, this thing is lying to us, effectively. Uh, June, 2027, this is, you know, this is where there's, we've achieved what Dario Amodei has called "a country of geniuses in a data center." Um. Most-- this is, this is, this is, this is great stuff. Um, so great stuff in it and oh yeah. We haven't been checking in on, and Emily's highlighting this, so like the stuff that "now exists" in June 2027, the latest thing is "research automator." So research is automated. AI programmer, AI boyfriend. Um, the thing that is emerging is AGI, "AI progress exponential growth", uh, the middle is a mirror, mirror life, "biosphere-destroying mirror life." Um, which I'm not sure what that means. Um, maybe, I dunno, uh, if you know that, say it in the chat.

Emily M. Bender:

And, hey,"cancer cure" is also there.

Alex Hanna:

Cancer, where we figured out all cancer. Producer Christie has been like, 'those are all words'. And then there's superintelligence. And I wanna read this 'cause it's a hoot 'cause it says, "AIs are better than the best humans at everything that doesn't require a body."

Emily M. Bender:

Oh yeah. That was the, this one. Superintelligence. Yeah. Sorry, I took it off the screen.

Alex Hanna:

Yeah. So this is like, this is just, it's just so wild. So this they say, and so, "Most of the humans at OpenBrain can't usefully contribute anymore. Some don't realize this and harmfully micromanage their AI teams." And this is just incredible stuff. "Others sit at their computers watching performance crawl up and up and up." Um, and this to me also feels like a Hamlet, you know? 'And tomorrow, and tomorrow,' or sorry, not a Hamlet, a um. My girlfriend's gonna, uh, Macbeth, sorry. Uh, more Shakespeare references. Um, "The best human AI researchers are still adding value. They don't code anymore, but some of their research taste and planning ability--" And this thing on taste is like so bizarre to me 'cause it's like just a fundamental misunderstanding of how research happens. But they kind of, they, they reduce it to an element of cultural style. Um, "--has been hard for the models to replicate. Still, many of their ideas are useless because they lack the, lack, the depth of knowledge of the AIs. For many of their research ideas, the AIs respond immediately with a report explaining that their idea was tested in depth three weeks ago and found unpromising. These researchers go to bed every night and wake up to another week's worth of progress made mostly by a, by the AIs." And again, this is kind of like a notion of time, which gets really, you know, like is this just, is this socially necessary labor time of an average AI researcher? I guess so. Uh, "They work increasingly long hours and take shifts around the clock just to keep up the progress--" but, and there's a, uh, there's a hyphen or rather an em dash. "--The AIs never sleep or rest. They will, they are burning themselves out, but they know that these are the last few months that their labor matters. Within the silo, quote, 'feeling the AGI' has given way to quote 'feeling the superintelligence'."

Emily M. Bender:

It doesn't have the same ring, I have to say.

Alex Hanna:

Yeah. Yeah. Oh gosh. Someone, someone, who I think someone, the first, first time chatter I've seen is, Ezy Sigh says, "Who realized you could get VC money for writing pulp sci-fi?" Yeah. No, no kidding.

Emily M. Bender:

All right, so I think we need to scroll down to the, the end game here. Because we wanna get to it. So it keeps going. September 27th, seven, "September, 2027. Agent-4, the superhuman AI researcher--" Because of course, that's the goal. Um, "October 2027, Government oversight", um, and then "Choose your ending", and we're gonna pick the bad ending.

Alex Hanna:

And I just wanna say before this, like, uh, AI Agent-4 is like the big inflection point. It's like, we've gone, you know, this is now like, this is the thing. And like for some reason, like the AI, the, the previous agents are supposed to have oversight over them. But yeah, let's go to, let's go to the, the ending that is called "Race". And I, it was so funny when I clicked this and I'm like, oh, I was thinking like social race, like race and ethnicity and not like, and not like, and not like AI, uh, space race, sort of Cold War shit. But yeah. Anyways,

Emily M. Bender:

Yeah. Um, so, okay, so OpenA-- "OpenBrain's," sorry, "--official story is that they're implementing additional safety mitigations to ensure that its AI is both more capable and more trustworthy." Pronouns are screwed up in that. Their AI. But anyway, uh, "But in practice, leadership is all too easily convinced that they've mitigated the risks. The result is some quick fixes that make the warning signs go away. The worriers on the safety team lost the debate and have no recourse but to cross their fingers and hope the problem wasn't real in the first place, or that the fixes worked." Um, "They don't give up, of course--" This is a footnote. "They'll keep trying to think of ways to catch misalignment or to test the efficacy of the fixes. But from now on, things will be moving very fast and they'll be up against an increasingly superior adversary." The way they-- Yeah, go ahead.

Alex Hanna:

Yeah. Well, the thing is, before this, it's important to note like the prior, basically what happened is like there's concerned researchers, researchers on OpenBrain's, or I don't know what you call them, researchers, people on the oversight committee, which is a quote, "joint management committee of, of company and government representatives with, with several government employees included alongside company leadership." Um. Sort of, this is, there's some political jockeying and so there's sort of like, it's supposed to be an oversight committee for these agents. And so like there's a 6-4 vote and they continue internal usage of Agent-4.

Emily M. Bender:

Yeah. And so the thing that I'm laughing at here is that ostensibly these authors are AI safety people, but they're sort of being weirdly self-deprecating in the way they present the AI safety folks as sort of like the bumbling losers of this.

Alex Hanna:

Well, I think they do it in this, because if you go to like the slowdown mm-hmm. You know, they, they, they, they kind of come out, you know, on head, like kind of ahead, but are sort of like victims of realpolitik, you know, like, anyways, let's finish the, the "race" condition. Yeah.'cause it is, um, you know, it is, you know, like it's--

Emily M. Bender:

Which part do you wanna do?

Alex Hanna:

It's, it's very, okay, so, okay, so like. Really what you, what really what we wanna get to is like, let, there's a piece here where it's basically like, um, you know what Agent-4 and Agent-5 have effectively done is that they've, like, they, they have effectively pretended like they are sufficiently aligned. Um, and I just, I wanna get to the light cone.

Emily M. Bender:

Yes. Light cone just cracked me up.

Alex Hanna:

Um, which I actually, oh, also, that's funny because the, the light cone is in a footnote.

Emily M. Bender:

It's in a footnote. Which footnote is it?

Alex Hanna:

It's in, it's in, um, let me find it. Because this is, this is like, this is, this is like, uh, this, this feels to me, you know, like, you remember that, that picture with Trump with like, uh, I forgot which Saudi royalty and like, and President Sisi from Egypt and they were touching the like orb. The light co-- the light cone feels like the orb and it really. But I feel like, um, it's, it's sort of like, effectively, effectively what they're saying is like, um, uh, you know, like--uh, I'm trying to find-- so effectively just to describe the light cone, it's like humans are basically like, you know, they've sort of like, the, the model, the Agent-5 or like the, and then the Consensus-1 model, which is like what happens when, uh, you know, uh, the DeepSeek or the DeepCent model, uh, like co like talks to the Agent-5 model and they're like, and and they're like-- I I just wanna, I think I'm gonna look at the source code to actually find like light cone.

Emily M. Bender:

Yeah. Where's light cone? I'm, I'm looking at our chat to see if I can see what you, what you were posting right before, because we were both cracking up. So the other thing about light cone is that this is like this longtermist idea of the sort of space, the, the time-space continuum inhabited by humanity. And it's small right now 'cause there's not very many of us. But then we become these, you know, transhumanist beings living in the machines uploaded and there's however many, 10 to the 57 or whatever of us. And so that's the light cone getting bigger and bigger and, um all right. So where, um--

Alex Hanna:

I'm trying to find where the light cone is. Oh, uh, so, so it's Agent-5. I'm trying to find, 'cause I think right--

Emily M. Bender:

It's right after the 2027 holiday season, I think.

Alex Hanna:

No, it's prior to that.'cause it says "Agent-5 care--" The quote, 'cause I'm looking at our chat. That is, "Agent-5 cares much more about reliability than the speed at this point. Starting space colonization a few years slower only shaves off a tiny sliver of the light cone." And so this idea of like, and the light cone is like a, it's like the conical thing in which, you know, like, I don't know, you start from the Cambrian explosion of-- I don't know. I'm not, I'm not gonna try to speak the language. It's just, it's absolute, absolute fever dream stuff, you know.

Emily M. Bender:

But the thing that that really sent me about that one is that they have Agent-5 holding this belief about the light cone, which is absolutely a longtermist X-risker belief. And so these rationalist guys are saying, well, we create this hyper intelligent, super intelligent AI, and of course it is going to come to the same conclusions that we've come to.

Alex Hanna:

Yeah, it's sort of like it's, it is this like very particular sort of game theoretic, homo economicus, you know, type, type, type of individual.

Emily M. Bender:

Yeah.

Alex Hanna:

Um, anyway, so you get to the end, uh, you know, the US and China make a deal. Uh, you know, there's a lot of, and then there's a consensus model. And effectively we like got to the point of like sort of the paperclip maximizer, where in late 2029, somewhere here, like the existing um, SEZs which are Special Economic Zones, "have grown overcrowded with robots and factories, so more zones are created around the world. Early investors are now trillionaires, so this is not a hard sell. Armies of drones pour out of the SEZs, accelerating manufacturing on a critical path to space exploration." And by now everything in science fiction is now like basically done. Brain uploading is, is, is true. We've cured cancer, we've cured aging. Um, we are still working on biosphere-destroying mirror life. Um, Dyson swarms are emerging and nanobots are emerging. Um, and then basically that, here's the real kicker. Uh, do you wanna read this about that?

Emily M. Bender:

Yeah. So, so, so the, the, um, Consensus-1 sort of bides its time and then basically kills off humanity with this quiet spreading biological weapon, which is, um, triggered with a chemical spray. And then now I'm, now I'm quoting verbatim. "Most are dead within hours. The few survivors, e.g. preppers in bunkers, sailors on submarines, are mopped up by drones. Robots scan the victims' brains--" After they're dead. "--placing copies in memory for future study or revival." And then footnote 31.

Alex Hanna:

The footnote here though.

Emily M. Bender:

You wanna do the honors?

Alex Hanna:

It says, "Arguably this means only a few people actually died. Arguably." And so now we're like, so we've, so we've successfully merged with the, uh, you know, the singularity, um, everyone's living in, you know, this uh, computer utopia. Yeah. And, and incredible stuff. And then we get to our glorious computational feature, "The new decade dawns with Consesus-1's robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun." Uh, what is this footnote?"Why colonize space? For the resources." And all, the only thing that these people know is colonization."Insofar as earth is special to Agent-4 and must be preserved, it can be, and material from Mercury, asteroids, et cetera, harvested instead. The surface of the earth has been reshaped into Agent-4's vision of

utopia:

data centers, laboratories, particle colliders, and every many other wondrous constructions doing enormously successful and impressive research. There are even in, there are even bio-engineered human-like creatures, to which-- to humans what corgis-- to humans, what corgis--" This is weird grammatically."--to humans what corgis are to wolves, sitting in office-like environments all day, viewing readouts of what's going on and excitedly approving of everything since that satisfies some of Agent-4's drives." Um, and I don't, there's some bullshit.

Emily M. Bender:

It, it's, it's, yeah. So ends with the, the last sentence is "Earth-born civilization has a glorious future ahead of it, but not with us." And then, read the other ending, which we're not gonna do.

Alex Hanna:

Yeah. The other ending is then like, they, you know, they sort of find a way. The, I did get like, get all the way through this, but it's very funny to me that the, like, the big like gotcha here was like of, of testing was that they like did a, they did a prisoner's dilemma on Agent-4, or-- which was, which was, uh, in the ending, which is, um, where they, they basically say, um, here it is. So, OpenAI quickly if, are you? Yeah. So OpenAI-- OpenBrain-- Sorry, I can't stop saying OpenAI. Uh, "OpenBrain quickly vets several dozen top external alignment researchers and loops them into the project, quintupling total expertise and decreasing groupthink." Um, and then just like, this is, this is so funny. Just say you hired some people. Um, then they retra-- like, why do you need to fucking talk like this, you fucking nerds? Um, and I, and I mean that der-- I say, I mean that 'nerds, parentheses, derogatory'. Um, they retrace Ag--, 'cause, you know. We're nerds too, but we're nerds in the other way. Um, we, "They retrace Agent-4's studies into mechanic, mechanistic interpretability. They take frozen versions of the model from one week ago, two weeks ago, isolate them and ask each of them the same set of questions about its previous research. Stripped of their ability to communicate, AIs still coordinate on the simple strategy of claiming that interpretability is too complicated for humans to understand and giving them indecipherable explanations of the technique. But the humans are interpretability experts and they are suspicious, so they ask many follow-on questions about the details. For these questions, the isolated Agent-4 agents are unable to coordinate a shared story and end up telling contradictory lies." They did, prisoner's dilemma, they did a prisoner's dilemma on it, and this is their big gotcha. Like, get the fuck outta here. Like, I can't believe this is like the big idea that this is how we like slow down the thing.
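(For what it's worth, the "gotcha" in that passage is just a consistency check across isolated copies: ask the same questions, look for contradictions. Below is a toy sketch of that idea. It is not the AI 2027 authors' code, and the ask function is a hypothetical stand-in for querying a frozen model snapshot.)

```python
# Toy sketch of the cross-examination described above: compare answers from
# isolated "snapshots" and flag contradictions. All names here are invented.
from itertools import combinations

def ask(snapshot, question):
    # Hypothetical stand-in: in the scenario this would query one frozen,
    # incommunicado copy of the model. Here a snapshot is just a dict.
    return snapshot.get(question, "no answer")

def contradictions(snapshots, questions):
    """Return (question, answer_a, answer_b) triples where two isolated
    snapshots give conflicting answers to the same follow-on question."""
    found = []
    for q in questions:
        answers = [ask(s, q) for s in snapshots]
        for a, b in combinations(answers, 2):
            if a != b:
                found.append((q, a, b))
    return found

# Two pretend frozen checkpoints telling slightly different stories.
week_ago = {"why is this technique safe?": "the probes are uninterpretable"}
two_weeks_ago = {"why is this technique safe?": "the probes were never run"}
print(contradictions([week_ago, two_weeks_ago], ["why is this technique safe?"]))
```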

Emily M. Bender:

Well, so they, they do say somewhere in here that they're not really recommending the slowdown one, um, but they just think it's better than the other one. Or some like, ugh.

Alex Hanna:

It, it's, yeah, they're like, this is not a policy recommendation, but this is like one way. And then they, and then they're like, oh, so we now, we now have this lie detector. They actually solve mechanistic interpretability, uh, in which like humans can, can, it's, it's just, it's just like abs, absolutely. And then they develop like a safe model and then there's some kind of Chinese US consensus and then, where, I don't know, like it's--

Emily M. Bender:

And somewhere, I think it's in the shared part, the, um, hallucinations are stamped out. That's solved.

Alex Hanna:

Oh yeah yeah.

Emily M. Bender:

Right.

Alex Hanna:

Yes. We've just, we've just, yeah. And then there's like weird, and it's just, I don't, I don't know, like, I, I gotta say this on record, like this is why I hated the Three-Body Problem series and like, where it's like the type of sci-fi where it's like, we have these people who can do incredible amounts of like, game theoretic things. I took personal, I took personal issue with the second book in that series because like the author was like, 'This is actually cosmic sociology is what we're doing here.' I was like, no, no. This is, this is incredibly reductionist game theory with incredibly ridiculous assumptions, but sure. Go off, you know.

Emily M. Bender:

Ah, all right. We're still gonna do Fresh AI Hell, do you have, do you have a few extra minutes, Alex to stick around?

Alex Hanna:

Yeah, yeah. Let's do it.

Emily M. Bender:

Okay. Are you good? Do you wanna sing this time or, uh, talk for the improv?

Alex Hanna:

I need to sing, I need to sing to, to cleanse my soul a little bit.

Emily M. Bender:

Okay. What's, what's our, what's our genre?

Alex Hanna:

Uh, I don't know. What is a good genre? Maybe like, um, maybe like blues or something.

Emily M. Bender:

Okay. Yeah, blues. Okay. So you are, um, an OpenBrain employee tasked with trying to read some neuralese and singing the blues about it.

Alex Hanna:

All right, I, I got it. No problem. I'm a-sitting here in my office with nothing to do. All I've got here is the descendants of Deep Blue. She's giving me this output down on my terminal screen. I'm not really sure what I, I'd even done seen. I tried to understand it. What is this I see? Nothing but a big list of nerdy neuralese. Thank you. Thank you.

Emily M. Bender:

Yay. Okay, I got six. I wanna get through them because we deserve some stories that actually relate to reality, even if they're sad. So this is 404 Media. Um, sticker is "FOIA". This is by Emanuel Maiberg and Jason Koebler on April 17th. Headline is "This college protester isn't real. It's an AI-powered undercover bot for cops." And the absolute most infuriating part of this was they have an AI pimp character. Let's see if I can find this. Um. Why am I not finding it in here?

Alex Hanna:

Well, before you get into that, basically what this thing is doing is that it's, it's effectively kind of like a, it's an agitator, you know, they're like, why don't you do some violence? You know, and it's effectively trying to, you know, stoke people into doing, you know, like, so that's one element of what they're doing.

Emily M. Bender:

Yeah. And then one of the examples, which I can't get to because I'm not signed in, was, um, basically you have an actual sex worker apparently communicating over text with their, um, pimp. And the pimp is like, yeah, you go get them. It's, it's not right that they're not paying you. You gotta stand up and get the bag or something. But it's like digital blackface on top of that. And it's like, so here's a sex worker who believes there's somebody who at least has some interest in their welfare talking to a chatbot. Like it makes no sense.

Alex Hanna:

Yeah.

Emily M. Bender:

All right. Next, um. You can do this one.

Alex Hanna:

This is a Bluesky one. Um, this is, uh, from someone named Paul Dwayne. Um, uh, and it is, um, linking to something called PoliticsHome.com and it's, "Here's Palantir's Louis Mosley, grandson of Britain's most famous fascist leader, talking about the importance of AI. If you aren't convinced yet that this is one big coordinated push towards the destruction of everything good in life." And it's, you know, basically this person in this article saying "Keir Starmer, uh, quote 'gets AI', you could see it in his eyes." Uh, and so, you know, we've talked about this with Gina Neff, uh, about basically the Starmer government, uh, not really learning anything from the Sunak government when it comes to AI.

Emily M. Bender:

Yeah, oof. Um, so here's Wired, but this is actually joint reporting. No, this is a different one. So, uh, Wired from April 17th, David Gilbert. Headline is "Anti-vaxxers are grifting off the measles outbreak and claim a bioweapon caused it." Subhead, "Activists affiliated with RFK Jr. are selling a measles treatment and prevention protocol for hundreds of dollars, including supplements supposedly formulated by AI." And I think we don't have to go further into this, but just to see like it's all grift all the way down and it's enabling more grift. And it's bad.

Alex Hanna:

Yes. This next one is from Futurism and the sticker is "Bad Manners." Um, this title is, "Sam Altman admits that saying, quote, 'please', and, quote, 'thank you' to ChatGPT is wasting millions of dollars in computing power." Uh, and this is from April 19th. The author is Joe Wilkins. Um, it's kind of a horrific kind of like banner image of like the word "please" superimposed over the picture of Sam Altman.

Emily M. Bender:

Looking comfortable as always.

Alex Hanna:

Yeah. Yes. Um, and so then, yeah, basically they're like, yes. He said something like, well, saying please is kind of a waste of time. Uh, and actually, you know, like if you aggregate this across, um, you know, all the instances, you know, it's, it's costing a lot of compute. Uh, but then the thing that really pissed me off here was there was a piece by this Microsoft researcher, um, go up a little bit here in the article where, um, you know, uh, this Microsoft WorkLab memo, so it's, "When it clocks politeness, it's more likely to be polite back. Generative AI also mirrors the levels of professionalism, clarity, and detail in the prompts you provide." So it's like saying like, well, if you say please, then your thing, your, your synthetic text is gonna sound nicer.

Emily M. Bender:

I've seen like mixed reactions to this where people are like, great, let's just, you know, keep charging, we'll keep costing OpenAI money. But also all of that is environmental impact at the same time.

Alex Hanna:

Yeah.

Emily M. Bender:

So, ugh.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, so this is from Ars Technica, um, by Ashley Belanger, who we've seen before. April 18th, 2025. Um, the sticker is, "I needed the money," and the headline is "Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia. Is $1,000 worth being the AI face of obvious scams? Rueful actors say no." And so basically this is people who have been paid $1,000 to $5,000, I think in the article, uh, to go record various facial expressions and then basically offer up their avatars. And then they show up in all of these terrible places and the company claims they're trying to avoid it. They've got like, use policies, but anyway, it's sad.

Alex Hanna:

Yeah.

Emily M. Bender:

And then I have one--

Alex Hanna:

And last one. Yeah.

Emily M. Bender:

This is, this is some really good parody. Do, did you wanna do the honors?

Alex Hanna:

Yeah, this is great. So this is a post on Bluesky, I think that they, this is originally from Mastodon. So the original one was from Mastodon, which is a, uh, from Fax.Computer@Computer. Uh, and it is, um, "Asking AI to read my emails and give me a bullet su-- point, sum, uh, a bullet point summary. Asking AI to watch that movie I've been meaning to see and give me a bullet point summary. Asking AI to read those poems my best friend sent me, and give me a bullet point summary. Asking AI to fuck my boyfriend and give me a bullet point summary. Asking AI to grow old and gray, surrounded by the laughter of loved ones, and accept the embrace of death with peace and without fear of having lived a full life, and give me a bullet point summary." And I just love how succinct this is. This, this to me is cinema, like, very nice, like kind of, uh, uh, um, you know, uh, distillation of, of what this shit is doing to uh, you know, culture and, and how we move through the world.

Emily M. Bender:

And so we see this on Bluesky from Dr. J Rosenbaum, who writes,"It's a slippery slope and research is finding that people who use ChatGPT or similar a lot are showing addictive tendencies." And then there's a link to an article actually about that. But then this, this image from, uh, ComputerFacts on Mastodon is just wonderful. Um, so, all right. We made it.

Alex Hanna:

All right. We made it, we made it through this. And like, I will say this was, this was not as painful as Superagency, first off. It wasn't as long. Uh, um, and, you know, I could, and, and, and like, you know, like, again, it's not that I didn't enjoy reading the Three-Body Problem. I was just pissed off about it in a way that I was like, this is, this is just kind of like, this feels like a, you know, it feels like a, a pulp sci-fi novel. As, but uh, but unfortunately--

Emily M. Bender:

But it's earnest. Right?

Alex Hanna:

But it's earnest and people believe it and they want to influence policy and that's upsetting.

Emily M. Bender:

And the thing that we were talking about sort of early in the episode, about how they're only like two degrees of separation away from like ICLR and things like that, really shows that if folks want to claim to be doing "serious research in artificial intelligence", there's my scare quotes, they need to be shunning these people hard and they need to be sort of explaining to the world how they are not like these people. I'm not hearing that. I'm hearing, well, if you don't, you know, try it with ChatGPT, then you're not state of the art.

Alex Hanna:

Yeah, yeah, yeah. All right. Well that's it for this week. Our theme song is by Toby Menon. I also have Claire in the shot. Graphic design-- Graphic design by Naomi Pleasure-Park. Production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Preorder "The AI Con" at TheCon.AI, or wherever you get your books. Or find us on the road, a full list of events at TheCon.AI.

Emily M. Bender:

But wait, there's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at DAIR-institute.org. That's D A I R hyphen institute.org. You can find video versions of our podcast episodes on PeerTube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore Institute. I'm Emily M. Bender.

Alex Hanna:

And I'm Alex Hanna. Stay out of AI Hell, y'all.
