Mystery AI Hype Theater 3000

Building Worlds Through Better Reading (with Reo Eveleth), 2025.09.29

Emily M. Bender and Alex Hanna Episode 64

Powerful AI boosters claim to love science fiction novels, but why do they always seem to take the wrong lessons from them? Reporter and writer Reo Eveleth joins us to discuss the ways tech leaders misuse storytelling, and how we can avoid their visions to imagine better futures.

Reo Eveleth is a reporter, writer, and co-founder of COYOTE Media Collective. They created the hit independent show Flash Forward, which they also turned into a book of the same name. Reo’s work has been nominated for a Peabody, an Emmy, and an Eisner Award.

References:

Also Referenced:

Fresh AI Hell:

Correction: We misstated Molly Ostertag's name during the livestream for this episode. We apologize for the error!

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy now.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.

Alex Hanna: Welcome everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of bullshit mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 

Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 64, which we're recording on September 29th of 2025. Today we're talking about science fiction and AI: specifically, how sci-fi writing has influenced AI boosters and shaped our current hellscape.

Emily M. Bender: Our guest this week is the amazing Reo Eveleth, a reporter and writer who's covered everything from fake tumbleweed farms to million dollar baccarat heists. They created the hit independent show Flash Forward, which they also turned into a book of the same name. Reo's work has been nominated for a Peabody, an Emmy, and an Eisner Award. Welcome to the show! 

Reo Eveleth: Thank you so much for having me! I'm excited to be here.

Alex Hanna: We're so excited to have you! 

Emily M. Bender: This is gonna be so much fun. Let me grab our first artifact. So we are starting with Scientific American, which I think you can now see and I can't, but I'm getting there. Yes, so this is a piece from the end of 2023, so almost two years ago, by Charles Stross. And the headline is, "Tech Billionaires Need To Stop Trying To Make The Science Fiction They Grew Up On Real." Subhead, "Today's Silicon Valley billionaires grew up reading classic American science fiction. Now they're trying to make it come true, embodying a dangerous political outlook." 

Alex Hanna: Yeah, so this piece is a little bit of a gloss on many of the different referents that so many of the tech boosters seem to take inspiration from or get into. And there's, you know, it's of this kind of, quote unquote, "golden era" of sci-fi. So maybe the first two paragraphs are worth getting into. So, "Sci-fi (SF) influences everything in this day and age, from the design of everyday artifacts to how we, including the current crop of 50 something Silicon Valley billionaires, work. And that's a bad thing: it leaves us facing a future we were all warned about, courtesy of dystopian novels mistaken for instruction manuals. Billionaires who grew up reading science fiction classics published 30 to 50 years ago are affecting our lives today in almost too many ways to list. Elon Musk wants to colonize Mars. Jeff Bezos prefers 1970s plans for giant orbital habitats. Peter Thiel is funding research into artificial intelligence, life extension, and seasteading. Mark Zuckerberg has blown $10 billion trying to create the Metaverse from Neal Stephenson's novel Snow Crash, and Marc Andreessen of the venture capital firm Andreessen Horowitz has published a techno-optimist manifesto, promoting a bizarre accelerationist philosophy that calls for an unregulated, solely capitalist future of pure technological chaos." All right. Thoughts so far, Reo? 

Reo Eveleth: You know, it's funny, when, Marc Andreessen published that, you know, manifesto- I'm working on a nonfiction book proposal about sort of the history of Futurism, and I was like, you know that classic tweet where it's like, "He just tweeted it out"? Like he just, it was really a lot like that. Like my guy, I just wrote a whole chapter about the Italian Futurists and how it's a model for these people. And you know, I had actually written a piece for Wired a couple of years ago about exactly this, that the Italian Futurists are this model for tech, you know, leaders these days and sort of the Italian Futurists being this art movement that was very explicitly fascist. And I got all these people sort of like writing in, being like, "Oh, you're exaggerating. This is totally, you're really out of pocket." And then, you know, he just publishes it and he quotes directly from the Italian Futurist Manifesto in his manifesto. It is very clearly modeled off of that. So, you know, he just tweeted it out. I texted my book agent being like, look! You know, it's happening. But yeah, I mean, I am really interested in this premise of this article, that these sort of technologists and, you know, alleged leaders are trying to make the things they read about in these books real, that that's kind of like the way that this works or the way they think about it. Because I'm not sure that's totally what's happening or that's totally what's true, but it is a really interesting, I think, provocation and way to think about maybe some of these questions, is where are they getting these ideas and why, you know, how do they engage with science fiction when it seems as though so much science fiction is actually telling them that they are the villain, and they don't seem to be getting that kind of message?

Emily M. Bender: Yeah. So, you know, the place where this feels true to me is all of the inevitability narratives. And I'm of an age with these jerks, which really bothers me. So we grew up with the same stuff in the sort of nerdy pop culture. And, the sort of idea that that is the necessary future, and maybe not necessarily Snow Crash, but you know, maybe it's Star Trek or maybe it's one of these, you know, happier looking ones. And I have a cat approaching the microphone, so there might be some purring. This is Euler. So this idea that this is just the future, and it's just a question of how fast we run there. And also that what's cool about speculative fiction is not the sort of explorations of how people react to different things in different scenarios. But what's cool is the tech. It's cool and it's inevitable and we gotta get there. And that's sort of the version of this critique that rings most true for me.

Alex Hanna: Yeah, I mean I think that's spot on Emily, in thinking about, you know, the sci-fi that I like, and I think, you know, I would say that the people who are good at their craft, are telling stories to reveal something about the human condition and about kind of persistent moral and ethical quandaries. And, you know, when Star Trek is at its best, that's what it's doing. And when it's at its worst, it's sort of the capsule episode in which, you know, there is some kind of a problem. And then they, you know, they, they inverted the spoiler on the, you know, the Starship Enterprise and it reversed the polarity and then it like, is leading, will eventually lead to healing the civil war between these two, whatever whatever. And it's kind of the worst element of that, even though it kind of, you know, and they have to deal with it kind of internally to, you know, not violate the Prime Directive, which says you can't really meddle in other cultures, although they violate it all the time. 

Emily M. Bender: But they feel bad about it when they do. 

Alex Hanna: They have to, there is one episode where they have to actually answer to someone about it, but they actually don't end up answering to it at all. Anyways. But yeah, no, I mean, the hyper focus on the technology is like, well, you're not building the future. You're building a dystopia based on a misunderstanding of what technology does in the world.

Reo Eveleth: Mm. I think that, when I think about the ways in which tech leaders are using fiction to inform their worldview, I think the fiction that perhaps has a bigger influence on their ethos is less the science fiction, and more the era of fiction in which the nerds take over, right? This idea that there's these downtrodden, brilliant, mostly white men, teen boys, who should be in charge because they're smarter than everyone else, but are sort of bullied and are, you know, downtrodden and all of this, and that they kind of rise up and take over and they sort of deserve that. And that sort of entitlement of, because I am "smarter," quote unquote, than everyone else, I should actually be in charge, and I'm the kind of unlikely hero. And there are a lot of stories like that in popular culture from a certain era. And today we live in the world where they're actually in charge. They, unfortunately, they have all the money, they have all the power, but they're still sort of, I think, embodying this like underdog- they think of themselves as the underdogs in some ways. And I think that kind of sci-fi, that lens through which a lot of sort of pop sci-fi stories were told at a certain time, has a really big impact on the way that a lot of these folks think about themselves. And what, like when they read a story, who is the character they're identifying with, right? But you hear all the time, and I hear all the time, people say, "Oh, well, if only, you know, Elon Musk read Octavia Butler, then, you know, X, Y, Z." And I, you know, I'm just very skeptical of the idea that if they just read better sci-fi then all of our problems would be solved, because they would understand, you know, what we maybe understand when we read these books. Because you actually hear these guys cite these books sometimes. They at least say they have read them. I mean, Elon Musk has said that one of his favorite books is Hitchhiker's Guide to the Galaxy. Unironically. Even though, like, he's the villain, he would be the villain in this book. Peter Thiel is a big Lord of the Rings fan, which, you know, has some thoughts about power and how power corrupts. A ton of them love the Iain M. Banks books, right, they like, cite these books all the time. And that is a series about a socialist utopia, right? So, I think that there's this sort of like, I think naive idea that, "Oh, if they just read the right stories or they just read different stories, they would see the sort of folly of their ways." And I just am not convinced by that. 

Emily M. Bender: So we need them to read better, not read better science fiction. 

Reo Eveleth: Yeah. Think, think better. Do more better thinking. 

Alex Hanna: Yeah, do, do better. Well, I think about that a bit too, because if you were doing a really bad job reading Octavia Butler, you could be like, "Oh yeah, go towards the stars. Well, that's what I'm trying to do." If Elon Musk... I'm like, no, that's actually not really, you know, that's not the point. 

Reo Eveleth: Yeah. Or you have your AI like summarize the books for you, which is I'm sure what some of these folks are doing when they're reading these. And so it's this weird sort of warped idea of what any of these stories are about. 

Alex Hanna: Yeah, absolutely. 

Emily M. Bender: All right, so there's one last thing in this piece that I wanna do before I get us to this next one, which is a super rich text. But here they say something ridiculous about large language models, so we can't pass it up. So this paragraph: "But there is a problem. SF authors such as myself are popular entertainers who work to amuse an audience that is trained on what to expect by previous generations of science fiction authors. We are not trying to accurately predict possible futures, but to earn a living. Any foresight is strictly coincidental. We recycle the existing material and the result is influenced heavily by the biases of earlier writers and readers. The genre operates a lot like a large language model that is trained using a body of text heavily contaminated by previous LLMs. It tends to emit material like that of its predecessors. Most SF is small-C conservative, insofar as it reflects the history of the field, rather than trying to break ground or question received wisdom." 

Alex Hanna: Thoughts? There's a lot, there's a lot there. A lot going on in that paragraph. 

Reo Eveleth: Yeah, I mean, in some ways I take his point, that like, at the end of the day, if you are trying to make a living as a popular science fiction writer, you are not writing wonky white papers about how you think solar is going to solve the problem of trans, you know what I mean? You're not getting, unless you're, you know, there are some sci-fi writers who are very interested in those questions, and that does appear in their work. Kim Stanley Robinson's work, I think, does sometimes engage in those kinds of questions in the- sometimes to a point where I'm like, okay, let's move along, let's get to the plot, you know? But I think it is, it's hard, it's hard anytime you're painting a field or a genre as wide and as varied and as rich as science fiction with a broad brush like this. I think there's a lot of really great science fiction that is not conservative, that is trying to push forward, and insofar as we see that in the field, is that a trend of science fiction or is that a trend of publishing being afraid of trying new things? I just came back actually from a science fiction writing residency in Banff, with an incredible group of 20 writers who are doing all sorts of absolutely incredible, genre-melting stuff, and it's great. And it's just a question of can you get it published, right? And so there's sort of this chicken and egg question. Is it the writers themselves? Is it the publishing industry? Is it, you know, what are we talking about here? But yeah, I mean, I take his point. We're not trying to predict the future. But also, I do think that it's a little bit of a too easy escape to say, well, we're not, we don't really have opinions and we're not really trying to tell a moral of a story. Like, many people are, right? There's a clear point of view. You know, you come into stories and, you know, Ursula Le Guin is an incredible writer, it's very clear in a lot of her stories, Octavia Butler, of what the sort of thing they're trying to say is, right. What they're trying to show you about humans, or about society, or about politics, or about the ways people can work together or can't work together. So it does feel a little bit too easy to be like, "Oh, we're not trying to say anything." 

Emily M. Bender: Yeah. And there's a lot of space between not trying to predict the future and not trying to say anything. I don't think science fiction is about what will happen. It's what would happen, if.

Alex Hanna: Yeah. And it's gonna be also telling you something, if there were ways of subverting that, or there are ways of thinking about how we could change our own relationship to our environments, you know? And if there were this intervention, you know, well, we need this intervention. I'm like thinking of, I'm thinking about the Xenogenesis series from Octavia Butler, where it's like, you know, what is it? "Two parts of the human, you know, fundamental parts of the human are like, hierarchy and...." Something else. Sorry, oh gosh. Edit me saying the right thing in post. But it's effectively, "Okay, but if we were gonna change that, how could we mitigate that without having an alien species mitigate it and splice their genes in?" So, you know, it's, and I think that's granting a lot of science fiction authors, not a lot of agency in what they're trying to do. And it is also, you know, the things are conservative in so far as the institutional publishing is conservative, and, you know, they don't like to take risks on people who are not well known.

Emily M. Bender: Yeah. So a little bit going on in the chat here. sjaylett says, "I assume Thiel also loves the Russian author's retelling of Lord of the Rings from Sauron's point of view." 

Reo Eveleth: I would love to know, I would love to know if he's ever seen that. 

Emily M. Bender: magidin supplies, "That is Kirill Yeskov, The Last Ringbearer."

Alex Hanna: Yeah, didn't know about that. 

Reo Eveleth: Oh, it's worth reading, frankly. If you're a Lord of the Rings nerd. It is incredible and strange and I actually thought it was great. 

Alex Hanna: That's great. It's not my flavor of nerd-dom, but I love the concept. Yeah. 

Emily M. Bender: So, should I take us to our next artifact, here?

Alex Hanna: Yeah, let's do it. 

Emily M. Bender: Okay. So again, Scientific American. This one a bit more recent, it's from May 16th, 2024. Headline, "How New Science Fiction Could Help Us Improve AI," by Nick Hilden. And the subhead is, "We need to tell a new story about AI, and fiction has that power, humanities scholars say." 

Alex Hanna: Gosh, yeah. This is a rich text, as we say. And when we say rich, it is rich as in manure. Not as in chocolate or money. 

Emily M. Bender: Yes. Or cat purrs, which we have going on again. 

Alex Hanna: Or cat purrs. Yeah. So first off, it's, strong start. It says, "For the past decade, a group called the Future of Life Institute-" which, if you're a listener of this pod, we've surely talked about before- "has been campaigning for human welfare and public conversations around nuclear weapons, climate change, artificial intelligence, and other evolving threats. The nonprofit organization aims to steer technological development, away from dystopian visions that so frequently haunt media. But when it comes to discussions about artificial intelligence, its team has had to face one especially persistent foe: the Terminator." So we've already talked about the Future of Life Institute, and there was a pause letter that they had written several years ago, about why we need to pause all development of AI for six months until we could figure out what the hell is going on. And Emily, you had co-authored a response to that, with the other folks on the stochastic parrots paper. 

Emily M. Bender: Yes, and we can link to that in the show notes. Basically, because they cited us, especially, in their pause letter, we were like, "No, you're not citing us. That's not what we meant." And so, we wrote a rebuttal. And then I think, were these folks also behind that whatever 34 word statement that came out a bit later? 

Alex Hanna: I don't think that was them. I think that was another existential risk group, but I don't remember which one. I wanna skip a paragraph and get to this thing where it talks about artificial humanities, and then we'll go into it. So, "Recognizing the influence that popular narratives have on our collective perceptions, a growing number of AI and computer science experts now want to harness fiction-" as if it is solar or something- "to help imagine futures in which algorithms don't destroy the planet. The arts and humanities, they argue, must play a role to ensure AI serves human goals. To that end, Nina Beguš, an AI researcher at the University of California Berkeley, advocates for a new discipline that she calls the quote, 'artificial humanities.' In her upcoming book, Artificial Humanities: A Fictional Perspective on Language in AI, she contends that the responsibility of making these technologies is too big for the technologists to bear it alone. The artificial humanities, she explains, would fuse science and the arts to leverage fiction and philosophy in the exploration of AI's benevolent potential." Let's pause there. Thoughts on that? 

Reo Eveleth: I'm curious, Emily, what you think, as the most humanities professor person amongst us, perhaps. I'm curious what you think.

Emily M. Bender: I mean, so, the humanities are absolutely critical in this moment for understanding the systems that the technology is landing in and disrupting, and also how to think about human connection. It's a kind of scholarship that doesn't always resort to datafication of everything. And so like yes, humanities is really, really important, but "artificial humanities" sounds like the antithesis of what we need. 

Reo Eveleth: Yeah, I'm really, this piece was really mind-melting for me, because it kind of has this very circular logic of, "Okay, the media is full of stories of scary AI. Which in turn-" later it says- "makes people distrust and potentially then spoils the results of AI, cause people aren't coming to them with, you know, a generosity of spirit, which makes AI more likely to be bad. And instead of trying to fix AI, we should actually just tell the users that AI is good, so that they are nice to the AI, so then it is good." I just, it's like a very mind-melting line of reasoning in here. And this idea that, you know, I see this a lot, like I get this a lot. And many people who work in this field, and I'm sure you all have this too, there is this sense that, "Okay, we need to communicate with the public. And the public does not understand our research. And the public is, in fact, in some cases, afraid of our research." And this, you see this in all sorts of things. Technology. You see this in, you know, gene editing. You see this in all sorts of places where perhaps that fear is not always founded, you know, in great understanding of science. And so then what these places and these researchers will do is say, "Ah, we need to do storytelling to the public. Because we need to explain to them, in a way they understand, what we're doing." And it often winds up being quite condescending in my opinion, cause it's like, "Ah, these people don't know anything. These people, they won't understand it if we try to explain it to them, you know, peer-to-peer. So we need to package it up in a nice little cute package and a story. And we need to do this and we need to do that." And it kind of fundamentally misunderstands why stories are fun to listen to, and what stories do, and how we as people tell them to one another. But it is very, I get asked to do this all the time. There are many science fiction writers who make the majority of their money not doing books or stories, but instead doing "future storytelling," quote unquote for, you know, the Arab Emirates and like, other places, or technology companies that are really trying to use narratives to shift the public feeling about them. And I have some complicated feelings about the ethics of doing that. But we can, there's more in this article that we can carry on and get to.

Alex Hanna: Yeah. I do wanna point out, cause you mentioned storytelling, abstract_tesseract has a really relevant point, and he says in the chat, "This reminds me..." Oh, people chatted a lot. They're having a very spirited debate about Lord of the Rings. 

Reo Eveleth: Oh, excellent. Oh, I'm so excited to read this later. 

Alex Hanna: Which I love. Thank you. Thank you, our audience remains the best in the biz. So abstract_tesseract says, "This reminds me of all the business storytelling guides that fundamentally don't understand the difference between storytelling and advertising copy. Also yet another example of why it's not enough to ask tech bros to study more humanities, because this is what you get," and that's absolutely spot on. I remember at one point in my career I was part of a fellowship that was between this very sciencey new institute at the University of Wisconsin, and they kind of had a social science slash humanities nexus that they were doing. So they gave people a small grant to participate in these monthly meetings or whatever. And it really was, you know, the conversation from the advisor was like, "I'm really interested in just like, making things kind of aesthetically pleasing, or making things that are like within the sciences just like, look nice." And I'm like, "Okay, that's not really the intervention that I think art is trying to make." I mean, there's a lot more complicated reasons why people make art, whether it's describing their own identity or their own histories or own traumas or societal dysfunctions, and imagining new futures. And I thought that was such a one dimensional way of approaching it, and I'm seeing so much of that here.

Emily M. Bender: So I think, I mean, as someone in the humanities, I think that it's not that we need the tech bros studying more humanities. We need a fundamental realigning of power within the academy about sort of where, you know, what's considered valuable and prestigious and things like that. So that the ways of knowing that come out of the humanities are seen as valuable in their own right. And I think that part of the problem, and I'm sure we've talked about this somewhere, but the folks who excel in the kind of courses where your grades are based on tests, and the grading seems very objective, and don't excel- and you know, there are people who can do both- don't excel in the much more qualitative scholarship, where what you're producing as a student is essays, for example, feel like there was nothing serious happening in the humanities classes they took because they didn't, they couldn't relate to it well enough to understand. And then you've got the rest of society saying, "Well, we've gotta make more seats for the computer science majors, because that's where all the money is," right? So it's like, "Okay, doesn't matter. That stuff's silly, and I'm justified in ignoring it," I think is where things end up being. 

Reo Eveleth: Yeah. And one thing that I think one learns in a humanities education, which I don't really have, but I wish I did- in a formal way- is that if you come into a conversation with, "You're wrong, and my goal is to convince you that you're wrong," instead of, "Let us understand together how we both came to the opinions that we have, and then maybe interrogate those together and maybe find, not necessarily a compromise, but just sort of a shared understanding of what's going on," you're gonna have very different outcomes. And there is a ton of research on storytelling and on sense-making and on the ways in which people change their minds or don't. And it is very unequivocal that if you just come in and say, "I'm gonna use storytelling to convince these people that they're wrong," you will fail. It will not work. Because we can all tell when someone comes in like that, we can all tell when someone that shows up to a conversation is not listening to you, doesn't care what you think. It's just like, "No, you're stupid. I'm right. Here's a story." Like, that doesn't work. 

Emily M. Bender: Yeah. 

Alex Hanna: It's slightly more than, "I'm going to show you the math and this is sort of going to work," or if he's a rationalist, "I'm going to show you the made up math and then it's going to convince you."

Emily M. Bender: Right, which brings me to the other thing that I wanted to say. So, Reo, you point out that there's this conflation between the work of storytelling and, and then I guess this was also abstract_tesseract's point, sort of storytelling and advertising. And there's another conflation, which you alluded to before, between like, an attempt to understand the world as it is, and storytelling. And, which isn't to say that storytelling isn't about understanding the world as it is, but there's a core or a kernel in the story, which is about the world as it is. And then there's the imaginary world. But these people live in an imaginary world that they don't understand as imaginary. And so that's the thing you were saying before about like, they think that if they tell nice stories about the AIs and then people say nice things to the AIs, we're gonna get... That's all part of that imaginary story, right? That it's speculative fiction that they don't recognize as such.

Reo Eveleth: That's a good point, yeah. 

Alex Hanna: And it's sort of like, they wanna counter, then, the kind of, it's a, I mean, it is common when you go into these conversations and you talk about, you know, and I imagine you've had this type of interaction, Emily, you know, tell people like, "I read a book. It's about how AI is bad." And they're like, "Well, is AI gonna kill us?" And I'm like, "Well, no. There's other reasons it's bad." And it's not that it's necessarily telling a different story about what this is. It's about, okay, well what are the kind of premises that we have, and what are we thinking about when we think about what gets called AI. So let's break down, and it's the classic sociological move, which is, you know, I question the premises of what you're focusing on. And I think that's brought up in sociology, but it's a classic academic move: well, I want to question your premises. And then let's imagine, and then let's build new, what has to lead from those premises if we think about it together. 

Emily M. Bender: Yeah. And linguists, our twist on that is to be like, "Let's pin down what you mean by that word." 

Alex Hanna: Yeah, a hundred percent. 

Emily M. Bender: All right. So I think we should skip ahead a little bit in this. There's another quote here from Beguš. "'We need fictional works that consider machines for what they are and articulate what their intelligence and creativity could be,' Beguš says. And because fiction is quote, 'not obliged to mirror actual technological developments,' end quote, it can be a quote, 'public space for experimentation and reflection.'" What do you think of this, Reo? 

Reo Eveleth: I mean, on the one hand, yeah. Like it is true, fiction is not obligated to mirror what's real, and it can be a place for experimentation and reflection. I would argue that that is in fact what those dystopias that these people are mad about was doing. That is what HAL is doing. That is what these stories about AI that has gone wrong, the Terminator, is doing, right? It is talking about what happens when a technology runs unchecked. What happens when it is incredibly physically powerful? What happens when, you know, that is literally what those stories are doing. And this person is, and this article is kinda saying, "Well, no, no, no, not like that. Only if it makes the technology look good." You know? Which is, you know, you don't really get to have it both ways, you know? And later in the piece, which we'll see I think, they sort of essentially make the argument that it's not, there's a lack of hopeful science fiction stories out there, which I would contest a little bit. But I think they're arguing there's a lack of them, because to them the only thing that counts is something that positions sort of, Alex, to your earlier point, the technology as the vector by which hope happens, right? The technology as the thing that gets us to the better place. There is huge volumes of hopepunk, solarpunk, whatever term is trendy right now. But most of those stories center around the people making the change, and in fact often going away from certain technologies in order to find a more hopeful place. And so those don't count in this sort of like worldview, as a hopeful science fiction story, because it doesn't center that technology in that way. 

Alex Hanna: Mmm. Yeah. 

Emily M. Bender: And we have lots of good guy robots, right? So C-3PO, R2-D2, like, definitely good guys. Lieutenant Commander Data, if I'm getting that right, in Star Trek, one of the good guys. I mean, also the shipboard computer in various places. 

Reo Eveleth: WALL-E! You get WALL-E. 

Emily M. Bender: Yeah. So I mean, this doesn't feel like it's actually based on a real survey. And like how, what's even the denominator, right? What was their sample? How did they choose, how did they, you know, how did they count?

Reo Eveleth: I had that question about the very first anecdote, like, "Everyone kept saying Terminator!" And I'm like, what, did this happen three times, and you were like, that's it, everyone's only thinking about Terminator? 

Alex Hanna: It's everyone! 

Reo Eveleth: Like, what are we, like, what number? As a reporter, if something happens two or three times, that's not really a, it has to happen more times for it to be a story, for you to say this is always happening. And it, yeah. I'm just sort of very curious about where that comes from and how many times that actually happened, versus a person got annoyed that it happened to them once or twice.

Alex Hanna: I'm assuming it's happening a lot for the Future of Life people because they are, they are pretty much focused on existential risk and, from this article, evolving threats. And they're like, "Well, what? Like the Terminator?" I'm like, well, y'all kind of did that to yourself.

Reo Eveleth: You know what, that's very fair. Yeah.

Alex Hanna: And so, you know, if you're going to an event, you're saying, "Well, we wanna talk about existential risk." Well, people are gonna go, "Well, the Terminator?" So, I mean, you've painted yourself as an org that does this stuff. 

Emily M. Bender: Yeah. And sjaylett's pointing out the "Asimov short stories that show robots in a good light and humans less good." 

Alex Hanna: Yeah. There's lots of different good robots, which is.

Emily M. Bender: But I think they also, to your point, Reo, they also generally are not the hope bringers. I think that Ann Leckie's artificial intelligence, the name of which I can't remember right now, is like definitely trying to act and improve things, but it's also the point of view character. And so you can sort of see that it's, it's working on it, but it's not necessarily succeeding and doesn't understand everything that's going on. And, anyway. Big fan of those stories. 

Reo Eveleth: Yeah, Automatic Noodle just came out, Annalee Newitz's book, which is a very good sort of look I think at this in terms of yeah, what that could look like. So yeah, I think there are, I mean, obviously that came out after this article, so, you know, couldn't be cited. But I do think, you know, notwithstanding that, there are plenty of examples. But it, I mean, the things they say later in this article about the kinds of hopeful futures that they think of, it was sort of a very funny experience to read them and have every single one and have me be like, well, that sounds like a dystopia to me, actually.

Emily M. Bender: Why don't we go onto those, actually? 

Alex Hanna: Yeah, yeah. 

Emily M. Bender: Ugh, except that we've got Bostrom here. 

Alex Hanna: Yeah, there's, gosh, there's a lot here. So, let's say, "If these patterns hold true for more intelligent forms of AI, we need to instill them with scruples before we flip their on switches." Ugh, okay. So this is getting us into alignment stuff- "the University of Oxford's AI doomsayer, Nick Bostrom, has called this need quote, 'philosophy with a deadline.'" Terrible. 

Reo Eveleth: Also unrelated to the rest of the article. I'm a little bit like, why, if I was an editor, I would just cut that paragraph. That doesn't really connect. Like, why is this here? 

Alex Hanna: We had to get Bostrom in here. 

Reo Eveleth: Gotta get him in somehow. 

Alex Hanna: Gotta shoehorn this man in. 

Emily M. Bender: It's also a nice red flag. 

Alex Hanna: Yeah, exactly. If you take Bostrom seriously, then gonna doubt a lot of the other stuff you wrote. 

Emily M. Bender: Okay. So this part we gotta do, yeah.

Alex Hanna: Yeah. "To pull in more artists and thinkers into that discussion, the Future of Life Institute has sponsored multiple initiatives linking fiction writers and other creatives with technologists. Quote, 'You can't mitigate risks that you can't imagine,' Javorsky says. 'You also can't build positive futures with technology and steer towards those if you're not imagining them,' end quote. The Institute's worldbuilding competition, for example, brings together multidisciplinary teams to conceptualize various friendly AI futures. Those imagined tomorrows include a world in which a centralized AI manages the equitable distribution of goods." Oh lord. "A second scenario suggests a system of digital nations that are free of geographical boundaries." Is it just, this like the fucking seasteading, like, network state shit? Anyways, "In another, artificial governance programs advocate for peace. In a fourth, AI helps us achieve a more inclusive society." Okay. Your thoughts on these dystopias? 

Reo Eveleth: Yeah. I mean, right, like, this is the, the thing that's cool about fiction is that, you know, you could give these four things to five writers and you would get five wildly different stories, right? You would get, you know, a story from the perspective of the AI trying to maybe distribute the food. You would get a story from the perspective of somebody who is trying to like hack into the system to get their family fed because they're not actually getting enough. You know, there's just a million ways that this could go, and this is sort of the most flat, boring version of them, to be like, "Don't worry, magic button AI that we push and turn on fixes our problems." And we all know that that's not real. And there's a certain thing that I think is interesting about science fiction, which is that, yes, technically kind of anything can happen, right? Like you can do anything because you're not beholden to the laws of physics. You're not beholden to, you know, reality. You can do anything. And yet, at some point, it's not an interesting story if there's just a magical button that fixes everything, right?

Alex Hanna: That's right. Yeah. 

Reo Eveleth: And so, you know, the reason why people are drawn to stories and drawn to fiction is to not necessarily read about suffering, but read about real things that feel like they are really touching something, even if it's in a distant planet. I mean, Octavia Butler's Xenogenesis, like there's, so many things of that you're like, I don't know what that's like at all. That's an alien, gene splice, whatever. But because the people feel very real and the problems feel very real, it feels real. This sort of takes all of the good, it shows again that they don't really understand what storytelling is and what good stories are, because it takes all of, it takes all the tension out, it takes all of the interest out and the intrigue out. You can't tell an interesting story when the premise is that everything's great already and there are no problems, because magic AI. 

Alex Hanna: Yeah, no, I mean, that's exactly true, right? And I mean, it's sort of, it's reminding me, just exposing my, one of my fandoms, so like Worlds Beyond Number is a podcast, it's a D&D actual play podcast. And the Dungeon Master Brennan Lee Mulligan was talking about this workshop that I think he was running, with, I think Molly Ostertag. And one of the things that, that somebody in the class had said, "Well, I just want a world in which there's just all royalty." And Molly was like, "Well, what do you mean all royalty? Who's doing- rich people don't clean. You know, rich people do not maintain the grounds. If it's, if there's rich people, there's always gonna be servants or an underclass, you know? What does that mean?" So that's the kind of vibe. It's, "Well, what about the centralized AI? And it's just, it's so benevolent. There's no problems." Well like, what do you mean? Who's, where's the governance? Who's running this thing? You know? 

Emily M. Bender: Yeah. So reading some of these other ones, there's this other thing in here about a script contest. And I went to see if I could find these, because one of these, one of the winners... so I'll just read the winners. "The winning entry was set in a town where AI equally serves the needs of all residents, who are shaken when a once in a generation murder complicates their potential techno-utopia." And then, "In another, AI powered advisors equipped with Indigenous wisdom support a more sustainable society." And I wanted to go look and see who wrote that one. Is this someone...? 

Reo Eveleth: Same. I had some questions about that. 

Emily M. Bender: Yeah. But what's interesting to me here is if you took the AI out of it, then you would have an interesting story, right? If there was an Indigenous writer saying, "Look, I'm looking at contemporary Turtle Island, split up into the US and Canada and Mexico," and basically saying, "Where's a story where I can take someone who uses their ancestral Indigenous knowledge and improves something locally?" That sounds fantastic. But what's the point of having an AI in the loop for that? 

Alex Hanna: Yeah, and there's also like this thing about serving the needs of all residents, and then there are at least two Next Generation episodes that I think kind of replicate that. There's the one in which it's like the idyllic society and then Riker steps on the grass, and then they have to murder him. And it's sort of like, and you're like, "Oh, you're just, you're just a fascism. You know, like you're just a fascist society." And they're like, "No, no, no, but this is how we ensure that like everybody does, you know, like..." And I'm just like, okay, but there's an underside to that. Like what is this? And in this case, you're just replacing the fascist dictator with AI, right? And so you're just like, okay, well, what is this? There is not always this limitless upside. There's going to be some kind of other question of the worldbuilding. And if you don't imagine that, you're doing a disservice to your worldbuilding, right? 

Reo Eveleth: Yeah, and I think also just to go back to, you know, what we've been saying about the ways in which these scenarios take out the story of the thing. I think so much of a story, so much of a good story is the messiness of the characters, right? And sort of how they interact with each other. And this is a thing you see, you know, AI's the latest version, but sort of like, efficiency, obsession with efficiency. And this idea that, you know, these two examples, right? This town where AI quote unquote "equally serves the needs of all the residents." And then this sort of idea of Indigenous wisdom large language model, which I have many concerns and thoughts about. What they do, what the AI does, what it functions as in that, is to take away the people and to take away the conflict between the people. And in fact, in order to make a community that serves all residents there, it's hard, right? You have to balance conflicting needs. You have to come to compromise, you have to talk to each other, you have to understand each other.

Emily M. Bender: You have to build community. 

Reo Eveleth: And that takes a long time. You have to build community. You have to build trust. It takes a long time. It is messy. It is not perfect. Someone might always be a little bit upset. There's not, it's just a lot of really high friction stuff that is, that is the work of being a person and being in community, and it's the work I think that is very valuable and worth doing. But it is the work that is being completely excised by two letters, AI, in these stories. Similarly with, you know, Indigenous wisdom. Don't worry, we don't actually have to listen to Indigenous people. We can just listen to the AI that they trained somehow and that's gonna be the magical thing. And we don't have to grapple with, you know, decades of colonialism and genocide and all of the ways in which these people have been completely disempowered and abused and still are here and continue to be here and continue to envision real ways of moving forward in real community, but we don't wanna actually have to deal with that messiness, so we're just gonna stick an AI sticker on top of it and call it good. 

Emily M. Bender: So, so artificial magical Indian. Basically. 

Reo Eveleth: Yes.

Alex Hanna: Oh gosh. Yeah. There you go. And I think that's, and there's one thing that you said, Reo, that I thought was really striking too, and it's like, yeah, the messy part of that is the discourse. And it's not as if there are not efforts to build technology that kind of facilitate that discourse in really interesting ways. And maybe they're not, you know, and a lot of them are very, you know, lots of flaws. But I mean, I know there's tools like Polis, and some other kind of like discussion tools that get discussed. I know Erik Olin Wright talks a lot about different tech tools and envisioning real utopias, but it's, it's still about the people, right? It's about facilitating all these different kinds of needs and the way that you would have to kind of maximize participation. It doesn't really get rid of the messiness, but it tries to broaden it or tries to mediate it in a different way. 

Emily M. Bender: So I wanna take us to this paragraph in the middle of the screen right here. It says, "To further inspire these lines of thinking, the Future of Life Institute is in the process of producing a free, publicly available quote, 'worldbuilding' course to train participants in hope rather than doom when it comes to AI." It's like, train people to think about community and like... anyway, continuing. "And once a person has managed to escape the doom loop, Javorsky says, it can be difficult to know where to direct efforts at developing positive AI. To address this, the Institute is developing detailed scenario maps that suggest where different trajectories and decision points could lead this technology over the long run. The intention is to bring these scenarios to creative, artistic people who will then flesh out these stories, pursuing the crossover between technology and creativity, and providing AI developers with ideas about where different courses of action may take us." What jumped out at me is this describes what they did with that AI 2027 document.

Alex Hanna: Yeah. 

Emily M. Bender: I think, I think that came out of this. Cause it's the same people, right? 

Alex Hanna: I'm not quite sure the connection of the Future of Life people with the AI 2027 people. But there, I mean, it's likely that there's a lot of community overlap. But it's also I, this sentence is like, sending me. "And once a person has managed to escape the doom loop"? That's like you went to AA, which is like, maybe it's Existential Risk Anonymous, and you've just, you've left the doom loop, you're in doom loop recovery. And then you need like a recovery plan. What are we doing here? 

Reo Eveleth: It also makes me wonder, again, sort of to your point earlier, Emily, like, these people are living in this fictional world where there aren't tons of positive AI. Like people are constantly trying to sell AI all the time as like, this magical tool that's gonna help you in school, and help you do this, and it could make teachers better. And I feel like I'm constantly getting pitched, quote unquote, "positive AI." The idea that that's a, like, completely unexplored idea does not line up, let's say, with my reality. Because it is the, I think that is what, I mean, we have a president who is claiming that that's going to- in California, our governor is like, "AI is gonna solve traffic." Like people say this stuff all the time. So I'm not sure why it is that once, okay, you've escaped this doom loop. You emerge from, you know, the milky water and you're like, "What do I do now?" It's everywhere. It's everywhere. You're, you'll be fine. 

Emily M. Bender: Yeah, and abstract_tesseract in the chat says, "My 'not in an AI cult' t-shirt has a lot of people asking questions already answered by my shirt." 

Reo Eveleth: Yes, exactly. This piece reminds me a little bit of, my understanding of the history of Law and Order, the television show, is that the creator was like, people should be more, should admire cops more, and people should be like, really feeling better and more sort of like positively towards our criminal legal system. And so made a show to show police and lawyers and prosecutors in a positive light. And it has worked extremely well. It like, when you look at studies of, you know, the effect that Law and Order has had on people's visions of the police, it has worked. And it, this to me reads like, we need to show cops in a more positive light so people stop being mad when they murder us on the streets. We need to show AI in a better light so people can stop complaining when it does all of these bad things, like take away my healthcare, or make mistakes all, you know, all these things. So this is like the Law and Order for AI. 

Alex Hanna: Yeah. Artificial humanities as copaganda. 

Emily M. Bender: Yeah. Just sums it up. Yeah. I think also that there's, two audiences maybe that we might be keeping in mind. One is the people who are, you know, subjected to algorithmic decision making for healthcare, or whose, you know, creative work is being appropriated and then they're, you know, various commissions are drying up and so on. But then there's also the people who are like deep in the AI doomerist hole. And so I'm wondering, like a lot of this reads to me like those are the people they're talking to. Like they're not questioning that this thing is going to become super intelligent. We just have to imagine like the way to make the good version of that, right?

Alex Hanna: Yeah, yeah. It does give me that vibe, you know, like leading, I mean, it sounds like people who are like really deep in those, those Less Wrong mines, that like really need to be brought and, brought along and say, let's think about the good things, without really having, getting outside of that bubble and talking to a lot of people that are being subjected to so many of those decisions without their consent or knowledge in many cases.

Emily M. Bender: Yeah. And being treated as well, you know, this is inevitable. This is the future we're running towards. This is the future that's already written. Cause we can't tell the difference between speculative fiction and reality. And so you just have to put up with, you know, constant surveillance, et cetera, et cetera. Cause that's the price of moving into the future. 

Reo Eveleth: Right, your only choice is all knowing, all seeing AI for good or all knowing, all seeing AI for evil. Those are your only two options. 

Alex Hanna: Yeah, absolutely. All right, shall we go to hell? 

Emily M. Bender: We should. Are we, do you want musical or non-musical this time, Alex? 

Alex Hanna: I forgot what we did last time, so... 

Emily M. Bender: Last time was happy birthday and you didn't want it. 

Alex Hanna: Oh, okay. Well, I guess I have to do it this time. 

Emily M. Bender: All right. So, what's a genre that's a little bit frenetic? 

Alex Hanna: Free jazz. 

Emily M. Bender: Free jazz. Okay. So you are that centralized AI tasked with equitable distribution of resources, and you are just over it. All these people asking you for things, and so sort of like, a minute in the life of that centralized AI trying to deal with everyone's requests. 

Alex Hanna: I don't, I guess it's just, I mean, free jazz doesn't really have a lot of words in it, so... 

Emily M. Bender: Someone, magidin says ragtime.

Alex Hanna: Ragtime. Okay. Come on, come on my baby, come on my Charlie, come on, my rag... Ahh! I'm, I'm done! I'm done! I, sorry, this is not, I need a trombone to do ragtime. 

Emily M. Bender: Yeah. I'll help you out. But AI, AI, we need some more green over in the eastern territories.

Alex Hanna: Oh, I mean, yeah, maybe. I, I don't got it in me. I owe you two next time. Ragtime was hard. It was hard, and it really threw me for a loop. 

Emily M. Bender: All right. All right. Maybe we can do non-musical for a little while. 

Alex Hanna: Okay. 

Emily M. Bender: Yeah. So anyway, that was our transition to Fresh AI Hell. Let me grab the actual window. I was busy trying to keep the improv going. Okay, Alex, you get this first one here. Can you see it? 

Alex Hanna: Yeah, so this week in AI bubble-dom, the Wall Street Journal. This is in a subhead called the CIO Journal. So the title is, "Stop Worrying About AI's Return On Investment"! And it needs to be read like that. And so the subhead is, "Tech leaders at Wall Street Journal's Technology Council Summit said it's nearly impossible to measure the impact of AI on business productivity. And when we try, we're measuring it wrong." And so this is by Belle Lin and Steven Rosenbush, on September 16, 2025. And there's an image below of the CEO.

Emily M. Bender: CTO, yeah. 

Alex Hanna: Or, CTO of Duolingo, who famously got under fire when they said it was gonna replace a bunch of people with AI. 

Emily M. Bender: Yeah. So this is desperate bubble inflation, right?

Alex Hanna: Mm-hmm. 

Emily M. Bender: Okay. Next one. This is the Times. And this is an opinion piece by someone named Josie Cox, from September 10th, 2025. The headline, "The Most Radical Act Of Feminism? Using AI." 

Alex Hanna: Oh my gosh. 

Emily M. Bender: Subhead, "Women are far less likely to use AI tools like ChatGPT than men, but the tech is here to stay, and the disparity risks widening workplace inequalities." And then- 

Alex Hanna: Yeah, this just sends me up a wall, this article. Yeah, finish. Go ahead. You were gonna read some of this. 

Emily M. Bender: But I don't wanna read that much because it starts with, "'What's the most radical act of feminism?' I recently asked ChatGPT." And then, I'm not gonna read what came out of that, because I'm not interested. 

Reo Eveleth: Yeah, well, I mean, if I'm gonna be treated like an object, I might as well treat tech like an object. And that's feminism. So, you know! 

Alex Hanna: This is giving me, this has given me so much Sex and the City... 

Reo Eveleth: "and I asked myself..." 

Alex Hanna: "...and I asked ChatGPT." You know, if anybody wants to write an awful fanfic, please, I will read it. 

Emily M. Bender: I guess, I would read someone making up what ChatGPT said. I'm not gonna read the ChatGPT output.

Alex Hanna: No, let me be clear. I will read the fanfic of fake outputs. Yeah. 

Emily M. Bender: Yeah, for sure. All right. You get this one, Alex. 

Alex Hanna: So this is, so recently we covered that there was an Albanian minister that they had replaced with a chatbot. And so this is from Futurism, the title is, "Leader of Albania Pelted With Trash For Appointing AI-Powered Minister To Cabinet." And the quote is, "Some have called me unconstitutional because I am not a human being." So the author is Joe Wilkins, published September 20th, 2025. So there is a picture of kind of a chaotic, kind of a C-SPAN angle, and there's like a piece of garbage that's being lobbed at someone in this government looking room. Yeah, no one was happy about this. 

Emily M. Bender: And, and good for them. I'm really glad that people did not take that sitting down. Although I'm a little bit worried about what was actually thrown. "Pelted with garbage" sounds like it could be painful. 

Alex Hanna: It, I think it was just some, I watched the video, it was just some papers or something. 

Emily M. Bender: Oh, okay. All right. Not like cans or anything. 

Alex Hanna: Yeah. 

Emily M. Bender: All right. Far less levity here. This is a recent piece, September 17th, by Uchechukwu Ajuzleogu, should've practiced that name, in a publication I didn't know about called Aylgorith. And the headline is, "The Human Cost Of Every ChatGPT Query: Inside The Cobalt Mines That Power AI." And then the subhead, "Muntosh was six when his brother died in a cobalt mine. Today, his labor powers your ChatGPT queries." And the reason I wanted to bring this up is that we frequently talk about the power and therefore carbon footprint of this technology. And also, the water impacts. But there's another part of it, which is the rare earth minerals that are involved in creating the chips that are used to create the data centers and that everything runs on, you know, are mostly mined in high conflict regions, frequently with child labor. And so I was sad to read the story, but also happy to see that there's a reporter on the beat here. 

Alex Hanna: Yeah, there's, there's other kind of prior documentation. I have this book quite close to me, which is by Siddharth Kara, Cobalt Red: How the Blood of the Congo Powers Our Lives. And it's mostly about, I mean, a lot of it's about smartphones and tablets and laptops and EVs and so, I think- And there's a lot of the same kind of obfuscation happening with regards to AI supply chains in terms of, lots of organizations like Apple and Microsoft and Foxconn say they don't really know how much conflict minerals are in their supply chains, and so they don't really, but they also just don't really do a very thorough accounting of them. So this is truly making it much worse. 

Emily M. Bender: Yeah, definitely. All right, back to just ridiculous. 

Alex Hanna: Yeah. So this is a Twitter post by someone who is @slow_developer, and this is, says, "Sam Altman proposed a future AGI test: If a model like quote 'GPT-8' unquote solved quantum gravity and could explain the reasoning behind its discovery, would that qualify as AGI? David Deutsch agreed that it would qualify as AGI, making it a potential benchmark." And I'm like, what is, and there's a video of him talking, with like, and I'm just like, okay, what does that, what does that mean? Yeah. And abstract_tesseract is on a roll, says what we're all thinking, which is, "I just want one, parentheses, one journalist to look Sam Altman straight in the eye and say, 'What the fuck are you talking about?'" Yeah. It's just giving me the Breaking Bad meme, which is just, "What are you talking about, Jesse?" I really don't know what the hell you mean when you say things like that. It's really just, it's also, wasn't there also the Uber guy, Dara, who said something like, he's like, you know, "I was vibe physics-ing and you know, getting, getting so close to making a discovery." I'm like, no, you don't know what you're talking about. You have no way of evaluating that. 

Reo Eveleth: No. It's amazing. It's amazing. I, when I worked at Scientific American, one of my jobs was to sort of monitor our inbox, and the number of people who believe that they have in fact proved Einstein wrong, solved all of these, you know, physics problems is incredible. And I can only imagine it's gotten so much worse with all these AI, like large language models. 

Alex Hanna: It has. I mean, we get a lot of emails like that at DAIR, too, where people will email us and go, "I just figured this out. I just discovered this. I was playing around with ChatGPT, and it really, it told me I was on the cutting edge." I'm like, okay. That doesn't sound like you're doing anything worthwhile. 

Emily M. Bender: No. All right. But people who are doing things worthwhile are OK Go. I'm gonna play this. Hopefully the sound will come through. It's just a little clip. You'll see why I'm so excited. 

OK Go: Still no stochastic parrot has yet called on his nation to knock back bleach.

Emily M. Bender: When I, I found that because someone tagged me, or what, actually maybe even just said stochastic parrots, I think I eventually got tagged. And like, what an achievement unlocked to have my phrase like enter popular culture that way. And it's a great song, too. We really encourage everyone, the show notes will have a link to the full video. It's amazing. It involves algorithmically produced animation but not AI art. And they actually, the beginning and end sort of shows how they built the world that they're showing in the video. It's pretty cool. 

Alex Hanna: Nice. Cool. 

Emily M. Bender: Yeah. So, enjoy, and if you're like me, it'll get stuck in your head, but I'm not super sad about that. How much fun! All right, now I have to get to my right window again. Sorry, still excited about that song. 

Reo Eveleth: As you should be. 

Alex Hanna: Yeah! Huge. 

Emily M. Bender: So, that's it for this week. Reo Eveleth is a reporter, writer, amazing podcast guest, and creator of the hit independent show Flash Forward. Thanks again for joining us, Reo. 

Reo Eveleth: Thank you. I feel like we should get a medal for going this long and not saying Torment Nexus at any point during this entire podcast episode. 

High five! We did it.

Alex Hanna: Yeah, we did it.

Reo Eveleth: Thank you so much for having me. This was so fun. 

Alex Hanna: It was a pleasure. Thanks for joining us! Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways! Order The AI Con at thecon.ai or wherever you get your books, or request it at your local library. 

Emily M. Bender: But wait, there's more! Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at dair-institute.org. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender. 

Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all. 

Emily M. Bender: And steer clear of the Torment Nexus!
