Mystery AI Hype Theater 3000

There's No Ghost in the Machine (with Carmen Maria Machado), 2026.04.13

Emily M. Bender and Alex Hanna Episode 76



Why are some writers and publishers so excited to automate their work? Author Carmen Maria Machado joins Alex and Emily to unpack what writers are missing when they hand off their work to chatbots, and the underlying issues this reveals in the publishing industry. Plus, we resolve to keep fan fiction a human endeavor!

Carmen Maria Machado is the author of the bestselling memoir In the Dream House and the award-winning short story collection Her Body and Other Parties. Her essays, fiction, and criticism have appeared in the New Yorker, the New York Times, Granta, Vogue, This American Life, The Believer, Guernica, and elsewhere.

Find tickets to our April 30th live show here!

References:
- "I wrote a novel using AI. Writers must accept artificial intelligence."
- AFT shares tips for "Harnessing the Best of AI"

Fresh AI Hell:
- "With Teens Comfortable Confiding in AI, Should Schools Embrace It for Mental Health Care?"
- Longread on the use of automation by the British government
- "OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters"
- UN brief on "AI Deception"
- Reader's Digest cover on "Making Friends with AI"
- The only good "AI acceptance" policy

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Find our book The AI Con here, and MAIHT3k merch here.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.

Alex Hanna: Hi everyone! Just a reminder that we’ll be doing a live show in Brooklyn later this week. It’s this Thursday, April 30th at 6:30pm at Starr Bar. You can find the ticket link in the show notes, or just buy them at the door. See you soon! Now, let’s get to the episode.

Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of linguistics at the University of Washington. 

Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 76, which we're recording on April 13th, 2026. 

Emily M. Bender: Our guest this week is the amazing Carmen Maria Machado, author of the bestselling memoir In the Dream House and the award-winning short story collection Her Body and Other Parties. Her essays, fiction, and criticism have appeared in the New Yorker, the New York Times, Granta, Vogue, This American Life, The Believer, Guernica, and elsewhere. Welcome to the show, Carmen! 

Carmen Maria Machado: Thanks for having me! 

Alex Hanna: We're so excited to have you here with us, Carmen. We were on a panel together at the AWP conference last month about resisting automation in writing and teaching. So we're going to be continuing that conversation today. So, let's get into it.

Emily M. Bender: Here we go. And we've got, thanks to you sharing these terrible artifacts with us, a couple of really rich texts. So the first one is published in The Guardian under opinion, with the sticker AI, which they helpfully gloss as artificial intelligence. The author is Stephen- does he go by March or Marche? Do we know?

Carmen Maria Machado: I am actually not sure.

Alex Hanna: I don't know either. 

Emily M. Bender: All right. Stephen M. From Thursday, April 2nd, 2026. So we know it's not an April Fool's joke. And the headline is, "I wrote a novel using AI. Writers must accept artificial intelligence, but we are as valuable as ever."

Alex Hanna: Yeah. And we should say at the outset, Stephen reviewed our book for the New York Times in a really awful piece, and I think he also reviewed the Yudkowsky book, like in the same article. It was rough. But the subhead here is, "Mastery of banal style is losing its usefulness, but language is more powerful than ever. It's up to the writer to do what machines can't." 

Carmen Maria Machado: I'm so sorry to bring this to you. I felt bad. I actually really didn't want to, 'cause I was like, ugh, this is horrible.

Emily M. Bender: We'll get there. It's cathartic. It's worth doing the read here. So diving in, he writes, "I recently heard an exchange at a playground that should worry the executives at AI companies more than any analyst prediction of a bubble. A boy and a girl, maybe 10 years old, were fighting. 'That's AI! That's AI!' the girl was shouting. What she meant was that the boy was indulging in a new and particular breed of nonsense, language that sounds meaningful, but has no connection to reality. The children have figured out the new world quickly, as they do. Artificial intelligence is here to stay, neither as an apocalypse, nor as a solution to all life's problems, but as a disruptive tool. The recent scandal over Shy Girl, the novel by Mia Ballard, was doubly revealing. Hachette canceled its publication amid claims it was reliant on AI generation. Ballard has said that an acquaintance who edited the self-published version used AI, not her. But the book was originally self-published. Apparently, readers and editors didn't mind until the use of AI was pointed out to them." Thoughts so far? 

Carmen Maria Machado: So many. Where do I even start? The Shy Girl controversy has been very interesting to me because there are writers who have written work using AI that they have explicitly admitted to. This case has a lot of, or this situation has a lot of weird sort of details in it. One of which is, the author has denied using AI, but thinks that maybe a friend who edited it, I believe, used AI. If you read it, even just the opening pages are so clearly AI, it's hard to imagine that an editor did not notice, or a reader wouldn't notice. I also feel like there's something in here about labor practices in the publishing industry, because it is not surprising to me that an editor would not be able to read anything very closely, because unfortunately, publishing is an industry that really runs its employees ragged. I know from personal experience and from close friends of mine. And so I think there's something really fascinating here about AI that there's this huge mess. Clearly there's some, there's AI being used, or of course people are also questioning the tools that were used to analyze whether or not it's AI. Even though I would say the text very clearly seems like AI. And what's so odd about this is like, Stephen, that first paragraph feels like he's about to write a different thing. 'Cause it's really landing on the fact that there's this nonsense thing that has no meaning, it sounds meaningful, but has no connection to reality. He's almost like, it feels like he's writing a different kind of piece. And then sort of going into the Shy Girl thing, and then he sort of leaves it, right? Like he's only telling us about that for that one paragraph. I don't know. There's something in here about where it's like, there's actually all these issues about labor and things as it connects to AI. But we're not gonna touch on any of that really, in this piece. 
It's just kind of an interesting- the way he touches on it, I'm assuming he kind of had to, 'cause it had just happened, and felt like the big publishing AI scandal. But it is really interesting to see how he touches on it, ignores a lot of the major issues, and then kinda just keeps going.

Emily M. Bender: Yeah, so he leaves the playground anecdote, but our Twitch stream audience has not. So abstract_tesseract says, "Now imagining a series of new AI themed playground insults." And we've got "Your mama's AI," "I know you are, but what am I? I know you are, but AI," and "Your mama is so AI that..." Oh, "X and Y sitting in a tree, U-S-I-N-G AI."

Alex Hanna: These are very good. I was trying to think of your mom... 

Carmen Maria Machado: Wait, your mom is so AI, she has six fingers? Like, what is the end of that insult? 

Alex Hanna: Let me think. So, your mama is so fake, OpenAI acquired her for $10 billion. I dunno. Come up with better ones in the comments.

Emily M. Bender: Your mama's so fake, she requires a whole data center to run. 

Alex Hanna: Oh, that's good. 

Carmen Maria Machado: Okay. All right. I really hope that catches on. 

Emily M. Bender: Oh, and magidin in the chat, "Your mama is so AI that she wants to please everyone." 

Carmen Maria Machado: Oh my god. 

Alex Hanna: Geez. Okay. 

Carmen Maria Machado: That's actually, that's really good. And that like, really hurts me. Like that breaks my heart into a thousand pieces. I'm like, oh! 

Alex Hanna: I know. There's so many ways to, I was like, oh, my mother's a sweetheart. But I know that you're saying this in another register. Okay, why don't we move on. 

Emily M. Bender: I have thoughts about this next paragraph. 

Alex Hanna: Oh, I knew you were gonna. I read the first sentence of this and I was like, Emily's gonna lose it on this. So, "The fact that machines can generate meaning- 

Emily M. Bender: No, they can't. 

Alex Hanna: Yeah, thank you. "-in the first place is an existential curiosity." Which is like, there's so many things in that sentence that made me wanna flip a table. "But for writers, and for young writers in particular, AI has a more practical significance. A recent survey found that 86% of college students use AI regularly, which means that 14% are lying to survey takers." 

Carmen Maria Machado: What a, sorry, what a sentence. That also really knocked me on my butt. I was like, what? 

Alex Hanna: I know. What a piece of work.

Carmen Maria Machado: And what a deeply cynical way to talk about the college students who are holding on for dear life. Like outside of the AI. 

Alex Hanna: I know, yeah. I hope this man is not teaching undergrads, anyways- no, he is writing terrible op-eds. "The ordinary business of quotidian language- writing student essays, emails, memos, and all the granular sentence by sentence work that once trained writers in their craft- is dissolving. Mastery of style, the laborious gift of the skilled writer, is being automated."

Emily M. Bender: There's so many better takes to do on that survey. Starting with like, why do students feel like they should be turning in these more polished looking drafts instead of their own writing? I'm gonna move up so we don't have to watch that animation. But that's not where he's going. It's like, this gift has been automated. And also, they absolutely cannot generate meaning! 

Carmen Maria Machado: It reminds me of, there's a quote that I feel has been going around a lot in the last couple weeks. It's from Ted Chiang, and he basically says in essence, what AI takes away from us is that we create meaning. And when we write to other people, it's like, we create meaning, they apprehend our meaning. And this is just how communication happens. And it creates this facsimile, right? That's not doing either of those things. And it does feel like he just takes for granted in here that they are generating meaning. Which I feel like for me, feels like the biggest lie that I'm constantly having to explain to people in my day-to-day life, is that's not what it's doing at all. 

Emily M. Bender: Not at all. And not, I think that you and I probably use different notions of meaning on a day-to-day basis, but for me, from a linguistic point of view, that's the argument I've been having since 2019. That language models are modeling just the form, they are systems to mimic the way we use language, but that's not the same thing as meaning. And they don't have access to the meaning. We make sense of the output of them, sometimes we try and fail because it's just too anodyne, that they can't generate meaning. And he just slides right past that. And then, as abstract_tesseract was pointing out, also just slides right past that survey and doesn't hold any space for students who are refusing. 

Carmen Maria Machado: Yeah. I wanna talk to them, like, I'm so curious, right? What if you are one of 14% of people in your social circles or at your university or in your cohort that's not using AI? I wanna know, I'm actually so curious, I'm like, who's writing that piece? I wanna meet those students. I wanna hear what they have to say. 

Alex Hanna: Yeah, absolutely. And talking to a lot of students, it's really interesting too, because you're hearing it, there's a pretty big network effect where students feel compelled to use the technology because they see other people using it. And then if they are holding out, what is the thing that's compelling them to? And a lot of it's here, a lot of them are like, the university's not leading on this. The university's basically saying on one hand we're going to be held suspect because of intellectual dishonesty or academic integrity. But there's just so much of this where there's this unholy alliance between universities and OpenAI where they're like, oh, we're gonna get a lot of users out of this. Let's just market to them constantly, relentlessly. 

Emily M. Bender: And also AFT and OpenAI, but we'll get to that. All right. So, "How are writers to live with meaning generators? How should writers use AI?" Not at all. "My perspective is slightly different from others, mainly because I began using AI before ChatGPT. My first algorithmically generated story appeared in Wired in 2017. I published the first AI generated novel reviewed in the New York Times, Death of an Author, in 2023. Currently a generative text box I designed, An Infinite Prayer for Peace, is showing at the Bildmuseet Gallery in Sweden." That's not prayer- 

Alex Hanna: What in the what? 

Emily M. Bender: "It uses AI to articulate a different prayer every minute. It is a new kind of linguistic act, possible only through transformer based artificial intelligence." I refuse to admit that as a linguistic act. Is the Magic 8 Ball a linguistic act? 

Alex Hanna: Yeah. And the specificity here about the transformer based artificial intelligence. As if like, a hidden Markov model or a Magic 8 Ball or a random thing. It's so, what a weird construction here. Ugh. 

Emily M. Bender: Yeah. And it's not praying if there's nobody feeling the prayer as they do it. That's not prayer. I'm not even religious. 

Carmen Maria Machado: It cheapens it in such a bizarre way that I actually am like, I almost wonder if he understands, I'm like, do you understand your own...? It's such a strange tell, I guess, like the way he's telling on himself in that paragraph. And I've read parts of Death of an Author, it was quite bad. So I dunno. 

Alex Hanna: I remember when we were, it's mentioned in his sign off on the review of our book where it said that he used not one, not two, but three different chatbots to actually generate it. Like, why did you need three? It's very funny. 

Carmen Maria Machado: I do have a thought about this that maybe is a little bit like, I don't know, woo or something. But I feel like there is something so interesting about art that I think of as- And art as it distinguishes it from content or commercial work, like work that has a different function, which is that like, in my practice, it's like my ghost is touching your ghost, or that's what I'm aiming for. That there's this, again, this is like, meaning touching meaning. And it isn't that there aren't weird games- actually it's funny, 'cause during the AWP panel, we talked about Vauhini Vara's essay Ghosts, which I really love and I think is an interesting experiment with this technology. And also, I don't know how you, I know you have feelings about it and thoughts about it. But I think it's interesting and it's like, yeah, okay, so maybe there are weird games you could play with it, you could push it into a corner until it yields something kind of interesting, which obviously involves the author, like it involves the writer. But the idea that that equals like, we should be having this technology literally everywhere. We should learn how to put it in literally every- it's like, we've written books with Ouija boards. That doesn't mean that Ouija boards should be the way that we, you know, run things or put it in every- So it's like a weird formal game that you might play. And I'm like, I will give you that, if you wanna do this. I think a prayer, AI generated prayer feels so self-defeating that I don't really understand it. But I'm also like, oh yeah, sure, okay. Maybe you could write something kind of metafictional or like really push something into a corner to make it interesting. But the idea that this is how art should be made or that artists should accept this is just like, part of their practice is just so disconnected from what we're doing here. I just don't understand. 

Emily M. Bender: Didn't you hear, the Gates Foundation is gonna put a Ouija board on every desk in every classroom? 

Carmen Maria Machado: Right? Yeah! It's like, let's just ask the ghosts, like we might as well, right? Oh, you just have a Ouija board. You can just do therapy that way. Like, why are you bothering with an AI? 

Alex Hanna: I love this. Now I want to read a fic that's just that, the Ouija boards in every classroom in Africa. And then we've done these randomized control trials where there were teachers teaching their students how to use Ouija boards. And I wanna remark on this too, because I think it's so interesting on what Vauhini's essay Ghosts, Ghosts is interesting. And I think the way you put it in the AWP panel, I thought was interesting. Basically the wrestling of the tool to basically, to move it and show the limitations of it. And what was interesting about Ghosts is effectively, it is the story about her sister's death, and the way it rewrites it to these kind of saccharine- and she was using GPT-3- rewrites these stories through very saccharine endings. And it's really interesting too, because in Searches- which, Emily had the pleasure of being an interlocutor for Searches when she was touring with it. And one of the things that she says in Searches that's very interesting is, OpenAI approached me and was like, do you wanna be an ambassador for the product? And she's like, you missed the fucking point. The point was that this is, it is not your ghost touching someone else's, or your candle touching someone else's. It's like, you're missing what the purpose of interacting with the technology is. 

Emily M. Bender: Yeah. So I just wanna lift up something from the chat here, in answer to your call there, Carmen. This username always defeats me. Zubenelgenubi17, maybe, writes, "Current PhD student here. I avoid using synthetic text for multiple reasons, including the following. One, I think I'll get the most out of my studies if I do my work myself. And two, it feels like an interesting experiment to see how my experience compares versus those who do choose to use it." So there's one answer. 

Carmen Maria Machado: I mean, at some point there is gonna be, we are gonna be able to visibly see the difference between people who went through their academic journey using AI and those who didn't. And the problem is that it's an experiment being run in real time on students. And on the general public, which feels insane to me. I'm like, I didn't ask to be part of this experiment. I don't know why I am. But it'll be, I think it's gonna be maybe in a decade we're gonna actually start to be able to really see the damage that it's done in the academic space.

Emily M. Bender: Yeah. And magidin says, "This is an experiment with no IRB approval, to boot."

Alex Hanna: Oh, that's the only kind the tech industry likes. So continuing, "There seem to be two options facing writers." I don't know why only two. "The first is to not use AI at all, or to pretend not to use it. The other is to automate their writing practice. The first is retrograde and fearful." Ugh. "The second forgets that art is a human practice made by people, for people. As becomes obvious when you actually try to use AI to make art, this is a false binary. Already, a few paths through the slop are emerging."

Emily M. Bender: I hate this call to fear. And it's such an undermining move to say, oh, you seem to be afraid of AI. I'm not afraid. I'm angry. 

Carmen Maria Machado: Yeah. And also, I love writing. I love what I do. Like, I don't know how he feels about his work, but I actually really love the work. The work is the point. It's not like making words appear on a page, it's like, the work that's happening in my mind. It just feels like he- I actually am kind of curious what his relationship with art is in general. 'Cause I'm just like, that feels like a very anti-intellectual perspective that also does not reveal any deep understanding of his own practice or the practice of artists.

Alex Hanna: Yeah. I suppose if your main job or your main gig is just outputting op-eds, just as fast a clip as possible. I mean it's, sure, okay, just generally as much content as possible. But I think that kind of, yeah, thinking about writing versus content. Content for the content mills.

Emily M. Bender: Yeah. This next paragraph starts with a false statement. So the header is good. "Do not underestimate your value." I think I can agree with him on that. He says, "The inventors of the transformer, the T in ChatGPT, and the architecture by which all generative AI works, believed, against the grain of research at the time, that language was the key to abstraction." I'm gonna finish the paragraph, then I'll come back and say why that's not true. "Language, rather than images or mathematics. They were more right than they could ever have imagined. At the core of the new magic is language. Language is now power. The revenge of the humanities is now fully on. The new cliche among tech lords is the need for taste in the artificially intelligent future. How do you think you develop taste? By reading, by writing, by being trained in reading and writing." All right, so the transformer architecture comes out of work on natural language processing. I think it was specifically work on machine translation, but I would need to double check that. So those are people working on language technology. They're working on a way to more closely model the shape of the text that were the input so that they would get better language models and therefore better machine translation systems. This jump to, this is going to lead to intelligence, that's OpenAI and the people in that crowd who got taken in by the ELIZA effect, and not actually the people who invented the transformer. 

Alex Hanna: Yeah. It's also not historically accurate to say that language was a problem that they had not been working on. Even prior to machine translation, if you think about language and the kind of language generation, that goes back to the sixties and seventies. And prior to that. 

Emily M. Bender: But machine translation was actually their, like it was the earliest non-numerical use of computing. There's not really anything prior to machine translation in computing, unless you wanna go back before we have actual computers. 

Alex Hanna: Yeah, but even thinking about, I'm thinking about the Data and its Discontents paper that we wrote, and the common task framework. Anyways, we're getting deep in the weeds. I do wanna talk about this thing about the taste thing, which I think is like, what is the taste? Have you seen how they dress? This is not about taste, like there is no taste involved. And it makes me think a little bit about that weird thing, we talked about this on a podcast, but it was about that person that had been writing about, the AI agents that was like, they all have different flavors. Like, Claude generates text that looks like this, and Gemini is more, it's more short with you and it's more direct. And it's just as if there's a certain kind of authorial type of- it's just, it was such a weird way, it's the same way that they talk about research, AI in science. It's like, there's gonna be styles to science, or it's as if researchers are the ones that are directing the research program and then that's the only thing that they're doing, as if it's not a social endeavor, as if it's not an interactive endeavor. It's so bizarre. 

Carmen Maria Machado: I also, the idea that you could develop taste or style, so something that's highly specific to you, the idea that you can develop that by either using AI or reading, I guess, what AI produces. Just, it's like, then you don't understand what taste or style are. You actually have no, you have an abstract idea of what it is. It's not, it doesn't make any sense.

Alex Hanna: Yeah. I also love this comment by abstract_tesseract, who says, "Marche's interpretation of 'Do not underestimate your value' is like that Rudolph the red nosed reindeer meme, 'Deviation from the norm will be punished unless it's exploitable.'"

Emily M. Bender: Yeah. All right. So, "Researchers in Italy discovered that they could use poetry to jailbreak the large language models into giving them instructions on how to build a nuclear bomb. This is more than a metaphor. Revel in it." Are we reveling? 

Alex Hanna: I'm not reveling. I'm less revelry. 

Emily M. Bender: Yeah. All right. Are we gonna do this whole thing? This is long. 

Alex Hanna: This is long. Is there anything you wanna touch on, Carmen?

Carmen Maria Machado: I actually had a question for you, because I feel like you understand... Like, the chess example that he gives, is that the same kind of AI?

Alex Hanna: No, it is definitely not. We write about this a little bit in the book, basically the way that chess becomes this metaphor for intelligence and that basically chess algorithms have changed, chess playing algorithms have changed. So what is in IBM's Deep Blue is different from what Google had and their particular type of learning algorithms. And there's basically, I'm gonna do a terrible job of explaining this, but basically from my understanding, chess you can kind of brute force via searching and going down different pathways. Whereas you could also not have to brute force this, you can also do it by doing some more of this pattern matching. And that's basically the argument DeepMind was making about Go, playing Go is much harder to brute force 'cause there's just way too many pathways. And so they have to use a different, and then they had to do this thing where it played itself. So yeah, it's a completely different, this is a thing where the term AI is papering over many different types of algorithms. 

Emily M. Bender: Yeah. And among other things, if you're thinking about what the training data is, for anything that's primarily linguistic, the training data is just what word came next? What word came next? For chess and Go, training data is a bunch of games, so what move came next? But also what was the result of choosing that move? And because it is a closed system with definitive answers, it's an entirely different problem. So abstract_tesseract has one more playground taunt here. "Your mom is so AI that she needs to hire people to fine tune her taste and style." 

Alex Hanna: Damn. That's rough. You're just saying my mama needs a stylist, lord. 

Emily M. Bender: That's good. That's really good. 

Alex Hanna: Wait, here's another premise for a dystopian story, but like, new LLMs that, hire stylists the same way that like- oh, what was that brand that was hot for a minute? They hired online stylists that were just basically really low paid Stitch Fix. So if any VCs are on this call, plug your ears up, but Stitch Fix, but for LLMs. 

Emily M. Bender: All right, so Ozzy, our producer, is pointing us to this little bit here. "Thinking, creating, understanding. These cannot be replaced, certainly not by artificial intelligence. Trust me, I've tried." 

Carmen Maria Machado: I feel like, I don't know this man. I have never met him. But I'm just like, what, is, why are you alive? What is the purpose of being alive to you? Because thinking, creating, and understanding are three things that I would literally never try to replace with anything. It would never even occur to me to replace any of those things in my life with this thing that literally by definition cannot generate meaning, this nonsense machine. So I just, I'm worried about him, actually. I'm like, what? 

Alex Hanna: Yeah. Are you okay, bro? 

Carmen Maria Machado: Yeah. Literally. Are you okay? That's an admission to something really disconnected, like, that he's completely disconnected from even the most minor pleasure of the human existence. And that makes me feel like this whole thing is just so dark, and in ways he does not intend. 

Emily M. Bender: And the trust me thing in there is really weird too, because this is really good evidence to never trust him.

Alex Hanna: Yeah. And this is giving me that meme that we riffed off of a while ago, the thing about the, I did this. Do this and then give me three bullet points: have fun with my kids and then write three bullet points, fuck my boyfriend and then give me a report with three bullet points. So yeah, it's giving that energy. 

Emily M. Bender: Yeah. So we should do the last paragraph, 'cause he does come back to the playground kids. "The kids I overheard on the playground knew the difference between language that sounds meaningful and language that is meaningful. Do you? Does the literary community? Two roads diverge into a sloppy wood." Isn't it "in"? Why "into"- okay, anyway. "One goes through what machines can do. The other goes through what only people can do. To write now is to wage war against cliche as usual, just this time with the AI and against it." Thoughts, Carmen? 

Carmen Maria Machado: I'm like, I believe that he uses AI in his work, because this just means nothing. This whole piece is just, it's weird 'cause there are individual pieces you could pull out that are arguments against AI, and then he just brings it back. And it's so bizarre and rhetorically messy, that I'm like, this doesn't actually feel like a person put it together. Also, the idea of the literary community, I am actually really convinced, I do feel like AI has already kind of broken, I dunno if it's called the literary community, but the publishing community, let's say. I do feel like there's more commercial work. 'Cause there's these interviews with people who are making like, 200 AI novels a year and publishing them en masse, and making pretty good money. So the self-publishing world is being just flooded with this AI stuff. And that is inevitable. And I think there's nothing you can do to stop that, 'cause that is just sort of how it is. And there's really no one- and really the role of publishers and editors has become even more important. But the idea that like, yeah, I don't know, editors should be paid enough to do the work they need to do. And I think this just like, plucking these clearly AI generated books out of the self published- it just feels like, is this the path we actually wanna take in our industry? Is this actually what we wanna do? Because this is gonna happen again, this Shy Girl thing, I have no doubt. Because if that's the practice, if it's just like, we're gonna scoop up a book, not read it super closely, and just push it through, this is gonna happen again. And I guess the publishing industry or individual publishers or houses have to ask themselves, what do we want our role to be in this new world? Do we wanna be like, printing slop, killing trees to print garbage? Or do we take our work more seriously than that? And do we pay our employees enough to take that question more seriously? 

Alex Hanna: And then, and before Emily, I know you're looking at the chat 'cause you're laughing, but I wanna get into some of the- because I also want to riff off the chat, which says funny things about "sloppy wood," because I'm like, great drag name. But yeah, you're so right. Also because the New York Times reporting on this, which we didn't get into, but we were looking at beforehand, is also talking about the way that the publishing industry is being pretty equivocal about it. They're not saying "No generative AI." 'Cause they're like, maybe we wanna use this for publicity and marketing and indexing and making covers and doing what an industry wants, which is making precarious roles even more precarious and trying to use them to automate away people who are seen as expendable. And there's so much there, and you're absolutely right. It's gonna happen again, and yeah, what's the publishing industry gonna do about it? 

Carmen Maria Machado: I was gonna say also, like, I have also had students come to me because magazines that they have sold work to have tried to put AI stuff in the contract. So this is, and it is happening at certain publishers, and people really are pushing back, and it's really this tug of war. And I know that there are a lot of publishing sort of trade groups and things that are kind of on this question. Like my agent is part of, in his, I don't even know what it's called, like the agent group that he belongs to. It's like, there was sort of an anti-AI task force. So it is being worked on. But I do think that this is gonna be just, writers are just gonna be having to push back and really be careful about what they're signing. Also, I just did all of my literary estate stuff. 'Cause I was thinking about my own death, so I was like, I might as well figure that part out. And I actually had to write a whole section in my thing being like, none of my work is to be used- like I had to really outline it, so that after I die, that isn't a risk. So that's also something that people should be thinking about, is also the legacy of their work. 

Emily M. Bender: We got it into the contract for our book and also the copyright notice that it can't be used to train GenAI, and that was a fight. It was a big fight with the publisher. But we got there. All right, can we do the chat? So, Zubenelgenubi17 starts off with, "Two roads diverge into a sloppy wood, by Robert Fraud-st." 

Alex Hanna: Yes. Then there's, possumrabbi says, "Hundred acre sloppy wood by Winnie the Poop." magidin, "The word for world is sloppy forest, unpublished novel by AI Le Guin." 

Emily M. Bender: And then possumrabbi, "Over the river and through the sloppy woods to grandmother's house we vibe code." 

Alex Hanna: Yeah. There's lots of things. 

Emily M. Bender: Yeah. Should we get to our second artifact for a few minutes?

Alex Hanna: Let's do it. And then, if there is a drag queen who wants to show up to our live show named Sloppy Wood. Maybe she'll make a special appearance after the intermission. I don't know. 

Emily M. Bender: That would be amazing. 

Alex Hanna: Maybe she's an alter ego of mine. Who knows? All right. Let's get into this. 

Emily M. Bender: All right, so this is from AFT, and the date is just spring 2026. And the headline is "Harnessing the Best of AI." Subhead, "Three educators share tips for saving time and boosting creativity with artificial intelligence." And it's by Elisa Leonard, Cal Siebenmark and Louis Venagro, those are the three educators. And it's got this really appalling illustration. Alex, do you wanna try to describe what we're looking at here? 

Alex Hanna: Yeah, to be clear, this is an article and so this is the American Federation of Teachers, American Educator, which is their quarterly magazine or whatever. And so there is an image, and there's no attribution to it, so it seems possible that it was extruded. And it's drawn and there's a teacher that's standing that's femme and has brown hair and has a cardigan on and smiling. And then there's two wire frame dudes that look actually like they have luchador masks on, and they look kind of mad, and they're writing-

Emily M. Bender: They're writing exactly the same thing as each other.

Alex Hanna: Yeah, and they look like they're maybe just staring at a student in the classroom, like taking cop notes on the student or something? I have no idea. 

Carmen Maria Machado: It does feel like AI, because I'm like, this is not how I would choose to illustrate a story that was pro AI. This actually feels so threatening and weird.

Alex Hanna: I know, right?

Emily M. Bender: And possumrabbi in the chat, "Sigh, tag yourself. I'm the terrible posture of the wire frame figures." 

Alex Hanna: I am the hand that's like, tentatively in the front, like, what the fuck? 

Emily M. Bender: Yeah, I'm the partially erased whiteboard in the background. 

Alex Hanna: Oh, great. 

Emily M. Bender: All right. Actually, that's a really fun way to generate image descriptions. Tag yourself. Okay. There's an intro here by the editors, and then most of the body of the text is the teachers themselves. The first couple lines are, "Whether you find artificial intelligence exciting or frightening, its widespread use, including among students, means we all need to learn how to interact with it responsibly." No, there's other reactions we could have. "How can we harness AI as a tool to enrich learning? Reduce teachers' workloads? Facilitate communication with families? Answering these questions is probably best done by jumping in, trying out AI to see what it can do for you." Excuse me. This is the teachers' union? And if the goal here is to reduce teachers' workloads, there's better things they could be doing than telling people to jump in and play with AI. 

Alex Hanna: Yeah. And this experimenting, it's like, why do they have to experiment? Isn't this something that they- they've got other shit to do.

Emily M. Bender: So the teachers are a kindergarten teacher, this is Elisa. Cal is a fourth and fifth grade special education teacher. And Louis is a math teacher, having taught for 30 years. So do we have particular bits of this that we wanna get to?

Carmen Maria Machado: There were two points that really stood out to me. One was on- sorry, I printed it. It's on the second page. It's Cal saying, "I have a colleague who's not very good with technology." Oh yeah, there we go. "But he has bloomed with ChatGPT. He became interested in having it generate passages for students to read, and then started adding topics that they like, like dinosaurs and Power Rangers. Now these AI passages are a reward in the classroom. When kids complete their work, they can ask for a personalized passage. One child asks for a story about playing soccer with Argentine star Lionel Messi. I think this is great. Kids are doing extra reading as a reward." 

Alex Hanna: This is also the thing I was gonna rage about, yes. 

Carmen Maria Machado: Yeah, it's so funny, because so often I read things like this where I'm like, you are adjacent to an interesting exercise with this. Because you're like, okay, I wanna read stuff to the students that they wanna hear, and then you want them to integrate things that they're interested in. And it's like, okay, so there's different things you can do, right? You can look for passages in existing work written by people that deal with any of these subjects. You also can have the students create their own stories and put in whatever they want. Like, literally, this is the exercise. And it's just so weird to be staring down what to me seems like a fairly straightforward way of thinking about this, and just using, just pushing it all off to a ChatGPT. I'm like, what are these students learning to, they're learning to read work that means nothing and then push a button and get this weird extruded thing that they can open their mouth and swallow. Like it's just, it's vile. I just, I don't understand. 

Emily M. Bender: Yeah. And are these teachers actually even skimming the thing before they give it to the kids? Do they have one of the really big context windows that's gonna let it eventually go do something really horrific that you don't wanna put in front of a kid? There's a reason, as you were saying before, editors do real work. And teachers in deciding what to share with their students do real work. It's, yeah. 

Alex Hanna: Yeah. And it's just, it's like a content version of a story, or a content model, that is now being used in a classroom setting. Rather than, let's say, do the work of curating good pieces or good books, age appropriate books for your kids, in cooperation with librarians. 

Carmen Maria Machado: The work that was read to me by teachers as a child was like the most, it was one of my favorite things to do as a kid. And it was such an introduction to so many beautiful books that really shaped my education. And the idea that that would just be some new AI generated passage that literally comes from nowhere. So you're just taking that entire experience away from a child? Like that doesn't... 

Emily M. Bender: Yeah. And also suggesting that kids can't find books about things they're interested in? 

Carmen Maria Machado: Like dinosaurs! Yeah, there's a lack of dinosaur books for children? I had no idea. 

Alex Hanna: I did not think we were bereft of dinosaur books- or even Power Rangers. Also, thx_it_has_pockets is great, the comment in the chat is, "When I was a kid, we called this fanfic and wrote it ourselves. Lol!" 

Carmen Maria Machado: Yes. I mean, truly. And if you're like, you wanna put Power Rangers and dinosaurs together, there is a way to do that that involves the human mind that is beautiful and creative. And for a lot of, I don't know, I teach a lot of grad students, and I asked my most recent class how many of them wrote fanfic as young people, and almost all of them raised their hand. So like, it is a path into writing, and into the creative arts. And it's like, the idea of that just being automated, it just feels really ridiculous to me. 

Alex Hanna: I know. And there's so much stuff like, ugh, there's- I have read my nibling's terrible Wattpad fic, it's just, kids need to do that. That's incredible. Like, I'm so happy. 

Emily M. Bender: Also riffing on the personalized reading thing too, I'm thinking about how these systems are so focused on the hegemonic view of the world. And how important it is for kids to connect with literature where they see themselves. And so every minute these kids are spending reading this slop, they're again losing that kind of an opportunity. So I guess one other thing to talk about in here, they do things like "using it to refine the wording I used on individualized education programs." 

Alex Hanna: Oh gosh. 

Emily M. Bender: And later on, when they talk about what are you concerned about? It's like, well, I'm concerned about privacy, so I'm very careful not to put the kids' names in. I'm like, you are still feeding lots of data to these corporations that you should not be. 

Alex Hanna: But that's also really frightening too, because I think, I don't know every detail about IEPs, but IEPs have to, geez, like what a, there's so many things that would go wrong. 'Cause an IEP is used to guide so much of what students- 

Carmen Maria Machado: If I found out that AI was used to write my child's IEP, I would, I don't even know what I would do. Because it's, then what is the point of any of this? Like, why are we doing it? It's just, ugh. 

Alex Hanna: And the thing that's also frustrating about this, and this is something that pisses me off about the way that AFT has been talking about it, is that they use all the wishy-washy words about like, responsible and guardrails and the language is indistinguishable from the same types of words OpenAI uses or Anthropic uses. And it's, you cannot tell the difference about we're doing this responsibly. And I'm like, no, there's not a responsible way to do this. The responsible position is not using it. And the, are you looking at the- 

Emily M. Bender: I was trying- they don't name, but they do talk about AFT's AI Academy, which is joint with OpenAI. 

Alex Hanna: Yeah. There's also one piece I wanted to mention here, which is- what is it? There's a thing here- oh, this is also terrible, a bathroom pass app? Okay, sorry. 

Emily M. Bender: Oh, vibe coded. Yeah.

Carmen Maria Machado: The bathroom pass should be a weird, large, unwieldy object. Just like god intended. 

Alex Hanna: Exactly. 

Emily M. Bender: And the app was also collecting data of who was using the bathroom when. 

Alex Hanna: Oh my gosh. Yeah, lord. And then this one thing, so Louis says, "One concern I'm beginning to think about more is that students may start to optimize their writing to please AI instead of writing for a human reader. To mitigate that, I am trying to instruct the Gems-" which I guess is the name of the tool- "to give objective rubric based feedback without altering the student's voice, tone, or style. I want AI to support their thinking and not reshape their writing."

Emily M. Bender: Okay. That's, so when you talk about having a whole AI academy, and yet these people are missing the very basic thing that you can't just tell the chatbot, "Be truthful, don't hallucinate." It doesn't work that way. Like they, the academy's doing nothing but just promoting this tech, obviously. 

Alex Hanna: Yeah. Good comments in the chat, before we move onto Hell. So possumrabbi says, "Using AI to write an IEP is actually a fairly clear cut violation of section 504 and IDEA rules, because they have to be customized to the child. Past court cases have held even copy pasting from a kid with similar needs is a nope." So like, if that's true, then, pretty big deal. Also BoxoMcFoxo- I think first time attendee, hello- says, "Guardrails are not real. Trust me, I'm a furry. I've broken them all." So, god bless our cybersecurity furries. I am saluting the troops. 

Emily M. Bender: And BoxoMcFoxo also says, "Individualized extruded plan." Pretty good. All right, so Carmen, any final thoughts on this before we go on to Fresh AI Hell? 

Carmen Maria Machado: It's just like, what is the point of the union? I guess I find that the actual, like the AFT's just seemingly uncritical adoption of this, just to be- I know I keep saying it's dark, but it really is. And I'm like, the idea of, yeah, we're giving teachers instructions like using it for IEP, which apparently is illegal, or we are using it to reduce their workloads instead of like, actually reducing their workloads. Like we're just gonna bring this technology on. It just, it feels like such a missed opportunity. And I feel like in the last, I feel like the missed opportunities of COVID, for example, where we kind of learned how to do things differently and a little better, and then we just walked back the old way. I feel like it's the same thing here, where it's, yeah, what is the point of any of this? Why don't you just wanna give teachers the room and space they need to be good teachers, and why push this technology on them?

Alex Hanna: Yeah, a hundred percent.

Emily M. Bender: All right, before I give you your prompt, Alex, I want to share what abstract_tesseract said. "Today's episode is brought to you by the letters A and O, and by the number 3." 

Alex Hanna: Oh, yes. God bless. Another set of troops I am saluting today. Our AO3 Warriors. 

Emily M. Bender: All right. And I think it was also abstract_tesseract who said, "Gotta keep up the KPIs, which was the key prayer indicators." And so that's where I'm taking the inspiration here. Alex, you are, musical or not as you like, sending up a prayer and trying to get it through all the extruded bullshit to reach the deity of your choice. 

Alex Hanna: What? Okay. I'm trying now, I'm trying to rewrite Like A Prayer on the fly. Geez, let me think. All right. Come back to me. I will have something, and then it will be something that maybe Sloppy Woods records or performs or lip syncs to at the live show. 

Emily M. Bender: All right, excellent. So, rain check on that. 

Alex Hanna: I am writing a check my drag ass cannot cash. 

Emily M. Bender: All right. So into Fresh AI Hell. I think, Alex, you get to start us off with this one. 

Alex Hanna: So this is from a publication called Ed Surge, which sounds like a failed beverage from the nineties. And the sticker is Artificial Intelligence. The title is "With Teens Comfortable Confiding in AI, Should Schools Embrace It for Mental Health Care?" By Daniel Mollenkamp, March 3rd, 2026. And the answer is no, just no. 

Emily M. Bender: No, yeah. Betteridge's Law. And there's, just reading through this a little bit, so "Phillips' district has used Alongside, an automated student monitoring system, for three years. It's an example of a growing category of tools," blah, blah, blah. It's more surveillance tech on top of everything else. All right. Keeping us moving quickly. This is a really nice piece in the New Statesman by Will Dunn. Sticker is special report. The date is April 9th, 2026, and the headline is "The Silent Coup: How AI Captured Westminster." And this is a long read, it's long and detailed about all of the ways in which the UK government is just all in on AI. And it starts with this horrific anecdote. So there was an article in the April 20th, 2025 Financial Times with the headline "UAE set to use AI to write laws in world first." And British officials were like, "We were tempted to say, we got there first," but then decided that they didn't need to fight for "the crown of first AI written line of legislation."

Alex Hanna: Good lord. 

Emily M. Bender: Yeah. So worth a read. 

Alex Hanna: Yeah. Next one. This is from Wired, from April 9th by Maxwell Zeff. And the title is "OpenAI backs bill that would limit liability for AI enabled mass deaths or financial disasters." And the lede, "The ChatGPT maker testified in favor of an Illinois bill that would limit when AI labs can be held liable, even in cases where their products cause quote 'critical harm.'" So this is to some degree, this is a little bit of the inverse of SB 1047 we had in California, which was the critical harm bill, which would've had liability. And it was weird 'cause there was a lot of debate around it, but it was mostly written around existential risk, rather than any kind of things like hiring, use of ADSs in hiring, or in housing, or anything else that is already very critical. 

Emily M. Bender: I sure hope that the Illinois lawmakers see right through this, but we will have to follow the story. Speaking of policymakers who have been taken in, here's a document from the United Nations Secretary General's Scientific Advisory Board on "AI Deception." It is a brief of the scientific advisory board on that topic out of something called Horizon Scanning 2026, published on March 19th of this year. And I don't think we have to get into the details, but AI deception is this idea that the synthetic text extruding machines are going to combust into consciousness and try to deceive us. And the fact that the UN is spending any time on this is alarming. From the UN to the Midwest. 

Alex Hanna: This is, yeah, as a Midwesterner. This is from Melanie Dusseau, associate professor of English. This is- there's a picture of a Reader's Digest, and the title is "Making Friends with AI, Smart Steps and Safeguards." And it's got like, a trad wife blonde woman with a denim colored dress, looking at a mirror of herself. 

Emily M. Bender: Sitting on a swing with that mirror, too. 

Alex Hanna: I'm not sure, yeah, like they're on a swing. And the other self is like ethereal, and bathed in weird, spirit light. It's very bizarre. And so Melanie says, "From the Midwest wilds of America, behold the cover of Reader's Digest at the grocery checkout. April 3rd, 2026. I'm not making up this headline." And then repeats the headline. And then there's other things. Anyways, very bizarre from Reader's Digest.

Carmen Maria Machado: I just, I used to love Reader's Digest as a kid, and I'm like, et tu, Reader's Digest? What is happening? I feel, so I'm like, this is just so... 

Alex Hanna: and I simply must read this comment by abstract_tesseract, which is "Pounded in the butt by an ethereal vision of what my chatbot avatar would look like, by Chuck Tingle." God bless. 

Carmen Maria Machado: Yes!

Emily M. Bender: All right, so we like to end with something either uplifting, or good pushback. And this is from- oh man, I stuck myself with this name. 

Alex Hanna: You got it. This is the most Dutch name I've ever read. 

Emily M. Bender: Yeah. I'm gonna go with, Joep Schuurkes maybe. And it's a blog, and the headline is, "On the acceptance of GenAI," from April 5th, 2026. And it says, "By using GenAI," and there's a bunch of check boxes. "I accept the models were trained on stolen data. I accept that the data was labeled by exploited workers. I accept the environmental costs of the data centers running these models. I accept that I'm outsourcing some of my skills to a company. I accept these companies don't have a viable business model. I accept that I'm granting more power to big tech and their vision for the world. I accept that I'm granting more power to the United States. I accept that all this effort could have been spent elsewhere." And then, he continues, "If any of these facts are new to you, you haven't been doing your due diligence. I have the deepest respect and sympathy for people who are forced by circumstances to accept these things, and I acknowledge that people's ethical behavior is neither deterministic nor consistent. But yeah, GenAI radicalized me." And this seems like the most acceptable GenAI policy. Oh, and I have a correction from the chat. Roughly, this is not IPA, but Joep Schuurkes. Thanks possumrabbi for helping me with that name. 

Alex Hanna: Okay. Fantastic. That's it for this week! Carmen Maria Machado is the author of the bestselling memoir In The Dream House and the award-winning short story collection Her Body And Other Parties. Thanks again so much for joining us. 

Carmen Maria Machado: Thanks for having me! That was real dark, but thank you for having me regardless. 

Emily M. Bender: You were a light in the darkness, and we appreciate it. Thank you so much, Carmen. 

Carmen Maria Machado: Thank you so much. All right, bye! 

Alex Hanna: Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order The AI Con at thecon.ai or wherever you get your books, or request it at your local library. 

Emily M. Bender: But wait, there's more! Rate and review us on your podcast app, subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti hype analysis, or donate to DAIR at dair-institute.org. You can find our merch store there too. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender. 

Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.