The Vocal Fries

The AI Con

The Vocal Fries Episode 139

[intro music]

Carrie Gillon: Hi, welcome to the Vocal Fries Podcast, the podcast about linguistic discrimination.

Megan Figueroa: I'm Megan Figueroa. 

Carrie: And I'm Carrie Gillon.

Megan: Wow, it's May already. Can you believe it?

Carrie: Barely. I can barely believe that it's May. But yes, it's May.

Megan: I know. We're here. 

Carrie: We're here.

Megan: We're upright. Congratulations to Canada, though, on the election.

Carrie: Congratulations to Canada and Australia.

Megan: Oh yeah. 

Carrie: They had very similar results. Their shitty right-wing leader also lost their seat, or lost his seat.

Megan: Nice. That is beautiful. Thanks to Trump a little bit, right?

Carrie: Not a little bit.

Megan: A lot? 

Carrie: Basically entirely. In both elections, the party—the conservative, they're called something else, they're called the Liberal Party in Australia—but the Conservative Party in both countries were going to win. They were going to have a huge majority government. They were ready to win. And then Trump, Trumped. And here we are.

Megan: Yeah. So he endorsed both? He was out for both of them? 

Carrie: Not really. He was pretty cagey. It was kind of weird. He was acting as though like, "Oh, I didn't really know Poilievre," and he was like, "Oh, good luck to Carney." I don't [crosstalk].

Megan: Weird. 

Carrie: Yeah, well, he just was trying to not give the game away, but everybody knew. We all knew that Poilievre and Trump were aligned, that Poilievre's party would do DOGE here—like a Canadian DOGE. We knew. So it's like, "No, thank you. We don't want any part of that." I don't think Australia had this piece, but also the fact that the "51st state" jokes that were not really jokes pissed so many of us off.

Megan: Was voter turnout better than normal? 

Carrie: It was a little bit higher than last time. It was only about 69%, though, nowhere near the peak, which was in the '50s—around 78%. Australia has mandatory voting.

Megan: Yeah, that's nice.

Carrie: It's a nice low [crosstalk].

Megan:  Some good news. Exactly.

Carrie: I am very sad that the NDP no longer has party status because they don't have enough seats. But I also am not surprised, because I don't think that the NDP ran a very good campaign. And also Canadians were scared that if the Liberals didn't get enough seats, the Conservatives would form the government.

Megan: So some NDP supporters were voting Liberal this time.

Carrie: Yeah. And in some places that split the vote. If they had stuck to NDP, there would be NDP seats instead of Conservative seats. So they actually hurt themselves by switching in some places. Not all places, but in some places. Anyway, let's shift gears and destroy this good mood.

Megan: Okay. All right. What are we talking about here? 

Carrie: Later, our episode is about AI, which always makes me angry. So that's also a downer. But this was also sent to us by Diego Diaz, and we definitely should be talking about it. Did you hear about the UK ruling about the definition of a woman?

Megan: No. Is this something that J.K. Rowling is happy about? I'm sure.

Carrie: Oh my God, you should see the response photo she posted. She's got, I don't know, a snifter of whiskey in one hand and a cigar in the other. And she looks evil. She's like, "He-he-he-he what?"

Megan: She is evil. 

Carrie: Well, I know. But is that what you want to project? It's a weird photo to post. 

Megan: That is really strange. 

Carrie: If it was just her celebrating with a bunch of women or something, it would still look bad.
You wouldn't immediately read it as an evil photo. This looks like an evil photo. I don't know.
It's really hard to explain. Anyway, so, in mid-April, the UK ruled that the legal definition of a woman under the equality law excludes transgender women.

Megan: Fuck that. 

Carrie: And the case was a challenge to the Scottish government, which had held that the legal definition also applied to transgender women who hold a gender recognition certificate.

Megan: Oh, there’s something called a gender recognition certificate in Scotland?

Carrie: I guess.

Megan: Okay. All right.

Carrie: I think it's just like, "Well, you've gone through the steps and you are now recognized as a woman," because if you want to change the gender on your passport or something, you have to go through steps. So I assume it's the same kind of process.

Megan: Okay. It's so terrible that trans people have to go through all these extra steps just to live their full lives.

Carrie:  Yeah. And not even that. They can't even do that anymore, at least in the UK. It doesn't matter.

Megan: Right. And I'm sure soon here. 

Carrie: Oh yeah. The US is, if things don't drastically change soon. But I'm really extra upset because Scotland, that's my ancestors' homeland, and they were on the right path.

Megan: So UK did it because they were following suit?

Carrie: The UK includes Scotland. 

Megan: Oh, yeah.

Carrie: The United Kingdom includes four countries. So it's a country with four countries inside of it. 

Megan: Okay. Scotland, Ireland.

Carrie: Nope.

Megan: Nope?  Wait, so Scotland is its own country?

Carrie: Not really. It's a country within a country. The UK is the country like, internationally.

Megan: That's complicated.

Carrie: Yeah, it is complicated. So Scotland, Northern Ireland, Wales [crosstalk].

Megan: Oh, I knew that. Okay. Right. The Troubles?

Carrie: Yes.

Megan: Oh, see, I'm one of those Americans who's bad at international history.

Carrie: It is upsetting. And I'm really sorry to all trans people everywhere, but particularly those living in the UK. I'm sure life has just gotten a bit rougher, and it's already [inaudible] rough.

Megan: Yeah, that's an understatement. That's awful. Fuck J.K. Rowling.

Carrie: I know. And it's so bizarre to me that she went down this path. Like, "What? What was she doing?" I'm not saying that there was nothing in there to begin with. If you read her books, you can see all kinds of bigotry in them. It still didn't feel like it was her life's passion.

Megan: Her life's passion to be a bigot.

Carrie: Yeah, to be a bigot and to try to destroy trans women in particular. I don't know when it started. The early 2010s-ish? Not sure.

Megan: Oh my god. Fuck that.

Carrie: Fuck that. Fuck Rowling. Don't watch the new TV show. Just don't.

Megan: Yeah, don't support that. 

Carrie: Just don't. Make your life easier and just try to support other things. I recognize there's no ethical consumption under capitalism. Everything is going to be somewhat problematic, but this is just so problematic. Just pick different entertainment. There's a ton out there.

Megan: There's so much.

Carrie: Anyway, AI.

Megan: AI!

Carrie: Just makes me angrier every day. 

Megan: I know. I keep reading more and more. I feel like it's never-ending, the shit that is coming out about ChatGPT, about workers' rights—just so much. 

Carrie: Yeah. And how it doesn't actually work. And people treat it like it works. 

Megan: Bottom line.

Carrie: I guess it works in some small way. But people are treating it like it works in these grand ways that it does not. So if you just wanted to create a sentence, yes, it can do that for you. That's it.

Megan: Yeah, it's not going to have empathy for you because it does not understand.

Carrie: Well, definitely not that. No. Anyways. I hope everyone enjoys.

Megan: Yeah, enjoy.

Carrie: See you next time.

Megan: We'll see you next time. 

[music]

Carrie: This month, we would like to thank our newest supporter, Lori Burrhow.

Megan: Thank you so much. We really appreciate you.

Carrie: Yeah, absolutely. And if anyone would like to join Lori and become a supporter of 'The Vocal Fries', you can do so at patreon.com/vocalfriespod. 

[music]

Megan: We are so excited today to have Dr. Emily M. Bender, who is a professor of linguistics at the University of Washington, where she's also the faculty director of the Computational Linguistics Master of Science program, and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural 'TIME100' list of the most influential people in AI. She's frequently consulted by policymakers—from municipal officials to the federal government to the United Nations—for insight into how to understand so-called AI technologies. And Dr. Alex Hanna, who is the Director of Research at the Distributed AI Research Institute and a lecturer in the School of Information at the University of California, Berkeley. She's an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert, and she has been featured across the media, including articles in 'The Washington Post', 'Financial Times', 'The Atlantic', and 'TIME'. They are the co-authors of the book 'The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.' Welcome back, Emily. And nice to meet you, Alex.

Carrie: Yeah. Welcome.

Emily M. Bender: Thank you for having us on the show.

Alex Hanna: Great to meet you. 

Megan: It's so exciting. We got advance reader copies of this book, and it's fantastic.

Carrie: It's really good. 

Megan: Oh my gosh, I had so many pages of notes, and I was at the coffee shop reading, being like, "Oh my god, yes."

Emily: Thank you.

Alex: Thank you so much.

Emily: And I have to say, I know this is audio only, but I love the cover design that the publisher did for us because there's no question what the book is about.

Alex: Yes.

Megan: It's true. Yes, it's a great cover.

Alex: There's a nice story behind the cover design too, where the UK designer used a manual letterpress to do it. So it's this nice hearkening back to another type of printing technology; it's like an Easter egg. So—love that.

Carrie: That's really cool.

Megan: Yeah, very cool. So we always start with the same question for people who are coming on talking about books: why did you want to write this book, and why now?

Emily: So we started kicking around the idea of a book in the course of doing our podcast. So our podcast is 'Mystery AI Hype Theater 3000', where we do what Alex has dubbed "ridicule as praxis." So we find absolutely ridiculous hype artifacts and just go after them—sometimes just the two of us, sometimes with guests who bring in the expertise if we're talking about areas that are outside of our expertise. And it was feeling like, "Okay, this is having some impact, but it would be useful to get it out to a bigger audience, in a maybe more refined package." Our podcast is very seat-of-our-pants. And then as for 'why now'—gosh, when we were writing it—we finished the manuscript in September of '24. And it was like, "It would really be good to have this on the shelves already."

Megan: Yeah, I know. When it comes to AI, it's fast-moving, right?

Alex: It's both fast-moving and it's not fast-moving. The thing that I think gets played up is that the technology is just rushing ahead. That's true to some degree, yes, insofar as the models are becoming bigger and bigger. But it's not that the methodology has had any breakthroughs. There's diminishing returns on scaling. But the political economy of it is fascinating right now, especially as we're getting to a position where the bubble is quite big. Still, there's not this huge return on investment that all the AI hypers have promised. So, we were trying to get it out because we're like, "Oh, maybe the bubble's going to pop," but I feel like it's coming out right on time.

Carrie: Yeah, I would agree. With everything that's happening in the U.S. right now, it just feels like this is the perfect time for this book to come out. Maybe it'll help pop the bubble. Maybe. 

Alex: In 2024, I was saying it would be a better world if our book is not needed in 2025. Unfortunately, it still seems to be needed.

Carrie: Oh my god. 100%. I get so frustrated because I'll meet some new person at my workplace. Eventually, somehow, they'll start talking about how they use ChatGPT. And I'm like, "How lazy are you that you need this for an email? I'm sorry."

Megan: Yeah, I told Carrie this, but I just went to get my hair cut, and I was talking to my stylist. She was just like, "I just got back from Jupiter, Florida." She's telling me all about Jupiter, Florida. And I'm just like, "Wait, can we back up? Why did you go to Jupiter, Florida? I'm just curious." She's like, "Oh, I asked ChatGPT where I should go."

Alex: What?

Megan: Yes.

Alex: I've never even heard of Jupiter, Florida.

Megan: I know.

Carrie: It's a tiny little seaside town.

Megan: I don't know what she put into ChatGPT, but it told her that the place she should go on vacation is Jupiter, Florida.

Emily: Imagine a really fun service where the tiny little towns could have their commerce department put in, "Here's a little ad," and then you go to it. You could be like, "Okay, I want tourist destination roulette," and just have something randomly come up. That would be neat. The Chamber of Commerce of each little town would have a specific say about what they're putting in. And if you want to pick a random place to go, great. Leave ChatGPT and the glacier melting out of it.

Megan: Yeah, exactly.

Carrie: I agree.

Alex: Totally. That's so random. 

Emily: And this is a pattern we see over and over again. Like, anytime someone is using ChatGPT, there is either a better way to do the thing they're doing or some unmet need that we should be thinking about and pulling back and saying, "Okay, why are there not enough resources that this person feels like this is the only thing?"

Carrie: Right. That's true. And also, I guess I should be fair. Writing emails can be stressful in the workplace. I'm so used to it now. I'm like, "Just do it."

Emily: It also feels really rude to the recipient of the email.

Alex: Totally.

Carrie: 100%. If I knew that they had written using an LLM, I would have been furious, personally.

Emily: We often get people saying, "Well, how's it different from using a spell checker?" Spell checkers are about taking what you intend to say and making sure that it's conforming to orthographic conventions. There is certainly some sociolinguistic stuff that goes into spell checkers, like what gets codified as the standard convention is a real thing. But it's nowhere near, "Take these bullet points and make an email," or, "How do I respond to this?" And then you have whole sentences that you interpret as ideas, like, "Oh, that sounds good," but that's no longer you.

Carrie: What are we talking about when we talk about AI and also AGI? 

Alex: Usually, AI is a marketing term, and it's always been a marketing term. The origins of the term go back to the Dartmouth conference from 1956, in which John McCarthy and Marvin Minsky were just trying to find a word that was far enough away from "cybernetics" not to piss off Norbert Wiener, because no one wanted to deal with him. And then they wanted to come up with a different thing around thinking machines. And so now AI has come to mean so many different things. It's come to mean classification systems, facial recognition systems, automatic speech recognition, machine translation, recommendation systems. And so all that gets bundled into AI. And so when we talk about AI, there's not a coherent technology. But then it all gets rounded up into generative AI, LLMs, and diffusion models. So it seems like we're talking about generative AI when we're doing that, but with all the branding, it's "AI inside." We did this episode where there was one of the Fresh AI Hell things with this 90s aesthetic, and there was a sticker, and it was packed full of AI agents. And it's like, "What the hell?"

Emily: "Fortified with essential AI" or something?

Megan: Oh, no.

Alex: It was something like, as Emily has said, "pixie dust" that's just sprinkled on it like Salt Bae. So it seems like there's something you're talking about with generative AI, but it's not even clear, because the term has become so overloaded. AGI is supposed to be artificial general intelligence. It is this science fiction notion of this singular brain. But even that is pretty contested. It really depends on what you're talking about. I was thinking about this recently because Emily and I are writing an explainer on this. There's a soft version of this that Sam Altman has put out, where OpenAI has effectively said, "We're going to know AGI when we see it." But it's in their charter. And the actual thing that they say is, if there are sufficient capabilities to do something that a normal human could do—I will look at the actual definition that they have. Do you have it up, Emily?

Emily: I don't have it up, but I remember that it's keyed also to economically relevant activities by people.

Alex: That's right.

Emily: Then there was this hilarious leak; I think it was some negotiations they had with Microsoft, where they defined AGI as a system that would make them $100 billion.

Carrie: What?

Alex: So then it gets keyed to the economics of it, even though they have this element that's supposed to be about these intrinsic capabilities of this whatever, x, y, z. Then you have the hard AGI, which has people like Eliezer Yudkowsky and the rationalists and these people who are very far afield—who are very ideological hardliners—where they think AGI is like a singular brain that's going to have its intense feelings and "drives", which is really fucking out there. That's the one that's like, "Oh, you're going to have this thing that has intense behaviors and internal thoughts that's capable of deception." That's why we need to "align it to certain kinds of human values". But then, if AI is a bullshit term, AGI is a bullshit-squared term. It's just this absolute thing that's out there, that gets thrown around, and it's the hotness that certain people need to have some ascription to—like Zuckerberg. It's like, "Yeah, we're also working on AGI at Meta." I don't know what that means, but we'll know it when we see it.

Megan: We'll know it when we see it. 

Carrie: Oh my god, it's like porn.

Alex: Exactly, it's porn.

Megan: And so you say in your book, "Extraordinary claims require extraordinary evidence." This plays in right now with this—with AGI—right?

Emily: Yeah, absolutely. A lot of the reason that people have a leg to stand on in claiming that AI or AGI is right around the corner is the chatbots. Because the chatbots are set up to output convincing-looking text. They're designed to use first-person pronouns, which is a design choice. And one that I think is unconscionable, because there is no 'I', and there is no point of view. There is no conversation partner. But these things are set up that way, and because we are going to instinctively and reflexively interpret language—as we always do—by imagining a mind behind the text, it feels like there's a reasoning machine there. And so, because of that, it's very easy to say, "Look, it can solve math Olympiad problems, and it can do a psychotherapy session, and all of these things." It's important to remember that those are, in fact, extraordinary claims. And just gesturing at something that looks like the right form is never going to be sufficient evidence. You really have to show it. First of all, sometimes people say to me, "Okay, Emily, what evidence would convince you that ChatGPT is sentient, or reasoning, or whatever?" And I say, "That's actually not my job. I don't have to come up with the test. I certainly can say you have not provided a convincing test." Also, I'm not interested in those questions. I'm interested in: how is this technology being used, and to what extent is it an appropriate match for the tasks that we're putting it to? And in most cases, the answer is: not at all appropriate, but unfortunately it's being used a lot.

Carrie: Unfortunately. Yeah. What is AI hype, and who benefits from it?

Alex: AI hype is this kind of notion: we have this definition in the book, but it's this idea. I'm very excited, there's a group of folks who are starting a hype studies thing within science and technology studies, which is very interesting. I wish we could go to this conference, but it's conflicting with a few of our commitments. We define hype as this idea that it's this thing you need to get on board with, whether you're a consumer or a product manager or an executive or a teacher or a student; otherwise, you're going to be left behind. You're going to be branded a Luddite, everyone's going to think you're backward, and then you're not going to be supercharged for the future. AI hype in particular is the focus on hype around this particular technology. I think the defining feature of AI hype is that there's such an idea that AI, as a tool of automation, is going to make your job so easy that if you don't use it, you're going to be left behind. As a student, you're not going to have access to the same knowledge networks. A good example of this: we were reviewing this artifact on our podcast. Emily introduced the podcast earlier, and it was easier to write this book because we had so much material from the podcast. And so there was an example in this artifact that we were reading called 'AI 2027', in which these rationalists—very hardline AI people—were writing this science fiction. It was very bad science fiction.

Emily: It was not fun to read.

Alex: I had a great time. It was like watching a train wreck. I kept on thinking about it because it was like watching something crash in slow motion—but nobody died, which is great. That's how I love my train crashes. 

Carrie: Same.

Alex: And so one of the things that they had in there was, AI has gotten so good at science that the researchers—the very smart researchers at OpenBrain—don't do anything all day. At worst, they get in the way of the "AI researchers" (I'm making scare quotes), and at best, they're maybe directing them correctly, like what stuff to research. This is the notion: you need to get on board; it's going to do most of your work. And so there's this adage that's popular in this world, which is like, "AI is not going to take your job, but someone using AI will." And we've twisted that and flipped it and said, "AI is not going to take your job—it's going to make your job shittier."

Megan: Yes. I wrote down that quote. I was like, "Yes."

Alex: And so the idea here is, this AI hype is this aggrandizement of this technology, especially the idea that it's going to be the best thing since sliced bread. It's this general-purpose technology, like electricity or the steam engine. You're going to be riding your horse and buggy, whereas everybody else is going to be driving cars. You're going to be lighting candles, whereas everybody else is going to be using light bulbs. That's a hype claim that surely hasn't borne out.

Emily: And a key part of hype is FOMO. It's this idea that you got to get on board because otherwise you're going to miss out. We were doing an interview for a radio station a couple of days ago—it's been a busy couple of weeks—and the radio host was asking us, "What about these AI incubators that are being set up? Should we have FOMO?" I said, "Yeah, we should be fearing missing out on time together with people."

Carrie: Yeah. I just keep thinking of monorail.

Alex: Monorail.

Emily: Monorail in Seattle is a bit of a sore spot.

Carrie: Oh yeah. I'm so sorry. I like that monorail, though.

Emily: No, it's lovely. The reason it's a sore spot is that we had this decades-long Seattle process trying to extend the monorail, and then it got voted down over and over and over again because it's not actually a very practical transit system. We finally got light rail, but it was delayed by decades because of all the monorail stuff.

Alex: Sorry. 

Megan: Oh boy.

Alex: 'The Simpsons' is very relevant in this discussion of technology. We could probably talk about this entire book with 'Simpsons' quotes.

Carrie: For sure. Do it. 

Emily: Newsletter post.

Alex: Yes, exactly. Once we're out of all these interviews. I was checking our schedule, and we have like five or six interviews next week.

Megan: Oh no.

Emily: So far.

Carrie: Thank you for adding us to the list.

Megan: We appreciate it.

Alex: No, thank you for having us. It's a pleasure.

Emily: No, we're excited for the book to be out in the world and to get to reach multiple different audiences. So it is a good problem to have.

Carrie: Yes, absolutely. You've already answered this, but maybe you want to expand upon it a little bit: Why are so many people convinced that AI is sentient?

Emily: That's the linguistic parlor trick, I think. And this is where I think it's really interesting to talk about. There's a lot of stuff in the book where we're leaning on Alex's expertise in sociology and thinking about how this is impacting the world. And for these questions of why is it so convincing? Well, linguistics is where it's at. Linguistics, as you—and I'm sure a lot of your audience—know, is the field that studies how language works and how people work with language. And both of those things are really key to understanding what's going on with the chatbots. I have a project where we're working on de-anthropomorphizing the language we use to talk about these things. And the term 'chatbot' lands in our category of verbs of communication: the "chat" in "chatbot." And so my co-authors asked me, "How would you de-anthropomorphize 'chatbot'?" What I came up with was 'conversation simulator', which I think is a pretty accurate description of what these things are. And we can add that, Alex, to our collection of 'mathy maths' and 'spicy autocomplete' and 'text-extruding machine'. So, one of the things that comes from linguistics is understanding languages as systems of signs. So you have the form, and you have the meaning, and the meaning might be the dictionary, concrete meaning, or it might be social meaning associated with things. But in all cases, we've got that pairing of form and meaning, and it is sometimes iconic, but mostly not. Which is to say, the meaning is not 'in' the form.

We can only access the meaning from the form because of two things. One is that we've mastered the linguistic system that the form comes from. And secondly, we bring all of our social skills to bear. So when we are understanding language—and here I'm leaning on work like Herbert Clark's 'Using Language', and this wonderful, strange paper by Reddy, 1979, on 'The Conduit Metaphor', which is just a fabulous paper. I can send you the link for the show notes, if you want. I have yet to work out who the intended audience of that paper is, but it's a very good paper.

Alex: That's fascinating. I just love that claim. Who is this for? I love it.

Emily: Basically, the conduit metaphor that Reddy is documenting suggests—and we talk about it in English all the time—so: 'I can't get my message across to you,' or 'you aren't coming through clearly,' or 'I struggle to put this in words.' And the idea is that, according to the conduit metaphor, we take our ideas, we pack them into words, we send the words across—through the air or over the ether or whatever—and then the other person takes the words and they unpack our meaning from the words. And Reddy's point—and this matches what Herbert Clark says and others—is that that's actually not how it works. When we're understanding somebody, we are taking everything we know about them and the world and our common ground, and then using the linguistic signal as a particularly rich clue to what it is they might be trying to communicate to us. And so when you look at the output of ChatGPT, you're looking at something that, first of all, was only trained on the form side of the form-meaning pairing. It's not manipulating meaning at all—but it is a very good model of the form. And so out comes this thing, and we reflexively can't help ourselves. We see it, we interpret it, and we do that by imagining some mind behind that text that has some intentions and accountability for what's being said. It is really hard to separate from that and say, "I made sense of the words by imagining that mind—and that was all me." Too many computer scientists who are doing work on language processing know no linguistics, and so they're not exposed to any of this. And so you've got a lot of people who really ought to know better.

I'm thinking of Geoff Hinton here. He actually has a degree, I think, in cognitive science. He really ought to know better. But he is fully bought into these things being—if not sentient—then on the pathway to sentience.

Carrie: So scary that people who are otherwise quite intelligent can be tricked by smoke and mirrors.

Megan: Yeah. I think you both saw this on Bluesky this week. It was Kevin Roose of 'The New York Times', right?

Alex: Yeah, geez.

Megan: Should we start taking the welfare of AI seriously? 

Alex: There was this argument, basically, about "should we care about AI's fee-fees." And it's actually quite funny. I skimmed this article, but it's like a lot of the people—Roose, I don't think, has ever been super critical of AI. But there have been others, like Billy Perrigo, who's a journalist for 'Time', who wrote very critically on AI and all the data labor that goes into training, on labeling and doing content moderation, especially with people in Kenya earning less than $2 a day. But he's been fully taken in by this particular startup called Anthropic that was founded by a brother-sister combo, Dario and Daniela Amodei. They used to work at OpenAI. Then they "defected" and started their own language model startup called Anthropic. These are like the humanistic-style AGI believers. They're like, "Well, we really have to defend humanity. These things might have feelings." But even then, the people involved were like, "Yeah, this might happen," which is already granting way too much credence. But Kevin Roose is like, "No, this thing, it was trying to hit on me."

Emily: Oh, that's right.

Alex: He had this conversation where he was like, "This thing was trying to get me to leave my wife." That must have affected him. It's just way too much. He posted a follow-up, and weirdly, the way the Bluesky app works is that if someone links something to you, sometimes it just shows up as your stock page. And it's this post that he posted two days ago, which says, "My new column is drawing rave reviews: 'go fuck yourself,' 'die loser,' and more. Take a look and add yours to the pile." He's also writing a book about AGI.

Emily: Oh, no.

Alex: He just looked at the space. It's called the AGI Chronicles. Who's the publisher here? Boo this man. Another Simpsons reference. Boo this man. And so, speaking of this, Roose is probably the person with one of the largest stages coming from traditional journalism. There are a lot of people within tech journalism who have just been way too credulous with all this bullshit. Roose is probably the worst offender. And Kara Swisher isn't much better. She's bad about the tech industry, and she was like, "Well, I just recently found out Elon Musk was a eugenicist." I'm like, "The signs were there." And then Billy Perrigo and a few others. There are also many good journalists doing this; we should shout them out.

Emily: I need to shout out Karen Hao, who's amazing and also has a book coming out in May, I think, called 'Empire of AI,' which I haven't gotten to read yet, but I'm super excited to. She's been doing some deep reporting on OpenAI. She's amazing. Her work is really good. We like Khari Johnson. The whole crew at 404 Media is amazing. There is good journalism in this space, but there's also a lot that's just, as Alex says, super credulous. And I think it's important for people who are following these stories to learn how to evaluate a piece of journalism. Is this person just serving as a mouthpiece for the company they're reporting on, or are they holding power to account? One of the tricks that they'll do is they will ask someone from the other side, so they get both sides in. But oftentimes in these puff pieces, it's written effectively from the point of view of the company that they're profiling. And then there'll be a critic's quote in a little corner buried down in the article. That's better than not doing that, but you're still looking at very credulous journalism if it's shaped that way.

Carrie: Absolutely. I also have to wonder, Kevin Roose, what was he doing to get those responses? Is it revealing something about his psychology? If I was his wife, I'd be a little bit worried, personally. 

Emily: That thing where he got the system to tell him to leave his wife, he entered into this long interactive fiction and lost track of the fact that that's what he was doing. 

Alex: I don't recommend reading it. He published the whole interview, but he had been very flattered by the tool. Let's just say that much.

Emily: The tools are set up, by the way, to flatter. So at the heart of them is a language model. The language model is a model of the distribution of word forms in text, and these are trained on basically next-word prediction. You give the system a bunch of text, and then at each step it's like, "Okay, what word is likely to come next?" And the system makes its calculations. If those calculations are off, the weights inside the model are adjusted. So that's the very heart of it. But then there's this other layer of training that's called reinforcement learning from human feedback, where companies contract with a lot of workers around the world, or sometimes get people to do this on a volunteer basis, to basically either thumbs-up or thumbs-down an output, or choose between two outputs. That allows the system developers to further adjust the weights towards responses that people like, where "people" are represented by these data workers. And so they are very much set up to be people pleasers. That's probably part of where that flattery was coming from.
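
[Editor's note: below is a minimal toy sketch, in Python, of the two training signals described above: next-word prediction, where the weight for the word that actually came next gets nudged up, and reinforcement learning from human feedback, where a rater's thumbs-up or thumbs-down on a whole output nudges the weights further toward responses people like. It is an illustration only, not any real company's system; the corpus, numbers, and function names are made up.]

# Toy illustration of the two training signals described above.
# Not a real LLM: a tiny next-word model over a handful of words.
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# 1) Next-word prediction: for every word, nudge up the weight of the word
#    that actually followed it in the training text.
weights = defaultdict(lambda: defaultdict(float))
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1.0

def generate(prompt_word, n=6):
    """Extrude text by repeatedly sampling a likely next word."""
    out = [prompt_word]
    for _ in range(n):
        options = weights[out[-1]]
        if not options or sum(options.values()) <= 0:
            break
        words, scores = zip(*options.items())
        out.append(random.choices(words, weights=scores)[0])
    return " ".join(out)

# 2) Reinforcement learning from human feedback, caricatured: a data worker
#    rates a whole output, and the weights that produced a liked output get
#    nudged up (or down), pushing the system toward people-pleasing text.
def apply_feedback(output, thumbs_up):
    for prev, nxt in zip(output.split(), output.split()[1:]):
        weights[prev][nxt] = max(weights[prev][nxt] + (0.5 if thumbs_up else -0.5), 0.0)

sample = generate("the")
print("before feedback:", sample)
apply_feedback(sample, thumbs_up=True)   # the rater liked it; similar outputs get likelier
print("after feedback: ", generate("the"))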

Megan: And do you think that's part of why people are so willing to believe it's sentient? Because they are people pleasers?

Emily: Maybe. It says things they want to hear.

Megan: Yeah. You mentioned Elon Musk and eugenics. What is the connection between AI and eugenics? 

Emily: Deep, but Alex has more of the info here. So I'm going to let you do that one, Alex.

Alex: Yeah, there's a lot of this thinking about it from, I think, even the term intelligence and the rankings of intelligence. That has an incredible eugenicist history. All kinds of metrics of intelligence have been used in eugenicist projects, from the earliest, Alfred Binet's original IQ test, which he intended not as a test of intelligence, but as a remedial metric for children who may have needed more aid in school. But of course, Americans are experts at this, and also people in the UK. In the UK, the eugenicist project was using the test and adapting the test to try to identify people who were disabled and had developmental delays, and then funneling them into poorhouses and sterilization, preventing them from procreating and denying them full human rights in the UK. In the U.S., we took this and turned it into a ranking amongst races. And so there's a tranche of "scientists" who were using these tests to then make determinations about who was the best for citizenry, which also played into sterilization legislation and anti-miscegenation legislation. Stanford has a very particular place in this history. Malcolm Harris talks about this a lot in his book 'Palo Alto'. And so this kind of ranking of intelligence, we're seeing it as much now. There's a paper that cites some of this work, called 'Sparks of AGI', whose lead author, Sébastien Bubeck, Emily debated.

Megan: Oh my god, bravo!

Alex: Just dismantled this man. 

Megan: So hard.

Alex: But an earlier version of this paper cites this letter by Linda Gottfredson, who is this person who published this 'Wall Street Journal' op-ed that was something like, "52 psychologists agree, this is the definition of intelligence," and then it goes into a bunch of white supremacist nonsense. Gottfredson is this person who's on the Southern Poverty Law Center's list of white supremacists, has supported the Human Biodiversity Project, which is explicitly a eugenicist project, and has been cited approvingly by David Duke. She is effectively making this racial argument, that mental ability is inherent and tied to race. That's one dimension of it, this rank ordering of intelligence. And there's also the TESCREAL nonsense, the eugenics project, which is like the vision of a future, which is effectively promoting birth rates, promoting births for a certain set of people, or foregoing any kind of improvement in the current era for this future in which there's 10 to the 47 people living amongst the stars.

Emily: I thought it was 10 to the 58. It's a nonsense number, but it's a specific nonsense number that they keep reporting.
 
Alex: Yeah, 58 or 47, it's some kind of bullshit number. We need to just charge forward, develop AGI. It doesn't matter about climate change because we're going to have these trillions or quintillions or whatever. We don't have a proper prefix, because why would you? But we need to forego any kind of development, especially for climate refugees and people in the majority world who are suffering here and now from climate change. We're going to have this glorious virtual singularity; that's what's in the future. And so it's pretty expressly a eugenicist project. Bringing it back to Elon Musk, even before he was doing the [inaudible] at Trump's inauguration, he had been making these pronatalist statements. We see that eugenicist couple everywhere.

Carrie: Oh God, yeah. 

Alex: They must have immense amounts of money to be in a publication every single day. He's supporting those people, supporting the big natalist conference that happens. Now it's very out in the open, and we're seeing these people everywhere. But Musk—he's been an express natalist. Marc Andreessen has also made a lot of these natalist statements, effectively saying that people in the developed world need to be having more children. That's a pretty clear dog whistle there.

Carrie: Yeah. Can you explain how people who are so pro-AI, where supposedly it's going to take away all our jobs, also are so pronatalist—why they want so many more white babies? 

Emily: So that's the key point. They have this idea. And was it Bostrom? Somebody said a kid in the UK, the U.S., other so-called developed world places is more likely to contribute to the development of the singularity than a kid born somewhere else. And so we shouldn't be focusing on the well-being of people in the majority world because they don't matter towards this path towards the singularity, and then those 10 to the 58 or however many virtual beings living in the stars. And it's utilitarianism taken to its logical endpoint—that the happiness of those people outweighs any suffering now because there's so many more of them. It's all wrapped up, of course, in the inevitability arguments about AI. It's just a question of how quickly this happens. It's definitely our future, and just how fast can we run towards it? And do we get the good kind that doesn't kill us all, or do we get the bad kind and lose out on the happiness of those 10 to the 58 virtual people living in the stars?

Alex: I think there's this idea—the idea behind the natalist dream is that you're going to have this population bomb where you're not going to be able to support everyone. Yes, AI is going to replace these jobs, but at the same time, I think the argument is—I don't know the internals of the argument because I don't want to go down this disgusting rabbit hole—but it's effectively that you need enough bodies in the mix locally to support all the services. Otherwise, you're going to have not enough people paying into Social Security systems. It's this, what I call, population bomb argument that gets used. It's just absolutely absurd and disgusting stuff.

Emily: I just want to say that it's no fun to read this stuff, but there's a fantastic paper summarizing a lot of it. So 'The TESCREAL Bundle' by Timnit Gebru and Émile Torres, I think, is essential reading for anybody who's interested in questions about artificial intelligence and how it fits into these other ideologies. I also frequently recommend it to students who come to me who are super excited about AI. I'm like, "Here, read this paper first and then let's have a conversation."

Carrie: Yeah, it does make me feel tense, and I get a sick feeling in my stomach every time these kinds of questions come up. I'm just like, "What is going on?"

Emily: It's so absurd. So this idea that we're going to have 10 to the 58 people living in simulations, so computer programs spread across the galaxy—I'm like, "Have these people never once had to do tech support? Who's keeping those computers running?"

Carrie: Exactly. I don't understand the simulation part of it at all. Makes my brain melt. 

Megan: Related to this, you mentioned this earlier about AI safety and alignment. I'm thinking this is related because you mentioned how people—I guess boomers, right? No, sorry. Doomers. 

Emily: Doomers. Yeah.

Megan: Are concerned about AI safety and these hypothetical harms. But what about the actual harms that are happening today? Can you talk more about that?

Emily: Yeah. So there was a really pointed moment with Geoff Hinton, who is recognized as one of the “godfathers of AI,” which is, again, another anthropomorphizing way of thinking about things. He left Google, retired/quit—I like to say—and took his fainting couch on tour.

Carrie: Oh, that's so mean. I love it.

Alex: Just imagining a fainting couch with wheels on it now. Load her up on these planes. 

Emily: So he's talking to all these different journalists, and someone put Jake Tapper on CNN up to asking him, because he wants to talk about how dangerous this is going to be. He left Google so he could talk about how dangerous it was going to be. It's these existential risks that he's worried about. So this is the AI—the racist pile of linear algebra—combusting into sentience and then deciding to kill us all; that is the stuff that he's worried about. So someone put Jake Tapper up to asking, "Well, where were you when Timnit Gebru was getting fired, and then Meg Mitchell, over raising the actual current harms of this stuff?" And he said, “Oh, my concerns are more existential than theirs.” The thinking is, if it doesn't affect everybody, then it's not going to affect him, because he's sitting at the pinnacle of power and privilege, and therefore it's not as important. So he's worried about things where everybody dies, which are based on this fantasy of the racist pile of linear algebra combusting into sentience. And so that basically becomes more important and steals all the energy and spotlight from the actual things that are happening now in the name of this technology. Unfortunately, it's not just journalist attention, but also policymaker attention.

So Chuck Schumer had this whole series of AI insight forums that he ran in '23 to '24. At that point, there was a decade of work looking at the harms of automation in various contexts and the ways in which biases are getting amplified and perpetuated, and accountability is getting displaced, and all these things. One of these insight forums was literally about the probability of doom, or the P(doom). What comes out of this is this weak-sauce report that's got the existential risk in it and calls for more study, when people have been working on this for 10 years. And so the Geoff Hintons of the world and the Sam Altmans have managed to distract policymaker attention in really unfortunate ways.

Carrie: Do you think that's on purpose, or do you think that they're just really believing it?

Alex: Just to back up, I wanted to say one more thing. This idea of this existential risk is like, "Well, my risk is more important." It reminds me of colorblind racism, which is just like, "We need to improve everything. It's fine. We're not going to talk about race or any of these harms." And we're just like, "Okay, but we know that this stuff fucks over Black and Brown people and people in the majority world disproportionately." What you're saying is you don't care about these people. That's how it strikes us. In terms of whether it's on purpose, I think there's this soft version of it, and then there's this hard version of it. So people like Yudkowsky and people at this Machine Intelligence Research Institute, those are the hard version. Yudkowsky wrote this op-ed in 'Time' (and I don't know why these people are getting platformed at 'Time', because he and Scott Alexander and some of these other people have gotten a few op-eds there), saying, “This thing is an issue. If we need to do tactical nuclear strikes on data centers, we might have to do that.” And you're just like, "That's incredibly wild. First off, you've never worked in a data center or done any tech support before. So if you think that's going to happen, you're living in a different world, my friend." And then there's the soft version of it, which is like, "Well, we're going to go along with this. It could be a risk, but along the way, we're going to get a boatload of money."

There's a kind of political-ideological alignment there, which is, "We're going to get a boatload of money." In this latest funding round, OpenAI got a $40 billion investment, of which $30 billion is coming from SoftBank. And so they're now getting all this money pumped into them. They're saying, "Well, this thing is going to be powerful enough that at some point we're going to have to consider this 'element of alignment,' but it has such incredible power and we need this investment." And so, in that case—as we say in the book—AI Doomerism and AI Boosterism are two sides of the same coin. There's the idea that AI is inevitable and that it needs to be controlled. The AI Boosters are like, "Well, we're going to figure out alignment." The AI Doomers are like, "Sure, we need to figure out alignment, and it can actually kill us all." But it's all just thinking about this kind of nightmarish future where this stuff is just here and is in every domain, even though it's not that impressive of a technology.

Megan: Annoys me most of the time when I encounter it. 

Emily: And a pro tip: if you want to use Google for searches and you don't want to see the AI overview, if you add "-ai" (minus AI) to the end of your query, you don't get it.
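
[Editor's note: concretely, that means using the search minus operator to exclude the term "ai"; for example, typing something like "best hiking boots -ai" instead of "best hiking boots". The example query is made up for illustration, and the trick only works for as long as Google keeps honoring it.]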

Carrie: If you're on Chrome, you can add an extension that takes it away, too. Because I always forget to put the “minus AI.”

Emily: Especially on my phone. I'm picking the autocomplete searches, and then it's not there. 

Carrie: Yeah, on the phone, I'm not sure if there's a way around it. DuckDuckGo doesn't have it, I think.

Alex: DuckDuckGo doesn't have it, but I use DuckDuckGo, and they have an AI bot, but you can turn it off in the settings. It's very annoying that they've decided to do an overview. Aren't you the privacy-minded browser?

Carrie: Yes.

Alex: Please.

Carrie: I know. For the love of God, please.

Emily: And Sasha Luccioni, who does a lot of great work looking at the environmental impact of this stuff, points out that it costs far more compute—and therefore more electricity, more carbon, more water—to generate one of these AI overviews than it does to return a sequence of links. Because each of those words has to be calculated one by one as to what's a likely word to come next, given the previous ones. And I believe it's not cached, whereas the links are going to be cached.
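
[Editor's note: a rough sketch of the cost asymmetry described above: serving cached links is essentially one stored lookup, while a generated overview pays one full pass through the model for every word it emits. The Python below uses arbitrary placeholder numbers for illustration; they are not measurements of any real search engine.]

# Rough cost comparison, in arbitrary "units of compute", between serving
# cached links and generating an AI overview word by word.
CACHED_RESULT_COST = 1          # the ranked links are precomputed and cached: one lookup
COST_PER_GENERATED_WORD = 50    # made-up cost of one full model pass (one word of output)
OVERVIEW_LENGTH_WORDS = 150     # made-up length of a typical-looking overview

def overview_cost(num_words: int) -> int:
    """Each word of the overview requires another pass through the model,
    conditioned on everything generated so far, with nothing reused from a cache."""
    return num_words * COST_PER_GENERATED_WORD

print("cached links:", CACHED_RESULT_COST, "unit")
print("AI overview:", overview_cost(OVERVIEW_LENGTH_WORDS), "units")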

Carrie: It just makes me so angry. 

Emily: Your expression is saying it all.

Carrie: I'm sorry this is audio-only. 

Megan: I feel like when I'm reading this book, there were times where my face probably was telling the story of what I was reading. It's so infuriating because there are actual harms that are happening right now. I feel like not everyone is privy to these harms, like what's happening to these exploited workers that are having to babysit these models and stuff like that. If they knew more about this, would we change our behaviors? What do you think? Do you have hope?

Emily: We do have hope. I have to say that my mom, who was one of our early readers, took us to task for the preface saying that this book comes from a joyful collaboration. And it is a joy to work with Alex, but we are looking at awful stuff. She's like, "What do you mean joyful? There's so much terrible stuff." But chapter seven is—I think the title is something like—“Do You Believe in Hope After Hype?” And it's like, "What can we do?" And I think the very first step is to reject any inevitability narratives. This is built out of choices made by people. And yes, we can only control our own choices. But if enough of us make these choices and enough of us educate the people around us, we don't have to accept that the will of Google, OpenAI, Microsoft, Amazon, and Anthropic is what will be true in the world. Just starting from the power of refusal is one of the sources of hope, but there are others. We have some mere[?] policy suggestions in there. We talk about looking to librarians and library science for ways of managing information and information access, and also ways of connecting with people around information. And then that connection—people to people—and talking about solidarity and what workers working together can do to push back is super important.

Alex: Yeah, we talk a lot about collective refusal. So organizing around AI, which a lot of unions and other labor collectives have done. So like the National Nurses United, the Writers Guild of America, I think, have come out strongly in noting that this is not something that's going to be pro-worker—it's going to be anti-worker, it's going to make your job shittier. Also, strategic refusal—thinking about just not using it. And the heartening thing about this is I have a lot of friends that just come to me and they're like, "I talked about your book at work. I have an office job, or I have a data analysis job, and they're really pushing this." I was like, "Look, read my friend's book."

Carrie: Good.

Alex: That's the thing that I think is heartening. Any way we can get the book—or elements of the book, like zines or something—into people's hands. I think a lot of people are already doing this. A lot of people are developing materials, because there has been some really good work around this. It's not just us. We're providing tools and trying to provide language. I think one of the things that we help with is explaining the technology and then explaining the political economy around the technology. I think that goes a long way in saying, "These are the incentives around this, and people are really incentivized to push this." And so what can you do to push back against this? Because you have this knowledge at hand. This is not the coolest thing since the electric light bulb. They want you to think such a thing, but there doesn't need to be a chatbot in every home. What was the thing in AI 2027? It was like an agent in every home—like an agent in every pot. Some terrible Great Society bullshit. It's like, "No. What's that going to do? It's going to take up intense amounts of energy, and why?"

Carrie: Yeah. You talk about it in your book, and I think also in one of your podcast episodes, about AI agents talking to each other in a Zoom meeting. And what is the point of that? Just burning the earth for nothing.

Emily: Yeah, exactly. So one of the things, having talked about the preface, that I always like to point out about the book is that we have endnotes. But because this is a trade press book, we were not allowed to put in the little numbers saying where they are. It was probably the hardest thing for me about writing this book—figuring out how to navigate that academic urge to cite our sources when you can't distract people with little numbers. And so the solution is that there are endnotes, and they are anchored to an anchor phrase. So they tell you what page and then what anchor phrase. So at any point, if you're reading and you're like, "Really, what's their source for this?" We've got a source. Go find it in the endnotes.

Alex: How many endnotes? Something like 500.

Emily: It's a lot.

Alex: It's well cited. And that's just the endnotes—the number of sources is even higher, because there can be multiple sources in an endnote.

Emily: And some of the endnotes are meaty too. It's things where it was too far in the weeds at a certain point, but we couldn't quite let go of it. And so it got pushed to an endnote. My hope is that at least some of the readership will go dive into the endnotes and see what's there.

Carrie: Always read the notes. Always read them.

Alex: Always read the footnotes, 100%.

Carrie: I may have stolen that from you.

Alex: Please, take that. You are welcome to distribute that. Always read the footnotes. You're going to discover some eugenicist, white supremacist bullshit, or you're going to discover a host of preprints and stuff that hasn't been peer-reviewed. And this is a thing. This is for any enterprising—I don't know if you have any sociologist listeners—but if you have any enterprising science-of-science type listeners, I'd love for you to study these circular types of citation networks and this knowledge network of the people who do alignment research. Because it's going to be completely citing either OpenAI's company press releases or whatever. It's all shoddy citation practices all around.

Emily: Was it Meredith Whittaker who calls it a Potemkin citation network?

Alex: Yeah, she calls it a Potemkin citation, which is very useful. She's talking about it in particular; I forget exactly where she references it. I think it might be in reference to some anti-trans shit that 'The New York Times' did, which they love to do. But she was saying, one of the functions of putting this out and putting a statement around it is that it becomes a Potemkin citation—the idea that, "Well, I needed to say something." And OpenAI is notorious for this: "We're going to put out a white paper, and then everybody can cite this." A really nefarious strategy. There's a historical analog, too, in the 1956 workshop, where a lot of the citation practices were the same. Stuff coming out of industry labs was shoddily citing things, not tracing ideas. None of the stuff got peer-reviewed or had been peer-reviewed. And then they misconstrued the arguments. So lots of continuity there—more continuity than you'd imagine.

Carrie: It's scary how much continuity there is, really. Is there anything else you want to let our listeners know before we let you go?

Alex: You can check out 'The AI Con' wherever you find fine books. I don't know when this is coming out, but we have a virtual book launch on May 8th. You can check it out, and all our other events, on our website at thecon.ai. We also have our podcast, 'Mystery AI Hype Theater 3000'. You can listen to it wherever fine podcasts are sold. And by sold, I mean you get them for free.

Carrie: Yes, absolutely. Definitely check out the podcast—it is great.

Megan: Yes, it is.

Emily: Thank you. And it's such a pleasure to be back on The Vocal Fries with you all.

Megan: Oh, it's so great to have you back.

Carrie: Yes, I always enjoy having you on and enjoy any anti-AI stuff because I have such a hate on for it. 

Alex: Roxane Gay posted this, and I'm upset that she posted this because I had this internal thought. There's the meme, which is like, “If there's 100 AI haters, I will be amongst them. If there's 50 AI haters, whatever, if there's zero AI haters, it's because I am dead.” She posted something similar. I'm like, "Damn. She got it in before I did," but I'm happy she's amongst our ranks.

Emily: One of us.

Carrie: One of us. 

Megan: Absolutely.

Carrie: Well, thanks again. This has been great. 

Megan: This has been great. So much fun. And we leave our listeners with one final message:
Don't be an asshole.

Carrie: Don't be an asshole. 

Emily: Don't be an asshole. 

Alex: Don't do it.

Carrie: The Vocal Fries Podcast is produced by me, Carrie Gillon. Theme music by Nick Granum. You can find us on Tumblr, Twitter, Facebook, and Instagram at @vocalfriespod. You can email us at vocalfriespod@gmail.com, and our website is vocalfriespod.com.


[END]