Mystery AI Hype Theater 3000

You Talked to Workers for This Labor Research... Right? (with Sophie Song), 2025.11.17

Emily M. Bender and Alex Hanna, Episode 67

Last month, Senate Democrats warned that "Automation Could Destroy Nearly 100 Million U.S. Jobs in a Decade." Ironically, they used ChatGPT to come to that conclusion. DAIR Research Associate Sophie Song joins us to unpack what goes wrong when self-professed worker advocates use chatbots for "research."

Sophie Song is a researcher, organizer, and advocate working at the intersection of tech and social justice. They’re a research associate at DAIR, where they're working with Alex on building the Luddite Lab Resource Hub.

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy.

Subscribe to our newsletter via Buttondown.

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.

Alex Hanna: Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of Linguistics at the University of Washington. 

Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 67, which we're recording on November 17th of 2025. Last month, Senate Democrats warned that quote, "automation could destroy nearly a hundred million US jobs in a decade," unquote. Ironically, they used ChatGPT to come to that conclusion. 

Emily M. Bender: Conservative researchers then push back on the Democrats' call for more oversight of tech companies. The one thing they can agree on, of course, is that it's a great idea to use ChatGPT for labor research.

Alex Hanna: It's been a while since we've done an episode focused on labor, so we thought that it'd be a good time to look at some of the AI talking points that come from the side of worker advocates, or at least people who want to think of themselves that way. And our guest this week is Sophie Song, a researcher, organizer and advocate working at the intersection of tech and social justice. They're a research associate at DAIR, where we're working together on building the Luddite Lab Resource Hub. So welcome, Sophie! 

Sophie Song: Thanks for having me! So good to see both of you. 

Emily M. Bender: Thank you so much for joining us. Let's dive into our first artifact here, which is a report that came out of the United States Senate Health, Education, Labor, and Pensions, aka HELP, Committee, with the byline Bernard Sanders, ranking member. Minority Staff Report, October 6th, 2025. And the headline is "The Big Tech Oligarchs' War Against Workers: AI and automation could destroy nearly a hundred million US jobs in a decade." And I think we're gonna start here with the executive summary. This first paragraph basically just says, we've had a lot of increase in worker productivity, but guess what? The workers aren't getting the benefits of that. And then below the graph here, the graph showing corporate profits taking off and wages staying flat or falling, it says, "Now artificial intelligence and automation stand to transform the relationship between workers and employers. Artificial intelligence- AI- refers to computer systems that perform complex tasks normally requiring human reasoning, decision making, and creativity. For example, ChatGPT by OpenAI describes itself as an, quote, 'AI model that generates human-like text by predicting the most likely next words based on patterns it learned from a vast amount of written data.'" I think I wanna stop there because I think I have issues with that definition of AI, but, Sophie, thoughts?

Alex Hanna: Oh yeah. You're also- 

Emily M. Bender: Oh, I'm not sharing. 

Alex Hanna: You're not sharing, yeah. 

Emily M. Bender: Ah, are you seeing it now? 

Alex Hanna: No, we're not seeing it. 

Emily M. Bender: Okay. Well keep going.

Alex Hanna: Oh well, so, I have a question for Sophie, because you used to work on the Hill, or nearby. And just so the listeners know, so the HELP committee is, it's staffed by Sanders staffers, correct? Or is it sort of, so there's a lot of them on the HELP committee.

Sophie Song: So the majority of the staffers in the Senate committee are sort of staffers that work for the chair of that committee. Or in this case, on the minority side, it's Bernie Sanders. So this report is going to be almost exclusively written by Bernie Sanders staffers who work for the HELP committee.

Alex Hanna: Got it. Okay. How's it going, Emily? 

Emily M. Bender: So I seem to be unable to actually share this time. If I say share, share the screen, I pick the window, and it's not letting me do it. I'm gonna try one more time, and then if not, we might have to-

Alex Hanna: I could also try, if that helps. 

Emily M. Bender: Okay. I mean, I'll hit stop. So it's not sharing now, right? Even though it's telling me that it is. 

Alex Hanna: I don't see it, no. 

Emily M. Bender: All right. 

Alex Hanna: Why don't I try it? 

Emily M. Bender: Yeah, why don't you go ahead and try. 

Alex Hanna: I'll try and get it this time. 

Emily M. Bender: All right. And while you're trying, I will just yap a little bit here, which is- This definition of artificial intelligence is pretty funny because it's, first of all, this very bland thing about, "perform tasks that normally require human reasoning, decision making and creativity." And it's like, yeah okay, that's one of the more defensible definitions, I suppose. But it also then sort of begs the question of, well, what does it mean? Like when we've automated a task, are we still actually doing the same thing, or not? And then they give the example of ChatGPT describing itself, which is not a thing that it can do. And apparently the description is that it generates human-like text. So OpenAI there is being honest, saying that this is basically just something for mimicking language use, which isn't what they usually claim about ChatGPT. So that is kind of surprising to me. And I see you succeeded in sharing the window. Thank you, Alex.

Alex Hanna: I am now sharing. I am the sharer today. Yeah, I also wanna look at this chart, 'cause it's very silly. 'Cause it's corporate profits and wages and labor productivity, and the corporate profits are going up in a nearly secular increase. Not monotonically, but they're going up. And so then there's a dip with the financial crisis, and then another dip after COVID, and then a huge rebound. And the claim that is being made about productivity is like, there's an increase since 1973, but there's not like a, you know, if you would expect that there was gonna be an increase that is based on quote unquote "AI," you'd imagine that to be very, you know, huge. And it's actually just like, flat. So it's just very funny. 

Emily M. Bender: Well, also because this is profits, not market cap, right? So all of the big numbers we're seeing about AI is market cap, and then the cash they're burning, which is the opposite of profits. 

Alex Hanna: Yeah. And then however labor productivity is defined, I think coming from, they've got a footnote here from, I believe the Fed. Anyways. All right. Initial thoughts on this, Sophie?

Sophie Song: I sort of despise AI definitions in the policy context, almost always. It's very like, "Webster's Dictionary defines...", this blank sort of starting place for how they talk about what they think AI is. I'm also kind of just, whenever I think about this report, I just think about how this is the best foray that the Democrats have made in terms of talking about AI and workers, and I just really wish it wasn't. 

Alex Hanna: Yeah. Yeah, unfortunately. 

Emily M. Bender: Yeah. The good thoughts about workers could be paired with better understanding of what the tech is. 

Alex Hanna: Yeah. 

Emily M. Bender: And then we'd be in a better place. 

Alex Hanna: All right, so going down. So the next three paragraphs are pretty terrible. "Some of the very wealthiest people in the world, including Elon Musk, Larry Ellison, Mark Zuckerberg, and Jeff Bezos are now investing hundreds of billions of dollars into these revolutionary technologies, and corporations are wielding their power to turn AI and automation into a new kind of workforce, 'artificial labor.'" Which is the first time I've ever seen this term deployed, so it's kind of weird, but we'll get back to it in a second. "This is happening at an unprecedented speed. The agricultural revolution unfolded over thousands of years. The industrial revolution took more than a century. Artificial labor could reshape the economy in less than a decade. Senator Bernie Sanders, ranking member of the Senate Health, Education, Labor, and Pensions Committee- HELP Committee- directed his staff to examine how artificial labor could impact workers and their livelihoods. HELP minority staff reviewed economic data, investor transcripts, corporate financial filings, and asked ChatGPT about job displacement."

Emily M. Bender: No! 

Alex Hanna: "Specifically, HELP minority staff asked ChatGPT to analyze job descriptions cataloged by the federal government for the entire US economy, and predict tasks that could be performed by AI and automation." And this is in bold: "According to the ChatGPT-based model-" which, not really a model, but- "artificial intelligence and automation could replace nearly a hundred million jobs over the next 10 years, including 89% of fast food and counter workers, 64% of accountants, and 47% of truck drivers." Yes. All right, thoughts, y'all?

Emily M. Bender: So, sjaylett in the chat says, "Revolutionary, perhaps in the sense of capital's use of these technologies will hasten a revolution." 

Alex Hanna: Yeah. 

Sophie Song: I think it's so unfortunate in this line that like, you had an opportunity to ask workers what they think, or what is happening to their industries. Even, like truck drivers, for example, right? Represented by a very large and powerful union. Why are all of these stats just things that you ask ChatGPT, and not something that you, a Senate staffer on HELP, simply asked a worker? Like workers, some of whom are represented by people who would be happy to talk to you about any of these things.

Alex Hanna: Yeah, and that's right, 'cause that's also, I mean, it's not like they don't have connections to the Teamsters. And like the Teamsters, for instance, have been pretty out in front on self-driving vehicles. You know, they were flanking Supervisor Jackie Fielder when the Waymo in San Francisco killed Kit Kat. Rest in peace, Kit Kat. But they've been saying, you know, they were pushing for regulation around self-driving vehicles, and were advocating for a bill that would've regulated automated truck driving. And Governor Gavin Newsom vetoed it, right? So, oof, yeah. 

Emily M. Bender: Yeah. People to talk to. And so the end of the first paragraph, where it says, "artificial labor could reshape the economy in less than a decade." Citation fucking needed! Where's that coming from? But as you point out, they're asking ChatGPT instead of talking to workers. And then the people that they quote in this next paragraph are Dario Amodei, McKinsey, and Elon Musk. 

Alex Hanna: Yeah, right. The thing about this report that sends me up the wall is that they do this bullshit analysis, but then the people that they quote approvingly are, you know, certified enemies of the working class. You know, why are you taking these people at their word, when, you know, you shouldn't? And it also reinscribes stuff that Bernie has said before where he's like, "I spoke to the head of this AI company, and they told me this, and it really shook me." And it was really, I mean, it's just absurd. I also wanna take serious umbrage with this term "artificial labor." Like first, it's just something they've made up. You know, we look at a lot of stuff on this podcast, and read a lot of stuff in AI and labor, and Sophie and I do too, and I've never seen this term. And it's such a weird term, especially because, of course, what we know about data work and data labor and all the stuff that happens, sight unseen, to make any of this stuff work. And so then it's such a weird, I don't know what that like, jurisdictional kind of move, to then coin this term. So I don't, it's so bizarre. 

Emily M. Bender: Yeah. I think, I mean, it opens the question of, what do they think labor means, if "artificial labor" could refer to automation? 

Alex Hanna: Yeah.

Sophie Song: Right. Also, I think there's something really ironic about one of the core demands that labor is making around technology is that they want workers' voices, workers' say, workers' agency and decision making power to be a core part of how technology shapes their work. And this report that is ostensibly about technology and workers, like, refuses to seem to address the question of- refuses to include that worker voice. 

Emily M. Bender: Yeah. Infuriating. So I'm gonna pick up with the next couple paragraphs here. "In May, Dario Amodei, the CEO of the main competitor to OpenAI's ChatGPT, Anthropic, warned that AI could lead to the loss of half of all entry-level white collar jobs, spiking unemployment to 10 to 20% in one to five years. In 2023, McKinsey estimated that AI technologies had quote, 'the potential to automate work activities that absorb 60 to 70% of employees' time today.'" And there's footnotes on both of those that lead to, one is an Axios report and the other one is McKinsey. "Last year, Elon Musk said that as a result of AI and robotics, 'probably none of us will have a job. If you want a job that's kind of like a hobby, you can do a job. But otherwise, AI and robotics will provide any goods and services you want.'" And my notes here are like, what about care work? You know, what the hell? And then, next paragraph: "The reality is no one knows exactly what will happen. There is tremendous uncertainty about the real capabilities of AI and automation, their effects on the rest of the economy, and how governments and markets will respond. While this basic analysis reflects all the inherent limitations of ChatGPT, it represents one potential future in which corporations decide to aggressively push forward with artificial labor." Which sounds to me just like, "ChatGPT makes shit up. Standard disclaimer." But, thoughts? 

Sophie Song: I think it's, it's almost like, they stumbled at the finish line of this paragraph to me. Because it's like, yes, there is tremendous uncertainty, and there's a reason we still have to do something about this right now. And it's like, we can identify actively how workers are being harmed by these technologies, how it's being used as a cudgel to intensify their work, that we're seeing people be laid off and then rehired, but in the meantime they've implemented an entire new technology suite that makes their jobs significantly more demeaning or de-skills them in some way. There are really obvious material things we know are happening, right?

Alex Hanna: Yeah. 

Sophie Song: And yeah, we don't know what it looks like in 10 years, and that's a perfectly fine thing to say. 

Emily M. Bender: But you could ask people about what it looks like now, right?

Sophie Song: There's plenty to say about what we do know. 

Alex Hanna: Yeah. A hundred percent. magidin in the chat says, "Part of the reason quote 'nobody knows' is that you guys insist on broadcasting your fantasies as if they are business plans, muddying the waters." And yeah, I mean, it's also just like, bullshit analysis when, you know, there are, kind of, estimations of labor displacement. And it's very hard to estimate displacement due solely to the introduction of quote unquote "automation" or quote unquote "AI and automation" in the workplace. But there are other kinds of estimations that you can make, and this model is not one of them. Absolute trash.

Emily M. Bender: Absolutely not. I wanna add one more quote from the comments here. abstract_tesseract says, "Okay, but what about the reasonable amount of certainty of what kind of fuckery the companies are currently up to?" Like displacing work onto the workers who are still retained or people overseas or everything like, you know.

Alex Hanna: Yeah. And so, here's the next thing, which is like a chart of the top 20 occupations with the largest job losses based on tasks performed, ranked by the total number of jobs replaced by artificial labor over 10 years. What a-

Emily M. Bender: You missed the two most important words though, Alex. It starts with "ChatGPT's prediction of." 

Alex Hanna: Yeah, I mean, I was just gonna remark on the unwieldiness of that sentence. So the first is fast food and counter workers, which this quote unquote "analysis" says it's going to replace 3.3 million workers, which I'm like, okay, that's pretty ridiculous on the face of it. Customer service representatives are about 2.5 million workers, "laborers and freight, stock, and material movers- hand," which I thought was very literally the people that move things. And I guess this is a place where the conceptual slippage of AI is really showing, because, I mean, are you talking about robots, or, I mean, what are you doing here? And so then it just goes down and down, all the way down. And yeah, speaking of healthcare workers, there are home health aides. "Janitors and cleaners, except maids and housekeeping cleaners" was surprising to see here. Waiters and waitresses, personal care aides. And then lastly-

Sophie Song: I'm hoping these categories are defined in a census somewhere, or like BLS, and that's why they look this way.

Emily M. Bender: They are. They come from that, whatever the thing is called- 

Alex Hanna: O*NET. 

Emily M. Bender: Yeah. 

Alex Hanna: They come from O*NET, so I'm assuming what they're doing is that they're taking David Autor's framework of treating jobs as bundles of tasks. And then what they're doing is that they're then doing the ChatGPT analysis on the tasks, and then aggregating up to the jobs. But it is very- 

Emily M. Bender: Which is a huge conceptual problem there. 

Alex Hanna: Yeah. It's really ridiculous. And yeah, VAshETC- Victoria, hi!- says, "Replacing home health aides? Hilarious!" Yes. Gosh. 

Emily M. Bender: Yeah. So there's also a terrible thing where you can see the doomers' influence in the paragraph just below the table there, where they say, "Taken to its extreme, artificial labor could be used to create even more forms of artificial labor." So this is the singularity idea. Who have you been listening to, and why are you not talking to actual workers and better experts on this technology?

Alex Hanna: The citation is really weird too. It says "Job description, software engineer, product" from something called Mechanize, spelled with a Z, Incorporated. Yeah, I don't, that sort of seems sus. 

Emily M. Bender: So yeah, so this is, the end note was on "Software engineers used to train the replacements. Now they are creating the replacements, who in turn could create their own replacements." And so I'm guessing it's a company that is supposedly creating artificial software engineers, and hiring software engineers to do that. Although I didn't click through on that particular end note. 

Alex Hanna: Yeah. 

Emily M. Bender: I can try that and see what happens. But Sophie, your thoughts? 

Sophie Song: I think I'm sort of stuck on this idea of what they're calling a model, right? I forget if they explain better down the line, but I just find this, it's just that continuous trend, I'm sure you guys have talked about this, that's just like, we're using ChatGPT to produce text that sounds like they've done analysis, and we're calling that actual substantive analysis, right? Whenever I see these numbers, there's just this assumption that, because you've asked ChatGPT to do essentially, like, labeling, categorization, and prediction, the math has actually happened.

Alex Hanna: Yeah. 

Emily M. Bender: So I got super suspicious when I was reading this report, and I was like, are they just like literally taking numbers extruded directly from ChatGPT? But if we go down, Alex, to the paragraph where they describe it.

Alex Hanna: Yeah. And as we're going down, all we're passing now is basically lots of corporate quotes, like, "We're using AI." Sure. It's just silly. So then, yeah. Where are we going now? 

Emily M. Bender: Page 12. 

Alex Hanna: 12. Okay. 

Emily M. Bender: Page 12. The last paragraph above the footnotes. And by the way, while you're searching, I did click through on that job application. And there's nothing there. It basically says, it's a ridiculous job ad. "Senior software engineer, compensation $500,000, location type hybrid. Overview: As a software engineer, you will create RL environments to sell to the leading AI labs. You might be a good fit if you have two to ten years of experience as a software engineer and can write in Python. No prior machine learning or AI experience is required." And that's all it says. 

Alex Hanna: And that's all it says about the analysis. 

Emily M. Bender: And so, why they're using that as a footnote for what they said there, yeah. Who knows? Okay. 

Alex Hanna: So now we're on page 12, and we're in the actual description of the analysis. And so, this says, "We used the Occupational Information Network's, O*NET's, list of occupations, n = 867. Detailed tasks are available for 774 occupations." So the number of tasks is 15,638. "Categorized by core tasks, 11,905, and supplemental tasks, 3,733." And the citation here is about the exclusion of some jobs. "Building on previous research-" and the previous research is, if you can guess it by the time I'm finishing the sentence, you get a cookie. "Building on previous research, we asked OpenAI's GPT-4.1 model to rate each task based on whether the task could not be automated, could be automated with a human in the loop, could be partially automated, and could be fully automated by AI technology in the next 10 years, including by large language models, neural networks, machine learning algorithms, automated machines, and combinations thereof." Footnote 48. And that footnote is "GPTs are GPTs," OpenAI's own research paper.

Emily M. Bender: Which we got into in a previous episode, that I will look up to make sure it goes into the show notes, because it's the same stupid methodology. So when I was reading the table, I was like, are these numbers just straight out of ChatGPT? And then I was like, doing a little bit of the math to see if the percentages checked out, and they were checking out. And then when I got to this, I'm like, oh, I see, they actually got in there. This is what they, I think they mean by their model, the "ChatGPT-based model," is, they did this thing where they asked about specific tasks, which of course is nonsense. That's fake data. And then made this leap that, if you can automate X percent of the tasks, then X percent of the people doing that job are gonna lose their job. And does that make sense?

Alex Hanna: Wait, so did they do that? Well- 

Emily M. Bender: That's how I read it. 

Alex Hanna: Okay. So yeah. So after, so what they say is, "Weighting core tasks twice as high as supplemental tasks, we aggregated the data from the task level to the occupation level, resulting in an automation score for each occupation. After converting the score to a percentage of employees displaced-" oh, you're right. So they literally took the weighting of the tasks to the percentage of people within an occupation. And then they multiplied that by the total number of employees in 2023 in the occupation, provided by BLS data. That's a, I mean, technically it is a model. It's an incredibly stupid model, but it's a model.

Sophie Song: Wait, sorry, let me make sure I understand this correctly. So they just multiplied the percentage they got by the number of jobs that exist? Like assuming, so it assumes that all these jobs could be replaced.

Alex Hanna: Yeah. So what I'm assuming they did is, okay, if you're a McDonald's worker, you're a fast food worker, basically, like it says 66% of your work can be automated. So like, taking the orders or something and then, like flipping the burgers is at 33%, and then, whatever. I guess I'm doing the math in my head. And then they said there's, what's six- oh, fractions, sorry. And then they're basically saying, oh, there's 5 million fast food workers in the US. They multiplied that 66% by 5 million, and it produced that 3.3 million number. Is that, am I reading that? Is that what they did? 

Emily M. Bender: That's how I read it. 

Alex Hanna: I think, that's how I read it. That's outlandish. That is wild. 

Emily M. Bender: And I guess the logic is, okay, there's a certain amount of work that these 5 million people are doing, and 66% of that work is gonna be automated, and so therefore we only need 33% of the workers anymore. But that's not how it works, right? 

Alex Hanna: No. 

Emily M. Bender: Especially if you think about things, well, I mean, anything where there's a person who's responsible, and they're on the job, and they're there like, sort of as needed. That's not gonna change like that. 
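
(A quick aside for readers: the model the report describes reduces to a few lines of arithmetic. Here is a minimal sketch in Python, assuming the hosts' reading of page 12 is right: rate each task, weight core tasks twice as high as supplemental ones, average up to an occupation score, then treat that score as the share of workers displaced. The task ratings and the employment figure below are illustrative stand-ins, not the report's actual data.)

```python
# Minimal sketch of the report's "ChatGPT-based model" as described on page 12.
# The task ratings and employment figure below are illustrative stand-ins.

# Each task gets an automation rating (0.0 = could not be automated,
# 1.0 = fully automatable) and a flag for core vs. supplemental.
tasks = [
    {"rating": 1.0, "core": True},   # hypothetical fully automatable core task
    {"rating": 0.5, "core": True},   # hypothetical partially automatable core task
    {"rating": 0.0, "core": False},  # hypothetical non-automatable supplemental task
]

def occupation_automation_score(tasks):
    """Weighted average of task ratings; core tasks weighted twice as high."""
    weights = [2.0 if t["core"] else 1.0 for t in tasks]
    total = sum(w * t["rating"] for w, t in zip(weights, tasks))
    return total / sum(weights)

score = occupation_automation_score(tasks)  # 0.6 for the toy ratings above

# The leap the hosts flag: treat the score as the share of workers displaced,
# then multiply by BLS employment. With roughly 5 million fast food workers
# and a score around 0.66, this is where the table's ~3.3 million would come from.
employment = 5_000_000
print(f"automation score: {score:.2f}")
print(f"'jobs displaced': {score * employment:,.0f}")
```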

Alex Hanna: Yeah. abstract_tesseract says, "Is this what we call quote 'data driven'?" Yeah, apparently it is. Oh my gosh. Yeah. What a methodology. And it's just as ridiculous on the face of it as the OpenAI paper is. And, you know.

Emily M. Bender: Yeah. And I put the link, by the way, it was episode 25 where we talked about that OpenAI paper, in January of 2024.

Alex Hanna: Yeah.

Emily M. Bender: I guess the HELP committee wasn't listening to that episode. 

Alex Hanna: No, unfortunately. Okay. So that's basically it, and then they reiterate the results, and then there's a chart hidden here from their review of corporate transcripts, where they have a layoff number, and then the profit number, and then the executive pay number. And I'm just like, sure. And you're actually gonna be taking people at their word on job losses here? I dunno.

Emily M. Bender: Before we leave this document, I think, 'cause we were talking ahead of time with Sophie, that there are some good policy ideas in here. 

Alex Hanna: Yeah. Right. 

Emily M. Bender: They're just being promoted based on this quicksand. Is that about right, Sophie?

Sophie Song: I mean, that's my general feeling. I actually think, like the committee recommends a series of policies about how you can help working people in the face of technological change. And, let's see, this is page, maybe page 6 if we wanna look at it. 

Alex Hanna: Yeah. Page 6, yeah. So in the top line things. Yeah. Do you wanna read them, Sophie? 

Sophie Song: Sure. You know, it suggests "moving to a 32-hour work week with no loss of pay, sharing corporate profits with workers, democratizing corporate boards, expanding employee ownership, enacting a robot tax, more than doubling union membership"- you know, highly supportive- "guaranteeing paid family and medical leave, bringing back pensions, banning stock buybacks"- you know, bring it on. Like so many of these suggestions are things that would materially benefit working people, especially in the face of the uncertainty that this, like, giant hype bubble is bringing on, right? But the report at some point literally says, oh, these employees are being fired because of AI. When we know that employees at Amazon, for example, are being fired because, you know, they need their bottom line to look good while they're spending so much money on this AI expansion. And getting rid of labor is one of the easiest ways to do that. And then every other organization and every other competing tech company is following suit, right? Yeah, "ban stock buybacks." Let's do that right now. "Bring back pensions." You know, make it possible for workers to actually unionize in their workplace. These are the things that will prevent these potentially terrible outcomes when you bring technology into the workplace that is controlled by the boss, that is built on false promises of a workplace without workers, right? Like this idea has been around for a hot minute, and it's just not what happens. It's just a way to wrest power from workers. And claiming that these technologies have the power to take away this many jobs is really like, affirming that power.

Emily M. Bender: Yeah. Exactly. And these are the last people we want doing that. It does remind me of Lee Vinsel's criti-hype, right? They're criticizing the target, but they're doing it in a way that builds it up rather than actually stealing the power back or, not even stealing the power back, but reclaiming the power, as they should be doing. 

Alex Hanna: Yeah. Absolutely. 

Emily M. Bender: All right. Should we move on to the next one, Alex? 

Alex Hanna: Yeah, let's do it. So this next one is from, not a friendly part of the Sanders orbit, but the American Enterprise Institute, a senior fellow there named Will Rinehart. And the post is titled, "Senator Sanders' AI Report Ignores the Data on AI and Inequality." And so this is published in a series, I guess, called AEIdeas. Yeah, terrible little portmanteau there. October 9th, 2025. And the first paragraph is just reiterating the report. And then, I want to just jump in, it says: "While I commend Senator Sanders's staff for its pioneering use of ChatGPT to model job losses," which, already hilarious. 

Emily M. Bender: Also, and it wasn't pioneering. It was done two years earlier. 

Alex Hanna: Yeah. And then, "there are a number of serious issues with the report and its proposal to tax robots." So, already off to a strong start. Pretty not appealing, not flattering picture of Senator Sanders, in kind of his classic, I'm about to say something, pose. So, "For one, the report reviews some key papers on automation and income inequality, but nowhere does it review the current literature showing that new AI tools are reducing inequality." So first off, the report does one paper, which I believe is the OpenAI one. But sure. "In Brynjolfsson et al., Caplin et al., Choi et al., Hoffmann et al., Noy and Zhang, and Hauser and Doshi, advanced AI tools were found to be skill equalizers, raising the performance of those at the bottom in customer support, legal work, and software development, among others. If Sanders were truly concerned with worker inequality, he should be optimistic about AI tools and engaging with the empirical work on the subject, which I've been actively collecting here." And there's a link to Will's stuff. Okay, let's stop there. Thoughts so far?

Sophie Song: I think I got stuck when I heard them complimenting the methodology of the paper. Because fundamentally I think that work about workers should listen to workers, right? Like when you're talking about the future of work, it's just, even that methodology where they make that, jump from, oh, this is how much a set of tasks in a profession could be automated to that's how many jobs will be replaced. Literally, the ILO, the International Labor Organization, like the UN agency for labor, did a set of reports that looks at gen AI exposure for jobs, and concludes the majority of jobs will be augmented, not replaced, using a similar sort of measurement around what's automated or which tasks can be partially automated, which ones can be fully automated. And it's like, literally, I just wish anyone would listen to workers. This is just like, my whole theme of this, being on this episode. And people who spend more time listening to workers even! 

Emily M. Bender: Right. Listen to the people who listen to workers. And you know, I wanna sort of, on that theme- where, who's this author again?- Rinehart is saying that AI tools are actually skill equalizers. And he says this is gonna raise the performance of those at the bottom in customer support. This is not the inequality in question, right? Inequality is economic inequality, not- and as sjaylett puts it here in the chat, "Of course the inequality the AEI cares about is inequality of output."

Alex Hanna: Yeah, exactly. And it's really, if you read, so everybody that's cited here is a labor economist. Like I've read the Brynjolfsson piece and the Noy piece. So like, the Brynjolfsson study is measuring, like they're looking at data from customer support people, and they're effectively saying lower skilled workers, the ones that are less experienced, are basically producing more outputs, just because they're going and they're getting this extruded text and then they're responding quicker. And then the people who have been at their jobs longer are like, not seeing as much of a gain. Okay. And this is also the thing about the study that's wild, too, is that they basically, I believe they effectively were treating a Filipino customer support workplace as this, like, natural experiment, which feels weird already. It was either this study or a different one. And then the Noy and Zhang experiment, what they're doing is they're looking at time decreased and output quality in an online experiment, in which it's like a writing task. And I think, it's not a real writing task, it's basically Prolific. One interesting part of this study, and this is getting way into the weeds, but there's a point in the study where they said right after the experiment, people were kind of excited to use ChatGPT, but then that didn't sustain two weeks later, which was very funny. They're like, oh, this isn't actually that impressive. So yeah, so right, in the comments, the person that said, was it sjaylett, like, yeah, worker inequality in terms of worker productivity is like, sure, but actual worker, like, wages, no. And other work has come out. The Humlum study, that came out a few months ago, shows that there are no positive labor market effects in their survey and administrative data of 25,000 Danish workers. So it's like, you're doing some very creative equation of what inequality you actually mean.

Sophie Song: Yeah. I think it's really important to remember- I'm thinking about worker dignity now, right? Like all these conversations around productivity and how this transforms the workplace. And I think it just totally forgets that it's like, we're talking about the day in, day out of millions, you know, hundreds of millions, billions of human beings. And what is the point of any of this, if what the technology does is fundamentally make the lives of that many humans worse, right? Like this conversation of productivity moves the target away from what we should be fighting for, and what we should be thinking about when we're talking about technology and work. And work in general. 

Alex Hanna: Yeah. Oh, absolutely. I mean, that's a great, that's a really great point. There's a few more things to point out. I mean, we won't get into this robot tax thing, which is kind of a different issue. But the two paragraphs to highlight here, which is Rinehart's bone to pick, which is, "Second, the report doesn't contextualize what a hundred million jobs lost over a decade would mean for an economy of 170 million workers. In any given month, roughly 4 to 6 million jobs are lost. Over a decade then, the US economy experiences roughly 480 to 720 million job separations through the normal churn of business, putting the 100 million job loss number at or below 20% of the total job turnover that would occur regardless." So this is like, a weird nitpick. This is basically saying, actually, a hundred million jobs isn't that bad when you contextualize that, you know, there's this much job churn. And you're like, well, a pox on both your houses. 'Cause both of you are wrong. But then also, it's not about churn. Like the hypothetical here is that there's a hundred million and that's not churn. It's that those are permanent losses. So what's your point? It's such a weird point. And I think just to reiterate what you said, Sophie, yeah, it's not really getting at what's a good job and what do people want to be doing. 

Emily M. Bender: Yeah. And also, what's that experience, right? Again, to your point earlier, of being "separated" from your job, in this euphemism here. A hundred million people experiencing that sucks, right? One person experiencing that sucks. 
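
(For reference, the churn arithmetic in Rinehart's quoted paragraph does multiply out roughly as stated. A quick check, using only the numbers from the quote itself:)

```python
# Checking the churn arithmetic from the quoted paragraph.
monthly_low, monthly_high = 4_000_000, 6_000_000  # monthly job separations
months = 10 * 12                                  # one decade

decade_low = monthly_low * months    # 480,000,000 separations
decade_high = monthly_high * months  # 720,000,000 separations

ai_losses = 100_000_000
print(f"{ai_losses / decade_high:.1%} to {ai_losses / decade_low:.1%} of churn")
# -> 13.9% to 20.8%, i.e. "at or below 20%" only with generous rounding
```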

Alex Hanna: Yeah. And then, "Finally-" and then, I will give Reinhart props here. As he does say, like, "modeling out AI job loss is a tricky business. In an extended review of AI job loss predictions back in 2019, I pointed out a small shift in methodology between estimates from the University of Oxford and from PwC had the effect of completely changing the impact of AI, causing job loss estimates to swing from 47% to 9%." But then, you know, so I'm like, okay, I'll give you that. And I mentioned as we were looking at the other paper, like, estimating these things is very hard. But then he says, "And much like those earlier estimates, Sanders's model says little about the cost-benefit tradeoffs that firms face when adopting new tech or how automation will interact with task reallocation to determine net employment effects." So this is basically like, well, firms have to deal with all this and what happens when they're shifting internally. But it's also, it plays to that kind of literature and this thing we've seen of like, firms aren't really seeing benefits to quote unquote "AI" in this case.

Sophie Song: You know, all these economic debates, it's never been my forte. But at the same time, it's like, we know what real humans are experiencing right now. When we have all these conversations about how, oh, it's hard to tell, and it's hard to predict what's going to happen, the methodology for assessing all this- That's all true. It's hard to know. But there's things we do know, right? There's things we do know about what's happening, and we do know how people are being negatively impacted in their current jobs by these technologies. We do know how firms are manipulating this moment and this hype to better their bottom line at the extreme cost of working people. We do know that the direction we're going doesn't benefit an everyday, ordinary human, and we know that it's a lot of bullshit. And I wish that instead of getting caught up in the conversation of hypotheticals, we could be talking, when we talk about AI and the future of work, we could be talking about right now and what we can do for the future of work.

Emily M. Bender: Yeah, yeah. And coming back to those policies around, you know, more worker say and more worker control in what's happening. And, you know, to my taste, stepping away from AI as the thing that we're looking at, and talking much more about, okay, what's being automated? And this is a little bit belated, but I was annoyed with the HELP report differentiating artificial intelligence from automation. Because I think if we talk about all of this stuff as automation, things become much, much clearer. And we talk specifically about what we're automating. And then we don't run into the definitional problems that Rinehart here was complaining about in legislation around this stuff, too.

Alex Hanna: Great. Well, let's move into AI hell. 

Emily M. Bender: Yeah. So I got one for you, Alex, that you can make as musical or unmusical as you like. Which is, imagine we've got the demons of AI Hell on strike. They are picketing, and you get to do the chants. 

Alex Hanna: Okay. 

Emily M. Bender: And if you want us to call and response, I'm here. I don't know, Sophie, I won't volunteer you, but you can join if you want. 

Alex Hanna: All right. Lemme think. All right. All right, boys. You ready? This is what demonology looks like! 

Emily M. Bender: Tell us what demonology looks like! 

Alex Hanna: This is what demonology looks like! All right. I'm gonna say, what do you want? You say slop. And then, when do we want it? Now! 

Emily M. Bender: All the time! 

Alex Hanna: All right. Yeah. All right. Ready? What do we want? 

Emily M. Bender: Slop! 

Sophie Song: Slop! 

Alex Hanna: When do we want it? 

Sophie Song: Now! 

Emily M. Bender: All the time!

Alex Hanna: No, it's now! All right. All right. Very good. We'll practice those before, yeah. If you want slop, say so in the chat. All right. So the first one is from me, actually. And what it is, it's a quote from The Information's AI agenda newsletter, which doesn't have a public place where it posts, but it's a really good place. The Information, by the way, is like an industry source, but I actually love it as a source for AI Hell. It's actually very, very incredible. So, I say here, "It seems like the only way tech companies are able to compel AI usage is by coercion in performance review processes? Via the Information AI agenda newsletter." And the quote, it says, "AI adoption is easier said than done. Even as executives pressure workers to use AI, getting people to do that throughout an organization is easier said than done. Rukmini Reddy, an engineering executive at incident management software maker PagerDuty, does so by making AI usage a part of her employees' annual performance reviews. This strategy seems to be working, as she said that 98% of her engineers use coding tools like Anthropic's Claude Code or Microsoft's GitHub Copilot on a day-to-day basis now." So how about that conversion, or, not conversion, coercion.

Emily M. Bender: Yeah. I mean, this is, what is the thing? If you make the, if you mistake the metric for the goal- I can't get the thing right, but basically it's, yeah. 

Alex Hanna: Goodhart's Law.

Emily M. Bender: Goodhart's Law. Yeah.

Alex Hanna: Yeah. And the kicker of this is, she ended up responding to me. I think, I'm trying to find, yeah. She ended up responding to me here. 

Emily M. Bender: I think she, didn't you say she created her account to respond to you? 

Alex Hanna: She created the account to respond to me, and so she said, "Hi there. First Bluesky post here. I thought I would chime in on this. As an engineer for the past 20 plus years, I've always found joy in learning new things and finding myself throughout building and pushing the bounds of technology alongside my team." And it goes on and on, and she just gets dogpiled by actually, mostly people who are engineers. So this person Goeks is a friend of mine, and is like, I actually hate this. And many people are responding to her like, I'm never using PagerDuty again. So, yeah. All right. Let's move on. But that one's a real tickle. All right, I'm gonna give this to you, Emily. 

Emily M. Bender: All right. So this is from Dare Obasanjo, on November 1st, 2025 on Bluesky. Quoting, or linking to something from blog.arxiv.org, where the blog title is, "Attention Authors: Updated Practice for Review Articles and Position Papers in arXiv CS Category." And the post content is, "arXiv-" the Bluesky post- "arXiv will no longer accept review articles and position papers unless they have been accepted at a journal or a conference and complete successful peer review. This is due to being overwhelmed by hundreds of AI-generated papers a month. Yet another open submission process killed by LLMs." And I mean, this is, I think, a little bit ironic. I have a longstanding beef with arXiv, especially CS arXiv, as sort of a way around peer review. And now they're saying, well, for these two categories, so surveys and position papers, which are the kinds of things where there's not flag planting going on, you can wait till it's gone through peer review somewhere else before you put it here, because we can't deal with all the garbage that people are trying to throw up on arXiv. And you know there's no less garbage in the other categories.

Alex Hanna: Yeah. Gosh. I mean, I'm very supportive of open reviews and preprints, but I mean, it's just become, it's like, this is why we can't have nice things. 

Emily M. Bender: And I have no problem with preprints or, you know, getting around paywalls. But that's not how arXiv is being used in CS. And I'll make sure that my blog post about that's in the show notes. 

Alex Hanna: Yes. All right. So queuing up the next two. So this is in Fortune, and the title is "Godfather Of AI Says Tech Giants Can't Profit From Their Astronomical Investments Unless Human Labor Is Replaced." And the journalist is Jason Ma, November 1st. So Hinton is saying the, you know, the quiet part out loud: we're replacing wages, and we're just trying to automate a bunch of labor. In an interview with Bloomberg TV's Wall Street Week on Friday, he said, "The obvious way to make money off AI investments, aside from charging fees to use chatbots, is to replace workers with something cheaper."

Emily M. Bender: Yeah. And you're not gonna make money off this unless you can solve the problem of wages. 'Cause that's the only thing it's really supposed to be for. 

Alex Hanna: Yeah. All right, next. 

Emily M. Bender: Okay. Oh, so Ars Technica, journalist is Stephen Clark. Date is November 5th, 2025. And the headline, oh, the sticker is a picture of a planet with a ring around it, and then Big Data, which is kind of funny. And then the headline is, "If you want to satiate AI's hunger for power, Google suggests going to space." So, you know, this is data centers in space. "Google engineers think they already have all the pieces needed to build a data center in orbit." And I've got this filed in our big old list of links, by the way, under environment. Because this is like, okay, we're gonna try to deal with the impossible power needs of these things by putting it in orbit somehow, and never mind the environmental impact of doing that, and never mind the lag. And anyway, it's ridiculous. Just ridiculous. 

Alex Hanna: Yeah, it's a very big hype type thing. And they mentioned, this is like under Google X, their like, moonshot thing, you know. And so like, Waymo's a moonshot, but they also had Project Loon, which involved, like, providing internet via weather balloons. So just one of those incredibly ridiculous proposals. And I'm also thinking about all the ways in which Starlink satellites have been blocking out, like, astronomy by putting all those streaks across space from light exposure.

Emily M. Bender: Plus the debris in orbit when these things start breaking down. Like it's, yeah. 

Alex Hanna: Yeah. Well, imagine how quickly they're, like, how are you gonna do maintenance on any of this? You can't. Basically you're going to launch, and then they're going to be deprecated, and then you're just gonna be launching other things. So. 

Emily M. Bender: Well, but the robots- Go ahead, Sophie.

Alex Hanna: But the robots, but the space robots! Yeah, Sophie. 

Sophie Song: Well, yeah. I'm just like, are you gonna have space SREs? Like, what? 

Alex Hanna: Space SREs! Imagine. 

Emily M. Bender: SREs in space! 

Alex Hanna: Name your future occupation. I'm a space- I don't know what to call it. I don't even have a, space engineer. 

Sophie Song: You could be a space garbage person. We're gonna need a lot of those. 

Alex Hanna: We really, yeah. We really would. Oh, this is great. 

Emily M. Bender: And we also need space NIMBYs. 

Alex Hanna: "All these space NIMBYs, telling us they don't want our space data centers." Abstract_tesseract, just on fire today. Really good comments. All right. Let's get on. All right, there's three more. This next one is from Financial Times, and again, we have Hinton in the picture. So, "AI pioneers claim human-level general intelligence is already here. Tech leaders say systems now rival human intelligence in key tasks, further fueling the superintelligence debate." And then there's a picture of Hinton, Bengio, Yann LeCun, Jensen Huang, Fei-Fei Li, and then Bill Dally, who I don't know, the last one. And this is by Christina Criddle, Madhumita Murgia, and Melissa Heikkila- sorry, I mangled the surname. Published November 6th, 2025. And so, "Artificial intelligence is already superior to humans in many tasks, according to the quote, 'godparents' of the revolutionary technology-" thanks for Fei-Fei, for making it gender inclusive- "fueling an industry debate on how quickly tech giants will create superintelligence." Ugh. Okay. 

Emily M. Bender: Yeah, and I've just, I'm just amused that they've been promising it's around the corner for so long that now they feel like they have to say it's here. 

Alex Hanna: Yeah. 

Emily M. Bender: And like, how long is that gonna last? It feels like a sort of daring, where they just don't care what people think or they don't believe anybody could possibly disbelieve them at this point. Such a ridiculous claim. 

Alex Hanna: Yeah, for sure.

Sophie Song: I just feel like, you know, saying these things must get applause in the room, right? 

Emily M. Bender: Yeah. 

Alex Hanna: It's kind of a thing where it's, okay, we don't know what AGI is, so let's just scope it downwards and just say it's here. And then that's kind of the move now. That's been the move for a little bit. 

Emily M. Bender: Yeah. Applause in the room, and also, you know, extra digits on the stock price. 

Alex Hanna: Yeah. All right, so let's hit the last two. The next one is, Emily, I think you shared this one. This is, yeah. 

Emily M. Bender: So this is someone named Ed Kohler on Bluesky posting about, I guess, Minneapolis elections. So the text of the post is, "Brenda Short has posted a rant on Nextdoor claiming that ranked choice voting is unfair because candidates who were mathematically eliminated in the first round were eliminated mathematically." And then, "It includes an AI-generated image of supposed supporters of her cause." And so there's this relatively small crowd of people, very pale, largely, many, you know, bundled up because Minneapolis, holding signs where the text is misspelled. So demi-kracy- somehow the bad text is just never not funny.

Alex Hanna: Yeah. What is this? There's a few good ones. Tag yourself. There's also one that says something like, "something ho now," and then there's- 

Sophie Song: "Ho now." 

Alex Hanna: Yeah. Then there's like "ninja folis, ninjesfolis," and then "fair blob." And then yeah, there's, it's just, and then there's one that's- 

Emily M. Bender: Democracy foe elections. F-o-e. 

Alex Hanna: There's one, this is great, it says, "Vote local. Make change. Elections." I'm pretty sure that's how elections work, right?

Emily M. Bender: Especially if you have ranked choice voting.

Alex Hanna: And the last one, I'm actually gonna give this one to Sophie, because Sophie helped make this. 

Sophie Song: Well, I tried. So yeah, we have the AI Implementation Bingo board, which is a very fun game that you and your coworkers and friends can play to identify all the AI talking points and AI hype that your boss is telling you. So we have a lot of different versions you can play. If you go to WorkersDecide.Tech, there's a nice, lovely printout. But my favorite version of the rule set is what I like to call the LinkedIn version, which you can sort of do competitively with your friends. You scroll through your LinkedIn feed while your friends scroll through theirs, and every time you find a post that is one of our bingo pieces, you mark it off, and the first person to get a whole bingo wins.

Alex Hanna: Some of these are great. I mean- 

Emily M. Bender: They're so good. 

Alex Hanna: Some of the things are, "AI increases efficiency and productivity" and, "This is not de-skilling, it's upskilling." And some of my favorites are, "Are you some sort of Luddite?" and then, "You're just not using it right."

Emily M. Bender: I also liked, "There are ways to use AI ethically." Appreciate the inclusion of that there. 

Sophie Song: But shoutout to the AI Chaos Prevention Committee, which is a sort of loose collection of folks from all types of wonderful places, like DAIR, but also the Tech Workers' Coalition, Amazon Employees for Climate Justice, Collective Action in Tech, and several others. So I'm sorry if I didn't call you out, but yeah, keep an eye on this website. They're putting out really fun work like this. 

Emily M. Bender: Absolutely excellent work. Thank you for this. 

Alex Hanna: All right. Very great. 

Emily M. Bender: That is it for this week. Sophie Song is a researcher, organizer, and advocate working at the intersection of tech and social justice. Thank you again for joining us, Sophie! 

Sophie Song: Thanks so much for having me! This was so much fun. 

Alex Hanna: Great to have you, Sophie! Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order 'The AI Con' at thecon.ai or wherever you get your books, or request it at your local library.

Emily M. Bender: But wait, there's more! Rate and review us on your podcast app, subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at dair-institute.org. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender. 

Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, y'all. 

Emily M. Bender: What do we want? 

Alex Hanna: Slop! 

Emily M. Bender: When do we want it? 

Sophie Song: Now! 

Alex Hanna: Sometime soon!