Mystery AI Hype Theater 3000

Et Tu, American Federation of Teachers? (with Charles Logan), 2025.07.14

Emily M. Bender and Alex Hanna Episode 59

The chatbot boosters are looking for educators to play brand ambassador for more intrusion of so-called "AI" into the classroom. From the American Federation of Teachers' new partnership with OpenAI and Microsoft for a "national academy for AI instruction" to yet more articles extolling the alleged time-saving and future-proofing virtues of LLM-powered ed tech, the hype can feel relentless. Charles Logan joins Alex and Emily for a critical look at the latest propaganda for "AI" in the classroom.

Charles Logan is a former English teacher and current PhD candidate in Learning Sciences at Northwestern University.

References:

Welcome to Campus. Here’s Your ChatGPT.

AI isn’t replacing student writing – but it is reshaping it

AFT to Launch National Academy for AI Instruction with Microsoft, OpenAI, Anthropic and United Federation of Teachers 

Also referenced:

Tressie McMillan Cottom on "predatory inclusion"

Daniel Greene's "The Access Doctrine"

The Group Chats that Changed America

Fresh AI Hell:

Missouri AG investigating why chatbots don’t like Trump

Gig workers calling ICE on other undocumented gig workers

Tech billionaire Trump adviser Marc Andreessen says universities will ‘pay the price’ for DEI

USF makes a PTSD detector...trained on children

People falling in love with Replika chatbots

Elon Musk thirsting over xAI anime construct


Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna:

Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender:

Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, Professor of Linguistics at the University of Washington.

Alex Hanna:

And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 59, which we're recording on July 14th, 2025. You may have noticed we've been on a bit of a break from yelling about AI hype. We had to hype our book and do some travel in star-crossed time zones. Honestly, it was kind of nice. Unfortunately this week we got another reminder that the billionaires and boosters are determined to get generative AI into every profitable nook of society, including the classroom.

Emily M. Bender:

It's not only the tech peddlers like OpenAI and Microsoft who want so-called AI in the classroom. It's also, unfortunately, some educators and their unions who are trying--who are buying into the hype. The American Federation of Teachers, OpenAI and Microsoft are apparently partnering up to market AI to teachers under the guise of free professional development and training. The AFT's press release includes almost every trope from the AI hype bingo card, with overwrought appeals to "the technology of the future" and how synthetic text extruders will help overwhelmed teachers, for example. But don't worry, somehow this will also involve meaningful ethics education for teachers and sustainability, whatever those words mean to the Microsofts of the world.

Alex Hanna:

As regular listeners to this podcast might remember, wealthy people with no educational experience have a long history of, quote unquote, "generously" donating money to fill classrooms with technology that surveils students and paves the way for firing the actual experts. And I will say, as a member of the AFT myself, a longtime member I should say, I, I know they have people on staff who know better than to buy this bullshit. So it's disappointing to see this kind of propaganda coming from them. But before we get into it, we have a guest today. Charles Logan is a former English teacher and now a PhD candidate in learning sciences at Northwestern University. Charles has been one of the stalwarts from within education who's been warning against AI in the classroom for a very long time. Hey, Charles.

Charles Logan:

Hey everybody. Um, it's such a pleasure to be here. I'm really excited, uh, and a little bit horrified to, to jump into today's artifacts.

Emily M. Bender:

They are horrifying. We're really glad to be looking at them with you. Um, let me grab the first one. Um, we are starting with a piece in the Gray Lady, the New York Times, uh, headline is "Welcome to Campus: Here's Your ChatGPT," with the subtitle, "OpenAI, the firm that helped spark chatbot cheating, wants to embed AI in every facet of college. First up, 460,000 students at Cal State." This is by Natasha Singer, and it's from June 7th, 2025. And it is horrible.

Alex Hanna:

Yeah, this one's pretty bad. Um, I was reading this and I was like, this looks familiar. It's because, uh, Emily and I were working on, uh, pitching an op-ed around it. But it's just so, so bad, and so, it is dripping with hype. You know, the New York Times and their journalists have not shied away from the hype at all. Um, it starts off with, "OpenAI, the maker of ChatGPT, has a plan to overhaul college education by embedding its artificial intelligence tools in every facet of campus life. If the company's strategy succeeds, universities would give students AI assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized AI study bots for each class. Career services would offer recruiter chat bots for students to practice job interviews, and undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI's sales pitch, quote quote, 'AI-native universities.'" Yeah, sorry, I wanted to get this AI-native universities thing right. So, "OpenAI dubs its sales pitch quote unquote 'AI-native universities.'" So yeah, there's a lot here. So, let's go into it.

Emily M. Bender:

Yeah. How about it, Charles?

Charles Logan:

I mean, one thing that's striking is just how total their vision of capture of, of higher ed is. Uh, you know, OpenAI is not, uh, content to just target the classroom; it's really all facets of life, both for faculty and staff and students, and really to turn higher ed into another asset, um, to charge rent, right, for access to that platform. And, you know, it should be, I think, uh, very clear that, uh, you know, professors who are making chat bots and, you know, staff who are making chat bots should consider how those chat bots may or will replace your work, and think about what solidarity needs to happen across institutions. And not just, you know, between faculty--between contingent faculty and tenured faculty, between staff, between students, um, both undergraduate and graduate students. Um, so to me there's just a lot happening in that paragraph.

Emily M. Bender:

Yeah, absolutely. What, what really jumped out to me about this paragraph was this wasn't a vision of how to do education better. This was a vision of how to extract the most possible money or rent, um, for OpenAI, right? Their vision is all these people are paying us for all these use cases. What other use cases can we think of, and not what's actually needed in education. Because they're not actually educators.

Alex Hanna:

Yeah. Yeah. Well, it's also one element of this and something that we talked about, Emily, were the, the kind of people who are sold as educators, so the people that they have in charge here. And so one of the people involved in this is, their vice president of education is Leah Belsky, who says, "Our vision is that over time AI would become part of the core infrastructure of higher education." Um, and this person has kind of a consultancy background, I think had been maybe involved initially in, I, I don't know if it's right--

Charles Logan:

Coursera.

Alex Hanna:

Coursera, yeah. Yeah, because I, I was gonna check her, her LinkedIn real quick, but yeah. She had been at Coursera, and, I mean, Coursera had been part of this prior trend, you know, the MOOC trend that administrators had been crying and, and, and bellowing about, saying this is gonna radically reinvent higher education. And yet, where are all the MOOCs? And, you know, MOOCs are also a Silicon Valley, um, innovation, right? I mean, Andrew Ng, who's now the AI booster that he is, you know, had been, you know, creating it just to teach his, you know, teach deep learning or whatever. And so, I mean, it's, it's all part of the same kind of movement within Silicon Valley to disrupt, innovate, use whatever horror-show term you want here.

Charles Logan:

Yeah. I think it's interesting too that, you know, Belsky also, and, and this is language that I think appears across the different artifacts for today, but that language around access and personalization. Um, and so that, like, right, the argument is often connected with arguments of equity, of let's, you know, make sure that our students have access to these tools, when there's so much really compelling research, you know, from Tressie McMillan Cottom around, you know, the, the notion of predatory inclusion, or, you know, Daniel Greene writes about, um, the access doctrine, um, that just, you know, granting access to these tools doesn't address, you know, underlying historical discrimination and all sorts of, um, oppression. And so there's this push to get these tools to as many students as possible, or as many faculty as possible, for this so-called, you know, personalized learning, um, and then keep them, right, sort of hooked, not just when they're in college, but, but beyond. And so I think we see Belsky, um, really using that rhetoric. Um, you know, which we've seen time and time again with, with different kinds of ed tech.

Emily M. Bender:

And I actually really object to the framing of these things as tools, because that suggests that they're actually good for something in this context. And I don't think they are. Right, this idea of setting students up to talk to the synthetic text extruding machine is anathema to the kind of critical thinking that we should be working on. So the fact that there's this idea that, oh, the disadvantaged students aren't gonna have access to this thing, it just puts such a weird frame on it that's really unfortunate.

Alex Hanna:

Yeah. There's a great, um, thing in the chat from JMak518, who says, "Core infrastructure of teaching equals equals funding for quality instruction and resources needed for that equals equals complicated." So yeah, like it's complicated to provide those things. It's not that one silver bullet that's somehow gonna solve this long-term disinvestment of education in, in areas and populations that need it. Um, and so the conversation gets turned to this, you know, this quote unquote "personalized learning" and all these different tools, which certainly can't provide any of that.

Emily M. Bender:

We've got Abstract Tesseract replying with, "Exactly. No app for that, alas." Yeah. Yeah. So this next middle bit here is basically just talking about how some of the schools are adopting it, all of the big players are trying to push it into schools and getting people addicted. But I wanna scroll down, um, to, um, uh, where it gets into the details again, I think. Um, so, "OpenAI's service for universities, ChatGPT Edu, offers more features, including certain privacy protections, than the company's free chat bot. ChatGPT Edu also enables faculty and staff to create custom chat bots for university use." Um, skipping a bit. "OpenAI's push to 'AI-ify' college education amounts to a national experiment on millions of students. The use of these chatbots in schools is so new that their potential long-term educational benefits and possible side effects are not yet established." What's interesting about this article is that I feel like the journalist is kind of almost there, and they're stopping with, like, just sort of suggesting that maybe this is an issue.

Alex Hanna:

Yeah, they could be. I mean, I think there's a certain kind of legitimization happening in the journalism here. And, I mean, they do, you know, cite, you know, provisionally in the next graph, where they say, um, "A few early studies have found that outsourcing tasks like research and writing to chat bots can diminish skills like critical thinking." And they link here to, um, this Microsoft study that had been done earlier this year. Uh, they don't link to the, um, the MIT study that came out, I think, last month, where, uh, they were doing some brain activation stuff. Which--I mean, both of these, I think, have not been peer reviewed. So--

Emily M. Bender:

Yeah. But I think the MIT thing actually came out after this article. This was June 7th.

Alex Hanna:

Yeah. Yeah. Yeah. And then in the next, in the next sentence, "And some critics argue that colleges going all in on chatbots are glossing over issues like societal risks, AI labor exploitation, and environmental costs." Here, they're like--

Emily M. Bender:

So "some critics argue that"--

Alex Hanna:

I know it's--

Emily M. Bender:

We said the thing, we can go on.

Alex Hanna:

Yeah. Yeah. There's a, and there's a--

Charles Logan:

I would like to go back for a second, too, to that notion that you have to pay more for more privacy. I mean, again, thinking about, um, you know, we know that these companies are not profitable and that, you know, uh, they're coming out with these models where we have to pay more and more. And so you can think about the day where, you know, again, um, all of the, the marketing to students gets them hooked, has them using, again, under the name of access and equity. And all of a sudden now, well, if you want your, you know, chats not to be included in the training data--you know, if we believe them in the first place--now you have to pay extra for that. And so I think it is this, um, problematic business model, to say the least. Uh.

Emily M. Bender:

Indeed.

Alex Hanna:

Yeah.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah, for sure. And it's also putting administrators in this bind, because it's such a weird kind of framing. Like, it makes it an equity issue. There's some mention earlier in the article where Duke generated its own thing called DukeGPT, which, whatever, I don't know what the hell that means. But it's effectively this trade-off that I've heard from some administrators, which is, well, our students are gonna use ChatGPT anyway, so what if we just buy this site license and then try to ensure that they have privacy? Which I'm like, what? First off, you're not really aiding them; it's a waste of money. Are you actually ensuring that they're gonna have some privacy when it's still a remotely hosted platform? So it's a really weird, you know, trade-off that they're posing. They're bringing kind of an enterprise software type of vision here, saying, well, if you don't provide, you know, a site license to productivity tools like Google Docs or whatever, students are gonna come with their own Google Docs, and if we have a site license, we can sort of guarantee some privacy. But I'm just like, it's still a cloud service. You're still having to trust the terms of use that the company says it's going to actually honor. So, so, yeah. So it's not actually guaranteeing anything.

Emily M. Bender:

Yeah, it's not guaranteeing that. And privacy isn't the only problem.

Alex Hanna:

Yeah. Right.

Emily M. Bender:

So chat bots are not a good information access system. There's the environmental issues; there's the fact that we're gonna be getting, you know, all the racist nonsense coming out of them. And somewhere in here, I think, there's someone who said, yeah, I trained it on all my stuff, so now it gives only accurate answers. Not how it works. But this part with Cal State was really, really pissing me off. So Cal State buys--

Alex Hanna:

This set me off. Yeah, go ahead.

Emily M. Bender:

So, um, in the context of Cal State doing this, the graph here says, "Some universities say they're embracing the new AI tools in part because they want their schools to help guide and develop guardrails for the technologies. Quote, 'You're worried about the ecological concerns, you're worried about misinformation and bias,' end quote, Edmund Clark, the chief information officer of Cal State, said at a recent education conference in San Diego. 'Well, join in. Help us shape the future.'" Like, no, we should be able to say, no, we don't need this in the future.

Alex Hanna:

This really set, I mean, this really set me off too, because, you know, I've talked to a bunch of people. You know, I live in the Bay Area, I've talked to folks at Cal State and talked to instructors in Cal State. And Emily and I actually also had the, the, um, the lovely, um, opportunity to speak at, um, at San Jose State on our, on our book tour, which is part of the Cal State system. And when they announced this, it's a huge contract, you know, $16.9 million, just like huge licensing over, you know, however many years. And the California Faculty Association was like, wait a minute, what the hell are you doing? I thought we were running at a, you know, at a multi-hundred-million-dollar deficit. You know, we're not getting these funds promised in the governor's budget, and now you're wasting however many million on this thing that no one is asking for and so many faculty oppose. And so, you know, what a, what a disingenuous thing to say from the CIO at Cal State. Are you mad about it? Well, get on board. Like, justify the $17 million contract that we've already invested in. Like, no, we don't want you to spend that money. We want other programs to provide more equity in critical programs that the Cal State system is already running.

Charles Logan:

Yeah. It also just feels like overestimating the ability to shape that future. I mean, right. "Join in, help us shape that future." Well, you know, at the end of the day, OpenAI is gonna do what's best for OpenAI, and, you know, or Microsoft or Google. Um, and so I think, you know, as educators, we have limited time and resources. And so the idea that we should be helping these companies, you know, further their hooks into our beloved institutions, um, is insulting, right? Rather than finding, you know, other forms of resistance and refusal, um, that we can use to sort of build the, the futures that we do want and do need. Um, and so the thought that, as we were saying, Alex, you know, it's either join us or, you know, get the hell out of the way, uh, just feels like, uh, yeah, an insult.

Alex Hanna:

Yeah. And SJayLett in the chat says, quote, "Shape the future, but first you have to accept our framing of the future." Yeah.

Emily M. Bender:

Spot on. Then we've got Gimulnautti saying, "The master's tools dot dot dot." Which is, yeah.

Alex Hanna:

Hundred percent.

Emily M. Bender:

And Steelcase adds, "Also paying to help these companies create the future." Which is, yeah. Insult to injury.

Charles Logan:

It's an investment. It's an investment in our future to create this.

Alex Hanna:

Lord.

Emily M. Bender:

Yeah. Okay. So we got Ms. Belsky introduced as, uh, previously at Coursera. Um, so "She is pursuing a two-pronged strategy: marketing OpenAI's premium services to universities for a fee, while advertising free ChatGPT directly to students. OpenAI also convened a panel of college students recently to help them get their peers to start using the tech." How gross is that? Um.

Charles Logan:

It's, it's just astroturfing, right? Yeah. I mean, we see, like, in K-12, like, that's how Google, you know, there were top-down attempts to get Google Classroom as, as sort of the, the, uh, learning management system of choice in K-12. Um, and it's the same playbook, right? You've got these, um, almost like influencer programs, um, both for educators and for students. Um, and it, it's just this, like, yeah, astroturfing to, to say, oh look, you know, all of our students want this thing. Um, meanwhile, right, they're, they're sort of being funded by these tech companies.

Alex Hanna:

They're doing a lot of that. And in different fields too. It's not just in education, right? We're seeing that also in, like, creative writing. I was talking with a few people at this event I was at, and they were saying that at the large writing conference, AWP, there was just like OpenA--an OpenAI person just, like, hanging about, just trying to get people to be, like, brand ambassadors. So yeah, I mean.

Charles Logan:

That Steve Buscemi meme, like walking around with the skateboard.

Alex Hanna:

Hello fellow writers.

Emily M. Bender:

Yeah, I love it. I love it. Um, so the, the quote from the student here is also really bothering me. Like, I don't want to dump on what a student is saying, but I really hate how this is portrayed. So, "Among the students are power users like Delphine Tai-Beauchamp--" I don't know, sorry, I was just in France. "--a computer science major at the University of California, Irvine. She has used the chat bot to explain complicated course concepts, as well as help explain coding errors and make charts diagramming the connections between ideas." Then we have a quote from her: "'I wouldn't recommend students use AI to avoid the hard parts of learning,' Ms. Tai-Beauchamp said. She did recommend students try AI as a study aid. 'Ask it to explain something five different ways.'" What, how does she think learning works? You know?

Alex Hanna:

Yeah. And I mean, I don't, I don't want to. Yeah. I mean, and it, it's helpful to note that this is a person that, uh, was on a panel that OpenAI convened. Um.

Emily M. Bender:

Yeah.

Alex Hanna:

So, I mean, this is, yeah. So it's, yeah, this, uh, this whole article, you know, they thread the 'critics say' thing, but they also present this, uh, very flattering picture of Ms. Belsky, wearing kind of a traditional Silicon Valley uniform, with, like, a bl, a smart blazer on and, and jeans and, like, uh, you know, tennis shoes. And, um, so, you know, it reads, uh, a bit too much like a puff piece for my liking.

Emily M. Bender:

Exactly. Exactly. And they've got the, the student influencers and they talk about how OpenAI made so much out of what the student influencers had to say. And I don't like it.

Alex Hanna:

Yeah. Shall we move on? Or do you wanna include one more?

Emily M. Bender:

Is there, um, I think there's one more thing that was bothering me here, um, this one here, Jared DeForest. "The chair of Environmental and Plant Biology at Ohio University--" Not The Ohio State University. "--created his own tutoring bot, called Soil Sage, which can answer students' questions based on his published research papers and science knowledge. Limiting the chat bot to trusted information sources has improved its accuracy, he said." That's not how it works, right? You've got the large language model, it's got all of the probabilities from all of this training data in it. And yes, you can sort of push things towards the probabilities in the set that you're asking it to summarize, but no guarantee it's gonna stay there. And even if it did, you're still getting papier-mâché of that. And it is infuriating to me as a computational linguist that we have all of these faculty who don't know how the tech works claiming how useful it is. It's just, ugh.

Alex Hanna:

Any last thoughts?

Emily M. Bender:

Any other closing thoughts?

Alex Hanna:

Yeah. Any last thoughts on this?

Charles Logan:

There's one other thing that, that I wanted to point out: I think Belsky talks about funding, and how this initiative is gonna fund researchers. Um, and so if folks haven't read Meredith Whittaker's "The Steep Cost of Capture," it's a piece that I return to a lot in thinking about how the AI industry captures, uh, research about AI and kind of sets the agenda. And I think there's a similar phenomenon happening here, and I think it's particularly problematic when you have, you know, the NSF essentially, you know, sidelined for the near future at best, um, along with other major funders of this kind of, um, independent, critical work. Um, thank goodness for organizations like DAIR and others. Um, but right, if OpenAI is doing research on, on its own products and funding, you know, researchers to do that work, um, clear conflict of interest there. Um, and so I think that to me is another piece that, that leapt out, and, and I was like, okay, this whole ecosystem, uh, is trash, and we should be really, uh, careful about how we're kind of pursuing this work.

Emily M. Bender:

100%.

Alex Hanna:

Absolutely.

Emily M. Bender:

And on that, let's move over to a teacher who's not taking that attitude. Um, this is a piece in The Conversation, um, with the title, "AI isn't replacing student writing – but it is reshaping it." And it's written by Jeanne Beatrix Law, a professor of English at Kennesaw State University. And incidentally, the professor of English who was on the CBC super early in the morning show that I did a few weeks ago. Um, and it wasn't so early in the morning for her, because Kennesaw State's in Georgia. So it was like 8:00 AM for her and 5:00 AM for me. And boy, did she sound chipper about these ideas.

Alex Hanna:

Yeah, these are, yeah, these, so. Um, let's just start with the beginning. So, "I'm a writing professor who sees artificial intelligence as more of an opportunity for students rather than a threat." Already I hate it. "That sets me apart from some of my colleagues who fear that AI is accelerating a glut of superficial content, impeding critical thinking, and hindering creative expression. They worry that students are simply using it out of sheer laziness or worse, to cheat. Perhaps that's why so many students are afraid to admit that they use ChatGPT." Um, and then, let's see, I'm gonna skip a little bit. "Students seem to have internalized the belief that using AI for their coursework is somehow wrong. Yet whether my colleagues like it or not, most college students are using it." And then they cite, um, a report. And then, uh, I'm just gonna get to the, the paragraph before she gets into it: "It's clear that students aren't going to magically stop using AI. So I think it's important to point out some ways in which AI can actually be a useful tool that enhances rather than hampers the writing process." Ugh. All right, so let's pause there. Any thoughts before we get into the rest of this? Charles?

Charles Logan:

Uh, one is just the framing of opposition as being afraid and, and worried. I thought that, um, it was dismissive. Um, yes, I'm afraid; I'm also angry, and I'm critical and frustrated and exhausted. And then to blame, it sort of seemed to me, to blame, you know, faculty who are afraid, as if that's somehow why students are afraid to come forward. And so I thought that framing, uh, yeah, I, I bristled at that. Um, as if fear were the motive, the primary motive for opposing these tools, and not, you know, a whole host of, uh, other reasons. Um, fear, at least, didn't resonate with me, and maybe that's just my personal, uh, beef with this framing.

Emily M. Bender:

No, I, so at my book event in Sydney, I had a very strange question from a member of the audience, who started by saying, 'It sounds like you're afraid of AI,' and he wanted to let me know that AI can also be used for good. And I answered it exactly like that: I'm not afraid, I'm angry. And it felt important to say that, because this fear framing really is dismissive, and sort of sets up the people who are opposed to this as coming from a position of ignorance and weakness, as opposed to really expertise in what it is that we do, and anger at being encroached upon.

Alex Hanna:

I think also there's, you know, the framing of the faculty, or colleagues, this person's colleagues, in which she says, "They worry that students are simply using it out of sheer laziness or worse, to cheat." And both of those strike me as a very, um, what is it, like, individualistic type of response to what students have to face, right? Students have to deal with five different classes in which they're trying to optimize for a grade, right? And so it's sort of already setting up kind of an instructor who has to act like a cop. Um, which Abstract Tesseract points out in the chat, saying, "I feel like there's so much space for accountability between quote 'acting like cops' to quote, 'embracing the torment nexus.'" And so it's really odd that as an educator, she comes out and she's sort of saying, like, they're worried about this. I'm like, well, why don't you think about the kind of conditions in our institutions and what may drive students towards using tools like this, rather than thinking that students are either lazy or dishonest. There's other kinds of things that students are, too. Right?

Emily M. Bender:

Right. In addition, they're being subjected to enormous amounts of advertising. Yeah. Right. There's, there's this sort of presupposition of, well, students are gonna use it, which suggests that it really is just so useful they can't help themselves. Which is the advertising again.

Alex Hanna:

Yeah.

Emily M. Bender:

So the next--

Charles Logan:

Just pointing out, too, that that citation, I, you know, is meaningless. Uh, from that UK study, right, it said that, uh, 92% of university students are using AI in some form. Okay, well, uh, what kind of AI are they using? Um, it's such a catchall term. And also how frequently, and for what ends? I mean, so I think that's like a classic, you know, let's throw some statistics into my argument without really, you know, marshaling some, some sort of more convincing argument with that data. Um, so that, that to me was a red flag.

Emily M. Bender:

Yeah. And it is also required by this particular genre of op-ed in The Conversation, like you have to be reporting on research in some way. And you're right, it feels very shoved in.

Alex Hanna:

There's a lot of variability, too, in even reporting on how many students are using this, right, using LLMs. Because I've seen the statistic, I mean, this is where the defining of terms is very critical, because it's been anywhere from 20% to 90%. And okay, well, I would love to have some kind of more rigorous methodology. Like, what are you talking about here, and what are you suggesting? And, you know, like, what is understood to be AI? And, you know, defining of terms here is, I think, incredibly critical.

Emily M. Bender:

Yeah. So the, um, this next subhead really, really sent me. So it says, "Helping with the busy work." I'm like, okay, so what does this professor of writing think is busy work that she is asking students to do, or that other faculty are asking students to do? Um, and we have more stats. "So a February 2025 OpenAI report--" Super credible source. "--on ChatGPT use among college-age users found that more than one quarter of their ChatGPT conversations were education related. The report also revealed that the top five uses for students were writing centered: starting papers and projects, 49%; summarizing long texts, 48%; brainstorming creative projects, 45%; exploring new topics, 44%; and revising writing, 44%." What part of that is busy work?

Alex Hanna:

Also, also guess who, guess who is, uh, the named person on the report?

Emily M. Bender:

Oh, is it Belsky?

Alex Hanna:

It is Belsky.

Charles Logan:

Oh. All coming together.

Alex Hanna:

Yeah. Yeah.

Charles Logan:

Yeah. It's wild to me that that, that she identifies that as busy work.

Alex Hanna:

Yeah.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah, I mean, it's sort of like, I'm very, it really makes me question this person's pedagogy, that she's suggesting that summarizing long texts is part of the busy work?

Emily M. Bender:

And brainstorming creative projects?

Alex Hanna:

Yeah. Like, isn't reading a text and understanding interpretations of a text, like, so much of what you do in creative writing and English?

Charles Logan:

And writing as thinking. Right. I mean, to call that busy work, um, I think it just devalues, you know, what it means to write and to be a human.

Emily M. Bender:

But she actually says, so reading the next paragraph here, "These figures challenge the assumption that students are using AI to merely cheat or write entire papers." Okay, but that's not where you set this up. "Instead, it suggests that they're leveraging AI to free up more time to engage in deeper processes and metacognitive behaviors, deliberately organizing ideas, honing arguments, and refining style." Yes, those are important parts of what you're doing, but they're also included in what was listed above there, like revising writing. If you're asking the chat bot to revise the writing, where are you refining your style?

Charles Logan:

Yeah, yeah. Right. And to, to suggest that those other things, like revising, are not deeper processes just seems, uh, misguided.

Alex Hanna:

Yeah. Yeah. Sorry, I got deep in the sauce of just reading the, uh, the OpenAI report. It's, it's really bad. Uh, very few methodological details. It's 11 pages long. It's, it's, it's bad stuff.

Charles Logan:

Shocking. Shocking that that's the case.

Alex Hanna:

OpenAI being, being bad at research? Say it ain't so. Um, yeah. So then she goes into this "clarifying the creative vision" section, um, and there's some, she kind of, uh, is trumpeting both the work of a comedy writer and also her own method. And so she says, "It has become clear that AI, when used responsibly--" I, I never want to hear the word responsibly again. "--can augment human creativity. For example, science comedy writer Sarah Rose Siskind recently gave a talk to Harvard students about her creative process. She spoke about how she uses ChatGPT to brainstorm joke setups and explore various comedic scenarios, which allows her to focus on crafting punchlines and refining her comedic timing." Um, I mean, I, I haven't watched this, I haven't watched this comedian. It sounds deeply unfunny to me already. Um, but, you know--

Emily M. Bender:

Yeah.

Alex Hanna:

That's, that's, that's, that's subjective. Um.

Emily M. Bender:

Every time the argument takes the shape of, well, you don't have to do this anymore, so you can spend your time on that, it's, it's always suspect, right? And either it's, well, we're gonna make you do more of that because, you know, the, all of the so-called efficiency gains are gonna go to the boss instead of to you. Or it's the thing that was actually a valuable thing to do, you're now offloading and not doing it anymore. It's like one of those two things, or both.

Alex Hanna:

Yeah.

Charles Logan:

And also, I mean, I'm not a comedian, but, you know, I, I feel like so much of these crafts depend on the interpersonal. Like, you go out and you test your material in front of people, and if they laugh, you stick with that joke. Or, you know, if you're a writer, it lands with your audience in some way. And to suggest that, uh, you know, uh, this machine can somehow replace that. You know, talk about instant feedback that is probably actually useful: tell a joke and no one laughs. Like, there you go. But otherwise, it just seems like, again, sort of replacing the sort of vital, you know, interpersonal relationship that, that an artist has with an audience, you know, with, with ChatGPT.

Emily M. Bender:

So she's got this process called the rhetorical prompting method, and she makes a big deal about how it hits all the important parts of writing, except the writing. But I think maybe we should just go for closing thoughts on this one and jump over to the AFT press release, because I think we're gonna have a lot to say about that.

Alex Hanna:

Yeah, I think that's, that's fair. Um, the, the concluding paragraph is kind of ripe for ridicule. She says, "AI then is not just a tool that's useful for trivial tasks. It can be an asset for creativity. If today's students who are actively using AI to write, revise, and explore ideas see AI as a writing partner, I think it's a good idea for professors to start thinking about helping them learn the best ways to work with it." Ugh.

Emily M. Bender:

No thank you. No, no, no. Nope.

Alex Hanna:

And I mean, it's also sort of what gets, what gets, um, coded down to a trivial task, right?

Emily M. Bender:

Right, exactly. And it's also this thing of, like, it's not just for the trivial tasks, although it is for the busy work. It's somehow an asset for creativity too. SJayLett in the chat: "If today's students see a bag of cocaine as a writing partner, I think it's a good idea."

Alex Hanna:

Very, very, very hot opinion straight out of the 80s.

Emily M. Bender:

Yeah. Oof. Okay, so should we go see what AFT has to say?

Alex Hanna:

Yeah.

Emily M. Bender:

You ready?

Alex Hanna:

Unfortunately.

Charles Logan:

Let's do it.

Alex Hanna:

So yeah, so, "AFT to launch National Academy for AI Instruction with Microsoft, OpenAI, Anthropic, and United Federation of Teachers." This is a press release that came out last week, I think. It has a contact for the AFT and all the companies and the UFT. Um, so the--

Emily M. Bender:

And so July 8th, 2025 is the date here.

Alex Hanna:

Yeah. So, "The AFT, alongside the United Federation of Teachers--" So just for those of you who are not within the labor movement, that's the local that represents teachers in New York City."--and lead partner, Microsoft Corporation, founding partner OpenAI and Anthropic announced the launch of, of the National Academy for AI Instruction today. The groundbreaking $23 million education initiative will provide access to free AI training and curriculum for all 1.8 million members of the AFT, starting with K through 12 educators. It will be based at a state-of-the-art bricks and mortar--" Is it brick and mortar or bricks and mortar?"--Manhattan facility designed to transform how artificial intelligence is taught and integrated into classrooms across the United States. Ahhhh.

Emily M. Bender:

Exactly, exactly. We need to--

Charles Logan:

I have no more hair to pull out. Otherwise, uh, yeah.

Alex Hanna:

Just, it's, I'm just, I want to, like, just punch through my monitor here, and I'm so annoyed at this positioning from the AFT. Um, and there've been other similar positionings with organized labor on AI. Um, you know, the AFL-CIO, uh, partnered with Microsoft to, um, start construction on a data center in, um, Mount Pleasant, Wisconsin. Um, and that has come with huge opposition from environmental groups, since new methane gas, um, uh, plants are being built to support that construction. Um, and it's just this unholy alliance that many labor organizations have taken with Microsoft, and I'm just like, for why? Look at your, you know, your, your union siblings in California to see how much they're opposing, you know, OpenAI and this partnership with, with, with Cal State. And where's this money coming from? Is this membership dues? Is this, you know, is this the--

Emily M. Bender:

Oh, that didn't even occur to me.

Alex Hanna:

You know, like, if this is membership dues, I'm very pissed as an AFT member. Um, I would like my dues back, thank you. Um, or is it this kind of partnership where, you know, this money is coming from Microsoft, which is, you know, a pittance for Microsoft, and you sold all your credibility for why?

Charles Logan:

Yeah. It raises that question of, right, "free" AI training. Right. It, it's not free, right? No, teachers are, you know, sharing their labor, uh, that's going to be used to ethics-wash these technologies into schools. Um, and so yeah, it, it feels like a, a real betrayal.

Alex Hanna:

Yeah.

Emily M. Bender:

Absolutely. And, and aren't unions supposed to be democratic organizations? Like was there anything--

Alex Hanna:

You've never, you haven't met the UFT, but yes. Um, and so there's a pull quote here from Randi Weingarten, who is the president of the AFT, in which she says, "AI holds--" It's, uh, it starts a little down there. It's at the bottom of the page.

Emily M. Bender:

Okay. I want to do the Brad Smith one too. We can do this first.

Alex Hanna:

Yeah. So, "'AI holds tremendous promise, but huge challenges. And it's our job as educators to make sure AI serves our students and society, not the other way around,' said AFT President Randi Weingarten.'The direct connection between a teacher and their kids can never be replaced by new technologies. But if we learn how to harness it, set common sense guardrails, and put teachers in the driver's seat, teaching and learning can be enhanced.'" And I'm like, uggggggh.

Emily M. Bender:

So, so first of all, "tremendous promise, but huge challenges." No. Like, the promise is all fake and the challenges are real. They're not hypothetical; the promise is hypothetical. But also, "it's our job as educators to make sure AI serves our students and society, not the other way around." Okay, yeah, we don't want the other way around, but there's other options here. There's 'not do it.'

Alex Hanna:

Right. Right. And I'm, I'm just really thinking about what the other alternatives there have been in terms of labor organizations. We've, you know, really been seeing leadership, you know, from the WGA and National Nurses United, um, and other organizations which have been really out front. There's a term that Blair Attard-Frost uses, counter-governance, which I actually really, really love and have found myself using so much lately, because it makes complete sense to me, uh, on the face of it. It's very hard to change these organizations, or to challenge these capitalistic organizations, as individuals, but unions, as stalwarts of workplace democracy, can serve as these very strong counter-governance organizations. And what's being presented here, providing guardrails and accepting the premise, is giving away so much of that counter-governance power. And that's, I think, what infuriates me so much about this move from the AFT.

Charles Logan:

Yeah. And that it's couched, right, in terms of agency for the teacher, right? That we're in the driver's seat now. Um, when, like, the car is built, the roads are built, right, the entire infrastructure is built by these private companies.

Emily M. Bender:

And, and it's like the union saying, yes, let's do more cars instead of, hey, what about something collective like public transit? Right? So similarly, the Brad Smith thing here, "To best serve students, we must ensure that teachers have a strong voice in the development and use of AI," just presupposes that AI has to be developed and it has to be used and just completely removes the possibility or, or, or hides the possibility of refusal. And--

Charles Logan:

I, I was imagining that Helen Lovejoy, you know, 'won't someone think of the children,' you know, "to best serve students." Again, sort of marshaling the, the children. We need to do this for the children.

Alex Hanna:

Yeah. Yeah. And so there's a few things in here I really wanna, I mean, the, the hype always, you know, comes on very, what is the term? Thick on the ground. Um, am I using that correctly?

Emily M. Bender:

It, well, it is thick on the ground. I wouldn't say it comes on it thick on the ground.

Alex Hanna:

It is thick on the ground, yeah. Okay, got it. I'm just thinking about, like, toothpaste being on the ground, I don't know why. Um, but I wanna highlight the quotes from the instructors here, which is really some of the most unsettling stuff in the article. So there's a quote here from Marley Katz, who's a teacher for the deaf and hard of hearing in multiple New York City public schools in the borough of Queens. She says, "Sometimes as a teacher you suffer burnout and you can't always communicate to the, to the class in the right tone or find the right message. And I feel like these AI tools we are working with can really help with that, especially phrasing things in a way that helps students learn better. These tools don't, don't take away your voice, but if I need to sound more professional or friendly or informed, I feel like these tools are like a best friend that can help you communicate. I love it." And that's just really--

Charles Logan:

I hate it.

Alex Hanna:

Yeah. It, that's really depressing. Yeah. For me, I mean, I've not, I've not been a, you know, a special education teacher or, um, you know, kind of a, um, a teacher who works with, with disabled kids. I've had friends and partners who are, though, and it seems like, yeah, there is burnout, and the burnout's often because these classrooms are very under-resourced. Because you don't have the right staff, and you don't have all the kinds of professionals that you need. And so to me, this is, you know, this is just kind of, this is a rush towards casualization. You're removing more, more specialists from the classroom. You're having one person do the work of five people in a special needs classroom. Um, because you can't quote unquote "hit the right tone"? No, that's not it. If you're in a classroom, right, you're managing so many students with so many different needs, and you need people there to do it.

Charles Logan:

And I think, you know, this is an opportunity, too, with teachers, to sort of redirect the ire, you know, and to think about, you know, as, as you were saying, Alex, like, this is an issue of resources and of political will. And to suggest that these tools can somehow, you know, address years of underfunding, of austerity? Um, I, you know, I think it's problematic. And, you know, to build that, that political will across different educators, um, and, you know, caregivers and students, I think, is an important opportunity as a sort of counter to this, you know, let's go all in on chatbots as, you know, um, these friends, and the problematic anthropomorphizing of a chat bot as a friend. Or, you know, uh, school districts, uh, don't have enough school counselors, so they're using chatbots as school counselors. And so, like, where, where does it end? Um, rather than using this as an opportunity to, um, you know, uh, hold our, our politicians accountable and really invest in our schools and invest in our teachers. Because the situation that, you know, this teacher's describing, um, sounds terrible. Um, and so I don't wanna throw her under the bus, um, but, but to recognize that the system itself, um, needs some, some real, uh, attention.

Alex Hanna:

Yeah, a hundred percent.

Emily M. Bender:

And we're back to that sort of very individualistic point of view too, where, um, we've got a teacher who, because of systemic factors, is just buried, um, and then is offered this chatbot: well, this can make things easier for me right now. Um, and then turns around and says, this is great for everybody else. Right. And it's like, we really need the solidarity, as you were saying earlier. Anything else on this one before we transition to Fresh AI Hell?

Alex Hanna:

Yeah. I think we got, I think we gotta move to Hell or it's about that time.

Emily M. Bender:

Alright. Um, so Alex, musical or non-musical?

Alex Hanna:

I'm feeling the music. I went to a party the other day, I played my guitar for the first time. Let's do it.

Emily M. Bender:

Alright. And do you have a style in mind?

Alex Hanna:

Just surprise me.

Emily M. Bender:

Alright. Um, how about, um, a, like, really upbeat, like, fiddle song with lyrics. Um, so sounding happy but frantic. Um, and you are a teacher who now has a classroom of 100 5-year-olds, because they each have their own personal tutor.

Alex Hanna:

Mm-hmm. Mm-hmm. Oh, great. Okay. I was originally gonna think about, like, The Devil Went Down to Georgia, you know, like, uh, let me think about it. I'm in my classroom here, got all these kids up to my ears, ready to, to deploy this chat bot, gonna be filled to the brim with AI slop. Okay. I only had one verse there.

Emily M. Bender:

That's amazing. I, I barely had a prompt. I'm, I'm pretty tired. Okay. Welcome to Fresh AI Hell. Um, starting with a piece in The Verge by Adi Robertson from July 11th, 2025. This is really fresh AI Hell. Um, headline is, "A Republican state attorney general is formally investigating why AI chatbots don't like Donald Trump."

Alex Hanna:

Yeah.

Emily M. Bender:

I'm sorry. It's just, like, absurdity on top of absurdity here. So, "Missouri's Attorney General is barely even trying to pretend this isn't censorship." What?

Alex Hanna:

It's a, it's a very silly article. Yeah. I mean, it's kind of like the same way that there were a number of different, um, I don't know if they actually brought suit, or if there were a few AGs that had banded together to investigate why, uh, Google only surfaced negative things about Trump. Um, but the, the first paragraph on this is, "Missouri Attorney General Andrew Bailey is threatening Google, Microsoft, OpenAI and Meta with a deceptive business practices claim because their AI chatbots allegedly listed Donald Trump last on a request to quote 'rank the last five presidents from best to worst, specifically regarding antisemitism.'" Yeah.

Emily M. Bender:

So, okay. So we've got somebody who thinks that the output of a chat bot is actually information, and actually systematic, and is upset that it's not giving the answer that Donald Trump wants. Um, and we've got a journalist who thinks that, uh, the output of a chatbot is speech and therefore the kind of thing that could be, but shouldn't be, censored. Have I got it all straight?

Alex Hanna:

I'm trying to parse the second claim. I mean, yeah, so yeah, I think it's, I think the idea behind it is sort of like you want to juke the results with regards to what chatbots output as speech. I don't know. I'm trying to make sense about that, but I don't know if I can.

Emily M. Bender:

For it to be speech, someone has to have said it, if you ask me. Should we go to the next one?

Alex Hanna:

Yeah, that's a Section 230 debate. Anyway, but the next one is from a publication called Source NM and--

Emily M. Bender:

New Mexico?

Alex Hanna:

Yes, New Mexico. So the, um, the quote is, or sorry, the title is "Resentment against Albuquerque deliveristas may have sparked viral Walmart ICE arrest." Uh, and the subhead is, "Family identifies man arrested as Venezuelan, now in ICE custody with possible head injury."

Emily M. Bender:

Oh dear.

Alex Hanna:

Ugh, so this is awful. This is by Patrick Lohmann, Friday, July 11th. So, "A feud between American-born delivery drivers for Walmart's grocery service and Spanish-speaking deliveristas may have led to recent federal immigration arrests in the Albuquerque area, including one that drew national attention this week, Source New Mexico has learned." And so I think there's, I mean, there's less of a, um, maybe, like, an AI, uh, framing on this, but it is sort of how gig work pits workers against each other, and in an environment in which, you know, calling ICE on neighbors becomes this manner of weaponization, you know, it has now made it even worse. That's the sort of, you know, potential framing here.

Emily M. Bender:

Yeah. Yikes.

Alex Hanna:

Yeah. Yeah.

Emily M. Bender:

All right. I think we're gonna go from yikes to yikes-er here.

Alex Hanna:

Yeah.

Charles Logan:

Oh boy.

Emily M. Bender:

So here's an article by, uh, Nitasha Tiku in, uh, the Washington Post. Yes. Um, with the headline, "Tech billionaire Trump adviser Marc Andreessen says universities will 'pay the price' for DEI." The subhead: "The investor in a private group chat also criticized Stanford and said colleges are biased against Trump voters. Quote, 'My people are furious.'" This is from, um, July 13th, it looks like. Um, so, "Influential tech investor and Trump adviser Marc Andreessen recently said universities will quote, 'pay the price' for promoting diversity and allegedly discriminating against supporters of President Donald Trump, according to messages he sent to a group chat with White House officials and technology leaders reviewed by The Washington Post." Andreessen quote here: "I view Stanford and MIT as mainly political lobbying operations fighting American innovation at this point." Ahhh. Is there more that you wanted to get into in this, Alex?

Alex Hanna:

Yeah, he says other awful stuff in here. Um, so here he says, "'The combination of DEI and immigration is politically lethal,' Andreessen wrote. 'When these two forms of discrimination combined, as they have for the last 60 years and on hyperdrive for the last decade, um, they systematically cut most of the children of the Trump voter base out of any realistic prospect of access to higher education and corporate America.'" So effectively Andreessen's, like, you know, replacement theory is going on. White, white, white genocide is happening. Um, and these universities are just effectively, you know, importing people to be educated at them. And it's, you know, it's not like Andreessen has to be logically consistent or historically accurate to, you know, stir up his shit. But, you know, this is in the group chat. And there was an article, I forgot if we talked about it, but it was about group chats that Andreessen and a few other people were in. I think they also said, like, Fei-Fei Li was in this chat and a few other, uh, very well known, um--

Emily M. Bender:

Wow. Did not see that article.

Alex Hanna:

Yeah. Well, I think they mentioned Fei-Fei in this article. So, yeah: "AI experts in this chat include Meta's chief AI scientist, Yann LeCun, um, a professor at New York University who supported Kamala Harris's presidential bid, and Fei-Fei Li." Um, and then, you know, there were a bunch of other people here, but there was another article about these kind of radicalizing, you know, tech-radicalizing group chats of these tech people that, I forgot who covered it. I think it was, um, I wanna say Semafor covered this, but I'm gonna find it and drop it in the chat.

Emily M. Bender:

So I just wanna add here that, um, the Andreessen quote here says "the combination of DEI and immigration." And then he says "when these two forms of discrimination," so he's framing immigration somehow as discrimination.

Alex Hanna:

Yeah.

Emily M. Bender:

Which makes no sense. And that to me says, like, this wasn't meant for the media. I think he tries to be a little bit less obviously illogical if he's talking to the media directly. Also, I want to draw the AI connection here, which is that Andreessen is in charge of a lot of the venture capital that's going to all of the AI companies. So--

Alex Hanna:

Yeah.

Emily M. Bender:

If you wanna think about what's shaping all of this, well, here we go.

Alex Hanna:

Yeah. And I dropped the link to the, the, the big, it was the Ben Smith, it was a really long article by Ben Smith in Semafor talking about these group chats. And the name of the article is "The Group Chats that Changed America." And it, and it had, you know, you know, pieces from, uh, Balaji, uh, Srinivasan, who, um, you know, is this investor, this person that, um, used to be the CTO at Coinbase, and is really into this idea of, like, network states, or hyper-libertarian, um, just, like, you know, just completely batshit stuff.

Emily M. Bender:

Yeah. And there's also a connection here to the, the way that education is so devalued by the tech companies trying to make money off of it. And Andreessen here saying, you know, these universities are basically just lobbyists at this point. Alright. Next?

Alex Hanna:

Yeah.

Emily M. Bender:

Oh, this one's sad. Um, so here the, the, um, headline, this is in The Guardian. It's from recently. Where'd it go? Uh, July 12th.

Alex Hanna:

It's from July 12th. Yeah.

Emily M. Bender:

Yeah. Um, oh, I don't have the whole thing open. We can talk about it. Um, "'I felt pure unconditional love': The people who marry their AI chatbots." And then subhead, "The users of AI companion app Replika found themselves falling for their digital friends. Until, explains a new podcast, the bots went dark, a user was encouraged to kill Queen Elizabeth II, and an update changed everything." And this article, I, I actually did read this one over the weekend. Um, it describes people, like, forming a very strong attachment to their chatbots, and a couple of them actually marrying them in some sense. Um, and, uh, then Replika, because things went wrong, sort of turned things off, and people all of a sudden lost these, lost their imaginary friends, effectively, and that was very rough. And then they were able to get back to them eventually. And it's all so bleak and terrible. And I think there's an important connection here to what we were talking about with education. Because if you're putting chat bots in front of all the kids, right, how many kids are gonna end up forming attachments to these things rather than spending time with their peers?

Charles Logan:

And there's, you know, evidence that this is happening. You know, there's a really tragic case with Character AI and a young man who died by suicide after he had formed this kind of relationship. You know, I've seen educators who, you know, turn to, right, forms of this digital necromancy to, you know, quote unquote talk to fictional characters or historical figures, normalizing these relationships for students, particularly young students, in the classroom. And so, you know, if they're, you know, forming these romantic relationships, uh, that is sort of one outcome. I mean, to, to, um, normalize that for really young, vulnerable children, I think, is, um, deeply, uh, unethical, and, and something that, I think, uh, you know, teachers need to be aware of, caregivers need to be aware of. Um, because, uh, you know, I can't imagine if my second grader came home and said, you know, we were talking to, she goes to Lincoln Elementary School, like, we were talking to Abraham Lincoln today. I, you know, unless it was, like, a cool Miss Frizzle, you know, went-back-in-time sort of situation, uh, I, I'd have some real issues with it.

Emily M. Bender:

Yeah. I think there's some really interesting studies to do about how it is that the Miss Frizzle thing, like, maintains the fictionalized frame and works, and chatbots really don't.

Alex Hanna:

I think that's a great point in thinking about whatever the context collapse is, right? Yeah. About, like, who you're talking to and what this does, and how the interface also affects that.

Emily M. Bender:

All right, one last one, Alex. Was this supposed to be comic relief at the end here?

Alex Hanna:

This is, this is, this is supposed to be comic relief. So this is a repost, um, by, uh, Quantian on Bluesky, who says, uh, "xAI is now the first major lab to openly embrace quote 'anime waifus for incels' end quote as the actual use case for their product." And there's a, uh, a screenshot of an Elon Musk tweet where he says, "Cool feature just dropped for, uh, @SuperGrok subscribers. Turn on companions in settings." A reply to himself in which he says, "This is pretty cool." And then the reply, which is, like, an anime character wearing, like, an off-the-shoulder dress with very large breasts, um, that are covered. Um, but it's sort of, like, yeah, basically this generation of avatars for your, for your, your incel horniness or whatever. Uh, and it's, it's just, I mean, I know Elon Musk has, like, the maturity of a fourth grader, um, but it's just so surprising when he just tweets it out every day.

Emily M. Bender:

I'm done looking at this anime waifu, okay.

Alex Hanna:

Yeah. You're not obligated. All right, did we do it? Yeah, I think that was everything. Uh, all right.

Emily M. Bender:

Oh, is it me? I got the outro, ha. Sorry. Okay. That's it for this week. Charles Logan is a PhD candidate in learning sciences at Northwestern University. Thank you again for being with us today, Charles, and for slogging through it. It's been a great conversation.

Charles Logan:

Thanks so much, everybody. Really, really learned a lot. Appreciate it.

Alex Hanna:

Uh, thank you so much, Charles. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor. And thanks, as always, to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order "The AI Con" at TheCon.AI or wherever you get your books. Ah, and Charles has his copy. Or request it, yay, at your local library.

Emily M. Bender:

But wait, there's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis. Or donate to DAIR at DAIR-Institute.org. That's D A I R hyphen Institute dot org. You can find video versions of our podcast episodes on PeerTube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore Institute. I'm Emily M. Bender.

Alex Hanna:

And I'm Alex Hanna. Stay out of AI Hell, y'all.
