Mystery AI Hype Theater 3000

Why "AI" Is a Con: Our Book Launch! (fear. Vauhini Vara)

Emily M. Bender and Alex Hanna

It's finally here! The AI Con: How to Fight Big Tech's Hype and Create the Future We Want hit the shelves in May. In this special bonus episode, Alex and Emily speak to tech journalist Vauhini Vara at one of the book's online launch events, where they covered the misleading nature of the term "artificial intelligence," why the use of tools like ChatGPT will only ever cheapen human labor and enrich the already powerful, and how people can fight the narrative that these technologies are inevitable.

Vauhini Vara is a technology reporter and writer. Her journalism has been honored by the Asian American Journalists Association, the International Center for Journalists, the McGraw Center for Business Journalism, and others. Her latest book is Searches: Selfhood in the Digital Age, a work of journalism and memoir about how big technology companies are exploiting human communication — and how we’re complicit in this.

References

Everyone is cheating their way through college (with ChatGPT)

The Last Human Job: The Work of Connecting in a Disconnected World by Allison Pugh

Te Hiku Media

Resisting AI: An Anti-fascist Approach to Artificial Intelligence by Dan McQuillan

Refusing Generative AI in Writing Studies

Pennsylvania's SEIU Local 668 wins a victory against AI

Elon Musk's xAI is polluting Black Memphis residents

Possible Futures: An Internet for Our Elders

Better Images of AI

The Electronic Privacy Information Center (EPIC)


Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Emily M. Bender: Hey there! Emily here. We're taking a break from recording new episodes, as Alex and I criss-cross the globe this summer. But in the meantime, we thought we'd share a conversation we had for the virtual launch of our new book, "The AI Con: How to fight big tech's hype and create the future we want."

This was back in May already, but we think a lot of the issues we discussed with interviewer and tech journalist Vauhini Vara remain as pertinent as ever. Enjoy!

[Music]

Vauhini Vara: Thanks everyone for being here. I am very delighted to be here, um, helping to celebrate the launch of "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want."

There we go. Um, welcome everyone. Thanks for being here. Um, I, um, I'm not gonna say too much about this book 'cause I just wanna jump right into to getting to, to talk about it, but I had the privilege of reading this book a little bit in advance, um, while I was preparing for--sorry, I'm Vauhini Vara. I'm a, I'm a journalist.

I wrote a book called "Searches: Selfhood in the Digital Age," which has to do in part with AI, um and also with big technology companies and their power. And so I happened to be reading this book sort of while I was preparing for my book tour, and I found it very instructive, um, uh, as I was preparing to sort of go out and talk in public about AI and the ways in which it's bound up with capitalism.

It's, um, it's a really, uh, accessible, um, book about AI. It has a strong argument that I'm really excited to talk to Emily and Alex about. Um, and, um--

Okay, let me, let me just, uh, go right into the, the intro stuff I wanna say. Um, this event is hosted by, by DAIR. DAIR is an interdisciplinary and globally distributed AI research institute, rooted in the belief that AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial. Its research reflects its members' lived experiences and centers their communities.

Dr. Alex Hanna, one of the co-authors of this book is Director of Research at DAIR, the Distributed AI Research Institute, and a lecturer in the School of Information at the University of California-Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought after speaker and expert who has been featured across the media, including articles in the Washington Post, sorry, Washington Post, Financial Times, The Atlantic and Time. 

And Emily M. Bender, the other co-author, is a professor of linguistics and an adjunct professor in the School of Computer Science and the Information School at the University of Washington, where she has been on the faculty since 2003. She's the co-author of recent influential papers such as "Climbing Towards NLU: On meaning, form and understanding in the age of data," ACL 2020. And, "On the Dangers of Stochastic Parrots: Can language models be too big?" In 2022, she was elected fellow of the American Association for the Advancement of Science. And in September 2023, she was included in the first ever Time 100 AI list highlighting 100 individuals advancing major conversations about how AI is reshaping the world.

So, um, we are going to talk for like 45 to 50 minutes and then we're gonna open up to questions, um, from those of you who are viewing. You should feel free to ask your questions anytime and we'll, we'll generally sort of save those for the end of the discussion, but feel free to raise them as they come up for you.

Um. Are there any logistical things I'm forgetting Emily and Alex, or anything else that I should have mentioned that I didn't yet? 

Emily M. Bender: I think we just wanna say that it's, uh, ask questions in the chat. So the, the Twitch chat is a way to get those questions to us. 

Alex Hanna: Yes. Nothing, nothing else there. Uh, yeah, we'll be collecting those and having it on in the end.

So yeah. And we're of course supported in the background by our great producer Christie Taylor. 

Vauhini Vara: Yes. Thank you Christie. Um, okay, so Emily, I, I, as I mentioned, I was just on book tour and I got, had the privilege of being in conversation with Alex about my book in San Francisco and with Emily in Seattle.

And Emily, when I saw you in Seattle, you were excited about the book coming out soon. And you said to me that one thing that you love about the cover of the book is that the message of it is right there on the cover. And so I wanted to just start by asking you, um, asking you about that. Asking, asking what you meant by that, but then also asking both of you like what in your view AI is and what "The AI Con" is.

Emily M. Bender: Yeah. Thank you. So I'll, I'll start just by talking about the cover. One of my favorite comments online so far was somebody who said that they were really looking forward to reading the book, but also to putting it in their Zoom background when they're in meetings and someone is trying to push so-called AI tools into something and like, there it is.

And this font is big enough that even in the background of Zoom, like it's, it's, it's screaming the message, which I think is wonderful. And, uh, yeah, so the message, and maybe I'll start with an AI definition and let Alex talk about what's the con in this, but, um.

One of our first messages is that the phrase "artificial intelligence" is actually just a marketing phrase. It doesn't refer to a coherent set of technologies, and it sort of immediately muddies the waters of any discourse about this. And we recommend instead, if you wanna talk in general, talking about automation, and then you can talk about what's being automated and why. Or if you're talking about some specific technology, then name that technology in terms of what it does, automatic transcription, image classification, and so on.

Um, so that is the issue that we have with the phrase AI. And it's been a sort of a struggle for me because it's like, it's in the title of the book, right? So it's right there. Um, and there was a very funny moment. So we try to not use AI or artificial intelligence without taking distance from it. And so there's scare quotes all over the place and so-called AI. And at one point Alex said, 'Emily, you can have so-called, or you can have scare quotes, but you can't have them both on the same token.' 

Alex Hanna: Yes, yes. I mean, creating the distance. But yes, we have to, we have to move and use it. We have to use it sometimes. Right. Another thing I want to say about the title too, the, the front page is really, that's really smart is that, um, it is, um, actually, um, the designer who made it actually did use a physical, uh, letterpress to do it. So if you look closer, you can see the kind of gr granulations. And so I love the materiality of it so much and the way that it's, you know, talking and, and, and using this, uh, decidedly non-digital sort of medium to, to do it.

And, um, I, uh, uh, I'm trying to see, like there's a, the jacket design, Chris Potter, I want to shout them out. Um, but great, great cover design.

Um, and then, okay, so AI, AI is a con. Um, yeah, so that's the, that's the thing. I think in the book we say, you know, put flatly AI is a con. Effectively the way that the con of AI works is that there's a notion that there's some kind of a singular technology that's going to be the singular answer for a whole host of things, whether that's workplace productivity or social services, or teaching or nursing, or doing science, as if there is this singular consciousness like we have in sci-fi, and it's going to replace all these different things. But that instead is, um, that term and that vision replaces what's happening in the background, which is, uh, a small set of companies who are, uh, aiming to make a huge windfall, um, um, from venture capitalists and other big tech firms. Um, even though they're developing a certain kind of technology, that is all based on a parlor trick and a set, uh, and also a bunch of stolen data and is, um, being run out of many different data centers, um, that are, um, polluting our environment and making the climate catastrophe even worse. 

Vauhini Vara: Thank you. Thank you both for that. Um, you know, I wasn't, I wasn't, this wasn't on my original list of questions, but I was just, as I was preparing for this talk and, um, pulling the, um, the sort of mission statement or about, about DAIR to read as part of my introduction, I noticed, um, that part of the DAIR mission statement, I don't know if that's what you call it, but part of what DAIR says about itself is that it believes that AI's harms are preventable and that when its production and deployment include diverse perspectives and deliberate processes, it can be beneficial.

And I'm, you, you talk, um, you talk, uh, about a, what's called AI in a nuanced way in the book. Um, and so I'm interested in how that perspective that's conveyed as part of that DAIR mission statement is in conversation with the notion um, of AI as a marketing term and that there's this AI con, I'm curious about like, which aspects of what's called AI are part of the con, which aren't, if that's, I I don't know if that's, that's possible to talk about.

Alex Hanna: You wanna go ahead?

Emily M. Bender: Yeah. Um, so I think again, if we, if we step back from the term AI and we, and we can sort of look at individual things that get called AI, including, you know, machine learning over various data sets, um, and to make something that is not a con that involves that sort of an approach, I would say, okay, well you're transparent about the data. Um, the data was collected in a consentful fashion. Um, the, you're transparent about the fact of automation and there's good recourse if the automation goes wrong and there's a, an honest and clear match between what the automation's actually doing and the need that it is being used to fulfill. 

And if you've done all of those things, I still don't want you calling it AI, but that sounds like a good use of automation.

Vauhini Vara: Got it. Yeah. 

Alex Hanna: Yeah. And I think it has to do, I mean, it has to do a lot with control. I mean, thinking about who is doing what. I mean, so there's kind of elements of what is quote unquote "called AI" that tends to be, I mean, it is, it is a-- the ways that is deployed now, it is like a technology where it's like, I don't necessarily want like an "AI for good" because it's like, first off, that signals that the rest of the uses are AI for bad and which, which, yes, that gives the game away.

But it's also giving, it's also sort of like, okay, in addition to these we could use like ChatGPT for X, Y, Z. And that's not like a future we really want. I mean, that doesn't have to do with control. I mean, whoever's controlling this, it's not just because you can deploy it in certain ways. It's that well, OpenAI owns the means of computation and they've taken all data that's not nailed down on the web and they're making all these terrible deals with, you know, Axel Springer and the Guardian and all these other journalistic outlets.

And so that's not a future that we can control, and that's not gonna be beneficial for how this is used and deployed. So, you know, I think a lot of the discussion around technology is like the, you know, like you don't have to be a, a, a, you know, a science and technology scholar to know that the, the phrase like "technology is just a tool" is not correct.

I mean, technology has its politics. Mm-hmm. And AI is no different. And the way AI is developed now is that it is not like--you can't have deployments of AI for good. You would have to reform what is known as AI so fundamentally it would take on a different form and you should be specific and call that, you know, language technology or, you know, uh, I don't know, mat, you know, matching protein sequences at scale or something that is, that demystifies it and is specific to what the well-scoped task is.

Vauhini Vara: Yeah. Yeah. Okay. Thank you. Um, you write in your book about the proliferation of products marketed as AI in areas, in social service areas like healthcare and education. And you also write about the red flags this raises. Um, I am interested in hearing you talk about those red flags, what they are, um, and also given those red flags, what's behind the proliferation here?

Emily M. Bender: Yeah, (laughter) I mean, so red flags are, to me, sort of the, the biggest one is this idea that the people who can afford it, the people who are in the 1% or whatever, are forever going to have access to real people who are really skilled as, you know, doctors, lawyers, teachers, psychotherapists, and so on. Um, but because we have these systems that can produce the form of an educational conversation or the form of a psychotherapy session or the form of a medical diagnosis, um, then there is this move to sort of saying, well, that's good enough, right?

And so the masses get that even though it is not good enough, and in fact in many cases, most cases, it is worse than nothing, maybe even all cases. Um, but what's behind it I think is in part austerity measures. And we've been living under austerity measures, you know, to varying degrees in different countries for a very long time now.

Um, and also this idea that we couldn't possibly fund, you know, real social services for everyone. That's impossible to imagine. Um, and yet it's perfectly fine to think about, um, taking the small, meager amount of money that goes to school districts and using a big chunk of that to pay a contract to OpenAI or whomever to have this, you know, magical, personalized tutoring in every classroom.

Vauhini Vara: Alex, is there anything you wanna add to that? 

Alex Hanna: Yeah, I'm thinking, so you're thinking, thinking about red flags, about like, I'm trying to think through this. Um.

(pause) What is, say the question one more time and I will have something smart to say. Sorry about this. (laughter) 

Vauhini Vara: Yeah. Um, I was asking about, um, you write in the book about, um, and maybe red flags is a, is a tricky thing, word to use. What I, what I mean to ask is what's bad about using a what's called AI, products called AI in social services like healthcare and education. Yeah. I think Emily addressed the sort of like broader, um, social dimension. Uh, as in, you know, there, there are ways to address these problems, not using technology, using human resources. Um, uh, and people are choosing to do this instead.

Is there any anything else that you'd wanna flag about. when issues arise? 

Alex Hanna: Right, I mean, one of the things that I think that we talk about in the book is, is really these things, these social services, I mean, much of them are about human connection, right? I mean, it's, if you need to actually have human connection, you actually need to have, um, some kind of person behind that.

I mean, we seek empathy and we seek connection in that, right? Um, there's another book that's, uh, written by a sociologist, Allison Pugh, that talks, it's, I think the name is "The Last Human Job." And in it, she's talking about, she talks to like chaplains, she's talking to nursing professionals. She's talking to doctors.

And it's very funny because I think there was, there wasn't an AI chaplain, but there was at one point this, this, um, this chat bot that was, uh, answering questions about, uh, theology and it kept, and this was just saying the most random things, I mean you know, and, you know, we should mention, you know, a new pope got ordained, is that, conclaved today? I don't know the right verbs. 

Emily M. Bender: Convexed. 

Alex Hanna: Convexed. 

Vauhini Vara: I say picked.

Alex Hanna: Picked, yes. Picked is is is good. But I mean, effectively, the notion that like you don't go to your chaplain to get answers about theology, right? You go to your chaplain to have some connection, have some kind of person to connect to you, right?

Um, and so that kind of replacement takes out this notion of any kind of connection and any kind of um, you know, any kind of a relationship. Right? And I mean, it's sort of like, it's, it's upsetting to hear about all these uses in which people are like, well, I'm talking to this about AI companionship.

There's actually this instance where Sam Altman, who's in front of, um, some congressional committee today, and somebody asked him point blank, uh, I forgot who. They're like, well, would you, you just had a son. Would you want, um, would you want an AI companion? And Altman was like, no, probably not. And, and, and so this other instance in which you have, you know, AI for thee, but not for me.

Vauhini Vara: Yeah. Yeah. Um, speaking of OpenAI and, and ChatGPT, I was interested in the place in your book where you write about how ChatGPT and other large language, other products based on large language models might seem like a good deal now. And you're referring here to like the experience of users, right? Like if you're an everyday user using ChatGPT, it might seem like a good deal to you, um, but that history suggests that this won't last. Um, and I'm curious, and maybe like you could use some, analogs like, like Google or Meta or these companies in the past, right, that engage with people using their information. Could you describe how you expect the cost for users of engaging with these products, whether that's a financial cost or something else, is going to change over time?

And I know you can't be sure, but any, any thoughts on what could happen?

Emily M. Bender: So--

Alex Hanna: Yeah, go, go ahead. Go ahead.

Emily M. Bender: So, um, this question relates to other questions we've been asked, we've been doing the podcast circuit, um, and where it's often connected to Cory Doctorow's notion of enshittification. Um, and there's an interesting disconnect because the enshittification idea is that there's something that's initially useful to the user. Um, and then like that sort of, it's done as a loss leader then just burning through the VC money. And then when the company wants to actually make money, the value is sort of clawed back from the user. And it's usually clawed back in the form of, um, selling advertisements or like shaping things more so that it benefits the company or the paying customers rather than the end user.

And the interesting difference that I see is ChatGPT, like it might feel useful, but I think that things like, you know, search engines before they got too enshittified by SEO and advertising were useful in a way that synthetic text isn't. Um, but there is, you know, several ways that it could be further enshittified.

Um, so one thing is that you could, um, end up having to pay more. Right. Um, another thing is, and, and, uh, we spotted this and Alex totally called it, um, it could start actually including paid advertising in the output, um, in ways just like with Google that you might not know are paid advertising. 

Alex Hanna: And we spoke to someone, um, we spoke to a reporter, um, at the New York Times, uh, not the New York Times, sorry, the Washington Post. Um, and one of the things that she had pointed out was, um, there is this kind of trend of some Tiktokers that are asking ChatGPT whether they're, you know, like whether they're, they look good or whether they're, you know, how could they change their style or whatever. And then the, the kind of, um. Next turn on that is now, now OpenAI is effectively noticing that and trying to monetize that or thinking about that. So it's either serving up ads or having kind of sponsored content and um you know, maybe they have a, they have to disclose that it's sponsored content.

I mean, I don't know if there's actually any kind of regulation that says such a thing. Um, you can imagine there's some kind of area where the FTC might take issue. Um, but then you're having this case in which this might, you know, like you are going to have to take this product. Which again, uh, uh, when it's, when it, when the way we're thinking about it, it's not like it's becoming enshittified, it was born enshittified.

Um, but then you're getting to a point where it's, okay, how are you actually gonna make money on this? Because just providing synthetic text by itself is not a money making venture. You're not getting sufficient money that you would get from, uh, B2C or even B2B deals. Even though B2B deals are gonna be the ones that are gonna yield the most, um. You need those sweet, sweet advertising dollars.

Right. And that's, you know, you well know Vauhini. That's how Google and Facebook made their bank. Right. And that's, that's the entire business model. Right. And so finding anything that's gonna be a business model that leads them to profitability and tries to get them outta this huge infrastructure, um, spending hole.

Vauhini Vara: Yeah. You know, and I just, I had missed this when it happened, but I was just today, um, reading that this, this, uh, December, uh, Financial Times article in which they interviewed the, the CFO at OpenAI who said that OpenAI is starting to look at advertising, um, which is not something that they'd sort of explicitly talked about before, which I, which I found interesting.

It's also interesting as you, I hadn't thought about this, but the way in which, like if it's the case that people pay for a premium version of ChatGPT, does it then become the case that like the currently free version becomes ad supported, which is to say like, you know, and, and you might even take issue with the, with the notion that one version is better than the other, that one is capable of being better, quote unquote, but what OpenAI markets as the, as the less good version would be the version that's then supported by ads, right.

To to, to some of your earlier points. 

Emily M. Bender: Yeah. And if you think about the ways in which people are being told this has to be in the classroom because these are the skills of the future, and students have to know how to use this, then how many students are gonna be subjected to the ads in the ad supported version because that's all that's available in their educational context.

Worse than nothing. 

Alex Hanna: Well, now if you, if you saw now, I mean there's been this thing that's been going around where they've been offering ChatGPT Plus, which I don't know what that is, uh, but it is some extra spicy flavor or something. Um, and they're saying that they're offering that, I mean, they might just put a like kind of cap on, like how many queries you can make or something of that nature.

Some kind of, uh, gimmicky, um, uh, interface thing that they're doing. And they're offering that for free during finals week for students, right? And so now first off, you have this thing where there, and there was this really awful, uh. New York, uh, New York Magazine article. I dunno if you've seen this, that, uh, came out yesterday. Um, I had it on hand. I'll mention the journalist in the low--or drop it in the chat. And, but it was talking about just like how widespread it is and how much of a, just the, a bomb it's been in education.

Um, and so, and, and, and I think that's the, like, and they're fully leaning into it. They're just like, well, whatever. It's not our problem. And they said they released, you know, an AI detector, of which of course they didn't. I mean, unless they're, they were effectively finding an effective way to watermark the content, which, if they were doing that, then I'd figure that people would just go to Claude or Gemini or whatever.

And so then, um, so now you have this thing where you're like, are they going to then take that away and then try to get revenue from students? Like what is the, what is the play here? And I mean, that seems like it's, it's a, it's, it's giving, it's giving it AOL free trial energy. (laughter) 

Emily M. Bender: When I, when I started college, they had this like, welcome pack of all these free samples, including Vivarin, which was caffeine in pill form.

Alex Hanna: Oh my God. 

Emily M. Bender: It's giving that energy too.

Vauhini Vara: Totally. I love that.

Alex Hanna: Wow. That's awful. Literally just addicting college students to even more caffeine. Good lord. 

Vauhini Vara: Um, okay, so the first about the fir, what I think of, of the, as the first half, the sort of spiritual first half of the book is devoted to describing AI hype and its effects.

I'm interested in turning to the second half, what I think of as the second half, which talks about how people might respond to all this, which I imagine is something that you're interested in talking about too. Um, for one thing, you, you write that there are applications of machine learning. You specify machine learning, and this may be true of other, other, um technologies that sort of tend to fall within that AI umbrella that are, that sort of fit kind of three parameters. They're well scoped, they're well tested, and they involve appropriate training data. And you describe these as therefore being legitimate tools.

And I wrote legitimate tools in my question, and I don't know whether you use the word legitimate in the book. So that's my paraphrase. Um, I wondered, and, and it may be that you take issue with the premise of this question itself, which I would be interested in hearing if you do, but I, I'm interested in what some examples are of what you would consider legitimate tools and why, tools that sort of like fit these three characteristics.

Emily M. Bender: Yeah. So the, my favorite go-to example is automatic transcription systems. And I've got a whole bunch of caveats, right? So an automatic transcription system goes from an audio signal to a written representation of the language that's being spoken. Um, and in order for that to be sort of well scoped and well used, you want to make sure that it's been, if not trained on the same, uh, people for whom it's going to be used then at least evaluated for those people. You're gonna want transparency about the fact of its use. Um, and you know, you're gonna want also a recourse.

So my example of not having recourse there is, imagine someone says you've got a deaf audience member, and, um, the situation where ADA says that sign interpretation has to be provided, and you say, no, no, no, we've got automatic transcription. It's good enough. Right? Well, that deaf audience member is not going to be in a position to double check what's coming out in the transcription. Um, and if it's too fluent, they might have very little clues that it is actually not what's said. Um, another way in which automatic transcription can sort of fall off of legitimacy is the way it's being done, um, by Whisper, which I believe is OpenAI, where instead of doing sort of the more standard input maps to output, we're using language modeling, um, to re-rank the outputs, it's we're gonna encode the audio signal and then decode with the language model. And so you get these just complete fabrications that riff in language model fashion off of what came before.

And I'm sorry, I've, I, I don't have the study to hand. Um, but it's one that I know about through Timnit. Um, where I think this was in a medical transcription context and it was outputting random racist stuff, because guess what? That's in the training data.

So yeah, that's an example of sort of like what's well scoped and then how, how it quickly becomes not. 

Alex Hanna: Yeah, she just dropped it in the chat. That was, uh, some, uh, reporting by, uh, Garance Burke and Hilke Schellmann for the AP. Um, um, yeah, and I mean, I would, I would just speak on the data element to it, and I think that because I spend a lot of time thinking about data and thinking about like, well, what data and what kind of artifacts data are, and one of the things that I think is helpful, not just talking about the task, was also thinking about where the data come from and whether that data is, is collected or used with consent or credit or compensation or whatever those people who either own the data or are the data subjects, um, is, um, uh, actually are, you know, wanting to happen with those data. And so, uh, an example we mentioned in the book is Te Hiku Media, and these are folks who have their own, um, um, models for machine translation and automatic speech recognition for the Te Reo Māori language.

And so the data very notably comes from, you know, people in that community. And there are certain data rights, especially like traditional knowledge rights that are, um, that are, um, that are uh, uh, labeled with parts of those data. So certain amounts of those data could be used for training and some that can't be used for training.

And those are, you know, those are respected. And so definitely like, uh, something with regards to kind of data and data sovereignty I think is a great example of, of that kind of, um, uh, uh, well, uh, well-defined and, um, well, um, well-specced system. 

Vauhini Vara: Thank you. I, I I kind of, um, I don't know, like a thought experiment occurs to me, which is to say that, um, it, or a question really that I wanna pose to you, which is that it seems that currently big technology companies are the ones who are proponents of like these sort of general purpose, um, large language model based AI products, right?

That they, that they, um, will claim like, can do many things and then the future will be able to do everything. Um, I'm curious about the extent to which, like that is a sort of political orientation that is necessarily bound up in big technology companies and their goals, or whether one can imagine um, an approach to building technology that would sort of fulfill some of the things you laid out while being that sort of, um, I don't wanna use the word monolithic, but, but while being like a very, a very general purpose, you know, large approach. 

Alex Hanna: Well, it completely is a political orientation, right? Yeah. I mean, I mean, that's the thing that your book does really well, Vauhini, is that you talk about the way in which these technologies have become so colonial in our everyday life, right? I mean, the way that Facebook wanted to be the networking app and Google wanted to be the search and information retrieval app, and Amazon wanted to be the, um, wanted to be the logistics app, right?

I mean, there is a totalizing, um, politic in it, right? And so, you know, AI, as it is formulated, is a political project, right? It aims to be totalizing, it aims to be colonial. It wants to get into all these elements of, of your life. And you know, there's this way in which, you know, the tools we use map to these certain kind of organizations which are most sort of proficient in using them.

And so like under, you know, racial and surveillance capitalism, like you have this, you know, you have, in which growth at kind of any cost is necessary, then these technologies are going to be the most, um, most adaptable and the most kind of successful, or will find most root in the system, right? And so there's kind of some really good reference here. So like Dan McQuillan, you know, has his book, um, "Resisting AI," which I think he really does well to kind of articulate what that political project is and really noting that like, if we're gonna have control of it, we have to think about what would be resistance to this, that would be necessarily not centralizing, you know?

And, um, Ali Alkhatib also has a few essays where he is kind of been working through this as well. I think one of them that has been good is called, uh, "Destroy AI."

Vauhini Vara: Hmm. Oh, I don't know that one.

Alex Hanna: I appreciate that one. Yeah. 

Vauhini Vara: Um, you know, that, that, that, what you raise and actually some of the, the questions that I'm seeing starting to come up actually, um, are a really good segue for a question I wanted to ask you about.

And it sounds like a couple, at least a couple of people, um, on Twitch are interested in asking question about this area as well. You write in your book about what you call everyday resistance. So saying no to using compromised products. Um, this is something that I write about in my book too, in a different way.

And the, the only reason I mentioned my book here is, is not to, not to, um, uh, market my book, but to say that--

Alex Hanna: Do it, market it, do it. (laughter) 

Vauhini Vara: My book, "Searches: Selfhood in the Digital Age." (laughter) Um, uh, no, but is to say that on my book tour, a question that has consistently come up is a version of, "But my boss requires that I use these products", or "I teach in a school district where, you know, we're strongly encouraged to be using these products".

Um, I think I, I raise like somewhat of a version of this question in my book as well. Um, and my version is like, what if the publications that I've written for strike deals to share my writing with AI companies to train their models without my consent, without any additional compensation for me, um.

How can resistance function when the institutions that pay us are customers or partners or pro generally proponents of the companies behind these products. And just to flag a couple of specific use cases that came up in the chat. One person was a teacher who, um, I, I guess this is, um, this is not exactly one-to-one, but a teacher who says, "As a teacher, I'm frustrated by the refusal to see the embrace of AI as analogous to the embrace of smartphones and iPads in classrooms, which we now understand have distracted students and impaired learning. Would love to know how you address this to resist AI mania."

And then there's another question, um, saying "AI implementation is the framing for my job. I'm here, so let's presume I'm aware of the weaknesses of that. Any thoughts on the best approaches to reframe as a sociotechnical approach?"

Um. So it sounds like there's a lot of interest in this question.

Would love to hear you two riff on that.

Emily M. Bender: Yeah, so I think there's many different avenues of resistance and they don't all apply equally. So sometimes there's things that we can do individually. Sometimes it's only through collective action that we can really resist. Um, and this is one of those things where it's like, uh, taking the mantra from value sensitive design, "progress, not perfection", like every opportunity to refuse that you take is meaningful.

Um, and there might be cases where there is no apparent opportunity, but that shouldn't prevent you from refusing when it is a possibility. Um, and to speak in terms of, you know, AI at work where the management is saying you have to use this because they're experiencing the big fomo that is AI hype. Um. I think that oftentimes the best way forward there is to talk in terms of evaluation and risks and, um, to maybe phrase the refusal actually as questions. And say, you know, I'm not comfortable using this unless you can tell me how we know that the output is going to be, um, you know, not gonna get us in IP trouble, or that the output is going to be consistent enough or that we know how to test the output.

And, and sort of asking questions that way is the way that I would go on the sort of work front. Um, the, it may, depending on the people you're working with directly be appropriate to have a conversation to say, okay, you and I both know that this is bullshit, but I understand that you're getting pressure from above. How can we do something that is meaningful and grounded, um, while still talking about it as a thing that fulfills what the upper management is asking for? Um, and then finally, in the context of the classroom, um, I think that a, yeah, it's really, really frustrating to me. I, this went by in the chat, like, 'Oh, well this is here to stay. The students have to know how to use it.' And I think really what students need to do, need to learn to do, is not changed, right? It's to be critical consumers of information, um, including information about technology that's being suggested. And so if you have to do AI in the curriculum, well, you know, hey, yeah, I'm gonna teach students how to do critical AI studies.

But I'm sure Alex has thoughts too.

Alex Hanna: Well, I definitely have thoughts on the work and I, I'm seeing someone in the chat, which I'm sorry, I, or said, who says, "Learn the hard way this morning that voicing concerns doesn't work gr great mixed with at, at-will employment." So I'm ap apologizing, apologizing for that. And I think that's the sort of thing, I mean, one of the things that, um, that a few colleagues and I have been running, and we just ran it this morning with something called the Luddite Lab. So my colleague Sophie Song and, and, and I ran, ran this workshop where it's really focused on thinking about what does it mean to push back at work?

And this is a kind of a multifaceted thing, um, just because a lot of workers have existing protections with regards to labor unions or, or different labor federations or, or, or, or labor associations, right?

And so there's an ability to then exercise some kind of united control um, around that. So either by thinking about what it would mean to either put some language in one's contract or to fight for that as a piece of bargaining, or to have some kind of a technology committee or AI subcommittee. And there have been wins here.

I mean the WGA, the Writers Guild of America, as we mentioned in the book, is the one that is probably the, the starkest, um, and the most visible. But there's also been wins by folks like the longshoremen, the IBEW, um, by journalist collectives, by, um, by recently a set of workers in Pennsylvania and the SEIU local had some good protections that they had been fighting for that scoped their work to be done by people and not by algorithms or machines.

And so there are ways of fighting it really collectively. And I mean, that's a problem kind of with any employment. And we know in the US it's, it's notably difficult just because of our low unionization rates and, um, becoming even worse for the public sector. And so, I mean, I think that becomes really a way of thinking about it.

In terms of teaching, I mean, I think there's an element of thinking about what it would mean to change pedagogy and I mean, I don't think there's any kind of way of thinking about like any foolproof AI, you know, like AI proof assignment. I mean, I don't really know if that exists, but thinking, I'm thinking about the, um, the Swiss cheese metaphor of, of covid protection. Like you have masks and you have air filtration, you have X, Y, and Z.

I would say the biggest thing that we want to really think away from, uh, get away from, and I would love if folks in the chat, um, had recommendations for this, is thinking about what it would mean to have like a non carceral, anti AI pedagogy.

Um, and that would be thinking, 'cause I think a lot of approaches that, um, that instructors are thinking about is like, well, we need like intense enforcement. You know, and I was on a call this morning and someone mentioned, you know, at their university they had like code of conduct violations and academic, um, academic dishonesty proceedings and like had a two year wait list on that. Right.

And that's, that's a nightmare, right? I mean, so what are non carceral, um, anti-carceral ways of thinking about, um, kind of like anti AI moves in the classroom.

Because when you put the onus on students, you're blaming students for this bullshit that came from Sam Altman, right? And that shouldn't be on them, right? These students are here to learn. They're already under immense pressure to do so and get good grades. What's a way that we can rethink the academy to be more resistant against these encroachments of these technologies? 

Vauhini Vara: Um, and we'll come back to, I just, uh, I had peeked at some of the questions in the chat and we will come back to those. Um, but I just, I raised, um, one and a half of those questions just because it felt like it was in conversation with a question that I was already planning to ask.

Um, uh, you also write, um, about how resistance might play out at the policy level. Can you lay out, um, what the policy changes are that you would advocate for and explain how you think people might strategize to make them happen in light of the huge lobbying power of the companies who are advocating for a different approach to regulation combined with, um, a lot of investment in, um, in AI companies.

Emily M. Bender: Yeah. I wanna start by sharing an experience that I had, um, in Washington DC. I've had the opportunity to go speak to members of Congress there by invitation, once actually to testify, which was a really fascinating experience.

And, uh, sort of in a hallway conversation, um, um, someone's gonna have to remind me of, of her name, but I was talking to a staffer for the congresswoman, for the district that includes Google's main campus. And the staffer actually said, we represent Google. And I said, no, you don't. Right?

Alex Hanna: For the, for the, for the district? In, in the, in the, the US rep.?

Emily M. Bender: Yes, the US rep from that part of California, um, a woman,

Vauhini Vara: --a kind of shorthand, like Google, Google's in our district, we represent Google. 

Alex Hanna: It wasn't Ro Khanna or Ted Lieu? 

Emily M. Bender: No, it wasn't Ro Khanna. He's in a slightly different part of the Bay Area. It's, um, and she's, she's retiring at the end of this term, or anyway, doesn't matter.

Rep from the Bay Area whose staffer said, we represent Google. And I said, no, you don't. Some of your constituents might work for Google, but you do not represent Google. Um, and so I was pleased with the opportunity to get to say that to this person. Um, but I think that it is, you know, one of those things that, that needs to be said more.

Um, but then sort of looking at what we're saying about regulation, the first thing is to push back against the discourse of tech, two discourses. One is that tech moves too quickly and the regulators can't possibly keep up. And the people who don't know what's inside the tech, how it's built, are not in a position to regulate it.

And that's just not true. But it is something that is said so many times and I've, I've heard policy makers that I've talked to say, yeah, I don't like, they seem to believe that, that they can't possibly keep up. And so flipping that around and saying, no, actually the role of regulation is to protect rights, and that's not moving so fast.

Um, and another piece of this is that the, um, sorry. And I lost what my second one was gonna be. Dang it. Um, so here's the second one. I'm not sure this is what it was, um. That existing regulation often applies. That it's, it's not like, and, and under Lina Khan, the FTC was great this way. They said there, there is no AI loophole. We regulate what we regulate, we regulate the activities of businesses, and it doesn't matter what kind of automation they're using to carry out those activities, that is still our remit. And so to get more policymakers sort of seeing it through that lens will be really helpful because a lot of existing regulation does apply.

But then there's more that we need. Um, and when we need more regulation, it's because various types of automation are exposing, um, places where rights have not been shored up because they didn't need to be prior to that regulation. Maybe they did, it just became more obvious. Um, and so, um, you know, one example of that is privacy and data rights and having something where, you know, we have principles of data minimization because as soon as you have data that's been amassed, there's risk.

And it, and it also-- an analogy that I like there is, is, um, nuclear waste, right? Little bits of radioactive material, not so dangerous. When it's all in a big pile, it becomes much more dangerous. Little bits of data scattered here and there. Not so bad when it's been amassed, it becomes very dangerous. Um, so that's one of the ideas in the book, but I, I wanna give Alex some space to bring up one or two that she likes too.

Alex Hanna: Sure. I mean, the hard part of this now is we're in a terrible legislative environment, right? I mean, even, even, even quote unquote, blue states are doing really a really bad job on this, right? And so, you know, the session today just, uh, Will Oremus, who I think is at the Washington Post had been, um, some, he had done some reporting on this, and there was a US Senate Committee on Commerce, Science and Transportation, uh, that, and I think, uh, Ted Cruz is the head of that, um, the, the chair of that committee. The literal title of that is "Winning the AI Race: Strengthening US Capabilities in Computing and Innovation." And everyone they invited was, you know, it was Sam Altman, uh, the CEO of AMD, um, someone from CoreWeave, and then Brad Smith, um, the Vice Chair and President of Microsoft.

Right. And so the framing has been very much framed around winning the US-China race, right. And knowing that right now, even Sam Altman made this incredibly ridiculous statement, which was, if we are gonna win the race with China, then you cannot enforce copyright or something of that nature.

And I'm just like, great, great. Wonderful. Um, so really, really transparently, um, uh, exposing that. And so the kind of things that we were discussing in, you know, in the book is, you know, around data minimization and, you know, we, we need some kind of federal data minimization akin to the GDPR in the US. You know, we have the California Privacy, uh, Act, uh, which is okay and is like also encountering a lot of difficulties right now. Um, you have kind of fights happening in different places and there's so much money on the other side, right? And so it's, it's kind of feeling uphill. I mean, we're a little more optimistic in the book, in the book because we saw some really nice enforcement action, as Emily mentioned, from Lina Khan's FTC.

Um, and right now I think there's a lot of things that exist on the books that can, um, that could be addressed via existing enforcement action around things like, um, deceptive practices, around antitrust, around, you know, and at least this FTC still has the appetite for antitrust. They're pursuing one against Meta right now for, um, for different claims, not related to quote unquote "AI".

And so there's existing action that can be, can be done. Um, and you know, like there's still the ability in states to do things regarding data minimization and data and privacy. Um, whether that looks more like the, the California law, whether that looks like the bill that was in the US House for a second. I think it was, I forgot the acronym, like ADPA or something of that nature. Um, apologies for not getting the acronym right.

And there's a whole proposal, again, also thinking about data with regards to workers' rights. Um, and so on the federal level, there was the, uh, Protecting the Right to Organize Act that had been in the, um, being kicked around the Senate a few times and got very close to passing. Um, I don't think they were able to, uh, invoke cloture on it. And that would've done a few things, especially with regards to thinking about AI and replacing individuals with automation. So think about this with regards to gig workers and the way that gigification and, you know, synthetic media machines kind of go hand in hand, right?

And so you have some shitty text that comes out and you have to, as part of your job, fix it. And you might have had a full-time job and you got fired and you're now you're doing the same job at a fraction of the wage. So that way, thinking about labor rights as being part and parcel of the kind of piece of AI, of the suite of AI regulation, I think is really important.

And I think that's one thing. I mean, a lot of the things that we could push against AI and the worst ex worst excesses could be tied to other, other kinds of claims. So labor claims, environmental claims. I'm really heartened by organizations like CHIPS Communities United, which is a labor-environmental partnership focused on pushing against, kind of like ensuring that the jobs that go to semiconductor manufacturing guaranteed by the CHIPS Act are also jobs in which, you know, workers are not gonna get cancer and in, and PFAS are not gonna be dumped into the environment like they have been in Santa Clara County and from, from, from initial, um, waves of Silicon Valley development. Right? So where are the places in which this push against AI becomes this, you know, red, red, green, blue kind of demand?

Emily M. Bender: And I wanna just say that I remembered what the second thing was, thanks to a comment in the chat, which is the other discourse. So one discourse is, it's moving too fast, you can't regulate it. And the other is, don't you dare regulate it 'cause that's going to kill innovation. And in fact, regulation shapes innovation.

It shapes it towards the benefit of society. Um, if it's done well, if it's, it's actually effective regulation. And when someone says, oh, you're killing innovation, in fact what they're saying is, you're killing my ability to do this totalizing capture of some market and all of the money that comes with it.

Alex Hanna: Yeah. 

Vauhini Vara: Um, I'm curious, uh about the extent to which, like there are existing movements, uh, that you would, you know, that people might, here, here, viewing the, the, the stream might, uh, wanna have on their radar that are sort of like already mobilizing collective action on these kinds of issues. Um, and to the extent that like, some of that might not be fully formed yet, like how you envision something like that coming together.

Like given that that political action often is about, about collective power.

Emily M. Bender: So, I'll do a couple, but I know Alex, you've got a long list. So one thing that comes to mind, um, is something called EPIC, which is, um, a group behind better privacy law. And so that's, that's sort of one direction. Um, another one that's been around for a while, that's a little bit off to the side, but I think is valuable, something called Better Images of AI.

Um, and that's a series of images that you can use to illustrate articles about AI that don't have these anthropomorphizing, like, you know, a brain made out of a computer chip style things. Um, but there's, there is definitely, um, more, and I think Alex has more of these at her fingertips. 

Alex Hanna: Yeah, I'm thinking, I mean, I would say, I want to distinguish because there's been movements-- it's very funny. I was listening to this podcast that I generally like, and they had on this podcast, these people from this organization called Stop AI. And these people are existential risk people. They're like, they think, they're like, they're like, uh, you know, you need to stop it because it's in 2027 it's gonna become, um, self-aware and then it's gonna murder. 

And I'm like, okay, that's a, that's like a bullshit, you know? Like, that's bullshit. Like don't pay attention to them. Um, and I think there's some elements that like might, you know, like appeal, like, oh yeah, I don't like LLMs, but they're like, oh, it's gonna be sentient.

I'm like, oh, you do that the wrong way. (laughter) And it matters. I mean, it matters what the analysis is. Right? I mentioned, I mentioned this earlier today in another workshop, and I'm saying again, it's such a good quote from Tamara Nopper, knowing who you, knowing who to get mad at is part of the work.

You know? And knowing, like that is part of the work of movement actions. Movement is movement research, right? And so actually knowing who the target makes sense, it, it's part of actually, you know thinking about where, where to focus one's energy.

And so in terms of collective action, we've seen really strong efforts from some unions who have taken action on this. So I mentioned WGA, National Nurses United, um, has had, you know, these, they've had, uh, uh, uh, recent contract negotiations in which we, they've focused on algorithmic management and uh the intrusion of kind of, um, different, um, surveillance technology in the room. So fighting things like remote patient monitoring and kind of metricization um, of, of, um, of, of their, what they're doing in a way that either automates it or outsources it.

So NNU uh, strong in the, in California, very large nurses union, uh, where you've seen other moves in, uh, other orgs I mentioned, so the SEIU local in, uh, Pennsylvania. I'm gonna look it up real quick because there was just something about it. Um. Um, yeah, there was a report, um, let me drop it in the chat. Um, uh, but this is, um, what this SEIU Local 668 did, which is a social service employees union. They, um, they were working with, uh, Governor Shapiro there, and they, uh, basically were trying to prevent this kind of intrusion of generative AI into their workplace.

And so now workers have control some more control over that. I'll drop the, uh, I gotta confirm that's in this article, but it's in this long article, I think about Pennsylvania. So I'll put it in the chat in a second after I stop talking. (crosstalk) Yeah. Uh, so we, and then I think there's been other, other kinds of movements and, and I think the thing that I want to keep on hammering on is if there's kind of a notion of like, pushing against AI, like AI qua AI of like, there's a tool and this tool is gonna do X, Y, Z because it's gonna like, I don't know, gain sentient or something of that. Like it's, it's, that's, that's probably, I mean, we have to think about what it means when these things are paired with, um, the violation of rights, the violation of kind of what it means to be, um, someone who is a working person in, you know, like in an industry that is facing so much automation. Um, and what we've seen really successful campaigns do is pairing those, um, struggles together. As these, as these, um, as the org, these organizations really at the forefront of some of the worst elements of this.

Uh, environmental racism was another one that comes to mind. The movement against xAI in Memphis. And there's a really good article that was written by a journalist from prison that was talking about the community, um, in Memphis that was focusing specifically on, um, southwest Memphis and this area called Boxtown, which has historically been a place in which recently freed, um, Black folks went, after the, uh, Emancipation Proclamation, and settled in. That area became a hub of environmental racism due to industrial pollution. And now Elon Musk has 35 methane, you know, gas generators and they had originally only admitted to 15 of them. Now they have 35. This is a sweetheart deal that the mayor of, uh, Memphis, you know, had with xAI. Um, but now there's a coalition of environmental groups, um, fighting the environmental racism that's going on there. So thinking about how this is hurting people in the here and now, and not stop AI 'cause paperclip maximizer bullshit. Sorry, I have talked so much. (laughter) 

Emily M. Bender: Yeah. I wanna shout out one more organization really quickly, which is DAIR, the Distributed AI Research Institute.

Alex Hanna: Very kind.

Emily M. Bender: And yeah, um, I'm not exactly, um, impartial, but events such as Imagining Possible Futures, I think, are another way to connect with people and find things to do. 

Vauhini Vara: Um, I have two more quick questions for you. Um, here's just a question that I was personally curious about, having never co-authored a book.

Um, and also knowing that I think like, just knowing a bit about the research that you both do, assuming that there might be differences of opinion on certain aspects of the material here, I'm curious about what the two of you disagreed on while putting together this book, if anything, and how you addressed that, whether you ended up not putting, putting certain things in the book or finding some middle ground.

Emily M. Bender: I think that largely if there was a disagreement, one of us felt much more strongly than the other, and that person carried the day. I think that that's, that was my experience of it, um. Worth knowing that, uh, at least 90% of the writing collaboration happened asynchronously. Um, and, uh, some, some folks here know that Alex and I finally met in person in March when I was in the Bay Area, which was, um, amazing.

So, so I, you know, a lot of the work behind the book is the podcast, and so we, you know, experienced working face-to-face in the podcast. And there were some early meetings where we were talking, like, through Zoom, and then we did our final editing sprints together, sort of in a pair programming fashion.

Um, but most of it was, um, conversations in the comments in our markdown files. Um, and my experience of it was sometimes I felt really strongly and got my way. And sometimes it seemed like Alex felt really strongly and got her way. And maybe once or twice there was something where we couldn't agree enough to put it in the book.

But I actually can't remember any like that. 

Alex Hanna: I think if there was anything, there was--first, Timnit said something hilarious in the chat, which is we were working asynchronously, "so they couldn't fight each other physically, given the asynchronous nature of the collab."

Emily, Emily runs every day. Every day.

So she would outrun me. If I got her in a corner, maybe we could fight. Maybe we could.

Emily M. Bender: I mean, you're the one who does the contact sport, so. 

Alex Hanna: I do the con--I do, I play roller derby, so unless I'm, I'm not catching you unless I'm on skates though. So, uh, but it's, it's, uh, yeah. Emily runs while taking notes, says our producer.

Yeah. There was one episode where she was taking notes on a book she was listening to and dropping it into our group chat, which I'm like, wow-- 

Emily M. Bender: I stopped to take the notes. I wasn't doing that while running, but yeah. 

Alex Hanna: Um, and so, yeah, I mean, the thing is that Emily's a linguist, and a lot of the linguistic stuff.

You know, um, it's like, I defer to that, and I'm a sociologist and defer on things that are sociology. And I think the only moments where it was, like, really, uh, combative were like, I think this is way too weedsy, either take this out, and Emily's like, no, I don't want to, I'm gonna agree to put it in the end notes. (laughter) 

So yeah. And so, and we were talking to someone for another podcast and, uh, she had pointed out, and I don't think I had actually, uh, counted, but there are 50 pages of end notes in the book. So it's very, uh, you know, like it is very well cited and researched, and that was an important point to both of us.

Vauhini Vara: Yeah. And I'll, I'll vouch for that as well as a reader of the book. Yeah. Um, the last question I wanted to run by both of you, and we're, we're perfectly on time, is to say that I share with the two of you an interest in imagining possible futures, and I'm curious about whether you two could each describe a future you would be interested in living in with respect to the role of AI.

And maybe I should say, the, the technologies that tend to be classified under that title, AI, right, in our lives. Um, and I say "a future" 'cause I'm not asking you to commit to one particular future. 

Alex Hanna: Yeah. Well, so I'm gonna make reference to a piece that, um, Timnit and Asmelash Teka and I wrote, um, that was called "An Internet for Our Grandmothers," and, um, I think someone can drop that in the chat, and it was mentioned in, uh, the event from a few weeks ago. And what we talk about in that blog post is like, well, what if we had, you know, our grandmothers who speak these languages that are not well represented on the internet?

Um, so in the case of my grandmother, you know, Coptic and Egyptian Arabic, and not like Cairene Egyptian Arabic, like real country Arabic, you know. And she doesn't use a computer, I don't think she really used the phone, or like used a cell phone or anything. What would it mean for, you know, for her and them to be using, um, you know, an interface to access this, you know, what we know as the internet and what we know the internet can offer, in the sort of best, uh, incarnations of it. Right?

And I love the piece because, you know, and this is, um, less of my imagination than Timnit's and, and Asme's, but thinking about, like, multimodal interfaces, you know, voice interfaces that don't rely on, you know, using a keyboard. How do we know that it's going to work for them? How do we know, you know, their data's not gonna be swept up and used to, you know, make the new Facebook mega model that is good at, you know, country Egyptian Arabic, right? So, you know, like, what does that mean? And we don't have any of those guarantees now, right? I mean, what does it mean to have something that actually works for people and is actually useful for people.

So, I know it's vague, but you know, there's a, you know, there's, there's a vision in there that I think is, is helpful and we need to exercise that muscle. 

Vauhini Vara: Yeah. Thanks Alex. 

Emily?

Emily M. Bender: I decided to turn to something that's actually in the book. Um, but it started off life as a newsletter post. So Alex and I write a newsletter, and most of the posts there are by one or the other of us. Occasionally we do a joint post. Um, and this one was one of mine, and it came from frustration about the way universities, including my own, were jumping on the AI bandwagon. Um, and so this is a future, an imagined future that could be very close, about how a university could react differently to the AI moment.

So this is, um, from page 96. Um, it says, "Imagine if just one two- or four-year college put out a statement along the following lines: 'We're going to prepare for this AI future that everyone is talking about by committing to funding fundamental research across disciplines, but especially the humanities and social sciences. Of course, we're concerned about the ethical and equitable development and use of the technology, and that's why we need scholars who are innovating at the edges of our understanding of how humans experience life, how power works in society, and how we can reshape our social and economic systems towards justice, equity, and sustainability. And we recommit to our mission of training students to be critical thinkers across disciplines, who can consider sources of information and locate them within their context, who can evaluate toolkits for the tasks they are taking on, and decide which tools fit which task, and who can see through the glib marketing that power cloaks itself in.'"

I would love to live in that future. 

Vauhini Vara: Thank you. Thank you for sharing that. Um, okay, I'm gonna jump into questions. Um, and there are a fair number of questions, so just letting you two know, and maybe what we should do is, like, for each question, one of you should choose to answer it. Um, and then we'll move on to the next.

Um, Support Live Music wants to know, "What do you recommend as the best option to make disapproval known to the companies that force AI on users by introducing features that people cannot turn off or choose not to use, if deciding to stop using the product is not necessarily immediately feasible?"

Alex Hanna: Yeah. Ooh, that's a good question. I mean, I think in the book we say something like, don't click the button, but I know that's a pretty weak signal, especially if it's being forced on you. Right? I mean, you know, one thing that we hold very dear as one of our principles is ridicule as praxis. So really, I mean, you know, like, for better or for worse, we live in this very obnoxious attention economy with a set of influencers that do X, Y, Z. Right? And, you know, a lot of the times that gets attention, right? Um, so I mean, I think the more there is kind of some way to be upset about it or to organize against it, I mean, you know, these are companies, they can do whatever the hell they want, but, you know, if they want to keep on dumping a lot of money into an absolute trash fire, I guess, go ahead, but let them know that it's terrible.

Vauhini Vara: Um, okay. I'm gonna ask another part of Lutz or Lutz Fernandez's question, um, which I feel like I kind of skimmed over earlier. This was the teacher in the chat saying that they were "frustrated by the refusal to see the embrace of AI as analogous to the embrace of smartphones and iPads in classrooms, which we now understand has distracted students and impaired learning. Would love to know how you address this to resist 'AI' mania," and they put AI in quotations. 

Emily M. Bender: Yes. I, I think I would take the asking questions approach and say, how do you expect this to improve student learning, specifically? And then just like, keep asking, right.

'Well, it's, it's the technology of the future.' Well, what do you mean? How do you know that? Who's saying it? Um, and then also, um, you know, where is this coming from? Whose money is putting this in front of your face, and whose desire for money is making this your problem? I got to speak at something called the Pac-12 Deans Conference last academic year, when there was still a Pac-12. Um, and they wanted me to talk about AI, of course. And I got to say, look, the reason this has become your problem this year is not because there's been some amazing change in technology. There is no big phase shift in what it is that we can do with automation. This is your problem right now because of a whole bunch of venture capital money behind it.

And I would really hope that, you know, educators are constantly struggling against not having enough resources. And I understand that. But also I would hope that most people in that sector are not there to do the bidding of venture capitalists. And sort of drawing those connections would hopefully be helpful.

Vauhini Vara: Thank you, Emily. Alberto Honest AI asks, and maybe this was one that Alex can, can address, "What's the one thing about this whole con that really gets under your skin and why?"

Alex Hanna: Oh, gosh. Just one thing? (laughter) 

Vauhini Vara: They wrote a whole book. It's like 200 pages. (laughter) 

Alex Hanna: The one thing that really gets under my skin.

I, you know, I'm trying to think of the thing that is just the most upsetting. And there's a lot of things that are very upsetting. I'm torn. I think it's a tie. Uh, the first one, and someone kind of hinted at this in a comment, is the self-assuredness of the broligarchs, that they are like geniuses. This is a comment from A-N-I-C-T-8, and I think it's a great comment: they said, "A lot of people buy into the idea that tech broligarchs are geniuses, but there have been so many versions of 'the monorail!' to come outta Silicon Valley." Um, and so yeah, I think that really gets under my skin, like the idea that this is not just some kind of, um, way of getting rich and funding VCs, that it's not just, effectively, another scam.

Um, that really pisses me off. Um, I would say the second thing that pisses me off is tied to the first one, but it's just the notion that, like, things are gonna be different this time, right? Like, somehow there's gonna be a massive savings in productivity and you can go to the beach, you know, and, uh, what is this, Eric Yuan said this in an interview, like the CEO is gonna go to the beach, you know, and, um, not gonna have to answer his emails 'cause there's gonna be like an AI agent that's in the meeting.

First off, that vision of the future is just, like, absolutely batshit. Like, what do you think a meeting is? Um, and then the second part of it is just, like, do you actually think that we're going to a place under surveillance capitalism in which any kind of improvements and time savings are going to net, you know, workers taking time off? No, those people are going to be fired for this janky technology, and then they're gonna be replaced with a shitty facsimile and then hired back at a fraction of their wage. Like, we live under capitalism. That's not gonna happen. And I think that really grinds my gears. 

Vauhini Vara: Thank you, Alex.

Um, I'm moving around in these questions just to sort of start with the ones that seem most broadly applicable. Um, uh, Aybrielle, I'm probably not pronouncing that correctly, says, "What is your thinking on the idea of reclaiming AI, LLMs, image generation--" and I believe that the question means to say "for the people," it says "--from the people," so let's say "from / for the people in opposition to corporations, is that possible?" 

Emily M. Bender: So I, I think it is perfectly fine to imagine a situation where you have, um, either an individual person who takes their own artistic and creative output and decides to remix it using one of these algorithms, or you have a collective of people who are contributing artwork, you know, in whatever modality, and using that in an intentional way, um. One of the issues that you would need to confront is that if you make the models very big, it becomes expensive to retrain them when someone decides to revoke consent. Um, but I think that there are certainly reasonable things to do with this particular kind of automation when the data and the input were either consentfully given, or it is the artist's own work that's there.

Um, and you know, that is never going to be something with the same scale as a ChatGPT or a Gemini or whatever. Um, and so it will be hard to get, I think, to the same degree of apparent fluency. But I think that apparent fluency is really just part of the problem and not actually a useful feature from a, you know, sensible user's point of view.

Um, it is quite useful from the point of view of the people who are perpetrating the con. 

Vauhini Vara: Thanks Emily. Um, Martha A Bird asks, "What actions are happening in R-O-W, AKA rest of the world, outside of the US to bring specificity to the risks of AI?" Um, let me, yeah, let me end the question there. Alex.

Alex Hanna: Uh, the rest of the world is, is quite large, right? (laughter) Uh, and so, you know, I, I can speak a little bit to what's happening in the EU, right? And so the EU passed, uh, last year, I don't know what time is anymore, uh, but they passed a piece of, uh, legislation called the EU AI Act. And the EU AI Act did a few different things. Um, it effectively tried to set a standard on, what was it, prohibiting certain kinds of, um, elements of quote unquote "AI" which would be considered high risk. That included some elements of surveillance and policing. Um, and employment.

It did not go nearly far enough. Effectively, for any kind of quote unquote "foundation model," um, which is a term that we don't use because it's a hype term that comes outta Stanford, um, effectively any kind of large language model that was developed by an organization like, um, OpenAI or, uh, Anthropic, it would not hold them liable, but would hold liable the organizations that develop the downstream technology.

And so that was progress to a degree, but there are a lot of issues with it. And a lot of folks have written about the failures of the EU AI Act.

Um, I would say that the EU also has GDPR, which affords many more data rights to people represented in the data, what they call data subjects. Those are also people who are subject to decisions of AI systems. So that has been some progress. I can't really speak beyond that. Um, but the data regulation is a lot better in the EU than it is in the US. 

Emily M. Bender: I wanna just bring in a little bit more rest of the world too, um, which is, uh, we talk in the book some about the exploitative labor practices behind the content moderation.

Mm-hmm. Um, and uh, so you'll find some of that in the book. And also there's interesting developments, for example, with the African Content Moderators Union, um, pushing back in some lawsuits in Kenya and other places. 

Vauhini Vara: Thank you. Thanks both of you. Um, okay. So we're, we're just about at time, so if people need to jump off, that's fine.

And then maybe, if it's okay with you, Emily and Alex, should we try to squeeze in another maybe one or two questions? 

Emily M. Bender: Sure.

Alex Hanna: We could do one or two. Yeah. Maybe just one. 

Vauhini Vara: Maybe just one. Okay. Let me see. Uh, let me see here. Actually this question feels probably relevant to a lot of people here. Um, Kent_Bye asks, "I interviewed a lot of AI researchers at IJCAI in 2016 and 2018, and there seems to be a split between machine learning techniques versus other domains of symbolic AI, like constraints and planning, NLP, knowledge management and representation, robotics, et cetera. Do you find that some of the worst abusers of the AI content tend to be more from the ML and LLM side? More so than from others who are trying to take a more holistic approach?" 

Emily M. Bender: I would say yes. Although you do have people who were working in other areas, um, that might be publishing at IJCAI or even the NLP conferences, who jumped on the bandwagon and, as sort of the recent converts, could be among the worst.

Alex Hanna: Yeah, I don't, I don't really, I don't live in those worlds, so I can't say. I mean, from what I've seen, it seems like there is a real bandwagon effect, people are, you know, jumping on it, so yeah, I'm not, I'm not sure, um.

I would say, you know, one thing that it is doing is that it does very much kind of foreclose what could be other avenues of interesting, um, work in kind of computer science and engineering, I think partially because the, like, evaluation practices tend to be pretty shoddy. Um, like, you really can't get something published at NeurIPS or ICML unless it's, you know, got some language model or got some generative AI in it, and then it's souping up and trying to achieve on some benchmark.

So it already, you know, it's kind of messed up how a lot of the work in those fields goes. I mean, not like it was really perfect before, but you know, now it's make number go up and then I get published. 

Vauhini Vara: Um, okay. I think this is, uh, a good place to, to close and sorry to those of you whose questions we didn't get to, um.

Thank you Emily and Alex for the great conversation. The book again is "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want," um, which I really enjoyed and got a lot out of, and I think you all will as well. Thanks Emily. Thanks Alex.

Emily M. Bender: Thank you Vauhini, thank you for the wonderful questions and thank you to the audience.

Alex Hanna: Thanks all. Appreciate it. 

Vauhini Vara: Thanks audience.

Alex Hanna: All right. Bye.

Vauhini Vara: Bye. Bye.

[Music]

Emily M. Bender: That's it for this week! Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Christie Taylor. And thanks, as always, to the Distributed AI Research Institute. 

If you like this show, you can support us in so many ways. Order “The AI Con” at TheCon.AI or wherever you get your books, or request it from your local library.

Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at dair-institute.org. That’s D-A-I-R, hyphen, institute dot org. 

We'll be back with new episodes later this month. 

I’m Emily M. Bender - stay out of AI Hell, y'all.
