Mystery AI Hype Theater 3000

This is What Algo-cracy Looks Like, 2025.12.01

Emily M. Bender and Alex Hanna, Episode 68

Tech leaders are pushing the idea that automation can strengthen democracy — but as usual, their bold suggestions are based on castles made of sand. Alex and Emily tear down some flimsy arguments for AI governance, exposing their incorrect assumptions about the democratic process.

References:

Also referenced:

Fresh AI Hell:

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.

Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 

Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 68, which we're recording on December 1st of 2025, and today we're talking about government uses of automation, specifically the idea that so-called AI can strengthen democracy.

Emily M. Bender: This is an idea we're seeing increasingly pushed by tech leaders, but as usual, their bold suggestions are based on castles made of sand. The foundations you'd expect, like definitions of AI and democracy, are just missing. So Alex, let's start there. We've talked a lot, on the pod and in our book, about AI as a term and how it lacks a definition. And so just to reiterate, for anyone who's new: basically, if you hear the phrase artificial intelligence, you should replace it with mathy maths, synthetic text extruding machines, racist pile of linear algebra. Or recently, I've been enjoying saying magic beans. You can drop magic beans into the sentence to see what's going on.

Alex Hanna: Magic beans is nice. I like that. Giving it some fairytale taste. Some little twist on things. 

Emily M. Bender: Exactly. So we can try out magic beans today, but basically if someone is saying AI can do something and they haven't defined AI, then they're saying magic beans can do something. But we've got a new one today, which is "democracy," which also isn't defined in these artifacts. So I'm curious for you, what does democracy mean? 

Alex Hanna: I mean, democracy is a very complicated concept to suss out, right? I mean, there's so much of political science literature focusing on this, on political theory focusing on democracy. It's been something that political theorists have been dallying with for so long. And one should remember that democracy was kind of a bad word for Aristotle, who, like, actually thought that democracy wasn't a good thing. And the, Plato and Aristotle- and I apologize that I took these classes on political theory a long time ago, so sorry for Professor Kevin B. Anderson, one of my mentors in grad school, in undergrad. But you know, like, what was it- was it Plato or Aristotle who had said, we should have philosopher kings, and those people should be ruling because they know better. But I think the idea here, I mean, I can't offer a pithy definition of democracy and its desirability, but I think what democracy really signals to the writers here is not that there should be algorithmic governance, but that there's some way in which democracy and democratic preferences can be revealed just by having a lot of information, and then sort of using synthetic text extruding machines and other generative AI tools to suss out what those preferences are. And I think that's, it's very wrongheaded, because it doesn't really think about a system of government. It's thinking about democracy in terms of preferences and then aggregations of those preferences, which is just totally wrong.

Emily M. Bender: Yeah, and I think what's missing there, if I can try to put it really succinctly, is any power analysis. And that democracy, I think, has to include a notion that power is in the hands of the people, either directly or through some representative system of government. And that's not what preferences means. So let's keep that in mind as we get to democracy. Our first artifact comes from the New York Times. It's a guest essay in the opinion section, with the title "This Is No Way To Rule A Country," from November 11th, and it's got this weird pixelated portrait of George Washington.

Alex Hanna: It's ASCII art. 

Emily M. Bender: Is it ASCII art? 

Alex Hanna: Well, it's not pure ASCII art. It's an original, I mean the original, I think it's the official portrait of Washington, but then it has on his face like these, hashtags or- what's the name? Octothorpe. It's got those, and some hyphens and dots. So it's a poor approximation of ASCII art. 

Emily M. Bender: Right. Over an actual portrait. Yeah. Ooh, interesting. Okay. So this is by Eric Schmidt and Andrew Sorota. And their byline says, "Mr. Schmidt is a former chief executive and chairman of Google. Mr. Sorota is Mr. Schitt's-" Mr. Schmidt's, excuse me.

Alex Hanna: Sorry. Freudian slip much! 

Emily M. Bender: "Mr. Sorota is Mr. Schmidt's head of research." So Eric Schmidt has his own head of research. 

Alex Hanna: I mean, sure. I mean, he's got how many umpteen different operations. I imagine he has something called Eric Schmidt Corp. And he's got just a personal guy. 

Emily M. Bender: Yeah. So yeah, shout out Schitt's Creek, by the way, for that slip of the tongue.

Alex Hanna: Incredible. I love it. 

Emily M. Bender: All right. You wanna take us in here, Alex? 

Alex Hanna: Yeah. So I think we talked about this on a Fresh Hell segment of the pod. And so it starts off, "Albania is the first country to take a real step towards quote algo-cracy-" algocracy? I don't know where the- 

Emily M. Bender: Algocracy? I think so, yeah.

Alex Hanna: Not sure, but it's A-L-G-O-C-R-A-C-Y. "Government by algorithm. In September, its prime minister announced that all decisions concerning which private suppliers will provide goods and services to Albania's government over $1 billion annually will be made by an AI avatar named Die-la." 

Emily M. Bender: Diella? 

Alex Hanna: Diella? Yeah, it's "die" plus L-L-A. "Albania has long suffered from corruption, particularly in this realm. The unbiased, competent algorithmic Diella is thought to be the solution." You wanna start there? 

Emily M. Bender: Yeah. Well, so unclear as yet whether Schmidt and Sorota actually think that, agree that Dee-ella, Die-ella is unbiased, competent, and algorithmic, or not. We'll see. But it bothered me that they said that the decisions will be made by an AI avatar. Because the avatar is just an image projected to sort of anthropomorphize the algorithm. 

Alex Hanna: That's right. Yeah. As if we could say the Tupac hologram was actually performing at Coachella with Snoop Dogg or whatever.

Emily M. Bender: Meanwhile, magidin in the chat: "DIELLA- die, large language algorithm."

Alex Hanna: Yeah. It's giving me Sideshow Bob, where it's "Die, Bart, die" in the trial of, I forgot the episode reference, but it's, "No, it says 'dee' Bart, 'dee,' it's German." Anyways. Deep cut. But I think the, I mean, just the last part of the sentence is very telling already, that "the unbiased, competent, algorithmic Diella is thought to be the solution." As if there is nothing biased about an algorithmic system that would somehow be the solution to government corruption.

Emily M. Bender: I have a problem with "competent" in there, too. 

Alex Hanna: Yeah. There's a lot going on there. 

Emily M. Bender: A lot going on. And it's gonna get to be an even richer text. So the next paragraph, "It's a seductive trade. When democratic systems fail, simply replace them with algorithmic ones. But it's the wrong reflex. Algorithms can optimize efficiency, but they can't decide between competing values, the very choices that lie at the heart of democratic politics. Without transparency about how Diella reaches its conclusions, and without mechanisms to challenge its decisions, citizens will inevitably feel wronged and without recourse." So, your thoughts on that part? 

Alex Hanna: I mean, sure. That's fair. I don't disagree with most of that. The things that I would disagree with are, "competing values," and the idea that that's what lies at the heart of democratic politics. And I mean, values are one thing, but interests are another. And there are other things that democracies ought to optimize for, including equal treatment under the law, the fact that people must be held accountable, that government works in the public interest. I mean, there are many things that lie at the heart of democratic politics that are not just value based. And also the fact that certain people, because of power and what resources they marshal, are louder in democratic politics. So, many things happening there, but the rest of the graph is not that objectionable to me.

Emily M. Bender: Yeah. I have a little bit of an issue with, "it's the wrong reflex." Because that sounds like people are just sort of instinctively turning to algorithms, instead of reacting to lots and lots of marketing, and also people in power saying, "Well, we can just displace accountability over there. How's that?" 

Alex Hanna: Yeah. Yeah. A hundred percent. And sjaylett in the chat says, "No more government corruption. Now it's all moved to corruption within the LLM developer." So yeah. Displacement of accountability. So then the next paragraph: "Rather than replace democracy with AI, we must instead use AI to reinvigorate democracy, and make it more responsive, more deliberative, and more worthy of public trust." And so that's the kind of nut of the entire op-ed, and I wanna read the next three graphs just because it'll help provide some context. 

Emily M. Bender: Can I do something with that short paragraph? 

Alex Hanna: Yeah, yeah, yeah. Go ahead. Mm-hmm. 

Emily M. Bender: So, back to our question of what does democracy mean? When they say, making "it," making democracy more worthy of public trust. That doesn't make sense. So democracy is about sharing power across all of the people. And if we have a system where we don't trust it, it means that the power is not shared. 

Alex Hanna: Yeah. I mean, in that sense, they're using democracy as a stand-in for democratic regimes, or rather democratic governments. So the more correct term would be a democratic government, or multiple, but they're trying to speak more generally. I mean, even though the article mostly focuses on the US, and then, I guess, with this aside on Albania, which is, yeah.

Emily M. Bender: Yeah. All right. So one thing from the chat before you do those three paragraphs, thx_it_has_pockets says, "To replace democracy, one must have a democracy to replace." 

Alex Hanna: Yeah, yeah. We'll also get into that. Geez. So, "Unfortunately, this isn't the path we're currently on. A majority of adults-" and weird, weird link to "a majority," but it is, this is a link to a Pew Center survey- "of adults across 12 high income countries say they are dissatisfied with the way their democracy is working. We see this disaffection manifest in turnstiles ablaze-" which, weird- "smashed storefronts, and streets choked with tear gas, the seemingly endless churn of protest against governments that are perceived as out of touch, ineffective, and corrupt. Meanwhile, AI systems continue to improve rapidly. We already have models that outstrip human performers in fields like geometry and medical imaging. The public is also becoming more familiar with the technology, even if Americans use it-" and here it's another Pew Center survey link on "Americans use"- "significantly less than those in many other nations do, most notably in China." And there's a link to, it looks like an Ipsos survey. So a lot there. I wanna start off by talking just about the phrase "turnstiles ablaze"- which is a weird thing to have ablaze, 'cause turnstiles, I think of, like, subways- "smashed storefronts and streets choked with tear gas." And so, yeah. So this to me is-

Emily M. Bender: I'm mad. I'm really mad. 

Alex Hanna: Yeah. This is terrible. It's a terrible way of really focusing on protest. It's focusing on the property destruction element of that, which is a very, it's a classic conservative move in talking about protests, and talking about protest as more of a deviant behavior, which is a really old kind of style of thinking within social movement studies, but it's also one that is very classic for conservatives talking about any kind of public protest. Whereas if you think about public protest as necessary for robust democracies, that's not a problem, right? 

Emily M. Bender: And also, "streets choked with tear gas," who is... 

Alex Hanna: Yeah, exactly. 

Emily M. Bender: It's not the protestors that are throwing the tear gas canisters. 

Alex Hanna: Exactly. Yeah. Yeah, exactly.

Emily M. Bender: And there's just some wonderful stuff going on in the chat right now. First of all, sjaylett says, "Turnstiles and storefronts being load-bearing aspects of democracy." Well put. abstract_tesseract says, "Turnstiles Ablaze sounds like a great punk song, by the way." ndrwtylr says, "Turnstiles Ablaze has got to be a drag artist." And then, conclachat, "New derby name just dropped." Love it, love it, love it. Okay. So next, do you wanna get into this one a little bit too, about the- 

Alex Hanna: Well, let's get into this, too. 'Cause also getting into this, "most notably in China," right? And so this is, I mean, this is a classic Schmidt move. There are many people that wave China as this boogeyman. And we've seen it in many different, many of the different characters on the show. We've seen it with Reid Hoffman, we've seen it with Sam Altman, we've seen it with Trump. We've seen it with many people. And so the kind of move to China is like, well, even if Americans use it significantly less than people in other nations. So it's like, "Americans should be using it more! Look at China, this big boogeyman and this non-democratic entity. We ought to be using AI too, right?" 

Emily M. Bender: Yeah. And it's all predicated on this, well, what are we even talking about here? And "AI systems continue to improve rapidly." Well, what systems? Improvement measured how? Okay. So, continuing? 

Alex Hanna: Yeah, go ahead. 

Emily M. Bender: "Perhaps it isn't surprising then that people around the world already trust emergent AI systems over established democratic ones. Three rounds of surveys run by the Collective Intelligence Project between March and August 2025 consistently found that people believed AI chatbots could make better decisions on their behalf than their elected representatives."

Alex Hanna: So first off, I mean, let's follow- 

Emily M. Bender: Let's follow this link? 

Alex Hanna: Following this link. This is a link to a Substack, or it's a blog, but it's a Substack. It's on the Substack platform. By the Collective Intelligence Project, which we're not familiar with, but we poked around a little bit. So there's a few surveys here, about comparisons. So there's a chart here. The first one is people- the first subheading is "People trust AI chatbots more than they do their own elected representatives." And so there is a question here posed by the Collective Intelligence Project, which says, "AI could make better decisions on my behalf than my government representatives," and then is marking agreement and disagreement. And so in this case, the agreement on three of these waves, I'm assuming, I don't know if this is cross-sectional or panel data, but there's about 37% agreement in that and then about 27% disagreement. And then, but I think it's also significant to note that there's about 35% unsure across all waves, which is kind of fascinating. And then there's a few subsequent questions here about trust in chatbots versus people who are making them, and a few other kinda survey things. We don't have to get, go down that rabbit hole too much, but it's also just weird. I mean, if you want to have a meta conversation, too, this nonprofit is a bit weird. The Collective Intelligence Project, so we dug into it a little bit. It's kind of a bizarre organization, and it has a few people on staff that are really interested in, I think, behavioral economics. So the ED, Divya Siddarth, used to be in Microsoft's office of the CTO, and then also was at the UK AI Safety Institute. And then has a BS in computational decision analysis from Stanford. So it's giving me like big behavioral econ vibes. And then the research director is someone whose name is Zarinah Agnew, who has a background in neuroscience.
So it's a bit of a weird kind of eclectic mix of folks, I think very much in the collective decision making realm. And I think weirdly, I mean if you look at their funders, they're kind of mixed with a collection of traditional public tech interest funders, but also Google.org and at least two TESCREAL organizations. So one is the Survival and Flourishing Fund, and then I think the other is Future of Life Institute. 

Emily M. Bender: Those guys. 

Alex Hanna: Yeah. So it's kind of a weird mix. And then there's one organization called the Amaranth Foundation, that is doing research in longevity and neuroscience. So it's a little weird, and it's a little weird of an organization, and it's also a little weird about the surveys and the orientation towards the surveys. So, apologies for the kind of deep dive into the footnoting of it all, but like, the survey is constructed weird, and I think it's this idea about collective decision making and explorations of that with quote unquote "AI," which itself has a particular orientation.

Emily M. Bender: Yeah, I have to say, I finally got to their funders page and it's, I'm a little bit weirded out by these two organizations whose logo doesn't even give a name. This one's Amaranth, I think. And this one is- 

Alex Hanna: That's One Project. Yeah. 

Emily M. Bender: Yeah. That's really weird. 

Alex Hanna: Yeah. One Project is an interesting organization, but I don't know too much about them.

Emily M. Bender: So a couple great contributions in the chat here, sjaylett says about these surveys, "Even if it's true, it seems like it's less about AI chatbots. Did they also collect opinions on elected representatives versus tubs of lard?" And then arestelle says, "Quote, 'politicians are so bad, even the mathy maths are not worse,' end quote, isn't really a great pitch for chatbots." 

Alex Hanna: Yeah. And I mean, the helpful kind of control of this is, there's a lot of data. I mean, it would be interesting if they also compared the Pew data that exists on how satisfied Americans are with their representatives, which is typically pretty low. I mean, I don't want to speak without the data in front of me, but I think the stat for the approval rate for Congress is always hovering between 10 and 20%. So it's like, yeah, it's a pretty low bar to clear. I'd actually be interested if people were like, they did a poll that was like, would you rather continue to live in the US democracy, or are you waiting for the Rapture? I mean, it's like, the vibe is bad, and you need a better control than, "would you rather have this nebulous thing to control it?"

Emily M. Bender: Yeah. I pick the bear. All right. So maybe continuing here. Back to the paragraph in this New York Times editorial. The next paragraph: "This pattern is as old as politics itself. When democracy struggles to deliver, people turn to strong men, authoritarians, and now algorithms, hoping for competence over chaos." And again, what does democracy refer to here? And here's also competence associated with algorithms again. "But replacing democratic deliberation with algorithmic efficiency doesn't solve the underlying crisis. It merely substitutes one form of distance between people and power for another." Hey, I think I agree with that sentence.

Alex Hanna: Yeah. Right. 

Emily M. Bender: But that, that's not gonna last. "When algorithms determine, say, budget allocations or public benefits with no explanation and no appeal, the result is the same alienation and disillusionment we're already seeing from distant, unresponsive institutions, except now there's no one to hold accountable." Also good. 

Alex Hanna: Also agreeable. Yeah. 

Emily M. Bender: Yeah. "With human dignity relegated to an afterthought, polarization deepens and trust erodes further." 

Alex Hanna: Yeah. I mean, here, just a qualm with this focus on polarization, which I think is something that not only conservatives are guilty of focusing too much on, but so are liberals, so are people on the left. Because I think the point is not to solve polarization, as if you aggregate all these preferences and try to find something that's negotiable. It's that you protect people. You know, protect people across a society, and that people have recourse- that government decision makers ought to be accountable for particular decisions. So I mean, polarization is such an annoying frame, and it's just become very pervasive.

Emily M. Bender: Yeah, and it's also very much, I think, in tune with the sort of reactionary centrist idea that if we could just strike a balance, it would all be good, right? 

Alex Hanna: Yeah. Yeah. All right, so: "These problems compound the risks AI already poses. Today's AI powered algorithms are largely built around business models where conflict drives revenue." True. "Since outrage keeps people clicking, ranking systems surface the most divisive material, fragmenting public discourse and driving us into echo chambers that make consensus seem impossible." Like, the first part of like, yes, there's sort of like, the drive for ranking systems and the kind of attention farming, but then is it public discourse? Is it echo chambers? Is that the conversation we need to be having? So it's getting back to the polarization point. "As AI systems become more powerful and capable of ideological manipulation, these threats will only intensify." 

Emily M. Bender: Mm. No. So it's not that- AI systems doesn't refer to anything, and then automation is not something that has capabilities, right? And what is this, ideological manipulation sounds very much like the sort of Anthropic, "Oh no, we poked our chatbot and it said something scary," kind of a thing. 

Alex Hanna: Yeah. Okay. So, these parts are where we really get into the annoying parts of the proposal. "To save democracy, America needs a different path, one that uses AI to give people more voice in our policy choices and better results. AI can help governments-" 

Emily M. Bender: Magic beans, magic beans. 

Alex Hanna: Magic beans! "AI can help make governments more effective, cutting red tape, improving public services, and opening up decision making to the public. In Taiwan, the platform vTaiwan has spent over a decade demonstrating how AI-" I'm gonna keep on saying AI like that- "can strengthen rather than supplant democratic deliberation. When Uber arrived in Taiwan in 2013, it triggered the same conflicts that erupted in cities worldwide. Taxi drivers versus riders, incumbents versus newcomers, regulation versus innovation. Instead of letting the loudest voices dominate, or having lobbyists and bureaucrats decide behind closed doors, Taiwan used AI powered tools to facilitate some version of mass deliberation." Oof. Okay. Yeah. 

Emily M. Bender: Yeah. So, there's so much to be annoyed with here. One is, why do we need algorithmic systems to give people more of a voice, right? It seems like the issue here is the distribution of power and not the tech that we have to hand. And, AI cutting red tape, that sounds an awful lot like, "Yes, we're gonna speed up the permitting process," meaning we're gonna let a bunch of stuff slide. Which is bad. And then this characterization of Uber really annoyed me. It totally misses the point about what disruption meant with Uber. And drivers versus riders? I don't think so. 

Alex Hanna: Yeah. That's a very bizarre framing. 'Cause my sense is that the drivers, the taxi drivers are the ones that are contesting Uber drivers and Lyft- and not the drivers, rather, the companies. So Uber and Lyft, right? Because I mean, that's been like, in the US, many local taxi services have been decimated, right, absolutely by the entrance of these companies into the ride, not ride sharing, but in the taxi market. And so it's like, okay, you're pitting- this is such a weird thing to pit these people against each other. And then I think the other part of it is, yeah like, we could have solutions to lobbyists and bureaucrats and it would be limiting lobbying. Right? Isn't that a little easier? Like, how can you find proposals to make voices equal within legislation? And I mean, you don't have to go too far to see some of the things that Senator Elizabeth Warren has proposed, the ways that we can limit money in government, that we can not have super PACs. There's lots of tools that really don't need this technological fix.

Emily M. Bender: And then when you have companies like Uber that basically their business model is, "we're gonna break the law and just try to make it a fait accompli that this is how things work," you could enforce the laws. 

Alex Hanna: Yeah. Or you could get, I mean, get the government, I mean, in California, Prop 22 had a couple hundred million dollars of spending to counter and make a carve-out for Uber, Lyft, and DoorDash, so they didn't have to abide by worker classification laws, specifically AB5. And if you had limited spending, and the really disingenuous advertising around Prop 22, I mean, that wouldn't have been a campaign they won. And it was the most spending on any California proposition up to that point. So really wild that it's saying, "well, we can use quote unquote AI, magic beans, and bring everybody to the table."

Emily M. Bender: Yeah. So what vTaiwan is doing, I think is interesting. I think calling it AI does everyone a disservice, because it lumps it together with the Diella business, but other things as well. But let's read this next paragraph, and maybe talk about that a little bit. So, "Thousands of citizens submitted statements to vTaiwan and voted on others' proposals. The AI didn't make decisions. Instead, it mapped the landscape of public opinion, surfacing the proposals that bridged divides rather than deepened them. The result was legislation derived from collective intelligence-" which is a really gross term. "For Uber, this meant allowing the service to keep operating as long as its drivers were insured, professionally licensed, and didn't undercut taxi fares. Taiwan has since used variations of this approach for dozens of other policy challenges, including one building trust with citizens and showing that AI can provide the infrastructure that enables large scale democratic reasoning." So vTaiwan is using something called Polis, which we talked about briefly in the Superagency episode. And this makes it sound like there were people typing in things, and then the AI basically processed that text and gave a summary. And it doesn't look like that's what Polis is doing. It's not actually doing language processing, but rather providing a platform for people to express opinions, and then also, basically, I think, associate with other people's opinions, and then doing some analysis over groups of people and the opinions that find the most support across different groups.

Alex Hanna: Mm. Okay. That's interesting. Yeah, so it's not like this kind of, somehow you feed in all these different opinions and find some kind of averaging that a synthetic media machine would generate. Which it sounds, that's kind of the sense that this is what's happening, according to Schmidt and, what's his name, Sorota. And, yeah, so, and I think it's really interesting too, because there are, and have existed for years, these different tools for deliberation on platforms. So Decidim is one other tool that I know is used by the city of Barcelona and a few other municipalities. And as far as I can tell, doesn't really have a quote unquote AI component. And I think the problem, as anyone listening to this show, I hope, knows very well, and I think our listeners are smart enough to know this, is that the problem isn't technical, it's social. It's just, it's ensuring that the right people are actually, have the ability to be using these tools, which is very, very hard, especially in places where the people who are most vulnerable often don't have that access or don't have the time to even participate in these deliberative democracy processes.

Emily M. Bender: Yeah, yeah, absolutely. And I think that there is something maybe interesting in using a platform to help people find consensus amongst themselves. There was a documentary I think linked from the Polis website, some short BBC documentary on vTaiwan and what they're doing. And one of the things, one of the sort of tensions in that documentary was, do the MPs dislike this because it devolves more power? But then they interviewed this one MP who said, "Well, I'm also concerned about the people who are not comfortable participating in this digital platform." Right? So we can't say, okay, we're just gonna use this so that our representatives can get a really good sense of what's going on. This is somehow better than polling. It's gonna generate good ideas. And if you can't access it, too bad.

Alex Hanna: Yeah, yeah. And I mean, there's also other really interesting kind of proposals that don't use a digital platform at all. I mean, and there's things like participatory budgeting that has to do a lot more with having neighborhood councils and having people that are submitting proposals. And this happens with existing infrastructure that doesn't require you to be incredibly online. All right. Let's finish this out. 

Emily M. Bender: Let's wrap this up? 

Alex Hanna: Yeah. So, all right, so the next paragraph is pretty bad. "Democracy has always been limited by logistics. You can't fit millions of people in a room, you can't have everyone speak, you can't process that many perspectives. AI has the ability to remove many of those constraints by summarizing thousands of public comments, identifying common concerns and helping policy makers understand constituent priorities. It can make legislative text accessible to ordinary citizens, explain trade-offs in plain language, and help people articulate their preferences. It can even scale deliberation across geographies and languages, making global coordination feasible in ways that were previously unimaginable." This one's a real stinker. 

Emily M. Bender: Yeah, and I love this comment from our producer Ozzy. They say, "'Democracy has always been limited by logistics' has big 'since the dawn of time' energy."

Alex Hanna: Yeah, Aristotle famously said, "Well, we wanna hear everybody, but, the Agora? Not big enough. We can't fit everybody there." Yeah, that was his main concern, actually. 

Emily M. Bender: Yeah. And this is so, this whole paragraph is so devoid of any power analysis, right? And then on top of that, the authors here seem to be jumping from what vTaiwan was doing, to something that is doing a whole bunch of language processing, right? Summarizing public comments, identifying common concerns by doing that summarization, and then also, this style transfer, so making legislative text accessible to ordinary citizens. Sorry for the beeping truck outside. You know, you cannot use the synthetic text extruding machine in cases where you care about accuracy but don't have a chance to check it. And this sounds like one of those. And then, oh, and let's just make it global! We can do machine translation! Also not very plausible sounding. 

Alex Hanna: Well, one of the things that I really take umbrage at here, too, is the kind of fantasy that you can take legislative text and make it accessible without understanding how people receiving that may have different interests or concerns, right. You know, I'll take Prop 22 as another example. If you take Prop 22, and I don't know the language of Prop 22 off the top of my head, but you know, if you said, explain this easier, it might say, well, you're setting a wage floor for delivery drivers. And then you know, you're doing that, so it looks really nice on the face of it. But really what it is is, it's a proposition which is encouraging worker misclassification, which means that workers cannot unionize, which is really keeping them atomized. And you need human interpretation to do that work, as well as really having an idea of who you're speaking to, especially if they're people from the lower class or the working class, or are people who are casualized workers themselves, right? This really has a significant implication for people, depending on who they are. 

Emily M. Bender: Yeah. Yeah. All right, I'm gonna read these last two paragraphs so we can land this plane and maybe talk about the other one. So, "But the window for this path is not infinite. Every month that democracies remain dysfunctional while AI capabilities improve, algocracy becomes more appealing. It remains to be seen whether Diella will root out corruption in Albania or simply add another layer of opacity to government." No it doesn't! 

Alex Hanna: Yeah, no. Didn't the legislators, they were throwing trash at the people who implemented that, right?

Emily M. Bender: Yeah. "But it will certainly attract imitators-" not if we have anything to say about it- "especially in countries where corruption is endemic and institutions are weak." Meaning, countries like that are easy prey for the tech companies that are trying to make money. "The future of democracy doesn't require us to reject AI. Quite the opposite. We need AI to make democracy work for the 21st century. But we also must be careful about what we ask AI to do. Not decide for us, but to help us govern ourselves better." Argh. 

Alex Hanna: Yeah. I guess. I guess, Eric Schmidt, terrible. All right. 

Emily M. Bender: Yeah. All right. So, our other artifact here, which we're going to go over much more quickly, is a piece in The Guardian by Nathan E. Sanders and Bruce Schneier, entitled "Four ways AI is being used to strengthen democracies worldwide." And because The Guardian doesn't use the New York Times stylization, we don't have to say "A.I."

Alex Hanna: Yeah. Right, with the periods after every letter. Yeah. 

Emily M. Bender: And this is from November 23rd of this year. The subhead is, "The dangers of artificial intelligence and its potential to consolidate power are clear. But used fairly, it can be a boon for good government." So how do you wanna take us into this, Alex?

Alex Hanna: Well, okay, so this is, effectively, this is promo for Sanders and Schneider's book. Really kind of sad to see this from Schneider, who I know is kind of a towering figure in cybersecurity. 

Emily M. Bender: Yeah. Schneier, I think, not Schneider. 

Alex Hanna: Is it? Is- oh, there is no D there. 

Emily M. Bender: There's no D there. Yeah. 

Alex Hanna: So Schneier- apologies, Bruce. And so this is their book, "Rewiring Democracy: How AI will-" I'm still gonna say it that way- "will transform politics, government, and citizenship." "In it-" the book- "we take a clear-eyed view of how AI is undermining confidence in our information ecosystem, how the use of biased AI can harm constituents of democracies, and how elected officials with authoritarian tendencies can use it to consolidate power." Okay, true. 

Emily M. Bender: But wait, wait. Can you call your own book clear-eyed? 

Alex Hanna: No, I don't think so. I mean, it's a little self-aggrandizing. But you know, whatever, you're trying to do promo for the book. "But we also give positive examples of how AI is transforming democratic governments and politics for the better." And then they review four stories in Japan, Brazil, Germany, and the US. So, let's do the first one. It's pretty bad. So this is Japan. Actually, why don't you take this one, because you will have a better pronunciation of the names. 

Emily M. Bender: All right, so, "Last year, then 33-year-old engineer Takahiro Anno was a fringe candidate for governor of Tokyo. Running as an independent candidate, he ended up coming in fifth in a crowded field of 56-" my goodness- "largely thanks to the unprecedented use of an authorized AI avatar. That avatar answered 8,600 questions from voters on a 17 day continuous YouTube livestream-" oh, the horrors- "and garnered the attention of campaign innovators worldwide." So basically Anno-san, as they give it here, "two months ago, he was elected to Japan's upper legislative chamber, again leveraging the power of AI to engage constituents, this time answering more than 20,000 questions." So Anno-san here, basically set up the synthetic text extruding machine to speak on his behalf, and people liked it, apparently. 

Alex Hanna: Yeah. I mean, that's pretty rough. It's also interesting, and I guess the part of this, when I was reading this, I was thinking of other popular campaigns, people who've been kind of dark horses in fields. I'm thinking about, the Zohran Mamdani campaign, and where it seemed like that wasn't, like, fielding questions was not the big thing there. I mean, it was really interesting too, because in one of the debates, Cuomo had said, "Well, I just need to get better at social media and then I'm gonna be like Mamdani." I'm like, no, you're a sex pest. That's the problem, right? And there's ways in which there's the message discipline from Mamdani and also the kinds of constituents that he was seeking out. And that ground game was also pretty impressive. So, yeah, I thought that was such a weird deployment there of the use of some kind of avatar as a means to respond to people, rather than just being a clear, someone that's clear and on message. 

Emily M. Bender: Yeah. Yeah, absolutely. And as abstract_tesseract says in the chat, "This feels like the opposite of strengthening democracy." Yes. Yes it does. And there's something else from the chat that I have to bring up, when we were talking about the different stylization of AI.

Alex Hanna: Yeah, I see that. 

Emily M. Bender: So ndrwtylr says, "Could be worse. The New Yorker calls it-" and then it was written out as capital A, I with a diaeresis. And I laughed at that. And ndrwtylr said, "Good luck reading that one out." And I think the pronunciation is ayeee! 

Alex Hanna: I know. Yeah. That's wonderful. It's got the same I as Alaska Thunderfuck who says, "Hiii!" So, Aiii! 

Emily M. Bender: Aiii! Nice. Okay, so you wanna do this Brazil one quickly? I think we've got four cases. I'm not sure how many we're gonna get to here.

Alex Hanna: There's four. Yeah, maybe we should, let's at least do Brazil and Germany because the Germany one has a fun- well, actually, let's skip- ah, they're all very- 

Emily M. Bender: Well, okay, so, Brazil, to summarize, they're basically saying, there is an awful lot of lawsuits going on in Brazil, Brazil is notoriously litigious. And the government is basically using Aiii!, right, to somehow handle things faster. But also, the people filing the lawsuits are filing them faster with Aiii!, and so, really, stalemate, I guess. I don't think we are seeing democracy served in this case. 

Alex Hanna: Yeah. And the German case, there has been a nonpartisan voting guide called Wahl-o-Mat. And, which, in the early version of this, "Officials convene an editorial team of 24 young voters, with experts from science and education, and develop a slate of 80 questions." And then they put them to all the registered German political parties, and then they publish it online. And then now they utilize, they leverage Aiii!

Emily M. Bender: Wait, so the first version sounds really sensible, right? 

Alex Hanna: Yeah, yeah. And then, "In the past two years, outside groups have been innovating alternatives that leverage Aiii! The first came from Wahlweise, a product of the German AI company AIUI-" or Aiii! Ooiii! If you're German, I'm sure you're flinching at that. "Second, students at TU Munich developed an interactive AI system called Wahl.chat. The tool was used by more than 150,000 people within the first four months." But then the next paragraph says, "However, German researchers studying the reliability of such tools ahead of the 2025 German federal election raised significant concerns-" and it links to an arXiv paper- "about bias and, quote, 'hallucinations,' AI tools making up false information." So-

Emily M. Bender: Well, yeah. So they took something where, let's ask the parties, let's organize what they've said and give people this online quiz so they can figure out which party matches their perceptions. And that's, let's just throw a chatbot in the middle of that, so we get a whole bunch of garbage mixed in. Like, what the hell? 

Alex Hanna: Yeah. So you're purposely spreading misinformation here to those 150,000 people. So, "Finally-" so, the last example is that CalMatters has a Digital Democracy project that collects every public utterance from California elected officials. And then they've released something this year, which is called AI Tip Sheets, and they say, "The feature uses AI to search through all this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution." Now this is like, when you dig into it and you click through, there's an interview with the CEO of CalMatters, Neil Chase. And so if you go down to talking about the tool, Neil Chase says, "We now go through the data using AI. This isn't generative AI, but it is asking AI to crunch a bunch of data and look for anomalies, for things like a member of the legislature who's been getting a lot of money from a certain interest group for a long time and has cast a vote today that was against that interest group's positions." And so this is another case where calling something AI really obfuscates what's actually happening here. It seems like this is more like a tool that's used for anomaly detection, which could be as basic as having a univariate or bivariate distribution and identifying outliers. And it's really obscuring what's happening here, and yet the authors here, Sanders and Schneier, are lumping it all together with these generative AI tools. 
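[Editor's note: the kind of non-generative anomaly detection Alex describes here can be very simple. The following is purely an illustrative sketch of flagging outliers in a univariate distribution; it is not CalMatters' actual pipeline, and the data, threshold, and function names are hypothetical.]

```python
def flag_outliers(values, threshold=2.0):
    """Return indices of values more than `threshold` population standard
    deviations from the mean (a basic univariate anomaly check).
    Note: with small samples, the maximum possible z-score is bounded by
    sqrt(n - 1), so the threshold must be modest for tiny datasets."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # all values identical; nothing can be an outlier
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical example: one contribution figure far outside the rest.
donations = [1200, 1350, 1100, 1280, 50000, 1330]
print(flag_outliers(donations))  # flags index 4, the 50000 entry
```

In practice such a tool would use more robust statistics (e.g. median absolute deviation) and multivariate checks, but the point stands: none of this requires a generative model.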

Emily M. Bender: And unfortunately- this NC is Neil Chase, is that right?

Alex Hanna: Yeah, Neil Chase. 

Emily M. Bender: Yeah, he's sort of leaning into this too, so he says, we asked the AI to do something. We asked the AI to crunch numbers. And it's like, no, just, we ran some statistical analysis, we collected a bunch of data, we were careful about the data, and then we looked for outliers. That's fine. But no, they have to lean in and call it AI, which is really frustrating. 

Alex Hanna: I think one of the strategies that they're using here too, and they say a little bit further down, is the funding that they're getting, right? So they're funded by Arnold Ventures and the Knight Foundation. And this is one of the places where I think it's been really, I mean this is kind of, if you're a funder and listening, this is really a case where funders who are working in this area really need to be careful with their language, and like, what are you actually funding? And are you interested in funding things that are generative AI? I mean, why are you doing that? This is a tool that seems really useful. But it would be better if it was described as like, anomaly detection, or something that helps us make sense of data, which is, I think, pretty unobjectionable, rather than using generative AI and things that are going to output false information and make pretty bad mistakes.

Emily M. Bender: Yeah, absolutely. And, no credit to the authors here- Sanders and Schneier- for basically lumping all of this together. And I think I wanna conclude with this short paragraph, where they say, "AI technology is not without its costs and risks, but we are not here to minimize them-" oh- "and we are not here to minimize them. But the technology has significant benefits as well." "The technology" doesn't refer to a single kind of technology, so separate it out. And talk about, look, here's some ways that we can do some data processing in ways that are beneficial if you're trying to organize information about large groups of people, for example, or do data journalism- see our recent episode with Decca Muldowney. But don't just munge it all together. That's not helpful. 

Alex Hanna: Yeah, a hundred percent. 

Emily M. Bender: All right. Are you ready to transition to Fresh AI Hell? 

Alex Hanna: Yeah. There's been some notes in our back channel, so I actually have some, I was trying to multitask and write down some freestyles. 

Emily M. Bender: Let me tell the people what I'm asking you to do. It is time to hear the latest hit single from Ethical Autonomy Lingua Franca. Longtime listeners might remember the breakout hit "Rat Balls." And this time, the title of the new single is "Turnstiles Ablaze." 

Alex Hanna: Yeah. Okay, great. All right. Our name is Ethical Autonomy Lingua Franca. This is our new one, "Turnstiles Ablaze." And it starts off with some like, it's got one of those really long bass intros. So think about that, I forgot the song that's on, I think "Evil Empire" by Rage Against the Machine. But it's like, duh nuh nuh nuh, nuh nuh nuh, nuh nuh, duh nuh nuh, nuh nuh nuh, nuh! Duh nuh nuh, nuh nuh nuh- And then it's got that, and then it's: Turnstiles ablaze! The most frequent craze! Get off your couch and stop being lazy! Get up, it's not a phase! Sorry. And I did too much growling, and I'm coughing.

Emily M. Bender: Oh no! And I can't pick it up, sorry.

Alex Hanna: I've sabotaged myself. Please, keep that in the audio, Ozzy. All right, let's go. 

Emily M. Bender: Let's go. All right. So, thank you for that blazing transition into Fresh AI Hell, where certainly, the turnstiles are always ablaze every time you try to enter the transit system. 

Alex Hanna: Yes. 

Emily M. Bender: Okay. So, first piece here is actually published by Amazon themselves, dated November 6th of 2025. Under news slash books and authors, headline, "Amazon introduces Kindle Translate, an AI powered translation service for authors to reach global readers. The service translates between English and Spanish, and from German to English, expanding opportunities for Kindle direct publishing authors." So this is, I think, people who are basically self-publishing through Kindle. And Amazon saying, "Here, you can have unverified machine translations into another language, so you can also try to reach people in that language." And thinking about the care with which, especially the Italian translator for our book has been following up with specific questions and like, how important it is to me that what we're saying comes through in another language. This is just horrifying. 

Alex Hanna: Yeah. I mean, anyone who has any insight into translation of fiction and nonfiction just knows that there's so much involved in translation, that even with these, you know, two languages that are spoken in Western Europe that are considered to be very well sourced, not low resource languages, there's so much that could go wrong here. And I'm thinking here, like, even with the translation of the book into Spanish, we're having a long, drawn out discussion of the title. And then the German to English- that is very funny because, I'm gonna reveal my true Marxist bona fides here, but I'm reading the new translation of Marx's Capital that just came out last year, and there are long, long footnotes in the back of the text of just like, the translation of value and value form. Kind of like, the sort of references Marx is making to Hegel and Kant, and all the different people. And so, it's really horrifying to see this. 

Emily M. Bender: Yeah. All right. But we can't stay here too long, 'cause we've got a lot of fresh AI hell. 

Alex Hanna: I know. 

Emily M. Bender: You can have this one. 

Alex Hanna: So this is a skeet from Carl T. Bergstrom. So, "What do you do after you're done jumping the shark? Whatever it is, Nature Careers is all in." And so, can you click through? 'Cause I can't see the whole thing. So this is a Nature article, a career column nonetheless, from November 11th that says, "I have Einstein, Bohr, and Feynman in my pocket. Grappling with difficulties in your career? Try asking an AI powered advisory panel of experts, suggests Carsten Lund Pedersen." Woo! 

Emily M. Bender: No, thank you. 

Alex Hanna: Yeah. Terrible. And I mean, I don't want any of those people really advising me.

Emily M. Bender: Yeah. Nor do I want them in my pocket. Hello! So, all right. This next thing comes from Slashdot, to which my first reaction was, "Slashdot's still around?" And so user fjo3 shares a report from TheWrap: "There are apparently at least 175,000 AI generated podcast episodes on platforms like Spotify and Apple. That's thanks to Inception Point AI, a startup with just eight employees cranking out 3000 episodes a week, covering everything from localized weather reports and pollen trackers, to a detailed account of Charlie Kirk's assassination and its cultural impact, to a biography series on Anna Wintour." 

Alex Hanna: Geez. Yeah. 

Emily M. Bender: Yeah, because clearly the world needs more podcasts. 

Alex Hanna: Yeah, yeah. Geez. Just another kind of slopification of another information ecosystem. 

Emily M. Bender: Yeah. And the worst thing is, if anyone's listening to algorithmic feeds, they might land on one of these and not know it. 

Alex Hanna: Yeah, a hundred percent. 

Emily M. Bender: All right, next.

Alex Hanna: All right, so this is from Instagram. This is from an account called newyorkcity.explore. And I don't know if this image is AI generated or not, but the text says, "New York City is set to debut the world's first AI dating cafe this December, offering guests a chance to take their digital companions on real life quote 'dates.' Created by EVA AI, the popup features candlelit single seat tables, phone stands, and a minimalist romantic atmosphere designed for one-on-one time with an AI partner." Yeah, really awful. And, I know this is an increasing trend, but like, woof. Just this makes me very, very upset. And... 

Emily M. Bender: Absolute best case scenario is that the people sitting at these tables will start ignoring their phones and start talking to each other. 

Alex Hanna: Yeah. I mean, that would be great. I would love to see it. 

Emily M. Bender: Yeah. Eugh. All right, next. So this is from Rusty Foster on Bluesky. It was posted on November 25th. And Rusty's comment is, so, it's, sorry, it's a quote skeet of a post by Davey Alba- shout out Davey Alba, one of the good journalists in this space- saying, "New: AI recipe slop is overrunning search and social. Food creators say Google's AI overviews and glossy fake food pics are drowning out real tested recipes, collapsing traffic and setting home cooks up for disaster, especially this Thanksgiving." And Rusty says, "This is so funny because recipes can literally already be copied and reposted with no attribution or copyright. All AI does here is add statistical noise and make the stolen content useless. AI can't even steal content without fucking it up."

Alex Hanna: Yeah, incredible stuff. 

Emily M. Bender: Yeah, but feel bad for the food bloggers losing traffic, for sure.

Alex Hanna: Yeah. All right, so this one is a quote skeet, and so the original is an image that was published in Nature Scientific Reports. And the original is "Autism Spectrum Disorder." And there's like, you know, it's a terrible extruded document image. And it says, all- the original person speaking about it is Erik Angner, and he's quoting some of the fantastic fuckups here. So, "'Runctional features'? 'Medical fymblal'? 'One toll line storee'? This gets worse the longer you look at it. But it's got to be good, because it's published in Nature Scientific Reports last week." And so it's a slop image that was peer reviewed, and I know has been since retracted. I love, there's a real, there's some great things here. So one of the graphs here, it says autism, and it has a bike, and the bike has maybe like a pie chart, and it says "score: 0.93." And the "one toll line storee" has something that either looks like a rabbit vibrator or a kidney, depending on your disposition. Yeah, it's just, it's full of slop, here.

Emily M. Bender: Yeah. And DEI Virologist had a nice comment: "The paper is about how AI-" Aiii!- "can improve autism diagnosis and management. Not giving me any faith that said diagnosis and management will be accurate based on the output below. Keep this shit out of healthcare, it's already bad enough." 

Alex Hanna: Yeah. Bad stuff.

Emily M. Bender: All right, so I'm gonna give you this one, 'cause you're closer to the story, and then I'll do our chaser. 

Alex Hanna: Yeah. My partner actually sent this to me, so this is an Upwork ad, and it is the- and so, Upwork being a gig worker platform. And so the title is, "Editor for letter to the editor or guest editorial." So, an editor for a letter to the editor, which is funny and recursive, posted five days ago. So, "Summary: Seeking an experienced editor to refine a letter to the editor or guest editorial for publication in Inside Higher Ed." Already incredible. "The goal is to transition from an AI tone to a more personal voice, ensuring clarity and credibility with the target audience. The draft is clean, but requires developmental editing to sharpen the thesis and flow." So just, already pretty depressing that someone that is submitting something to Inside Higher Ed, which, that and the Chron have become pretty bad at AI coverage, especially within education. But someone wanting to then find someone to edit their bad editorial. At that point, why don't you write it and then have someone help edit it from your original voice?

Emily M. Bender: Yeah, exactly. What's the point of- and it's also interesting to me that they, like, are upfront about this being AI generated in the first place. All right. And then, for our chaser this week, this is in CBS News. Local news, this is the local Chicago CBS station, from November 26th. Headline, "Hundreds of Chicago residents sign petition to pause robot delivery pilot program over safety concerns," by Noel Brennan and Mikayla Price. And basically, it sort of tells a story of going out for a nice stroll on the sidewalks of Lakeview, and encountering these delivery robots, and sort of the experience of feeling like you have to get out of the way, and how there's gonna be many more of them. And these residents are saying, no thank you. This is pedestrian space. It should be reserved for people who are going places on foot or in their wheelchair, and not having to deal with these robots. 

Alex Hanna: Yeah, and I appreciate here the quotes here too, focusing on accessibility. And so the person here, what is their name? Robertson is their last name, and says, "Sidewalks have to be accessible for everyone. They have to be safe. Those aren't negotiable." And so, there are considerations for ADA, and the ways that these things are taking up sidewalks that are meant for those on foot and in assistive devices. So, really helpful. 

Emily M. Bender: Absolutely. And there we are.

Alex Hanna: All right. That's it for this week. Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks, as always, to the Distributed AI Research Institute. One of these days, I'm gonna know how to pronounce where I work. If you like this show, you can support us in so many ways. Order "The AI Con" at thecon.ai or wherever you get your books, or request it at your local library. And I should say, you could also purchase it for a friend, family member, whoever enjoys fine books, this holiday season. 

Emily M. Bender: Yes, give books for the holidays. But wait, there's more! Rate and review us on your podcast app, subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at dair-institute.org. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender. 

Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.

Alex Hanna: Aiii! 

Emily M. Bender: Aiii!