Mystery AI Hype Theater 3000
How the War Department Learned to Stop Worrying and Love AI (with Naomi Klein), 2026.02.09
AI boosters and the US military are engaged in a lethal love affair. Award-winning journalist Naomi Klein joins Emily and Alex to discuss how glitchy technology supports global imperialism — and vice versa. Plus, we explore which Dr. Strangelove characters are currently running the US war machine.
Naomi Klein is a columnist for The Guardian and the international bestselling author of nine books published in over 35 languages. Her new book, End Times Fascism: And the Fight for the Living World, written with Astra Taylor, will be published in September 2026.
References:
- War Department press release on GenAI.mil
- Old-school racism from Anduril CEO Palmer Luckey
- "How artificial intelligence is reshaping the future of war"
Previous episodes referenced:
- Episode 61: Winning the Race to Hell (with Sarah Myers West and Kate Brennan)
- Episode 50: Petro-Masculinity Versus the Planet (with Tamara Kneese)
Fresh AI Hell:
- "Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog"
- "Amazon outbids WA utility for one of nation's largest solar projects"
- "AI data centers are forcing dirty 'peaker' power plants back into service"
- "Mamdani Targets 'Unusable' AI Chatbot for Termination"
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our merch store is now live on the DAIR website!
Find our book, The AI Con, here.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Ozzy Llinas Goodman.
Alex Hanna: Welcome everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 72, which we're recording on February 9th, 2026, and we're joined by an incredible guest this week, Naomi Klein. She's an award-winning journalist, columnist, and the international bestselling author of nine books published in over 35 languages. A columnist for the Guardian, her writing has appeared in leading publications around the world.
Emily M. Bender: Naomi is also the honorary professor of media and climate at Rutgers University and is associate professor in geography at the University of British Columbia, where she's founding co-director of UBC's Center for Climate Justice. Her new book, End Times Fascism: And the Fight for the Living World, written with Astra Taylor, will be published in September 2026. Welcome to the show!
Naomi Klein: Thank you! I'm so glad to be with both of you.
Emily M. Bender: We're so excited.
Alex Hanna: Thanks so much for joining us! And this week we're unpacking some of the overlap between AI hype and imperialist propaganda, specifically when it comes to the US military and immigration enforcement.
Emily M. Bender: Of course, and unfortunately, this is an especially timely issue right now, and we're honored to have Naomi's expertise to help guide us through the propaganda. And with that, I'm going to pull up our first artifact, which comes from the absolutely horrific desk of the War Department.
Alex Hanna: So this was published on December 9th. It is a press release. They say "The War Department," this is the title, "The War Department unleashes AI on new GenAI.mil-" that's the top level domain- "platform." So the first paragraph is, "The War Department today announced the launch of Google Cloud's Gemini for Government as the first of several frontier AI capabilities to be housed on GenAI.mil, the department's new bespoke AI platform. This initiative cultivates an AI first workforce, leveraging generative AI capabilities to create a more efficient and battle ready enterprise. Additional world-class AI models will be available to all civilians, contractors, and military personnel, delivering on the White House's AI action plan announced earlier this year." So first blush on that, y'all.
Naomi Klein: Yeah, I mean, I would just say that this was just the first of a few such announcements. I think the most recent one, Pete Hegseth went to SpaceX and announced it alongside Elon Musk, that they were also embedding Grok on all capabilities. And they made the announcement just in the middle of the Grok undressing global uproar. So the timing of it was really something. The other thing I'd mention is that the Doomsday Clock was moved forward eight seconds this year. That was also announced in January. And one of the reasons, when they were describing why they had made this decision- I mean, I don't think it came as a surprise to anyone that this year wasn't a great year for the Doomsday Clock. But they talked about how the US and China and Russia were all embedding AI in their military systems, like the countries with the largest nuclear capability. And then the other thing that we might think about at the same time is that the last treaty regulating offensive nuclear weapons just expired, I think four days ago. And so for the first time in decades, we are without a treaty. And we have Hegseth overseeing it, so everything will be fine.
Emily M. Bender: Great. So let's have more and more weapons, and then also embed the, random text generators into crucial systems across the military. And the rest of us just have to hope that the people who are closest to the launch systems know better than that.
Naomi Klein: Yeah, you know, Astra and I have just gotten to a draft on this book- which is alarming, because when you were saying, Emily, it's coming out in September, it's like, god, that's really soon! But you know, where we ended up is really just, we need a new kind of disarmament movement, one that is not only addressing the arms, but is also addressing the concentration of power in technology and the concentration of wealth. Because all three of those very powerful forces have now convened, right? And so I think what we see with the Trump administration is, the most powerful tools have now found the worst people in the world. And also the glitchiest tech. So what could possibly go wrong?
Alex Hanna: Yeah. And they reference the AI action plan, which we did talk about a few episodes ago, and dropped that link in the chat. But so much of this is around security, and so much of it is oriented around winning the race with China. I mean, this is kind of how it is framed, completely. And it's interesting how Gemini gets called out here, which is of course a Google product, and then yeah, Grok. And so, I'm curious, Naomi, just in terms of these unholy alliances- Musk is someone that we expect, but then Google being quite explicit about this war making. Wondering if you can talk a little bit more about that?
Naomi Klein: Yeah, we're a long time from "Don't be evil," as you know.
Alex Hanna: Of course.
Naomi Klein: And it's interesting, 'cause that slogan first was coming up around Google's decision to pull out of China. And this was a long time ago, with the early revelations that US tech was being used to help locate Chinese dissidents. And so Google pulled out of China, and Yahoo, feels so long ago. But what has become clear now, for a while, is that actually they're jealous, and they want that level of kind of integration with military and government. They want the contracts. I mean, I think the backdrop for all of this is, there's nothing quite as lucrative as being Lockheed Martin, when it comes to just having a business model where it's really all about these relationships with government and the revolving door. And so yeah, the subtext of all of these releases is: we're coming for Lockheed, we're coming for Boeing. And they wanna be evil! They call it "patriotic tech." They wanna be lethal. And the other piece of it, I think, Emily, is it's very related to a theme that you're always talking about around whether or not these products are any good, whether or not people wanna buy them, whether you can justify the massive, multi-trillion dollar build out of data centers worldwide to get to compute, and so on. They don't have a business model that makes any sense, right? Which is why we are always also talking about a potential bubble. This is, I think, part of how we need to see this, is as a bailout. And it's unfortunate, right? That the highest stakes uses of AI are being used to bail out the fact that people don't even wanna use it to make shopping lists. So it's like, I know what we'll do, we'll embed it, you know, the UK is embedding it in their healthcare system. And it's replacing NHS workers- that's a big problem. And it's also a problem when it's obviously embedded in the largest and most lethal military in the world.
Emily M. Bender: Yeah, absolutely. And I think these next several paragraphs are, it's just barely subtext, right? So, "This past July, President Donald Trump instituted a mandate to achieve an unprecedented level of AI technological superiority." Very easy to read that as, gotta make sure that money goes to these companies. And then, "The War Department is delivering on this mandate, ensuring it is not just ink on paper." No, it's real money! "In response to this directive, AI capabilities have now reached all desktops in the Pentagon and in American military installations around the world." Which is like, I'm thinking about the people who are working all these jobs batting away all the little sparkle emojis like the rest of us.
Alex Hanna: It's funny how it's phrased, like the desktops and whatnot. These next two paragraphs I find to be really, I mean, there's so much here. So, "The first instance on GenAI.mil, Gemini for Government, empowers intelligent agentic workflows, unleashes experimentation, and ushers in an AI driven culture change that will dominate the digital battlefield for years to come. Gemini for Government is the embodiment of American AI excellence, placing unmatched analytical and creative power directly into the hands of the world's dominant fighting force." And then the quote from Emil Michael: "'There's no prize for second place in the global race for AI dominance,' Emil Michael, under secretary of war for research and engineering, said. 'We're moving rapidly to deploy powerful AI capabilities like Gemini for Government directly to our workforce. AI is America's next Manifest Destiny, and we're ensuring that we dominate this new frontier.'"
Emily M. Bender: Let's be evil, right?
Naomi Klein: So do we know, does everyone remember Emil Michael, who he is?
Alex Hanna: No, actually.
Naomi Klein: He was second in command at Uber.
Alex Hanna: Ah, that makes sense.
Naomi Klein: So this is Uber for military. And a couple of scandals I just brushed up on. One was, he was, remember when there was a scandal at Uber where there was somebody suggesting that they do oppo research on the journalists who were covering them and making them look bad and spending about a million dollars? That was him.
Emily M. Bender: Great!
Naomi Klein: Yeah, there was also a scandal involving escorts, and I think it was in Hong Kong, just as an aside. I mean, you know, it's part of this revolving door. I mean, it's not really a revolving door, it's just saying like, we're gonna do it for you. But yeah, I mean, I think this is the only way that they can justify building out this market, when people aren't asking for what they're building. Is this idea, is framing it as an arms race with China. Which is what Eric Schmidt's been doing for a long time, paying off. And this is, I think, the whole point of the Trump administration in so many ways, is best understood as a merger between big tech and Washington. And this is their next bubble, and they've kind of run out of consumer products. They don't wanna deal with consumers anymore. I think that's the big takeaway is consumers are unreliable, government contracts are where it's at, and that's how you get from Uber to this for Emil Michael.
Emily M. Bender: Unfortunately it makes a lot of sense.
Alex Hanna: For sure. The Manifest Destiny piece is just, you know, they've been using this language, the pretty clear white supremacist language, for quite some time now. And the thing that is interesting as well is the use of "frontier" kind of in both registers, because there is this language of frontier AI, or frontier AI models, that the big tech companies have been using for quite some time. And those of us who are, you know, critics of AI have been like, well, that's a terrible term to use. Like, let's think two seconds about the implications of using this violent term, and what it signifies. And here they're, you know, of course they don't care. And the Trump administration's like, oh, we're just gonna post this anyways. And this is actually the kind of white emotions that we want to evoke in our readers here.
Emily M. Bender: And, honestly, I think the Manifest Destiny use there is gratuitous. 'Cause what is the territory being conquered here? Like, artificial intelligence isn't usually, I think, conceptualized as a territory to conquer, so much as a race in arms. But it seemed like they really wanted to get that word in here, so they shoved it in.
Naomi Klein: Yeah. They've used it in a few different contexts, including space, colonizing space, in Trump's inauguration speech, I believe it was. But I think the frontier piece gets at why this is so scary, right? Because what the frontier represents is a lawless territory. It represents a place where you actually can just kill people and not face consequences. And so, looking at a bunch of these documents, and they're talking about having autonomous boats in war zones, and tanks and all kinds of autonomous vehicles. And I think about like, you know, I covered the US Iraq invasion and occupation many years ago, and in those types of war zones, I mean, people just get killed all the time. Civilians get killed all the time. You've got soldiers riding around in tanks with their guns out, and anything that makes them feel a little bit twitchy, they get shot. And there's no consequence for it. And I think that if we think about the boat attacks in the Caribbean, right? We don't know who's on those boats. So I think what appeals to them is basically an infinite margin of error in war zones. That actually is not true if you're deploying autonomous vehicles in San Francisco, you know? So I think what they're basically proposing here is using Black and brown people the world over as test subjects for this weaponry. Which is already happening in Gaza. And has been happening, you know, for a long time. So they're just laying it out for us, with just a little bit of coded language.
Alex Hanna: Yeah. That connects to- one of the features of our pod is that we have a really great chat. And so possumrabbi makes this point: "This reminds me of how the Israeli military industrial complex often quote, 'deploys new technology' as a way of bailing out companies that can't sell their shit products abroad. A lot of the tech used to surveil slash harm Palestinians is coming from companies that are, essentially, propped up by the Israeli state." And yeah, a hundred percent. And I mean, the book The Palestine Laboratory really talks quite a bit- by Antony Loewenstein. And there's so many of these same implications in the Israeli state with, you know, people who are in Unit 8200, or the IOF, coming directly from the Israeli high tech sector. So we're seeing a lot of that desire here.
Emily M. Bender: Yeah. All right. I'm gonna keep us going, 'cause I also wanna get to other artifacts. But this one is worth doing, I think, in its entirety. So, "The launch of GenAI.mil stands as a testament to American ingenuity, driven by the AI rapid capabilities cell within the War Department's office of research and engineering. Their achievement directly embodies the department's core tenets of reviving the warrior ethos, rebuilding American military capabilities, and reestablishing deterrence through technological dominance and uncompromising grit." Like, I can't with calling it the War Department. I mean, Alex and I were chatting a little bit before the pod, and she pointed out, okay, well, truth in advertising, I guess, right? And just, this is so gross through and through, and also feels so mask off. And talking about this, I think you're exactly right, Naomi. This is a bailout. And what's interesting is that like, Google didn't need to put itself in a position where it needed a bailout. Google's sitting on piles of money, but decided to go all in on this AI thing and the data center build out, and now that's rickety.
Naomi Klein: I mean, the mask off thing is, this is something else that we've been trying to understand about this moment, right? I mean, in understanding, you know, this is a fascist moment. This is fascist language. And in trying to understand the fascist turn, and the conditions under which fascism surges. And it is always an alliance between private industry, and military, and masses who are united with some kind of a feeling of injury, right? A feeling of being unfairly weakened. So it's a pathology of the injured strong, fascism. And so, I think about the Epstein files and what that revealed, about Me Too, and this sort of sense, in big tech, of, the anger, right? And the reason why I struggle with it is, I think from my perspective, and many of our perspectives, it seems like these guys have been on a nonstop winning streak. Like, why do they feel injured? What hurt them? Who hurt you, you know? So why, like, why do you have to go this far, as you're saying, Emily? Like, why? And, yeah, I'm interested in unpacking this a little bit, because I think part of it has to do with, like, we're in the after effects of having elevated these people to these king-like, god-like levels, you know, back to the nineties. And so I guess injury is relative, like, it depends on where you were, where you started, because you would think that being this rich and this powerful would be enough. But I think it was going from that level of adulation, to any kind of accountability whatsoever. And yeah, the reason why I've puzzled over it is 'cause it doesn't seem to me that the left has been that strong, right? But one of the things that the Epstein files x-rays for us is actually how much it pained them to have to be accountable in any way.
Alex Hanna: Yeah. This is a fantastic point, and there was a video that Jamelle Bouie of the New York Times did a few weeks ago, where he was really talking about the way in which so much of, I mean, not like the military has never been this kind of grotesque, toxic masculine place, but the way the thin veneer has really come off of that. So it's really been this kind of policy by masculinity and grotesque shows of these masculine things. So this is why you have this language of uncompromising grit. This is why you have Pete Hegseth going and doing pushups and pullups with ground troops. There's this idea of this kind of masculinity. And I think the thing that this also highlights is the way that masculinity is not just about doing pullups, it's also the power over, with technology. I'm citing something from Sarah Sharma's book, where she talks about Heidegger seeing the earth as this standing reserve of resources and materials. And the way that the earth can be strip mined, can be taken for all this to establish dominance over it. And I'm really feeling this here, this sense that tech masculinity isn't any different. I mean, the Epstein files surely should wash away any illusions of that, especially with people like Bill Gates and all the various Harvard nerds. And really, you know, these are one and the same.
Naomi Klein: There's that culture of bombs and biceps, right? Like that is what they're trying to really capture. But for what, like, to outsource everything to chatbots? So that's kind of weird. So they're overcompensating for the fact that what they're essentially unveiling is handing over everything to machines, right? So you're just operating a joystick as much as possible. But I think that, might be related. That there's something, I don't know, is that emasculating, and why these tech overlords feel the need to perform masculinity in this way?
Emily M. Bender: So sort of connected to that, possumrabbi in the chat says, "These are all the people with such grit that they couldn't even withstand a sign for an all gender restroom in the employee cafeteria." Which is like, I think that there is something about identities being so tied up in a kind of like negative masculinity, right? The masculinity has to be constantly defended as opposed to just inhabited and lived. That might be part of the story of grievance and perceived injury that you're spinning for us, Naomi?
Naomi Klein: Mm-hmm. Mm-hmm.
Emily M. Bender: All right, two more things I wanna get to in here.
Alex Hanna: Yeah. I saw this other part about RAG, so I'm thinking, do you wanna get to that?
Emily M. Bender: Yes, I wanna get to that. So, "The department is providing no cost training for GenAI.mil to all DoW employees. Training sessions are designed to build confidence in using AI and give personnel the education needed to realize its full potential." And as magidin, who was reading ahead, was pointing out in the chat, it's the tech that's supposed to have its full potential realized and not the workers, right? And also "build confidence in" is one of these things about like, no, make it trustworthy tech! Don't try to get people to trust it. But then, "Security is paramount, and all tools on GenAI.mil are certified for Controlled Unclassified Information and Impact Level Five, making them secure for operational use. Gemini for Government provides an edge through natural language conversation, retrieval augmented generation, and is web grounded against Google search to ensure outputs are reliable and dramatically reduces the risk of AI hallucinations." So this is secure, but also we're sending queries to Google at the same time. And I think this is what you're talking about, the merger, Naomi.
Naomi Klein: Yeah. And also just this combination of the most lethal technologies, highest stakes, with glitchy tech, right? This is the dystopia that I have most feared. And anyone who knows me knows that I'm always going on about how I think that Gary Shteyngart's Super Sad True Love Story is the best dystopian picture of where we were headed. It came out, I don't know, I think almost 20 years ago now, but this was the world that it portrayed. Like, American military super glitchy AI, with, I think it was an iguana in a sombrero who was the interface, that's just constantly mishearing people and accidentally deporting them to Venezuela.
Alex Hanna: Oh, wow.
Naomi Klein: I mean, he saw it all coming. And as you both know, this technology, like the people who have, quote unquote "built" it, don't understand. That's the whole point. Like how it's making decisions- and it's not making decisions, but it's, you know, it doesn't leave a trail in the same way. So there's a ton of mystery embedded in it. And what all of the press releases are saying is, we are gonna outsource our decision making to it.
Alex Hanna: Yeah. A hundred percent. And I think that the thing here that is very upsetting to me is this idea that these models are actually grounded in any kind of sourcing. I mean, they say web grounded, which doesn't mean anything. If that is the case, then why does the Google AI overview not have- or like, is it actually- but that's not what it's doing. They're effectively saying, this is their mild aside at risk mitigation by saying it's gonna dramatically reduce any of these quote unquote "hallucinations."
Naomi Klein: But it's nonsense.
Alex Hanna: It's absolutely nonsense, yeah.
Naomi Klein: If they could do that, they would reduce it for everyone. It's just... I was just thinking about Bezos and, you know, all of these companies made their climate pledges. Bezos named a stadium in Seattle the Climate Pledge Arena, and laid off 15 climate reporters at the Washington Post. So it's not even just that they are breaking all these pledges that we knew were kind of bullshit from the beginning. But I think that it is related, because I think that was sort of- if this is patriotic tech, if this is manly tech- that those climate pledges were their kind of femme stage. And they're shaking it off, and they're burning as much carbon as they possibly can for their data centers. It's really entangled. This is fascist tech meets petro-masculinity.
Emily M. Bender: Yeah. Absolutely. And we did have a previous episode called "Petro-Masculinity Versus the Planet," I think, so, on theme. And just to underscore what you've both been saying, this stuff doesn't work, right? It's making papier-mâché of its input, but when you call it web grounded, that makes it sound like it's reliable. And then you have even more cover to say, well, I just asked the all-knowing machine, and so I don't have to be responsible for this decision anymore. Should we go on to Palmer Luckey here?
Alex Hanna: Yeah.
Emily M. Bender: Yeah. So the transition that I wanna make to this is: the fascism is clear, right? It's dripping throughout this document. And we'll still get people saying, well, how dare you call somebody a fascist, as if calling it out is the problem. And so, speaking of masks off, I think this tweet- this is like a thread in one tweet, because there's no character limit anymore- kind of just lays it all out there, in a horrible, but, at least it's all out there kind of a way. Do you wanna do the honors, Alex?
Alex Hanna: Oh gosh. If I must. So this is awful stuff from Palmer Luckey, CEO, founder of Anduril, and if I'm mispronouncing that, apologies to all the Tolkien fans. So Luckey says, "We cannot let them stay," on its own line. "Debates regarding illegal immigration often focus on policy issues like welfare, healthcare, crime, economic contribution, et cetera. That is a distraction. Democracy is the real issue. Status quo is that any city or state desirous of greater power can declare a suspension of federal law and import millions of illegal aliens for the purpose of inflating their electoral votes and congressional representation." Let's just stop there. So, you know, this is our gutter racism from Palmer Luckey. But curious on your reaction, Naomi.
Naomi Klein: I was just looking up the date on it...
Alex Hanna: Yeah, it's January 20th of this year.
Naomi Klein: So this is related to Minneapolis. I mean, that's the context. And I'm just looking up- yeah. So it was after Renée Good was killed, I think. Yeah. I mean, it's a justification for the ICE operation in Minneapolis. It's important for us to understand- this was striking to us in our research just because he's doing Great Replacement. I mean, that's what this is. And for people who aren't familiar with Palmer Luckey, he's the head of, or is he CEO of Anduril? Yes. And so they are the AI first, video game weapons company. He is their main marketer. What he does, he does streams where he shows off the weapons platforms, and it just looks like he's playing a video game except for it's a real weapons system. And so they're really, really recruiting people who have grown up playing these first person shooter video games, and he's kind of a hero to them, right?
Alex Hanna: Yeah. It's indicative because he was also the founder of Oculus VR, the virtual reality thing, before it was purchased by Meta. And the other piece here, and this is in, I don't know if we necessarily wanna move over to the other piece, that's the Hill piece? But it's basically saying, this is something that one of the people says, and we will go through this more, in a more structured way. But he basically says, if you look at Scott Sanders, they're talking about this other company called Forterra, and he says... "Scott Sanders, a former Marine intelligence officer and chief growth officer-" which is just a terrible title.
Emily M. Bender: Chief gross officer, yeah.
Alex Hanna: Well, gross is more appropriate.
Emily M. Bender: I know.
Alex Hanna: "Said that he doesn't think we need an army of engineers, just armies of people who've played a lot of video games." And just like, woof.
Emily M. Bender: And put them in a situation where they're completely unable to be aware that that's a person on the other end.
Naomi Klein: Maybe this is another kind of bailout for people who- okay, I'm gonna offend all the video, all the gamers out there, but like-
Alex Hanna: We're on Twitch. That's a little risky, but go ahead.
Naomi Klein: Okay. I'm not gonna do it.
Alex Hanna: No, I want to hear the hottest take.
Naomi Klein: I'm just gonna, well, you know, if one has concerns that one might have misspent their youth playing video games, this is the bailout. You were training to operate an Anduril system.
Alex Hanna: I mean, I actually played a lot of Counter-Strike in high school, if you want some Alex lore, which is literally a counterterrorist type of- but you get to play the terrorists sometimes. I don't know if that makes it better.
Naomi Klein: I'm married to a gamer. I'm just, you know, surrounded by them.
Emily M. Bender: I played a lot of Tetris, what does that say about me?
Alex Hanna: Yeah. That you love order, and it's playing into your earth sign tendencies.
Naomi Klein: I'm just bad at it, I'm sore.
Alex Hanna: I think the thing here where this signals a bit to me, too, 'cause they've used this as marketing. I mean, the military has really played into doing the marketing. Then I think the Army released a video game or two, and they definitely have partnerships with, I think they've had partnerships with Call of Duty, in the various ones. So it's definitely giving a little bit of a shout out to young gamers, overwhelmingly men, who have done this. You know, join the military, you can just game whatever. And it's disingenuous of the horror of war, for sure.
Emily M. Bender: Yeah, absolutely. Should we stick with this Hill artifact, or do you have anything you wanna say, Alex, about Anduril's own webpage here?
Alex Hanna: I don't have too much to say. The Anduril webpage is terrible. I mean, there's a terrible piece called Rebuild the Arsenal. It is probably best read and watched alongside Starship Troopers. Naomi, you also said that Dr. Strangelove is something that you've been thinking about as a text to read with lately.
Naomi Klein: Yeah, I mean, people should just, if they're not familiar with Palmer Luckey, just watch some of his videos, watch some of how he tests the weapon systems as a kind of spectator sport. I've never seen weapons marketed this way. And it's also quite distinctive that they are cultivating a fan base, right? I've also never seen that. I mean, I suppose having fighter jets at a Super Bowl game is kind of a version of that, right? Where it's sort of fusing entertainment culture- but this is much more participatory. And I mean, I don't know if we got through the key parts of his tweet here, but, this is about getting ready for the next election, right? So what they're saying is, it's not about whether people are voting, who aren't registered to vote, who don't have a right to vote. It is simply, it is about ethnic cleansing. This is about getting rid of brown people from Latin America, who they believe will vote for Democrats, and they're presenting immigration as a plot to steal elections. And so they want what's increasingly being called remigration, and this is huge in Europe as well. So it's targeting people who are documented and, quote unquote, they're "saving Western civilization." That's what they believe. This is important, too, because Anduril has contracts with- not directly with NATO, but essentially with NATO. So they're expanding. But I think we should think about it also in the context of saying that they want ICE agents at polling stations. So it's all kind of merging with this post here. A lot of things are happening.
Alex Hanna: Yeah, a hundred percent. And he says here that basically, liberals don't want immigrants to actually vote, but they just want census representation. And then the kind of rub here, he says, "There's an effectively unlimited supply of poor people from poor countries that want to live in the United States who can be used to fuel this strategy. Some might be net positive to the US economy, some might not be, but that is beside the point. All would equally contribute to a future where minority rules the majority with no recourse." And in this case, he's effectively saying, if you talk about economic benefits of immigration, you're ceding the point to the liberals. You know, these brown hordes are invading and are fuel. It's such a dehumanizing way of talking about immigrants, of course, but that's the point.
Emily M. Bender: And then, holy doublespeak, Batman. "A future where minority rules the majority with no recourse"? That's the fascism they're building. Right?
Naomi Klein: Yeah, but I mean, he's literally arming the base with the rationales for stealing an election. But, you know, it's a conspiracy theory, right? He's a conspiracy theorist. And yeah, you mentioned, I had originally thought we could play some clips from Dr. Strangelove. It's worth watching, or rewatching, and thinking about this moment of the most lethal and powerful tools in human history landing in the hands of the worst and most dangerous people in the world. Because that was the future that the film portrays. And it's interesting because it came out in '64, I think, so it was not the same kind of era that we're in now. But it was sort of forcing people to- 'cause I think that Americans like to imagine their nukes in the hands of tortured, Oppenheimer-type characters, you know? Who learn Sanskrit in their spare time, and are tormented by the weight of the world on their shoulders. And I think the genius of Dr. Strangelove is like, no, you must imagine these tools in the hands of the worst people. 'Cause eventually the worst people will find the most powerful tools, right? And so the plot line is, this American general manages to launch a first strike on Russia, because he is a conspiracy-addled maniac who has become convinced that the Russians have launched a first strike on the United States by putting fluoride in the drinking water. And so he considers that a first strike. So therefore it is justified for him to drop nukes on the Soviet Union. And then the Soviet Union has built a doomsday machine, but forgot to tell anyone, because- you know, so everybody is profoundly compromised. And the final shot is the nuke dropping, and the pilot basically screaming "Yeehaw" in a cowboy hat. And so, I mean, that to me is either Hegseth or Kristi Noem, it's unclear, because she does love hats. But just that fusion of conspiracy culture and just tremendously high stakes.
And then there's Strangelove himself, who's definitely Musk. He suggests that they can all go into a bunker with lots of hot women and procreate. And they all think that's a fantastic idea, but then they all get nuked.
Emily M. Bender: So, what's the word for satire that was written before the events happened?
Naomi Klein: There's even kind of some good AI references. Because Strangelove says the computer can select the best people to go in the bunker.
Alex Hanna: Oh wow. Incredible.
Emily M. Bender: Wow. Thank you for that.
Alex Hanna: I do wanna raise two things up from the chat, just about the video game thing. So conclachat says, "I don't know how true this is, but I did see some posts on Instagram about how Alex Pretti's killers apparently said, quote, 'It's just like a video game' after they murdered him. Incredibly disturbing." And possumrabbi responds and says, "Shovrim Shtika-" sorry for mangling that- "an anti-occupation human rights group in Israel Palestine made up of former Israeli soldiers, has posted things on Instagram with folks saying they heard that from Israeli army officers during various campaigns, too." And I think the English name is called Breaking the Silence.
Emily M. Bender: Oof. Yeah.
Alex Hanna: All right. So let's move to the last thing. Do you want to introduce what this is?
Emily M. Bender: Sure. So this is a piece in The Hill. I'm sort of hesitating to call it journalism, because it reads more like a press release, or a combination of press releases. But the credited author is Jackie Koppell, and the date is January 11th, 2026. And the headline is "How artificial intelligence is reshaping the future of war." So: "NewsNation. Artificial intelligence, AI, is profoundly changing how we fight wars, and the US military is increasingly focused on it. Recently, the Department of War announced the launch of GenAI.mil, the new AI platform being rolled out to all three million military and civilian personnel in partnership with Google's Gemini. A partnership with the company xAI is also set to roll out early this year. The transformation, though, will be most apparent on the battlefield."
Alex Hanna: Yeah, and I mean, the next line is helpful too, so: "When you think about war, it likely conjures up images of big guns, tanks, warships, and jets." Now, when I think of war, I think of a lot of fucking terrible things.
Emily M. Bender: A lot of people die and get hurt, yeah.
Alex Hanna: People dying, and maimed, and everything. "While these aren't going anywhere, there are now also drones and even autonomous tanks and ships." And then, in the interest of time, I'm gonna read the next graph and get some reactions. So, the subhead is "How AI applies to war fighting." And then Emil Michael, appearing again, "explains what AI is-" there's a link that I don't really wanna-
Emily M. Bender: I'm not gonna follow, but-
Alex Hanna: I clicked on it, and I did not understand more of what AI is.
Emily M. Bender: Okay.
Alex Hanna: "-And how it applies to fighting war. Of AI, he says, quote, 'You can use computers and software to think and reason as much like a-" that's a weird construction- "as much like a human as possible, so that when humans use it, it could extend their capabilities. And the way that applies to war fighting, like it applies to many subjects, is it can simplify a set of complex tasks and give you leverage,'" end quote. Thoughts on that, Naomi?
Naomi Klein: Yeah, this was the kind of thing that made me think about the costs of glitch, right? I mean, a lot of the focus on AI's immense fallibility has been on pretty low-stakes mistakes. But this is the same AI that we know makes endless mistakes, that isn't getting better when it comes to quote unquote "hallucinations," and it's human beings on the other end of it. And then there's also this question of mystification for me, of whether you even understand why it's telling you to go here and not there, right?
Emily M. Bender: Yeah, and I love how in the next thing they say, "That includes logistics, like finding the most efficient way to move equipment, lifesaving medicine, or even where to transport soldiers." Like, staying well clear of the pointy end of the weapon. We're just gonna think about all the other parts, and nevermind that we're setting things up to kill more people more quickly.
Alex Hanna: And just a technical nit: a large language model is probably a terrible way to do route optimization. I mean, if you're a nerd listening to this, if you are thinking about something like, I don't know, the traveling salesman problem, or path optimization- LLMs are not made for that. That seems actually pretty bad.
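[Editor's note: for listeners curious about Alex's point, route optimization is classically handled by explicit algorithms, not text generators. Below is a minimal sketch of one such method, a greedy nearest-neighbor heuristic for a traveling-salesman-style tour; the coordinates are hypothetical and this is an illustration, not anyone's actual logistics system.]

```python
import math

def nearest_neighbor_route(points, start=0):
    """Greedy nearest-neighbor heuristic for a traveling-salesman-style tour.

    points: list of (x, y) coordinates; start: index of the starting point.
    Returns a list of indices visiting every point exactly once.
    """
    unvisited = set(range(len(points))) - {start}
    route = [start]
    while unvisited:
        last = points[route[-1]]
        # Pick the closest remaining point: a deterministic, checkable step,
        # unlike sampling a next token from a language model.
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Hypothetical depot at (0, 0) and three stops.
stops = [(0, 0), (5, 0), (1, 1), (6, 1)]
print(nearest_neighbor_route(stops))  # → [0, 2, 1, 3]
```

The point of the contrast: every step here is an explicit distance comparison you can audit, and better solvers (dynamic programming, OR-style routing libraries) give guarantees an LLM's output never does.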
Emily M. Bender: Yeah, and it's all magical thinking. So here's another quote from Emil Michael. "'AI can basically take a lot of different factors, sometimes more than a human can keep in their mind at one time, and more efficiently deliver a capability.'" It's like, yes, if you put in a query, something will come out that you can interpret as an answer, but that doesn't mean it's the information that you need.
Alex Hanna: Yeah. So, just so we have some time for AI Hell, this part, I think, really hammers it- there's this quote from Zach Kramer, also from Anduril. So: "Zach Kramer, general manager for mission command connected warfare at the-" just a ridiculous title- "at the defense technology company Anduril explains, quote, 'If you're out in a fight, you've got lots of decisions to make. You have to make these in a quick fashion. And so it's not that the AI is deciding what to do, but it's saying, look, I've considered a bunch of options and I'm presenting you with courses of action that you can then determine from.'" And to me this is basically saying that the system has done some kind of process that looks like thinking, which you can then execute on, with some degrees of freedom through human discretion. But that's not how these things play out. I mean, it's going to direct you to do one thing, or it's going to provide suggestions- or, we shouldn't even call them suggestions- it's gonna bullshit, more accurately, about things one might do in that situation.
Naomi Klein: Yeah, I think we should see this as marketing text, obviously. And you know, I think they're gauging that people are not ready for lethal decision making to be entirely outsourced to algorithms that they probably know from their own experience are fallible. But you know, they're embedding this in- like, they're announcing that the US military is now AI first, right? So whatever stage we are starting at is not the stage we're gonna be ending at. But also, I would just wonder about the permission structure being created, where you outsource the reasoning to these machines, and then you're not held responsible. Like, who is held responsible? And I think we've seen this with Israel, and what we know is that Israel has embedded AI in a lot of their targeting. In no way does it correlate to anything you would call precision, right? What it creates is a permission structure to say, okay, there's somebody in this building about whom you could potentially make an argument, which gives you the rationale to bomb the entire building and everybody in it. The only thing I would add is, I really think we need to understand the connection between all of this and the war that is being waged on international law and the United Nations right now. And all of these tech companies are very involved. So this decision to just go all in on becoming weapons contractors and embedding themselves in the US war machine- and I'm fine calling it a Department of War instead of a Department of Defense, it is more honest- means that all of these tech executives are worried about their own liability. And it is very telling that the day after Francesca Albanese released her incredible report on the economics of genocide, and named many of these companies as being legally liable for complicity in genocide, including Palantir, including Google, other tech companies, Marco Rubio imposed sanctions on her the very next day.
She's been kicked off all of the tech platforms, basically can't book a hotel room. The same thing has happened with ICC judges who are holding Netanyahu responsible. Also the United Nations, which is now saying it may have to close the New York headquarters and lay off thousands of staff because the US isn't paying its dues. And at the same time, Donald Trump announces that, lo and behold, he'd like his own UN, called the Board of Peace, which is actually like a private family company. So the point is that these tech companies can't do what they're doing and not be worried about their own liability. Because there are liability issues with having your algorithm be involved in targeting people. Your glitchy algorithms. So, this can't coexist with international law.
Emily M. Bender: And that also sounds like a really interesting avenue for resistance that I think it'd be fun to come back to. Alex, I think I'm gonna give you your prompt for the transition. So, favorite small animal with teeth.
Alex Hanna: Oh, interesting. With teeth?
Emily M. Bender: Yes, gimme a small animal with teeth that you wouldn't mind identifying with.
Alex Hanna: Sure. A possum.
Emily M. Bender: Possum, okay. You are leading a team of possums that's going to save humans from themselves by chewing the cords that connect the LLMs to the nuclear launch systems.
Naomi Klein: Ooh.
Alex Hanna: Okay, so this is- I'll have to think about what's a good possum voice. All right, men. We're now- well, this sounds a little too Trumpy to me. All right, men, we've trained for this- Oh no, I'm not well enough to do that. I'm gonna just do a deep voice. All right, men. We've trained for this. We've mapped out the Anduril, whatever it's called, arsenal building in rural Georgia. We're gonna do it. They used Tesla as a model, so there are cracks and really cheap components all over the factory. We found a possum-sized thing and, you know, many- oh gosh. The general from Star Wars, who says many people of his race died for this.
Emily M. Bender: Yeah. Yeah.
Alex Hanna: Anyways. Many capybaras died for this information. Let's go. Anyways, there we go.
Emily M. Bender: I love it. All right. So I think we're not gonna get through all of these, but I wanna do a couple of them. I love this one. This is January 2nd, 2026, by Victor Tangermann. And I love it 'cause it's hilarious. It's also sad. The headline is, "Cops Forced to Explain Why AI Generated Police Report Claimed Officer Transformed Into Frog." And what happened here was that they're using one of these systems, which is a really bad idea, that basically does ambient listening to the cops' interactions and then writes a first draft of the report. And the same people who make Tasers created this software. And "The Princess and the Frog" was playing in the background, and the system picked that up and wrote it into the report. And they're like, "and this is when we realized that you have to proofread them carefully." Well, no shit. All right, I'm gonna jump straight to these environment ones. Alex, do you wanna take the lead on this?
Alex Hanna: Yeah, so this is from the Seattle Times. So this is "Amazon outbids Washington utility for one of nation's largest solar projects." And then, I'm not seeing a journalist, published on February 5th.
Emily M. Bender: Here we go. Greg Kim.
Alex Hanna: Greg Kim. And so, let's see what the nut graph says. So, "Amazon outbid Puget Sound Energy last month in an auction for a massive solar farm project in Oregon, leaving the utility concerned about a larger competition for resources with energy hungry artificial intelligence companies."
Emily M. Bender: Yeah, so basically- and Naomi, I'm sure that I've heard this from you, too. Even if the big tech companies are using renewables, that doesn't mean they aren't having an environmental impact. 'Cause what are they displacing?
Naomi Klein: Yeah, yeah.
Emily M. Bender: And here's that displacing happening. Relatedly, we had this headline from Reuters. December 23rd, 2025. "AI data centers are forcing dirty 'peaker' power plants back into service." So this is the story of a particular power plant that was about to be retired, but in order to keep up with peak demand, it's still being used.
Naomi Klein: Yeah, there's a lot of this happening. A lot of coal plants that were gonna be decommissioned coming back.
Alex Hanna: Yeah. I know there's also the kind of, putting Three Mile Island back online in Pennsylvania. And then, the last one, which we usually have as a palate cleanser. So this is a follow-up to a piece originally reported in The City a few years ago. So, "Mamdani targets 'unusable' AI chatbot for termination," by Colin Lecher of The Markup and Katie Honan. And this is from January 30th. And it references their prior reporting, where they say, "Reporting from the Markup and The City exposed how the bot, touted by the Adams administration, repeatedly served up false and damaging information." So, a win!
Naomi Klein: I'm typing, I'm getting the story right now so I can add it to the book. It's not too late.
Emily M. Bender: That's fantastic.
Naomi Klein: We need good news.
Alex Hanna: Yeah.
Emily M. Bender: I love how Mamdani basically said, all right, we've got a budget gap. What can we get rid of? How about this useless chatbot? Yes!
Alex Hanna: Yeah, so let's cut. And that's why it's important to ridicule these things and identify where they can be removed. Resistance is possible.
Emily M. Bender: Indeed. So, that's it for this week. Naomi Klein is an award-winning journalist, columnist, and the international bestselling author of nine books published in over 35 languages. We're so excited to have had this chance to talk with you. Thanks again for being with us!
Naomi Klein: This was really fun. Thanks for having me!
Alex Hanna: Thanks so much for joining us. It was a pleasure. Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Ozzy Llinas Goodman. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order The AI Con at thecon.ai or wherever you get your books, or request it at your local library.
Emily M. Bender: But wait, there's more. Rate and review us on your favorite podcast app, subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti hype analysis, or donate to DAIR at dair-institute.org. You can find our merch store there too. That's dair-institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's dair_institute. I'm Emily M. Bender.
Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.