Mystery AI Hype Theater 3000

Winning the Race to Hell (with Sarah Myers West and Kate Brennan), 2025.08.04

Emily M. Bender and Alex Hanna Episode 61

Trump’s “AI Action Plan” is his latest attempt to turn AI hype into official national policy. Kate Brennan and Sarah Myers West, of the AI Now Institute, join us to dig through this pile of deregulatory gifts to Big Tech.

Dr. Sarah Myers West is co-executive director of the AI Now Institute, and a former senior advisor on AI for the FTC.

Dr. Kate Brennan is associate director of the AI Now Institute, where she spearheads their policy work, informed by a doctorate in law and years of experience in the tech industry.


References:

Winning the Race: America’s AI Action Plan


Also referenced:

People’s AI Action Plan


Fresh AI Hell:

Medical Pros Risk Malpractice Suits by Avoiding AI Innovation

Google search’s next cash cow

AI Is Coming for the Consultants. Inside McKinsey, ‘This Is Existential.’

The rise of AI tools that write about you when you die

Amazon’s Alexa Fund Invests in ‘Netflix of AI’ Start-Up Fable, Which Launches Showrunner: A Tool for User-Directed TV Shows

In federal lawsuit, students allege Lawrence school district’s AI surveillance tool violates their rights

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' is out now! Get your copy today.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna: Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of linguistics at the University of Washington.

Alex Hanna: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 61, which we're recording on August 4th, 2025. Since the beginning of his term in office, Donald Trump's administration has been a gift to AI hypers in the US: DOGE boasting about using, quote, 'AI' to, quote, 'find efficiency' as it fired thousands of federal workers. The federal budget almost included a 10-year moratorium on state regulation around AI, regulations meant to prevent things like the deepening of inequities in the criminal justice system, the practice of health insurers automatically denying 90% of claims, or pornographic deepfakes of nonconsenting people. Congress nixed that proposal, so the Trump administration has a new AI action plan that is trying to do the same thing.

Emily M. Bender: As all acts of federal deregulation begin, Trump claims to offer one common-sense federal standard, but gives only a mountain of deregulatory gifts to Silicon Valley grifters. Think more data centers with no environmental regulations to limit them, instructions for the Federal Trade Commission to hold off on any oversight that might impede, quote, 'AI,' and ever-so-vague language threatening the coffers of any state that dares impose, quote, 'burdensome regulation.' We've got an arms race to win, after all! This is a document that references China 28 times. This plan is, to say the least, a rich text. And we have two guests today to help us lift this very heavy burden. Dr. Sarah Myers West is co-executive director of the AI Now Institute. She's also a former senior advisor on AI for the FTC, very relevant today. You might also remember her from episode 21, which was all about open source hype. Welcome back, Sarah.

Sarah Myers West: Thanks for having me back on the show. I'm excited to, to dig into this with you all.

Alex Hanna: Totally. And we also have Dr. Kate Brennan, who is an associate director of the AI Now Institute, where she spearheads their policy work, in particular informed by a doctorate in law and years of experience in the tech industry. Welcome Kate.

Kate Brennan: Hi. So happy to be here.

Emily M. Bender: We're gonna have some fun, although it's a terrible set of artifacts, or primary artifact, that we're looking at. And then Fresh AI Hell, of course. So I'm going to start us off with, ha, sigh. It was so hard to make myself read this, I have to say. So: the White House, "Winning the Race: America's AI Action Plan," July 2025. And then we have this quote on the front, which is attributed to Trump, but as you'll hear when I read it, there's no way he came up with these words. So this says, "Today, a new frontier of scientific discovery lies before us, defined by transformative technologies such as artificial intelligence. Breakthroughs in these fields have the potential to reshape the global balance of power, spark entirely new industries, and revolutionize the way we live and work. As our global competitors race to exploit these technologies, it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance. To secure our future, we must harness the full power of American innovation." I feel gross having read that, but also it's clear that these words were not actually authored by the current excuse for a president. So where did this come from? How did we get here, to having this document?

Kate Brennan: Yeah. We were blessed with this document pretty immediately. We were told it was coming pretty immediately into Trump's administration. If we remember the executive order he issued on, I think, day three in office, it repealed any of the work that the Biden administration had led, set it as policy of the United States to achieve global dominance in AI, and tasked a task force with coming together with an AI action plan within 180 days. So that brought us to July 23rd, which is the date we knew this was coming, to set out a series of policy actions, at least laying them out. And along the way, the Office of Science and Technology Policy opened up a comments period. There were hundreds of tech industry comments, submissions, requests, wishlists, we might say, for what they wanted to see in the action plan. And so reading through those, we also got a good idea of what was gonna come out. And when it did come out on the 23rd, we saw a lot of those come to fruition.

Emily M. Bender: Yeah.

Alex Hanna: Yeah. There's a lot of really terrible things in here. And as you were saying, Kate, so many of the things that we saw on day three of the Trump administration: the revocation of all the Biden-era executive orders around AI, the removal of those documents, including the Blueprint for an AI Bill of Rights, a lot of things that were meant to guarantee things like anti-discrimination, civil rights, consumer protection, a lot of the progress made in other administrative organizations as well. So now we're getting into it. So: "Introduction. The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race. President Trump took decisive steps toward achieving this goal during his first days in office by signing Executive Order 14179, 'Removing Barriers to American Leadership in Artificial Intelligence,' calling for America to retain dominance in this global race, and directing the creation of an AI Action Plan."

Emily M. Bender: Ooh.

Alex Hanna: Yes. And then let's do the next one.

Emily M. Bender: The next paragraph is rich. Yeah.

Alex Hanna: So, "Winning the AI race will usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people. AI will enable Americans to discover new materials, synthesize new chemicals, manufacture new drugs, and develop new methods to harness energy- an industrial revolution. It will enable radically new forms of education, media, and communication- an information revolution. It will enable altogether new intellectual achievements, unraveling ancient scrolls once thought unreadable, making breakthroughs in scientific and mathematical theory and creating new kinds of digital and physical art- a Renaissance."

Emily M. Bender: So I just have to jump in and say, this thing about unraveling ancient scrolls just feels like a dog whistle to the evangelical base. What is that doing in here?

Sarah Myers West: It's a good question. It's actually the first time that I've noticed that line in the introduction, doing this really close read, 'cause most of my reads have been really trying to get into the policy details, not this bit of the framing. I mean, yeah, the projections about what will make AI this transformative technology have been very wide-ranging and also, as we've seen, very thinly supported. And I think that's really reflected here.

Kate Brennan: Yeah, something that jumps out to me immediately in this, everything- it will enable this, it will radically transform- like it is positioning it as inevitable. There is no question that we are going to have all of these incredible things, and that theme of inevitability moves through the entire document. There's no space for pausing, for questioning, for contemplating, how this is gonna happen and who it's gonna benefit. It is a foregone conclusion. 

Emily M. Bender: Yeah, and it's all one thing, right? AI is referred to as a singular item here. There might be, we were doing some background research into this ancient scrolls thing, there might be something to using machine learning to try to pick apart palimpsests, right? So images where you've got multiple layers, you could imagine trying to do some data processing, machine learning there. But that's got absolutely nothing to do with these other things that they're imagining. But there's no disaggregation of different kinds of automation here. It is just one thing that builds up this big ecosystem of various instances of AI. Which is not surprising. I have to say that reading this was like, we deal with the AI hype all the time, but the AI hype together with the Trumpism, it's just, it's a lot.

Alex Hanna: Yeah. Yeah.

Emily M. Bender: All right.

Alex Hanna: So getting into this more, we're getting into the meat of it, talking about the three pillars that they're focusing on. Also annoying: information revolution, industrial revolution, and then a renaissance. I'm like, you could have used revolution three times, this is bad writing. So there's some elements of this, the three big pillars, so: "America's AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security. The US needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so." Then there's a quote from Vance. I'm just trying to get to the meat of it. "We need to build and maintain vast AI infrastructure and the energy to power it. To do so, we will continue to reject radical climate dogma and bureaucratic red tape, as the administration has done since Inauguration Day. Simply put, we need to build, baby, build." And then, "We need to establish American AI, from our advanced semiconductors to our models to our applications, as the gold standard for AI worldwide, and ensure our allies are building on American technology." All right, so let's pause there.

Emily M. Bender: Build, baby build- is that a back reference to Sarah Palin? 

Alex Hanna: Yeah. Yeah. It's a reclaiming of Sarah Palin's "drill, baby, drill." Yeah.

Sarah Myers West: With a little Marc Andreessen flair thrown into the mix.

Alex Hanna: Yeah. Yeah. Unholy alliance. Thoughts on these pillars, y'all?

Kate Brennan: Yeah, unexpectedly, we are seeing, first of all, "unleashing innovation" is just, how many ways can we talk about how burdensome and hindering regulation is? So immediately, the top-line message that we're getting from this statement is that in order to succeed, in order to win the race, in order to be the best, in order to innovate, we need to take down any sort of regulatory barriers that are hindering us. And then not only that, we have to use that as justification to pursue this energy dominance, "build baby build," "drill baby drill" energy agenda. Linking those two has been, I think, a strategy of the administration since day one, and seeing them one right after the other makes it so clear that AI is being used rhetorically, strategically, politically, economically as a wedge to ensure that big oil is getting what it wants from this just as much as big tech is.

Alex Hanna: Yeah.

Emily M. Bender: Yeah.

Alex Hanna: A hundred percent. And then there's this last part of it, the American AI element of this, and you have this line of semiconductors and models and applications, being an exporter. Yeah, what are y'all's thoughts on that?

Sarah Myers West: It's a Clinton era throwback, right?

Alex Hanna: Yeah.

Sarah Myers West: Like, it's reviving this idea that the export of US commercial interests is going to be good for the country and good for the world, is the presumption here. And in combination, this reads to me like this huge and very costly bet that the administration is making on this version of AI, which, as we all know, is not the only way that you can approach building AI. But going for the most capital-intensive, energy-intensive approach to building out models and the apparatus around them: we're gonna go all in on that, hollow out all other R&D, and just focus on matching the oil industry's need for demand, which was what they were asking for coming into this administration, with the AI industry's need to de-risk their portfolios, which have been overburdened with having to make these capital investments. And then the White House is the broker between the two of them. And the bet is that all of this is gonna pay off. It certainly will in the short term. They're all gonna make a ton of money in the short term. But in the long term, I think we're all gonna be really feeling the cost of this.

Emily M. Bender: Yeah. What a simple world these people live in. That like the only impediment to this inevitable, marvelous AI future is regulation, let's just get rid of regulation, like...

Kate Brennan: Yeah. So we can let the private market do its thing.

Emily M. Bender: Yeah.

Alex Hanna: Yeah.

Emily M. Bender: There's, we're not done with the introduction though, so.

Alex Hanna: I know. Let's get through the rest of the introduction, 'cause there it does preview the rest of the document.

Emily M. Bender: Yeah. So I'll pick up here with, 

Alex Hanna: Yeah.

Emily M. Bender: "Several principles cut across each of these three pillars. First, American workers are central to the Trump administration's AI policy. The administration will ensure that our nation's workers and their families gain from the opportunities created in this technological revolution." Bullshit, but okay. "The AI infrastructure build out will create high paying jobs for American workers-" till it's built, maybe- "and the breakthroughs in medicine, manufacturing, and many other fields that AI will make possible will increase the standard of living for all Americans. AI will improve the lives of Americans by complementing their work, not replacing it."

Kate Brennan: Whew.

Alex Hanna: Yeah.

Kate Brennan: There's a lot to say about this paragraph.

Emily M. Bender: Yeah.

Alex Hanna: Yeah.

Emily M. Bender: Go for it.

Alex Hanna: Get into it.

Kate Brennan: One, I have a question: how many American workers were consulted in writing this document? Because I can't imagine that if you really consulted a lot of workers, their policy recommendations for workers wouldn't need a 28-page doc just for that.

Alex Hanna: Yeah.

Kate Brennan: We can get into it. By trying to position this document as being worker-first, it's leaving a lot on the table, and we should talk more about that one. And then the second, I mean, we know that the AI infrastructure buildout is not creating high-paying jobs for all American workers. A lot of the job numbers, especially when it comes to data centers, have been rebuked by communities. Data center jobs, after the construction jobs, which are important, tend to materialize on the order of the dozens. If we're checking the footnotes on a lot of this, on how this document is supporting American workers long-term, I just don't think it checks out.

Alex Hanna: A question for you two is, why even throw a bone to workers here? There's a few unions that have lined up behind Trump, most notably Sean O'Brien and the Teamsters. But why pretend like these are gonna create a bunch of jobs? Even the construction jobs are like a couple hundred jobs per site. Why even mention this?

Sarah Myers West: I think it really points to the significance of worker power, that they feel like they need to hat-tip to workers as a key constituency. And I think we're seeing on the right an interest in building out an economic populist bench. But again, if you look at the policy prescriptions, which we'll get to in a second, what they're actually offering is really thin and really shallow on many layers. So this is rhetoric. I think it's signaling rhetoric: we know that these are important people who have power, but there's nothing substantive that actually delivers beyond the message that we see workers as important.

Emily M. Bender: Yeah, absolutely. And you mentioned reading the footnotes. The footnotes in this actual document are just like to previous executive orders and stuff. So I think what you mean is if you go look at the actual studies that they should have been citing, which they're not because always read the footnotes is one of our mantras here. And I want to bring something from the chat, sjaylett says, "Notoriously places without any regulation at all, eg. because they don't have a government, have the best economies with nonstop growth that will leave you dizzy."

Alex Hanna: Yeah, absolutely.

Emily M. Bender: Alright, so that was the first principle that cut across the three pillars. The second one, they say, "Second, our AI systems must be free from ideological bias and be designed to pursue objective truth rather than social engineering agendas when users seek factual information or analysis. AI systems are becoming essential tools profoundly shaping how Americans consume information, but these tools must also be trustworthy." Elon Musk clearly wrote this part, or, 

Sarah Myers West: This is something that David Sacks has been obsessing over since before he came into the White House, too. I would read this provision as a reflection of, one, the steady rightward shift within Silicon Valley, which, Alex, you and I have talked about going back a while. It's from day one.

Alex Hanna: Yeah.

Sarah Myers West: There's been a strong rightward current around AI, and this is evidence to that fact. And then there's the fact that there's been this revolving door within the Trump administration, that a lot of folks from the right-wing corners of the industry have now been brought into the fold of the White House, first and foremost David Sacks, who really led on the production of this plan.

Emily M. Bender: So there's this conflation of right-wing ideology with objective truth in this paragraph. And then there's also this utter misconception of what these systems do, right? We don't have AIs, in quotes, that pursue objective truth. We have synthetic text extruding machines and other applications of machine learning, nothing that is autonomously pursuing objective truth, whatever that means.

Alex Hanna: And this continues the various conservative suits around social media bias and search bias against conservatives, right? Which to some degree was a crotchety right-wing position, and now it's making its way into executive policy documents. Especially what's being waved around as "woke AI" and whatnot. And I think what came out when Google's Gemini had the weird diverse-Nazis error or whatever, this is a reaction to all of that.

Emily M. Bender: Yeah.

Kate Brennan: No, I think that's exactly it. And you know where this ends up actually coming out in the policy recommendations and the executive order that was pursued is in forms of government procurement. So this is really a flex and a way for the government to understand that they have a certain amount of power right now as AI firms are turning to the government to shore up their own procurement and that the government can use AI procurement to pursue their own ideological and political agenda.

Alex Hanna: That's right. And we already discussed on this show, and I forget which episode, but we were seeing some of those moves, I think soon after the election of Trump, in things like model specifications from OpenAI. I remember one, and I always bring up this example, where the OpenAI GPT-4 model specification had the famous Elon Musk thought experiment: if you had to misgender one person and it would stop a nuclear bomb, should you misgender that one person? And using that, of course, which is silly, but it's one of these weird, bizarre litmus tests to show that you're not, quote unquote, woke.

Emily M. Bender: Yeah, oof. And yeah, with these people, I think they would almost say, oh no, don't misgender the person because we wanna see that bomb fly.

Alex Hanna: Yeah. So the final, we're almost to the end, so we're gonna finish the intro, and then there's still 23 pages left, which we're not gonna get through, but we're gonna hit up the worst offenders of hype and terrible Trumpist dogma. So, "Finally, we must prevent our advanced technologies from being misused or stolen by malicious actors, as well as monitor for emerging and unforeseen risks from AI. Doing so will require constant vigilance." So this is both a nod to China and also existential risk. "The Action Plan sets forth policy goals for near-term execution by the federal government. This Action Plan's objective is to articulate policy recommendations that this administration can deliver for the American people to achieve the President's vision of global AI dominance. The AI race is America's to win, and this Action Plan is our roadmap to victory." Hooray, yah yah America.

Emily M. Bender: Right. But "the AI race is America's to win," like that has, I think, Kate, you were talking earlier about this inevitability narrative, right? So it's also that. It's like there's a road and it's just a question of how fast we can run down it, and if we can get there faster than anybody else, we won. And that's not how any of this works. But of course it's how they see it, right?

Kate Brennan: And the more that you can put this in the framing of a global race, an arms race, a matter of national security or national competitiveness, the more you can ask for whatever you want in terms of policy actions.

Emily M. Bender: Yeah. And so I wanna mention the authors of this. So we have Michael J. Kratsios, whose title is Assistant to the President for Science and Technology. David O. Sacks, Special Advisor for AI and Crypto. And Marco A. Rubio, Assistant to the President for National Security Affairs. These are the people whose words we're actually reading. And I'm reminded of the one time that I got to testify before Congress. It was a subcommittee of a committee in the House of Representatives, and Kratsios was one of the other witnesses testifying at the same time. And he was surprisingly well behaved. I was expecting much more cringe from him. But there was definitely this "but China" rhetoric going on, which was pretty awful.

Alex Hanna: Yeah.

Kate Brennan: And very much worth calling out that Michael Kratsios and David Sacks have deep connections to the AI industry. Michael Kratsios was managing director at Scale AI, David Sacks is a venture capitalist, part of the original PayPal team. And even in the authors it's clear who the audience is.

Alex Hanna: Yeah. Absolutely.

Emily M. Bender: Yeah. All right. Should we get to pillar one?

Alex Hanna: Yeah. I'm starting to think about where to highlight. So maybe skip the policy actions, unless there's stuff that y'all wanna call out.

Emily M. Bender: I got a couple.

Alex Hanna: Yeah, there's a few of a few of the like big ones. So I think it's worth spending time on this kinda "red tape." So the first one is, "Remove red tape and onerous regulation."

Emily M. Bender: Wait, we gotta get the title of this pillar. So pillar one of three. 

Alex Hanna: Oh, so this is pillar one. Yeah. One of three: "Accelerate AI Innovation." And so: "Remove red tape and onerous regulation. To maintain global leadership in AI, America's private sector must be unencumbered by bureaucratic red tape. President Trump has already taken multiple steps toward this goal, including rescinding Biden Executive Order 14110 on AI that foreshadowed an onerous regulatory regime." And that was "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," published in October 2023.

Emily M. Bender: Which also had some issues.

Alex Hanna: Yeah. 

Emily M. Bender: But it-

Alex Hanna: Yeah.

Emily M. Bender: Wasn't as bad as this.

Alex Hanna: We did a review of that one, and I forget which episode that was. So: "AI is far too important to smother in bureaucracy at this early stage, whether at the state or federal level. The federal government should not allow AI-related federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation." So that's fascinating. Would y'all like to comment on that?

Kate Brennan: I would love to. Oh, sorry. Were you- go ahead. 

Sarah Myers West: No, I was gonna say this line is them equivocating, and then when you get to the bullet point, I think the stance becomes much clearer.

Kate Brennan: Yeah.

Sarah Myers West: So it might be worth scrolling down. I think it's...

Emily M. Bender: To the next page?

Sarah Myers West: No, up a little bit. Yeah. This middle one: "Led by OMB, work with federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state's AI regulatory climate when making funding decisions and limit funding if the state's AI regulatory regimes may hinder the effectiveness of that funding or award." So the idea is, if you have an AI regulatory regime, and that's passage of new law or the enforcement of existing law, that is deemed by OMB, that's the Office of Management and Budget, or other federal agencies to be overly onerous, then they can pull your funding. And that, I think, is a stick that they'll be able to wield before states more generally to basically say, don't pass new AI laws. It's a new form of the moratorium.

Alex Hanna: Yeah, exactly. That's a really astute point, and it's obviously going to be aimed at states who are gonna be doing more, not just in AI-specific regulation, but things within the supply chain, including having types of legislation and guardrails around data center buildout, around environmental concerns, around chip manufacturing, et cetera.

Kate Brennan: Yeah.

Sarah Myers West: The AI frame is like a lens that can be incredibly expansive depending on who's doing the interpretation.

Alex Hanna: Yeah.

Emily M. Bender: And then this fits right in with the usual Trump MO of using money, and the withholding of money, as a wedge and as a bludgeon, basically, to try to get his way, and targeting specific enemies of his with this. Yeah.

Kate Brennan: That's exactly it. I also think this section is setting up another major issue that we're probably going to see, which is federal preemption. Trump and this administration are pretty obsessed with figuring out ways to conflict out state laws. So where they can't use conditions or funding to coerce state behavior, they're figuring out legal mechanisms to stop states from being able to act. So I'm interested in this next bullet point about there potentially being conflicts with the FCC, so with the Communications Act. Whether or not there actually are conflicts with the FCC, to me it's more indicative of where they're heading. So maybe passing a very weak law, this would be Congress, of course, so that states wouldn't be able to do anything stronger without conflicting with it. And so we're seeing the straight-up attack on regulation, it is positioned as onerous, it is positioned as smothering, it is positioned as burdensome, so therefore let's repeal anything that hinders. Let's set up a preemption framework to conflict with any law that might be coming, so we can say there's no jurisdiction there for states. And then let's also use coercive federal funding to punish and stop states from being able to pass strong laws too. All three of those are working together in this section.

Emily M. Bender: Yeah.

Alex Hanna: Yeah, a hundred percent.

Emily M. Bender: So the next bullet also, I think, is important given the context of what the FTC was doing under Chair Lina Khan. So it says, "Review all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation. Furthermore, review all FTC final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set aside any that unduly burden AI innovation." And the FTC under Lina Khan was actually doing some pretty amazing work, I thought, saying, look, we regulate the activities of businesses; it doesn't matter if you're using so-called AI to do it, it's still in our jurisdiction. And so this is particularly sad to see.

Sarah Myers West: Yeah. 

Alex Hanna: Yeah.

Sarah Myers West: And there's a number of cases that were opened up against the big tech firms that now could be called into question. And then the broader, I think, context for this that's really important is that the FTC is an independent agency. It's meant to not be influenced by one or another administration. And this is very clearly trying to take major steps toward asserting executive authority over the agency.

Emily M. Bender: Yeah.

Alex Hanna: We already saw moves toward that. The new head at the FTC has been much more amenable to the Trump agenda and has been hands-off, although they're doing that plus wielding a stick around antitrust, and they are pursuing a few of the cases that they started, including the Meta case regarding Instagram, and I think another, I know there's a few more. Y'all know much more than I do. So, where should we go next? There's so much in this document.

Emily M. Bender: There's so much.

Alex Hanna: There's parts on science that might be interesting, the sort of like open source and open weight element of this. And also there's elements around around NIST. One thing I guess I'll read is the open source and open wait thing, which I think is interesting. So "Open source and open weight AI models are made freely by developers for anyone in the world to download and modify." Which if you want to talk more about open source, watch the other episode we did with Sarah. "Models distributed this way have unique value for innovation because startups can use them flexibly without being dependent on a closed model provider...." Blah, blah, blah. "We need to ensure America has leading open weight models founded on American values. Open source and open weight models could be global standards, could become global standards in some areas of business and academic research worldwide. For that reason, they also have geo-strategic value-" which is funny to say. It's like these are soft power. This is the soft power. "While the decision of whether how to release an open or closed model is fundamentally up to the developer, and the federal government should create a supportive environment for an open models." And I think that's funny on this is also saying what they should do, which is basically try to promote this at NSF. And so they say, on the first bullet, "Ensure access to large scale computing power for startups and academics by improving the financial market for compute. Currently companies seeking to use large scale compute must often sign long term contracts with hyperscalers far beyond the budgetary reach of most academics and many startups, America has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities. 
Through collaboration with industry, NIST at DOC, OSTP, and the National Science Foundation's NAIRR, the National AI Research Resource, pilot, the federal government can accelerate the maturation of a healthy financial market for compute." Which is very funny given the kind of attack on NSF, that this is being promoted as a potential venue for access to compute.

Sarah Myers West: Yeah, one thing that, like, shines through this whole section is the rejigging of a lot of the conversation about public research into AI, which was there in the original documents around NAIRR; that's kind of what the thrust was. And we, AI Now, wrote about it back in 2019 and had qualms about the way that was structured. Same thing when the pilot came out. It was structured as a public-private partnership, which is problematic for a variety of reasons, particularly that the companies providing the resources would have a gatekeeping capacity for deciding what projects then get resourced, which gives them a lot of shaping influence over the field. This really doubles down, not only on that public-private partnership frame, but also on the idea that the end goal is more startup oriented. Like, it's about producing commercial projects with a hat tip to academics, but really this is about startups and giving startups access to private sector compute resources through a government-run marketplace. Which I think is a pretty far cry from the conversations around NAIRR taking shape last year.

Alex Hanna: That's a really good point. And I'd be keen, if anyone's listening, if they wanted to analyze even who the NAIRR recipients are, 'cause the data was public. I'd be curious if there's a shift in the next year in who is granted NAIRR resources in the pilot. And as you said, the P3 version of NAIRR has a bunch of problems, 'cause it's effectively begging for crumbs from the hyperscalers to basically put compute into a public resource. And now it's just gonna be oriented towards operations that are more market oriented rather than academic.

Emily M. Bender: All right. I'm gonna keep us moving.

Alex Hanna: Yes.

Emily M. Bender: There's something that I wanna talk about in "Enable AI Adoption," and then I think we should skip to the terrible stuff they say about science. Here, "Enable AI Adoption" starts, "Today, the bottleneck to harnessing AI's full potential is not necessarily the availability of models, tools, or applications. Rather, it is the limited and slow adoption of AI, particularly within large, established organizations. Many of America's most critical sectors, such as healthcare, are especially slow to adopt due to a variety of factors, including distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards. A coordinated federal effort would be beneficial in establishing a dynamic, 'try-first' culture for AI across American industry." And my note here just says no no no no no.

Kate Brennan: Yeah, you hear "try-first culture," and what it sounds like to me is, how quickly can we push unproven, untested technology into our most vulnerable, critical systems? And they even call out healthcare, which we know is one that is already facing so much top-down helicoptering in from AI companies coming in and saying, AI broadly is gonna solve every single problem that exists. And what we know from the on-the-ground reality, from nurses who are doing incredible work of showcasing it, is that pushing AI systems is undermining their clinical decision-making power and masking systemic issues at hospitals. And what I find to be so interesting about this whole section is that, again, it's this inevitability. It's saying, we have to study how much AI is going to increase productivity. It's not if AI is going to help these sectors, it's when and how quickly it can. And it's a particularly dangerous section, I think, for everyone who is working in healthcare and then also for patient safety writ large.

Emily M. Bender: Absolutely. 

Sarah Myers West: And it ignores that the distrust might be for a reason, like that might be very much warranted. In a healthcare context, it would be problematic if an AI system were to hallucinate answers and not be able to differentiate meaningfully between hallucinated outputs and non-hallucinated outputs. That would be a problem.

Emily M. Bender: Yeah.

Alex Hanna: Yeah, for sure.

Emily M. Bender: And that's on top of the displacement of accountability and everything else. And I think this hearkens back to what you were saying earlier about how this is really about, how do we funnel as much money as possible to big tech and big oil. And healthcare is a big fat pile of money that, you know, gosh, all of this pesky regulation is keeping us from being able to access.

Alex Hanna: Yeah. I wanna ensure that we hit the data center section, and then get to the tech stack export pieces, which are very critical to talk about. So I think that the start of the data center element is in pillar two.

Emily M. Bender: Okay, we're not gonna do science though?

Alex Hanna: I know. I wanna get to the data centers.

Emily M. Bender: Okay. We'll get to the data centers.

Alex Hanna: I wanna ensure that we get to that, 'cause we could talk.

Emily M. Bender: Okay.

Alex Hanna: I'm sure we could talk about this document for a whole day with y'all. 

Emily M. Bender: Rich text. Okay. Pillar two.

Alex Hanna: Yeah.

Emily M. Bender: Yeah.

Alex Hanna: So this is, yeah. Our producer said in Signal, "Whispers: 'three part episode.'" Going back to our roots. So, "Pillar two: Build American AI infrastructure." And so, okay: "Create streamlined permitting for data centers, semiconductor manufacturing facilities, and energy infrastructure, while guaranteeing security." "Like most general purpose technologies of the past, AI will require new infrastructure- factories to produce chips, data centers to run those chips, and new sources of energy to power it all. America's environmental permitting system and other regulations make it almost impossible to build this infrastructure in the United States with the speed that is required. Additionally, this infrastructure must also not be built with any adversarial technology that could undermine US AI dominance. But fortunately, the Trump administration has made unprecedented progress in reforming this system. Since taking office, President Trump has already reformed National Environmental Policy Act regulations in almost every relevant federal agency, jumpstarted a permitting technology modernization program..." Yada yada. Okay. Let's talk about data centers. So what are y'all's thoughts on this, and feel free to call out any of these bullet points.

Kate Brennan: Oh my gosh. There's so much to say about data centers, but there are a few big things to capture here. All of this is happening against the backdrop of an energy emergency. This goes back to Trump, on day two, declaring an energy emergency, though importantly, one excluding any sort of renewable energy. So wind and solar have been excluded from the beginning. So already we're seeing that what an energy dominance agenda, and building out data centers to meet this demand, requires is essentially building out our dirty energy infrastructure, our fossil fuel energy infrastructure. So that's point one; that has been, from the outset, what we're seeing. Second, it's positioning any sort of environmental regulation as burdensome red tape to repeal. So bullet number one, the number one recommendation: exclude any sort of AI infrastructure and data center projects from environmental laws, for maximum efficiency.

Alex Hanna: Yeah. Yeah.

Kate Brennan: And so what this is saying is, again, framing this race, this push for national competitiveness, as something that has to come at any cost, including our environment, including our health, including the cleanliness of our water. The Clean Water Act is called out here. These laws are in place to protect our land and the communities that surround it from the harms that we know can come from industrial, renaissances, as they wanna say. And it's extremely troubling to see that.

Alex Hanna: Yeah, a hundred percent. There's so many alarming things here. The Clean Water one: "Streamline or reduce regulations promulgated under the Clean Water Act, the Clean Air Act, and the Comprehensive Environmental Response, Compensation, and Liability Act." And then the next one is quite alarming: "Make federal lands available for data center construction and the construction of power generation infrastructure for those data centers, by directing agencies with significant land portfolios to identify sites suited to large scale development." So the attack on Bureau of Land Management lands, US Department of the Interior and Department of Agriculture managed lands. Places that, you know, are already kind of opened up to industrial mining and resource extraction, and now trying to identify them as sites for data centers. To me this is super alarming.

Emily M. Bender: I wanna take us back just a little bit to the intro to this, where it starts with "Like most general purpose technologies of the past," which is slipping in as a presupposition that this "AI," whatever it refers to, is a general purpose technology. Which, first of all, it doesn't refer to anything. And secondly, it doesn't refer to anything that could be a general purpose technology. So that's infuriating. And it also shows just how little the people writing this document actually understand about what they're writing about.

Sarah Myers West: And I think even more worrisome is that, should this move, and it seems like it is- they've already announced, like, three places that they've decided are priority federal land parcels that are going to be targeted for data center construction. This is the part that seems to be moving most quickly within the entirety of the action plan. We're gonna end up with infrastructure that introduces path dependencies. So if the AI bubble bursts, as has been the topic of speculation, there are gonna be these data centers that are optimized for large scale AI that are going to need to be repurposed for something. And so it sort of cements the incentive structures toward the perpetuation of building AI at large scale and pushing it out into as many use cases as can be thought of and profit can be made off of, locking us into this future no matter what the cost.

Alex Hanna: Yeah.

Kate Brennan: And just one other thing to say on this federal land data center area: this year alone, I think, the hyperscaler companies are spending $300 billion on investing in data centers. They have no shortage of areas where they're bulldozing into communities and building data centers. So if you needed any indication of just how firmly this plan is a handout to the big tech industry, giving more federal land, and in many cases vulnerable federal land, over to data center infrastructure shows just how endless it really is.

Emily M. Bender: Yeah.

Alex Hanna: Yeah.

Emily M. Bender: So one last thing that I wanted to put in, and then we'll ask you both for final thoughts on this. I just wanna point out that there's this call out, under building out the grid to be ready for all of this. They say, "Prioritize the interconnection of reliable, dispatchable power sources as quickly as possible and embrace new energy generation sources at the technological frontier, e.g. enhanced geothermal, nuclear fission, and nuclear fusion." And fusion power isn't a thing yet, right? So we've got wishful thinking here; that's, again, the wishful thinking that the tech bros want. The Altmans and the Andreessens of the world want this.

Alex Hanna: Yeah. Be, before we get final conclusions, can we at least go to pillar three?

Emily M. Bender: Okay. 

Alex Hanna: Because that is where the tech stack is mentioned in the export and,

Emily M. Bender: Okay.

Alex Hanna: The anti-China, the race condition, is kind of throughout, but this is where it's at its sharpest.

Emily M. Bender: Yeah.

Alex Hanna: So this is "Pillar three: Lead in international AI diplomacy and security." And there's two subheads on this page. "Export American AI to allies and partners," so: "The United States must meet global demand for AI by exporting its full AI technology stack- hardware, models, software applications, and standards- to all countries willing to join America's AI alliance. A failure to meet this demand would be an unforced error, causing these countries to turn to our rivals. The distribution and diffusion of American technology will stop our strategic rivals from making our allies dependent on foreign adversary technology." And then the second one is "Counter Chinese influence in international governance bodies." And this one's actually interesting too, because, I'm curious on y'all's thoughts on this: of course there's certain bodies like the OECD, G7, G20, but then there's the jockeying in these other organizations like the ITU and the Internet Corporation for Assigned Names and Numbers, where I don't know if I've ever seen Chinese counter-influence in international governance. But I'm curious on your thoughts on these two.

Sarah Myers West: Yeah, so this is a notable shift in the posture of US industrial policy. Under the Biden administration, the approach was, we want to see American AI that's competitive and to see other places in the world using it. And there was, in the sort of policy structure, the diffusion rule, which is the set of export control rules that limited the sale of advanced semiconductors- so leading node, like the ones that are most capable of being used in large training runs. They were limiting the sale of those to countries on a tiered basis. It was a very complex system of export controls. That, post a conversation between Jensen Huang and President Trump, has seemingly been wound back in favor of a stance that the US should export the full stack and get as many people dependent on US-made infrastructures, which are monopolized all the way down the stack, as possible. And that that is in the interest of US economic security: we want everybody to be dependent on our stuff, and that's what will keep us the most safe. And that includes selling chips to more places, including China. And then shortly after this was released, one, China released its own AI action plan. And two, there was this news article where a Chinese government agency, I forget which one, said that they had found security vulnerabilities in Nvidia chips, and that those chips contain capabilities that enable you to track their geolocation and to shut them down. These were known capabilities; this was not new news. But it was this soft clap back of, yes, you now may be trying to get everybody to adopt your tech, but we're gonna remind everybody else about some of the built-in capabilities within that tech that cement US power- the power of US firms to know what you are doing and also to remotely shut down those chips.
So it's, you know, I think a shift in the posture of the way the US-China arms race is being enacted. And again, like I said at the top, it's a return to the Clinton era, when there was a policy stance that we just want to export US private company technology as widely as possible and get everybody dependent on that technology, and that's going to be a real strategic asset for the United States.

Alex Hanna: Yeah. And there's so much in here that I think is all very wishful thinking. Like, effectively trying to ensure supply chains that are US only or from trusted allies. So many components are not coming from the US or US allies. So that comes with this dream that manufacturing can come back to the US.

Sarah Myers West: Some of the underlying capabilities that you need to produce the cutting edge semiconductors rely on manufacturing equipment that is produced by one producer in the Netherlands, for example.

Alex Hanna: Yeah.

Sarah Myers West: And you can't package AI chips anywhere else except in Taiwan. So the supply chain is already very distributed. And the plan doesn't quite acknowledge that that's the case.

Emily M. Bender: Yeah. All right. We do need to get moving to Fresh AI Hell. But Kate, I wanted to ask if you had any other thoughts that you wanted to add to wrap things up.

Kate Brennan: My concluding thought: there's this line in the plan that says, "we need to develop a grid to match the pace of AI innovation." And I think it's actually the perfect foil, because what I keep saying is, no, we need to develop a grid to match the needs of the American people. You can look at this entire document as written as a plan to match the pace of AI innovation, and not a plan to center the needs of the American people, or the American economy, or just the economy writ large. And if that were the case, we would see an entirely different set of policy actions. And so I think that sums it up for me; that's exactly what this document is doing.

Emily M. Bender: Yeah.

Alex Hanna: Do you wanna also say, shout out the People's AI Action Plan? 

Kate Brennan: As both a prebuttal and a response, all of our organizations, in coordination with, I think now we're at, 135 different organizations, signed on to a People's AI Action Plan responding to this exactly, saying, if we actually want to lead to a sense of shared prosperity and wellness, then we need an AI action plan that is written for the needs of people and not the tech industry. And so hopefully we will all work on continuing to build out that positive policy agenda.

Emily M. Bender: Yeah. Excellent.

Alex Hanna: Cool.

Emily M. Bender: Alex, for the transition: musical or non-musical?

Alex Hanna: I don't think we did musical last time, so why don't we do musical?

Emily M. Bender: Okay. Let's say this is a jazz song, but uptempo or you can change the style if you want. And you are a staffer assisting in the authoring of this document tasked with finding additional agencies that haven't been mentioned yet to figure out how to draw them in.

Alex Hanna: Oh, interesting. I'm trying to think about how you would do something for an alphabet soup. So doop, buh-doop, buh-doop, buh-doop, DOC, DOD, NIS- ST, gettin' down to business, we're gonna put things here and do the AI action list. Doo, duh-doo, duh-doo - and then you can continue, it's just an alphabet soup type song. Think about a jazz version of that Animaniacs song where they just say every single country.

Emily M. Bender: Love it. Thank you.

Alex Hanna: Yeah.

Emily M. Bender: All right. We're gonna do these really fast, just as headlines. So from May 27th, 2025, in Bloomberg Law, by Kaustuv Basu, the headline is "Medical Pros Risk Malpractice Suits By Avoiding AI Innovation," which is just terrible, but also really jibes with what they're saying in this AI action plan of, we've gotta shove this into medicine no matter what. So this is what happens when people buy the hype: because you get the sense that you aren't doing the cutting edge thing, so you aren't providing the best care, and we can sue you. Yikes.

Alex Hanna: Yeah.

Emily M. Bender: I'm gonna do this one too, because it's a little bit in depth, but you get the next one, Alex. So this is Semafor from June 25th, 2025, by Reed Albergotti, with the headline "Google Search's Next Cash Cow," and basically it's talking about how, because the AI overviews, and also asking the AI agents for product recommendations, are circumventing the ability to put advertising in front of people, there is now this secondary advertising market where the targets of the advertising are the AI systems. Which is just, ugh.

Alex Hanna: What a nightmare. Yeah. 

Emily M. Bender: This one.

Alex Hanna: This is from the Wall Street Journal. This is maybe some AI schadenfreude. So this is by someone named Chip Cutter that came out on Saturday, August 2nd. The title is "AI Is Coming For The Consultants. Inside McKinsey, quote, 'This Is Existential.'" And the subhead is, "If AI can analyze information, crunch data, and deliver a slick PowerPoint deck within seconds, how does the biggest name in consulting stay relevant?" And, I dunno, maybe these are some jobs that I am okay with some machines taking over, because consulting is bullshit. But they've made a bunch of their bread on AI implementation, so fuck around and find out,

Emily M. Bender: yeah. Made their bread and now they can sleep in it.

Alex Hanna: Yeah. I love that. I, now I'm thinking about a sandwich I'm gonna sleep in.

Emily M. Bender: All right, Washington Post from yesterday, so August 3rd, and the headline here is "The Rise Of AI Tools That Write About You When You Die." And the subhead is, "Families and funeral directors are using AI obituary generators to more efficiently memorialize the dead. What happens when they get it wrong?" And so we're not gonna go into the body of the article, 'cause we don't have time, but I just wanna say, first of all, why efficiency? What is efficiency doing in this step of grieving together? And secondly, they're asking the wrong question here. What happens when they get it wrong is actually not the relevant problem with this.

Alex Hanna: It's giving, let ChatGPT experience my life and summarize it in bullet points, kinda energy.

Emily M. Bender: Very much. Yes.

Alex Hanna: Yeah.

Emily M. Bender: All right. This one to you, Alex.

Alex Hanna: So this is in Variety, and this is by Todd Spangler, published on July 30th. "Amazon's Alexa Fund Invests In 'Netflix of AI' Startup Fable, Which Launches Showrunner: A Tool For User-Directed TV Shows." My brain simply cannot understand this headline. But when you actually get into it, it's just like a user-generated show, video of, I don't know, personalized content. Just absolute nightmare stuff for any kind of creators of culture.

Emily M. Bender: Yeah. And of course it's Amazon that's investing in this.

Alex Hanna: Yeah.

Emily M. Bender: All right. Finally, a little bit of a palate cleanser. This is from the Lawrence Times on August 1st, and the headline is "In Federal Lawsuit, Students Allege Lawrence School District's AI Surveillance Tool Violates Their Rights." And so just really glad to see this kind of a pushback against some of this terrible ed tech stuff going on. And I think we don't have outcomes of this lawsuit yet. I think this is just like, it's been filed.

Alex Hanna: And these are, we should say, the image shows high school students who are journalism students. And the caption says three of these students are now plaintiffs in a federal lawsuit against Lawrence Public Schools. Hey, the kids are okay, but I wish their school district wasn't being negligent in implementing these tools that they didn't ask for.

Emily M. Bender: Totally. All right. And then I'm, it's always a little bit awkward because I start the outro and I have to switch my windows around, but in fact, that's it for this week. Sarah Myers West is co-executive director of the AI Now Institute. Thank you so much for joining us, Sarah.

Sarah Myers West: Thank you for having us.

Emily M. Bender: This is, we could have talked for hours.

Sarah Myers West: We really could have.

Emily M. Bender: Kate- we really could have.

Sarah Myers West: This could've been like a 10 parter. Yeah.

Emily M. Bender: Kate Brennan is associate director of the AI Now Institute. Thank you, Kate, for all of your wisdom today.

Kate Brennan: Thank you both so much.

Alex Hanna: Thank you both for coming. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor. And is Chris- Christie, is this your last one? I don't know. You can edit this out if it's not. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Order the AI Con at thecon.ai or wherever you get your books, or request it at your local library.

Emily M. Bender: But wait, there's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at dair-institute.org. That's DAIR hyphen institute dot org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again, that's DAIR underscore institute. I'm Emily M. Bender. 

Alex Hanna: And I'm Alex Hanna. Stay out of AI hell, y'all.
