
Mystery AI Hype Theater 3000
The UK's Misplaced Enthusiasm (with Gina Neff), 2025.01.20
In January, the United Kingdom's new Labour Party prime minister, Keir Starmer, announced a new initiative to go all in on AI in the hopes of big economic returns, with a promise to “mainline” it into the country’s veins: everything from offering public data to private companies, to potentially fast-tracking miniature nuclear power plants to supply energy to data centers. UK-based researcher Gina Neff helps explain why this flashy policy proposal is mostly a blank check for big tech, and has little to offer either the economy or working people.
Gina Neff is executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, and a professor of responsible AI at Queen Mary University of London.
References:
The AI Opportunities Action Plan
‘Mainlined into UK’s veins’: Labour announces huge public rollout of AI
Gina Neff: Can democracy survive AI?
Labour's AI Action Plan - a gift to the far right
Fresh AI Hell:
"AI" tool for predicting how Parliament will react to policy proposals
"AI" detects age based on hand movements
Apple Intelligence misleading summaries of news
Book simplification as a service
CEO doesn't understand why kid turned AI features of toy off
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Welcome, everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender: Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, Professor of Linguistics at the University of Washington.
Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 49, which we're recording on January 20th, 2025. Last week, just as we were preparing to record our previous episode, UK Prime Minister Keir Starmer announced a new initiative to go all in on AI in the hopes of big economic returns, with a promise to quote, 'mainline' it into the country's veins. We're talking everything from offering public data to private companies, to potentially fast-tracking miniature nuclear power plants to supply energy to data centers.
Emily M. Bender: We spent a minute with this story in AI Hell last week, but we knew it needed a more substantive treatment, a higher dose of ridicule as praxis. Today we're here to do just that: explain why this flashy policy proposal is mostly a blank check for big tech, and has little to offer either the economy or working people in the UK. And we're getting help from our esteemed guest, Gina Neff. She is Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge and a Professor of Responsible AI at Queen Mary University of London. Welcome, Gina.
Gina Neff: It's great to be here. Thanks.
Emily M. Bender: This is a snap--we got this organized really fast. We're really grateful that you can join us. And I think we should just get into it because my goodness, is this a pile of nonsense. So let me get to the right screen so that I can share my screen and take us to the lovely document that the UK government has provided. Um, called, it's an independent report, titled "AI opportunities action plan," published on the 13th of January, 2025. Um, and you know, what can you tell us about who wrote this thing?
Gina Neff: Right. So this was written by a tech entrepreneur named Matt Clifford, and he has been sort of the go-to guy for both the last Conservative government, and surprisingly, for a Labour government coming in, the same, they've gone to the same tech policy advisors. So last year, Matt Clifford was all about, let's get ready for AI safety. He, he was responsible for helping to spin up the AI Safety Institute in the UK, throwing loads of resources at, um, existential risk. So this plan has been cooking for a long time. Everybody was expecting it. We, we heard in October, it was just around the corner. We heard in November, any day now. And it's finally here. It's been met with some real, um, enthusiasm, which I find shocking because there's some real howlers in here. But, but understand this has been, this has been conceived of and imagined completely outside of government, right? So, so complete independent review, Matt Clifford, one-man show doing this report, but the government simultaneously released their response in which they essentially said yes to all 50 proposals, with two sort of, 'well, making immigration easier, I don't know about that.' I mean, basically they accepted all 50 with two kind of caveats.
Emily M. Bender: Wow, and this is published at Gov.UK, so it's not like this is some, like, it's an outsider, but it's an outsider who was commissioned by the government to do this, I guess?
Gina Neff: That's right. So this was commissioned, this was commissioned by the government, and this was commissioned as an independent review, and independent reviews go on our lovely website. So here's the first thing not to ridicule. Gov.UK is such a gem in the royal crown of public infrastructure, digital infrastructure in the UK, right? It's easy to use, it's been cleanly designed, you can quickly tell it's a government website, and they all have the same sort of, um, navigability. So there's just something really wonderful that they did with public services, and they seem to be throwing all of those lessons out for AI.
Emily M. Bender: So I have to say that when we decided to do this, we sort of grabbed this as our artifact. And then I thought, okay, government document, this might be relatively dry, might be a little bit hard to find--it's full of howlers, as you say, basically from the start. Um, and so there's a foreword here by the Secretary of State for Science, Innovation, and Technology. Um, and just to give a sense of this, let me, let me read where it starts from. So, "Today, Britain is the third largest AI market in the world. We are home to an extraordinary array of global talent and pioneering AI firms like Google DeepMind, ARM, and Wayve--" Haven't heard of Wayve, it's spelled W-A-Y-V-E. "--but despite our record of scientific discovery, from Alan Turing on algorithms and general purpose computing to Tim Berners-Lee's World Wide Web, the UK risks falling behind the advances in artificial intelligence made in the USA and China." So it's all about this like international competition, and we have to be on top and we have to be the best, and, um, the, the thing that jumped out at me as I'm reading this and sort of going in further is there's no definition of AI in this document.
Gina Neff: That's right. That's right. So on the one hand, um, at the press conference, these, um, kind of, uh, machine learning innovations that happen in clinical care, that are handled in a very different tech infrastructure, a very, a very different architecture of when you roll AI out for clinical care, is being put alongside, um, local governments using AI somehow to spot potholes, to fix potholes faster. I mean, it's just, you know, alongside generative AI and, and those tools. And, and the spirit of this document isn't 'let's get the country ready for this transition.' It's literally, let's invest in industry. And bless, you know, bless the poor, uh, Labour government. They're, they're just back in power after 14 years of being out of power. The UK government is, the UK economy is really just staggering under the weight of the austerity cuts that they've lived under for 14 years. So there's been this intentional underinvestment in public services and public infrastructure. And now the UK is one of the slowest growing economies in, um, the OECD. So this government's saying, well, we have to have growth if we want to have nice things, and the way we're going to get growth is not by undoing the massive Brexit. It's not by, you know, diving deep into those public investments that we know really help economies. No, we're going to get growth by AI. So it's sprinkling this kind of AI magic fairy dust over pretty serious economic problems that the country has to solve. So the spirit of this document is not, you know, how do we make sure we don't leave workers behind? How do we make sure that we don't leave, um, you know, different communities behind? How do we get our, our schools ready? How do we get our, our civic organizations and our news organizations ready? No, it's how do we open up the floodgates of investment in the UK in AI industry?
Because they think that's going to patch over the problem, just like fixing a pothole.
Alex Hanna: Yeah, the thing about this document, I think compared to a lot of the government documents that we've seen in the US and analyzed on this show before, is how much it emphasizes, emphasizes bringing quote unquote "AI" into different government services. Um, and so there's, um, there's a, there's a second paragraph, and the third paragraph says, "The plan shows how we can shape the application of AI within a modern social market economy." Uh, so they, they put 'social' in there to denote that they're interested in pro-social, uh, economic ends. "We will do so by working closely with the world's leading AI companies--" The corporate, uh, aspect of it. "--Britain's world leading academics and entrepreneurs, and those talented individuals keen to start up and scale up their businesses here." "Our ambition is to shape the AI revolution on principles of shared economic prosperity, improved public services, and increased personal opportunities so that--" There's three points. "AI drives the economic growth on which the prosperity of our people and the performance of our public services depend. AI directly benefits working people by improving health care and education, and how citizens interact with their government, and the increasing prevalence of AI in people's working lives opens up new opportunities rather than just threatens traditional patterns of work." So there's a lot there, um, and one thing I want to highlight for, for, uh, kicking it back to you, Gina, is, is this thing about like interacting with the government, this, this thing about the, the, the citizen-government nexus, and then like how AI is supposed to facilitate this, is so, um, confused in this document, and it's really, um, I don't know what the right word is. Puzzling is one. Like how the hell is this supposed to improve government? Like, is it, is it the fact that you're going to sell a bunch of private NHS data? Like, what's happening here?
Gina Neff: Well, and that's, I'm going to point forward to, um, the, the quick takes segment, I forget what you call it, but the end of the show where you're going to go through things quickly, because I--
Emily M. Bender: Fresh AI Hell.
Gina Neff: Fresh AI Hell. I think I'm going to point forward to that, because you may have something queued up in there that's going to be really, uh, telling, I believe, in that. So. Listen, I think a couple of things are going on. And one is a pure political play. Let's just call it what it is. Um, this government is, um, is one of the few in this wave of elections that happened this year, this is a, um, left party that came into power in a year that many right-wing parties across Europe, the US, um, came into power. So, um, they domestically know that their threat is from the right. They're not facing a threat from the left, a serious threat from the left. So they've pivoted more centrist than what some in the Labour Party might want to see. Okay, fine. All of that to say, the threats coming to them politically are from the same kind of anti-immigration, anti-marginalized-community culture warriors that we've seen going on in the US. So that's the, that's the battle they're fighting. So in that environment, this government is having to say, 'We want the nice things, we want to improve people's lives. But we're not going to do it by raising taxes, and we're not going to do it by making the cuts to public services.' So how are they going to make up that investment? And again, it's a little bit of AI magic dust. You know, if we can only get government to be more effective and efficient. We'll anchor that to AI, because AI is all about efficiency. You know, I've got a paper coming out, um, this week called "Can democracy survive AI?" And in that paper I do a take on this plan, and the kind of tension there is that AI means efficiency. I mean, AI means a whole lot of things. It means a whole lot of upfront capital investment. Um, but efficiency isn't a guaranteed outcome, right? It doesn't mean, you know, do AI, therefore get more, get faster, better. You might get something else out of that.
Emily M. Bender: And that 'AI means efficiency' thing seems to run so counter to everything we're learning about its environmental impacts, which are the opposite of efficiency.
Gina Neff: Well, and that's where some incredible magical thinking is going on in here. So. Remember, the one thing you might remember about the geography of the UK: um, a big island, set of islands. Um, natural, uh, energy sources are really limited. And the grid capacity in the UK, the electricity grid, is incredibly stretched. Like, it's, it's, uh, at its max. And so where are these new power sources for new data centers coming from? And literally in the plan, it's like, oh, well, don't worry, we've got mini nukes coming. And the mini nukes are 25 years down the road. I mean, they're literally like, they're not here in the timescale that this report is about. The other magical thinking in this report is it's, it's written by somebody who doesn't understand government very well. So there's some real kinds of howlers, I think, in terms of, you know, just how we think through, what is the role of government? What is the role of public? So on the "Lay the foundations to enable AI" section, on that Point Seven, if you want to scroll to that, you know, it's, "We will identify, rapidly identify at least five high-impact public data sets that government will seek to make available to AI researchers and innovators."
Alex Hanna: This was-- Yeah. Yeah. Go ahead. Sorry.
Gina Neff: So, "Prioritize the potential economic and social value of the data as well as public trust, national security, privacy, ethics, data protection."
Emily M. Bender: Privacy. They said privacy. It's all good.
Gina Neff: Well, here's the crazy thing about that, right? So public data sources in the UK? They're public. Like, what am I missing here? There's a whole host of data sources in the UK that are public and available to AI researchers. So are they talking about NHS data? Because if they're talking about the country's, the--England's healthcare data, um, that's, you know, that's a big bright line that voters have been very clear they don't want anybody touching. Yeah, so it's kind of interesting. It's like, okay, well, on the one hand, you're talking about public data sets. But on the other hand, the government already has this mission to make data public. Like, that's just not even joined up with what's already going on and efforts in the government.
Emily M. Bender: So before we go into more of the howlers, and there's a lot, I want to just raise up a couple of the comments from the chat. So we have someone whose handle is hard to pronounce but maybe N-Dr-W-Tyler? (ndrwtylr) Um, "I couldn't get through Keir Starmer's speech announcing this. If you have any idea how broad a term AI is, it just sounds like Britain has a long history of 'stuff.' The NHS uses 'stuff' every day to save lives. We're proud to announce the next generation of intercontinental ballistic 'stuff' will be built in the UK," which I thought was great. And then, uh, Abstract Tesseract says, "Somehow it's, quotes, 'unrealistic' to demand that politicians work to end austerity and guarantee access to what people need to live and thrive, but it's totally reasonable to expect tech companies to address these things by slapping an AI on it."
Gina Neff: Uh, that's really well said.
Emily M. Bender: Yeah, that really, really gets to it. So let's, let's go back up to the, um, the top of the report here. So we had the intro that we were reading from, from, uh, the Secretary of State for Science, Innovation, and Technology. And then we have the actual, um, report itself, and it starts with the heading, "The Opportunity." And the very first sentence after that is, "AI capabilities are developing at an extraordinary pace." So, it's like, it's hype from the very first sentence. This is not, I mean, I take the phrase 'AI' in itself to be hype, 'capabilities' too, so like the first three words are just hype, and yet the government sort of is nodding along, I guess? Like, that's not what you want.
Gina Neff: No, I mean, listen, I heard, um, eight years ago when the Brexit vote was being, uh, floated that, you know, everybody said, oh, it's fine. Brexit won't matter. Any costs of leaving the EU will just be absorbed because the UK is such an AI superpower. We're going to dominate in, in AI. So there's been this idea that, you know, the AI industry will save the UK. Now the AI industry, sure. Google DeepMind did just win Nobel prizes for, um, for the things they do. Um, but, but, you know, these companies and products are not the unicorn, you know, billion, trillion IPO companies that we see being spun out of Silicon Valley, right? So on the one hand, there's this idea that we can be world leading, but on the other hand, there's a real British chip on the shoulder that, that we're already, um, too late for the race. As the Brits would say, we're on the back leg. We're on the back foot. We're on the back foot. That's an Alex saying, too. That's where you got that one, Alex.
Alex Hanna: On the back foot?
Emily M. Bender: Yeah.
Gina Neff: To be on the back foot.
Alex Hanna: Really? I don't know where I got it. It might be, I might have got it from Stef Mainey, who played for London Roller Derby.
Emily M. Bender: I just remember as we were drafting the book, we had a long back and forth about that phrase. Because I didn't recognize it. I think we kept it in. Um.
Alex Hanna: I didn't know that was, that was, that was a British phrase. I don't know where--I blame Mainey. Uh, so the next sentence is, "If this continues, artificial intelligence could be the government's single biggest lever to deliver its missions, especially the goal of kickstarting broad based economic growth." I'm not, Gina, are you familiar with these five missions? Because I clicked through and there seems to actually be six missions.
Emily M. Bender: There's six things there. Yes.
Alex Hanna: But what, is this a common touchpoint within, I mean, within UK policymaking?
Gina Neff: This was, this was the Labour Party's, um, uh, campaigning theme, right? Five key missions and a mission to rule them all.
Alex Hanna: Ah. Mi--yes.
Gina Neff: Five key missions and a mission to rule them all. Um, so, so, that's--
Alex Hanna: Yes. As Sauron said. Yes.
Gina Neff: Exactly. So that kickstarting economic growth is, is really, um, it, it's, it's a mission unlike other missions, right? It's the, it's the one that everybody says has to be the first thing that this government works on. The challenge is, right, you know, this government has a mandate for six years. We'll see if this government can hold together for six years. They have really started off on the back foot. And, um, you know, the, so safer streets, right? You know, AI has been talked about being sprinkled across and through all of these missions. Like, we want safer streets, we can use AI to somehow reduce, reduce street crime, um, you know, improving like--
Emily M. Bender: So safer streets isn't like smaller cars and lower speed limits so pedestrians are safe. It's something else, isn't it?
Gina Neff: That's right. That's right. Um, it's something that again looks to appeal to more conservative voters who are worried about a marginalized other somehow invading the streets of, let's be clear, England, right? Because that's the, that's the image they're using, not, not what's going on in Scotland, Wales, or Northern Ireland.
Emily M. Bender: So I have to say, this whole thing about kickstarting economic growth, I can tell you from the country that is home to Silicon Valley that having large tech companies that are just amassing all the data and all the money and all the power is not economic growth. It's not making things better for people in general, it just means that we now have these billionaires that are buying our government.
Gina Neff: Well, and there's some real crowding out that can happen, right? Um, you know, right now we have a crisis in affordable middle-class housing. We lack the power to build new housing developments, the electricity to build new housing. We're having a water crisis. Um, so, you know, two things that we know data centers need, electricity and water, are two things that there is none to be spared in the country. Um, not at the scale that we're talking about in these investments. So again, there's a little bit of magical thinking, like where this is going to come from and how this is going to really benefit ordinary people.
Emily M. Bender: All right. Anything else in this, uh, oh, I've got a bunch of highlights over here. AI safety comes up, of course. Um, this: "Push hard on cross-economy AI adoption. The public sector should rapidly pilot and scale AI products and services and encourage the private sector to do the same. This will drive better experiences and outcomes for citizens and boost productivity." So like all in, all in. Um.
Gina Neff: And this all in really worries me, right? So, um, this all in is about the public sector as a buyer of AI services. And so on the one hand, what the UK government is acknowledging is, um, they're world leading, a, uh, listen, they're world leading in a lot of things, and I have been and am at great universities. I helped to run Responsible AI UK, which is a big investment in helping to build the research capacity in the country in responsible, um, ethical, um, transparent, accountable, fair AI systems. Right. Um, that said, you know, this is talking about the purchasing power of the government to buy from somebody these tools that do something in the public sector that spills over to other things, versus the, um, data-building capacity of organizations, right? So, so helping organizations do the things they need to do to, to run with their own AI investments. That's not what we're talking about. We're talking about the UK government spending a lot of money to buy some stuff.
Alex Hanna: Yeah. And the thing that really, that was so striking to me, about the, uh, advertising that the government would become a large, um, consumer of these products, with little to no discussion of, um, procurement practices, and how one would, would vet these tools, with what kinds of responsibility the government would have to citizens to ensure that these tools are, at the very least, unbiased, at the very least, transparent, at the very least, you know, have some kind of energy action plan. Things that you would expect good governance to entail.
Emily M. Bender: Have been evaluated and validated to make sure they work the way they're being advertised.
Alex Hanna: Yeah, so you would expect some kind of discussion of that, as this is a top-line part, a cornerstone of this plan. I mean, it really is, you know, something that, that is so significant in, in this. And it is, um, I mean, given the background, the very helpful background on this, this tech entrepreneur, Matt Clifford, that he wrote it, it's not surprising that someone who doesn't know anything about government procurement would just completely gloss over this, but it's, it's very shocking just to see its absence.
Gina Neff: And here's--well, I was just gonna say, to come back to your, your motto of always read the footnotes, right, or leave no footnote unread: um, you know, somebody doing their due diligence on what to do to spark this might look to one of the UK's allies, the US, and what the Biden White House did around issues of procurement, right? So the Biden White House had very few regulatory tools in their toolkit. They knew they weren't going to get legislation passed. But what they were able to do is to make executive orders outlining those fair principles, as you've covered, of course, but also, um, you know, uh, basically the General Services Administration having, in effect, an executive order on procurement and what needs to be in place to make sure that these systems work for government. And I think that's a key distinction, right? What, what Accenture or the big guys do for other corporate clients is very different from what they have to do when delivering public services. And that spirit of public services is lost in this document.
Emily M. Bender: Absolutely. And I think that that connects to the thing that I wanted to highlight here, which is we have this tech guy putting his wishlist in and at the top is "be on the side of innovators." So the government should just be helping me and people like me build the tech we want to build and like never mind looking out for the interests of the public. That should be their main thing. So it says, "In every element of the action plan, the government should ask itself, does this benefit people and organizations trying to do new and ambitious things in the UK? If not, we will fail to meet our potential." That just, that was infuriating to me.
Gina Neff: So last year, um, I worked with the, with the organization of trade unions, the Trades Union Congress, the TUC, and a group of stakeholders from across, um, the UK. So both from the tech sector, from human resources professionals, um, to, um, researchers, uh, academics, um, trade unionists, to think about what we need to have in place for an AI bill that works for existing labor law. And there's a lot of stitching together that needs to happen in the regulatory space, right? So, so getting AI to work for working people, there's just some basic things that have to be put into law to make sure that it doesn't do harm. For example, a right to redress, a right to recourse if mistakes are made, doesn't exist in UK law, right? A right to human review of automated decisions doesn't exist in UK law. And we were like, excuse us. These things need to be in place. What is striking about this plan is there's not a single thing mentioned about how to make sure you shore up workplaces.
Alex Hanna: Yeah. There is, yeah. The only thing I think mentioned, uh, which is not a thing at all, it's, I think it says something. Yeah. I think the, one of the three points, "Increasing the prevalence in people's working lives opens up a new opportunity rather than just threatens traditional patterns of work." Okay. What does that mean? Like, and I mean, I think this is something that I've seen really infuriating from a lot of the tech boosters within the, within the labor movement. And this is prevalent in the US too, of people sort of saying like, well, we're going to have, you know, we're going to have these kinds of working relationships, uh, you know, with Microsoft or with whomever, we'll see what that looks like. There's not really any kind of plan. What are you going to do about worker control of technologies? What are you going to do about any kind of oversight? What are you going to do about worker data and any kind of surveillance being done in workplaces? And how do you guarantee that stuff isn't turned against workers? And so, you know, the words 'working people' are in this doc because it was, it was written for a Labour government, but you are spot on saying that there's nothing about how, uh, this is going to benefit working people or, uh, how, how labor organizations would have any control over their, their employment.
Gina Neff: That's right. And the theory of change that you both have pointed out, and let's just say it out loud, right? The theory of change is, um, working people benefit from efficient, effective public services. And that's true, maybe, but there's loads of other levers--oh, I just went British on you. There's loads of other levers that we can pull.
Alex Hanna: I said, I said levers in the intro.
Emily M. Bender: Yeah, but you were reading a British document, so.
Alex Hanna: I was, no, I was reading these, I was reading the script that--
Emily M. Bender: Oh, Christie wrote.
Gina Neff: But yeah, there's, uh, there's many other tools in our toolkit, right? There's loads that we could be doing to help people in their daily lives. But this is like, how do we modernize public services without spending more? And that's where I really want to call BS on this document, right? Because we all know on this call that the AI hype is about spending more for procuring AI tools and services from the big tech guys. And that's expensive. And the costs of this are not addressed. And it's like a blank check. It's like, yeah, this would be fanta--wouldn't it be fantastic if all my friends who are innovators, they're the ones that we think of when we make these investments. Not, you know, the, the people in the regions who are really, really deprived. The, the statistics on child poverty in this country are shocking. Again, for a G7 country, it's shocking that, um, the rates of poverty are as high as they are.
Emily M. Bender: Yeah. Oof. So the whole thing is basically just, like you were saying before, AI magic dust, um, and we want to put that message in so that everyone will be okay as we go ahead and basically, you know, continue with unabated centralization of data and power and bending everything to the needs and desires of the innovators, rather than the needs of the people. Um, so I wanted to point out a couple more pieces of hype in this intro, and then maybe we can get to some like favorites in the 50 points. Um, so, um, the first one is, uh, so second to last paragraph of the intro here: "No one can say with certainty what AI will look like a decade from now. My judgment is that experts on balance expect rapid progress to continue." It's like, we don't know. Let's ask the people who I'm calling experts, so the people who are like building this stuff and actually have no expertise in evaluating it or, you know, making it an effective thing for people to use, um, those people are excited about it. That's real useful evidence in this document. And then there's this thing, this last paragraph is, um, just so much the logic of the AI safety people. Um, so he talks about, um, "Even if AI progress slows, we'll see large benefits," and then last paragraph, "If, however, capabilities continue to advance, having a stake in, and being the natural home of, advanced AI could be the difference between shaping the future of science, technology, and work, and seeing these decisions made entirely outside our borders. This is a crucial asymmetric bet, and one the UK can and must make." This is exactly the logic of the X-risk people saying we have to pay attention to this because it would be so bad if it goes the other way. Just, yeah.
Alex Hanna:I kind of, in some ways I'd say it it's, it's less AI existential risk and more of this, this, this, this China, this is like, you know, like China coded competition, right? And it's--
Emily M. Bender:Oh, sure sure sure. But--
Alex Hanna:You know, and it's in line and saying, you know, like we can't, we can't, you know, they're tied. Yeah. Go ahead.
Emily M. Bender:Yeah. It's the same logic though of, this thing would be so huge. We therefore have to place our bets on the case where it is because the expected value in this case, the expected value is good rather than bad is just so big it outweighs any other consideration. So that's the, that's the parallel.
Irate Lump in the chat:"If you don't invest in our snake oil scheme, China will."
Alex Hanna:Yeah, for sure. It's, it's the China, the China boogeyman, uh, is always lurking in the, in the Anglo American consciousness. Yeah.
Emily M. Bender:All right--
Gina Neff:Well, and I think to be fair here too is the US boogeyman, right?
Alex Hanna:Yeah.
Gina Neff:Um, that, and, and, uh, the harsh reality of what Brexit means. Um, I think it was sold as an idea that suddenly the Anglo, um, American alliance would be stronger than ever. And the US is like, I'm sorry, who are you again? Like, why should we, why should we bother, bother your loss? And, and, and to that effect, right, um, the UK spun up the AI Safety Institute. They're incredibly proud. It's the largest one. Just, they got a little bit of a running start. They were the first host. Um, you know, much has been invested in terms of building government capacity and understanding frontier AI. Fine. Great. Um, okay. And yet, all of that research on what is happening with frontier AI is wholly dependent on primarily US companies giving the UK access to being able to do that work. If the winds change, it's the 20th of January 2025, if the winds change and the new administration and suddenly outside influence in US tech is being attacked, that will be the first capacity that bites the dust. Because they won't have access to the data, they won't have access to the models, they won't have access to the companies, they won't have access to the teams at NIST who are doing the work on the US side.
Emily M. Bender:And this work, I just have to say, most of the AI safety work is nonsense from my perspective, right? So these folks are proud of the AI Safety Institute getting early access to o3 and whatnot and getting to do these, like, testing for capabilities, but it's all this, like, based on this existential risk. Like, you know, could it cause biological disaster, and so on, and like, could it, there was even a thing in here about it doing, or something I clicked through to, um, could the AI models do AI research? Like, this is the whole notion with the accelerationism and the, you know, that point of no return where it takes off. I can understand that this is like a point of pride for these people, but, for the, as in like, you know, the author of this report, but it's not actually any work that is going to be, contributing in a meaningful way to the quality of life anywhere?
Gina Neff:Well, as a social scientist, I care really deeply about what people do in interaction and technical systems. And that's a piece that's been very hard for, you know, these, the AI safety institutes to imagine. So the UK one just released, um, ask me how I know, UK just released, um, a call for proposals on socio-technical risks. They were like, oh, wait a second, we've just done this whole thing on risks. What about people? And I was like, yes, let me teach you about people. Again, ask me how I know. Um, so I'm, I'm, I'm clearly champing at the bit to say something that I can't say yet, but watch this space.
Emily M. Bender:All right.
Alex Hanna:Definitely, definitely will do. Um, let's, we should get into some of these points because we're, we don't have a lot of time. So like some of the, some of the--cat!
Emily M. Bender:Cat.
Alex Hanna:Some of the biggest, the biggest hits, um, some of these are just really, yeah, they're just truly ridiculous. So, um, and kind of making very big claims and trying, and they're kind of looking over the American shoulder to some degree, but also in a way that they're not going to be able to, um, fulfill. So points two and three. Um, so this is "Expand the capacity of, of the AI research, uh, resource or AIRR by at least 20x by 2030, starting within six months." So this looks a lot like the, the N-A-I-R-R or the NAIRR Project, which the, um, which is kind of a big multi-agency partnership within the US, but also is a big private part, public-private partnership, um, which is kind of, it's a very, it's very funny just because that project itself is, there's, very little public infrastructure for running these huge workloads. And so it's really like begging Amazon and Microsoft for compute. Um, so it's got this, you know, so it's like, well, we're going to do this 20 times. I'm like, okay. Um, and then there's, then there's a mention of procurement. Um, so given, uh, we're going to, uh, "Given the trends in hardware performance, this would not mean a 20x increase in investment if the government procures smartly."
Emily M. Bender:Should we read the footnotes?
Alex Hanna:Yeah, I gotta read the footnote. I didn't get into the footnote. The footnote, uh, so, um, so there's an estimate here, uh, "Assuming trends in hardware performance continue, by 2030, each pound spent on GPUs will buy eight times more flop and require four times less power. Therefore, expanding AIRR by 20x would require much less than a 20x increase in investment."
Emily M. Bender:So basically, compute costs come down, and we're going to count on that continuing, and not only the cost of the chips, but also the power required to run them, is going to go down. Which has been true for a while, so it might still be true, but yeah.
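[Editor's note: the footnote's arithmetic is simple enough to check. A minimal sketch of the reasoning, under the report's stated assumptions (the 8x figure is the footnote's, not a claim from the episode): if each pound buys eight times more FLOP by 2030, then a 20x capacity expansion implies only about a 2.5x increase in spending, provided the trend actually holds.]

```python
# Rough sketch of the report footnote's arithmetic (an illustration of its
# stated assumptions, not independent figures): if hardware price-performance
# improves 8x by 2030, a 20x capacity expansion implies roughly a
# 20 / 8 = 2.5x increase in investment -- far less than 20x, as the
# footnote claims, but entirely contingent on the trend continuing.
flop_per_pound_gain = 8   # footnote's assumed hardware improvement by 2030
target_expansion = 20     # desired AIRR capacity multiplier

investment_multiplier = target_expansion / flop_per_pound_gain
print(investment_multiplier)  # 2.5
```
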
Alex Hanna:Yeah. And then the, um, and then the, you know, the next point about basically, point 3, which is to have, um, AIRR program directors and kind of modeling them off of DARPA, which is, uh, I kind of chuckled at that as well.
Emily M. Bender:Yes. Yeah. So, Tina, do you have another favorite in this first 10 to 20?
Gina Neff:Um, I'm going to scroll down. Um, so the growth zones, if we go to number four, "Establish AI growth zones." Um, so this is, this is really, I mean, this is kind of a beauty because it brings in all the lovely quirkiness of what is so messed up about the UK. Um, the UK's much more centralized on one level in terms of power than say the United States, where states have extraordinary power. But local authorities have extraordinary power over planning, and there has, at least in England, been a real conservatism over preserving ye olde England, right? Like, let's make sure that we, um, preserve our ancient cities and, um, and don't expand. And that's part of the crunch on housing. So this point 4 is not only is AI going to solve magically the problems of the public sector, not only is AI going to jump start growth, but point 4 is it's going to get rid of our planning problems. Like this, this backlog of planning applications, so that in cities like Oxford and cities like Cambridge, you can't do new construction on the outer rings without going through years and years and years of struggle. These AI growth zones will suddenly bring this kind of federal, federal, bring this national power to bear on these, on these, what have been local decisions. This is again, fantasy land, because now they're trying to claw back power that has been devolved to regions. But they can't do it by simply saying, okay, this is part of our AI plan, so of course you should just, you know, approve these data centers to be in your community, when communities are needing to make these choices around power, electricity, and water. So, you know, again, it's like growth for whom, growth where. And the other idea is, you know, siting, taking some of the poor cities in the north and siting AI opportunity growth zones near them isn't again going to magically create the kind of Richard Florida creative class, technical class in those cities.
That's kind of what this paragraph is saying that's going to happen.
Alex Hanna:Yeah. 100%.
Emily M. Bender:All right. Let's, uh, let's keep moving quickly. I've got one all the way down in 15. Um, so this is now in 1.3, "Training, attracting, and retaining the next generation of AI scientists and founders." And as an educator working in higher education, I was just so enraged to read this thing. Point 15, "Support higher education institutions to increase the numbers of AI graduates and teach industry relevant skills," including here, "Supporting universities to develop new courses co-designed with industry." That's not what higher education is for. And we have suffered so much already in the US, and I don't doubt the UK, as like everything becomes, well, you know, what's the ROI of doing a degree at your institution? Um, so this was infuriating to me.
Alex Hanna:Well, they, they cite here, uh, University of Waterloo. And as I was saying this, um, with my girlfriend who went to University of Waterloo, um, and they have this system of co-ops in Canada. And so many, many, many universities have this kind of system where you do co-ops, you are situated within a company, uh, and that is, you know, a huge draw for a lot of STEM programs. Right. So it's not about the kind of intrinsic learning aspect of it, but it's the kind of like, you're going to get placed at a, you know, at a company and that's going to be your, your course, you know.
Emily M. Bender:That's going to be your education. Go do some work.
Alex Hanna:Yeah. Go, go, go jump into the labor market. Uh, and, and while we're here, number 16, which, uh, set me into a mild rage was the, "Increase the diversity of the talent pool." They say "Only 22 percent of people working in AI and data science are women. Achieving parity would mean thousands of additional workers." Which is very funny because it's saying, let's have, um, let's have women join the labor force. And it's very funny. I'm thinking to like prior histories of the UK, uh, uh--
Emily M. Bender:Like Mar Hicks'?
Alex Hanna:Mar Hicks's work and Janet Abbate's work on like the computing in the UK and kind of enlisting in like, oh, interesting how history repeats itself. Anyways, um, uh, they're, so "Government should build on this investment to promote diversity throughout the education pipeline. Interventions must be tailored. There is no one size fits all approach. Hackathons and competitions in schools have proven effective at getting overlooked groups into cyber and so should be considered for AI." First off, "cyber," ridiculous word. Um, and I kind of like, and so then they cite this Center for Security and Emerging Technology, and some piece on US high school cyber security competitions. Um, which I haven't looked into. Uh, yeah, this is from Center for Security and Emerging Technology at Georgetown. And I'm like, is this, uh, I'm like, hackathons are notoriously not really about diversity.
Gina Neff:Well, I, I had a PhD student, I worked with a PhD student, I should say, Siân Brooke, who did a brilliant thesis; the gendered nature of hackathons was one of her papers. And, um, you know, if you look at the ways women get into data science, um, it ain't hackathons.
Alex Hanna:Yeah.
Gina Neff:Yeah. I, I wanna touch on point 24. I wanna make sure we get down, I mean, I could say the howler of like, create more Rhodes Scholars just for AI.
Alex Hanna:Yes.
Gina Neff:It's a very, it's, you know, a hundred, right? It was very clear, like "have 100 fellowships on the level of Fulbright, Rhodes--" um, I'm like, you mean like the Gates Fellowship at Cambridge? You mean like the Schmidt Fellows they're having? Like, these are really highly bespoke, very tailored. Yeah. Um, I'm all for investing in Oxbridge, but, listen guys, we got a lot of work to do. That's not, you know, that's, that's with my guys in the trade union and gals in the trade unions. Like, let's help, let's help upskill them before we're, um, doing that. So, point 24. Um, this is something that I think outside the UK you might skim over, but for me this is, this is a howler, so. You know, for Americans, what's the first thing you think of when you think of Britain? Um, tell me it's the BBC, right? Tell me it's, um, um, you know, all those great British programs where, you know, people are being murdered by the river, or they're, uh, the aristocrats--
Emily M. Bender:Let me tell you, the English countryside, clearly very dangerous.
Gina Neff:Very dangerous! You know, the aristocrats with their endless dinners and balls. You know, you think about the incredible culture that, that, um, and the incredible creative industries that the UK is very strong on. "Reform the text and data mining regime to make it competitive. Our uncertainty around intellectual property is hindering innovation." Guys, this is saying, guys, listen, if we're going to have AI, we have to kill our thriving creative industries. It's subtle, but we absolutely have to call this one out. We, our team has a report coming out next month on what the AI, genAI and IP policy should be for the UK and the text and data mining regime. Mark my words, this is going to be a huge political fight in this country in the next year.
Alex Hanna:There had been that, that, um, that note, um, in, forgive me, I've completely forgotten all the details are on it, but about the kind of attempts to dramatically change copyright in the UK, correct?
Gina Neff:That's right. That's right.
Alex Hanna:And soliciting public comment.
Gina Neff:That's right. And so the text and data mining, um, um, would be an opt-out program, right? So if you wanted to assert your copyright claims, um, away from being able to be used, you would have to say, excuse me, I'm going to register my complaint. Um, and that's no way to treat small creatives. Small creative producers.
Emily M. Bender:All right. I got to do one more and then I think I have to take us to Fresh AI Hell. But there's lots more in here, listeners, you know, go apply your own Mystery AI Hype Theater 3000 methodology to this document. You will find it is a rich text. This is under section 3, "Secure our future with homegrown AI," and there's just this ridiculous non sequitur in here, so the second paragraph reads, "AI systems are increasingly matching or surpassing humans across a range of tasks." Ugh. "Today's AI systems have many limitations, but industry is investing at a scale that assumes capabilities will continue to grow rapidly. Frontier models in 2024 are trained with 10,000x more computing power than in 2019, and we are likely to see a similar rate of growth by 2029. If progress continues at the rate of the last five years, by 2029 we can expect AI to be a dominant factor in economic performance and national security." So basically he's confounding investment going into this with a metric of progress of the systems. And that's nonsense. Like the, the fact, so, I mean, it's sort of wrapped up. Industry is investing, right? That's so it's like, he's saying, okay, I'm not directly making this claim, I'm just looking at the industry investment. But there's no reason to believe that the, the capital pouring into this is driven by anything other than hype. It's not driven by measurable progress, because they don't even know how to measure the progress. And as we said at the beginning of the show, it's also not one thing.
Gina Neff:That's right. That's right. And, and we're not really sure, particularly in large language models, we, we're not seeing what the clear winning use cases are yet, right? I mean, they're not, you know, I'm not, I'm not going to say throw it all out. And I certainly admire the work of both of you. I'm not saying throw it all out, but I am so inspired, Emily, that you helped call it, what was that five years ago? You said, listen, this is potentially asymptotic growth. That you will get to a limit of what these things can do. Are you ready? Are you prepared to go all in, chips all in, on this bet, when you know there's going to be a hard limit to the capacity? And paragraphs like the one you just read don't acknowledge that at all.
Emily M. Bender:No, they don't. Not at all. All right. Well, on that cheery note, um, Alex, musical or non musical this time?
Alex Hanna:Uh, I think we did musical last time and, uh, I was pretty proud of my rhymes. So let's do non musical because I don't want to test my luck.
Emily M. Bender:Okay. So you are, um, taking questions at a town hall meeting in AI Hell where someone is proposing an AI hype growth zone right on top of the local park.
Alex Hanna:Wait, what does a park look like in AI Hell?
Emily M. Bender:Up to you.
Alex Hanna:Okay, so, okay, there, from the demon in the green overalls. Yeah. Yeah. Yeah. Yeah. I, I, I'm growing in my park. I've got some, some pole beans and I've got some flaming hot carrots, and this growth zone is just going to completely decimate this. Okay. Yeah. Yeah, we've, we've heard your concerns. We're gonna build a rooftop garden that's sponsored by, uh, by Salesforce, and this is inspired by, like, the weird Salesforce park in San Francisco. Now you're going to be able to take and grow a whole brand new crop of genetically modified, uh, flaming hot carrots, and don't worry about it. That's, that's all I got.
Emily M. Bender:Thank you. I would love to be a fly on the wall at the town hall meeting that you just created. Okay. So very rapidly. First in the UK, this was published today, um, January 20th, 2025, in the Guardian. Um, the journalists are Jessica Elgot and Robert Booth, and the headline is "AI Tool Can Give Ministers, Quote, 'Vibe Check' on Whether MPs Will Like Policies."
Alex Hanna:Yes.
Gina Neff:So, right, it's a little crazy. Um, the government has been working on gen AI tools that are for government documents, including one called Red Box. That's a reference to the secret red boxes that hold ministerial papers and keep them safe as they shuttle between meetings. Um, so, um, supposedly fit for government tools that are safe, that are running on, um, internal systems instead of, um, sharing things externally. Um, does it work? I don't know. Is it effective? I don't know. Have they done safety checks on it? I don't know. I, I only know the questions that the teams making these things have been asking around and the kind of things that people have been talking about and--
Emily M. Bender:Yeah, I mean, anything like this should be thoroughly evaluated before being put into use, and that evaluation should be public and transparent. If we don't have that, then there's no reason to believe that it works.
Alex Hanna:For me, it's just like that, the Dril tweet that's, you know, turning to the audience and turning the racism dial up and be like eh? Eh? This is how I see it going.
Emily M. Bender:All right. Moving down under. This is NPR, December 19th, 2024. "How will Australia's under 16 social media ban work? We asked the law's enforcer." And the most hilarious part of this is a quote, um, from the person they're interviewing, um, that, uh, about what technology you might use to verify someone's age. "I met with an age assurance provider last week in Washington, D.C., who is using an AI based system that looks at hand movements and has a 99 percent success rate." And the interviewer says, "Wait, what? Using hand movements to confirm someone's age?" Also, me, wait, what? 99 percent success rate? Doubt it. That's nonsense. I'm going fast, and Alex is going to give you the next one here.
Alex Hanna:Yeah, I just, about the hand movements, I just, I don't know, something about that is queerphobic too. Anyways, "BBC complains to Apple over misleading shooting headline." Uh, and there's a picture of, uh, uh, Luigi, uh, uh--
Emily M. Bender:Mangione.
Alex Hanna:Mangione. Uh. Um, so yelling and, "The BBC complained to Apple after the tech giant's new iPhone feature generated a false AI headline about a high profile murder in the US." And I believe it said something like he had gotten shot or something. Can you scroll down a little bit? What does it say here? Yeah, here. Yeah, it says, "Luigi Mangione shoots himself" as the first headline, and just like, it's just like, oh my gosh.
Gina Neff:Um, there were so many howlers in this story, and this, and this was national conversation. I was on national news, um, last week talking about this story. Um, it's incredible, really, that, um, Apple Intelligence was doing notifications in the UK based on BBC headlines. Great. Everybody trusts the BBC, except those news alerts were flat out wrong. Listing wrong people who were winning tennis matches, wrong headlines, getting things absolutely wrong, um, listing, this was, this was the greatest insult, listing who had won that evening of Dancing with the Stars before the episode aired.
Alex Hanna:Oh wow.
Gina Neff:I mean, and just making it up, like people were, you don't mess with, you don't mess with British television.
Alex Hanna:Yeah, you don't, you can't, you can't give away who won the reality show. Like that's, that's cardinal sin.
Gina Neff:That's right. And so Apple Intelligence pulled the, pulled the product. I don't know if they did it globally, but they pulled, they pulled the product from the UK.
Emily M. Bender:So you were saying before that we don't yet have a use case for synthetic text. And my prediction is that we will probably never have very many. Like there's the, there's the thing with the fake granny that they direct the scam calls to. It's sort of a maybe okay use case for synthetic text, but headlines, absolutely not a use case. And the fact that Apple didn't realize that. Um, okay, Alex, I want you to do this one.
Alex Hanna:Okay. So this is, this is from Bluesky. This is someone named, um, RockemSockemRobots--
Emily M. Bender:Throwback.
Alex Hanna:--Tenured Hickey. Uh, I don't know. Uh, anyways, "TLDR, AI CEO doesn't understand why daughter turned the AI features of the AI toy off. Because he doesn't understand how play works, his entire business model is being confronted in the story by the innocence of a child, and he lacks the emotional intelligence to see it." So, clicking through, there's three screenshots. The first one, um, is, I'm not gonna try to read this, it's a huge blue check message, and it's effectively this guy buys this, um, this tool, um, or this, this toy, uh, and says, you know, there's, there's this, um, uh, kid, plays it and chatted with it and then turned it off and doesn't want to talk to it anymore. And then he like tries to nudge her to like play with it more. It's like, why are you doing this? Like, is it some technical issue? Is it still uncanny? And it's just, yeah, the fundamental misunderstanding of what play is and what imagination is. Um, and it's really, yeah. And it's just so, it's so cursed and it is, so it is a window a bit into the ideology, um, of, of, uh, of some of these AI folks. It's like, you don't understand why, like, why someone wouldn't want, like, their kind of, their imagination scoped by synthetic text. Um, yeah.
Emily M. Bender:But your six year old does. She gets it.
Alex Hanna:Yeah. This is also a soft shout out to Imagination by Ruha Benjamin, which talks a lot about imagination. And I know Al, uh, Ali, uh, Alkhatib is doing a reading group for the book. So.
Emily M. Bender:Yeah. Share a link for that.
Alex Hanna:So check out for that. Check out his, uh, his, uh, his Bluesky.
Emily M. Bender:So we're running out of time. I'm going to do just one last one here, speaking of books, and save the other two for a future episode. Um, so this is again on Bluesky, um, and the, uh, the inner, uh, post is by someone named Bracidas, um, and their text is just, "What?" And then it's a screencap of something that says, "Turn hard books into easy books with Magibook." And then a star emoji, not the usual AI one. "Maximize your reading potential and avoid difficult language today." And then it's like The Great Gatsby turned from hard book into easy book. And uh, AjaxSinger.Bluesky.Social says, "Really love that first line from Ego and Racism by Jane Austen. Quote, 'Everybody knows if a guy is single and rich, he's looking to get hitched.'" (laughter) Beautiful. All right.
Gina Neff:My son's doing Pride and Prejudice for his national exams this year, and so I'm sure he would, he would love that.
Emily M. Bender:Excellent. Please share it with him.
Alex Hanna:Oh my gosh.
Emily M. Bender:So we're at time. That's it for this week. Gina Neff is Executive Director of the Minderoo Center for Technology and Democracy at the University of Cambridge and a professor of Responsible AI at Queen Mary University of London. Thank you so much for joining us and sharing your wisdom today, Gina.
Gina Neff:Thank you both. It's been fun.
Alex Hanna:Thank you. It's been a pleasure. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Rate and review us on Apple Podcasts and Spotify. Pre-order The AI Con at TheCon.AI or wherever you get your books. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown or donate to DAIR at DAIR-Institute.Org. That's D A I R hyphen Institute dot O R G.
Emily M. Bender:Find all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore Institute. I'm Emily M. Bender.
Alex Hanna:And I'm Alex Hanna. Stay out of AI hell, y'all.