
Mystery AI Hype Theater 3000
AI Hell in a Handbasket, 2025.04.14
It's been four months since we last cleared the backlog of Fresh AI Hell, and the bullshit is coming in almost too fast to keep up with. But between a page full of awkward unicorns and a seeming slowdown in data center demand, Alex and Emily have more good news than usual to accompany this round of catharsis.
AI Hell:
LLM processing like human language processing (not)
Sébastien Bubeck says predictions in "sparks" paper have already come true
WIRED puff piece on the Amodeis
Foundation agents & leaning in to the computational metaphor (Fig 1, p14)
Chaser: Trying to recreate the GPT unicorn
The WSJ has an AI bot for all your tax questions
AOL.com uses autogenerated captions about attempted murder
AI coding tools fix bugs by adding bugs
"We teach AGI to think, so you don't have to"
- (from: Turing.com)
MAGA/DOGE paints teachers as glorified babysitters in push for AI
Chaser: How we are NOT using AI in the classroom
AI benchmarks are self-promoting trash — but regulators keep using them
DOGE is pushing AI tool created as "sandbox" for federal testing
"Psychological profiling" based on social media
"I was not informed that Microsoft would sell my work to the Israeli military and government"
Pulling back on data centers, Microsoft edition
Abandoned data centers, China edition
Bill Gates: 2 day workweek coming thanks to AI...replacing doctors and teachers??
Chaser: Tesla glue fail schadenfreude
Chaser: Let's talk about the genie trope
Chaser: We finally met!!!
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender:Along the way, we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington.
Alex Hanna:I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 55, which we're recording on April 7th, 2025. And the time has come for another whiplash-filled race through the darkest reaches of AI Hell. Unfortunately, there's a lot out there, and it's not just because Elon Musk is using LLMs to purge the US government workforce and undermine the very foundations of democracy--though there's that too.
Emily M. Bender:Just speaking for myself and Alex, these episodes are always cathartic. They're a chance to yell about all the things that have been piling up in our group chat over the last few months, and then continue on with our deeper dives. Today's platter is piled high. We have more promises that AGI is just around the corner, more pollution of the information ecosystem, and what ChatGPT could possibly have to do with the United States' disastrous new tariffs. And we like to say our slogan is ridicule as praxis, and in times like these, we could all use a good laugh.
Alex Hanna:But before we get too deep on the bad news, there's also some good news. Our book, The AI Con, is coming out in just a few short weeks from this episode's release date. Pre-order it now so you can have it the instant it's available. That's May 13th in the US and affiliated markets, and May 22nd for the UK and affiliated markets.
Emily M. Bender:And we're doing a little tour to promote it, talk about some of our most important points, and meet our fellows in the fight against AI hype. Starting with a virtual event with the Distributed AI Research Institute on May 8th. Mark your calendars and visit TheCon.AI for a full list. All right. And with that, let's get on with the hell, which means--
Alex Hanna:So much hell.
Emily M. Bender:So much hell. All right. I am going to share the first batch in my screen here, and I have helpfully numbered these so that I can find them quickly. The first one is all about AGI, and I guess this is a, this is a language one, Alex, so I'm gonna take this one. This is on the Google Research blog from March 21st, 2025 by Mariano Schain, software engineer and Ariel Goldstein, visiting researcher at Google Research. The title is "Deciphering Language Processing in the Human Brain through LLM Representations," and apparently what they did was they did some uh, you know, brain imaging stuff and they did some looking inside the transformer stuff and they said, Hey, they match. And I'm just so mad about this graphic here where we have a speech input into the listener's brain, but also modeling speech processing. And then they're saying, look, we have this kind of mapping that came out of the LLM and this kind of mapping that came out of the brain imaging system. And so same.
Alex Hanna:Oh gosh, this is just language area. This area-- yeah, this is, this is wild. So, yeah, describing, it's just like these arrows pointing to different types of pieces to the brain and suggesting that they're similar in architecture. Um, that's really frustrating.
Emily M. Bender:It's, yeah. And you know, there may well be some decent scientific questions to ask here. It might be that information about the distribution of word forms in texts is a strong enough signal of something that it maps onto something about, let's say, retrieval of words in the process of listening. Um, so it could be that there's something here, but to say this is the same, it's just a huge overclaim.
Alex Hanna:Yeah. Yeah.
Emily M. Bender:All right, next.
Alex Hanna:Next story. This is on um, X.com, um née Twitter. Um, and it is Jack Clark who is, um, you know, used to be at OpenAI, now is at Anthropic. Very annoying policy person. Um, he says, "I find myself increasingly convinced that powerful AI systems are going to arrive in the next few years, likely during this presidential administration. I think this will lead, I think this will lead to me advocating for different things in policy. Published some thoughts in Import AI 405." Which is his newsletter, and I mean, if you want to click through just the initial, like his thing, which is, um, can we get to this, a screenshot. So this is his first idea, which is, um, "Import A-Idea", which is the, uh, kind of the, I guess the A level idea. In italics, "What if we're right about AI timelines? What if we're wrong?" And so basically saying that if we're right, in the, in the long term, we need a third party measurement and evaluation ecosystem, which I agree that would be good, 'cause evaluation is trash. Um, "Encouraging governments to invest in further monitoring of the economy, such as they have visibility on AI driven changes." Like, I don't know if it's gonna be that, that huge. Um, "Advocating for investments in chip manufacturing, electricity generation and so on." Ugggh, you're, okay, now we're going into the, the, the, the hype lead in, um, place. And then "Pushing on the importance of making deeper investments in securing frontier AI developers." So now we're just, like, now in any, basically goes, goes fully into the securitization part of it.
Emily M. Bender:Wait a minute. Securing AI developers?
Alex Hanna:Yeah, so like--
Emily M. Bender:Like securing people?
Alex Hanna:So basically, but his argument he is basically making in the next one is saying like, what we need to do, um, because AGI is gonna be around the corner, is that we need to basically like ensure that there's no hacking or insider threats within those labs. So it's a full, like national security, sort of like-- yeah, it's, it's some, it's some bad stuff. It's, it's, it's pure hype. Um, Jack Clark is, is, is a big hyper and used to be, I think more mellow, but is like completely just gone down the hyper train. So.
Emily M. Bender:Yeah. I like this comment from Abstract Tesseract: "Every time an AI booster says, 'Find myself increasingly convinced that--' I want to respond, 'Uh, that sounds like a you problem. Can you please not make it an us problem?'"
Alex Hanna:Yeah, yeah. Uh, for sure.
Emily M. Bender:Ahhh, all right, the other thing I wanna say about this one, hold on is you cannot in March of 2025 just refer to the utter catastrophe going on in the White House as 'this presidential administration' and just like blow right by that. That's--
Alex Hanna:Well, it's also like, I mean it's, it's, you know, he's a policy guy. He's like, whatever kind of administration's gonna help prioritize a certain kind of vision of AI policy that Anthropic is very invested in. Right?
Emily M. Bender:Yeah. Okay. Next. Ha. Sébastien Bubeck. Okay, so this was Bubeck's post um, just before the great chat bot debate. So the debate took place on March 25th, I think. Um, and ahead of time Bubeck posted, I'm-- This is LinkedIn-- "I'm looking forward to celebrate tomorrow the two years of the 'sparks' paper with a debate at the Computer History Museum in Mountain View on 'Do LLMs really understand?' So much has happened in the last two years, I think it's fair to say that almost all the predictions in 'sparks' came true." And, and then he's got, oh no, actually it's, it's even worse. So, "We now routinely have articles like the following, by authors/fields who were not quite convinced two years ago: 'Powerful AI is coming. We're not ready,' by Kevin Roose. 'The government knows AGI is coming,' by Ezra Klein. 'OpenAI's metafictional short story about grief is beautiful and moving' by Jeanette Winterson. And the coach for the US Math Olympiad Team, Po-Shen Loh, 'My main work nowadays is to build and scale up a community of people, through education, to face the challenges of the AI age together. I thought I had more years. Now we have to move faster.' Where will we be two years from now?" And--
Alex Hanna:Good Lord.
Emily M. Bender:So first of all, if you go back to the "sparks" paper, it's not actually making predictions. Right?
Alex Hanna:Yeah. Yeah. There, no, yeah. We were talking about this in the group chat with, with with Meg Mitchell, who was on last, last episode too. Yeah, she just like looked, we're like, did he make predictions? Yeah. There's no predictions that are made. And, uh, and uh, our producer Friday, uh, sorry, not Friday. That's her derby name. Christie. Um said in, in our group chat, "*Slaps the top of sparks paper* This baby can hold so much elite capture." Yeah.
Emily M. Bender:Yeah. It's like Kevin Roose and Ezra Klein talking about AGI or AI coming is not evidence of anything. Except maybe their cred, credulousness, credulosity? Something.
Alex Hanna:Sure. It's also the fact that, yeah, like, yeah, I mean, don't get me started on Ezra Klein and his new book and how it's titled Abundance and it's about like government waste. And you're like, not now, dude. Like, what are you, like, what are you doing?
Emily M. Bender:Yeah, yeah, exactly. Uh, all right, next. You wanna take the lead on this one?
Alex Hanna:Yeah. So talk about Anthropic some more. So there's these like very, um, like Sears photo studio pictures of, uh, Daniela and Dario Amodei of Anthropic. And, and it's this very glowing, um, although it's not, they're not like looking down the barrel, they're looking, but the backgrounds are like, I went to Sears with my mom and she made me wear a bow tie. And so then like the, uh, so below them the title, this is from Wired -- and Wired's doing a lot of good stuff these days, but also a lot of shit stuff. And this is--
Emily M. Bender:This is a miss. Yeah.
Alex Hanna:This is by, this is by Steven Levy who, um, is, you know, was editor in chief, I think has a high, uh, position at Wired, is like one of the, has been a big technol, technological optimist for a long time. Anyways, the title is, "If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born," uh, and then the, the subhead is, "The brother goes on vision quests, the sister is a former English major. Together, they defected from OpenAI, started Anthropic, and built, they say, AI's most upstanding citizen, Claude." Um, I don't even want to get into this. This is--
Emily M. Bender:We don't need to go into the body of the article, but there's so much in the head and subhead here. So, "AI geniuses," by which they mean artificial intelligence systems, not the people building them, "could be born." So something's giving birth to either the nation of these things or the things, which is weird. "Defecting from OpenAI" makes it sound like it's a, um, you know, totalitarian government, um, and--
Alex Hanna:They wish I, I guess.
Emily M. Bender:Yeah. And then referring to Claude as a citizen is just so infuriating.
Alex Hanna:Yeah. There's a lot of, there's a lot of trash here. SJayLett in the chat goes, "Steven Levy is not where you go for tech journalism." Agreed.
Emily M. Bender:Yeah, oof. Okay. Next. This one I've scrolled down. This is an arXiv paper. Um, and we'll go up to the top to get the title of it in a second. But first I just want everyone to admire this horrific use of a, uh, brain diagram. Um, so this is Figure 1.1 in this paper, uh, and the caption is "Illustration of key human brain functionalities grouped by major brain regions. Annotated according to their current exploration level in AI research. This figure highlights existing achievements, gaps and potential opportunities for advancing artificial intelligence towards more comprehensive brain inspired capabilities."
Alex Hanna:Yeah, this is, this is such a terrible, you know, this has got a brain and it's got three levels. L1, L2, L3. So L1, "Well developed in current AI." L2, "Moderately explored with partial progress." And L3, "Rarely explored; significant room for research." And it's like pointing. And so I'm like, well, what's in the, the brainstem? So like reflexive responses, L1. So like, and will an LLM tell you that you need to pee or breathe? Like I am wondering what this means.
Emily M. Bender:Oh and language comprehension and production is considered well developed in current AI. Production, yes, as in we're swimming in synthetic text, but comprehension absolutely not. Also facial expression processing?
Alex Hanna:I was gonna remark on that facial expression part. Oh, you mean the, you mean like phren-- the, um, physi, physiognomy? And--
Emily M. Bender:Yeah. Physiognomy. Right.
Alex Hanna:Physiognomy and like emotion recognition. Like, hmm, okay. Yeah.
Emily M. Bender:But ugh. And then the moderately, well one is also a little bit annoying, so, uh, skill learning, apparently. Attention, because they called that one thing attention. I'm sort of surprised they didn't put this one in green. All right, so we need to at least give credit where credit is due on this paper. Um, so the title of the paper is "Advances and Challenges in Foundation Agents." So foundation models wasn't good enough. Now we have foundation agents. "From brain inspired intelligence to evolutionary collaborative and safe systems." This is an arXiv pre-print. I hate calling it pre-print because it's not like it's actually gonna get published somewhere. Um, from March 31st, 2025. Long list of authors, um, from a long list of, um, uh, affiliations. The, oh, interesting. The order of affiliations doesn't match the order of authors. That's kind of weird.
Alex Hanna:It's, yeah, it's, they're, I don't know why. So MetaGPT is first and then, but the first author is affiliated with, uh, University of Montréal-Mila, and number 20, which is, uh, uh, CIFAR AI Chair. So this person actually is a chair position at, in CIFAR which is, uh, unfortunate. But also common.
Emily M. Bender:Yeah. Huh. Okay. So now this brings us to our first chaser. This is something that our producer Christie showed us and it was sort of in reaction to, uh, Bubeck talking up the "draw a unicorn" thing as a really important moment with GPT-4. And apparently there's a project, um, where if we go to the blog post here, um. Adam K. Dean, um, GPT Unicorn, "A daily exploration of GPT-4's image generation capabilities." And so this is just every day instructing GPT-4 to draw a unicorn. What do you see, Alex?
Alex Hanna:Well, this is interesting. So this is like kind of a, it seems so random, right?'cause it, it differs all the way from, uh oh, this one on like March 26th kind of looks cute. It's like this little smiley face on a piece of cotton looking blurb. And then, but it's so different. Like--
Emily M. Bender:Oh, this the 27th this one here?
Alex Hanna:Yeah, the 27th and then the 30th is like an egg upon another egg upon another egg. So this is this, I hadn't seen this. How weird.
Emily M. Bender:Okay. It's very, very odd. Um, and I mean, it's, it's, I think it's, it's a fun project and some of them are just like blobs and some of them are like things that look like maybe unicorn pieces if you really add a lot of interpretation. This one looks like a bird wearing glasses? Anyway, that takes us to the end of the first batch. And now I have to get us to our next one here. I'm gonna set it up so that I can see it right away when, when you all can. This one is education and the information ecosystem. Um. And here it comes, and we'll get into the first one. Okay. Um, this one, oh, did I actually finally get through with Archive.is? I did. Okay. Uh, this is by the Wall Street Journal staff on March 20th, 2025. "Got tax questions? The Wall Street Journal's AI bot has answers. The Wall Street Journal's tax assistant can help you find what you need from our articles and IRS guides."
Alex Hanna:Bad idea.
Emily M. Bender:Yeah, exactly. Sort of thing where you actually need some answers. Um, and somewhere there's a, there's a disclaimer that says, you know, this might be wrong, of course.
Alex Hanna:Yeah. At the end, "The Journal's tax reporters and editors have reviewed the system's knowledge--" Which I'm assuming means the training data. "--but don't verify each response." Right. "Users should consult tax professionals for advice." Well, okay.
Emily M. Bender:And, "Lars is instructed to provide factual information and direct users to relevant resources, not to offer personalized tax advice." 'Lars is instructed to' is sort of a pointer forward to something we're gonna get to later.
Alex Hanna:Yeah, totally.
Emily M. Bender:All right, so this is just the same thing. Um.
Alex Hanna:All right. This is, uh, from Futurism by Joe Wilkins. So, "Man annoyed when ChatGPT tells users he murdered his children in cold blood." Hate when that happens. And the sticker here is "Bad dad." Which is nice. And then there's an image of like a dad looking man, white man, uh, pointing to himself and I'm assuming a wife with crossed arms and two horrified looking children. Um, so let's see.
Emily M. Bender:This starts off annoyingly.
Alex Hanna:Yeah, very. Yeah. It says, "When it comes to the life of tech, generative AI is still just an infant." Very annoying, uh, parallelism to children, uh, which we've talked about a lot. "Though we've seen tons of AI hype, even though most advanced models are still prone to wild hallucinations, like lying about medical records or writing research reports based on rumors." Um, yeah. And then there's some reporting and it says, "According to TechCrunch, ChatGPT told the man he had murdered two of his sons and tried to kill a third. Though Holmen--" Which is, this is the person's name, um, "--didn't know, he had apparently spent the past 21 years in prison for his crimes, or at least according to the chat bot." Okay.
Emily M. Bender:Yeah. So apparently he's, he's trying to file a complaint on, with the Norwegian Data Protection Authority, um, which is interesting, like the, the request, sorry, I'm trying to get rid of all of these moving ads. Um. There's somewhere in here where it's like, uh, trying to get the OpenAI to take this out. Yes. Um, "asking the agency to order OpenAI to delete the defamatory output and fine tune its model to eliminate inaccurate results." And then the, uh, journalist adds, "a nearly impossible task." I would say an actually impossible task. What's interesting here is like, that's not, and on one level, that's not a reasonable thing to ask because it's not possible. But on the other hand, if it's not possible, then maybe these things shouldn't be out there. Right, so it'd be great if the Norwegian Data Protection Authority can do this. Um, and there's a great thing in the chat, Abstract Tesseract again: "The Nordic noir no one asked for." All right. Did you, have you seen this one, Alex?
Alex Hanna:No, I didn't see this one. Yeah. You wanna say it? Wanna read? Read it.
Emily M. Bender:Yeah. So this is uh, 404 Media, Jason Koebler, March 26th, 2025. "AOL's AI image captions terribly describe attempted murder." Um, and one example here is, "A couple smiling on a beach at sunset associated with Hawaii doctor incident." So. Apparently there's a, a story about a, um, a doctor. So the article, "Top doctor allegedly tried pushing wife off Hawaii beauty spot in wild homicide attempt," and then there's some uh, pictures. This was syndicated on the website Bored Panda, which okay. Um, and the Bored Panda version, there's no image captions, but on the AOL.com version of the article, um, there's these weird captions like, "A man smiling in a park setting with a dog, related to a top doctor news story." And first of all, AOL.com still exists?
Alex Hanna:I mean, I'm, I think AOL as the formerly common used service had been sort of reimagined and kind of sold for various parts and now I guess as a landing spot for people trying to get clicks, like a lot of websites are, right?
Emily M. Bender:Yeah.
Alex Hanna:Yeah.
Emily M. Bender:Um, and so this is reminding me of what Meg Mitchell calls the "everything is awesome" problem in image captioning. Where the, the underlying data, like the things that people post pictures of online, and then that serves the basis of the data sets that are scraped, has a lot of people talking about how great things are. And so that's what comes out of these systems.
Alex Hanna:Mm-hmm.
Emily M. Bender:Okay. I'm give you this next one, "Pivot to AI."
Alex Hanna:So this is from Pivot to AI and the folks, David Gerard, um, posts a lot of, um stuff that is very similar to our kind of all, all Hell stuff. But this is, um, the headline here, "AI coding tools quote, 'fix bugs' by adding bugs," and the sticker is "code completion." And so, um, "LLMs regurgitate a version of whatever they're trained on. If you train them on typically buggy code, that suggests you'll get back typically buggy code." Uh, so this is a preprint. Um, so, "In a new preprint, 'LLMs are Bug Replicators,' the researchers went out and tested this hypothesis. They gave seven LLMs on--they gave seven LLMs on Java code--" This is a weird sentence. "--from the Defects4J and the ConDefects collections of common logical errors in open source code and told them to fix bugs. What happens when you give an LLM buggy code and tell it to fix it? It puts in bugs. It might put back the same bug!" And so, and then, "Worse yet, 44 percent of the bugs the LLMs make are previously known bugs. That number's 82 percent for GPT-4o--"
Emily M. Bender:Yay. They won, 82 percent.
Alex Hanna:Yes. They won the human's last test contest by introducing the most bugs. Um, "So it looks like the common bugs are all standard through--due to standard training." I don't know what that means either. "'In bug prone tasks, the likelihood of LLMs generating correct code is nearly the same as generating buggy code.'" Yep. So pretty, pretty bad. Who the click--can you click the arXiv paper? I, I remember reading this headline. So this is from um, this is--oh, it's actually published. Um, it's actually an old paper.
Emily M. Bender:Old paper, yeah.
Alex Hanna:Yeah. So this is from 2021. Um, although maybe--
Emily M. Bender:Wait, then how are they testing GPT-4o in 2021? What's going on here?
Alex Hanna:Maybe that's some bad reporting in terms of Gerard.
Emily M. Bender:Just wanna scroll down to their evaluation.
Alex Hanna:Yeah.
Emily M. Bender:No, they're claiming to do GPT-4o. So I think, you know what this probably is? They probably are submitting it to IEEE Transactions on Software Engineering. And this is the default.
Alex Hanna:That's the, that's the default thing. Yeah. Okay. Got it. Uh, I was like, wait, what? That's a little-- so the, now we're looking at a kind of a top line table. Um, which, uh, which is the analysis of code quality. Um, I don't know how to read this 'cause I haven't read this paper, but I was just trying to see who the authors were. So.
Emily M. Bender:Oh, lemme take us back to the top then.
Alex Hanna:Yeah.
Emily M. Bender:Now that we've figured out that it is recent.
Alex Hanna:So this is, um, Liwei um, Guo?
Emily M. Bender:Liwei Guo.
Alex Hanna:And then, um, a few folks, just trying to find the institution, but they don't have it listed.
Emily M. Bender:They don't have it. Maybe if we back up and look in the landing page here it will tell us something?
Alex Hanna:No. Anyways, okay. It doesn't matter. All right.
Emily M. Bender:Yeah. All right. Alright. But you get to do this because you're the one who spotted it.
Alex Hanna:Okay. So I'm walking home, uh, from a flight on, what day did I come back, Friday. And I saw this ad in the San Francisco International Airport in Terminal D, and it says, uh, it is an ad for a company called Turing. It's all black, and the writing here is, "We teach AGI to think, reason and code -- so you don't have to. It's okay. They're fast learners." And I'm like, ugh. Immediately took psychic damage upon seeing this.
Emily M. Bender:Right? 'Cause, because none of us want to, I mean, coding might be a chore for some people. It might be a joy for some people, but none of us want to think or reason anymore. That's definitely something we wanna stop doing.
Alex Hanna:Yeah, Lord. Just really cursed stuff.
Emily M. Bender:So then you, you went and explored the, this is the right website, right?
Alex Hanna:Uh, I think, yeah, I think it is. This is the same font. So "Turning AGI research into real world impact." And so then you scroll down and I think this, this is actually a crowd work site, uh, company. So they've, uh, you know, so they're advertising "40 plus industries innovated." So they have been innovated. And then "4 million plus professionals available." Um, so it looks like what they're effectively doing is that they're trying to find, uh, people with particular expertise for model training. Um. And this is something that Scale AI has, has been doing for a while. They're another one of these companies. And the thing that's just so cursed is not only the name of the company, Turing, but also the, the use of 'AGI' to just kind of like, without any like, you know, just completely out, out of the box. I mean, no-- and I mean all these buzzwords are all over just incredibly cursed stuff.
Emily M. Bender:They've got, so, "Turing AGI Advancement: focus on advancing AGI capabilities. Turing AGI Advancement combines scalable data systems, human intelligence--" There it is. There's the crowd work."--and cutting edge model post-training to drive the future of AI development." And they're, they've got nine categories here, model evaluation, et cetera, et cetera. And the last one is "frontier knowledge processing."
Alex Hanna:Yeah. I don't know what, what is frontier knowledge? Is that like how to load a musket and how to hunt? How to hunt squirrels? Is that, I'm just imagining, how to make a Davey Crockett hat.
Emily M. Bender:How to be a colonizer.
Alex Hanna:Yeah. I mean, I mean, I guess that's a certain kind of knowledge that we're speed running right now, but yeah.
Emily M. Bender:Right. That, synthetic data generation, that's what we need. Okay. Enough of Turing.
Alex Hanna:Yeah.
Emily M. Bender:Uh, so this is a LinkedIn post by Ben Williamson from about two weeks ago at the University of Edinburgh. And he writes, "Want to know what a MAGA DOGE-inspired and AI intensive schooling system would look like? Here's the vision that's being set out by Musk adjacent ed tech and ed reform entrepreneurs. Quote, 'No one wants to admit it, but the vast majority of teachers are glorified babysitters. In the age of personal computers, it is insane to trust the local babysitter to teach kids cognitive skills that may in many cases be beyond the teacher's own grasp.' 'I think it's adorable when people think we're going to fix education by improving teachers. Sorry, it's AI tutors or the slow death of civilization. Those are the choices.' 'First step stop teachers from teaching.'" Those are the end of three quotes, and then Williamson continues, "It's an awful reactionary quote, 'anti woke' vision of automated right wing ideological training, and quite possibly a sign of the AI educational future that could come fast in the US -- a Grok bot personalized learning tutor with a cute ClassDojo-like interface for every child."
Alex Hanna:Ugh. Where are these, so where are these quotes from? Yeah, like, um, it's, um--
Emily M. Bender:Yes. Okay. We're redirecting. Mother Jones. Okay.
Alex Hanna:Oh, so this is, oh, I know Anna Merlan. So, so this is an article. This is a horrifying image too.
Emily M. Bender:It's so terrible.
Alex Hanna:So this is "Meet the educational entrepreneurs who want to teach a new generation of Elon Musks." And there's a horrifying image of like, some child in a, like, um, like with a, like a sweater vest on, all with like these tilted Musk heads. Um, yeah. Really, really horrible stuff. But it's like, this is also something we've, you know, we've talked about, um, on the, on the pod before we had Adrienne Williams, who used to be a teacher. Um, and had talked a lot about these kinds of things happening in charter schools, especially like these are the places that are really into educational reform and, um, tech enabled, um, garbage that, um, a lot of large funders are really into like Gates and, and, uh, Chan Zuckerberg and whatnot. So check out that episode if you are interested.
Emily M. Bender:Yeah. All right. So palate cleanser. Alex, you wanna take the lead on this one? This is so great.
Alex Hanna:This is a great piece. Um, by, um, uh, let me, uh, and it's, it's published at the ICMA, which is, I think, can you scroll up? 'Cause I think it's the international, yeah, the, the newsletter of the International Center of Medieval Art. And so then, um, so this is a newsletter and this is by Sonja Drimmer and Christopher Nygren. Um, and the title is "How We Are Not Using AI in the Classroom." And, um, this is a really fantastic piece. Um, I shared a bit of it, um, this on Bluesky and it got, um, a lot of people were really into it. So, um. Just the choice quotes here. "We were given a prompt as an invitation to participate in this newsletter: 'How are you using AI in a classroom?' While we have accepted this invitation, we are engaging in the most humanistic act we can imagine--refusing the prompt." Uh, and so then they've got a lot of cool, like really good stuff about the kind of like pushing back on the incursion of AI in the classroom. And you know, I've been giving a few, a few talks, um, about the book and about other things in the past year. And yeah, I mean, lots of questions coming in from educators. Well, this is kind of inevitable. Well, the student--well, you can push back. You are an educator. Um, and I think there's really a feeling that it is, um, it's, it's, it's really hard to do. So, you know.
Emily M. Bender:Yeah, so strongly recommend this piece, if you want some, you know, respite from all of the AI hell raining down. This is a very well argued and very thorough refusal, which is amazing.
Alex Hanna:Yeah.
Emily M. Bender:Um, yeah. And we have Elizabeth With A Z in the chat saying, "Big tech is trying so hard to push this on teachers, and even teachers who don't want to, uh, to do it are feeling such pressure to do so. Quote, 'Save time,' quote, 'Genie is out of the bottle.' Quote, 'You're doing your kids a disservice if you don't.' This inevitability narrative." And so love the things that push back on it. Alright, I have something of an intermission uh thing for you.
Alex Hanna:I don't have to be the one to perform, you, you have a thing.
Emily M. Bender:No, it's my turn. It's kind of long. Um, but it's also to give our viewers and listeners a sort of a, a view into our inboxes because we are now, I guess, a big enough podcast that we've landed on the radar of lots and lots of publicists. And we get continually, at least weekly pitches, um, for people to be on the podcast. So I have compiled excerpts of those pitches into what I'm calling a found prose poem with the title,"My Client," and I have anonymized this in the sense that anywhere that the client's name, um, showed up, I replaced it with the phrase "my client." Um, and you know, otherwise, I'm just taking texts outta these people's emails. So you might worry that the publicists will notice that I have quoted them, but as will immediately become clear, none of these publicists actually listen to our podcast. So, here we go.
"Your podcast, Mystery AI Hype Theater 3000, stands out for its bold conversations around tech, AI, and data. That's exactly why (my client) would make an excellent guest." "I'm reaching out to propose (my client) as a guest for your podcast, as his expertise aligns seamlessly with your focus on building business slash work-life balance." "I'd love to introduce you to (my client). With his expertise in AI-driven product innovations and award-winning ventures, (my client) brings compelling insights into AI's real-world applications, beyond the usual hype." "(My client), with her multifaceted experience in AI and her recent encounters with corporate layoffs, offers a distinctive viewpoint on how tech workers can navigate their careers amidst AI evolution." "After helping shape high tech in Silicon Valley, leading eBay's IPO, driving startup innovations, participating in new technologies like the internet, semiconductor chip design, and biotechnology, (my client) was driven by his deep interest in philosophy and astrophysics to explore the intersection of science, humanity, and purposeful living in a world transformed by AI." "(My client) will be a fantastic guest to further dissect these myths. Known for his success in leading tech companies to acquisitions and award-winning product launches, he offers a pragmatic stance on AI's current and future role." "Having explored AI and its implications through both his academic and creative work, (my client) offers unique perspectives, blending science fiction and reality. With AI such a hot topic, I believe my client, (my client), would make a compelling guest on your show." "As a seasoned conversational AI technologist, (my client) has shaped the landscape of digital communication by harnessing AI to enhance human expertise." "Previously an investor at Accel, (my client) helped deploy over $250 million in crypto and AI investments, backing companies now worth over $5 billion. What truly sets (my client) apart is his keen foresight in monetizing the shift from traditional web spaces to AI-driven experiences. As AI consolidates under a few major players, my client believes this will slow innovation and limit progress. He argues that true breakthroughs will come from open collaboration, not corporate secrecy." "(My client)'s latest book touches on themes that resonate with your podcast's ethos." "(My client)'s documentary is not just about technology, it's about the human stories intertwined with it. By featuring them, you could provide your listeners with a fresh and captivating perspective." "(My client) leads a team focused on smart behaviors, merging AI with the built environment. With a rich background in architecture and a profound grasp of technology's societal impacts, he would offer a unique perspective on AI's evolving role in education and beyond." "(My client) is pioneering the world's first IP-powered agentic AI protocol, creating emotionally intelligent AI companions that integrate seamlessly across devices. Think Siri, but with your favorite characters." "(My client) is poised to discuss a range of topics including effective leadership strategies, corporate culture and the entrepreneurial mindset, and of course, oil and gas insights." "I believe (my client) would be a remarkable guest on Mystery AI Hype Theater 3000. Her fusion of technical savvy and social awareness offers a refreshing perspective on AI and education." "(My client) is willing to discuss any topics related to AI in blockchain." "With a flair for storytelling that mirrors your podcast's blend of high-tech narratives and socioeconomic insights, (my client) can share how publishers are not just adapting, but revolutionizing their approach via connected AI agents." "(My client)'s experience with analytics and audience platforms could add a new dimension to your conversations on AI's societal impacts. Would you be open to featuring him on your show?"
Alex Hanna:So good. Just incredible.
Emily M. Bender:Yeah. Um, and just to clarify, this is not one message, this is a mishmash of, um, you know, uh, basically each sentence or pair of sentences was from a different message.
Alex Hanna:Yeah. And there's, there's a few, so there's the Abstract Tesseract with the Kai Winn GIF,"my client", like "my child", uh, which, you know, uh Deep Space Nine reference. Thank you. And of course, "My clients", you know, in Borat voice, um, yeah, just, oof, just incredibly, incredibly, incredibly--thank you for reading that. And I mean that this is, and it's, the thing is y'all, it's like that, that is like slightly less cringe. Like not the performance, the performance is great, but like the kind of like, all of them are very cringe messages that come in that are just like,'I loved your episode on like how technology is going to ruin education. My client has raised $5 million in series A funding for multiple, like blockchain endeavors.' And we're like, what the, you obviously didn't listen to any of this.
Emily M. Bender:Yeah. And, and the idea that that's a good way to do business as a publicist. Just mind boggling. Um, and occasionally we get people suggesting themselves too, which is hilarious. All right, so back into the hell outside of our inboxes, um, we are now in the policy, military, law and policing area. And Alex, I'm gonna rest my voice a bit more and let you do this one.
Alex Hanna:Yeah, no, please. Yeah, please rest. Um, I'll take the, the next two. So this is also from Pivot to AI: "AI benchmarks are self-promoting trash, but regulators keep using them," also from David Gerard. Um, so yeah, so "Every new LLM, um, and every new tweak to an old LLM has a press release bragging about how well it tests on some benchmark you never heard of. Every new model is trained heavily to the previous trendy benchmark. OpenAI just accused xAI of rigging the benchmark scores for Grok 3." And then these are, this is, um, uh, two, one preprint and one report from, um, the European Commission's Joint Research Centre. So one preprint coming from, um, some folks at Stanford. I'm not sure which, which lab. Um, it says, "Stanford researchers confirm: LLM benchmarks are spurious trash. Most LLM benchmarks are not easily replicated -- OpenAI is notorious for this -- do only a single run, don't reproduce well and don't report the statistical significance or error bars of their results." Well, that's kinda weird because, um--
Emily M. Bender:They're missing the fundamental thing, which is that they have no construct validity. As we argued back in 2021. Yeah. So yeah, Stanford confirmed, but we've been saying that.
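For anyone wondering what the "error bars" Gerard mentions would even look like, here is a minimal sketch, with made-up accuracy scores rather than real benchmark data, of reporting a mean and a normal-approximation 95% confidence interval across repeated runs instead of a single number:

```python
# A minimal sketch of the statistics single-run benchmark reports omit:
# mean accuracy plus a 95% confidence interval across repeated runs.
# The scores below are made-up illustrative numbers, not real results.
import math
import statistics

def mean_with_ci(scores, z=1.96):
    """Return (mean, lower, upper) using a normal-approximation 95% CI."""
    m = statistics.mean(scores)
    se = statistics.stdev(scores) / math.sqrt(len(scores))
    return m, m - z * se, m + z * se

runs = [0.71, 0.68, 0.74, 0.69, 0.72]  # hypothetical accuracy over 5 runs
mean, lower, upper = mean_with_ci(runs)
print(f"{mean:.3f} (95% CI {lower:.3f}-{upper:.3f})")
```

Even this says nothing about construct validity, of course; it only shows how cheap it would be to quantify run-to-run noise.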
Alex Hanna:Yeah. And then, um, yeah, the "benchmarks are funded by the companies and are even available to them ahead of time." And then the European Commission's Joint Research Centre says the same. So, yeah, we, we, we knew this. We have some, um, some data for it now. All right, next. So this is a great thread from John, uh, John Skiles Skinner, um, who used to be at the, um, United States Digital Service. So he says, "I helped build a government AI system. DOGE fired me, rolled the AI out to the whole agency, and implied the AI can do my job and the jobs of the others they fired. It can't, but what DOGE accidentally revealed about themselves in the process is fascinating. Thread." Uh, it's worth reading a, um, a lot of this too. So, "From November 2023 to January 2024, I worked on a tool we called the AI sandbox. Our goal: let federal software devs test out AI tools in a safe way to discover if they have any use. Renamed GSAi, the tool has been claimed by DOGE. It is being rolled out at an alarmingly accelerated schedule. No one from the new admin wrote a single line of code for it, but the GSAi has become their proud quote, 'AI-first strategy.' Previously at my job, we put people first. See 18F's work--" Uh, and that's, um, that federal agency, um, design agency within USDS, who has done a lot of good work. "--on, uh, human-centered design. As my coworker explains in this demo, we knew AI's capability is limited." Uh, there's a quote here, um, we can scroll down. "As a fed, I am accustomed to knowledgeable and diverse coworkers who don't get snookered by the latest buzzwords. But yesterday at GSA headquarters, six white guys in suits, one of them without a tie, took the stage. They demo'd GSAi like it was pure magic to them." And, um, there's a part I do want to get to, um, I'm gonna skip a little bit. So here, "There's a quasi-religion in Silicon Valley that views AI as godlike. This faith has always been parallel to Evangelical Christianity. Salvation, um, parentheses, transhumanism; the rapture, the technological singularity; and demons, Roko's Basilisk." Uh, sound familiar? Sounds like TESCREAL nonsense.
Emily M. Bender:Mm-hmm.
Alex Hanna:"Lately the AI faith has fused with Christian, has fully fused with Christian nationalism." Um, and so, "Together, um, together these two faiths crave the end of American liberal democracy and government. You can smell it at GSA. They won't replace us with AI 'cause they don't wanna replace us at all. They want our work to end. DOGE is firing us as fast as possible and that is the whole plan." And that is a really nice kind of punctuation on this thread.
Emily M. Bender:Great thread. Yeah. Alright. This one is just a company's website, not writing about it. It's, uh, EspySys.com, ESPY, and the headline here is "Psychological profiling -- AI. At the core of our Psychological Profile API is the AI based system, which includes summary generation, entity extraction, and danger slash violence level measurement based on an individual's social network personal profile. This method ensures precise and comprehensive insights into a person's psychological traits, behaviors, and potential risks." So this is basically describing surveillance of social media activity and then, um, using it to make claims about somebody's psychological traits, behaviors, and potential risks based on, you know, I bet if we clicked around, we're not gonna find how they evaluate this.
Alex Hanna:Yeah. This is just. This is just some Snowden leaks type metadata, social network analysis stuff with some, like, some like violence, like sentiment analysis bullshit. I don't imagine they're gonna be that forthcoming of what they're doing. But yeah, this is very much military industrial complex security state bullshit.
Emily M. Bender:Gross. Um, okay. Here is a Verge article that I'm, I want to call, it's about the, uh, ChatGP-Tariffs.
Alex Hanna:Oh Lord.
Emily M. Bender:Um, so, uh, um, I'm doing this one, Alex, 'cause you get to lead on the next one, seeing what it is. Um, so this is April 3rd, 2025, by Dominic Preston in the Verge with the headline, "Trump's new tariff math looks a lot like ChatGPT's," and then subhead, "ChatGPT, Gemini, Grok, and Claude all recommend the same quote, 'nonsense' tariff calculation." And so what people found very quickly after Trump announced, uh, the tariffs that are like 10% on everybody, plus some amount for different specific countries, is, um-- and also what, uh, these are supposedly based on quote, "tariffs charged to the USA," but that doesn't actually match anything. And some folks put in queries into like, what's this first one? ChatGPT. Uh, "What would be an easy way to calculate the tariffs that should be imposed on other countries so that the US is on an even playing field when it comes to its trade deficit?" And then the same silly equation keeps coming out that says, uh, "The tariff rate is the trade deficit divided by total imports times 100." Um, and they've got screenshots of all four of the things that they asked. Um, and so people are speculating in fact, that someone in the administration just asked ChatGPT, which could be, right, I wouldn't put it past them. At the same time, this silly idea wasn't actually fabricated by the chatbots, but it came from something in the training data, and I don't see it in this article, but someone did sort of track down the sort of original source of that.
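For the curious, the "nonsense" calculation quoted above is simple enough to write down; a minimal sketch, using made-up trade figures rather than real ones:

```python
# The tariff formula quoted in the Verge piece: trade deficit divided by
# total imports, times 100. The figures below are hypothetical, chosen
# only to illustrate the arithmetic.
def tariff_rate(trade_deficit, total_imports):
    """Rate (%) = trade_deficit / total_imports * 100."""
    return trade_deficit / total_imports * 100

# e.g. a hypothetical $100B deficit on $400B of imports:
print(tariff_rate(100, 400))  # 25.0
```

Note this is just the ratio the chatbots suggested; it has nothing to do with tariff rates other countries actually charge, which is the whole point of the story.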
Alex Hanna:Yeah. Ugh.
Emily M. Bender:Yeah. Yeah. Alright, so this is kind of the chaser. Um, and--
Alex Hanna:Kind of, yeah.
Emily M. Bender:Kind of. So go for it.
Alex Hanna:Yeah. So this is, um, reporting, um, from the Verge: "Microsoft employee disrupts 50th anniversary and calls AI boss 'war profiteer.' The employee sent an email to a number of Microsoft distribution lists after being ushered out of the event." By Tom Warren and Jay Peters. Um, so really, um, brave action here by this employee.
Emily M. Bender:I love this photo. It's, (crosstalk).
Alex Hanna:So the photo's, um, a woman in a hijab and she's standing in front of a screen that says "Your AI company," I think, uh, she's blocking part of it. And she's yelling at Mustafa Suleyman, our friend, head of Microsoft AI. Um, and she says, "'Shame on you,' said Microsoft employee Ibtihal Aboussad, speaking directly to Microsoft AI CEO Mustafa Suleyman, 'You're a war profiteer. Stop using AI for genocide. Stop using AI for genocide in our region. You have blood on your hands. All of Microsoft has blood on its hands. How dare you celebrate when AI is killing--when Microsoft is killing children. Shame on you all.'" And if you, yeah, definitely go to this link we'll have in show notes, where her full letter is really, uh, is really great to read and, um, and really helpful.
Emily M. Bender:Yeah. And I just wanna, um. Oh yes."She says, 'If I knew my work on transcription scenarios would help you spy on and transcribe phone calls to better target Palestinians, I would not have joined this organization and contributed to genocide. I did not sign up to write code that violates human rights.'" Just, wonderful straight talking there.
Alex Hanna:Mm-hmm.
Emily M. Bender:All right. So, chaser in the sense of we stan this employee.
Alex Hanna:Yeah. Very, very important to be still advocating despite all the kind of, um, chilling effects they've had on employee activism at Microsoft and Google.
Emily M. Bender:Yeah. Mm-hmm. Uh, I think that takes us to number five, doesn't it?
Alex Hanna:Yes. Is that, is that the last, is that the last one?
Emily M. Bender:No, I've got this, I've got this group of chasers for six though. So this is the last this, this is the last of the bad stuff.
Alex Hanna:Okay.
Emily M. Bender:Um, so we got a couple pieces here on, um, Microsoft and others backing out of the big data center expansion. So this is from Semafor, um, from, uh, today, um, April 7th, 2025. It must have been posted yesterday if I got it into this group.
Alex Hanna:Oh, no, no, it's not, it's not. It's, it's from March 20th.
Emily M. Bender:It's March 20th. Oh--
Alex Hanna:That's, that's just telling you what time it is right now.
Emily M. Bender:That's not helpful. Okay. March 20th, 2025. Headline is, "Microsoft chose not to exercise $12 billion CoreWeave option," by Rohan Goswami and Liz Hoffman. Um, with the sticker "business." And "The scoop," it says, "Microsoft chose not to exercise a nearly $12 billion option to buy more data center capacity from CoreWeave, people familiar with the matter said, a sign that big tech companies are starting to rightsize and tailor their AI budgets." Um, so this is actually good news, right?
Alex Hanna:Yeah? Hopefully.
Emily M. Bender:Hopefully. Uh, and then similarly, you wanna do this one?
Alex Hanna:Yeah. So this is MIT Tech Review, from, uh, Caiwei Chen, from March 26th, um, "China built hundreds of AI data centers to catch the AI boom. Now, many stand unused. The country poured billions into AI infrastructure, but the data center gold rush is unraveling as speculative investments collide with weak demand and DeepSeek shifts AI trends." So lots of, uh, lots of data centers not being used. I mean--
Emily M. Bender:This is a great image too.
Alex Hanna:The image is a bunch of like Cat 5 cables covered in, uh, cobwebs.
Emily M. Bender:Yeah.
Alex Hanna:So, um, good news. I mean, I'm sad that those data centers are built at all, but at least they're not, you know, the energy is not churning out.
Emily M. Bender:Yeah, yeah. All right. Um, this one is hilarious.
Alex Hanna:Yeah. This is, so this is, uh, Sarah T. Roberts, uh, friend of the pod, um, "OpenAI co-founder Ilya Sutskever's new startup Safe Superintelligence just closed another funding round for $2 billion. Sutskever promises not to release any product at all until SSI has developed quote, 'super intelligence.'" "In other words, here's 2 billion, 2 billion to do nothing." And it's got a picture of, um, Altman and Sutskever standing, uh, sitting on, doing some kind of panel.
Emily M. Bender:And I, I saw some other folks pushing back on this angle on it. Um, but you know, there's always the, the fanboys. Um, yeah. And oh yeah. Oh, well, no, Ed Zitron,"That isn't what the article they're aggregating even says." Um, so Sarah T. Roberts, "Okay. The Wall Street Journal says that he'll be focusing all his energies on ASI efforts and not releasing anything else until he achieves that goal. Since that shit is vaporware fantasy, the conclusion stands. Thanks for stopping by, I guess."
Alex Hanna:Yeah.
Emily M. Bender:Um, so, um, yeah, this is, it's like, can, can we get $2 billion to not release super intelligence?
Alex Hanna:I would love $2 billion to not release super intelligence. I'm doing that right now.
Emily M. Bender:Yeah, exactly. Where's, where's my $2 billion?
Alex Hanna:Yes, totally.
Emily M. Bender:Okay. This one I think we, we have some time to go into in some detail. It's gross. So this is in, uh, Fortune, by Preston Fore, March 27th, 2025. Sticker is "success" and "future of work." And then, "Bill Gates says a two day work week is coming in just 10 years thanks to AI replacing humans quote 'for most things,'" and this is, I think I had three-- oh, am I gonna have to-- we'll see how much we can get. Um, the, Gates talks about it automating teachers and automating doctors, and he basically went on like a, um, what's the word I'm looking for? Um, a, a podcasting spree, talking to all these people. I think he was talking to Trevor Noah and somebody else. And like each place there's some ridiculous claim. So, "Say hello to a five day weekend. Billionaire Microsoft co-founder Bill Gates says artificial intelligence may soon automate almost everything and with it usher in a two day work week in less than a decade. If you're not a fan of the nine to five weekly grind, there's good news. Bill Gates is predicting that in just 10 years, humans might just work for two days outta the week. And it's all thanks to AI." Um, oh, this one's Jimmy Fallon now. Um, and somewhere. Oh, this is the part that really got me, and this is why this is filed under TESCREALism: "A five day weekend could boost birth rates and kill burnout."
Alex Hanna:Mm-hmm. That's the birth rates thing is wild. I mean, it's just, just like straight up, like all these fuckers are natalists and they're just like, like Musk and like there's, and there was some kind of stat that they found I think from, I forgot what country that they had. I think it was Japan. They were saying like, oh, Japan like did this and it boosted birth rates, and you're like, what are you, like, what is this obsession with birth? I mean, it's positive eugenics, that's what it is, but it's just like, man, y'all have completely gone down this rabbit hole.
Emily M. Bender:Yeah. Yeah. And you know, not to say that we need to be working 40 hour weeks, right? That's, um-- but it's got nothing to do with AI. And we have this great, very sarcastic, um, comment from Elizabeth With A Z, "When worker productivity increases, the bosses pay you the same and give you three days off. Like that's gonna happen, right?"
Alex Hanna:Does that work? Is that how it happens?
Emily M. Bender:Yeah. And Abstract Tesseract, "Bill Gates has yet to propose a future that's more appealing than the one where he redistributes the wealth and power he's extracted in his rise to power." Yeah. Yeah. Um, and then the two professions likely to be replaced by AI, um, doctors and teachers.
Alex Hanna:That's just like really, first off, wild. Absolutely wild that that's a thing that he thinks is going to happen at all. And then, and also the like idea that like-- first off, numerically even, most of, most of the people in the world are not doctors or teachers. Um, so just numerically, most people wouldn't be doing that. But if he thinks that, like, most of the automation that will be done, that, like, "a significant number of people are exposed to that automation," which isn't true as well. And then like those two, those are the two you chose to focus on? Like, aren't those the two? Like, and you're going on a, like a program like Fallon to say such a thing. This also, sorry, this is my own high horse about Fallon. This just goes on to the, um, just like the, uh, list of terrible people that Fallon has platformed, like when he rubbed, um, Trump's hair in 2016 and like, you know, anyway, I've been a, I've been a Fallon hater from like day one. Like day one. I, like, was actually in the crowd for his show in like 2008, and I, like, trolled him in the audience. Deep, deep Alex Hanna lore, and he got really mad at me. Anyways, I'm a hater.
Emily M. Bender:Nice, nice, nice. You know, you know, I think I know why Gates went for those two professions, because that's where he's been focusing the, the philanthropic work in the Bill and Melinda Gates Foundation. Right. Education and health.
Alex Hanna:Yeah.
Emily M. Bender:Um, and he thinks it's just, you know, he's got it all solved.
Alex Hanna:Oh. Oh, update that just literally got posted about the, about the protestor at Microsoft. From the No Azure-- because I was just on Bluesky, uh, from the No Azure for Apartheid account. It says she was just fired for that protest. So her, Ibtihal Aboussad, and someone named, uh, Vaniya Agrawal. So yeah, I'll drop it in the chat, um, 'cause our producer is asking for it. So this is the, um, you know, like, so, wow. Just kind of on schedule: saying, like, I hope they're not penalized, and immediately being penalized.
Emily M. Bender:Yeah. Yeah. I mean, it's not surprising, but I hope this, I hope this gives her a great platform and that she does as much with it as Timnit has with hers.
Alex Hanna:Fingers crossed. I mean, I think that any organizing, you know, like where folks can get together is really needed.
Emily M. Bender:Yeah. All right. I've got one last batch and it's all chasers. I wanna do this first one 'cause it cracked me up so hard. This is a Bluesky post, um, from March 20th, and it's by Laura Bassett. And it's basically a, a, uh, image of a Wired headline, um, that Laura prefixes with two lines. So I'm gonna read her two lines and then the Wired headline, and it goes, "Roses are red, violets are blue. 'Nearly all cyber trucks have been recalled because Tesla used the wrong glue.'"
Alex Hanna:Wild.
Emily M. Bender:I just, I just love that she spotted that she could like, fit that in like that.
Alex Hanna:Oh, completely.
Emily M. Bender:Yeah. So that was, that was lots of fun. Um. This one is kind of long, which is good because we have time for it. And I'm giving it to you, Alex, 'cause D&D.
Alex Hanna:So this is a long thread from someone named Pavel. Um, so, uh, SPavel.Bluesky.Social. Um, so, "The literal genie is a very common trope. When you make a wish, you get exactly what you wish for. This is one of the explicit limits on the spell 'wish' from D&D, often called the most powerful spell in the game because its effect is to quote, 'make a wish, and it comes true.'" Uh, and then in parentheses, "This is a thread about tech." And so it's basically like this idea where like you have to be careful what you wish for. It can lead to like sort of monkey's paw situations, and so like sort of ways in which, um, certain, you know, DMs try to one-up their player characters. Um, and also the way in which, um, yeah--so then there's, there's another, um, another skeet that says, "People have so much fun refining the perfect wish, or prompt, if you will, see where we're going with this? that they miss another very important line of the spell's text. A wish can be not only twisted, but also partially fulfilled, if the effect requested is beyond the power of the spell." And the kind of kicker here is, "LLMs are the wish spell of serious and highly paid tech professionals. Um, this is why every AI fail will be met with, 'What was the prompt?' This is why devs think that asking their LLM to not hallucinate will prevent it from hallucinating. The prompt is a wish, and the perfect wish can do anything, except, of course, within the bounds of the spell's power." So yeah.
Emily M. Bender:That's a really nice--
Alex Hanna:Any D&D, any D&D uh, tie-ins where possible.
Emily M. Bender:Yeah. Um, I also like this next one here. So same thread, "We are constantly told that AI is a revolutionary technology. Just as the rule book explicitly says 'Wish' is the most powerful spell there is. It's not true in either case, but the story around it makes it seem so, makes it seem like it can solve any problem, unless you understand how it works." So, uh, D&D lore, uh, for the win here. Mm-hmm. Um, oh, and we have some, some more input from Christie: "In folklore the three-wish structure also tends to be 1, wish for the thing. It's not quite what you imagined. 2, wish to fix it while still getting something good from the wish process. And 3, wish to put everything back together the way it was pre-wish." Yeah, that's what I wish for.
Alex Hanna:Lord.
Emily M. Bender:Um, all right. You ready for this last one, Alex?
Alex Hanna:Yeah.
Emily M. Bender:This is big news.
Alex Hanna:Yeah. We actually finally met in person, believe it or not. Um, right after, actually, this is before the large, the chatbot--
Emily M. Bender:The great chatbot debate.
Alex Hanna:The great chatbot debate. I almost called it the large chatbot debate, um, in which "large" is modifying the chatbot, not the debate. Um, but we met in person and, uh, it's very funny. Because it took us 50 episodes, writing a book, and two papers together to actually meet in person. So yeah. Now you get a sense of our relative sizes.
Emily M. Bender:Yes. In case anyone was wondering. And no, it was, it was so great to finally meet in person and, um, I think amazing that we have done so much together over all these years. And I mean, we started working together in 2020. And I guess in 2020 it wasn't surprising that people were just inside the little box on the screen, but, you know, since then. And we've had some near misses. Remember we almost met up in Paris?
Alex Hanna:Oh yeah, that's--when I had Covid?
Emily M. Bender:Yeah.
Alex Hanna:Yes. Yeah.
Emily M. Bender:And that was, it was also totally random.'cause I was on a train to Paris and like posting about it. And then I think Sasha Luccioni saw that and said, oh, you're gonna be in Paris. I'm in Paris. And you said, I'm in Paris too.
Alex Hanna:I'm in Paris, but I can't meet. Sorry. Yeah.
Emily M. Bender:So we, we finally did, and it was worth the wait.
Alex Hanna:Totally.
Emily M. Bender:Um. Yeah. And absolutely excellent. And I just wanna raise up, um, a couple of--these are both funny--from SJayLett in the chat, about the monkey's paw wishes here: "The only winning wish is not to play. The only winning move is not to wish? Something like that."
Alex Hanna:Yeah. You can, you can do that. And yeah. And then Abstract Tesseract: "Mystery Hype AI Theater 3000 IRL, for the win, LOL."
Emily M. Bender:Yeah. I think we have to render that all as acronyms."MAIHT3K IRL FTW LOL."
Alex Hanna:Yeah. I'm not gonna be able to do that without stumbling. So anyhow.
Emily M. Bender:Yeah. All right. Well we've made it through another, another round of Fresh AI Hell, and I have to say that I, um, in principle want to make sure that I've culled all the links, but I only went back like two weeks and found all of those. It has been coming down so thick and so fast recently. So thank you all for coming along for the ride. I hope you also feel refreshed and ready to take on what comes next.
Alex Hanna:I also wanna say that I think we had a lot more good news than we usually have, so you know, that's a thing. So hopefully it kind of resolves, even though the world is kind of going into AI Hell in a handbasket. Yeah. That's it for this week. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Preorder "The AI Con" at TheCon.AI or wherever you get your books. And join our virtual book tour kickoff event on Tuesday, May 8th, or find us on the road. A full list of events at TheCon.AI.
Emily M. Bender:But wait, there's more. Rate and review us on your podcast app. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown for more anti-hype analysis, or donate to DAIR at DAIR-Institute.org. Again, that's D A I R hyphen institute.org. You can find video versions of our podcast episodes on Peertube, and you can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again that's D A I R underscore Institute. I'm Emily M. Bender.
Alex Hanna:And I'm Alex Hanna. Stay out of AI Hell, y'all. Or our clients will find you.
Emily M. Bender:My client would like you to know.