Mystery AI Hype Theater 3000

Episode 47: Hell is Other People's AI Hype, December 9, 2024

Emily M. Bender and Alex Hanna

It’s been a long year in the AI hype mines. And no matter how many claims Emily and Alex debunk, there's always a backlog of Fresh AI Hell. This week, another whirlwind attempt to clear it, with plenty of palate cleansers along the way.

Fresh AI Hell:

Part I: Education
Medical residency assignments
"AI generated" UCLA course
"Could ChatGPT get an engineering degree?"
AI letters of recommendation
Chaser: 'AI' isn't Tinkerbell and we don’t have to clap

Part II: Potpourri, as in really rotten
AI x parenting
Et tu, Firefox?
US military tests AI machine gun
"Over-indexing" genAI failings
AI denying social benefits
Chaser: AI 'granny' vs scammers

Part III: The Endangered Information Ecosystem
Fake Emily quote in LLM-written article
Protecting Wikipedia
AI: the new plastic
Google AI on 'dressing'
"AI" archaeology
Misinfo scholar used ChatGPT
OpenAI erases lawsuit evidence
LAT "AI" bias meter
WaPo AI search: The Washington Post burns its own archive

Chaser: ShotSpotter as art

Part IV: Surveillance, AI in science/medicine
Apple patents "body data"
Chatbots "defeat" doctors
Algorithm for healthcare "overuse"
"AI friendships"
"Can LLMs Generate Novel Research Ideas?"
Another LLM for science
Chaser: FTC vs Venntel

Part V: They tell us to believe the hype

Thomas Friedman: AGI is coming
Matteo Wong on o1's 'reasoning'
WIRED editor: believe the hype
Salesforce CEO: The "unlimited age"

Chaser: Emily and Alex's forthcoming book! Pre-order THE AI CON: How to Fight Big Tech’s Hype and Create the Future We Want


Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' comes out in May! Pre-order now.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna:

Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender:

Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of bullshit mountain, we discover there's worse to come. I'm Emily M. Bender, Professor of Linguistics at the University of Washington.

Alex Hanna:

And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 47, which we're recording on December 9th of 2024. This is our last episode of 2024 and it's been a long year in the AI hype mines. We've debunked the claims. We've critiqued the tech billionaires. We brought in the experts who helped shine a light on what we need instead of all this breathless AI boosting. We deserve a little treat. This is a catharsis, which is an all Hell episode of the show, which is our gift to you this holiday season.

Emily M. Bender:

We've scoured the globe for the freshest of Hell and some of it has just fallen on us in our inboxes, from more updates from the well trodden paths of AI surveillance to new disruptions of our information ecosystem. And if it all gets to feel a little too terrible, don't worry, we've also got some chasers and palate cleansers for you, including one very big one at the very end. So stay tuned for that. But these Hell purging sessions always go fast. So without further ado, let's get going. All right, this is the education region of Fresh AI Hell, and I'm going to start us off with something from medical education. So this is something from AAMC, which I understand to be the American Association of Medical Colleges. I'm not entirely sure about that. Um, and their, uh, headline here is "Principles for Responsible AI in Medical School and Residency Selection." It reads, "Artificial Intelligence, AI, refers to a broad range of advanced techniques and processes that perform complex tasks such as large language models, LLMs, machine learning, ML, and natural language processing, NLP." All right. That's a weird collection of things to name. Um, "Historically, simpler statistical models have been used to analyze application data and predict performance in medical school or training." All right. So that prediction is already a little sus.

Alex Hanna:

Yeah.

Emily M. Bender:

"AI can build upon the existing body of literature and traditional techniques by using more advanced mathematical algorithms or models." Um, okay. So there's like, this could be a powerful tool. And I just got so mad at just a couple of paragraphs into this. So, "The integration of AI into selection processes offers promising advancements in streamlining operations and promoting equity." So it's going to be faster and it's going to be fairer. No."For example, ML can assist in predicting applicant performance or in prioritizing applications for review. Applications can be screened in a more standardized way by using NLP to simulate expert judgment when evaluating applicant documents, such as personal statements." No. Don't do this.

Alex Hanna:

Absolutely not.

Emily M. Bender:

And the fact that it's headed as 'responsible AI', it's like, no, you missed the point. So. Next.

Alex Hanna:

Yeah. Okay. Terrible stuff. Next. Okay. So this is from the UCLA newsroom. This is a press release from UCLA."Comparative lit class will be first in humanities division to use UCLA-developed AI system." And they have on the, this horrific AI generated cover where it says, "Of Nerniacular Latin to an Evoolitun on Nance Languis sausages." It's, it's, it's quite, it's like really horrific. Like it's, it's, and it's got like kind of, um, a church looking thing and an ambiguous flag. Um, yeah. Like, why would they post--this is the textbook for the course. I don't know if this person is real, which is professor is a Zrinka Stahuljak, Kudu Human--I don't know. It's, it's horrific.

Emily M. Bender:

All right. On the Rat Ballz album though, we do need a track called Nerniacular Latin.

Alex Hanna:

Yeah. Nerniacular Latin. So like, scroll down and just so you get the lead here. So. So, "Zrinka Stahuljak's comparative literature course next quarter will cover much of the same subject matter she has taught for years past: a sweeping survey of writing from the Middle Ages to the 17th century. But thanks to AI, the course format materials will take on a totally new look for 2025. Comp Lit 2BW will be the first course in the UCLA College Division of Humanities to be built around the Kudu artificial intelligence platform. The text, the--the textbook: AI generated. The class period, classroom assignments: AI generated. Teaching assistants' resources: AI generated." Um, horrific. Um, yeah.

Emily M. Bender:

Yeah. Oh, people are having fun with, in the chat with Kudu. So Abstract Tesseract says, "Kudu, more like could you not, am I right?" And Crater Boon says, "Ku-don't." Yeah. I also want to point out that Kudu looks an awful lot like kudzu, which is a, um.

Alex Hanna:

That's mine, that, I exactly was about to make this joke.

Emily M. Bender:

Yeah, invasive species for sure. All right, so this whole thing is like, We are gonna have the AI do this stuff so we can focus on the real part of teaching. And the thing that got me the most in this, though, was, um, this isn't the first class at UCLA that's done this. So, "Kudu got its start as a tool for UCLA science courses." Um, and, uh, Alexander Kusenko, Professor of Physics and Astronomy, and his former doctoral student Warren Essey developed it. Um, and Kusenko says, "Coming from a STEM field, I was surprised to see the extensive and sophisticated use of Kudu tools in the humanities." Fuck off! I just gotta say, you know what? Humanists are super sophisticated scholars too.

Alex Hanna:

Right.

Emily M. Bender:

Which isn't what's happening here.

Alex Hanna:

Just absolutely, also nightmare stuff. We'll take a, we'll take a chainsaw to that. Also, there's a chainsaw going in my backyard. So sorry about if you hear a chainsaw on the stream or in the pod. All right.

Emily M. Bender:

Next. Next. Okay. This is Antoine Bosselut, if that's how you say his name, on Bluesky, um, posting about an article and this is, so this is the top of the thread and it says, "1/ Could ChatGPT get an engineering degree? Spoiler, yes. In our new at PNAS.org article, we explore how AI assistants like GPT-4 perform in STEM university courses. And on average, they pass a staggering 91.7 percent of core courses."

Alex Hanna:

So what, so what do they evaluate? So the, so in the second, uh, throw, they say, "We evaluated GPT-3.5 and GPT-4 using eight prompting strategies on 5,579 English and French questions from 50 STEM university courses at EPFL from both exams and assignments, uh, including many in--" Uh, and then I think, uh, this is a, a department here.

Emily M. Bender:

And, it's just--also the title of the article from PNAS is, "Could ChatGPT get an engineering degree," subtitle, "Evaluating higher education vulnerability to AI assistants."

Alex Hanna:

That's bizarre.

Emily M. Bender:

Yeah. And it's like--

Alex Hanna:

Vulnerability is such a weird way of framing it.

Emily M. Bender:

What do you think a degree consists of? It's not what you write in the answers to the questions on tests that is the stuff of a degree.

Alex Hanna:

Right? Yeah. And I think that you said something else: what can we do with a bunch of test taking algorithms?

Emily M. Bender:

Yeah.

Alex Hanna:

All right.

Emily M. Bender:

And Cnel Kurtz says in the chat, "Wow, a model trained on data can retrieve said data. Mind blowing. LOL."

Alex Hanna:

Big if true. All right. Next. So, um, so this is, so there's, this is a reply. Um, and so why don't you go up to the first one just so we see what it's a reply to.

Emily M. Bender:

Well the very first one is deleted.

Alex Hanna:

Oh, the very first one is deleted. So, um, this is Patricia Goldsworthy who says, "Our dean is hosting an event to teach faculty how to use AI to write our syllabi." And Erin uh, Hochman says, "Our school of graduate studies had a session on how to use AI to write reference letters!!!" And then the replies to this are just very upset and annoyed.

Emily M. Bender:

Yeah, like, what--what is a letter of reference, but a personal communication on behalf of someone from someone who knows the student well to someone else who's considering taking the student on as a student or employee? Um, synthetic text has no business in that at all.

Alex Hanna:

Yeah. I mean, there's lots of problems in like kind of letters of recommendation and that whole, you know, but I think it says maybe a bit about the form of letters of recommendation that they are now, you know, this is within the realm of possibility that they have a bit of a formulaic framing, you know, I've known blah, you know, they're within the top 1 percent of all students I've ever--I mean, it's just like, it's just, I mean, it's, yeah, there, it's, it's a systemic issue that, ugh.

Emily M. Bender:

Yeah, no, and, and just, you know, for the record, a good letter of recommendation is specific and ideally positive. And so when you're writing one, you think about what are all the things I can say that are specific about how great this person is. Okay, we reached our first uh, Chaser here, um. This is a fun piece that we're not going to go into all the details of by Riley McLeod, um, in something called Aftermath, Aftermath.site.

Alex Hanna:

Yeah, they are a new journalism, um, outfit that, uh, I actually like a lot of the stuff they're doing. So Aftermath.site.

Emily M. Bender:

Yeah. And this is, uh, the, the sticker here is "blog" and the headline is, "Clap or AI gets it," with subhead, "Can bad reviews kill companies? It's a start." And the whole thing is about like backlash against someone who was doing honest reviews that in many cases were bad of tech products. And, um, I really appreciated how he's like, you know, that's fine. Like, we don't have to give good reviews. Um, and, um, this is, uh, just a really funny bit in here with a call out to Peter Pan. So, uh, McLeod writes, "What we believe about AI matters an awful lot to them. As Aftermath pal Ed Zitron tweeted in response to Humane employees' reactions to Brownlee's review, 'Why do we have to be optimistic? Why do customers have to fill in the gaps between what you've promised and what you've achieved?' Here's why." And then there's a video clip of the 'clap if you believe' part of Peter Pan where Tinkerbell's dying. "Like children clapping for Tinkerbell. We have to be optimistic because the grift dies if we're not."

Alex Hanna:

The thing that's pretty interesting in this article is that it's, it's kind of the ecology of reviews. So this is, um, Marques Brownlee, who is a really popular YouTube product reviewer. He reviewed a car called the Fisher--Fisker. It was by Fisker, it was called the Ocean. He didn't like the car. And then the stock price dropped and possible bankruptcy entered the picture. And then he did a review of the Humane AI Pin, which I think we've talked about on this podcast. Um, and then he became such a touchstone of it. And I mean, something that's interesting about it is like, yeah, there is a bit of consumer media ecology of this possible to push back. And one of them is saying, we don't like this shit. And it can take some convincing of some of these, you know, these people on YouTube or, or famous podcasters or whoever to say so.

Emily M. Bender:

Yeah. Yeah, yeah. So anyway, appreciate the pushback, appreciate this, uh, very trenchant description of it. And that takes us out into our next area, um, which is, and I numbered my windows so I can find them easily this time.

Alex Hanna:

We're trekking through the AI Hell landscape. We've got our Yaktrax on. We're here to potpourri.

Emily M. Bender:

Potpourri, as in a really rotten pot of stuff.

Alex Hanna:

Mmm, yes. Mmm.

Emily M. Bender:

Okay, your turn.

Alex Hanna:

Okay, so this is on LinkedIn. Justine Moore, who's an investment partner at Andreessen Horowitz, and she says, "New Andreessen Horowitz investment thesis: AI x parenting." Like the x as in a collab in the modern parlance. "Raising kids is one of our most challenging important jobs. Every parent needs support sometimes, but many can't access it because it's too--it's too inexpensive or--" Don't they mean expensive? "--or or inaccessible." Sure, because it's too inexpensive. "AI tools can ease the burden. More on the X thread." And we're not looking at the X thread. And then there's just like a list of a port--like a big portfolio. Look, okay, I don't know much of these tools. I'm assuming many of them are ways to, like, find cheap labor to hire or do things in the gig economy. Because what the fuck does parenting have to do with AI? Like, people need food, like, cooked. They need more money. They need more--

Emily M. Bender:

They need access to quality child care.

Alex Hanna:

Access to child quality--uh they need, like, wages. They need hours that are flexible. None of this shit's going to help. Like, I, you know, I don't have to look into the portfolio to know that this is dog shit.

Emily M. Bender:

Yeah, absolutely. All right. Next. We're moving. Okay. So this is a picture posted on Bluesky, um, by someone called Space Prez with the, uh, comment, "Ahhhhhhh." And it is a picture of what looks like advertising somewhere in San Francisco, um, with a, uh, the face of a person who looks like a white woman, probably a synthetic image, and it says, "The era of AI employees is here," and the company is called Artisan, and then the thing on the top says, "Artisans won't complain about work life balance." And then this is quote posted by Greg Park, sorry, Greg Pak, who says, "The incredible gall of calling it Artisan."

Alex Hanna:

I don't even, what are the even, so AI employees, like, what do they actually do? I don't even want to dig into it, but it's, this is just a classic San Francisco, no clue on what the fuck this, this, this does, but it's, there's a billboard for it at a bus stop, um. As if, as if tech bros take the bus, but that's another story.

Emily M. Bender:

And it looks like this, this, uh, Artisan AI employee is, uh, somehow sitting in the Tron landscape.

Alex Hanna:

Yeah.

Emily M. Bender:

All right. Let's keep moving. Your turn.

Alex Hanna:

Next one, this is Tri4ge, spelled with a four, dot ICU, um, still on Twitter. "Not sure if anyone's posted this yet, but Firefox has added AI in a recent update. And so this is, uh, here's how to disable it." This is a screenshot of someone from, I'm assuming Mastodon, because it's dot--or I don't know, it could also be Bluesky. Who knows, distributed protocols. Okay. It says, "With Firefox having AI added in the most recent update, here's how you can disable it. One, open in your URL bar 'about colon config', uh, accept the warning it gives. Search 'browser dot ML' and blank all the values and set false where necessary as shown in the screenshot. Anything that requires a numerical string can be set to zero. Once you restart, you should no longer see the grayed out checkbox checked, and the AI chatbot disabled from ever functioning." It's not clear from the screenshot what the chat bot is and I can't see in the image, let me see if I can click in from the link to actually see this. I'm gonna go from the chat, thanks for posting that. So it seems like if you get in here, there's a browsing and it says, "Adds the chatbot of your choice to the sidebar for quick access as you browse."

Emily M. Bender:

The chatbot of my choice is no.

Alex Hanna:

Yeah. And so then it says, "When you choose a chatbot, you agree to the provider's terms of service and privacy policy," gross, and, "Show prompts on text select." So you could text select something, it'll show, it'll be a prompt. Really gross stuff. No one needs this shit, Firefox. We thought you were better than this, but apparently not.

Emily M. Bender:

Yeah. Arcane Sciences says, "Sure is a good thing Mozilla justified a bunch of layoffs with, quote, 'we have to refocus on AI'."

Alex Hanna:

Didn't they--Yeah, and Mark Surman went to stand up something on trustworthy AI. Uh, so, but I know that often one side of the house doesn't talk to the other. So. Really unfortunate though.

Emily M. Bender:

All right, speaking of super trustworthy AI, here's an article from The Byte, published on November 17th, uh, by the journalist Victor Tangermann with the headline--oh, oh, I like the sticker here, "Computer Vision Murder."

Alex Hanna:

Incredible.

Emily M. Bender:

And the headline is, "U.S. Military Tests AI Powered Machine Gun." Subhead, "What could go wrong?" And I think that that basically says it all. I mean, if you, if you read the article, it's a bit about how, um, this is something that is designed to target drones, like that's the problem that they are ostensibly looking into. And the company that built it said, you know, there still has to be a human in the loop to authorize firing. Um, but we could be fully autonomous. We're just waiting for the U S military to like, turn that feature on.

Alex Hanna:

Well, yeah, but these are all already in places like these are already at checkpoints in the West Bank where they have this ad for this. I think it's called Sharpshooter? Where and their tagline was literally, "One shot, one kill." Just nightmarish stuff. So I mean, yeah, you're having some kind of authority on it, but I mean, it really depends. If the authority wants to just kill somebody, they're going to make the option to do so.

Emily M. Bender:

Yeah. Okay. Keeping moving.

Alex Hanna:

Yeah. So this is--

Emily M. Bender:

This one's for you.

Alex Hanna:

This is from, so this is from New York Times journalist Kashmir Hill, who has a book, who's on the facial recognition beat and has been for a while, has a book on Clearview AI called "Your Face Belongs To Us." And she says, "Is there a term for the practice of over indexing a technology's early flaws? This happened with facial recognition technology. Critics were still saying it didn't work after it had gotten incredibly accurate. Seems like something similar may happen with genAI and hallucinations." And so I replied, "Question mark, question mark, question mark. Quite a bad take. Racist facial recognition technology is still racist because of the institutional context, not because of its accuracy. With LLMs, there's a qualitative difference. The bullshit machine will always bullshit because that's what it's designed to do." And so, really terrible job just missing the point. And it's funny because she replies. She's like, "Reminds me of a quote from my book. 'Even a highly accurate algorithm deployed in a society with inequalities and structural racism will result in racist outcomes.'" And I'm like, did you, you wrote it, did you read it? Like, anyways, there's a further thread down it about the debate about accuracy and how that, I think that accuracy is a, is a point about these things and a way into discussing how they're embedded in structurally racist institutions. So it shouldn't stop there, but there's definitely a qualitative difference. Bullshit machine will bullshit. Period.

Emily M. Bender:

Exactly. So it kind of cracks me up that this starts with 'is there a term for,' because I went looking on like the Wikipedia entry for fallacies. I'm like, this feels like a named fallacy where it's like, look, something that was shaky at first got better. And so here's this thing that's shaky now. And so therefore it's going to get better too. I'm sure there's a name for that fallacy and maybe somebody in the chat can give it to me. Um, but also like, kind of beside the point. Like, yeah.

Alex Hanna:

Lord.

Emily M. Bender:

Okay. Uh, Washington Post reporting from November 20th of this year. Sticker is "tech brief." Headline is, "Flawed AI denies benefits to low income Americans." "Report warns--" Um, and this is, uh, the, the, um, byline says, "Analysis by Will Oremus with research by Andrea Jimenez." Um, interesting, uh, way of describing the authorship. Um, and basically this is an article about how there is, uh, algorithmic decision making embedded across all of our social services in the U.S. So there's a 197 page report by something called TechTonic Justice, um, and, uh, "finds that almost all 92 million low income Americans already, quote, 'have some basic aspect of their lives decided by AI,' end quote, or automated decision-making software. That includes people affected by eligibility and enrollment processes for Medicaid, prior authorization processes used by private insurers, fraud detection systems used by food stamp programs, and landlords' use of background screening algorithms and rent setting algorithms." Um, and this prior authorization stuff includes the UnitedHealth story. And it's just like, yeah, I mean, if you're going to make an automated system, it's going to be tuned to the goals and priorities of the people who have designed it and decided to use it. Right.

Alex Hanna:

Yeah, absolutely. And I mean, this is sort of not surprising because I mean, so much of what we're seeing, especially in the health, uh health care and, um, and welfare with that, we knew this was the case. I mean, we, Virginia Eubanks' book came out in 2018. I mean, so we knew that, but it's just, I mean, maybe the scale is a bit surprising, but horrific nonetheless.

Emily M. Bender:

Yeah. Yeah. Just, okay. We're on to our second chaser.

Alex Hanna:

Second chaser is from Futurism, or The Byte. Uh, and I liked the sticker on this, which is "Scam-ma". Uh, and so the title is "Phone Provider Deploys State of the Art AI Granny to Waste Scammers' Time," by Ashley Bardhan. Uh, "Thanks Grandma" is the, uh, subtitle here. So, "UK telecom company Virgin Media O2 just revealed a fascinating AI entity: an audio chatbot that takes a persona of a confused grandmother, fine tuned to do nothing except make phone scammers angry." So maybe a use, I mean it's a, it's a bullshitter that is intended to bullshit. I don't know if it's good.

Emily M. Bender:

Yeah, I mean, so we still have the environmental impact problem and the labor exploitation problem. And you might worry a little bit about like, did someone who's not a scammer land on the AI granny? Um, but in the case where this is wasting scammers time, and a little bit further in this article they talk about how they have ways of detecting possible scam calls and basically redirecting them to this chatbot. Um, so, you know, not a terrible use case.

Alex Hanna:

Yeah, I bet you could do the same thing with less environmental costs. You could probably have a granny ELIZA. Let's call her Elizabeth. I don't know.

Emily M. Bender:

Betty. Um, and so, uh, catching back up to the which fallacy was it, Magidin in the chat says, uh, "The fallacy is probably an instance of the fallacy of extended analogy." Thank you for that. Um, okay. Let me get us to our next one. We are on to region three for the day, um, and that is, what is it again?

Alex Hanna:

Region three, the endangered information ecosystem. We got our camp pack on our back. We're walking over to the ecosystem. We're going down to this trash.

Emily M. Bender:

Yes. Okay. So this one, someone sent it to me. This looks like a news article. Um, it's on a site called HappyFutureAI.Com. Um, it's got the sticker "Deep learning" and the headline, "The rise of Anthropic and the birth of Claude." Um, and, uh, it says by Brent Richard Dixon, March 5th of 2024. Somewhere in here I thought I saw that it was said it was coauthored with Claude. Oh yes. "By Claude, the Anthropic AI and Brent Dixon, founder of Happy Future AI." And the reason this was sent to me is that it purports to include a quote by me.

Alex Hanna:

Oh my gosh.

Emily M. Bender:

Uh, quote, not quote, I did not say this, but, "'Claude is a true milestone in the history of AI,' said Dr. Emily Bender, a renowned AI ethicist and professor at the University of Washington." The fake quote continues, "It represents the first time we have an artificial system that can truly reason, learn, and engage with the world in a way that is on par with human intelligence. This has profound implications for fields as diverse as scientific research, education, and even the arts."

Alex Hanna:

Lord, okay. Good stuff. Good stuff.

Emily M. Bender:

Yeah. Listeners to this pod will know that I would never say anything like that. And in fact, I'm realizing that this has kind of screwed up the game of like, this is how you could tell I was taken hostage.

Alex Hanna:

Oh, no. Well, this is, well, uh, Christie, our producer made the same joke in our Signal, so I don't know if that was, you just made it at the same time.

Emily M. Bender:

Yeah. So how do you know Emily's been kidnapped? The quote, except that apparently people will also just make it up. And so no, I have not been kidnapped. I never said any such thing as a sign of distress or otherwise. It's fully fake.

Alex Hanna:

That's so wild. Damn. Yeah, nightmares.

Emily M. Bender:

All right, next. Hey, Alex, your hat.

Alex Hanna:

Hey, so this is from 404 Media and for those of you who are who are listening on the pod, I'm wearing a lime green 404 Media hat. Uh, this is not sponsored content, but we love their stuff. And so I've got a lime green hat on and a pink sweater on. So I'm, I'm very pastel. It's very, it's giving Easter bunny, even though--

Emily M. Bender:

Lime green is not pastel.

Alex Hanna:

It's, it's, it, well, on the camera, it looks pretty pastel cause it looks kind of washed. Um, you know, so anyways.

"The editors protecting Wikipedia from AI hoaxes," by Emanuel Maiberg, this is published October 9th, 2024. And they talk about a project called Wikipedia AI Cleanup, or WikiProject AI Cleanup, which is quote, "a collaboration to combat the increasing problem of unsourced, poorly written AI generated content on Wikipedia." So, this is to the goal, "The group's goal is to protect one of the world's largest repositories of information from the same kind of misleading AI generated information that has plagued Google search results, books sold on Amazon, and academic journals." And yeah, for one, very thankful that there's a group dedicated to doing this, but on the other hand, I'm sad that a bunch of Wikipedia editors have to go about it and manage all this bullshit that is spread and spilled into Wikipedia.

Emily M. Bender:

What a waste of everyone's time, but I'm yes, grateful that they're there and grateful to 404 Media for documenting.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, all right, next. Um, so this is a I think this is Mastodon. So this is, uh, Cederbs@Infosec.Exchange, uh, with an image of something that looks like it's from LinkedIn. Um, and, uh, the, the post says, "You ever seen something so painfully out of touch and oblivious it hurts?" And this is a post from something called Notion. With the text, "AI is reshaping our digital landscape much like plastic transformed our physical world, with fascinating parallels between AI and plastic, such as versatility, efficiency, and potential for innovation. And just as plastic enabled entirely new products, AI is opening doors to tools and workflows we couldn't have imagined before. How can we harness AI's qualities to create truly transformative tools that complement rather than replace our human intelligence? Read more." And then, you know, on to something else. And it's like, oh, there's this great image of what looks like very mid century modern plastic goods, um, which, who knows probably this is a synthetic image.

Alex Hanna:

This is also fascinating. There's so there's, this is such a rich, rich image. First Notion, Notion is a project management software. It doesn't really matter what it does. It's, it's one of a hundreds. But the second that the, like the text that it says that it's--first, a lot of mid century modern stuff is actually made out of wood. And it's actually, it's actually, it's actually a bit of, um, you know, it's like, it's going backwards to suggest that mid century modern was actually made out of like, um, petrochemical, like particle board, which does have a lot of plastics in it, which is like, I'm just like, wait, you've got this backwards.

Emily M. Bender:

Yeah, exactly. And, and the thing is, this is like, it's apt, but not for the reason they think, right?

Alex Hanna:

Yes, a hundred percent.

Emily M. Bender:

So Christie says,"Have they never met plastic?" And, um, you know, uh, Abstract Tesseract says, "Ah, yes, mining finite resources to produce harmful garbage." And if you think about, I've put this into the information ecosystem section of AI Hell here because we've got this pollution now flowing all over the place, much like microplastics and not yet broken down into microplastic plastics, like, yeah. Okay. Next.

Alex Hanna:

Yeah. Uh, this is from Dan Hon, who is a great, um, poster on many platforms. Uh, and he's saying, "I love having the world's information organized." And it's a, a screenshot of a Google AI overview that says, "The main difference between a sauce and a dressing is their purpose: sauces add flavor and texture to dishes, while dressings are used to protect wounds." You know, I love just putting balsamic vinegar right on a gaping flesh wound. Nothing is better for really ensuring that that moisture barrier is sealed.

Emily M. Bender:

All right. Martha Stewart meets Monty Python. It's just a flesh wound until you dress it up with some nice vinegar.

Alex Hanna:

Exactly. Hey, it's only, hold on, hold on. I'm trying to work this. It's it's only if it. It's only a flesh wound if it was incurred in--uh, I can't do it outside of, outside of Cham--whatever, come back to us.

Emily M. Bender:

Otherwise it's just sparkling injury. Is that where we're going? Yeah. Yeah.

Alex Hanna:

Something, I don't know, whatever. Let's workshop it in the chat.

Emily M. Bender:

Yeah. Okay. And, uh, Cnel Kurtz says,"Ranch does miracles on rashes." Okay. Uh, so this one is from Scientific American, um, uh, from November 22nd of this year, um, by Rachel Feltman, Fonda Mwangi, and Jeffrey DelVisio. Uh, with the headline, "Could AI ghosts of ancient civilizations help us connect with bygone cultures?" Subhead, "Social psychologists could turn artificial intelligence powered tools like ChatGPT on to writings from past cultures. Will this help us study ancient civilizations?" No.

Alex Hanna:

No, no, no, I can't see it working.

Emily M. Bender:

Yeah, but I wanted to, um, let's see. Uh, just, I've scrolled down a little bit here. Um, so this, uh, researcher, Varnum, is talking about how, um, to look at ancient civilizations you have to deal with indirect proxies. They say, "Maybe we can get archival data on things like marriage and divorce or crimes, or we can look at cultural products like the language folks use in books, and we try to infer what kinds of values people might have had, or what kinds of feelings they might have had towards different kinds of groups. But that's all kind of indirect. Wouldn't it be amazing if we could get the kind of data we get from folks today, just from say, you know, ancient Romans or Vikings or medieval Persians--" Very narrow view of the world. Okay. Um, "--and one thing that really excited me in the past year or two was folks started to realize that you could simulate at least modern participants with programs like ChatGPT and surprisingly, and I think excitingly, replicate a whole host of--" It's like, no, that was a bad idea for the case where you could go talk to the people and like, get the actual answers. And guess what? It's also a bad idea for ancient civilizations.

Alex Hanna:

That's wow. People really got harebrained ideas of what these things can do. Okay, go to the next one.

Emily M. Bender:

I've got a lot of ads popped up on this one. Okay.

Alex Hanna:

This is from the San Francisco Gate. So, um, "Stanford lying and technology expert admits to shoddy use of ChatGPT in legal filing." This is by Steven Council. Oh, I know Steven. I think I've met him before. Uh, December 2nd, 2024. There's a picture of the big Stanford, uh, bell.

Emily M. Bender:

Hoover Institution. No.

Alex Hanna:

Oh, the Hoover Tower. Yeah.

Emily M. Bender:

Hoover Tower. Yeah.

Alex Hanna:

I, this is me, willfully ignorant, uh, about Stanford and I will not learn any real things about Stanford.

Emily M. Bender:

Fair, fair, fair.

Alex Hanna:

"A Stanford University professor and misinformation expert accused of making up citations in a court filing has apologized and blamed the gaffe on his sloppy use of ChatGPT--" As if there's a non sloppy use of ChatGPT. This is Jeff--his name is Jeff Hancock. He made "the ironic errors in a November 1st filing for a Minnesota court case over the state's new ban on political deepfakes. An oft-cited researcher at the Bay Area school and founding director of the Stanford Social Media Lab, Hancock defended the law with an"expert declaration" document that, he admitted Wednesday, contained two made up citations and one other error. The two made up citations pointed to journal articles that do not exist; in the other mistake, Hancock's bibliography provided an incorrect list of authors for an existing study." So not only are you making up things, which you are a professor that has to do this kind of work on deep fakes and misinformation. You're also just making up authors to, uh, yeah.

Emily M. Bender:

And this was, this was for, he was functioning as an expert witness in a case. Um, and he was paid $600 an hour to create this thing. Abstract Tesseract says, "Didn't realize a misinformation expert referred to producing the misinformation."

Alex Hanna:

Yeah. Hey, who knew?

Emily M. Bender:

Uh, okay. Speaking of legal cases, um, this is from The Verge by Kylie Robinson on November 21st, 2024. Uh, sticker "OpenAI slash artificial intelligence slash tech." And the headline is, "OpenAI accidentally erases potential evidence in training data lawsuit." Subhead, "Lawyers representing the New York Times and other outlets spent over 150 hours searching OpenAI's data." Yeah. Uh, yes, and IttyBittyKittyCommittee20 says, in quotes, "'Accidentally.'" Um, so just the first paragraph here, "In a stunning misstep, OpenAI engineers accidentally erased critical evidence gathered by the New York Times and other major newspapers in their lawsuit over AI training data according to a court filing Wednesday." The thing that's puzzling to me about this story is that somehow the New York Times lawyers had gone through OpenAI's stuff and the resulting artifact was stored somewhere where OpenAI could delete it? That's the part I don't get.

Alex Hanna:

I think they had a good, according, if I'm remembering details of the story, they had sort of agreed on like a shared repository or something where both the New York Times legal team and OpenAI could access it, and OpenAI kind of, they, they said, this is misinformation and they launched back and they were like, the New York Times team is like saying, you know, making really unreasonable demands. We have the data available in this format or something. And the New York Times team was like, no, you're not actually. I'd imagine the OpenAI team is probably like, 'use the outputs of the model' or something. And the New York Times legal team is, I mean, I don't, it's like, that's, that's the whole point is that the outputs of the model are not to be trusted. Like we actually need to see the data, you know? And so. Anyways, I don't know enough of the details of the case just to, you know, get into it. Anyways, big mess.

Emily M. Bender:

Okay. So from the New York Times to the LA Times.

Alex Hanna:

So the LA Times, this is on a publication called TheWrap.Com. So, "LA Times to publish quote, 'bias meter' on news stories, owner says." And the subhead, "'The reader can press a button and get both sides of that exact same story,' Patrick Soon-Shiong tells Scott Jennings." So the picture of the owner. So, "Patrick Soon-Shiong plans to give the Los Angeles Times newsroom a rebirth continue--" Is there a word missing? "--continue to take shape, this time with the implementation of a so-called bias meter." Uh, and then, and then, uh, there's this, where he said this and then the quote, "'Whether in news or opinion, you have a bias meter,' Soon-Shiong said. 'So that someone could understand as a reader that the source of this article has level, some level of bias.'" And I'm just like, uh, this, this has me triggered in the same way that it, that the New York Times, uh, needle for the prediction of elections has people triggered. Like, okay, first off, bias is not left or right, for one. Bias refers to many different axes. Second, some things shouldn't be aired the other side, you know? Like, oh, racist incidents, don't actually care what white supremacists say. You know? Issues of rape, eh, don't actually care what men's rights activists say. Like, what the fuck are you thinking? And this is just like, this is some kind of like, this is billionaire brain rot that Soon-Shiong has that he thinks like there's some kind of linear scale that you can metricize which, you know, this is the same kind of logic Musk has in terms of like Grok being like anti woke or whatever. This is just infuriating stuff.

Emily M. Bender:

And this little bit here really got to me too. "He elaborated, quote, 'The reader can press a button and get both sides of that exact same story based on that story and then give comments.'" Which sounds like it's, we're going to program a large language model to retell a story with, you know, the, in quotes, 'two different kinds of bias'.

Alex Hanna:

Yeah.

Emily M. Bender:

No, no, you're supposed to be a news outfit.

Alex Hanna:

Yeah.

Emily M. Bender:

All right.

Alex Hanna:

Uh, Mad, so Magidin in the chat said, for this kind of, was talking about the, the, the, the prior story, "For this kind of discovery, I'd expect OpenAI would ask rather than send documents physically or digitally to the New York Times lawyers, they'd make the data accessible for the lawyers while still maintaining control over it. And then after the New York Times lawyers located some of the evidence, they then try to find it again and it is no longer there." Okay. So then, yeah, so it's, yeah, it's more of--

Emily M. Bender:

So it was all on OpenAI's servers.

Alex Hanna:

Yeah. Yeah. Yeah.

Emily M. Bender:

Yeah. All right. Onto the Washington Post. So this is a Bluesky post from Tom Scocca, um, reading, "LOL the Washington Post replaced its archive search tool with an AI that tries to summarize the archive ahead of delivering any actual stories, and which only ranks its findings by quote 'relevance' with no option to rank them by date"

Alex Hanna:

Terrible.

Emily M. Bender:

And then reply by someone named Steven, "Love their continued commitment to being to becoming the darkness that democracy dies in." So I went to the Washington Post and put in a search, I put COVID-19, figuring that might show this problem nicely. Um, and the first thing that comes up, I did not ask for it, is "Ask the Post AI." And then there's some synthetic text that I'm not going to read. Um, "Five articles were used to generate the answer." Um, and then below, um, "57,600 results related to COVID 19." The first one has no date. And then we have April 24th, 2020. Um, another one without a date, another 2020, a 2023, a 2021, um, and indeed no way to rank this by date. So.

Alex Hanna:

This is infuriating for someone who does like a lot of archival research. So for instance, I talk about this a lot when talking about protest event work and you know, one of the examples that I use often, and this isn't an original insight for me, it was an insight from someone that works on queer politics, was that the New York Times for I think up to 1987, um, in the prior editorship didn't use the word 'gay'. They would use the word 'homosexual'. Um, and that was something that they keyed all kind of gay politics around, and then they made the editorial change. And so, okay, hypothetically you have, you know, embedding space that casts kind of gay and homosexual onto the same space. But I mean, it's actually quite important that you need to know when the term was used. And not as, not as, not as if the modern New York Times is some gay bastion of queer politics, but it is certainly one in which you could sort of suss out when there was a discursive move between one and the other.

Emily M. Bender:

Yeah. So, newspapers are not only about being a reliable source of information for present events; they also sit on these rich, rich resources of their archives, and making those less accessible is a harm to the information ecosystem. All right, for our chaser on this one, Alex, take it away.

Alex Hanna:

So this is from a Bluesky from A Libi Rose. Um. And they say, "stopped by the Denver, um, MCA--" which I'm assuming is the Museum of Contemporary Art, "--to pick up an order from the shop and they have a really excellent audio installation in the entryway made up of sounds that triggered Shotspotter false positives." So this is by, uh, Ben Coleman is the artist from Chertsey, England, lives and works in Denver. The name of the, uh, thing, the name of the piece is "False Positives", uh, uh, parentheses ShotSpotter Series 2024. And the, um, I'll just read this very quickly. The first, uh, the first graph of the description, um, "Ben Coleman's sonic artwork False Positives confronts the unintended consequences of ShotSpotter, an acoustic gunshot detection system. ShotSpotter uses sensors to detect potential gunfire, transmitting audio to a review center, where it's analyzed. Misidentified non gunfire shots, termed false positives, can mobilize police to investigate the area near the trigger sensor. Leaked internal ShotSpotter documents reveal a list of sounds that are so frequently flagged by the system as gunshots that they have been assigned codes, including 'FC' for firecracker. Many other sounds that we associate with celebration and joy can be mistaken for gunfire by ShotSpotter as well: claps, cheers, champagne bottles uncorking, party poppers, balloon pops. Exploring the sonic confusion between these real and mischaracterized threats, Coleman's installation transformed the museum's entrance into a charged and vibrant soundscape." And it's just, and, and then I think that this is a piece I was going to mention, but as mentioned in the, in, in this, uh, which I'm thankful for, "ShotSpotter technology is highly racialized. The sensors are frequently installed in neighborhoods with predominantly Black and Latinx communities, which are already historically over policed and are less likely to be found in majority white neighborhoods. The system's tendency to detect false positives has also contributed to an exponential increase in negative interactions between the police and residents of these neighborhoods. False Positives'--" italicized, um, in terms of the name of the piece. "--juxtaposition of sounds challenges ShotSpotter's pervasive presence in the United States and Denver, highlighting technology's role in civic surveillance and the perpetuation of racial inequality." So really, really poignant piece.

Emily M. Bender:

Yeah, delightful. And so, so well done. And, and I think really underscores the importance of actual art.

Alex Hanna:

Mm hmm. Yeah.

Emily M. Bender:

Yeah. Alright, having been refreshed, it is time to go on to our next group of things. Um, are we on to number four?

Alex Hanna:

We're on to surveillance, and AI in science, and medicine.

Emily M. Bender:

Yes, okay, so the first thing is, this is the surveillance thing, um, this is something called, uh, BiometricUpdate.Com. Um, and this is posted on November 27th, 2024 by someone named Joel R. McConvey. Um, with the headline, "Apple patent uses FRT with in quotes 'body data' so cameras can ID people without seeing faces." Subhead, "Tech uses clothing, gait, more to recognize individuals in the home and elsewhere". Why does Apple need to patent this?

Alex Hanna:

This is interesting because this is actually, so gait research is, is, is, has been an area of recognition, um, and computer vision for some time. That Apple is patenting it, says something to kind of, either cornering a market or a potential market. Um, but it's also, you know, this pervasive surveillance of getting away from, from using faces, even when they're, you know, when they're masked, for instance.

Emily M. Bender:

Right. Alright, moving on.

Alex Hanna:

So the New York, the New York Times, uh, cred, giving, granting cred, credulity to, uh, chat bots, says, "AI chatbots defeated doctors at diagnosing illnesses," as it is a competition, as everything must be in AI. The subhead, "A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot." Who's the author on this on this little one? This is by Gina Kolata and let's see. So, "Dr. Adam Rodman, an expert in internal medicine at Beth Israel Deaconess Medical Center in Boston, confidently expected that chat bots built to use artificial intelligence would help doctors diagnose illnesses. He was wrong. Instead, in a study Dr. Rodman helped design, doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And to the researcher's surprise, ChatGPT alone outperformed the doctors.'I was shocked,' Dr. Rodman says."

Emily M. Bender:

Small study. So this thing where basically they get a case description and the answer to the test involves giving three possible diagnoses and like relating them to the um, what information is given in the, in the case study. And, um, it's just like, why are you trying to use ChatGPT this way? Like what's the, you know, um, yeah. All right. I think--

Alex Hanna:

Before we move on, there's some incredible, uh, uh, like links at the top. The second being, "Does your teen recognize AI?" And I just, and I just think this is, this has the same energy of, do you know where your teen is at right now? Uh, and I just, I just, I just want to make that, okay.

Emily M. Bender:

And like, just thinking about if you were going to make some sort of automated tool for assisting clinicians with diagnoses. Um, you wouldn't design it so you say, hey, ChatGPT, you know, answer this question for the doctor. It might be, here's a database of symptoms associated with all these, or something, something where its affordances are very clear to the physician using it. Um, but, okay. Uh, ProPublica, this is under healthcare, an article from November 19th by Annie Waldman. Um, "How UnitedHealth's Playbook for Limiting Mental Health Coverage Puts Countless Americans' Treatment at Risk." And, um, so. This is like the sort of subhead here. "United used an algorithm to identify patients who it determined were getting too much therapy and then limited coverage. It was deemed illegal in three states, but similar practices persist due to a patchwork of regulation." So, this is more excellent reporting from ProPublica, basically getting in and finding these misuses of algorithms that are basically tuned um, to support the profits of the companies and not the patients who need care, um, and sort of tracking down how they're being used. So, hellish indeed, and I'm grateful to ProPublica for, uh, being on it.

Alex Hanna:

Yeah and this is, and this is by Annie, Annie Waldman, November 19th. And I think this kind of has gone along with a lot of the other stuff from UnitedHealth, including the, um, the algorithm that had a 90 percent error rate. And that was also, um, some great reporting from STAT News in particular, focusing on a, um, an algorithm that says to determine the length of care that, um, participants of Medicare Advantage plans, uh, were being granted in, uh, post acute injury care.

Emily M. Bender:

Yeah. All right. We need to pick up the pace if we're going to get through all this stuff.

Alex Hanna:

Okay. All right, I'm flexing. Okay, I'm, I'm, I'm shrugging my shoulders. All right, Washington Post, "AI friendships claim to cure loneliness. Some are ending in suicide." December 6th, uh, "Researchers have long warned of the dangers of building relationships with chatbots, but an array of companies now offer AI companions to millions of people who spend hours a day bonding with the tools." Uh, terrible image on the front, but at least it's an illustration. This is by Nitasha Tiku. Um, yeah, it's, it's just bad stuff, folks.

Emily M. Bender:

Yeah, this is bad, but I do want to take us to this Zuckerman, Zuckerberg quote in here.

Alex Hanna:

Yeah.

Emily M. Bender:

"'One of the top use cases for Meta AI already is people basically using it to role play difficult social situations, like a fight with a girlfriend,' CEO Mark Zuckerberg said at a tech conference in July." It's like, who role plays a fight? Like you might role play, I've got something I'm nervous about talking about it, whatever. But anyway, that's so. So out of touch. And Abstract Tesseract says, "When are we getting a Weizenbaum reprint?" Indeed. Okay.

Alex Hanna:

Hey, hey, keep your eyes open. There might be something in the works.

Emily M. Bender:

Um, okay.

Alex Hanna:

This is, this is you. Yeah.

Emily M. Bender:

Yeah. Okay. So this is something posted to arXiv on September 6th. It's a research paper by, uh, Chenglei Si, Diyi Yang, and Tatsunori Hashimoto from Stanford University. A subject or the head, sorry, this title is, "Can LLMs generate novel research ideas?" Question mark. Subtitle, "A large scale human study with 100 plus NLP researchers." And this, the, the news reporting on this just went bananas and they're like, look, it can come up with novel stuff. But then the people dug into the article and the article actually said, well, we're only judging novelty here and not like, is it feasible? But also the way it was put together, um, the researchers were not going to give the ideas they were really excited about. Right. And on top of that, the LLM generated ideas were first filtered by some researchers before being put into this evaluation. So, yeah.

Alex Hanna:

Yeah.

Emily M. Bender:

Uh, all right. We can go pretty quickly on this next one too, go for it.

Alex Hanna:

Next one. This is, uh, this is kind of a, I mean, it's not exactly Galactica redux, but it has parts of it. So this is from Ai2, the Allen Institute. "Ai2 OpenScholar: Scientific literature synthesis with retrieval-augmented language models." This is written by Akari Asai, November 19th. And so this is a tool. Scroll down a little bit here. Sorry. And, uh, "On the shoulders of giants", uh, which I don't, I guess there's a, like, this was on Google Scholar's page for a while. I don't know why they like it. Um, just going to skip to the second, uh, paragraph. "To help scientists effectively navigate and synthesize scientific literature, we introduce Ai2 OpenScholar, a collaborative effort between the University of Washington and the Allen Institute for AI. OpenScholar is a retrieval augmented language model designed to answer user queries by first searching for relevant papers and literature, and then generating responses grounded in these sources. Below are just some examples." Just read the papers. God damn it.

Emily M. Bender:

Yeah. Like the whole point, if, if you're standing on the shoulders of giants, it's because you've learned from their work and you are building on it.

Alex Hanna:

Just, yeah. Computer scientists just read, please read, just read, just read a damn paper. I beg of you.

Emily M. Bender:

Okay. Our chaser for this section, more reporting from 404 Media. Um, this one is by Joseph Cox on December 3rd of 2024. Headline is "FTC Bans Location Data Company That Powers the Surveillance Ecosystem." And then we have a subhead over here. "Venntel is a primary provider of location data to the other, to the government or other companies that sell to U.S. agencies. The FTC is banning Venntel from selling data related to health clinics, refugee shelters, and much more." Um, let's enjoy Lina Khan's FTC while we still have it. Um, yeah. Any thoughts on this one, Alex?

Alex Hanna:

I mean, yeah, big ups to the FTC, and they've been, you know, coming out with enforcement actions in the last few months of this FTC. The, uh, the kind of picks, this is not really the article, but the potential commissioners that are coming on are going to be quite bad indeed, so.

Emily M. Bender:

Yeah. But this is, this is interesting and exciting. I was listening to the 404 Media podcast and they were sort of talking about how what's particularly interesting here is that the FTC is one U.S. government agency and it is going after a company that is selling data to other U.S. government, uh, agencies. So, that is kind of interesting. Okay. We are on to our last segment here. Um, that one is the one that I called number five, they tell us to, we should believe the hype. Um.

Alex Hanna:

So the first one, this is by our favorite mustachioed, taxicab sourced, New York Times opinion writer, Thomas L. Friedman, who says a Harris presidency is the only way to stay ahead of AI, um, and he says, "There are many reasons I deeply, I'm deeply disappointed." Um, scroll down a little bit, let me see if I can read this, um, for, "for Jeff Bezos to kill the newspaper's editorial," which, rightly be pissed about that. Um, but then he also, then he turns, dramatic twist, "and this election coincides with one of the greatest scientific turning points in human history, the birth of artificial general intelligence, or AGI, which is likely to emerge in the next four years and will require our next president, put together a global coalition to productively, safely, and compatibly, compatibly govern computers--" Like Windows 95, what the fuck are you talking about? "--to govern computers that will soon have minds of their own, superior to our own." Just like okay like you're just making, I mean, of course you're making up shit. You're Thomas Friedman. It's the brand to make up shit whole cloth. But like that's the that's the thing that you're turning on? That is--

Emily M. Bender:

All right. Okay. All right. This next one is a longer piece than we will have time to get into it thoroughly, but I did want to talk about it. This is something that just came out December 6th by Matteo Wong in the Atlantic. Sticker, technology. Headline, "The GPT era is already ending. Something has shifted at OpenAI." And you might think, oh, good, but no. All right. So this is, he is very credulously reporting on their new o1 thing that supposedly can reason. I refer listeners back to our episode not too long ago where we tore into that. And the very frustrating thing here is that he interviewed me for this piece and like missed the point. I talked to him for a long time. It felt like he was understanding, but I guess he is so deep into the hype that, um, it's not. Like what I say is not sticking. So, um, he says, "On the surface, the startup's latest rhetoric sounds just like hype the company has built its $157 billion valuation on. Nobody on the outside knows exactly how OpenAI makes this chatbot technology. And o1 is the most secretive release yet. The mystique draws interest and investment." And then he quotes me. "'It's a magic trick,' Emily M. Bender, a computational linguist at the University of Washington and prominent critic of the AI industry, recently told me."

Alex Hanna:

But that's all the cite--

Emily M. Bender:

Well, there's, there's a little bit more further down, but it's like, yes, I meant magic trick, like sleight of hand.

Alex Hanna:

Yeah.

Emily M. Bender:

Not like it's magic. And, uh, he does, um, so he quotes stochastic parrots. Hold on, back up. Um, why is this doing that? Um, I feel like there was one other place where he--all right. Yeah. So, um, the other thing he talks about the, we got into this in our other episode, um, but took a very different lesson from it. So, um, they supposedly had their system compete in this computer science thing. And, uh, so it was a recent coding test that allowed participants to submit 50 possible solutions to each problem, but only when o1 was allowed 10,000 submissions instead did it score better than most humans. And it's like, right. So the point is, it's not actually finding solutions. And the quote from me here is, "This is back to a million monkeys typing for a million years generating the works of Shakespeare." But he doesn't seem to really get it. Um, so anyway.

Alex Hanna:

It's, it's pretty it's pretty bad stuff. Uh, Wise Woman For Real says, "I wish American journalists would send what they thought I said to me before publishing." Many American journalists do, uh, unfortunately it's not standard practice.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah. All right. Uh, this is from Steven Levy in Wired. We spoke about him with, uh, Brian Merchant when he was on the show, uh, last time and that'll be coming out soon. And the title, this is from May, it says "It's time to believe the AI hype." So Steven Levy, who is very, very believing really into the hype in 2015, still really into the hype. Maybe, maybe work against the hype? I don't know. "Some pundits suggest generative AI stopped getting smarter. The explosive demos from OpenAI and Google that started the week show there's plenty more disruption to come." And the funny thing is the things that, that he's referring to, it's got an image of that weird, like, like modern home in which OpenAI discussed their model that had the voice of Scarlett Johansson and without her consent. So scroll down a little bit more. Um, yeah, it's like, you know, he's--

Emily M. Bender:

It's just hype.

Alex Hanna:

It's just hype. There's not much worth reading. Yeah.

Emily M. Bender:

No, it's just the thing that got me here was that the headline is, "It's time to believe the AI hype." It's like nope, your meta hype is still hype.

Alex Hanna:

Yeah. Lord.

Emily M. Bender:

Um, okay. So, uh, this is something in Time Magazine by Marc Benioff, who is the chair and CEO of Salesforce, and the owner of Time. Headline is, "How the Rise of the New Digital Workers Will Lead to an Unlimited Age." And it's more of the same. So, "Over the past two years, we've witnessed advances in AI that have captured our imaginations of unprecedented capabilities in language and ingenuity. And yet as impressive as these developments have been, they're only the opening act." So it's more of this like breathless, like here it comes kind of a thing. Yeah. Um, and this one is from November 25th. So it's like, it's not stopping.

Alex Hanna:

Yeah.

Emily M. Bender:

Which brings us to our final chaser here. Um, I have to say, live viewers of the Twitch stream, you are the first to get to see this aside from like us and our closest associates. Alex, go for it.

Alex Hanna:

Yeah, so cover reveal of our book, "The AI Con: How to Fight Big Tech's Hype and Create the Future We Want." You can go and check out our site. It's TheCon.AI. Love it. It's like if Tegan and Sara wrote a book about AI. No, it's not about that at all. Deep cut for Tegan and Sara and the album, The Con. Anyways, uh, so great cover reveal, um, by, uh, folks over at Bodley Head in the UK. And we loved it so much for using it for the U.S. cover as well. Um, yes.

Emily M. Bender:

On the site, you can find links to where you can pre order it now. Um, and you know, please support your local bookstore. Please ask your local library, um, if they could get a copy or two. Um, and we have one blurb so far. There'll be others to come. Uh, we have, Alex put this gorgeous site together. We have, um, a new site with media so far that is pre book media, but we will be putting the book related media there too. Um, and yes, our beautiful cover design that we are super excited to share with you now. So, um, spread the word. It's out. You were the first to get to see this webpage, but now I'm going to stop sharing so you can't see it anymore. You've got to go to TheCon.AI yourselves to see it.

Alex Hanna:

We should say the book is out on May 13th, 2025, so you can pre order it now. And yeah, that's going to be part of the pitch. And, uh, so yeah, let's take it out. We got through Hell. That's it for this week. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Rate and review us on Apple Podcasts and Spotify. Pre-order The AI Con at TheCon.AI or wherever you get your books. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown or donate to DAIR at DAIR-Institute.org. That's D-A-I-R hyphen institute.org.

Emily M. Bender:

Find all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore Institute. I'm Emily M. Bender.

Alex Hanna:

And I'm Alex Hanna. Stay out of AI hell, y'all.
