Mystery AI Hype Theater 3000

Episode 15: The White House And Big Tech Dance The Self-Regulation Tango, August 11 2023

September 20, 2023
Emily M. Bender and Alex Hanna

Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some, over-hyped kinds of harms.

Plus a full portion of Fresh Hell...and a little bit of good news.


References:

White House press release on voluntary commitments
Emily’s blog post critiquing the “voluntary commitments”
An “AI safety” infused take on regulation

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype
“AI” Hurts Consumers and Workers — and Isn’t Intelligent

Fresh AI Hell:

Future of Life Institute hijacks SEO for EU's AI Act

LLMs for denying health insurance claims

NHS using “AI” as receptionist

Automated robots in reception

Can AI language models replace human research participants?

A recipe chatbot taught users how to make chlorine gas

Using a chatbot to pretend to interview Harriet Tubman

Worldcoin Orbs & iris scans

Martin Shkreli’s AI for health start up

Authors impersonated with fraudulent books on Amazon/Goodreads


Good News:


You can check out future livestreams at https://twitch.tv/DAIR_Institute.


Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Transcript

ALEX HANNA: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 
EMILY M. BENDER: Along the way we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington. 
ALEX HANNA: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 15. Can't believe we're on 15 in this. And it's August 11th of 2023. And we're here to talk about what happens when the very people poised to make money off large language models are the ones governing how they're used. 

EMILY M. BENDER: From the White House's announcement of voluntary commitments to ensure quote "safety security and trust" to an industry group that promises, pinky swear, to ensure safe and responsible development. We're covering it all.  

ALEX HANNA: And as you might guess, we look at these commitments and see only more hype and misdirection. So hey let's get into it. Emily. 

EMILY M. BENDER: Yeah, are you ready, should I share the first specimen that we've got here? 

ALEX HANNA: Let's do it, let's make it happen. 

EMILY M. BENDER: All right. We are starting with a fact sheet from the Biden-Harris Administration, um and yeah so: "Biden-Harris Administration secures voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI."  

Um so this is not regulation, this is not our electeds standing up for the rights of the people, it's our electeds uh gathering the--this was released with this hilarious photo op where it was President Biden and um like what was it, like seven guys in suits um. 

ALEX HANNA: Yeah, they were universally men, um all very smiley, including great folks--including Mustafa Suleyman, who used to be at Google DeepMind, was pushed out uh and had been accused of harassment, um Kent Walker, who is the notorious head lawyer for Google, all kinds of great folks. 

EMILY M. BENDER: Yeah I love what's happening in the chat already. Abstract Tesseract says, "Give me an A! A. Give me an I! I. What does that spell? Agency capture." Which is perfect because this is like, you know, ChatGPT can't reliably spell things, right. If you ask it to say what are the letters in this word it's going to get it wrong. So it's like got that meme in there and also yes, this is--it's not even agency capture because there's no agency here. 

ALEX HANNA: Right. 

EMILY M. BENDER: There's no regulatory body, this is just um the these seven and how is this billed? "Seven leading artificial intelligence companies." Um. 

ALEX HANNA: Right.  

EMILY M. BENDER: Which are--you want to share that list with the people, Alex? 

ALEX HANNA: Right, I mean Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. Uh so Anthropic is a kind of an offshoot of OpenAI where the founders went, and Inflection is something that doesn't even have a product. It is a collab between Mustafa Suleyman and Reid Hoffman, the founder of LinkedIn and a major serial investor um and founder. And so you know it's interesting that these are the people that are being brought to the table. I mean this has happened a few times, and just contextualizing this um, this move was you know roundly criticized by civil society. 

They had kind of a counter uh meeting uh that seemed a bit for show, where people like Joy Buolamwini from the Algorithmic Justice League uh were invited, which is good, but then you had people like Tristan Harris from the Center for Humane Technology--who, you know, first got famous for basically saying that we were spending too much time on social media and has gone full AI doomer--uh involved in that meeting, so just a real motley crew as a response.  

Um but yeah, voluntary commitments rather than any kind of invitation to the kind of broad civil society uh that's been talking about this stuff for years--and then these companies just jump onto this bandwagon. 

EMILY M. BENDER: Right and and has--like so the fact that Anthropic and Inflection are in this list as leading AI companies I think says a lot about how AI is being conceptualized here.  

Um and you know every time I say "AI," please hear scare quotes, um but um for these people it's not in scare quotes, right. These are the folks, OpenAI, Anthropic, Inflection, um who really think they are building artificial minds and we have to be protected from those artificial minds, and so somehow these companies are going to get together with these voluntary commitments um. So just on the upside, um so this starts with, "Since taking office, President Biden, Vice President Harris and the entire Biden-Harris Administration have moved with urgency to seize the tremendous promise and manage the risks posed by artificial intelligence (AI) and to protect Americans' rights and safety. As part of this commitment President Biden is convening--" Blah blah blah. So uh I wish we could talk about managing risks and protecting rights without like paying lip service to tremendous promise, that's always annoying right, um but the um you know the upside here is that this is just part, and um you know hopefully the Biden-Harris Administration means it and they're going to keep working on other angles and not have it stop here--but that doesn't stop us from going through the trash that's in this document. 

ALEX HANNA: Yeah and I mean we understand, you know, government has many things going on and there's many ways of getting at this. Right now we're at a place where the executive and also the legislature are kind of on the back foot, where they're looking for ways to intervene to get ahead of these things, especially with ChatGPT um and other generative technologies that have um really become, you know, such a center of conversation. So I want to read this part and then get kind of into the commitments, and then I know we also have the long form of these commitments queued up, which um gets into the nitty-gritty. Uh but the text says, "These commitments, uh which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI--safety, security, and trust--and mark a critical step towards developing responsible AI."  

Um and then they go down and uh say what these commitments are.  

Um so maybe we can go you know one by one and kind of pan them as they're structured here. 

EMILY M. BENDER: So these are reflected in the larger document too and I think it might--well no okay this is this is important because this is this is the Biden-Harris Administration speaking and the other one is maybe the companies speaking, and so we should probably look at it from this document first, yeah.  

Um so: "Ensuring products are safe before introducing them to the public," um notice that above it said, "these companies have chosen to undertake these things immediately" um and yet uh ChatGPT was just sort of dropped out there and not retracted on the point of taking these commitments. We've got synthetic text machines in Bing and in Google Bard um and I see nothing about ensuring that those are safe for you know protecting people against misinformation, protecting the information ecosystem. No movement there right? But it says, "The companies commit to internal and external security testing of their AI systems before their  release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks such as biosecurity and cybersecurity, as well as its broader broader societal effects." What do you think about that Alex? 

ALEX HANNA: Well I mean there's a lot in this statement right because one, I mean actually trying to define what security testing means in this case.  

Uh we've talked a lot about evaluation on this podcast before and what evaluation entails, um and what it would mean to actually talk about this. They talk a lot about kind of the independent experts part of this, the idea of red teaming internally and externally.  

We talked a lot about one of the um--the partial novella, the the GPT-4 system card and the way that the kind of testing has been pretty ad hoc on that. But also the kind of idea of red teaming  as this kind of thing where if you get enough kind of people in the room and have a vague kind of notion of what testing is, that you're going to get to some kind of notion of security.  

Um so that's the first part of it. The kind of concern around biosecurity and cybersecurity, then, is kind of writ within this whole thing. And this is more um played out in the other aspects of this--it really gets to this notion of AI systems as being a national security risk, and so this is playing--and we spent a lot of time last week with Lucy Suchman talking about the kind of resurgence of the Cold War rhetoric around security and national security.  

Um and so this is re-inscribing a lot of these things around a security frame um which brings with it its own kind of implications from a you know a national security context, uh without really paying attention to a lot of these things around uh bias, discrimination, um kind of direct harms that are existing uh where there's this much more of a of a military industrial complex built up. So I mean that's my initial read of that just from the jump.  

EMILY M. BENDER: You know it feels like a big hey look over there at the scary boogeyman and like never mind the rights we're trampling on right now. Right? 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Yeah. Okay, second bullet under "ensuring um products are safe": "The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration." And I just want to maybe foreshadow that there's absolutely nothing in this about dataset documentation. So if they were actually committing to sharing information in a way that would allow us to manage the risks of this kind of technology, they would pick up on that proposal, which has been well elaborated by many groups, including people at industry labs, for over six years now. But that doesn't come up, because that's not what they want to do, right? 

ALEX HANNA: Well it also plays into this next one, because that has to do with openness, where the next set of things say, "Building systems that put security first: The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essentially--essential part of the system--" And I want to take umbrage with that in particular, "--and the companies agree that it's vital that the model weights be released only when intended and when security risks are considered." So this itself is, you know, an argument here in lieu of openness--the kind of openness that would document data and different parts of data, as well as how data play into particular model weights and particular outcomes--that they want to basically hide these model weights. 

I mean this is playing into sort of a China competition frame very uh very squarely, right um, and so--and then I do want to read the next one, because it ties to how this framework envisions kind of the way that these things get red-teamed and tested, where it reads, "The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. 

Some issues may persist even after an AI system is released, and a robust reporting mechanism enables them to be found and fixed quickly."  

And so they're talking--you know, what I'm envisioning here is that they provide probably some kind of licensed third-party auditor or some kind of post-hoc certification, something of that nature. Which may sound good on its own, but compared to actually committing to doing any kind of thorough documentation, that falls quite short. 

Um and it is this post-hoc activity that is being framed as, um you know, the commitment.  

And OpenAI um--I mean, OpenAI could say that it's doing this already, for instance. So like one of the things that they did with GPT-4 is that they said they contracted with uh, what was it, the Alignment Research Center or ARC, to test these, you know, kind of particularly very stylized scenarios as a third-party entity. Um but that--you know, that was testing one particular part of it, and it is the most fanciful part of it, the alignment part of it.  

Where they were trying to see if this thing would game or self-replicate or any of these other types of fantasies. 

EMILY M. BENDER: Or be able to yeah take action in the world by recruiting a human or something, but it was completely unclear in the system card what actually happened in that scenario. Like how much did the people doing the testing like say prompt it to do each next step, versus just sit back and see if it did it, which of course it wouldn't because it's just a language model um. 

But also that uh OpenAI put out the system card, which is like a mockery of the data and model documentation stuff that we were talking about, and also I think in that document and in their um tech report on GPT-4 they said we're not going to share information about the data or how we trained this thing "for safety." Which is just-- 

ALEX HANNA: Right.  

EMILY M. BENDER: --antithetical to safety and it only makes sense in this world view where they are building something that is inherently dangerous and might, you know, escape the lab as it were um as opposed to, this is something that people are using to automate tasks or create synthetic media and so therefore in order to mitigate the risks of that we would do better if we knew what was in it. Um so they've got this weird viewpoint on what safety means. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: And unfortunately they've co-opted that term.  

ALEX HANNA: Yeah well yeah yeah. 

EMILY M. BENDER: All right, uh so the first bullet point was making sure the products are safe before introducing them to the public, um the second one was building systems that put security first. We just did that. And then "Earning the public's trust: the companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system." That sounds fantastic. We need that right? 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Um put a pin in that because we're going to see what that actually amounts to in the commitments document.  

Um, "This action enables creativity with AI to flourish but reduces the dangers of fraud and deception." Uh that is not speaking to the data theft underlying these systems, um so encountering synthetic media and not knowing that it's synthetic is one problem, but data theft  is another, and this is only addressing the first one--if it actually were. And we will get to that.  

ALEX HANNA: I would want to say these commitments say very little with regards to any of the labor impacts of any of these things, which is pretty disappointing, especially as this White House has tried to come out and say that they're pro-labor, that you know they're committed to that. I mean, I don't think they've even made any statements, or if they have, very few statements, on the ongoing labor struggles that have had AI at the center, including the SAG and WGA strikes, as well as the um--the threat of the UPS strike. I think the one thing they talked about was um when there was the threatened labor action by the rail workers, which would have shut down so much, um but it was only, you know, enough where Transportation Secretary Pete Buttigieg basically said yeah, we're gonna get people to come to the table.  

Um and so yeah, the fact that there's no discussion of labor in this is pretty disappointing. 

EMILY M. BENDER: Indeed indeed um but again this is this is the the White House apparently negotiating with these companies and so um you know you and I couldn't even get a pro-labor commentary into The Economist. 

ALEX HANNA: Yeah right oh yeah I mean if you--shout out shout out to our inability to uh uh--you know we wrote a piece we can drop it in show notes um uh two pieces out this week, one piece in Tech Policy Press uh on kind of labor impacts of AI and another piece in um in Scientific American that came out today uh kind of about the [unintelligible] around these things. So we'll drop them in show notes. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: Um yeah so the second one here, "The companies commit to publicly reporting their AI systems' capabilities, limitations, and areas of appropriate and inappropriate use. This  report will cover both security risks and societal risks, such as the effects on fairness and bias." Our first mentions of fairness and bias in any of this. 

EMILY M. BENDER: Yeah there was the bland societal impact thing above but yeah. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Um, "The track record of AI shows the insidiousness and prevalence of  these dangers and the companies commit to rolling out AI that mitigates them." 

ALEX HANNA: Oh there's the bolt the bold--the bolding for that where it says "the companies commit to prioritizing research on the societal risks on AIs that can put--that AI systems can pose, including on avoiding harmful bias and discrimination and protecting privacy." Uh so like you know down here in trust on bias and privacy um yeah but sorry I interrupted you because uh you you were on something before that. 

EMILY M. BENDER: Yeah um so I just just noticing that the um uh they're rolling out AI that mitigates these dangers of AI, it's like maybe the way we want to mitigate these things is not with so-called AI but actually with regulation about what can be automated and information about what's in the training data right. And did we skip over this one Alex? Um "publicly reporting their AI systems' capabilities, limitations--" 

ALEX HANNA: I did that one. Yeah yeah.  

EMILY M. BENDER: Oh that's when we got into the system card thing. The thing we didn't say about this is that I I really dislike this term capabilities, so "their AI systems' capabilities" sounds like the AI has um autonomy and is thinking and is its own thing that can like learn new things  um and I think a much more accurate term when we're talking about software is "functionality."  

Um so anytime someone talks about "a capability" of a system that's already a red flag for me.  

I was listening to episode eight where we did the red flag thing, I didn't get a red flag but I'd be waving a red flag on every other word here. Um-- 

ALEX HANNA: We have a whole document just committed to--I I didn't realize this but uh uh uh Emily is much more uh a better documentarian than I uh am and I aspire to that, but she just keeps a track of all the basically all the bits that we are thinking about doing. We have so many bits and I've been watching so much improv comedy I'm like ah I want to commit to bits, I wanna do bits. So many bits. 

EMILY M. BENDER: Um okay so there's one last thing here under this uh--"Ensuring the public's trust," which was the last heading. This just like made me so mad. 

ALEX HANNA: It's just--it's just it's just puff here. It says, "The companies commit to develop and deploy advanced AI systems to help address society's greatest challenges, from cancer  prevention to mitigating climate change, to so much in between. AI--" Em dash. "--if properly managed" End em dash. "-can contribute enormously to the prosperity, equality, and security of all." 

EMILY M. BENDER: I've never heard a more compellingly read em-dash, Alex. I love it. 

ALEX HANNA: I know it's the yeah I've never actually read an em-dash before but I felt that our listeners dramatically needed to know. 

EMILY M. BENDER: Just the dramatic reading of the em dash. Yeah so this is this is not any any commitment to anything. This is we are wonderful and we're going to keep being wonderful and um you know this is all about the upside of AI which is just ugh. Yeah.  

Um so that's the that's the White House's summary of what's in the longer document.  

Um and then there's this call out to international collaboration. Um, "So as we advance this agenda at home--" I hope 'this agenda' refers back to the thing in the opening paragraph about actually reining this stuff in and not this list of bullet points. Um "--the administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with," list of countries, "Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE and the UK."  

Um I like seeing Kenya and Nigeria in that list, because that sounds to me like maybe there's some attention being paid to the outsourcing of the most traumatic labor involved in this, um which, as someone was pointing out in the chat, has not shown up in this document at all.  

Because being better on the labor front is not what these companies want to be committing to.  

Um but yeah so um yeah basically we're going to talk to other countries too is what this is.  

ALEX HANNA: Right um, I'm wondering if we want to move on to the longer document, because there's some stuff, some real howlers in there. Um the rest of this document kind of revisits some of the stuff that they've done prior to this, uh there's a shout out to some positive things like the Blueprint for an AI Bill of Rights, which provides a bit of a framework, and then leveraging existing enforcement authorities, which is the piece um--the joint statement, I think, that was put together by the FTC, the CFPB, uh the DOJ and the Equal Employment Opportunity Commission. So yay to that call out, but uh, you know, um, and then woof to much of this. 

But yeah, why don't we get into this longer document, because this is when we get really Mystery AI Hype Theater-y, where we actually get into these documents and read the fine print. 

EMILY M. BENDER: Yeah yeah, but as we're making this transition I just want to call out a comment that Abstract Tesseract put in at the beginning, when we were on the Biden-Harris administration's document there, saying "This feels so bland compared to the FTC's spicy salami warnings." Um and so there's a call out at the end of it to the FTC's work, but yeah, we want more FTC and spicy salami warnings, and less kowtowing to um these seven companies, only some of which are arguably leading AI companies. 

ALEX HANNA: Right, and shout out to RaginReptar uh for this um in the chat, it says, "GenAI helped me design a room temp superconductor." We love cross-disciplinary jokes. Did you hear about this thing Emily where this-- 

EMILY M. BENDER: LK99? 

ALEX HANNA: --where they said that they designed a room temperature superconductor and then it was just wrong? I forget. 

EMILY M. BENDER: I I haven't I haven't been following I just been seeing that's been trending and realized that it was actually outside my beat. I don't have the energy to follow any more of these.

ALEX HANNA: Right but you know we love cross-disciplinary hype cycle synergy. Anyways back to the PDF.  

EMILY M. BENDER: Okay so this--interestingly, the PDF is at a whitehouse.gov site and it has no header on it, so it's just got a title at the top in centered bold font, "Ensuring safe, secure, and trustworthy AI," um and hold on, there's no--so this is the document that purportedly represents the voluntary commitments. It doesn't name the companies, it doesn't have a date on it, it's sort of weirdly disconnected from what you would expect an official document to look like.  

ALEX HANNA: Yeah, also shout out--I guess the White House runs WordPress, because there's a WordPress uploads file path in the URL, which--I mean yeah, read that as you will. Um so I mean the kind of thing--yeah, go ahead-- 

EMILY M. BENDER: So the first mini paragraph, "Artificial intelligence offers enormous promise and great risk." So again we have to talk about the upside. Um, "To make the most of that promise America must safeguard our society, our economy, and our national security against potential risks. The companies developing this--" In the second paragraph. "--the companies developing these pioneering technologies have a profound obligation to behave responsibly  and ensure their products are safe." Got any feelings about any of that Alex? 

ALEX HANNA: I mean sure. Again I mean it's just that again I think kind of an echo of the kind of national security um framing of this. I mean in the safety framing which I think really has taken over this, you know and it's in a way that I mean I I'd love to get someone like Heidi Khlaaf on here um to talk about safety engineering and the way that there's kind of a rhetoric of safety that has become kind of uh the warp and woof of uh of how these companies are talking about their products.

EMILY M. BENDER: The warp and oof. [ Laughter ] 

ALEX HANNA: Yeah yeah is that right sorry my idioms-- 

EMILY M. BENDER: I think it's right I'm just I'm just tying it back to how you've been saying ooof about a lot of this stuff. 

ALEX HANNA: Yeah just like big ooof. Is it warp and weft? I don't know. 

EMILY M. BENDER: I'm sure we have some textile artists in there in the chat who can set us straight on the idiom. 

ALEX HANNA: Yeah right right idioms are not my strong suit uh I blame the facts yeah yeah no go ahead. 

EMILY M. BENDER: I derailed us. So the national security framing is the safety framing, um also "safeguard" right again safety framing. Um but I noticed so we talk about society, we talk about economy but we don't talk about people and we don't talk about workers which again is people. 

ALEX HANNA: Yeah, right. 

EMILY M. BENDER: Yeah and we don't talk about rights, which seems very relevant.  

ALEX HANNA: I do want to go through this a little quicker because some of this is repeats, but there's an element in "trust" um where it introduces shielding children from harm. So this is the first time we've seen that. And even though I think that's one of these things that appears facially uh agreeable, a lot of the rhetoric, especially from conservative politicians, has been about sort of um using children as a means of limiting other parts of internet usage.  

EMILY M. BENDER: And using scaremongering about children in particular. 

ALEX HANNA: Yeah yeah. So that's been kind of one of the things. There was a bill, um I forget who the sponsor of this bill is, but it was one in which they were imposing age verification um on platforms, um and really the fact is that one, there's not really a way of doing age verification, and two, um it is sort of a call for more surveillance in the online sphere, um but it's become more of a kind of cudgel that a lot of conservative politicians have been swinging. So the fact that this comes in here is interesting, that's a little delta that's come in here. 

Um another place where that's appeared more prominently has been this new joint legislation that's been sponsored by um Senator Warren and Senator Graham, uh where-- 

EMILY M. BENDER: Odd couple. 

ALEX HANNA: Would--the oddest of couples, honestly. You know, Senator Warren and then uh former never-Trumper Lindsey Graham, who quickly turned coat and became his best friend, and then um--but the way this is written was really, you know, a balance of sort of protect-the-children and antitrust. So, odd things in the bill.  

We don't have that in front of us, maybe we can address that um a little in the future.  

Um but I do want to call that out. And the thing I really want to get into--unless you want to point out something on this page before moving on, uh Emily--is the scope text of this. 

EMILY M. BENDER: Yeah, yeah. I know. The only thing I want to point out here is just the sentence where you found the children was uh inside of "trust" um "it means ensuring that the technology does not promote bias and discrimination--" Oh hey that's nice, that might be the first time we've seen discrimination, too. "--strengthening privacy protections," also good, and then "in shielding children from harm." So that's sort of like there was a couple of positive things in there that we didn't see so much in the previous document.  

Um okay so uh yes the scope all right. So I'm on page 2, the uh heading here, again centered  bold face text, "Voluntary AI commitments." So that previous page was like sort of the intro and now we've got the list of actual commitments, um but there's no signatories indicated. Uh, "The following is a list of commitments that companies are making to promote the safe, secure, and  transparent development and use of AI technology." 

All right transparency, something else we've been calling for. Not implemented as you might hope. 

"These voluntary commitments are consistent with existing laws and regulations and designed to have a generative AI legal and policy--sorry designed to advance a generative AI legal and policy regime." 

Um 'consistent with existing laws and regulations,' so I guess that just means that you you can follow these commitments without breaking any laws?  

ALEX HANNA: I I think or without passing any new laws or-- 

EMILY M. BENDER: But I mean they could commit to doing something better than what the law requires without passing any laws anyway, like that's not worth saying. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: "Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force." So please regulate us exactly like this.  

Um, "Individual companies may make additional commitments beyond those included here."  

Um great well you could leave that out and the document would would mean the same thing. Um all right so then scope. Did you want to read the scope? 

ALEX HANNA: Oh yeah, it's a weird--so this says in bold, "Scope: Where commitments mention particular models they apply only to generative models that are overall more powerful than the current industry frontier. E.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2." So this--there's so much here. So they they--everybody had to get their most powerful model in here. 

EMILY M. BENDER: Yeah, have you heard of Titan? Titan was news to me. 

ALEX HANNA: What is Titan? Who's that by--is that the Meta model, is it? Or is it the Amazon model? 

EMILY M. BENDER: I don't know yeah I think PaLM is Google right. 

ALEX HANNA: Titan is the Amazon model um, I think--yeah, folks in the chat are saying AWS--one that, you know, no one has used. And Claude 2 is Anthropic's model, PaLM 2 is the Google model, and then there's two OpenAI models here. So first off, everybody had to get their models in, because they want to be seen, you know, and if it was done a few weeks later we'd have gotten Llama 2 in there too. But then the real, I mean, the real--what's the word I want here--the real shitshow in this statement, sorry, I know that's a blunt way but also a fine way of characterizing it, is that these are only applying to anything that's more powerful, right? 

So in this case it's kind of a callback to the pause letter um that the Future of Life Institute put out. Basically: now that we have these models, uh and we're doing great and we're making money hand over fist with them, um anything that's more powerful than the current industry models, that we will commit to. Um. 

EMILY M. BENDER: So two things there. What does powerful mean, how do you measure that? 

ALEX HANNA: Exactly. 

EMILY M. BENDER: Secondly, we don't have to do anything to mitigate the harms of these current models. 

ALEX HANNA: Right. 

EMILY M. BENDER: Right and I just have to lift up from Abstract Tesseract, "And now a word from our sponsors." Which is what that list of models is. 

ALEX HANNA: Yeah, completely. Uh and so that's just, I mean, that's already a red flag. Getting into the next part of it, the first thing is this element around point one, an expansion from point one, which is um--they don't actually call out red teaming in the fact sheet; there it says they commit to "internal/external security testing."  

On this document it says "commit to internal and external red teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas." And so this is the first time they're actually naming red teaming as a strategy, and that's kind of the primary strategy, and I think that's a real big problem here. Um red teaming has become kind of the um strategy du jour, and I think that it's one that's very palatable to these companies. Uh the White House has sponsored a red teaming competition, which is going on currently or soon at DEF CON, um and the thing is--I mean, I have a lot of thoughts on this. 

One, the idea that red teaming is this kind of--from princip--that it is the kind of principled stand on thinking about what systems ought to do. It's more of this sort of frame around putting things within a technical frame, about kind of a sort of bug bounty sort of situation, and that is a sufficient condition? I mean, I think that is flipping this backwards. Basically, I mean, you need to start, in my estimation--and this reflects what someone said in the chat, which is a char--Charlie three, uh I'm sure you pronounce this different, CharliePwnAll. 

Um, "Interesting that the word ethics or ethical is not mentioned in any of these docs." Yeah so  you're not starting from any kind of value commitments of discrimination. Um you have kind of mentions of bias or discrimination, but no kind of definition of what those are. Um of any definition of kind of things around um you know things that would remedy inequality. Um and it gets it sort of something that I'm going to paraphrase from um some friends at AI Now and they've written about uh in a recent report, um which was effectively, "Why are we starting from the point of red teaming or auditing? We should start from position, especially if you're a regulator or the executive, of drawing some bright lines, ensuring that companies do not cross those bright lines, and making that your jump off point.  

Why are we going to this kind of continuous, always post-hoc testing regime, from a very technical frame? And I mean this whole kind of idea of red teaming right here really infuriates me, because of what kind of system it assumes and how it places accountability--in who needs to be testing these systems, and what these systems should even be allowed to do.  

EMILY M. BENDER: Yeah, and what we should be allowing people to use these systems for, right? 

ALEX HANNA: Yeah. 

EMILY M. BENDER: That it's--the systems are just tools, and if we are talking about, like, well, anybody should be able to build these and put them out in the world and then we just have to rely on really robust red teaming to figure out, you know, what could possibly go wrong--and sure, we promise to be responsive and to like fund that red team and help them do that work um, but we get to put the thing out in the world first and then we just have to like make sure that its behavior is reined in or carefully studied. Like that sort of again sets the models up as, yeah, um something that can just exist in the world and go about its business and then needs to be corralled, rather than as decisions by corporate interests to automate certain things. 

ALEX HANNA: Yeah and I mean yeah yeah I I just want to give a shout out to Ruth Starkman, perennial supporter of the pod, in the chat she says, "Now that we have the technical person facing harms, back to business." And-- 

EMILY M. BENDER: Summarized as "red team and done." 

ALEX HANNA: Yeah exactly. And even in here too, I mean, the um--even in here, you know, the kind of admission of what red teaming constitutes, and what the boundaries of that are, is that it is uh inchoate and poorly defined. They say in the second sentence here, "Model safety and capability evaluations, including red teaming, are an open area of scientific inquiry and more work remains to be done." 

And I feel, you know, I'm feeling a way about saying, well, we're going to red team, and yet we don't know what evaluations look like. Okay, that's uh, that's a problem. 

EMILY M. BENDER: And also, again, it's capability evaluations and safety evaluations, which is--if you've got your focus only on the model and you're thinking of it as something that can go out in the world and take action, then that's the kind of harm that you're worried about. As opposed to: how is this constructed? You know, what was the data theft, what was the labor exploitation, and then what is it being used to do, and what follows from it? 

And you don't need red teaming for that. You need this, like--as you were saying, let's not set the bar as anybody can just, you know, put one of these synthetic media things out there for no real designed purpose, but rather, if you're going to build something and release it, then it should have a purpose and be tested for that purpose. 

ALEX HANNA: Right right. 

EMILY M. BENDER: Oh, there's something that I think didn't make it into the Fresh Hell but I was reading about that kind of comes in here. Um some grocery store in New Zealand had like a chatbot, a recipe generator, where you put in your ingredients and it'll give you some ideas. And so people started putting in like other things in their kitchen, like in the Mr.--what we call in Washington State the Mr. Yuk cupboard, right, the um the cleaning supplies and stuff. 

And then it starts making recipes that would do things like create chlorine gas, and if you're gonna put one of these things out in the world um just for you know the public to interact with, you're going to have to assume that people are going to come at it with all kinds of things and then be accountable to what happens after that.  

And I I doubt anyone has been directly harmed by this, but it really does sort of show the  risks of putting so-called generative AI that is synthetic media generators just yeah out to  the public to play with. 

ALEX HANNA: Right. I want to give a shout out here to this this great comment from Hubert in the chat who says, "I work as a red teamer for a bank. We have very specific objectives we're going after on particular assignments. Pretty unclear what this would be when quote 'red teaming' a model or how failure to achieve any objective would provide assurance of anything in particular."  

Yeah. 

EMILY M. BENDER: Yeah. All right, so before we leave the safety thing I want to talk about um their bulleted list here. So this is um, "In designing the regime--" which I guess is the red teaming regime, "--they will ensure that they give significant attention to the following: bio, chemical, and radiological risks--" I love how bio is a word here. "--cyber capabilities, such as the ways in which systems can aid vulnerability discovery," etc. 

Um so those are like very much like what if someone used the system to do bad things--let's let's test to see. And like you know if you knew what was in the training data you could answer a bunch of those questions. Right if you didn't include in the training data information about how to create dangerous chemicals, then you're not going to get how to create chemicals back out. 

ALEX HANNA: Right. 

EMILY M. BENDER: Um but then so that's bullets one and two. Bullets three and four: Three: "The effects of system interaction and tool use, including the capacity to control physical  systems," and four: "The capacity for models to make copies of themselves or in quotes "self-replicate."  

This is straight out of the AI doomer narrative, both of these things. Like oh no what if it um  is able to use tools and control physical systems? Like don't hook the random text generator up to anything that interacts with the physical world. Done. There's no red teaming needed here. Right?  

Um and then this whole thing about self-replication is exactly like--when I hit this, it showed me that this whole dialogue was thoroughly infected with the AI doomers' discourse, and I was really sad to see that. But also not surprised, given that list of companies. 

ALEX HANNA: Yeah, totally. It's also related to this kind of sentence here that begins number two, uh which says, "Work towards information sharing amongst--among company--" among, or should it be amongst? I don't know.  

Um "--among companies and governments regarding trust and safety risk, dangerous or emergent capabilities, and attempts to circumvent safeguards." So very much within the doomer frame here, especially the kind of discussion of emergent properties I think is which is kind of a fantasy, um in different in different um in different domains. 

EMILY M. BENDER: But Alex we just skipped right over how they had a fifth bullet. "Societal risks such as bias and discrimination." 

ALEX HANNA: Yeah. 

EMILY M. BENDER: And that was it on that bullet. 

ALEX HANNA: And very poorly defined, you know. I mean I think bias and discrimination become you know like these become catch-alls. I mean there's a lot of writing on this kind of issue of kind of bias as a framing.  

Um yeah um so it's very yeah. 

EMILY M. BENDER: So that was that was all safety. We have security.  

ALEX HANNA: There's a lot--there's a lot more but I know we want to get into hell this time.  

EMILY M. BENDER: Yeah oh. Oh goodness yes okay there's one thing we have to do which is the watermarking thing.  

Um because that was infuriating. So under trust--so first of all there's the complete lack of anything about data set documentation I already ranted about that. Secondly um so under trust they've got um with number five, "Develop and deploy mechanisms that enable users to understand if audio or visual content is AI generated, including robust provenance, watermarking or both for AI generated audio or visual content." 

Two things here. One, remember all existing systems exempt. Anything that you happen to create which is just not more powerful than existing systems, exempt. Secondly, nothing in here about text synthesis which is a huge problem. Right so we can pollute the text-based information ecosystem as much as we want, but we'll think about--we'll investigate how to watermark audio and video for future models. 

ALEX HANNA: That's a great point, that's a great point, an omission that I somehow missed. No, I missed it, I just did, but yeah, I think part of it is like an admission that you can't really do that, or like the existing tools for detecting these things are rubbish. And so they're going to say, yeah, we're not going to do this at all. Um yeah. 

EMILY M. BENDER: All right um so we need to do our transition into AI Hell, so I'm going to stop the share here. Um. 

ALEX HANNA: Yes. 

EMILY M. BENDER: Alex, I need to give you a prompt. Um you are a-- 

ALEX HANNA: Oh no. 

EMILY M. BENDER: Uh one of the demons guarding AI Hell and you are trying to negotiate with some of the other demons. What should be in your voluntary commitments? Go. 

ALEX HANNA: Right okay. All right. So here's what they got all right so I I got this guy down  here, his name was uh Marvin Minsky. And uh you know down when he came down he told me about this thing, I couldn't believe it. He said that robots could walk and talk they could do these things. All right, here are my volunteer commitments. 

No uh, no work on the weekends, uh I can't have any of these people, any of these demons, any of these robot demons replacing me putting bees in mouths. My favorite part of the job is just shoving bees in mouths. And uh, anything involving uh just flattening of fingers, taking nails off, uh you--those robots are not taking my job. All right, that's all I got. 

EMILY M. BENDER: I love it I love it. All right, welcome to Fresh AI Hell. And I had to make it so that I can see the thing that we're looking at. All right we've got a backlog here because I've been collecting, we didn't get through very much last week. Would you like to tell the people what we see on the screen, Alex?  

ALEX HANNA: All right, so this is uh a tweet um from CC Adams, who's a journalist, and it's a screenshot just saying, "What the hell is wrong with you all?" And it's the Washington Post article, which says, "We 'interviewed'"--interviewed in quotes--"Harriet Tubman using AI. It got a little weird." Yeah, no shit. What the, what the hell is wrong with you?  

EMILY M. BENDER: Right and was there nobody in that newsroom who said, wait stop terrible idea? 

ALEX HANNA: Yeah. 

EMILY M. BENDER: Ugh, all right. 

ALEX HANNA: Yeah this boggles the mind. 

EMILY M. BENDER: Yeah, all right next. Here. Uh I was--I noticed a little while back when I was trying to search up information about the EU's Artificial Intelligence Act, if on Google you search for 'EU AI Act' the first site that you hit um has the URL artificialintelligenceact.eu, and if we click through on that we find it is not something given by the EU government, but in fact um this is uh maintained by the Future of Life Institute. 

ALEX HANNA: Oh geez, that's wild that they bought the, you know, bought the domain name and it is the first search result.  

Um yeah you think the EU could could do something about that or I guess not. 

EMILY M. BENDER: And it shows you just how poor--and we knew this, we knew this at least since Safiya Noble's um, you know, 2018 book and well before--that Google is a bad steward of the world's information, if this is coming out on top. All right, next. 

ALEX HANNA: Okay--so okay, uh oh, so this is WorldCoin, so this is an artnet.com piece but it has been in the news, uh so the headline is, "Nefarious data collection masquerading as public art? An AI company has placed mirrored spheres around the world in a massive eye-scanning project." 

And the um subhead here is, "Worldcoin's efforts have been criticized by the likes of Edward Snowden, who tweeted 'don't catalog eyeballs.'" So this is the latest kind of dream from Sam Altman, um where they've developed something called Worldcoin, and then this massive quote unquote 'voluntary' surveillance project where they've been scanning people and trying to establish some kind of a world ID for a currency that has no value, uh called Worldcoin. 

And we've had this conversation a lot with one of the DAIR fellows, Meron Estefanos, who, you know, has talked about how they're trying to push this very heavily in Africa--in Kenya, um in Uganda and Rwanda--where they're basically having these places um where you can go and get your eyes scanned, um and they'll give you some token and say you're part of some registry. 

But I did see recently that the Kenyan government pushed back and they're like, hey what the hell are you doing. Stop. 

EMILY M. BENDER: Yeah I love um you know--as as a parent, my kids are bigger now but when they were toddlers you go through the phase of sentences I thought I would never have to say. And 'don't catalog eyeballs--' 

ALEX HANNA: Don't scan your eyes, yeah. 

EMILY M. BENDER: Like, but this isn't just Snowden telling people don't participate, which is also a good piece of advice, it's him telling the world don't do this. 

ALEX HANNA: Yeah, good Lord. 

EMILY M. BENDER: Next. Jane Friedman um documents how--so this is this is her blog post--"I would rather see my books get pirated than this (or why Goodreads and Amazon are becoming dumpster fires)." 

And the story here is Jane Friedman's an author and um the um oh it looks like it's been cleaned up um but she discovered that someone had uploaded um synthetic, you know LLM garbage output as books to sell through Amazon under her name, and then it got sucked over into her Goodreads profile, because Goodreads is connected to Amazon and  um Jane Friedman was saying, 'I don't have anybody to contact to say take this down,' in both cases. But  if I'm looking at these updates here um she says uh, "August 7th, hours after this post was published, my official Goodreads profile was cleaned of the offending titles. 

I did file a report with Amazon complaining that these books were using my name and reputation without my consent. Amazon response: 'Please provide us with any trademark registration numbers that relate to your claim.' When I replied that I did not have a trademark for my name, they closed the case and said the books would not be removed from sale." That is that's like you know name and likeness are your own, you shouldn't have to trademark them, right?  

ALEX HANNA: Right. 

EMILY M. BENDER: Uh um August 8th, "The fraudulent titles appear to be entirely removed from Amazon and Goodreads alike. I'm sure that it's in no small part due to my visibility and reputation in the writing and publishing community. What will authors with smaller profiles do when this happens to them? If you ever find yourself in a similar situation I'd start by reaching out to an advocacy organization like the Author's Guild." 

So yeah that was but um it's another example also of the way the information ecosystem is interconnected, so you get this you know oil spill going on on Amazon, where someone has uploaded this thing, and then it leaks from Amazon to Goodreads and so somebody like Jane Friedman has to clean up in both places.  

Um and who knows where else. 

ALEX HANNA: Yeah yeah. 

EMILY M. BENDER: Okay. 

ALEX HANNA: All right next. 

EMILY M. BENDER: Yeah. 

ALEX HANNA: This is uh this is a Mastodon toot uh-- 

EMILY M. BENDER: Back from April it's old but yeah. 

ALEX HANNA: Yeah so, "Sometimes it's fun to browse the Y Combinator website and catch a glimpse of the future dystopia." And it's an upvoted idea called Fairway Health. 

"Process prior authorization faster." Um and the text says, "We use LLMs to analyze long (70 plus page) messy medical records and determine if a patient is eligible for a treatment." Holy shit.  

So get rejected uh for your treatment via a large language model. Let me tell you the idea of this as um as someone who is frequently denied uh health requests as a trans person under certain policies, this is just a next level of nightmare. 

Even navigating that as someone who has relatively good insurance is terrible. Um for someone who doesn't have that insurance, even worse for someone who's multiply marginalized, um or in poverty, this is just a fucking nightmare. Uh so yeah, I mean, like, health insurance is already a trash fire, but imagine shoving an LLM on it? 

Oof. Ooh doggy. 

EMILY M. BENDER: Yeah it feels like there should be some demons who work in denying claims who are mad about the robots taking their jobs here. 

ALEX HANNA: You know, yeah, no kidding. 

EMILY M. BENDER: All right, we're gonna go fast for a little bit. Um still in the healthcare space, um here's a tweet from Sasha Luccioni, um quote tweeting everyone's favorite healthcare demon Martin Shkreli. "Introducing--" So this is Martin Shkreli's tweet. "Introducing Dr. Gupta, a virtual doctor chatbot using the latest techniques in AI and LLMs, Dr. Gupta can answer any health related question at any time and trying it is just a click away. I hope we can reduce healthcare costs with AI agents like Dr Gupta, giving time back to physicians who are overworked and saving money for a system that is over budget."  

Note the patients who aren't actually being provided care aren't mentioned there. "Dr. Gupta is not a doctor but I do think it will help a lot of people, especially those in economically disadvantaged environments." There's the patients not getting care. Um and Sasha says, "LLMs shouldn't be used to give medical advice. Who will be held accountable when things inevitably go sideways? Also this techno-saviorism crap is absolute BS, helping quote economically disadvantaged people with AI is a myth." Thank you, Sasha. 

I'm going to keep us rolling here. We've got um also in the healthcare space, you want to read this one? 

ALEX HANNA: Yes, uh "Robot receptionists--" This is from Telegraph, the UK publication, "Robot receptionist to make NHS--" the National Health Service, "--fit for the future. Rishi Sunak to announce largest expansion of workforce in the NHS's history."

EMILY M. BENDER: Also weird pairing of heading and sub-heading. Is the robot receptionist the expansion of the workforce? 

ALEX HANNA: Yeah that's that's yeah--what's the rest of this article, because I haven't read this. 

EMILY M. BENDER: I--I haven't, no, I'm not subscribed. It's just the--um, we just have that robot receptionists will be used "to free up NHS staff under a 15-year workforce strategy to build a service fit for the future."

And then there's a picture of this metallic robot with a sort of cross look on its face and blue eyes, um with a NHS thing behind it. Similar idea in the next one here um, "Can humanoid robots modernize your reception?" And this is published in something called Aldebaran. United Robotics Group. Um. 

ALEX HANNA: Yeah. 

EMILY M. BENDER: And it's this looks like it's maybe a hotel or other business reception and there's a woman in tall heels and a pantsuit um interacting with a very short uh white robot with a screen on its chest.  

ALEX HANNA: Didn't we talk to somebody I thought because I think this came up because we were looking up--I think someone had mentioned that there was a robot receptionist at like one of their jobs, and they were like what and they were like why the hell do we have this thing here? 

EMILY M. BENDER: Yeah I don't know. We're running out of time and I've got a bunch more tabs, so I'm going to take us to the good news tabs here I think Alex, unless you want to do one more bad news one? 

ALEX HANNA: Well, what's the next bad news one? I just--I'm curious. Like, you can't just have one. Uh, oh yeah, this is--this is like a--didn't we have something like this before? "Can AI language models replace human participants?"

Um no they no no. 

EMILY M. BENDER: No no. You don't have to write a whole paper in Trends in Cognitive Sciences on this. Just no. All right, what's this Yahoo one? This Yahoo one is bad.

ALEX HANNA: Oh oh yeah, I think you had the chat, uh, or in the in the document, "Are the straights okay?" and of course the answer is no, never. Uh but it says, "A couple in Colorado had their wedding officiated by ChatGPT--only after the AI chatbot initially turned down the honor." Um yeah, we don't need to get into that, let's get into other stuff.

EMILY M. BENDER: Yeah all right. Uh here's Nate Silver, um, quote tweeting someone named Stefan Schubert.

Uh, "One of the most notable things about this terrible editorial is that there's no argument whatsoever for the proposition that existential risk from AI is low. The crucial aspect is simply assumed." And this is a link to a Nature editorial that apparently was sensible in saying, stop it with the stupid X risk stuff. Nate Silver quote tweets this saying yeah the median expert thinks that AI's essential risk is a five to ten percent threat. That might be wrong but "let's focus on other AI harms instead" isn't really a good argument unless you're refuting that claim. 

ALEX HANNA: It's a made-up--it's just a made-up number. Like you don't--aughghgh.

Nate Silver, when when all you think about is like posterior distributions that are made up, like yeah like I don't need to refute reply guys. 

EMILY M. BENDER: Right, right, and 'the median expert'--like, no, dig into that data. This is based on that survey of, like, people at NeurIPS who chose to reply to the survey and were, you know, forced--like, you would think that Nate Silver knows a little bit more about survey design and not just taking numbers out of surveys without--but maybe not, hmm. Okay um okay, another tweet from someone named Joshua Achiam. "Real talk, has anyone convincingly demonstrated that large-scale general purpose AI learns better with a curriculum than with a uniform data distribution from the start?"

Okay fine, like I don't love the terms there but that's just a science question. Then, "If quote 'uniform distribution' wins, should we be rethinking early childhood education for humans?"

ALEX HANNA: Oh gosh this is again the kind of equation of sort of childhood learning and machine learning. Um and I didn't even see the second part of this tweet. "Maybe kids wouldn't be afraid of calculus if they were going to see it from the time they were like three years old. When I have kids I'm gonna run this experiment." 

Oh please please don't have kids. 

EMILY M. BENDER: Okay, and then I'm just going to keep us going--we're going to be a little bit late to go through all these. I went to post something on LinkedIn this morning because um the social network that I like to use to publicize my work is becoming more and more of a dumpster fire. So I'm branching out.

And I went to post a link to our Scientific American editorial and was confronted with this from LinkedIn. This is a screen cap of it. Um so my name, post to anyone, and then, "Start typing or," with an underline, "draft with AI." Magic wand. And then there's some ad copy here. "Let AI help you with a first draft. Give us detailed information on what you want to write about, including key points and examples, and we'll help you get started with an AI-powered first draft." 

Now I didn't try this, um, but Alex did, and Alex, I don't--I'm not hooked up to share the screen cap that you sent, but do you have it to hand to read?

ALEX HANNA: Oh yeah, I--I started writing some copy for a tool called hustle.ai and it gave me this--this text. "Get into the hustle economy with hustle.AI. Our new AI system is the perfect tool to help you hustle efficiently. With hustle.AI you can manage multiple jobs simultaneously and maximize your ROI. I've personally been able to handle three different jobs at once and my investment has been 10xed. Are you ready to take your hustle to the next level? Try hustle.ai today. hashtag hustle hashtag productivity hashtag AI."

Now the text I entered was very ridiculous and it just really leveled it up just in a way that I could not handle.

Our producer in our chat is saying, "Taking this audio for a fake ad." Yes, please do it. Let's buy hustle.ai if it's still available, uh, make t-shirts--first podcast branding, let's go.

EMILY M. BENDER: Okay, I'm going to end this with a little bit of good news. You all might have heard um last weekend someone noticed a change that Zoom made--oh, oh, this is, sorry, there's two of these here. One is there was a fiction analytics site called Prosecraft, and um it has quietly been gathering people's books and then producing numerical analytics over them, and people got upset because they realized that the data was obtained without consent, and the um creator of the site has taken it down and has posted that he's going to look into a way to do this based on consentful data collection. Um so we are starting to see some backlash.

The second backlash was to Zoom. So apparently back in March Zoom changed the terms of service to basically say, and we can use your data to train generative AI, or something to that effect, and nobody noticed until this past weekend because nobody reads the terms of service, um, because nobody's got that kind of time. Um but there was a big outcry earlier this week and last weekend because somebody noticed, and at first Zoom said, 'Oh we didn't mean that,' in a blog post, and people were like, yeah, but it's still in the terms of service. And now they've actually changed. Not anymore.

They've changed it, they've actually changed the terms of service so um it is worth pushing back is what I understand. 

ALEX HANNA: It is not inevitable, collective outrage works, complain, shut stuff down. Okay. 

EMILY M. BENDER: And I wanted to add in one other thing for anybody else who's in an academic context: um, Monday, after I saw this, I asked at my institution, and it turns out that because we use it for teaching and it has to be FERPA compliant, um, my institution already has a different contract with Zoom that is more constrained than the default terms of service.

So it was already not going to be a problem for me at the University of Washington but it seems like Zoom has actually listened and changed what they're doing, so that is good.

All right. 

ALEX HANNA: That's amazing, all right. 

EMILY M. BENDER: You made it. 

ALEX HANNA: So we're doing it, we are on our way out, thank you so much. That's it for this week. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at dair-institute.org. That's d-a-i-r hyphen institute.org.

EMILY M. BENDER: Find us and all our past episodes on PeerTube, and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv slash d-a-i-r underscore institute. Again that's d-a-i-r underscore institute. I'm Emily M. Bender.

ALEX HANNA: And I'm Alex Hanna. Stay out of AI Hell y'all.