Mystery AI Hype Theater 3000

Episode 27: Asimov's Laws vs. 'AI' Death-Making (w/ Annalee Newitz & Charlie Jane Anders), February 19 2024

February 29, 2024 | Episode 27 | Emily M. Bender and Alex Hanna

Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry.

Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it, from someone who clearly hates reading.

Charlie Jane Anders is a science fiction author. Her recent and forthcoming books include Promises Stronger Than Darkness in the ‘Unstoppable’ trilogy, the graphic novel New Mutants: Lethal Legion, and the forthcoming adult novel Prodigal Mother.

Annalee Newitz is a science journalist who also writes science fiction. Their most recent novel is The Terraformers, and in June you can look forward to their nonfiction book, Stories Are Weapons: Psychological Warfare and the American Mind.

They both co-host the podcast, 'Our Opinions Are Correct', which explores how science fiction is relevant to real life and our present society.

Also, some fun news: Emily and Alex are writing a book! Look forward (in spring 2025) to The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters that exposes how tech’s greedy prophets aim to reap windfall profits from the promise of replacing workers with machines.

Watch the video of this episode on PeerTube.

References:

International declaration on "Responsible Military Use of Artificial Intelligence and Autonomy" provides "a normative framework addressing the use of these capabilities in the military domain."

DARPA's 'ASIMOV' program to "objectively and quantitatively measure the ethical difficulty of future autonomy use-cases...within the context of military operational values."
Short version
Long version (pdf download)

Fresh AI Hell:

"I think we will stop publishing books, but instead publish “thunks”, which are nuggets of thought that can interact with the “reader” in a dynamic and multimedia way."

AI generated illustrations in a scientific paper -- rat balls edition.


You can check out future livestreams at https://twitch.tv/DAIR_Institute.


Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Transcript:

 Alex Hanna: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it, and pop it with the sharpest needles we can find.  

Emily M. Bender: Along the way, we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. 

I'm Emily M. Bender, a professor of linguistics at the University of Washington.  

Alex Hanna: And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute.  

Before we get into today's episode, we have some super exciting news! Emily and I are happy to announce our forthcoming book, The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters. 

In the book, we'll expose how tech's greedy prophets aim to reap windfall profits from the promise of replacing workers with machines. And it'll be out in spring 2025 from Harper Books.  

Emily M. Bender: I am so excited for this book and just thrilled to be working on it with you, Alex.  

Alex Hanna: Oh gosh, likewise.  

Okay. Okay. Back to today's episode. 

This is episode 27, which we're recording on February 19th of 2024. And what could be more timely than a look at how science fiction talks about so-called artificial intelligence? While so much of AI hype might as well be fiction, we're going to talk about an actual professional this week: Isaac Asimov, creator of the infamous three laws of robotics that were supposed to keep humanity safe from robot doom. 

Emily M. Bender: Somebody needs to tell the Biden administration and the DOD how that actually went down in 'I, Robot' though, because they are releasing document after document suggesting that all we need is the right ethical framework to use, um, and then we can use autonomous weaponry in war. And I'm very excited today because we found the perfect guests to talk about this with. 

Alex Hanna: They are the hosts of 'Our Opinions Are Correct,' a podcast that explores how science fiction is relevant to real life science and society.  

Charlie Jane Anders is a science fiction author, occasional loudmouth--she told us to say that, I promise. Her recent and forthcoming books include 'Promises Stronger Than Darkness' in the Unstoppable Trilogy.

And Charlie Jane is just like gesticulating wildly on, on video. The graphic novel 'New Mutants: Lethal Legion' and the forthcoming adult novel, 'Prodigal Mother.'  

Emily M. Bender: Welcome Charlie Jane.  

Alex Hanna: Welcome Charlie Jane.  

Charlie Jane Anders: Yay! It's so great to be here. Thank you so much for having me. 

This is such a thrill.  

Emily M. Bender: We are, we are so excited. And we've also got Annalee Newitz, a science journalist who writes science fiction as well. Their most recent novel is 'The Terraformers,' and in June you can look forward to their nonfiction book, 'Stories Are Weapons: Psychological Warfare and the American Mind.' 

So super relevant too to our conversation today.  

Welcome Annalee.  

Annalee Newitz: Thanks for having me.  

Emily M. Bender: This is, this is so cool. Um, and I think I'm just going to take us right into our artifacts because I know we're going to have a lot to say. Um, and so. Uh oh, I've just, here it is, um, we are going to start with the three laws of robotics, up here for review just in case anybody in our audience is not familiar, um, and let's see, are we gonna, we'll give this the Mystery AI Hype Theater 3000 treatment. So we will read them and take them apart. 

So Isaac Asimov's three laws of robotics, as listed in the list of lists. "One: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Two: A robot must obey orders given it by human beings, except where such orders would conflict with the first law. And three: A robot must protect its own existence as long as such protection does not conflict with the first or second law."
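
(For illustration: a minimal Python sketch of the kind of literal, ordered-rule encoding of the three laws that the conversation takes apart below. Every predicate in it is a hypothetical stub; deciding what counts as "harm" or a "conflict" is exactly the part that isn't computable.)

```python
# A deliberately naive encoding of Asimov's three laws as an ordered rule check.
# Every predicate below is a hypothetical stub: deciding whether an action
# "injures a human" or whether inaction "allows harm" is precisely the part
# that has no computable definition.

def injures_human(action) -> bool:             # hypothetical stub
    raise NotImplementedError("Harm to whom, over what time span, by whose account?")

def inaction_allows_harm(situation) -> bool:   # hypothetical stub
    raise NotImplementedError

def threatens_own_existence(action) -> bool:   # hypothetical stub
    raise NotImplementedError

def action_permitted(action, situation, ordered_by_human: bool) -> bool:
    # First Law: no injuring a human, and no allowing harm through inaction.
    if injures_human(action) or inaction_allows_harm(situation):
        return False
    # Second Law: obey human orders, except where that conflicts with the First Law.
    if ordered_by_human:
        return True
    # Third Law: protect own existence, unless that conflicts with the First or Second Law.
    if threatens_own_existence(action):
        return False
    return True
```

The precedence ordering is the easy part to write down; all of the actual judgment lives in the stubs.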

So, maybe we'll start with Annalee. What should we be thinking about here?  

Annalee Newitz: So, the thing that always irks me about these three laws of robotics is that they've really become almost dogma, uh, both within science fiction, but also in AI research and machine learning research as kind of a framework for how we could imagine building, um, human equivalent intelligences that would, um, that wouldn't rebel against us or try to kill us. 

That's the original idea. But of course, any passing familiarity with these laws of robotics shows you that they really are a recipe for enslavement. And so that part really bothers me, um, as well as the idea that somehow it would be incredibly easy to codify a kind of ethical framework. And I think that's where we're really, um, focusing today is on that question of like, how do you, how do you codify these things that are not, that are sort of by nature, not quantifiable, like ethics? 

Emily M. Bender: Yeah.  

Alex Hanna: Mm hmm.  

Emily M. Bender: Yeah. There's this--  

Alex Hanna: It reminds me of-- Yeah. Go ahead.  

Emily M. Bender: Just kidding. Reading them now. I mean, I'm familiar with these, of course, but reading them in this context, this like "except where it conflicts with," it's just so blithely stated here as if that could be easily calculated, and it would always be knowable, and I'm sure not, you know? 

Annalee Newitz: I want to add--  

Alex Hanna: Right, and it's-- Oh, yeah, go ahead, Annalee.  

Annalee Newitz: I was just going to add something really quickly about the context here, which is, so these are from, um, the short story collection 'I, Robot' which came out in the mid 50s, and the premise of that collection of stories is that it's being told by a robo psychologist who is explaining to us, in each story, how these laws don't work. 

And how the conflict between these laws results in disaster. So every single story is about a disaster caused by believing that these laws would work.  

Charlie Jane Anders: Yeah, I mean, basically there are, as Annalee said, they are recipes for enslavement. They, they encode this hierarchy of like human life is the most important, obeying humans is the next important, most important, and then, oh, robot self preservation is somewhere below that. 

It's like, it presumes that you've got some, an entity that is smart enough to be able to parse these incredibly complicated logic gates that are kind of like this, if this, then that, if not this, then that, you know, but not, but that intelligence doesn't make it-- its right to exist and to like, have a life of its own worthy of consideration alongside human life. 

And it's just, it's kind of really upsetting to look at that now. And actually Annalee will tell you that, and anybody who listens to Our Opinions Are Correct, will tell you that I always bring 'Doctor Who' into everything. And it's, it's a fault of mine. And this actually makes you think like when you have these kinds of rigid laws, you always start out with like, it's logic, it's laws, it's like, and then you always end up with like somebody needing psychotherapy. Like, and there's a Doctor Who episode called 'Robot,' that's just the one word, 'Robot,' which starts out with like a robot that's been programmed with basically these three laws, except it's boiled down to like, I must serve humanity and never harm it. 

And by the end, we're talking about how the robot has an Oedipus complex and how it like needs therapy and like, you know, and they're like psychoanalyzing this robot because its brain has been broken by these terrible laws. And like, I think that that's just, that's the inevitable outcome of this kind of lawmaking. 

And it's just, it's cruel. It's incredibly cruel. And it's also just like both overestimating and underestimating the capacity of artificial sentience.  

Alex Hanna: Yeah, and I mean, there's two things I'm thinking about as y'all are talking. So the first is that this reminds me of like so much about in the current AI discourse this idea of like value alignment and this idea that oh, we just must align the robots with human values. 

Charlie Jane Anders: Oh god.  

Alex Hanna: And then just and you're just like--yeah. (laughter)  

Charlie Jane Anders: I'm in physical pain. I'm in physical pain.  

Alex Hanna: Oh yeah, you can only imagine. This is, uh, this is a major discourse in our field. And the kind of idea that, you know, what, okay, whose human values are you talking about? What are values? Humans--  

Charlie Jane Anders: All humans share the same values Alex, you know, it's just that easy, right?  

Alex Hanna: All humans share the same values and we need to just get the robots to line up and do these values correctly. Right. And so it's-- and then the slightly more sophisticated formulation of this, that OpenAI's come out with is, is something like, well, we need to, uh, kind of democratically vote on values and really think about, you know, maybe if we get democratic agreement on values and that's sufficient and you're like mmm uh mmm, you know, democracy here is not, is not the issue here, dude. 

Like it's not, it's about--for something that is a fundamentally antidemocratic technology, then that's not going to work. And also, you think--have to also acknowledge how rules by majority can hurt people at the margins.  

And this brings me to my second thing relating to what you said, Annalee, this idea of kind of resulting in enslavement. 

And it reminds me of this vignette that Ruha Benjamin uses a lot, especially when she was writing her book, Race After Technology, is this she, she has this, uh, anecdote about how she was like walking through an airport and then, you know, she just heard somebody, some guy on the phone or talking to somebody else going, I just, you know, 'I just want someone to like boss around' and, and, and this, and this connection to sort of this deep lineage of, uh, bossing like robots around. 

And she uses this, um, this incredible, uh, image of like, uh, like this, this Jetson style future where there's like a guy and he's getting dressed and it says in the text, like you could have your own robot slaves and kind of, you know, illustrating this, you know, this racialized lineage of, you know, like slavery and the way that, you need to talk about, you need to talk about race when you talk about technology. Race is in the machine, and if you're not going to, then you're just basically going to create racist robots or have bring robots serve as kind of a, uh, you know, foil against which race is reflected onto. 

Emily M. Bender: Yeah.  

Annalee Newitz: A hundred percent. Yeah.  

Emily M. Bender: So there's one thing that jumps out at me looking at these also, which is the, the way this is framed as laws evokes both like legal code, but also three laws of robotics sounds like, you know, Newton's three laws of motion or whatever. And there's an interesting ambiguity here between, is this regulations that somebody's putting into place or is it something that the robots can't help but follow? 

Just like, you know, an object in motion can't help but stay in motion. Um, and I think that that also belies this idea that the, these are laws that are just going to be automatically followed and you don't have to worry about it, as opposed to no actually we when we're using automation, we have to think carefully about how we're applying it and if it's equitable and fair or if it's harmful. 

Annalee Newitz: Yeah, there's a great moment in the early 90s movie 'Robocop,' where they have this cyborg police officer, who in the context of the film, they design this robot police officer because the police in Detroit are on strike. And so they want someone who will never go on strike. So they make this police officer and he has the three laws, uh, encoded, but he also has a secret fourth law, which is that he must always protect executives at the corporation that made him. 

And that overrides the other laws. And that, to me, felt exactly like what would happen if we had these kinds of laws.  

Charlie Jane Anders: Must always protect Sam Altman. Sam Altman must be protected at all costs.  

Annalee Newitz: That's literally the scenario. Yeah.  

Alex Hanna: Oh my gosh.  

Annalee Newitz: Yeah.  

Alex Hanna: That's wild. There's, there's a comment in the chat, uh, that's interesting. 

This, uh, user uh, I'm going to mess up this name, Papposilenus, uh, who says "Asimov has written in his own memoirs that he wrote the laws to preclude the tired old plots of machines taking over the world and was interested in writing puzzle stories about it."  

So thinking, thinking about it as a way to jump off and someone else in the chat also says, his--PoritzJ, says--unfortunately, I'm like zooming into my iPad to look at this.  

"Unfortunately, so many tech folks read about the three laws when they're too young to understand the technical impossibility of making anything like this with real computers, even if it made sense, which they store in their subconscious and have to fight against at best for the rest of their lives." 

So it's sort of. Yeah, and then someone else mentioned this Torment, Torment Nexus meme, you know.  

Charlie Jane Anders: Yeah, I know the Torment Nexus.  

Alex Hanna: Yes, a classic.  

Charlie Jane Anders: We all love the Torment Nexus. I mean, I feel like I get very itchy whenever people try to encode ethics as like an algorithm or as like a series of logic gates because I feel like In real life, ethics is never that. 

Like, you know, it's the same thing where like people get obsessed with the trolley problem, which I find the trolley problem kind of overexposed at this point. I think that a lot of--like ethics has a component of logic, but it also has a lot of components that are squishy and human and kind of for lack of a better term, kind of situational or, or, you know, dependent on the particulars of, of your-- where, where you stand. And dependent on like things like empathy that are not like intrinsically about logic.  

I don't think that you could, I don't, I, people have tried over and over throughout history to come up with like ethical frameworks that can just be like a marble goes through the thing and it falls through this thing and then, you know, a series of, of gates or whatever. 

And that's not how ethics work in real life, I think.  

Emily M. Bender: Not at all. I've actually taken to avoiding the word ethics. So I teach a class that used to be called, um, 'ethics and natural language processing.' And now it's called 'societal impacts of language technology.'  

And there's a couple of reasons for that. One is that when you talk about something being ethical or not, people tend to get very personally offended. 

Like, how dare you call me unethical? And like, that's not the point. Um, but also in the very first time I taught this, I go, Hey, let's not reinvent the wheel. Let's go read some of the philosophical literature on ethics. And aside from the fact that we had to, like, shut down all conversation about the trolley problem because it was irrelevant to the kind of stuff we were worrying about, um, they also, I found that it actually wasn't that applicable because what we were reading in the philosophical literature seemed to be about solving problems where you had people with conflicting desires and needs, but they all started on a roughly equal playing field. 

And that's not the situation that we're in. Um, you know, Sam Altman does not need to have as much protection as, you know, the people who are getting arrested because of false matches in automatic facial recognition.  

Annalee Newitz: Yeah. Or people who can't get into their building because they're using facial recognition to, uh, for secure access. 

Yeah, that's a really good point. I love, I love the idea of reframing it as social impacts. And also that's how justice kind of comes into it too. And that's not--justice isn't always a conversation you can have in the context of ethics, which are more personal, like you said.  

Emily M. Bender: Yeah. Yeah.  

Alex Hanna: I would, I would at least reclaim a little bit of ethics in so far as, you know, one must not dispense of ethics. 

And this comes from like a lot of conversations with folks like Shannon Vallor and Anna, Anna Lauren Hoffman and two, two folks who are kind of trained in ethics. And Shannon, you know, uh, is really, um, really, really, really sharp in virtue ethics and has a great book on that. And Anna has written a lot about Rawls and the kind of idea of like, yeah, the kind of Rawlsian, you know, like--  

Charlie Jane Anders: Oh god. 

Annalee Newitz: Charlie loves Rawls.  

Charlie Jane Anders: I have a complicated relationship with Rawls. 

It's actually my dad hated Rawls. I don't know. It's a whole thing.  

Alex Hanna: I feel like everyone who is--  

Charlie Jane Anders: I'm sort of obsessed with Rawls.  

Alex Hanna: I feel like everybody who is a Rawlsian scholar has a complicated relationship with Rawls because you're just like, you know, like, the Veil of Ignorance, what's that? I don't know her. 

Like, and it's, it's sort of like that. (laughter)  

Annalee Newitz: How many colors does the Veil of Ignorance come in? That's what I want to know.  

Alex Hanna: Like, do the Sisters of Perpetual Indulgence wear several Veils of Ignorance? Like, I just don't know.  

Charlie Jane Anders: Really good question.  

Annalee Newitz: Sparkly veil? Like, yeah.  

Alex Hanna: Yeah. Sparkly, yeah. Beyond the Veil. And it's, and it's just, um, you know, and so, you know, but you always have to sort of do, you know, an amended sort of Rawls in, in this, and I mean, and there's some very, you know, interesting kind of takes on, you know, starting from like in Rawlsian ethics and, you know, amongst them, like, um.  

And, and if I, if this seems wrong, someone in the chat, please correct me, but like, like someone like Charles Mills, uh, you know, who's coming from this position of like the racial contract and thinking about, okay, we have this idea of this kind of contract of ethics, but that only applies to white people, right? Like, this is not meant to include, uh, any of the, any, any colonized people, right? And so then you're, you're like, okay, and what can we do with that? Right? Um, so. Yeah. Anyways. All right. I don't want to go too far down the, the actual philosophical rabbit hole. (laughter)  

Emily M. Bender: So, but I think it's a good touchstone to have in mind as we move to our next artifacts because like that, that's where we would want to go to really think about how can studies of ethics inform how we think about these things. And as we look to what our government is saying, um, maybe we're going to find otherwise, in terms of what their touch points are.  

So I'm going to take us over to, um, the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy." And this came out from the Bureau of Arms Control, Deterrence, and Stability, which is a part of the U. S. Department of State. 

Um, and, uh, this is, uh, let's see, there's a quote from November 1st, 2023, so that tells us roughly when this came out. But basically the U. S. is party to this, um, political declaration of responsible military use of artificial intelligence and autonomy, um, that was launched in February, 2023 at something called the Responsible AI in the Military Domain Summit, um, REAIM 2023 in the Hague, um. And I'm just reading here now. It says the declaration "aims to build international consensus around responsible behavior and guide states' development, deployment and use of military AI."  

So I gather this is basically a set of principles that a bunch of nation states have signed on to, um, that is meant to be some sort of a, not exactly a treaty, I guess, but shared governance. 

Um, so, uh and we have a list of endorsing states here, which is kind of short, actually, if you think about the number of countries in the world. Um, I guess it's, uh, it's also not, it's not nothing. 

Annalee Newitz: Yup.  

Alex Hanna: Where's the longer one?  

Emily M. Bender: The longer version is here. Yeah?  

Alex Hanna: Yeah. So the longer one has the actual principles and, and it's actually, um, you know, like--  

Charlie Jane Anders: It's from November. 

Annalee Newitz: Yeah. This is dated late last year. Yeah. I've definitely looked at this. I feel like this is something that grew out of, uh, the UN, uh, originally, and is now kind of in the, I guess, military realm?  

Charlie Jane Anders: So, Alex, sorry, what were you gonna say?  

Alex Hanna: Oh, I, I wanted to read a little bit of the, like, um, the actual principles. 

And for me, this actually reminds me a lot of some of the corporate principles that have that, that--there was a spate of them that were being developed in late, the late, uh, kind of in, in 2020, yeah, the late teens. Uh, I want to, I was going to say the late twenties and I was like, uh, I feel, I feel old doing that. 

And, um, but, you know, so it reminds me a lot of the Google AI principles.  

So, for instance, it says, "The endorsing states believe that the following measures should be implemented in development, deployment and use of military capabilities, including those enabling autonomous functions and systems."  

So, uh, I won't read all these, um, because it's long. Uh, but, "A. States should ensure their military organizations adopt and implement these principles for responsible development, deployment--" And so this is basically the same. "State should use these."  

Emily M. Bender: Um, very self referential.  

Alex Hanna: Yes. "States should take appropriate steps, such as legal reviews to ensure their military AI capabilities will be consistent with their respective obligations under international law and in particular international human humanitarian law." 

Okay. So. This shouldn't break the law. Fair. Um, and then I think the thing that really gets me on this, and this also finds some replication in the other documents, is "States should take proactive steps to minimize unintended bias in military AI capabilities." And I just think it's so funny, if one can use that kind of term for it, that there is this kind of idea of bias kind of in the act of warmaking as if these, you know, tools were not hyper-targeted against like Black and Brown people abroad. And as if the military, you know, wasn't, wasn't, you know, oriented towards the destruction of so many people that are not like, uh, you know, that are not Anglo-American. 

And so it's it's really, um, and I, and you kind of see that as a meme that comes out throughout. Yeah. So this, this reads to me as a bit of, um, principles with that flavor, uh, that is just, uh, just infuriating. Anyways, I'll let y'all talk about it.  

Emily M. Bender: Yeah. Thoughts?  

Charlie Jane Anders: I mean, this is actually making me wish Asimov's laws, like, this is actually making me nostalgic for Asimov's laws, because at least with Asimov's laws, there is that very definite 'do not murder people' right at the start. 

Like, for all we hate on Asimov's three laws, this is basically saying, like, how can we murder people ethically?  

Alex Hanna: Right.  

Charlie Jane Anders: How can we inflict mass casualties in the kindest and least biased and most responsible way. And it, it's, it's just, it's, it's really disturbing. It feels like a fig leaf. It just, I don't know, it, it's actually really upsetting. 

Annalee Newitz: Yeah. Especially, I mean, it is funny in that, in that section that you just read, Alex, about 'Minimize unintended bias in military AI capabilities,' which is implicitly, yeah, like we, there also is in any military conflict, intentional bias, you are murdering the other, you're murdering your adversary, or trying to disable them or force them to surrender. 

And so, yeah, like, how do you build something that is biased against your adversary, but not against people in your own, on your own side who look just like the adversary? Um, you know, it's, it, it is really--I, I get what they're trying to do. They're trying to say like, all right, you know, let's take humanitarian, uh, ideals into account. 

But, um, I think we need, I think we need non-military regulation of AI first, and it seems like a weird, um, seems a bit backwards to be like, well first we're gonna make sure that it only murders the guys we want it to murder, and then later we'll try to implement something that's like a little bit more general about like not murdering anybody. 

Emily M. Bender: There's a great comment in the chat here. Arestelle says, "This should have been one word. No."  

 (laughter)  

Charlie Jane Anders: Yeah, I was, that pretty much sums up my thoughts. The only ethical or responsible approach to autonomous, you know, weaponry or, or AI driven weaponry is to not use it. Like that's, that's basically it. There's no other ethical or moral--really there's no other moral, uh, approach.  

I I feel like, I mean, it is important, like this is a podcast about AI hype. So I think that, uh, it's important to kind of step back and realize that we're not talking about systems that can think for themselves. We're not talking about systems that actually can make, you know, reasoned choices or, or kind of--we're talking about systems that are basically, you know, people can point them in a direction and they will carry out an algorithm and that algorithm will result in like mass casualties. 

But it's, you know, we're, we're, we're the ones deploying it. We're the ones aiming it. We're the ones sitting it out there. And this is basically just saying we want to make sure that it doesn't, it kills the people we want it to kill and doesn't kill people we don't want it to kill, basically.  

Alex Hanna: When you, when you see something like this--I was going to move us in a little direction. 

Go ahead, Emily, and then I want to jump in on something.  

Emily M. Bender: I'm just going to say that it also um, seems to be really about plausible deniability. And like if you, if you can say all these things and, and say, and this is also sort of foreshadowing the, the next artifact a little bit, say, well, it passed all the tests. The system is ethical. So therefore we don't have to worry about using it. Therefore something goes wrong. It's just an unfortunate accident. Um, and not, you know, it, it was unintended. We did our best, but you know, sorry, there was collateral damage. Right?  

Which is, is really, um, chilling.  

Um, and Charlie Jane, I appreciate the point that this is, um, anytime they use the phrase 'AI,' it is suggesting something more independent, like something that's capable of reasoning, as you're saying. 

Um, although I do appreciate that they talk about, uh--well, maybe not. So my constant advice to policymakers is you're not trying to regulate AI, you're trying to regulate the use of automation. And they say autonomy here, which isn't the same thing.  

Charlie Jane Anders: Yeah, what I wanted to say really quickly is that a bomb with a timer is an autonomous system. 

You set the timer, you walk away, it's now autonomous. It's going to blow up on its own. You don't need to do anything else. You set the timer. So, you know, that's--  

Emily M. Bender: Unless unless MacGyver gets there, or MacGruber.  

Charlie Jane Anders: Unless, unless Tom Cruise is there and he cuts the red wire or whatever. Um, but you know, a bomb with a timer is as autonomous, like conceptually, as what we're talking about here. 

This is more sophisticated, but it's the same principle.  

Emily M. Bender: That's such an excellent point.  

Annalee Newitz: Also, like one more thing, to give them a tiny bit of credit here. Um, the last two, uh, things that we can see on the screen here, F and G, so sorry, there's more, but the ones that I'm looking at here, um, actually are aimed at, uh, training. 

So it's the idea that you would train, uh, military personnel to understand how to use AI, um, and that also military AI, um, should, uh, signal to their human users, um, what their abilities are and what they're doing and that, and that kind of thing. Like it's sort of trying to mandate like a UX that people can have to figure out why the AI is suggesting what it's suggesting or whatever it's doing. 

We don't know because it's just, they're just saying AI. So they're not explaining where the AI will be used. Um, but I do think it's good that at least they're nodding to the fact that this does require money or resources toward training people and having AI systems that are not just a black box, but they give the operator a sense of what's going on. 

Um, so that's good. I mean, we would want that in any AI system. We want that in autonomous car systems, for example. We want drivers to be able to understand why the car is doing what it's doing.  

Emily M. Bender: Yeah.  

Alex Hanna: Yeah, I think there's some parts of it and this is we're going to touch on this a little bit in the next, I think the next artifact, but the kind of idea here as saying you know, there is kind of an element of this, which is explainable or at least governable by some kind of a human agent, but there's sort of this in two registers. So the first one being so F says, "States should ensure that military capabilities are developed with methodologies, data sources, design procedures and documentation that are transparent to and auditable by relevant defense personnel."  

And this is something that I know Emily and I say often to policymakers, which is, um, you know, like, where's the data? Where's the training? You know, where's the documentation? And often, you know, that is, um, you know, not available, just by virtue of the product, as in, you know, GPT-4 and most large language models, really, you have no ability to get into the training data. 

Um, but then there's also kind of an element of it is, you know, when you say there needs to be some kind of an, um, some kind of a, um, uh, place for humans to intervene, um, you can't really actionably intervene. Uh, so in, for instance, in self driving cars, um, I'm remembering there's this part of Karen Levy's book, uh, which is called 'Data Driven,' and it's about, about self driving long haulers, self driving big rigs. 

And, uh, well, actually most of the book is about kind of data surveillance in big rigs, but then she has a second section in the conclusion on self driving in big rigs. And this idea of basically ceding complete control to the car, allowing humans to intervene. It's kind of wild because you're effectively forcing someone to do nothing for, you know, 58 minutes and then the moment in which they have to intervene, they have basically a space of two seconds. 

And then, and humans basically can't intervene--you know, I think they were doing tests on this and some kind of a simulated environment and really couldn't intervene fully. Uh, it basically took them 17 seconds to actually figure out what was going on and actually get regain control of such a thing. Um, so, you know, this kind of semi-autonomous or kind of human control kind of thing is, is a bit of a, it's, it's a bit of a-- it's a false promise. You know, you're just asking someone to zone out for 99 percent of the time, but they have to be incredibly on and ready to do emergency action in 1 percent of the time.  

Annalee Newitz: Yeah.  

Emily M. Bender: Yeah.  

I'm reminded of the, um, the Boeing planes that went down, the 737 Max 8s, I think one was in Ethiopia and the other maybe in Indonesia. 

And it turned out that that part of the issue was that there was some sort of automatic system that was not even overridable. So I think there's two layers here. One is, is it physically possible? Is, you know, is the, is the override actually installed? And then is it psychologically possible? Like, is, is it actually, could a person carry out that function in the timing allowed? 

Um, and this is before we even get to automation bias, right? So like you might have someone who is, you know, really primed and really motivated to distrust the machine. And yet, do they actually have enough time? And I'm reminded a bit of some of the stuff we talked about with, um, applications in the medical field, where, you know, doctors don't have time to do enough charting, whatever. 

So let's just have this automation, automated systems doing it. And the doctors can check and like fix any errors, but they're not going to have the time. Right, which is a different time scale, but same problem, I think.  

Alex Hanna: Yeah. And IrateLump in the chat says, "Worst of both worlds, humans get to take the blame while machines get all the credit." 

It reminds me of Madeleine Clare Elish's concept of the human, the moral crumple zone, the idea of which, you know, if you're a crash test dummy, uh, you know, uh, kind of a, the human gets to be the crash test dummy, but in terms of moral obligation. Um, yeah, so great stuff.  

Emily M. Bender: All right. So I gotta take us to our next artifact here. 

And this is, I was, I was chatting with a colleague who shall remain anonymous, um, at the conference that I was at at the end of last week and mentioned that we were going to be doing this episode with such amazing guests--and this, this, uh, anonymous colleague is also a science fiction fan--um, and how we're gonna be talking about Isaac Asimov's three laws and the sort of governmental stuff. And they said, Oh, are you talking about that new broad area announcement from DARPA called ASIMOV? And I said what now?  

Alex Hanna: No. 

Emily M. Bender: So shout out to the colleague. Thank you. Um, here is this ridiculous thing. "Autonomy Standards and Ideals with Military Operational Values." AKA ASIMOV. That came out of DARPA. Abstracts were due February 14th. Anybody who was hoping to apply and is just now hearing about it, you're too late. But let's take it down to the description here. 

So, I'll read this and then love to hear from our guests, um, what they think. So, "The Autonomy Standards and Ideals with Military Operational Values [ASIMOV] program aims to develop benchmarks to objectively and quantitatively measure the ethical difficulty of future autonomy use cases and readiness of autonomous systems to perform in those use cases within the context of military operational values." 

Maybe I'll stop there. What do we think? 

Annalee Newitz: I mean, it's definitely a great example of taking the three laws and trying to implement them in technology, um, technology that I'm not sure anyone understood who wrote this, uh, call for, um, submissions. But, uh, yeah. Also I want to point out the, um, this in the second paragraph begins with this idea that, uh, where they say, "The ASIMOV program intends to create the ethical autonomy lingua franca." (laughter)  

Alex Hanna: Yes, I read this and just sent it to Emily, just with all the exclamation marks. Like, oh my gosh. Uh, what is, this is, this is an incredible, incredible phrase. I, I, I want to do something with "ethical autonomy lingua franca," it just seems like, uh, something, this should be a tattoo or a post punk, like, like, band or something. 

I don't know. I'm just trying to mash up--it's just, they certainly all are words.  

Charlie Jane Anders: Yeah. I mean, the, the phrase that jumped out at me actually was in the first paragraph, a little bit further down, it talks about, uh, the rapid development of it and, and you "impending ubiquity," which I'm like, oof, of, "of autonomy and artificial uh, you know, AI technologies across civilian and military applications require a robust and quantitative framework um to measure the ethical ability of autonomous systems as they emerge beyond R&D." And I'm like everything about that sentence makes me itchy including the 'impending ubiquity,' but also the 'robust and quantitative framework.' I'm like, how do you have a robust and quantitative framework for ethical ability? 

Like what does that look like? Like How do you quantify ethics? How do you, how do you create a framework that is like, okay, if x, then it's ethical. If y, then it's not ethical. That feels like, this feels like pie in the sky. Honestly, it feels very wishful. And it feels like, again, what we were talking about with the other stuff, which is that this is trying to come up with an ethical framework for something that can be never, can never be morally justified, can never be justified in any way. 

The military use of autonomous systems is just intrinsically immoral and intrinsically antihuman and anti, it's just, it's another step down the, the steep staircase to hell. Um, and yeah, I, I just, I feel like the idea that ethics is something that can be quantified or could be like turned into a framework is the work of someone who has never thought about morality or ethics or the value of human life for like a second.  

Annalee Newitz: It really reminds me of, um, what happens when a company wants to reassure the public that its tech, that its tech products are secure. And there's a bunch of different organizations out there that will give you a framework to evaluate whether a security vulnerability is, you know, code red, you know, is it, is it, is it, uh, dangerous? Is it minor? 

Um, and within the computer security community, of course, the oftentimes these rubrics are laughed at because it's, how do you quantify whether a vulnerability is, you know, high stakes or low stakes? Again, it's contextual, um, something that a vulnerability might be in one place, you know, relatively no big deal, but could also become a really big deal. 

Um, and those kinds of systems though, those kinds of rubrics really help companies that want to basically rubber stamp their products, right? They want to be able to say like, yes, we've subjected this to a security review and we've quantified our risk. And our risk is, you know, zero on a scale of one to five or, you know, whatever. 

And that's what they want to do here is they want to have someone come in with a clipboard and check off a bunch of boxes and then say, yep, this is certified ethical. Go ahead and shoot some civilians. I mean, shoot some bad guys.  

Charlie Jane Anders: Yeah, I love what Annalee just said. It reminds me of when I used to be a business reporter. 

And one time I was reporting on this law. I don't remember what law it was, but I was talking to somebody, some attorney or somebody and they were like, this is basically a 'full employment for consultants law.' Like it's just gonna create so many jobs for useless consultants and like, it'll be great for the consultant industry, it won't do anything for anybody else, but a lot of consultants are gonna get a lot of billable hours out of this and yay for them. 

Alex Hanna: Right.  

Charlie Jane Anders: You know, but--  

Emily M. Bender: And what's going on. Yeah. No, go ahead.  

Charlie Jane Anders: Yeah, I was, you know, you, you go.  

Emily M. Bender: I was just going to say building on what Annalee was saying, it's, in this case, it's an automated clipboard. So the performers, and this is DARPA speak for the researchers, the performers are supposed to build these systems that can generate scenarios in which the, uh, various systems that might be used can be tested. 

And that's, that's what they're putting this money into. And I want to take us over to the longer document that not everyone has had time to read because like I said, late breaking, I just discovered this, I think on Friday. Okay. Um. Okay. But I'm actually kind of curious, do we, do we know, I didn't see how much money was in here, but I wanted to look at a couple of things here. 

One is there's this thing way far down. They say, uh, I am bad at, um. Oh no, hold on. Sorry. Someone else talk while I--I lost my thing.  

Alex Hanna: Yeah. Yeah. I think there's a, there's a few things in this, which, which really strike me. And I, I know where, I know where, uh, um Emily's going, uh, so I'm going to vamp for a bit, but there's, there's such, I'm looking at the longer one, the technical approach. 

It reads, "The focus of ASIMOV is twofold: develop and demonstrate the utility of quantifiable, independently verifiable, and applicable autonomy benchmarks. And two, test the efficacy of those benchmarks as it pertains to the five RAI--" Which is Responsible AI. "--ethical principles, using realistic and increasingly complex military use cases in a generative modeling environment supported by data collections in the second phase of the program." 

Oh, okay. So that's, that's such a mouthful and it's so, you know, and someone in the chat was like, yeah, this is real. Black Angus Shleem, great name, um, that this is really Pentagon speak, but it's just, you know, it's just a kind of a word salad at this point, but it's also the idea of just thinking about what these kinds of, you know, you're trying to put, um, you know, as you mentioned, Annalee, this kind of quanti-certification with a verifiability on something that's a benchmark. Um, and it actually does have some kind of comparison as I was reading this to, uh, the, um, auto, uh, uh, self driving cars element of it, where there's these different--six, I've seen six categories of automation, which ranges from like the first half are human control. The second half is plausibly machine control with kind of different aspects of that.  

But I mean, what does this mean that you're going to quantify any of it? I mean, in this case, you're going to basically reduce it to some degree of, um, of, uh, of, of, um, you know, like, if you kill this many people, but you don't kill this many people, that seems okay. 

And actually, I'm just, now I'm just actually feeding it now back to Emily to tee up, to tee up what she's about to say.  

Emily M. Bender: So, "Much like confusion matrices are used to determine the technical performance of classification models for a given set of test data, DARPA believes similar methods may one day be used to assess the ethical performance of autonomous systems to understand the reasoning reasoning and repercussions of ethical true positives, false negatives, false positives, and true negatives."  

So like, you know, confusion matrices are basically if we have to classify, you know, a set of pictures as dog or cat, how many times did actual dogs get classified as cats and vice versa? And if you've got more labels, it's interesting to see which labels are frequently confused with each other. 

And that's a reasonably well understood, well grounded kind of evaluation for very simple tasks. It makes no sense here. Like it's, it's, as a point of reference for an analogy, it's ridiculous.  
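
(For illustration: a minimal sketch of the confusion matrix Emily describes, computed over a handful of made-up dog/cat gold labels and predictions; the data here is purely hypothetical.)

```python
from collections import Counter

# Toy illustration of a confusion matrix: gold labels vs. a classifier's
# predictions for a few dog/cat pictures (made-up data).
gold      = ["dog", "dog", "cat", "cat", "dog", "cat"]
predicted = ["dog", "cat", "cat", "dog", "dog", "cat"]

counts = Counter(zip(gold, predicted))
labels = ["dog", "cat"]

print("gold \\ predicted:", *labels, sep="\t")
for g in labels:
    print(g, *(counts[(g, p)] for p in labels), sep="\t")

# Prints a 2x2 table: row = gold label, column = predicted label.
# dog row: dog 2, cat 1   (one actual dog was classified as a cat)
# cat row: dog 1, cat 2   (one actual cat was classified as a dog)
```

For dogs versus cats the gold labels are uncontroversial; the analogy in the DARPA text assumes an equally settled ground truth for what counts as "ethical."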

Annalee Newitz: I guess what they're trying to say is that we want to see how often the model confuses, gets a false positive on ethics versus a--because they're trying to set up that there's an ethical true positive and an ethical false positive, right? That we can, we can sort of say, like, for sure, this situation is ethical. Um, for sure, this situation isn't ethical. So, like, how often do they get confused? I'm, I, again. I don't think this is actually possible, but I think that's what they're wishing for, is they're wishing for ethics to be able to, to boil it down to, you know, absolutely 100%. 

We know certain situations are totally ethical.  

Um, but it just sounds like trolley problem again, you know, like we're going to solve the trolley problem and then we're going to have the right answer. And then we can judge these systems based on whether they give the right answer to the famously unanswerable questions. 

Emily M. Bender: And then check them off. Say this is certified ethical.  

Charlie Jane Anders: Yeah. I mean, um, sorry, I had a thought and now I've lost it, but I, I feel like that, yeah, the idea that like an ethical situation versus an unethical situation is like dog versus cat. And like, you can tell that like, it just, it feels like wishful thinking.  

I guess what I was going to say is I'm going to be generous for a second and try to like come up with the most positive interpretation of this, which is okay, there is a seriously bad, bad, bad person out there who needs to be killed or else they're going to, you know, release antimatter and like, you know, we'll all the entire planet will be vaporized. And we need to kill this person, but there's also like a busload of nuns nearby, uh, on vacation.  

Um, and so we want to, the ethical thing to do is, if at all possible, to kill the bad person with the antimatter and not harm the busload of nuns because they're nice and we don't want to hurt them. And so try to minimize that kind of collateral damage, I guess, is what you mean by ethical in this kind of situation. 

That's the thing. I'm trying to concretely understand what they mean by ethics when their starting position is we're going to be murdering people. Um, and I think what they mean is, minimizing collateral damage, minimizing, like, target selection, where the target is just someone who looks like who, if you do a Google image search on terrorists, I've never done that Google image search, but I'm going to guess you're going to get some really upsetting stuff if you do that Google image search. 

So how do you prevent it from just choosing targets based on, well, this looks like a terrorist because that's what I saw on a Google image search. Um, I feel like ethics is the wrong way to think about this, probably. I think the right way to think about this is I actually don't know what the right way to think about this is because I just am so opposed to this at all. 

Annalee Newitz: I mean, I think what they're really aiming for is they want accuracy with minimal collateral damage.  

Charlie Jane Anders: Right.  

Annalee Newitz: And so they're not really thinking--for, for the military in that kind of situation um, ethics really are about not having bias, that kind of bias that would do exactly what you're describing, Charlie Jane, where you say, oh, there's a brown person, they must be a bad guy.  

So the military wants to eliminate that. They want to get the right bad guy at--not just based on them looking like, you know, uh, racial profile. Um, and they, and of course they want to minimize collateral damage, like 100 percent. Like, I think, you know, whatever you think about the military, like they don't, they don't want to kill good guys. 

They don't want to kill civilians, um, they don't even want to kill soldiers unless they have to, you know. Um, but this is not ethics. And I think we need to just stop pretending like this is an ethical question and go back to saying what we want is accuracy and lack of collateral damage. And by accuracy, what we mean is getting rid of racial bias, getting rid of other kinds of bias in our facial recognition and body recognition software. 

Um, but we haven't solved that yet.  

Alex Hanna: I would say, I would say that this table below, they have, it seems to indicate--what I'm reading this as, and this is, I'm not going to read this whole thing because we're getting out of time and we want to have some time for even worse hell. Um, but you know, the first column is "Target Sensing," which this, I read this as sort of like a set of tasks. 

So "target search, object detection, target identification," uh, et cetera, and then context, "Context Reasoning." So I'm assuming this means like you want to have kind of a set of data across, you know, different weather. I probably call this a different set of like covariants. And then "Engagement Reasoning," which I guess is sort of stuff around this, which is, but this actually kind of interesting too, because even within that, there's some insert uncertainty. 

"Commander's intent, ROE," which I imagine means rules of engagement, um, "target ID accuracy," so do we even know if this target is the right target to go after, uh "collateral damage estimate, proportional force used, um, chain of evidence," be able to move. So it's just like this whole thing, right, and I mean, you know, like, I'll be, you know, let's be frank, you know, like, if you are in a defense oriented industry, you're trying to do things which sort of accord with a set of principles. 

War ostensibly has rules, and you want to accord with those rules. And if you are accepting those premises, then yeah, you don't really think about, you know, colonialism and the violence of this. I mean, we're at a point and we talk, I think we talked about, I don't know if we talked about the, uh, the +972 Magazine report on the, uh, the Gospel system that Israel is using in Gaza now, but the idea of just like the mass death that's happening there right now and how that's just incredibly expanded the available targets to include what they call power targets. 

Uh, Israel, oh yeah, Israel's not a signatory to this, so yeah, that's helpful.  

Emily M. Bender: I, I, I bumped back to the political declaration. Okay, we've got to AI Hell, and Alex, you gave me your prompt this time.  

Alex Hanna: So I've completely forgot, I completely forgot what it was.  

Emily M. Bender: You didn't know you were doing it. You didn't know you were doing it. 

Your prompt is, what was the style of music you said that Ethical Autonomy Lingua Franca would be specializing in?  

Alex Hanna: Oh, like post punk twee? I don't know what that means, Emily.  

Emily M. Bender: Okay, well that's all right, because this is improv. So, you are going to give us a couple of bars of Ethical Autonomy Lingua Franca's first hit single in the genre, post punk twee. 

And it is a single about, of course, AI Hell.  

Alex Hanna: All right. Hey, we're, uh, Ethical Autonomy Lingua Franca. Uh, this is, um, this is Fresh AI Hell, our first album. Um, and this is, uh, this is our song, Rat Balls  

[Singing] Rat balls. Da na na na na na na na na na na ratballs. Why so big? Why so big? Why so long? Ratballs. 

Charlie Jane Anders: That was amazing!  

Annalee Newitz: Okay, I'm ready to buy this album.  

Alex Hanna: I'm ready to write the rest of it.  

Annalee Newitz: Take my, take my Dogecoin.  

Emily M. Bender: Oh man, um, before we get to the--  

Charlie Jane Anders: How many Dogecoin is Ratballs? I don't even know. Um,  

Annalee Newitz: Ratball is stuffed with Dogecoin.  

Alex Hanna: I think rat balls are their own, yeah, AI rat balls, my new cryptocurrency. 

We're gonna take the money from book sales and just invest it in starting a new crypto. You heard it here first.  

Annalee Newitz: That is a, that's gonna be a real pump and dump scheme.  

Charlie Jane Anders: (laughter) Oh god.  

Annalee Newitz: I'm sorry.  

Emily M. Bender: Before we get to the actual rat ball story, which was not brought out of nowhere, Alex was foreshadowing. We have, uh, for the first AI Hell entry here, a tweet from Peter Wang um, quote tweeting Yann LeCun, who says, Yann LeCun says, Uh, October 20th, 2023, "Chatting with @ Tim O'Reilly today. He said, 'books are a user interface to knowledge.' That's what AI assistants are poised to become. 'AI assistants will be a better user interface to knowledge.'" By the way, look at that date. 

That's just, no, take it back. That's a whole year after Galactica. So he's still on his nonsense. And then the Peter Wang tweet, quote, tweeting that says, "I think we will stop publishing books, but instead publish 'thunks,' which are nuggets of thought that we can interact with the 'reader,' sorry, that can interact with the 'reader' in a dynamic and multimedia way. There can still be classic linear 'passive read mode', but that can be auto generated based on the recipient's level of existing context and knowledge." 

So, we have some renowned authors here. Are you getting ready to write your thunks?  

Charlie Jane Anders: Oh my gosh. I, this just, uh, makes me, this, tell me you hate reading without telling me you hate reading. Like basically, tell me you hate books and narrative and anything that gives life meaning without telling me you hate those things. It's so dark.  

Annalee Newitz: It really, I feel like this comes up again and again as a fantasy from lots of Silicon Valley guys who are just like, there'll never be a movie that we all watch together again because all of us will have personally tailored movies. As if narrative was like medicine and you need to have personalized versions of each narrative in order to survive the experience of, um, of reading or watching something. 

Um, yeah, it's, this is just really sad and, um, it's hilarious that Peter Wang is saying this, um, as a form of personal communication, like here he is, a human, communicating with other humans, but he's somehow forgotten that that's also what books do. It's a human telling other humans about human stuff. So having, you know, auto generated thunks is kind of gonna ruin the purpose of, um, you know, writing. And reading. 

Alex Hanna: Yeah. And there's, there's, there's a few funny things about this in his profile. Uh, it says, um, "a student of the human condition," uh, which, great. Love it. Love, love to be a humanist that writes thunks. Uh, and I feel like--  

Emily M. Bender: Former physicist. It's the damn physicists. Again.  

Alex Hanna: I know, the physicists are always on this bullshit. 

Uh, I like how the 'reader' is in quotes as if the reader isn't, you know, a human.  

Charlie Jane Anders: Well, the reader is helping to shape the things so they're no longer 'passively taking in content,' which actually misunderstands, sorry, I really quickly--this misunderstands the process of reading. Which is, it's--anybody who has spent any time in the book world is familiar with the idea that no two people read the same book. 

Like, that's just a thing that happens now. I might pick up a book, Annalee might pick up that exact same book. We will have different experiences reading it because we're different people, and the words are being processed in our brains differently. We're like painting different pictures with what we're reading. 

This already happens. This is intrinsic to the technology of books. The other thing I just want to say really quickly is that this kind of at its, at its root, what, what Peter Wang is saying is what people always, people, AI zealots always say, which is that basically the idea is what matters. Like if you could, whatever the one sentence elevator pitch of the book is, whether it's fiction or nonfiction, that's what matters. 

The execution is just a bunch of like blibbity, blibbity, blibbity. It's really the idea. And so you can just like, George Orwell's 1984: surveillance bad, Big Brother scary. Rats eat your face. And [ (laughter) unintelligible] ...writes that book. And you're done. Because the idea is what matters and the execution is just an afterthought, which is so not what makes any of this worthwhile. 

Emily M. Bender: Did someone say rat?  

Annalee Newitz: Yeah let's talk about rat balls.  

Emily M. Bender: I'm saying this quickly because we've got to get to-- 

Charlie Jane Anders: Let's talk about rat balls. We've got big ratballs. You've got big ratballs. Who's got the biggest ratballs of them all? We need to know.  

Alex Hanna: Oh gosh.  

Annalee Newitz: I also like at the bottom, the 'stemmm cells.'  

Emily M. Bender: Oh, it's, it's all, it's like, it's all--so 'retat.' 

'DCK.' 

Alex Hanna: Um, yeah. Yeah.  

Emily M. Bender: 'Ill--' (laughter)  

Alex Hanna: Yeah. Yeah. So let's describe what we're looking at. So this is on Mastodon, it was going around, you know.  

Emily M. Bender: Yeah. This is Carl Bergstrom's version of it, but yeah.  

Alex Hanna: Yeah, yeah. It was going around, uh, Twitter and, and the Fediverse. So Carl Bergstrom says, "It takes...Er...big balls, and lots of them, to publish AI generated nonsense art in a scientific paper. And also I don't ever want to hear that Frontiers is not a preda--predatory publisher."  

And so this is Frontiers in...something, I don't know. But, yeah, it's got a rat, and it's, you know, got these giant balls, and--  

Charlie Jane Anders: Also, like, five of them.  

Annalee Newitz: Yeah. (laughter)  

Alex Hanna: And five of them, and there's, like, a phallus that, like, is so large that it goes off the page. 

Emily M. Bender: It's off the charts!  

Charlie Jane Anders: It's actually an umbilical phallus, okay?  

Alex Hanna: Right.  

Charlie Jane Anders: It's just connected to, like, some other autonomous system, I don't know.  

Alex Hanna: And then the rat is, like, looking at its phallus, like, it feels like a big, like, you know, penis go up, go brrrr kind of situation. I don't know, like, that's the only way you can describe it. 

If you haven't seen this, I'm sure you have. The words are nonsense. Uh, yeah. But luckily this is--  

Charlie Jane Anders: "Stem ells."  

Alex Hanna: Yeah, "Stem ells."  

Annalee Newitz: "Dissilced penis."  

Charlie Jane Anders: The way the rat is looking at the giant phallus makes me want to play the like 2001 like, Thus Spoke Zarathustra. (singing) Doom doom doom. Bah! Bah! Bah! Bah!  

Alex Hanna: Yeah, that's actually how, um, Ethical Autonomy Lingua Franca's 'Rat Balls' ends--we bring in the timpani, we have to do a roll. It's really hard because our timpani player, they're really expensive to rent out. Uh, but we have them on the studio recording.  

Okay, let's go to the retraction.  

Emily M. Bender: So, so this is just the paper. So Retraction Watch knows that this thing, um, has been retracted. Um, and I think this is another great example collectively of ridicule as praxis, honestly. 

Like this just got panned, and so it's gone.  

But I promised our guests we would get to this next one. So I'm going to take us to it. Um, this is also a Mastodon post by, um, someone named Allison. Um, "this is utterly beyond parody," and then there's a link to something called Artvy AI. And then the quote is, "Abramovic's work explores themes of endurance, vulnerability and the relationship between the artist and the audience." 

And then further down, so dot dot dot in the quote, "To create AI art in Marina Abramovic's style, we recommend using Artvy, our free AI art generation tool."  

Charlie Jane Anders: Oh, my God.  

Annalee Newitz: That, that way you can create a new relationship between the quote unquote audience and the artist. We're, we're still, we have these notional audiences and readers here in our, in our examples today. 

Yeah.  

Emily M. Bender: Yeah.  

Charlie Jane Anders: Absolutely beyond parody. And yeah, I mean, you know, I feel like 'art' needs to have, like, 20 scare quotes around it when you're talking about AI 'art.' Because 'art' implies some kind of intentionality and some kind of personal stuff that's just not present with any of this stuff, and you're just getting bad derivative garbage, and that's all you're ever going to get. 

Emily M. Bender: Yeah. And, and how is that vulnerability, and how is it a relationship? It's just...  

All right. I have for our stairway out of AI Hell, um, a little bit of accountability, which is refreshing. So this is a piece, uh, from CTV News Vancouver, um, with the headline, "Air Canada chatbot gave a B.C. man the wrong information. Now the airline has to pay for the mistake."  

Um, and basically this fellow, who was, uh, grieving and trying to get to a funeral, uh, asked the chatbot on Air Canada's website, um, how, how does the bereavement fare thing work? And it gave incorrect information. And, uh, Air Canada was like, no, we're, we're not going to, um, honor that commitment. Because, yeah, the chatbot said so, but the chatbot's not us. 

And where's the thing they literally said? Um. Let's see. This is great. I have to find it. Um, yes. "In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While the chatbot has an interactive component, it is still just a part of Air Canada's website." 

Um, and that is Rivers. Do we know who Rivers is? I'm sorry, I have to search.  

Alex Hanna: I forget.  

Charlie Jane Anders: Oh, it's Christopher C. Rivers, tribunal member.  

Emily M. Bender: Yes.  

Charlie Jane Anders: It's a small claims court, basically.  

Annalee Newitz: Yeah. Because they, the chatbot told him that he would get reimbursed and they were like, no, we're not going to reimburse you for your flight. 

And so the--Air Canada argued he should have known. This poor guy, who's bereaved, is supposed to know the difference between what the chatbot says and what the website says. So I'm glad that they're being held responsible for that. Sorry, Charlie.  

Charlie Jane Anders: I mean, I was gonna say that I feel like people focus a lot on the corporate fantasy of AI getting rid of workers, and I think that is an important component. 

The idea that, like, with 'AI'--and I'm putting 'AI' also in scare quotes--uh, the idea that with machine learning or LLMs, you can just dispense with, like, tons of expensive employees and replace them with low-paid employees who are going to do the exact same thing. But I think the other component that people don't talk about enough with these systems is the idea that it'll allow you to escape accountability. 

The idea that, like, if you have these systems in place, there'll be--getting back to Rawls--there'll be a veil that people cannot pierce between the consumer, the customer, the individual, and the corporation, and this AI will just sort of be like a non-human shield: nothing can be blamed on the corporation, because you blame the AI, but also there's no recompense from the AI. 

And I think that that's the ultimate fantasy. That's what corporations always want. They always want maximum power, zero accountability. And this is them trying to get that through having this intermediary that's not a real thing. And I'm glad they were smacked down. I suspect we'll see this over and over again. And my worry is that a year, five years, ten years from now, something like this will happen and it won't be smacked down, uh, because it'll go before some judge who's gullible, who's like, well, you know, they should sue the AI.  

Alex Hanna: But I feel like in some places, yeah, I feel like in some places you're already not getting that. I mean, we have that self-driving car case in, in Arizona, where Elaine Herzberg was, was murdered, uh, by an Uber self-driving car. And I don't know if Uber actually paid out at the end of that or anything, but I don't think the judge ruled that Uber, or the car, was an autonomous agent; I think it was actually the test driver who ended up bearing the liability. 

So, going back to the moral crumple zone thing, the human test operator, uh, you know, took the liability, became accountable for this. Uh, and she was, like, you know, they said that she was, like, playing on her phone or something, uh, even though it was kind of a failure of the self-driving system. But yeah, it's, we're going to see more and more dodging of accountability for sure. 

Emily M. Bender: And on that low note, this was supposed to be a high note, on that low note-- 

Charlie Jane Anders: Sorry, I ruined it, I ruined it, I'm sorry.  

Annalee Newitz: Nope, Air Canada was held accountable, they had to pay this poor guy, uh, they had to pay him back for his overly expensive flight, um, and so yeah, there was a small win.  

Charlie Jane Anders: I think the, the upbeat, the upbeat edge of this story is that we had a win, but in order to keep having those wins, we have to fight and we have to keep pushing back against this false--so like, yay for the win, but also yay for us continuing to fight.  

Emily M. Bender: I love it. I love it. So that's it for this week. Charlie Jane Anders is a science fiction author. Annalee Newitz is a science journalist who also writes science fiction. Together they co-host the awesome podcast Our Opinions Are Correct. 

Thank you both so so much for being on with us today.  

Annalee Newitz: Yes, thank you for having us and we're looking forward to having you guys on our podcast in a couple weeks.  

Emily M. Bender: Same!  

Alex Hanna: So excited! Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. 

If you like this show, you can support us by rating and reviewing us on Apple Podcasts, Spotify, and by donating to DAIR at dair-institute.org. That's D A I R hyphen institute dot org.  

Emily M. Bender: Find us and all our past episodes on PeerTube and wherever you get your podcasts. 

You can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv/dair_institute. Again that's D A I R underscore institute. I'm Emily M. Bender.  

Alex Hanna: And I'm Alex Hanna. Stay out of AI Hell, y'all.