
Mystery AI Hype Theater 3000
Episode 2: "Can Machines Learn To Behave?" Part 2, September 6, 2022
Technology researchers Emily M. Bender and Alex Hanna continue the Mystery AI Hype Theater 3000 series by reading through "Can machines learn how to behave?" by Blaise Aguera y Arcas, a Google VP who works on artificial intelligence.
This episode was recorded in September of 2022, and is the second of three about Aguera y Arcas' post.
You can also watch the video of this episode on PeerTube.
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
ALEX: Welcome everyone!...to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype! We find the worst of it and pop it with the sharpest needles we can find.
EMILY: Along the way, we learn to always read the footnotes. And each time we think we’ve reached peak AI hype -- the summit of bullshit mountain -- we discover there’s worse to come.
I’m Emily M. Bender, a professor of linguistics at the University of Washington.
ALEX: And I’m Alex Hanna, director of research for the Distributed AI Research Institute.
This is episode 2, which we first recorded on September 6th of 2022. And it’s actually one of three looking at a long…long blog post from a Google vice president ostensibly about getting “AIs” to “behave”.
EMILY: We thought we’d get it all done in one hour. Then we did another hour (that's this episode) and then we did a third (episode 3)...If you haven't listened to episode 1 yet, we recommend you go back and do that first.
ALEX HANNA: All right so welcome to the second installment–
EMILY M. BENDER: Here we go.
ALEX HANNA: –of Mystery Science Mystery AI um Mystery AI Hype Theater 3000. And let's introduce ourselves. Tell the folks about yourselves Emily.
EMILY M. BENDER: Yeah about myselves. All of me.
ALEX HANNA: You contain multitudes.
EMILY M. BENDER: Yes um yeah. And we also didn't–we should talk about why we're doing this because we lost that from the recording last time. Recordings going in both places?
ALEX HANNA: Yeah we're totally recording yeah.
EMILY M. BENDER: Excellent. So I'm Emily M. Bender I'm a linguist at the University of Washington. And I am involved in this conversation about AI hype, not because I'm actually interested in building AI, that's not my thing, but because a lot of claims in the AI hype first of all sort of impinge on my research area, which is computational linguistics, and secondly seem to be leading to a lot of harm and potential harm in the world.
So that's how I got into this conversation. How about you? What are you doing here?
ALEX HANNA: Yeah my name is Alex Hanna. I am the Director of Research at the at
the Distributed AI Research Institute and we are–if you're not familiar with DAIR we are an institute that attempts to center um people who are typically left out of conversations about AI and technology and center their perspectives in building with communities rather than sort of for or even against, which happens to happen in AI much more.
So my background is I'm also not an AI researcher by training. I'm actually a sociologist
by training. I'm a sociologist of technology and I entered into this understanding kind of
AI as a bit as a bit of a social technology that needs to be interrogated from that
frame.
EMILY M. BENDER: Yeah yeah and Ben is is waving hi.
ALEX HANNA: Hey Ben. I'm actually trying to see that. I can't get the chat over here. Yeah so let me I've got it on my side screen. So yeah like let's talk about this this sort of project and like the the AI hype
EMILY M. BENDER: Yeah so I think I think we were saying last time that although the part that got cut off um that we've been kicking around this idea for months now.
ALEX HANNA: Totally.
EMILY M. BENDER: Because in our group chat on twitter and–
ALEX HANNA: Yeah.
EMILY M. BENDER: –elsewhere we frequently sort of collectively roll our eyes at the AI hype and lots of snide comments about it and thought that maybe the world should you know get to see that.
ALEX HANNA: Yeah exactly. I mean I am loving this format of like taking group chats and sort of turning them into streams. I was listening to um the um the podcast that is um it's called um oh gosh now I've totally forgotten what the podcast is called. It's by Saeed Jones and Zach Stafford and Sam Sanders um and it's a podcast that I'm going to I'm going to bring up my podcasting uh
thing on my phone just so I remember what it's called so I can give it a proper plug.
Let me go to the bottom. "Vibe Check." And they're sort of like you know they've had their group chat and they're like why don't we turn this into a podcast and so I think that's a great idea. I think more people should do that.
EMILY M. BENDER: Yeah so we're trying to do it too and hopefully we'll be enjoyable in a similar way.
ALEX HANNA: Totally. Let's uh while we get into this you know?
EMILY M. BENDER: Yeah.
EMILY M. BENDER: Before I share my screen, there's a couple of housekeeping things I want to bring up from last time, right?
ALEX HANNA: Yes.
EMILY M. BENDER: So we got super frustrated last time with this notion of talking about an AI or the AIs out there um and you had a great phrase um to refer to them instead as, "mathy-maths."
ALEX HANNA: Yes.
EMILY M. BENDER: So we'll be saying that as we're reading along. We'll probably talk about instead of when the text says an AI we'll be like and a "mathy-math."
ALEX HANNA: Yeah.
EMILY M. BENDER: I just want to share that so we have a we have a live captioner which is wonderful. Thank you so much for being here. And thank you to DAIR for sponsoring the live captioning.
But uh the way we did the recording last time, we didn't have those captured and so I was fixing auto transcription for the subtitles on the YouTube video and it was hilarious what it did with mathy math. So most of the time it came up with "Matthew Math" as if we were talking about
a person. But then once, and it was me because I talk too fast, it instead it transcribed "vacuum apps."
It was so perfect because they're empty like vacuums and also they work by like vacuuming up all this data.
ALEX HANNA: Oh! I like this double entendre. I was thinking when you said you know like when you said um vacuuming up and vacuum apps you know I was already thinking about like I'm sure like Dyson or somebody like really–Well actually this is happening with Roomba right? Like Amazon bought Roomba didn't they? And they're like getting the data from like mapping individual households. So now you have like you're getting like internal layouts of people's houses via this. So vacuum apps is something that exists in the world. Terrifying, right?
EMILY M. BENDER: Yeah so in so many ways but anyways. Yes so mathy maths is what we're going to call them and now we need mathy maths merch.
ALEX HANNA: I know yeah if we if we'll do that yeah I would also do this. I'm thinking about this. I know! Dylan's like I just want to keep this, I just want to keep my dog's fur off the floor yeah no I have some–
EMILY M. BENDER: Right without having to like send all the you know exact layout of your
house to Amazon.
ALEX HANNA: I know right?
EMILY M. BENDER: Yeah.
ALEX HANNA: What a concept. So we started with this um I'm gonna make this text bigger so folks see this a little more um so don't mind me futzing with caption you know caption size on the screen um and so um and so let's talk about this article.
So we're getting into this article which was entitled, "Can march- Machines Learn How to Behave?" by um Blaise Aguera y Arcas.
And apologies for any Spanish speakers out there for mangling um his surname. So we're getting into this article um that um that he wrote.
So Blaise is a VP at Google um wrote this rather long article. By Medium's count about 64 minutes long by reading time and um about this kind of central idea of kind of machine understanding. And implicitly here Blaise in this article is calling out um I think the language of sort of like AI critics or–I'm not quite sure, I forgot what the language is a little further up, but it was sort of about the the–
EMILY M. BENDER: I think it was actually AI ethicists actually.
ALEX HANNA: Yeah, narrow their focus to the problem so I think this is this is implicitly calling out uh um uh uh Emily and Emily Bender and Emily yourself and Timnit and especially the Stochastic Parrots paper um that y'all wrote last year and sort of that and I think and then the sort of other hand is sort of I think this sort of dichotomy is set up by the AI ethicist and then the sort of people who are really into into like “Superintelligence,” the kind of Nick Bostrom kind of people and sort of like.
So that's kind of where we are and that's where we ended this and sort of that we're sort of and so we're sort of in this place we're reading about like the history of AI and understanding this idea of understanding. So like let's go from there. I mean is that a good summary of like where
we ended last time?
EMILY M. BENDER: Yeah I think that in terms of the structure of the piece what we basically
got to last time was just the introduction. And now I think we're diving into the first major section called, "Misunderstanding Intelligence."
I guess the other thing that people don't necessarily know is that I did read this whole thing. It's been a few weeks now um and Alex as of last time had only skimmed it. Did you do any more?
ALEX HANNA: No I didn't do any more homework. I still just skimmed it.
EMILY M. BENDER: No but the idea was that we're going into this pretty fresh actually and just like sharing with people our initial reactions to these things. But just to give you sort of a top-down sense, I think we basically read the introduction and now we're reading the the first section.
ALEX HANNA: We're getting into it.
EMILY M. BENDER: Yeah.
ALEX HANNA: Yeah all right cool so so let's do it so: "Misunderstanding Intelligence." So we're learning about the context of where we are in uh AI. Uh so we're doing this sort of um stylized history here. Okay let's do it. So where AI comes from, "coined by the Dartmouth Summer Research Project on AI. They held that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it–” Okay. “--sought to make it possible for machines to use language, form abstractions, etc. and solve problems."
EMILY M. BENDER: Now, Alex do you know who was actually at that meeting?
ALEX HANNA: uh I don't know I'm assuming it's people like Marvin Minsky and Seymour Papert
and those types of figures right. Would that be correct Emily?
EMILY M. BENDER: So I also don't know but that's my assumption and the reason I'm asking the question um is when they're making these claims about "every aspect of learning or other feature of intelligence." Were they the sort of researchers who had like state-of-the-art knowledge for 1956 of what learning was?
ALEX HANNA: I really doubt it right I mean these are I mean when we talk about the histories of AI at least the ones that exist within the community itself it's sort of discussed I think typically of these people who are sort of these big men of history so Marvin Minsky being one Seymour Papert another both of them at MIT. I'm also thinking Herbert Simon also comes up in these discussions and Herbert Simon was a bit of a kind of a polymath who sort of dabbled in everything but I think was an economist by training. Is that correct? Yeah.
EMILY M. BENDER: Maybe folks in the in the comments are more on top of it than us. But yeah so they're making this big claim about um something that is not computers right? "Every aspect
of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
But I don't think that those sort of people even among you know what people could have known in 1956, they are probably not the best positioned to know that. um and it hasn't gotten any better right? Like artificial intelligence has failed to define what they mean by intelligence at every step.
ALEX HANNA: Right yeah yeah to to kind of paraphrase I think Kate Crawford machine-- I think she's cited in this piece as saying, "Artificial intelligence is neither artificial nor intelligent."
And that's sort of a reiteration of that. So, "While neural networks played some role in their thinking, the Dartmouth researchers invented the term partly to distance themselves from cybernetics, an existing approach to creating machines that could 'think' using continuous values to form predictive models of their environment."
Which is an interesting separation okay.
EMILY M. BENDER: All right, and then we've got a picture of a white dude here yeah.
ALEX HANNA: Norbert Wiener with his Palomilla robot, okay.
EMILY M. BENDER: And a cigar, also.
ALEX HANNA: And a cigar and you know some mathy maths on the board behind him, just you have to have some some calculus behind the board just to show that you're you know a real researcher.
EMILY M. BENDER: Right.
ALEX HANNA: Because if you're not doing calculus on a whiteboard are you actually sciencing bro?
EMILY M. BENDER: Right or blackboard, back in the day right?
ALEX HANNA: Right.
EMILY M. BENDER: All right so: "Despite its ups and downs, the term AI seems here to
stay while cybernetics has sunk into obscurity. Ironically, today's most powerful mathy maths
are very much in the cybernetic tradition. They use virtual quote neurons with continuous
weights and activations to learn functions that make predictive models based on training data."
Do we know who's responsible for this use of the word predict and predictive and prediction? Because it's such a disaster for the field.
ALEX HANNA: I mean it's I mean it's interesting and I'm wondering. I'm curious when you say it's a disaster because I mean I sort of think it is an articulation to say that predictive is different from explanatory.
And here I'm thinking a little bit about the sort of what is it the two cultures paper um of statistics. I forget the author um but this has sort of been replicated in other places where it's sort of been there's the explanatory science or side of this and then there's the predictive side of this. And this is sort of a way of kind of distinguishing from for instance social science which tends to um you know you sort of train models and then you report out um your coefficients and you try to explain some kind of a um you know correlation in a data set.
Whereas prediction is more about whether you can have some kind of accuracy on a held out set. But I'm curious on why you say it's disastrous?
EMILY M. BENDER: So the the problem that I see is that a lot of people seem to think that mathy maths can literally predict the future. So when we repeatedly tell ourselves that its output for the held-out test data is "predictions" about those labels, as if they weren't actually known in advance by somebody, as if it's actually predicting the future, I think that feeds into some of
the things that people like say well I you know I can make an AI system that for example is going to predict where crime is going to take place.
Right where it actually sort of bleeds out into the ordinary use of that term as opposed to the technical use where it you know sounds like it had a had a technical definition that was reasonable um that it was about labeling held-out data um but you know if you look at the ordinary use of the term predict it means you know this is pre-diction, right, saying ahead.
ALEX HANNA: Right right that's a that's a really good point and one I haven't really thought about, the way in which prediction bleeds out from labeling test data to how it's actually predicting a future you know.
EMILY M. BENDER: Yeah.
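A note for readers on the technical sense of "predict" that Emily is pulling apart here: in supervised machine learning, "prediction" just means assigning labels to held-out examples whose true labels somebody already recorded; nothing about the future is being foretold. A minimal sketch of that workflow, with made-up data and scikit-learn used purely as a familiar illustration (not anything from the article):

    # "Prediction" in the ML sense: labeling held-out examples whose true labels
    # were already known to whoever built the dataset. No forecasting involved.
    # Hypothetical data; scikit-learn used purely as a familiar illustration.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # Hold out a test set: its labels are known, just hidden from the model.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = LogisticRegression().fit(X_train, y_train)

    # The model "predicts" labels that were already assigned in advance.
    y_pred = model.predict(X_test)
    print("accuracy on held-out data:", accuracy_score(y_test, y_pred))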
ALEX HANNA: And that is a subtle distinction I think that's super important to note. So, "As recently as 2006 when the surviving members of this group had their 50th reunion, these founders doubted the cybernetic approach could yield any meaningful progress. Overall, the mood was pessimistic; nothing seemed to be working." All right. "Mainstream attempts at AI between 1956 and 2006 had been often based on logic rules and explicit programming. Just like the rest of computing, this approach is now sometimes referred to as GOF- GOF AI. GOFAI, good old-fashioned AI. Much of classic computer science including now-standard data structures and programming patterns were developed in the quest for rule-based AI. In this sense GOFAI was a highly productive research program, even if its greater ambitions missed the mark."
All right. Interesting. All right let's keep on going on this.
EMILY M. BENDER: Keep on going yeah, we can go fast here.
ALEX HANNA: Yeah, yeah. "Combinations were able to beat expert humans at games that could be characterized by fixed rules and the discrete states like checkers and chess."
I'd like to see what the citation is on that.
EMILY M. BENDER: All right what do we have for 17? Oh it's not a citation it's a footnote. "The game of Go was resistant to the brute force approaches that allowed machines to beat humans at chess and checkers. While Go is rule-based, a very large number of moves are possible during any turn with a correspondingly enormous range of possible states of play. Hence Go requires more generalization, both for evaluation of board positions and to decide on the next move;
computers only began outperforming masters of the game using the deep learning approach soon to be described."
So I think what he's saying here is that the search space is too big in Go to brute force
it the way chess and checkers were brute forced.
ALEX HANNA: Yeah I want to take a step back and sort of talk about this sort of notion of intelligence too that is encoding and talking about chess. So um Nathan Ensmenger who's a uh uh historian of computing has a great article uh and I'll drop it in the chat um. I am like forgetting the citation um but it is basically about how uh chess became this kind of stand-in for
intelligence. So you know some of the work that Ensmenger is uh is known for is some
of this work on sort of how kind of um computers became masculinized. This is also something that Janet Abbate has also talked about um and and so the the name of the article is "Is chess the drosophila of artificial intelligence?"
And so you know this is a reference to the kind of notion of fruit flies or Drosophila melanogaster, which is typically used in laboratory settings in genomics as a sort of thing with this really easy to map um DNA structure. Also this kind of type of worm is also used for this in genomics. And so this article "Is chess the drosophila of artificial intelligence? A social history of an algorithm." In this article um uh Ensmenger talks a bit about the way that like chess became this stand-in for uh smarts and for um intelligence, and how chess came to be that stand-in.
So you know and I I don't know how much uh that um Blaise is gonna go into this in the article. From my skimming I don't think he goes into a lot. But the way that AI was so forcefully shaped by cold war politics um and cold war competition.
So in this article Ensmenger talks about how um chess became this really um this really uh
fascinating thing for a few different reasons. One, there was a pretty standardized ranking
system for chess, this Elo system where you can rank a player across their career. And that's
been standardized and been used for years. But also the way in which chess is this kind of thing that also had this kind of masculinist sort of like tie to technical acumen. So like if you were
better at chess you know this made you sort of the better kind of thinker.
You know it was kind of rare to find chess players or programmers who were not you know like expert chess players. Then it also had this nationalistic pride tied to it right? So chess uh you know the grand masters you know the two countries and cold war competition were the Russians and the Americans. You know if you've watched um uh if you watched um oh gosh the uh the uh the show on Netflix–
EMILY M. BENDER: The Queen's Gambit, right?
ALEX HANNA: You know like where this young um I'm forgetting the character's name I'm
so terrible at this. If people drop it in the chat who actually can drop pop culture knowledge.
But like this young woman who uh goes to Russia and plays the grand masters right? um and already in that scene we see this masculinist view of chess right where it's like the big men are going to play chess in Russia and to have this young woman who's you know challenging these grand masters is already something that's upsetting this.
And I would love to sort of say that the way that like you know the the argument here from the purely technical perspective in this footnote in which uh Blaise is saying like you can't actually you know Go uh has too many moves. But Go is also a Chinese game right? And so this idea it would be amazing if Ensmenger or some other historian of technology or computing could write about not only why Go became the sort of new benchmark or this new standard for intelligence.
But of course you can't ignore the nationalism embedded in this right? That there's a comp there is a competition with the Chinese that needs to be sort of like integrated in this. And there's a way in which the Americanism or the Western centrism because AlphaGo I think was
developed by DeepMind, which is based in London.
But that kind of competition with the Chinese also has to come up in these conversations.
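For reference, the Elo rating system Alex mentions works from a simple pair of formulas; these are the standard textbook versions, included here as background rather than taken from the article:

    E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}}, \qquad R_A' = R_A + K\,(S_A - E_A)

where R_A and R_B are the two players' ratings, E_A is player A's expected score, S_A is the actual result of the game (1 for a win, 0.5 for a draw, 0 for a loss), and K controls how quickly ratings move.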
EMILY M. BENDER: Mmm. Okay so popping back up. Where are we? Okay so uh uh, "Combinations of rules and brute force were eventually able to beat expert humans at
games that could be themselves characterized by fixed rules and discrete states like checkers
and chess." Go can also be characterized by fixed rules and discrete states and so he has to have those footnotes saying but it's search base is a bit too big.
"Such approaches made far less headway when it came to using language, forming abstractions and concepts or even being able to make sense of visual and auditory inputs."
And now we're going to dive into this discussion of categories using bicycles. So, "How do we recognize a bicycle? Consider for instance looking at a picture of something and deciding whether it's a bicycle or not. This problem would likely have seemed straightforward, at least initially, to practitioners of good old-fashioned mathy math. They–" Is GOFAI mathy math? I don't know maybe.
ALEX HANNA: I don't know what I I'd like to maybe let's reserve uh mathy math for sort of neural–
EMILY M. BENDER: Deep learning, neural nets.
ALEX HANNA: Also like it when it's AIs plural.
EMILY M. BENDER: Yeah yeah okay good. Got it. All right so "to practitioners of GOFAI. They believed that databases of knowledge encoded in the form of rules and logical propositions could produce intelligence."
Did they? I guess maybe they did. I have I have no horse in that race. "So they set out to encode all the world's quote-unquote facts like wheels are round and a bicycle has two wheels. This turned out to be surprisingly hard to do---impossible even---for a number of reasons."
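To give a concrete flavor of the GOFAI style being described, here is a deliberately toy sketch, invented for this transcript rather than taken from the article, of hand-coded "facts" plus an explicit rule; the interesting part is how quickly such a rule runs into the complications discussed next:

    # Toy GOFAI flavor: hand-coded "facts" and an explicit rule for "bicycle".
    # Entirely made up for illustration; not code from the article.
    FACTS = {
        ("wheel", "is_round"),
        ("bicycle", "wheel_count", 2),
    }

    def looks_like_a_bicycle(parts):
        """Brittle hand-written rule: exactly two round wheels means bicycle."""
        return parts.count("wheel") == 2 and ("wheel", "is_round") in FACTS

    print(looks_like_a_bicycle(["frame", "wheel", "wheel"]))            # True
    print(looks_like_a_bicycle(["frame", "wheel"]))                     # False: one wheel, not a bicycle by this rule
    print(looks_like_a_bicycle(["frame", "wheel", "wheel", "engine"]))  # True, even with an engine; is that still a bike?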
ALEX HANNA: Okay. "We all know a bike when we see one.” Okay. “We have trouble saying it
why–" and I think this is a citation to that um the statement by uh Justice Potter Stewart I know
obscenity or often used in pornography, ‘I know it when I see it.’
EMILY M. BENDER: Yeah.
ALEX HANNA: What is it um, that's sort of, what is it, they're calling people the new pornographers–also where the band got its name from. Coming from a New Pornographers uh um fan girl. "More precisely we can tell plenty of stories about why something is or isn't a bicycle. These stories resist reduction to mechanical rules that fully capture our intuition. It's still a bike if it has an engine–" I'm going to try to speed up so we can get through this.
EMILY M. BENDER: Yeah.
ALEX HANNA: "The complications are endless, so we see a silly bike, we chuckle, but it's a bike." Okay, great.
EMILY M. BENDER: And this is this is a fun picture, a shoe bike.
ALEX HANNA: I like the shoe bike. "The kind of machine learning systems that um succeeded didn't rely on hand-coded rules but learned by examples, recognizing bikes, even silly bikes. Beyond the practical advances this brought including vast improvements in quote narrow AI, working speech recognition–” And we should talk about that.
EMILY M. BENDER: We should.
ALEX HANNA: Because we already talked about that. “--image recognition, video tagging, much else. These approaches offered powerful lessons in knowledge representation, reasoning and even the nature of truth–"
EMILY M. BENDER: Okay.
ALEX HANNA: Okay. "--we haven't come to terms with culturally." Okay this this has a lot of things.
EMILY M. BENDER: Yes.
ALEX HANNA: Yes.
EMILY M. BENDER: All right so first of all he's sort of saying gosh bicycle seems like an easily defined thing and yet the category is fuzzy. Well linguists could have told you that right? Categories are fuzzy, we've known this for a while right? You have you have prototypes there's all the stuff about yeah. And and psychologists too looking into this. So the idea that um well of
course there should be neat boundaries and if there isn't then gosh were we wrong. It's like such a computer science-centric way of thinking about things.
ALEX HANNA: Completely yeah.
EMILY M. BENDER: Yeah um oh and then okay does speech recognition work? Well depends
on what you need it for, depends on whose language variety is being listened to, depends on the noise conditions, depends on...So yes we have a speech recognition for some language
varieties to a level that makes it useful for some things right?
ALEX HANNA: Exactly.
EMILY M. BENDER: But yeah yeah and–
ALEX HANNA: I mean like yeah continue please.
EMILY M. BENDER: Okay so then image recognition, let's back up this. I I would love it if we could come up with alternate terms for these things so instead of speech recognition it's automatic transcription that describes what's happened, rather than coming up with these sort of cognition terms.
So for speech recognition it's automatic transcription. For image recognition, I think that's typically image labeling.
ALEX HANNA: Yeah automated image labeling or even classification would be sort of you know because classification definitely is you know like recognition implying some kind of a self-reflective kind of machine or mathy math doing the work whereas classification can be a task that is done machine based or you know labeling is even if you want to sort of um try to make it you know less cognitive sounding you know
EMILY M. BENDER: Yeah.
ALEX HANNA: Yeah yeah.
EMILY M. BENDER: And then–
ALEX HANNA: And then but then this last one and then this last one the kicker the kicker scroll can you scroll up down a little bit so that yeah yeah these approaches "--powerful lessons about reasoning or even the nature of truth, many of which we can't come to terms with culturally." And I I mean he might be sort of um prefiguring what he's going to be talked you know talked about talking about but it's sort of like "even the nature of truth" is sort of a real doozy to sort of say aloud right.
EMILY M. BENDER: Yeah yeah I mean okay so knowledge representation um right so people trying to build this as sets of rules that were hand coded ran into problems when they were trying to encode world knowledge. Um if you're trying to encode linguistic knowledge like the rules of grammar and stuff that actually is amenable to a rule-based approach. It's long slow painstaking work once you get beyond morphology but it's actually totally doable.
Um but that's a that's more constrained than the whole wide world and things we know about the world. Um so a powerful lesson in knowledge representation is that it's hard maybe? Um yeah but lessons in reasoning? There's no reasoning going on with these things right?
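A tiny concrete example of the contrast Emily is drawing: small fragments of grammar really can be written as explicit, hand-coded rules. The miniature grammar below is invented for illustration and uses NLTK's standard context-free grammar tools; it is not anyone's real linguistic analysis.

    # A hand-written grammar fragment: rule-based approaches work fine here,
    # even if scaling them up is long, slow, painstaking work.
    # Toy grammar invented for illustration; requires the nltk package.
    import nltk

    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> Det N
        VP -> V NP
        Det -> 'a' | 'the'
        N  -> 'linguist' | 'bicycle'
        V  -> 'rides' | 'sees'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the linguist rides a bicycle".split()):
        print(tree)  # the parse licensed by the hand-written rules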
ALEX HANNA: Yeah. Right I'm wondering what then the approaches are sort of telling us about the nature of “truth”? That's so- I and truth is in quotes so I don't know what he's saying. I want to read on.
EMILY M. BENDER: We'll have to figure it out yeah yeah let's see. Okay so yeah you want to read calculemus here?
ALEX HANNA: Calculemus, that's sort of like mathy math. "There's nothing inherently wrong with GOFAI when a problem can be expressed in unambiguous mathematical formulae uh
and logical propositions. We can manipulate them to prove or disprove statements, to explore the implications of a theory. This kind of reasoning is one of the powerful tools that has given us bountiful gifts in math, sciences, and technology.” Okay sure. “Formal reasoning is also limited.
It's also a recent invention in human history–” Uh okay. “--and despite the high hopes of its most ardent practitioners, it occupies a small niche in day-to-day life. Most people aren't
skilled in formal reasoning and it has nothing to say about many human concerns." I'm curious what the citation–
EMILY M. BENDER: Yeah me too. What is 20 there?
ALEX HANNA: It's uh, "Language models show human-like content effects on reasoning" so it's a an arXiv article.
EMILY M. BENDER: And language models don't reason so.
ALEX HANNA: Yeah so that's yeah let's yeah so holding that fixed–I like this, Ben's comment, "truthy truth." "The belief that reasoning could be applied universally found its expression during the Enlightenment. Leibniz, inventor of calculus, believed that one day we could formulate anything."
I don't know enough about Leibniz to say anything conclusively about this. "In this sense he anticipated the GOFAI agenda before anybody had uttered those words. Leibniz imagined that disputes about any topic could be resolved the same way we do we do formal proofs." If and he's quoting here from this Bertrand Russell piece on Leibniz in which Leibniz says if controversies were to arise there would be no more there would be no more need of a dis–" There's a gnat who's like attacking my face. "--no more need of a disputation between
two philosophers than between two accountants. For it would suffice to take their pencils in
their hands, sit down with their slates, and say to each other, with a friend as a witness if they like, let us calculate or calcul calculate calculemus."
EMILY M. BENDER: Calculemus is my guess for what we would do with that in English.
ALEX HANNA: Calculemus? Okay this is really yeah amazing.
EMILY M. BENDER: But I don't know what the stress pattern was in Latin. That's Latin right that's the the Latin conjugation of the verb to calculate.
ALEX HANNA: Yeah. Yeah my Latin stop stops at the words that I find useful for the crosswords so I have nothing.
EMILY M. BENDER: All right so: "There's no reason to doubt that Leibniz meant this literally. He dedicated a significant part of his career." Okay Leibniz was doing this. "Many AI researchers still believed some version of this to be possible throughout the 20th century and a few keep the faith even today though their numbers have dwindled."
ALEX HANNA: Keeping the faith on this okay cool.
EMILY M. BENDER: "Neuroscientists now know that the processes taking place in our own brains are computable–" We need definition of computable for that.
ALEX HANNA: Uh-huh, yeah.
EMILY M. BENDER: "--but they're nothing like the hard rules and limits of propositional logic.” Okay fine. “Rather even the simple task like recognizing a bike uh even the simplest task involves comparing sensory input with vast numbers of approximate, mostly learned patterns combined and recombined in further patterns that are themselves learned and approximate. This insight inspired the development of artificial neural nets and especially the many layers deep learning approach." Oh! Drinking game, ‘neuroscience,’ drink.
ALEX HANNA: Oh yes I finished my drink, so unfortunately.
EMILY M. BENDER: I'll drink some of my water so.
ALEX HANNA: I'm kind of curious because I think we're sort of getting at this concept we were sort of touching on last time this idea of the concept and the idea of like what is a concept and how concepts um kind of in a computational sense are sort of assumed in a certain kind of way um. And we've talked about this kind of in the concept in the kind of contextualization of um of
sort of um you know TCAVs and sort of things that are get called concepts.
But I'm sort of I'm curious on where he's going with this and I think it's sort of like I'm going to be annoyed by it but I want to see where–
EMILY M. BENDER: Yeah yeah I think the one marker that I want to put down is that um if you talk about our experience with bicycles as humans right we have we've heard about them we've seen them in picture books we've seen them in movies we've encountered physical ones we've touched them we've ridden them and all of those things combined um I think inform our concept of a bike.
Um it's very um embodied and multimodal and culturally bound.
ALEX HANNA: Yeah.
EMILY M. BENDER: And uh one of the things that I think happens in deep learning I mean all sciences do this all sciences have some sort of abstraction and then they sort of forget what the abstraction leaves out of view. Um but the way um deep learning is applied in so-called AI I think does that to an extreme.
ALEX HANNA: Yeah yeah.
EMILY M. BENDER: I think we're probably going to see some of that but let's see. All right.
ALEX HANNA: Yeah.
EMILY M. BENDER: We've got some more bikes coming you want to read?
ALEX HANNA: Yes. "I've used the term approximate but this would be misleading. It's usually wrong to think of an output of a neural network as um as an imperfect or irrational approximation to an objective rational reality that exists out there.
The physics of torque, friction, wheels and spokes may be universal. Our mental models of what counts as a bicycle aren't." I don't know if wheels and spokes are universal. Sorry! "They've certainly changed a great deal since the 19th century. This fuzziness has allowed us to play with the form of a bike over the years, to reinvent it. As bikes have evolved our models of bikes have evolved." So lots of bikes.
"None of our intuitions about object categories, living beings, language, psychology, or ethics have remained constant throughout history. Such concepts are learned and the learning process is both continuous and lifelong. Cultural accumulation works because each–each generation picks up where the left last left off. It would be absurd to believe that our current models no matter how cherished represent some kind of end of history, or that they're successively
approximations some some kind of Platonic ideal."
Fair enough. "It's not just that we have a hard time using logic to recognize bicycles. More fundamentally there's no logically defined canonical bicycle somewhere out there in heavens.
The same is true of more abstract concepts like beauty or justice." Okay fine I'm willing to
take that to take that he is sort of taking out the concept of you know feminist technoscience seriously, that you know knowledge is situated, we are in agreement on that. That's fair enough.
EMILY M. BENDER: Yeah, fair enough.
ALEX HANNA: So let's–
EMILY M. BENDER: I just want to go back to this um I think what he's claiming is universal is actually not wheels and spokes but the physics of them, which yeah.
ALEX HANNA: Yeah.
EMILY M. BENDER: I think that that's also fine. Although I every time I see one of these I have
to like–what would it have felt like to ride that?
ALEX HANNA: Yeah I mean I I was when I I was in Paris earlier this week and I went to the the Museum of Arts and Metiers and it was–They had a bunch of cool like penny farthings and I would love to like if you've gone to the right kind of critical mass uh people would like will build these penny farthings um or very tall bikes. They look super fun but I'd also worry about falling off it and busting my ass, so.
EMILY M. BENDER: Also want to point out that the um the comments are raising some interesting points here about um you know abstract concepts like beauty and justice change over time. There's a lot I mean that that sort of gets a little bit close to moral relativism and like sort of saying well you know you have to you have to look at the comment in its time and you know for most of those things it's like yeah no the people who were on the receiving end of that discrimination and oppression they knew it was bad even in the time like.
ALEX HANNA: Right. Right yeah. Who am I to judge? Yeah and I think I'm willing to grant a little um grace at this point. But it's sort of what you know what he does with it I'm curious. Let's let's do the next thing.
EMILY M. BENDER: Yeah. Okay all right. "Laws of robotics. Science fiction
writer Isaac Asimov's I, Robot series of stories–” Sorry. “--illustrate how GOFAI's unrealistic ambitions have shaped our thinking about AI ethics.” Okay. “Asimov imagined a future in which all robots would be programmed with a set of standard laws to govern their behavior."
These are the famous laws right? "One: A robot may not injure a human being or through inaction allow a human being to come to harm." When I was a kid reading Asimov's stuff, I was always like wait a minute, the robot doesn't actually have control, so what this means is the robot has to try, not that it has to succeed, and that that like always bugged me a little bit. Anyway.
"Two: A robot must obey the orders given it by human beings except where such orders would conflict with the first law." And then you're like okay but how does the robot know?
ALEX HANNA: Yeah.
EMILY M. BENDER: Right? And, "Three: A robot must protect its own existence as long as such protection does not conflict with the first or second law. Of course in Asimov's stories, as in all sci-fi, trouble ensues, or there would be no plot." And Asimov of–well I don't need to go off on Asimov.
EMILY M. BENDER: I've got things to say.
ALEX HANNA: I got I got a lot I I have to be honest I bought the Foundation books with a desire to like you know read about prediction and I really couldn't even get past the first one. Um this is my problem with a lot of hard science fiction and um talking about science fiction I mean, I went wearing a shirt that literally says, "Octavia Butler tried to tell us."
If you want to tell if you want to say something in which there's a more you know more informative sci-fi author uh let's talk about let's talk about Octavia Butler let's talk about um Octavia's Brood, thinking about the edited volume by Adrienne Maree Brown and um kind of her inherit you know her inheritors rather than Asimov, who is um this kind of touchstone uh from the uh GOFAI and maybe the new-FAI uh crowd too.
EMILY M. BENDER: Yeah.
ALEX HANNA: But I digress.
EMILY M. BENDER: Right. Okay, so: "The trouble is typically lawyerly. Some combination of an unusual situation and apparently sound yet counterintuitive reasoning based on the laws leads a hyper-rational robot to do something surprising and not necessarily in a good way." I'm trying to think I guess I guess I spent more time with the Foundation series, because this isn't evoking any particular stories from Asimov's for me, but that's fine.
"The reader may be left wondering whether the issue could be debugged by simply adding one more law or closing a loophole, something Asimov himself undertook on several occasions over the years." I mean, okay. "Asimov imagined that intelligent robots would have GOFAI like mental processes proceeding from raw stimuli to internal states to motor outputs using Leibnizian logic–” Calculemus! “--to which these laws could be added as formal constraints. This would make such robots clearly different from us we don't think so logically as both common sense and many experiments in psychology and behavioral economics demonstrate.”
Okay, fair. “Unexpected results wouldn't then be the robot's fault any more than an unexpected
output from a program is the computer's fault."
EMILY M. BENDER: Okay.
ALEX HANNA: Okay.
EMILY M. BENDER: Where are we going with this? "Asimov's imaginary robots were entirely rational. They might even be called ethically perfect."
ALEX HANNA: So so he's basically saying that uh ethics can be programmed in and then um you know we're we're all kind of there's no sort of judgment, there's something that's gonna be completely rule-based, uh so errors can only be "attributed to human error. HAL's homicidal tendencies are user error. These are cultural landmarks. Their visions are flawed in the usual GOFAI ways. One could program a robot with GOFAI code but executing such a program is mechanical; it doesn't require the judgments and generalizations we associate with intelligence. Following instructions or policies written in natural language does require judgments and generalizations, thus it can't be done robotically."
Okay so language and–
EMILY M. BENDER: Yeah.
ALEX HANNA: –entails interpretation.
EMILY M. BENDER: But I'm still stuck on robots were- the imaginary robots were entirely rational. Because this is making me think of Abeba Birhane's wonderful work on um relational ethics for AI and sort of opposing rationality to um relationality and um like I guess we're not quite sure where Blaise is going yet with this but–
ALEX HANNA: Yeah.
EMILY M. BENDER: –that's what it feels like. This isn't ethics. I'm curious where he's going.
ALEX HANNA: I think I know where he's going it's sort of on values but let's–
EMILY M. BENDER: Let's keep on.
ALEX HANNA: Let's keep going with it because I'm curious on what the punch line is um as–Ooh we're getting into um getting into abortion. As humans, which is to say: "As humans we have no universal agreement on the basic, on the most basic nouns in the laws–”
EMILY M. BENDER: Laws being the Asimov's laws above.
ALEX HANNA: "--such as what counts as human, which has gained urgency with repeal of Roe, let alone how to weigh or interpret flexible terms such as inaction, injure and harm.” Okay. “Subtly different interpretations will lead to different decisions and when doing formal logic the slightest wobble in any definition will lead to logical contradictions, after which all bets are off: does not compute as Star Trek's Data, another fictional robot with–"
I don't think Data–look I'm a Trekkie I don't actually think Data has ever said "do not compute" like maybe in seasons one and two but like Data is I think a bit more subtle than that. So like–
EMILY M. BENDER: Yeah.
ALEX HANNA: You know like I you know like I know he's sort of a foil but don't you know don't malign Brent Spiner. Brent Spiner's like amazing.
EMILY M. BENDER: Also, like what's the- I I'm objecting to this notion that Asimov's robots and Data are GOFAI examples right?
ALEX HANNA: I mean yeah I don't think, yeah well.
EMILY M. BENDER: I don't get the I don't get the reasoning there right. So so Data is this
is it canonical somehow in Star Trek that that Data is entirely a rule-based programming
situation?
ALEX HANNA: No, it's not! Um like actually like um I'm dropping um Abeba's article in the chat.
But no I mean like the thing about Data that I think that you know the point around Data and I
think TNG like we might have to do a whole thing about Data. But TNG does this um actually in a way you know there's a whole actual series in which Noonien Soong? Soong, um his creator, his father like allows him to dream actually and allows him to sort of rise above sort of his programming in some sense um there is a there's some bullshit canon where he does get like an emotion chip and like his evil brother Lore sort of like lords it over his head and like I thought and it was I think was in one of the movies um which you know I don't know the movies are kind of bad and I refuse to accept them as canon um you know.
But there's some way in which like there's an allowance of some kind of self-creation um yeah Generations, you know I'm glad Em is in the chat like really backing me up here because I'm like very much like I feel like my TNG knowledge is a little rusty. My DS9 knowledge much better. um But it's sort of thinking about like there's a way in which he does he doesn't like you know he's allowed to make judgments right?
The whole um episode "Measure of Man" um like allow is sort of like I forgot these I forgot the exact defense of it but there's a sort of like allowance and like they allow him to serve in a commander capacity because of his um ability for kind of subjectivity and subject judgment. Um like there's no kind of pre-programmed trolley problem in in in Data's data banks.
EMILY M. BENDER: Yeah.
ALEX HANNA: There's sort of like there's an allowance of sort of breaking out of sort of a set of rules. So you know this is my defense of Data and the character of Data.
EMILY M. BENDER: Yeah right and so so there's this sort of like cavalier--Data and Asimov's robots are are GOFAI and have GOFAI issues. Um this isn't substantiated.
ALEX HANNA: Yeah.
EMILY M. BENDER: All right so: "Fundamentally then Asimov's laws are nothing
like theorems, laws of physics, or computer code. They don't bind to stable concepts or define
mathematical relationships because natural language isn't math. Words can't be manipulated
like algebraic variables or run like computer code.” Fair. “Rather language offers a distinct way to express a policy requiring human-like judgment to interpret and apply. To calibrate such judgment, case law is generally needed, worked examples that clarify the intent and scope of the language which may be subject to debate, vary culturally and evolve over time."
Uh yes there's a reason that linguistics is a fantastic pre-law degree.
ALEX HANNA: Right? And I mean and there's one citation I would sort of want to drop here it's not necessarily about the interpretation of laws um but it's about the nature of explanation uh is a book by John Levi Martin who's a sociologist and he has a book called um let me get it correct but I think it's called um I just have to look this up because if I don't get the it's called uh “The Explanation of Social Action.” And in this book he actually does talk about the idea and the kind of reference to law as a means of sort of adjudicating kind of like thoughts of explanation um and so like it's sort of a more socially situated view of this.
So I'll drop the the citation in the chat um um and I forgot the different chapter. I haven't read the entire thing but there is a chapter sort of on this idea of reference to case law as kind of a matter of kind of thinking about different kinds of matters how how we do explanation. And in this case I think how to do ethics is also referenced.
EMILY M. BENDER: Yeah all right so Blaise says–
ALEX HANNA: Most highlighted thing here.
EMILY M. BENDER: Yeah which is not my highlight this is this is um Medium showing us other people highlighted it.
ALEX HANNA: Yeah.
EMILY M. BENDER: Yeah. "So while we have little choice other than towrite ethical rules in natural language---an idea with a superficial resemblance to Asimov's laws—” Right, because Asimov wasn't going to drop actual code into his in fiction books but okay. “--we need to keep in mind that programming is the wrong paradigm. Rather applied ethics relies on language understanding, which in term relies on learning, generalization, and judgment." Okay. Hang on.
This seems to be presupposing that the way we do AI ethics is that we program in ethical behavior into autonomous agents. And everybody who I take to be a serious researcher and thinker in this field is not looking at how do we make autonomous things behave ethically, but rather how do we understand the problems that come about with automation and decide when and where and how to automate?
And how we design things and who needs to be at the table in designing regulations and so on.
And so he was- We were talking last time about how he was saying that the AI ethicists have taken a narrow view of the problems with language models and it's like no we're taking the wider view because we're looking at the systems in the world and not looking at just the systems and how do we make them be ethical. And this paragraph here is jumping out at me as one of the places where he's doing that.
ALEX HANNA: Right right so I mean we've already we talked a little bit last time about how also about how he was sort of giving away the the game by presupposing that AIs or mathy maths were independent agents and you know and so even though what he's saying is sort of you can't have some kind of universally applied kind of thing here in terms of you know writing in laws, you need some kind of interpretation you're still already giving away the game by presupposing an AI as a type of agent. And so you know like yeah and that's not actually you know pushing back and saying that you've if you've already started there then you've you're leading to a premise that doesn't actually have you know you you've already you have to sort
of write your way out of this this plastic bag.
EMILY M. BENDER: Yeah a plastic bag! He's going to suffocate while he's in there.
ALEX HANNA: Yeah maybe the paper bag plastic bag. I mean the plastic bag is a little more
permeable or not permeable impermeable.
EMILY M. BENDER: Yeah.
ALEX HANNA: So let's I so we got about uh 10 minutes left to go. I'm curious how much of this do we have because I think we might I'd love to finish this.
EMILY M. BENDER: Let me let me see uh where's my where's my scroll bar? Yeah yeah finish what so so we're we're coming to the end of a section we've got one paragraph left at the end of the section.
ALEX HANNA: All right so: "Since natural language isn't code, unexpected or counterintuitive interpretations are best thought of as simply wrong, not right based on a technicality or consequences of a user error." I'm trying to grok that sentence. "In a system based on learning rather than programming, errors in judgment are determined relative to decisions made by thoughtful human judges looking at the same situation operating from the same broad principles."
EMILY M. BENDER: Yeah so we're he's inside the same plastic bag.
ALEX HANNA: Yeah yeah.
EMILY M. BENDER: Right.
ALEX HANNA: "Human judgment changing over time is the best and only available ground truth." Lord, ok.
EMILY M. BENDER: No no okay we've got 10 minutes let's take this paragraph slowly. Okay, "Since natural language isn't code, unexpected or counterintuitive interpretations are best thought of as simply wrong, not right based on a technicality or consequences of user error." So my problem with this sentence is he's not saying who's doing the interpreting there right?
ALEX HANNA: Yeah.
EMILY M. BENDER: Unexpected or counterintuitive interpretations, by whom?
ALEX HANNA: Yeah right exactly.
EMILY M. BENDER: I guess this is one of his mathy maths that is encountering natural language and interpreting it.
ALEX HANNA: Mm-hmm yeah it's our best our, "unexpected or counterintuitive interpretations
are best thought of as simply wrong not right based on technicality or consequences of a user error–" Okay so I have a lot of problems here too because it's sort of you're getting into problems sort of of judgment and sort of what and sort of the assumption that sort of an AI or or mathy math is this kind of thing that is making kind of reasonings um kind of um out of thin air, that it hasn't had a set of training data that you know has a set of values imbued with it that you know and sort of you know you might be thinking unexpectedly or counter as simply wrong you know. In what kind of value system and what kind of a training data and what kind of a milieu right?
So yeah go ahead sorry.
EMILY M. BENDER: Yeah so the next sentence is sort of he's he's got the training data there um implicitly right? So "In a system based on learning rather than programming–" And there specifically learning is learning from training data. "--errors in judgment are determined relative to the decisions made by thoughtful human judges looking at the same situation and operating from the same broad principles."
So this is exactly describing we've got some annotators we've set up the task we've told them to annotate for the task and now we're going to have the machine do the same thing.
But if you think about you know errors in judgment where someone has done something that's harmed somebody else I don't think annotators are necessarily the people who have the ground truth there right? The people who are doing the work of annotation are not necessarily the ones who are actually really well positioned to understand the right and wrong of a situation, depending on the situation.
ALEX HANNA: Yeah and it's I mean and also understanding that and and even the kind of phrase, "thoughtful human judges looking at the same situation operating from the same broad principles." I mean there's a lot in in these three in these phrases, "thoughtful human judges," you know not which is sort of you know like it's it evokes a kind of vision of um it evokes this kind of vision and I'm thinking about this Saturday Morning Breakfast Cereal Comic which is a little lewd, so if you'll allow me to be a little lewd here.
It sort of thinks I'm thinking about this there's an image of of Einstein um you know um looking and and writing down E=mc2 and thinking that it's very like it's very like elegant and it's like oh wow I've encountered this. But then the reality is Einstein walking through the halls of his department yelling "Guess whose science dick just grew 10 feet?" and I'm using this like in a way, you know sorry for being lewd and for the reference, but it's it's I like this example
because it's sort of thinking about people aren't influenced by the sort of elegance of
these kind of situations.
They're influenced by their systems of prestige, their systems of power in which they're embedded right and you know and Ben thanks for dropping this into the chat oh he drops another one to the chat. Yeah dropping another SMBC in the chat. Oh this is the one about linguistics linguists, physicists re- redefining linguistics. Yes this is a classic one I just want to if so if you could go into this but basically the idea that like physicists can encode everything um but the same could be said by AI scientists yeah exactly thank you for this.
But the idea that sort of like that people are free from their kind of judgments of of where they're sitting in social structures or where they're sitting in institutions and how that frees them from some kind of a um a judgment of what is what is the good what is the right you know um. And so maybe his next sentence is the you know like and then operating from the same broad principles also basically a category error there because where are the same broad principles?
EMILY M. BENDER: The annotation guides clearly are the same.
ALEX HANNA: Yeah, the annotation guides which are you know you know, the how we how we adjudicate these different things.
EMILY M. BENDER: Yeah.
ALEX HANNA: So “Human judgment changing over time is the best and
available ground true–only available ground truth." I would say changing over time you know
changing over subject position is probably even a more um a more accurate one if one actually is trying to judge who's on the receiving end of most of these harmful things right? Um "necessarily noisy, contingent, always imperfect and never entirely fair, but hardly alien or inscrutable."
So like there's a lot there too and the citation here is to this article by some fairness researchers um at NeurIPS, which I haven't read, but I'm assuming that the um the claim here is sort of that you know there's the the different the sort of impossibility theorem uh of different kind of definitions of fairness and how one can't sort of meet all the criteria of these.
Even though fairness is this very limited frame, um this is sort of the idea here. So I mean yeah yeah I I want to I want to pause there and let you let you go into it, Emily.
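Background on the impossibility result Alex is gesturing at, stated here from the general fairness literature rather than from the specific paper cited (it is usually credited to Chouldechova and to Kleinberg, Mullainathan, and Raghavan): when two groups a and b have different base rates, no imperfect risk score S can satisfy all three of the following at once:

    \text{calibration within groups: } P(Y = 1 \mid S = s, G = g) = s \text{ for all } s \text{ and each group } g
    \text{balance for the negative class: } E[S \mid Y = 0, G = a] = E[S \mid Y = 0, G = b]
    \text{balance for the positive class: } E[S \mid Y = 1, G = a] = E[S \mid Y = 1, G = b]

The only escapes are equal base rates across the groups or a perfect predictor.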
EMILY M. BENDER: Well yeah just just briefly that the- So human judgment, all these caveats, is the best and only available ground truth. And again it's like why are we thinking about this in terms of ground truth? What what is it that he's hoping to automate here? Um and it's and it's again this like from the get-go, ceding decision-making to autonomous artificial systems, when that is always a choice that isn't something that's just happening naturally in the world. Um and this framing of the discussion just seems to think well of course that's going to happen of course we have these you know mathy maths that are coming and so we better figure out how to teach them you know teach them how to behave.
When it's like well, no, we can decide what to build and whether or not to build it and what um tasks to set them to and how those tasks are shaped for their environment and so on.
ALEX HANNA: Right.
EMILY M. BENDER: And so this is just you know yes of course you know human judgment is as as you were saying um you know he says culturally contingent you have a much more I think articulate and uh nuanced way of talking about it in terms of how everyone's positionality influences their judgment and then I think um Birhane's work reminds us to think about also you know our relationships to others um influence our judgment and so on. Um but that's our judgment and and you know relevant to conversations about governance and decision making, but doesn't have to be relevant to conversations about what the machines are doing because we don't have to think in terms of annotating a bunch of data to train machines on to get them to
do this for us. Because why do we want that?
ALEX HANNA: Right yeah why is that desirable socially you know like why we already have to let and I mean this is sort of the idea of value alignment which we were talking about last time you know like what if my values are that I want to throw this thing out the window?
Because I think for a lot of things that's that's what I want to do, right.
EMILY M. BENDER: Yeah.
ALEX HANNA: All right right we're wrapping up. I'm looking at how much we have left we're taking this at a great rate, like, 200 words at a time.
EMILY M. BENDER: We tried to go faster today too!
ALEX HANNA: I know we might have to pre-meet prior to next time and sort of consolidate you know like what we focus on because we get into a section on LaMDA next and then there's a section on ENIAC and and then factuality um you know I might actually have to read and then we get to the point and then when we get to a citation of Stochastic Parrots. So I feel like you know I have to defend your honor.
EMILY M. BENDER: We have to at least get to that.
ALEX HANNA: We have to at least get to that. But I mean you know I'm I'm I'm you know like I'm going to read this. Uh for folks who are in the chat, on the stream, I'd love for you to let us know if there's anything that you all want to see us discuss and maybe focus on.
I can't take much more of this article but–
EMILY M. BENDER: No I know it's bad we can both pre-read and like highlight things we want to take apart I suppose for the next ones.
ALEX HANNA: Yeah that might be that might be good so maybe uh we'll kind of come back to this try to sum this up in maybe maybe you know one or two sessions because there's a lot of we've got a lot of folks pointing out things. Um there's a discussion about uh um about um looking at art and art models and I know um Dr. Flowers on Twitter has mentioned that uh Dr. Lina, who is a cultural sociologist, also has a lot of thoughts and maybe we'll have like some special guests you know when we start getting into that because art is not an area that I get into but I love the sort of points that are being brought up in this series.
And you know yeah so um hey if you like if you like this you know hit that subscribe button. Uh we'll again, like the other one, post this to YouTube uh within a day or so. Hopefully the sound captured okay and we can get the you know and we had our fantastic captioner Amy here on the call. Uh so yeah but thanks for thanks for joining us.
EMILY M. BENDER: Thanks for joining us, yeah! We'll do some more in some form.
ALEX HANNA: Right take it easy y'all bye!
EMILY M. BENDER: Bye!
ALEX: That’s it for this week!
Our theme song is by Toby Menon. Production by Christie Taylor. And thanks, as always, to the Distributed AI Research Institute. If you like this show, you can support us by donating to DAIR at dair-institute.org. That’s D-A-I-R, hyphen, institute dot org.
EMILY: Find us and all our past episodes on PeerTube, and wherever you get your podcasts! You can watch and comment on the show while it’s happening LIVE on our Twitch stream: that’s Twitch dot TV slash DAIR underscore Institute…again that’s D-A-I-R underscore Institute.
I’m Emily M. Bender.
ALEX: And I’m Alex Hanna. Stay out of AI hell, y’all.