Mystery AI Hype Theater 3000

Episode 46: AGI Funny Business (Model), with Brian Merchant, December 2, 2024

Emily M. Bender and Alex Hanna Episode 46

Once upon a time, artificial general intelligence was the only business plan OpenAI seemed to have. Tech journalist Brian Merchant joins Emily and Alex for a time warp to the beginning of the current wave of AI hype, nearly a decade ago. And it sure seemed like Elon Musk, Sam Altman, and company were luring investor dollars to their newly-formed venture solely on the hand-wavy promise that someday, AGI itself would figure out how to turn a profit.

Brian Merchant is an author, journalist in residence at the AI Now Institute, and co-host of the tech news podcast System Crash.

References:

Elon Musk and partners form nonprofit to stop AI from ruining the world

How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over

Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World

Brian’s recent report on the business model of AGI, for the AI Now Institute: AI Generated Business: The rise of AGI and the rush to find a working revenue model

Previously on MAIHT3K: Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld)

Fresh AI Hell:

OpenAI explores advertising as it steps up revenue drive

If an AI company ran Campbell's Soup with the same practices they use to handle data

Humans are the new 'luxury item'

Itching to write a book? AI publisher Spines wants to make a deal

A company pitched Emily her own 'verified avatar'

Don't upload your medical images to chatbots

A look at a pilot program in Georgia that uses 'jailbots' to track inmates


Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' comes out in May! Pre-order now.

Subscribe to our newsletter via Buttondown.

Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Alex Hanna:

Welcome everyone, to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.

Emily M. Bender:

Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, professor of linguistics at the University of Washington.

Alex Hanna:

And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 46, which we're recording on December 2nd of 2024. And it is business time. We're going to talk about how the tech companies hype themselves up in the name of luring investment dollars, and maybe someday drawing actual revenue from paying customers, who would have thought. All with the help of the ever hand-wavy promise of so-called artificial general intelligence.

Emily M. Bender:

As we all breathlessly and exhaustedly wait for the AI bubble to finally pop, doesn't it feel like the appeals to a sentient, problem solving, self actualized intelligence are increasing in number? And yet, OpenAI's very beginning, nearly a decade ago, hinged on the myth of AGI, and that it, once developed, would figure out how the company would eventually make money.

Alex Hanna:

With us today is Brian Merchant, a tech journalist, journalist in residence for the AI Now Institute, and author of a new report on how AGI has been the crux of the business model for OpenAI and its ilk since the AI boom began. Welcome, Brian.

Brian Merchant:

Yes, thanks so much for having me. I'm so pleased to be on here. I love what you all do here. It's, uh, such a breath of fresh air in this stuff I usually have to pay attention to.

Emily M. Bender:

We have fun. It's all about catharsis here. And this is, I think, the second time that we have sort of dug into the past. The previous time we took the time machine back to 1956, um, for that Dartmouth report, and we had some fun costumes on that episode. This time we're going back just to December of 2015, um, and I am going to share our first artifact here, if I can get the machine to behave. That is this one. Okay. Um, so we have an article from The Verge published on December 11th, 2015. So nine years ago and change, um, or in a moment, I should say. Uh, the journalist is Russell Brandom. Um, and, uh, the headline is, "Elon Musk and partners form nonprofit to stop AI from ruining the world." And I have to ask, um, Brian, were you paying attention already to this stuff in 2015?

Brian Merchant:

Yeah, you know, I was. It was on my radar. I was a, uh, a senior editor at Motherboard, which was Vice's, uh, tech, uh, platform at the time, RIP. It was our publication where we covered this stuff, and we were a little, I think, naturally inclined to be more, more, uh, critical than most. But, and I don't know that we covered it just this way. In fact, I looked around, you know, as happens to so many digital media companies, Vice was bought by a private equity firm, bundled up, sold off for parts, and now it is generating AI slop of its own. So I couldn't find our coverage, uh, in, in the archives, but this is like pretty indicative, I think, of how it was covered at the time. This is not to pick on The Verge or, or Russell Brandom in particular. This is like, I feel like, a reflection of how everybody covered the inception of OpenAI at the time.

Alex Hanna:

Yeah, it's really breathless. I mean, all the artifacts are just really no critical eye. I mean, I guess this is preceding what has become known as the techlash, but really just look at these boy geniuses and, and the kind of things that they're doing and they really want to do this stuff for the good of society. And it's really, really outlandish when you look back, even though this thing's not that old, just nine years ago, but things move fast, you know.

Emily M. Bender:

For me anyway, this is a little bit before my time in the sense that at this point I was minding my own business, doing grammar engineering and sort of, you know, good old regular computational linguistics, and starting to have these conversations with other people in NLP about how no--actually it's before even that, before even the language models were taking over our field, and I had to be saying like, no, they're not understanding. So it was sort of interesting as a historical exploration for me.

Brian Merchant:

Yeah. And, and I think it was, you really, yeah, like I said, reflective of, of the way that this announcement was covered. I think if there was criticism or like criticality, it was more around like Musk's sort of, uh, sincerity or his intentions or--he was already starting to become kind of this figure who was getting his, uh, fig--fingers in everything and just sort of like, it was a little ridiculous seeming, uh. But I think by and large, the, like the tech press, the tech world certainly took him, uh, at, at his, at his word, took him at face value. And like, so you could say something like this and expect, you know, uh, coverage, uh, you know, this was just an announcement at this time. There was like very little substantive around it. But the fact that Elon Musk, you know, the founder of Tesla and SpaceX, those were his two big things at the time, had met with Sam Altman, who was barely, was not a household name at the time. And I think, as I kind of argue in the report, this was a canny move by Altman, uh, to sort of start seeding press like this to get himself associated with figures like Musk as he climbs this ladder into sort of like the centrality of Silicon Valley.

Emily M. Bender:

So let's, let me read a little bit of this so that the people know what it is that we're talking about. So the first couple of paragraphs, "Tesla CEO Elon Musk has never been particularly shy about his fear of killer robots, but now he seems to be doing something about it. Today, Musk and a group of partners announced the formation of OpenAI, a nonprofit venture devoted to open source research into artificial intelligence. Musk is co-chairing the project with Y Combinator CEO Sam Altman, and a number of powerful Silicon Valley players and groups are contributing funding to the project, including--" All of our favorites. "--Peter Thiel, Jessica Livingston, and Amazon Web Services." I guess that's not all of our favorites, but some of them. "Altman described the open source nature as a way of hedging humanity's bets against a single, centralized artificial intelligence. Quote, 'Just like humans protect against Dr. Evil by the fact that most humans are good and the collective force of humanity can contain the bad elements, we think it is far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there's a single AI a billion times more powerful than anything else,' Altman said in a joint interview with Backchannel. 'If that one thing goes off the rails, or if Dr. Evil gets that one thing and there's nothing to counteract it, then we're in a really bad place.'" And I just have to say, like, this is nonsense and it's just being quoted and platformed and the journalist isn't saying, 'Uh, excuse me?'

Alex Hanna:

I mean, for me, this is of course not the point, but it's just, it's just a blast from the past to think that Dr. Evil is still enough in the public consciousness, you know, from Austin Powers, which came out in what, like the early aughts, um--

Brian Merchant:

For this to land?

Alex Hanna:

--for this to land at all. If you said that today, the Zoomers would roast you, um, but it's, it's just--anyways, not, really beside the point, but yeah, I mean that this, that this is even kind of a, a thing, that this is kind of an argument and that open source even is argued as, as a kind of corrective of this, and I know this gets talked about in the, in another article we're going to look at, um, but I mean, the kind of idea that there's kind of this way that bad actors are the problem, but then there's also going to be, um, many quote unquote 'AIs' are going to counteract them.

Brian Merchant:

Yeah.

Alex Hanna:

Truly ridiculous.

Brian Merchant:

And like Emily said, it's nonsense, but it's like of a very specific sort of science fictionalized, uh, you know, sort of vernacular in the way that it's laid out, right? Like it's there, you know, and now with hindsight, as I argue in this report, you can kind of map this out, right? This one big bad AI is Google, and Elon Musk has recently, at this point, been frustrated with Google, both personally and professionally. And I think because he's personally invested in AI and interested in it, even from a business perspective at this point, like you can view the founding of OpenAI through the lens of, of it being this hedge against Google. So like, if you look at that language again, he's saying, oh, we need all of these other forces to counteract it. So pay attention to us. It's like, we're like, these little ones are the good guys. There's a bad guy, you know, there's a bad AI, and it, like, really is nonsense, but it is something that, you know, again, the readers of the tech press or like science fiction fans will immediately kind of glom onto.

Emily M. Bender:

Yeah. Um, and I think actually, um, I want to take us over to the next one, because there's, these are such rich texts. This next one has this big long interview with the two of them, and a lot of the same themes come up. So this is Wired, also December 11th, 2015. Um, notice the timestamp, 12 a.m., so this was like embargoed news that they were dropping like at the moment that they could, right? Um, journalist is Steven, do you know if he's a Levy or a Levy?

Brian Merchant:

Yeah, Steve Levy, he was like one of the founding sort of tech journal, like he was, he like wrote Hackers, which is one of these like early books that kind of glorified the sort of the first wave of, of, uh, Silicon Valley entrepreneurs, guys like, um, you know, Wozniak and, and Jobs and kind of wrote a really, really early, to the game, but from the perspective, like, these guys are like, you know, starting a revolution that's gonna, you know, be great for humanity.

Emily M. Bender:

Yeah.

Alex Hanna:

Yeah, I think Levy has been, I mean, he's been a bit of a simp for the industry, to be, uh, somewhat unkind. But I know he's--is he the editor in chief of Wired now? Or am I, or did he--but he's some high-ranking editor.

Brian Merchant:

He's kind of, you know, he's written some of these bestsellers. He's written a book about Google. He's, he wrote a book about the iPod, and I think he's made enough money where I don't, he doesn't really want or need to be the editor in chief of Wired. He's one of these perpetually kind of editor-at-large guys who can like, who gets the plum assignments because he's done such a good job of, you know, cheerleading the tech industry for so long that he gets, you know, he's friendly with the CEOs and stuff. So he gets these scoops.

Emily M. Bender:

Alright, so here he is on 12 a.m., December 11, 2015. Headline, "How Elon Musk and Y Combinator plan to stop computers from taking over."

Brian Merchant:

Yeah.

Emily M. Bender:

Sorry, I had to roll my eyes just at the headline. Subhead--yeah?

Brian Merchant:

Sorry just to interject, but yeah, like, exactly like this, you can, by the embargo and everything, they almost certainly, like, shopped this to him. I mean, we'd have to ask, but like, this is like, hey, we've got this story, we're gonna save the world from, uh, killer robots, do you want a, you know, exclusive in Wired? And that's probably how this came about.

Emily M. Bender:

Yeah. So, the subhead, "They're funding a new organization, OpenAI, to pursue the most advanced forms of artificial intelligence, and give the results to the--" Dot dot dot. Um, and I don't know if that's because like Wired has changed their format in the intervening nine years, but that's it on the subhead.

Alex Hanna:

Fill in the blank. Give the results to the people.

Emily M. Bender:

Yeah. Okay. So, "As if the field of AI wasn't competitive enough with giants like Google, Apple, Facebook, Microsoft, and even car companies like Toyota scrambling to hire researchers, there's now a new entry with a twist. It's a non profit venture called OpenAI, announced today that vows to make its results public and its patents royalty free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Funding comes from--" Blah, blah, blah. Um, let's see. Uh.

Alex Hanna:

There's something I do want to read on this, which is interesting, which, "Musk comma, a well-known critic of AI comma, isn't a surprise." First off, hilarious. Um, and then, "--as is Altman himself." Um, sorry. Uh, that was the, that was a prior paragraph. "But Y Combinator?" "Yes, the tech accelerator that started 10 years ago as a summer project that funded six startup companies by paying founders, quote, 'ramen wages' and giving them gourmet advice so they could quickly ramp up their businesses." Uh, and they talk about everything Y Combinator has done. Um, but the weirdness of having something so pro-capitalist like Y Combinator joining up with Elon Musk to do something open source, bazinga, so wild, you know.

Emily M. Bender:

I mean, do these people not remember what the late nineties were like, where the whole thing was open source everything, and you didn't need a business model?

Brian Merchant:

Yeah, I mean, it, it, it is, it's, it's wild, just the amount of credit that they're like just giving everybody involved in this, uh, enterprise from, from, from the get go. Again, I do think it is an artifact that is like pre-techlash, it is very, very, uh, telling and very sort of indicative of just like the pre-techlash sort of coverage. Whereas, or at least after that, you kind of have to at least, you know, drop a caveat in or two, even if you're still not going to be properly critical. But yeah, I just, just, just seeing the formation of this, and we'll talk about it more later, but this is like really, it's really integral to, you know, OpenAI's founding myth and operative myth, even, even still today, what, what it's doing, even now, as a $160 billion company.

Emily M. Bender:

So there's something I want to take issue with in the next paragraph here, where they're talking about how this is a research lab, um, and they're trying to counteract these, um, you know, big companies. Um, and so they say, "It may sound quixotic, but the team has already scored some marquee hires, including former Stripe CTO Greg Brockman, who will be OpenAI's CTO, and world class researcher Ilya Sutskever, who was formerly at Google, and was one of the famed group of young scientists studying under neural net pioneer Geoff Hinton in Toronto." And I just want to say that it is so irritating to, and they do this elsewhere, they'll say top researchers or world class researchers. And it seems to me there's a couple of things going on there. One is, a world class researcher to me is somebody who, you know, really contributes to the research community by having good citational practice and sort of connecting what they're doing to what other people are doing. And that's not what's going on here. But also it occurred to me that this notion of like top scientist is a way to locate people who don't own their own positionality. That instead of sort of saying who they are, where they come from and what they're working on, we're locating them at the top. Um, and that is just rampant in these articles and, and in thinking about AI, like we've got to get the top talent.

Alex Hanna:

It's also like a genius kind of notion of the, of the individual contributor, right? And we have this in many guises in Silicon Valley, the 10x engineer. Um, the, you know, whatever, uh, pick your poison. This, this kind of operates in many guises. And you know, the idea that, and there's, I think this is said in the other piece, where it's sort of saying the two constraints on AI companies are top research talent and data. And I'm like, oh, what about compute? What about power? What about these various supply chains? And it's, and it, and it really, um, to me, this is something I'm always harping on, but it's like you're placing kind of the real labor of AI in kind of genius, genius, all white men in the Global North, and not like the huge labor underclass that even makes any of this stuff possible, right? And I mean, that's the really, that, that, that, I mean, is suffused through all this journalism.

Emily M. Bender:

Yeah.

Brian Merchant:

Yeah. Yeah. It's very much about the people at the top that get, that can participate in like a hero narrative or a genius narrative. And that's sort of, I hadn't really thought about that much before, Emily, that's such a, such a good point. So interesting because yeah, he, it rarely even gets attached to his actual accomplishments or publications, but he's, you read any of these articles over the last 10 years and Sutskever in particular is, you know, groundbreaking AI scientist, top AI scientist. And he's just, you know, to the point where he's just, you know, just worth so much money to these companies and that this sort of elite tier who fit that bill that you're talking about, um, can just charge exorbitant sums or start their own, you know, AI startup.

Emily M. Bender:

And get all the funding. Yeah. Um, so, so we have this interview that I want to try to get to at least some of, because it's, it's a rich text. These guys are both bonkers. Um, but we should also save time for the, the other Wired article that comes out a couple days later. Um, but so this is, this is, uh, Levy interviewing Altman and Musk, not at the same time, but I think asking them the same questions and then merging them together. Um, so, "How did this come about?" Sam Altman: "We launched YC Research about a month and a half ago, but I had been thinking about AI for a long time, and so had Elon. If you think about the things that are most important to the future of the world, I think good AI is probably one of the highest things on that list. So we're creating OpenAI. The organization is trying to develop a human positive AI, and because it's a nonprofit, it will be freely owned by the world." That's not what nonprofit means.

Brian Merchant:

But it is interesting that he like really feels compelled to repeat this over and over. And it's funny to see it dissipate and ultimately disappear, this notion that the fruits of, of what they're doing are to be owned by the world. Um, that's, it's really one of the core foundational myths of, of OpenAI. And today, it's just, you know, all but completely gone. But once in a while he'll still say something about UBI or the need to give people something like that. But it's, yeah.

Emily M. Bender:

And Musk has a slightly different take here. So in answer to the same question, he says, "As a result of a number of conversations, we came to the conclusion that having a 501c3, a nonprofit with no obligation to maximize profitability, would probably be a good thing to do. And also we're going to be very focused on safety." So, um, where Altman is saying it'll belong to the world, to be freely owned by the world, Musk is saying we're going to protect ourselves from the profit incentive, which like, I mean, that is kind of good commentary on a lot of what's happened elsewhere in Silicon Valley. Coming from Musk is surprising, but it's totally not what happened, right?

Brian Merchant:

Yeah. I think, you know, I think he's starting to position it as this, you know, again, anti-Google sort of way of, way of doing things, where Google was already starting to be seen as invasive and people didn't, wouldn't really respond well if Google was going to release something. He's going to kind of also, I think, I think it's like, I think at this point, it's kind of tactical. It's kind of, it's a hedge against Google. And because, you know, there's this famous piece of lore that, uh, you know, in OpenAI's founding, that just before that, he has this big fight, uh, with, with a Google co-founder. They have a personal falling out, and they, and they're, it's over AI, and he's, Google's doing AI, and the subtext that I get, and that even, like, one of, uh, sort of Musk's biographers sort of points out, is that it's maybe kind of underscored by jealousy. Like Musk wanted to be involved in these sort of, the big future projects, and he's feeling like, like left out. And he does what we all now know that he does, which is lash out and kind of punish competitors.

Alex Hanna:

Yeah. And we were seeing, I mean, what was this? There's this kind of move now. I mean, we're jumping the gun, but OpenAI is moving toward this for-profit model, and Musk has tried to block that right now. And then, you know, we have this founding of xAI and kind of the moves of saying, well, you know, I'm going to do it better. You know, I want to be, I want to be the one to do this. Um, and so, so much of it just coalescing off of just personality dynamics, and all of us having to deal with this shit.

Emily M. Bender:

Yeah. So there's a comment in the chat here from, uh, Black Angus Schleem. "Tech journalism was really credulous back in the day, and no wonder we're, we've been left defenseless against these assholes."

Brian Merchant:

I mean, it was, you know, and I am not immune from some of that, that blame. I wasn't, you know, that I wasn't writing articles in Wired at the time. But, you know, even when Musk started, uh, Tesla, uh, I, I was working for the Discovery Channel at the time, writing, you know, articles on its online sites, and I think the prevailing wisdom at the time was like, ooh, he's doing electric cars because, you know, because this is the heroic, the good thing to do. So, you know.

Emily M. Bender:

But also he didn't start Tesla. Didn't he sort of muscle his way in with money and then claim to have started it?

Brian Merchant:

Yes, he swooped in and kind of became the--it was like a kind of a foundering, you know, startup and the pieces were there, and he kind of, yeah, he swooped in and sort of muscled out the other, the other founders, um, and became the public face of it. Yeah.

Emily M. Bender:

Yeah. So, okay, let's keep going with these two clowns. Um, I'm going to skip over that one, I think. Um, so they're talking about how important it is to put things out in the public so that, like, you could have lots of AIs and everyone owns them. Um, and also that this helps them recruit, uh, top talent because everyone who's working on this wants to be able to publish their work. Um, and so the, Levy asks, "Doesn't Google share its developments with the public like it just did with machine learning?" And I think that was PyTorch? Um, and Altman--

Alex Hanna:

No, TensorFlow.

Emily M. Bender:

Oh, TensorFlow, sorry. Um, and Altman says, "They certainly do share a lot of their research. As time rolls on and we get closer to something that surpasses human intelligence, there is some question, how much Google will share."

Alex Hanna:

Okay.

Emily M. Bender:

So.

Brian Merchant:

Just on every level, right? It's not like OpenAI is sharing anything anymore.

Emily M. Bender:

So OpenAI is not open. They are not open about what they're training it on. They're not, you know, they aren't even publishing papers anymore. But also, there's this inevitability narrative in there, right?'As time rolls on and we get closer to something that surpasses human intelligence,' as if that's necessarily going to happen.

Brian Merchant:

Yeah.

Alex Hanna:

Yeah. I mean--go ahead, Brian.

Brian Merchant:

Oh, yeah. No, I was just going to say the one thing that is probably accurate that they said is that like this is like, positioning it this way and with this mythology is a pretty good way to probably recruit, you know, engineers who are hoping to be attached to something that Musk is involved in, raise their status, maybe make a bunch of money on, you know, more, maybe they're making a ton of money already at Google. And this sounds like a more interesting project. So they were able to recruit, you know, a bunch of, uh--

Emily M. Bender:

Top talent.

Brian Merchant:

Top talent that fit that bill that you were talking about it pretty, uh, pretty wholly. Uh, so that, you know, that, that, that was probably part of the equation at this point.

Alex Hanna:

So we have this other quote, that's the Dr. Evil stuff, which I think we can just skip over, for time.

Emily M. Bender:

Although I have to say, so the journalist says, "If I'm Dr. Evil and I use it, won't you be empowering me?" And Musk says, "I think that's an excellent question and it's something we debated quite a bit." End of sentence. They were having like--

Alex Hanna:

We thought about it.

Emily M. Bender:

--sophomore dorm room late night conversations about this, it sounded like.

Alex Hanna:

I know. Every time I see Musk debating anything, it's like, I just think of the Joe Rogan meme of him, like, taking a huge, you know, hit of a blunt and then just opining about whatever. Um, yeah.

Emily M. Bender:

Um, so I think, let's do this one. "What's an example of bad AI?" And Altman says, "Well, there's all the science fiction stuff, which I think is years off, like the Terminator or something like that." And it's like, again, 'years off' suggests that it's actually coming.

Alex Hanna:

Yeah.

Emily M. Bender:

Right?

Brian Merchant:

Yeah.

Emily M. Bender:

Um, he says, "I'm not worried--"

Brian Merchant:

And it just starts with science fiction, right?

Emily M. Bender:

Yeah. Uh, Altman says, "I'm not worried about that any time in the short term. One thing I do think is going to be a challenge, although I'm--not what I consider bad AI, is just the massive automation and job elimination that's going to happen." So he's already sort of marketing this to the companies that will be laying off people, right? Um, and then, "Another example of bad AI that people talk about are AI-like programs--" I don't know why that's 'AI-like.' Um, "--that hack into computers that are far better than any human. That's already happening today." Um, so the, the things that he's worrying about--

Alex Hanna:

Sorry, what does that mean? I'm just trying to read this. 'AI-like,' does that just mean you just have a for loop going through a password list? Like, what do you mean? Like.

Brian Merchant:

Is he thinking of like Stuxnet and things like, yeah.

Alex Hanna:

Yeah, maybe. I don't know. That's, that's, that's, that's quite a bit more complicated, but yeah, I don't even know what the, what the referent is there.

Emily M. Bender:

Yeah. Yeah. So is there anything else in here that we wanted to be sure to get to? Oh, they talk about data. Um, so uh, this is Altman again. Um, the question is, uh, "Will your startups have access to the OpenAI work?" Um, and Altman says, "If OpenAI develops really great technology and anyone can use it for free, that will benefit any technology company, but more so than that. However, we are going to ask YC companies to make whatever data they are comfortable making available to OpenAI. And Elon is also going to figure out what data Tesla and SpaceX can share." And then there's this next thing, um, so the interviewer asks for an example, um, and Altman says, "All of the Reddit data will be a very useful training set, for example. You can imagine all of the Tesla self driving car video information also being very valuable. Huge volumes of data are really important. If you think about how humans get smarter, you read a book, you get smarter. I read a book, I get smarter. But we don't both get smarter from the book the other person read. But using Teslas as an example, if one single Tesla learns something about a new condition, every Tesla instantly gets the benefit of that intelligence." Thoughts?

Alex Hanna:

Yeah. There's a great comment in the chat where it says, Homesar315 says, "Neither of these guys actually write code and that's obvious."

Brian Merchant:

It's a great point. Like, look at how like nebulous these ideas are. They are just like vaguely science fiction shaped ideas based on some, I don't, you know, I, things that they've maybe read or maybe like skimmed or, you know, remember from science fiction movies from, from, from years ago. Like at this point, neither of them have even really interacted with much like AI technology or a company even building this. This is all like, and this is part of the argument I make in the report. It's just like, it's all ideation. It's, it's all vibes really. And, and, uh, and strategic like market uh, positioning again against Google and, yeah.

Emily M. Bender:

And with such a dismal view of what happens for people, right? I read a book, I get smarter. Well, I mean, first of all, why are we talking about ranking intelligence? But also, I read a book, I learn something. I engage with the book, I learn something. Like, it's, it's a--and so here's Altman already trying to reduce what people do so that it looks something like what machine learning is.

Brian Merchant:

Yeah. Yeah. And, yeah. And I think it's also useful for context and some of these emails have now been made public as part of Elon's lawsuit against Altman, but you look at like their early emails together and to me, it just seems like Altman knows that Elon has made some comments in the press about AI and that he's like worried, quote, "worried" in this, you know, only in this, you know, apocalyptic Skynet sort of sense of the word. And he writes this introductory email to him reaching out and like professing to have the same worries, but really just seeing it as an opportunity to sort of just like, you know, kind of link up and, and become a remora on, on Musk's power there. And that's exactly what he did. And they're just building this scaffolding and it's all narrative. It's all stories. It's all uh, sort of just detached from, from, from reality at this point. Uh, and I think that that in hindsight now is pretty glaringly clear. So the things they're saying about Tesla, it's just like retconning it on like, oh yeah, we could have Tesla's driving around learning things.

Emily M. Bender:

And Musk says, "Certainly Tesla will have an enormous amount of data, of real world data because of the millions of miles accumulated per day from our fleet of vehicles. Probably Tesla will have more real world data than any other company in the world." My data set's bigger than your data set.

Alex Hanna:

Well, it's also a weird view of like, what is data, too? Because it's sort of saying like, well, is Tesla data going to be helpful for building any kind of sensible language models? No. Uh, it's well--

Emily M. Bender:

Okay. First of all, Alex, sensible language models. Is that a thing?

Alex Hanna:

Yeah. I mean, like, I don't know if it's a thing, but I mean, I'm saying, I'm not saying a sensible large language model.

Emily M. Bender:

Okay.

Alex Hanna:

I'm saying like a language model. I mean, a language model that is doing what it is intended to do, right? And I mean, but you're basically getting to a point where, you know, like, there's this sort of--not even, it's such, it's such an interesting view. Cause I think it's not only reducing the human to kind of a rank order of intelligence, but it's also reducing data to like just whatever slop bucket it is, where you just put the bigger data set into the machine and then it goes brrrr and then the number goes up or whatever. So like, that's just, that's just the vibe here.

Emily M. Bender:

Uh, okay. I think I'm, I'm kind of, I think we may be done with this. Um, I'll just, I'm just checking if there's anything else in here, anything you see that you want to jump in on. Um, uh, okay. Um, oh, this one. "Elon, you are the CEO of two companies and chair of a third. One wouldn't think you have a lot of spare time to devote to a new project."

Musk:

"Yeah, that's true. But AI safety has been preying on my mind for quite some time, so I think I'll take the trade off in peace of mind." Um, and yeah.

Brian Merchant:

And we now know that it worked. He has peace of mind and has left the public sphere and is quietly contemplating the world on a hilltop somewhere.

Emily M. Bender:

Yeah, or jumping into his millions like Scrooge McDuck. That's what I wish all billionaires did. Just go swim in your gold. Okay, so I wanted, this last one is also wonderful, so let's go for it. This is Cade Metz writing in Wired under the tag or sticker 'Business,' December 15th, 2015, 7am. So four days and seven hours later. Headline, "Elon Musk's billion dollar AI plan is about far more than saving the world." Subhead, "There are more forces at work in the creation of OpenAI than just the possibility of superhuman intelligence taking over the world."

Alex Hanna:

There's a great image here. It's a picture of Elon Musk wearing a gray blazer over a, um, I guess that's like a black button up and he's in front of like, either what is like a nebula or a, like a, a volcano exploding.

Brian Merchant:

A solar flare maybe?

Alex Hanna:

Or, I don't know. It's just, anyways, just one of these anyways. Very extra as a, in an image.

Brian Merchant:

That headline too. Can, like there are some, you know, excitable headlines that we've--but 'there's far more than saving the world.' So like, the saving the world is the, you know, the supposition, that's already there. And then this is, and it's not only more, but far more, like, so this is, this is like, I don't know, like intergalactic, they're saving like the universe. Uh, it's--

Emily M. Bender:

With that space image there. Yeah. And then also the, um, again, um, more forces at work than just the possibility of superhuman intelligence taking over the world. So Metz is taking no critical distance here.

Alex Hanna:

No. Um. And Christie, our producer, says, "The headline is giving Hamlet: 'There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.'"

Emily M. Bender:

Okay. Um, so starting in here, first paragraph, "Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So the two entrepreneurs are creating a billion dollar not for profit company that will maximize the power of AI and then share it with anyone who wants it. At least this is the message that Musk, the founder of electric car company Tesla Motors--" Uh, correction, he wasn't the founder, but going on. "--and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company's launch, Altman said they expect this decades long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be, quote, 'usable by everyone instead of usable by, say, just Google.'"

Brian Merchant:

Yeah, it does also give you a sense of like the amount of gravity that Musk already commands, that this is an article that's like basically kind of just restating, even in more grandiose terms, what they'd already published a couple days ago. So this is really, it really did, like, I do remember when this news broke. We weren't in on the insider circles. We didn't get any interviews with any of the principals, but it's, it's really interesting. It was like a week of, uh, of headlines coming out like this.

Alex Hanna:

Yeah. And I, there's this one statement here where they're talking about the interview, and I don't know if this was in the other interview in Wired, but there was also a, a Medium, um, page called Backchannel.

Brian Merchant:

I think it's the same one that they republished it on Wired. The one that we just read, I think was, was originally published on Backchannel, which was Steve Levy's like special thing. Yeah.

Emily M. Bender:

Yeah.

Alex Hanna:

And just highlighting just the line, because I kind of missed this the first time around, where Altman said, "Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements." And then someone in the chat, and I think it was SJayLett, was like, "Oh yes, because that's how the Austin Powers movies worked. Good humans protected against Dr. Evil and it wasn't the International Man of Mystery himself." Oh no. It said, yeah, SJayLett says, "I don't remember humanity stopping Dr. Evil so much as Austin Powers and poor planning."

Emily M. Bender:

And, uh, MMitchellAI in the chat says, "Maximize the power of AI." This was in that first paragraph. Um, "Is corporate speak being passed off as a normal thing to say?" Um, so again, we have the journalist sort of in here helping, right? Helping build the hype. Helping make the rest of us, like, have to deal with this instead of coming at it with a critical eye.

Brian Merchant:

And we still see stuff like this happening. Like, I just, I mean, and now, and I, it's just like, why, like, can we think for a second about why they might want to do this? Like, why beyond just saying, I want to save the world? Like, two people who have given us very little evidence that we should trust them, even then, even 10 years ago, uh, that we, that we can just kind of reprint what's on the press release. It did, you know, our earlier commenter in the chat here was right. It did, you know, giving them so much slack did, did make things worse in a, in a pretty demonstrable way.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, okay, so I'm going to get into this open thing again. So, "Increasingly, companies--"

Alex Hanna:

Oh, prior to that, let me, I want to get, I want to get into this thing Miles said.

Emily M. Bender:

Oh, yeah.

Alex Hanna:

So Miles--

Emily M. Bender:

PhD student Miles.

Alex Hanna:

Yeah, so this is back when Miles Brundage, who went on to work as a high up policy person at OpenAI, was still a PhD student at Arizona State, who said, "It's not yet an open-and-shut argument," um, of OpenAI. "At the point where we are today, no AI system is at all capable of taking over the world and won't be for the foreseeable future." And I'm like, okay, so there's, there exists a realm in which that is within the possibility, and already as a PhD student he's lending credibility to that. Anyways, I just want to dog on that.

Emily M. Bender:

Yeah. I was actually curious to see when he's listed as a PhD student, did he finish his PhD? And he did.

Alex Hanna:

He did. Yeah.

Emily M. Bender:

Yeah. Um, so, you know, good job Miles finishing. That's an achievement. Um, but also--

Alex Hanna:

But sorry for everything you did after that.

Emily M. Bender:

Okay. Um, so, uh, "Increasingly companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive." And again, I'm like, did you miss the nineties? Cause that, that already happened once, right? The, the, the new economy, if you were trying to do something other than like giving away open source software, that was so old economy. So, you know. I don't think this journalist is so young, but maybe, all right. Um, so, uh, talking about the advantages of open, um, and, so, uh, "Such sharing is a way of competing. If a company like Google or Facebook openly shares software or hardware designs, it can accelerate the progress of AI as a whole. And that ultimately advances their own interests as well. For one, as the larger community improves on these open source technologies, Google and Facebook can push the improvement back into their own businesses. But open sourcing also is a way of recruiting and retaining talent." So we've seen this point before, right?

Brian Merchant:

I think it's interesting how, like, now, who is the actual open source player now? It's, it's Meta. It's Facebook. And it's so interesting to see, like, OpenAI gets so, and Elon Musk gets so much credit and so much, you know, press for doing it then. But it just, I think it just shows how much, like narrative and like building this scaffolding is important. Like they had, they, they weren't part of one of the, one of the sort of extant tech companies. So they got the chance to sort of build this narrative as like kind of a reaction to what Google and at the time Facebook was doing. And now Facebook is making, they're trying to make these arguments, but like nobody cares because it's Facebook.

Emily M. Bender:

But also they're lying. Because what they're calling open is not. They're saying it's open source, but what they're doing is they're releasing model weights. They're still not giving any documentation on the data, they're not giving the training software, it's not actually open source. The only, the only group I know that's doing something, well, there was the whole, um, the Pile and EleutherAI, but then also, um, Ai2 with OLMo, trying to.

Alex Hanna:

Well, there was also, uh, what Hugging Face was doing with Bloomz. Yeah. And the, and the model. And, but I mean, the, the thing for--

Brian Merchant:

But OpenAI was lying--right, now, at this point, we know now that they were lying too, and they got all, they got the benefit of the doubt because they had a kind of a fresh--

Alex Hanna:

And I do wanna give a shout out about the, sorry, we're getting to a lot of, but the ecosystem element of this too, because there's a remark in this about TensorFlow. So, "This competition may be more direct than it may seem. We can't, we can't help but think that Google open sourced its AI engine TensorFlow because it knew OpenAI was on the way." Um, yada, yada, yada, but even talking about TensorFlow as being this kind of ecosystem of openness is not, is not true. You're, you're the largest player in the ecosystem, and that goes for Android and Chrome as well. I want to refer readers or listeners to our episode 21 with Sarah West and Andreas Liesenfeld about open source and what open source means, including a paper that I think Sarah wrote with Meredith Whittaker and David Widder on open source and the kind of, um, what is the word, not closure, but the kind of capture, um, or falseness of open source itself and the way that open really, um, doesn't mean anything at this point.

Emily M. Bender:

Yeah. Yeah. Um, okay, there's some, this, this whole article is just like credulous AI hype. So, you know, believing them on the open. And then this paragraph here, "Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain." No, they don't. Like, really? Just because they said that, you've got to dig a little deeper. And then it gets worse. "Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive." And, it's like, no. Okay, so yes, you can create a program that can classify images as cat or not to a certain degree, right? Um, but that's not--I wouldn't even call it recognition, right? There's a whole bunch of anthropomorphization happening in here. So recognize sounds like cognition. Feeding it sounds like it's some biological thing, right? But then, okay, does ChatGPT carry on a conversation? It produces the form of a conversation, right? And that's getting us into all kinds of trouble. Um, have the Teslas learned to drive? No, they have not. Um. So, um, yeah. All right. Um, uh, talking about data and how important it is. So, "Chris Nicholson, the CEO of a deep learning startup called Skymind--"

Alex Hanna:

Not creepy at all.

Emily M. Bender:

Yeah. Skynet. Uh, "--which was recently accepted into the Y Combinator program: 'I'm sure Airbnb has great housing data that Google can't touch.'" So this is interesting because this is 2015 and we've got like, total credulous access journalism going on and these guys are saying the quiet part out loud, right? Your data is gonna make us rich.

Brian Merchant:

Yeah.

Emily M. Bender:

And people didn't react as we should have.

Brian Merchant:

Yeah. I mean, and then we, it was, it is, it's just so much more out in the open right there. Yeah. We're going to use, I mean, that's that specific example too. It's like, your housing data? Like, oh, I bet Airbnb has a bunch of great data about your house that we can plug into a, into a large language model to, to train whatever for the future. Like, yeah, I mean, it, it also doesn't make a ton of sense, but it also just reflects exactly what they're, what they're thinking and what has come to pass, right? So many of these things, as you pointed out earlier, like, it doesn't really, you know, matter to them what it is, just size. It's like if it's, if it's Tesla cars collecting data, or if it's like, you know, Reddit threads, as they mentioned in the articles, whatever is bigger, more out there, they figure they can just like brute force it into something that will matter. They can just transmute that into something that will be meaningful and useful and profitable.

Alex Hanna:

Yeah, yeah, there's a few, yeah, there's a few things there. First on the access journalism part, IrateLump in the chat says, "Access journalism has led to so much of this. Either you uncritically reprint their press releases or they shut you out entirely." And I know that there's been some very trenchant critiques of kind of access journalism, especially within the tech sphere. Um, I think Paris Marx, who we had on the, uh, who we had on the pod, I think two or three eps ago, has had, um, a really great kind of, um, critique of, uh, Kara Swisher and her kind of turnabout on Elon Musk, where 'Elon was good and then turned. Now he just wants to go and get a bunch of money.' No, no, you were playing, you were buddy buddy with him, and then he pissed you off, and then now you're here, and there's no way to actually hold any of these people to account.

Emily M. Bender:

Yeah. All right, I want to get this pessimistic optimist section in here before we wrap up and head over to Fresh AI Hell. So, subhead, "Pessimistic optimists. But no, this doesn't diminish the value of Musk's open source project. He may have selfish as well as altruistic motives, but the end result is still enormously beneficial to the wider world of AI." Barf. Okay. "In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do so as well, if it hasn't already. That's good for Tesla and all those Y Combinator companies, but it's also good for everyone that's interested in using AI. Of course, in sharing its tech, OpenAI will also provide--" Ack, I'll try again. "Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook. And Dr. Evil, wherever he may lurk. He can feed anything OpenAI builds back into his own systems. But the biggest concern isn't necessarily that Dr. Evil will turn this tech loose on the world. It's that the tech will turn itself loose on the world. Deep learning won't stop at self driving cars and natural language understanding. Top researchers--" There's those top researchers again. "--believe that given the right mix of data and algorithms, its understanding can extend to what humans call common sense. It could even extend to superhuman intelligence."

Alex Hanna:

Dun, dun, dun.

Brian Merchant:

It's overwhelming. And again, just as a reminder, OpenAI does not exist yet. It is just, this is three days after it's been announced. This is all just a combination of projection and just like sort of you know, a reading of what exactly Altman and Musk have said and put into press releases. That's all that exists at this point.

Emily M. Bender:

And all of this super intelligence stuff is just like platformed as if it made sense.

Alex Hanna:

Yeah.

Emily M. Bender:

Right.

Alex Hanna:

And I mean, this is the people that they talk to. I mean, this is this Nicholson character who's talking about a quote "escape velocity" of an AI system becoming, quote, 'smarter and smarter.' And if it did do that, it would be scary. Uh, and then they talk about guardrails and, you know, the guardrails is by giving good AI to good people. Um, so, and the, the thing that really irks me is the penultimate paragraph of these, where they say, "How necessary those precautions really are depends, ironically, on how optimistic you are about humanity's ability to accelerate technological progress. Based on their prior successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward." And I'm just like, first off, fucking up the MLK quote to talk about technological progress, you know, you can harness, you could harness the, the centrifugal force in MLK's grave to power some of these data centers. Um, but just the second, secondly, the idea that, again, issues of harm have to do with technological breakthrough, which of course is patently ridiculous. It's a category error, it is just false.

Emily M. Bender:

And the arc of progress will keep bending upward. Like that's, as you said, Alex, that's not how that quote goes.

Alex Hanna:

Yeah. Yeah. It's not even, exactly. First off, that's not what an arc is. You mean, you mean the parabola of, of, of progress? Like, I mean, maybe you mean the exponential function? Like, what are you talking about, man?

Emily M. Bender:

Yeah. All right. So, so the business model in all of this, I think, did we dog enough on the, 'the AGI will figure it out.'

Brian Merchant:

Yeah, can I do one? Look at this, this quote at the bottom of that graf, which is, "Thinking about AI as the cocaine of technologists. It makes us excited and needlessly paranoid." Like, that maybe--we'll give them that one.

Emily M. Bender:

Yeah. So, so, you know, we, we billed this episode as looking at the business model of AGI, and you know, the, the business model, I don't know if it was in any of these articles, but it's in your report, what they say is the business model is, 'build the AGI and it will figure it out for us.'

Brian Merchant:

Yeah, these, yeah, these articles predate even the earliest, sort of, thinking about a business model, I think. As we mentioned earlier, the way that I think about, uh, this genesis of OpenAI is as this like strategic hedge, as, you know, as, uh, Altman's sort of, uh, Silicon Valley socioeconomic, uh, ladder climbing, and, and then like the, the sum product of this is this research project that they then actually do sort of attract people to work for. And from the beginning, it is just sort of, I think it's, you know, I would have loved to have been in the room, uh, as any reporter would, when they're actually sort of hammering out the early steps of, of like what OpenAI is to be. But there's people from Amazon there, there's, uh, there, there are other tech companies, there's VCs there, so like, far from this sort of, you know, origin story of it being this totally, uh, altruistic, world saving program. It's in the air, right? Like, they're adjacent to, like, huge fountains of capital and some of the biggest players in Silicon Valley. So from the beginning, it's like, there's, there's, we don't know what it is yet, but we think that there could be a play against Google, and it's starting with this. And then, so yeah, you have a number of years where they're just kind of building the mythology, right? And I think it also has to be noted that this is all happening in sort of like the zero percent interest rate period, where it's like easy to sort of get money for startups and to get money invested in things. And so you have all these companies arising, like Uber, that aren't profitable for a decade or more, and it's leading everybody to believe this idea where the story is the most important thing, this sort of, the strength of the conviction, getting investors on board. And then, you know, it doesn't really matter whether or not you have anything resembling a sustainable or working business model.
So all this is kind of in the air as OpenAI is forming, and they just kind of mess around for a few years. They get good researchers. They, for a while, they're working on robots. For a while, they're working on esports and playing, and they're, like, doing all these things very much catered towards the press, which is like, oh, we're gonna, you know--remember? Do either of you remember when they announced that they had made a model that was too powerful?

Emily M. Bender:

Oh, yeah, we were--the whole NLP community was laughing at them. And like, ha, ha, ha, this is marketing, yeah.

Alex Hanna:

It was the, it was like the, the, the, whatever, I forgot they called it the Q* or whatever. That was, that was a few cycles ago or whatever. And that, that was, that was with, around like the board mixup and everything.

Emily M. Bender:

We've got to get ourselves over to Fresh AI Hell here.

Brian Merchant:

Yeah, so I'll, I'll just wrap up by saying that, yeah, so they used all of this, and then they just, they, they understood, or at least had some reactive understanding of, how the press responded to their moves. And then they built this mythology. So when ChatGPT drops, they really, they, they don't have a business model. They have a vague sense of the things that they've been saying, right? Like, as you said, about halfway through, they start saying, well, we're gonna ask AGI to figure it out, and literally like saying that on stage at tech conferences and having people, you know, kind of nod along and investors giving them more money. So it really is another sort of snapshot of the moment that we were at, when you could just kind of say that and still get billions of dollars. And then you have a product that people like, kind of like--it's not clear that it's gonna make money--like ChatGPT. And then from there, you kind of have to, the last two years have been the story of them trying to figure out how to harness this, uh, this apocalyptic hype that they've built for themselves into something that's going to generate returns.

Alex Hanna:

Yeah. All, all, uh, all, uh, all bark, no bite, all, all hype, no, I don't know. I'm trying to think of something else that starts with H.

Emily M. Bender:

All hype, no--

Alex Hanna:

Hustle?

Emily M. Bender:

Horchata, I don't know.

Alex Hanna:

Horchata. I did not expect that.

Brian Merchant:

No horchata.(laughter)

Emily M. Bender:

Alright, so Alex, here's your prompt. I'm gonna make you--

Alex Hanna:

Okay, but I, I have an idea. Yeah, I have a, I have a musical styling I want to do. But you, uh, gimme the prompt.

Emily M. Bender:

Okay. So the prompt is, you are not the Fresh AI Hell demon, but its corresponding angel this time. Sipping some horchata. Disappointed to have found out that OpenAI was not actually altruistic.

Alex Hanna:

I know, uh. All right. I'm gonna do, since we started with a Flight of the Conchords reference in the intro, I'm going to, that's the musical styling, and I'm just like, I've got my horchata in AI heaven. I'm ready to be benevolent. I opened the newspaper, what do I see? Sam Altman? No. Could it be AGI? I like hear an echo of myself and it sounds like a, like a thing. Anyways, that's all I got. AGI is a lie. Say it isn't so. I do a spit take, horchata everywhere, in my corresponding angel's hair. That's all I got.

Emily M. Bender:

All right. So now we're going to have to make a Mystery AI Hype Theater 3000 cookbook with a recipe for angel hair pasta with horchata.

Alex Hanna:

Yeah, I'm, I'm, I'm about it. I'm already in the works setting up a band with, with my girlfriend, and it will be, maybe, maybe our side project will be Rat Ballz. So.

Emily M. Bender:

Yeah, awesome. Okay, so we've got too many, but we're gonna go quickly. This first one is from the Financial Times, uh, today. Uh, journalists are Madhumita Murgia, Christina Criddle and George Hammond. Um, headline is, "OpenAI explores advertising as it steps up revenue drive." Surprise, surprise, they're trying to figure out how to sell ads. Uh, so, "ChatGPT maker hires advertising talent from big tech rivals."

Alex Hanna:

I, I make this, I make this joke, or it was a joke in an earlier edition of the book, um, The AI Con coming out in May 2025, but it's something about putting AI ads in ChatGPT results and guess what? It's happening.

Emily M. Bender:

There it is. Yeah. Okay. Uh, next, uh, TechCrunch, um, by Ingrid Lunden on November 19th. Headline, "Itching to write a book? AI publisher Spines wants to make a deal." So this is a self publishing platform that claims that, thanks to being powered by artificial intelligence, it can do all of the work of a publisher and do it faster and cheaper.

Alex Hanna:

Yeah, that task list includes "editing a piece of writing, providing suggestions to improve it, and giving users a frank projection on who might read the published work, providing options for the cover design layout, and distributing the finished product in ebook or print-on-demand formats." Gosh, imagine getting a, uh, press kit on a book generated this way. Nightmare scenario.

Emily M. Bender:

Yeah. Ugh.

Brian Merchant:

I think it, that just like really shows like what it's all about here, right? They're just trying to, it's just purely trying to, you know, degrade the, the, the, the work and the labor conditions of people in publishing or in a given field, a creative field. Yeah. That's all it is.

Emily M. Bender:

And then at the same time, the information ecosystem, because then you have all of this stuff just, you know, flooding the zone.

Brian Merchant:

Slop. Of course, because they're going to be inputting AI generated text into these things.

Emily M. Bender:

And, you know, speaking as a co author of a book where we are in that, like, the book is written but it's not out in the world yet and we're impatient, like, it just, it hurts to see this because I know the work that the publisher is doing and there's a reason that we didn't just sub publish, self publish the book. Okay.

Alex Hanna:

Yeah.

Emily M. Bender:

That leads in very nicely to this one from Business Insider, um, analysis by Alistair Barr, updated November 30th, 2024, headline,"In a world of infinite AI, the new luxury item could well be humans."

Alex Hanna:

Yeah. So this is really, this is really, uh, saying the quiet part out loud, which is, hey, for all the rest of you, you're going to get AI slop and for the wealthy, you're going to get actual human contact.

Brian Merchant:

Two tiers.

Alex Hanna:

Yeah.

Emily M. Bender:

Um, and then, why is the image an aerial view of a carnival parade? "Residents enjoy a carnival parade on February 6, 2005 in Viareggio, Italy." Interesting choice.

Alex Hanna:

I don't know, maybe, maybe it's, maybe AI, quote, "assisted" journalism.

Brian Merchant:

It's supposed to be a lot of people. Infinite.

Emily M. Bender:

Infinite people?

Alex Hanna:

Infinite people.

Emily M. Bender:

Alright, so this next one actually is from my email inbox, with a little bit of redaction. The subject line was, "Verified avatar of Dr. Andrew Ng" and then, slash, the name of this company. Um, and it was sent to me on Friday, November 22nd at noon. "Hello, Dr. Bender. Redacted was launched by Dr. Andrew Ng's firm AI Fund with a focus on building conversational agents slash avatars in partnership with leading academics and thought leaders. Think virtual teaching assistant offering office hours or a personalized study plan. We have built official verified avatars of Andrew Ng and Lawrence Moroney and are just beginning to work with Barbara Oakley, Terry Sejnowski, Erik Brynjolfsson, and Brian Green. Would you be willing to talk with us about collaborating on a verified avatar of you? How does Monday to Tuesday, November 25th to 26th look? Thank you, redacted." And this was Friday at noon. And there's not very many contexts where I would be comfortable saying this, but do these people have any idea who I am?

Alex Hanna:

They didn't do much research. They maybe saw that you were at the Time AI 100 or something.

Emily M. Bender:

Yeah, I'm guessing it's something like that. They did not get a reply.

Brian Merchant:

Experiment. I feel like--document it on the podcast. Do it. Get one. And you could do 'inside the belly of the beast.'

Emily M. Bender:

Nope, nope, nope, nope, nope, at no point will there ever be a verified avatar of me. So if you see one, it's fake.

Alex Hanna:

Horrifying. Unverified. Unverified.

Emily M. Bender:

Unverified.

Brian Merchant:

Unverified.

Alex Hanna:

You know, you can't get this, you can't get this in the stores, man. This is unlicensed Emily M. Bender tap water.

Brian Merchant:

Black market?

Alex Hanna:

Black market.

Emily M. Bender:

Oh, gosh. Okay. Uh, TechCrunch, "PSA: You shouldn't upload your medical images to AI chatbots." And this is by Zack Whittaker, published on November 19th. Um, and he says, "Here's a quick reminder before you get on with your day. Think twice before you upload your private medical data to an AI chatbot." Because people are actually using these, you know, ChatGPT or Gemini or whatever, to ask questions about their medical concerns. And they're doing this, uh, through uploading things like x-rays and MRIs and stuff like that. Don't do that.

Brian Merchant:

Don't do that. Musk, there was that thing Musk was asking, like, just X users to, like, send him their medical data, and they were just, like, tweeting it. They were just, they were like MRI scans.

Alex Hanna:

Oh my gosh. Good lord.

Emily M. Bender:

Okay, keeping us going quickly, um, this was an NPR piece that I heard multiple times over the weekend. They kept airing it, um, but initially played on November 26th of 2024."A look at a pilot program in Georgia that uses, quote, 'jailbots' to track inmates," by Leila Fadel, host. Thoughts?

Brian Merchant:

Um, it's just the, it's a torment nexus case if you've ever seen one, right? Just, you know, why, why would you aspire to this?

Alex Hanna:

Huge, huge nightmare scenario. I wonder if they'll have to have two Department of Corrections people babysitting them like they have to do for the, the New York City subway Dalek, or whatever.

Emily M. Bender:

Yeah. I also, it kept bugging me each time I heard this. It starts with, "Six foot tall robots are now monitoring inmates at a county jail in Georgia." And the, the, the fact that they lead with the height of the machine was somehow bothersome to me.

Alex Hanna:

Yeah, like it's just like, It has to be intimidating to the, it can't be a cute little, uh, you know, um, what was, what was the robot dog called? Whatever.

Emily M. Bender:

The Boston Dynamics one?

Alex Hanna:

No, no. I was talking about the, the prior one that I think, that was more of a mass market one. Anyways.

Emily M. Bender:

Oh, aibo.

Alex Hanna:

The aibo yeah.

Brian Merchant:

They have to assign some, like, corporeality to it, right? Because, like, you know, so much of it is, as you both have documented exhaustively, hype and ephemeral. So they're like, yeah, you have to say, like, six foot tall, like, sit up, pay attention. This thing is actually, you know, a being in a, in a jail somewhere, which does make it worse.

Emily M. Bender:

Yeah. Okay, and so then to take us out on something of a high note, I have this wonderful comment from Sage at trans.bluesky.social, um, in this thread about, um, how the, the tech bros say we can't possibly handle data carefully. Sage says, "If an AI tech bro ran Campbell's, quotes, 'prepping all these dead chickens is too much work, so I just put millions of rotting poultry carcasses into a giant vat. Hope everyone loves soup that tastes like chicken shit and burnt feathers.'"

Brian Merchant:

Thank you, Sage.

Emily M. Bender:

Yeah, Sage just nailed it there. All right. That's it for this week. Brian Merchant is a journalist in residence for the AI Now Institute. Thank you so much, Brian, for joining us.

Brian Merchant:

Thank you so much for having me. Can I plug my new podcast with the aforementioned Paris Marx?

Alex Hanna:

Please do.

Brian Merchant:

Okay, yeah, we've just started a tech critical podcast of our own called System Crash. So yeah, check it out, and uh, we'll have to have you both on there sometime.

Emily M. Bender:

That'll be fun.

Alex Hanna:

Yeah, I'm excited, very excited for what y'all have cooking up in the lab. That's it for this week. Our theme song was by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks, as always, to the Distributed AI Research Institute. If you like this show, you can support us by rating and reviewing us on Apple Podcasts and Spotify, and by donating to DAIR at DAIR-Institute.org. That's D A I R hyphen institute dot O R G.

Emily M. Bender:

Find us and all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore Institute. I'm Emily M. Bender.

Alex Hanna:

And I'm Alex Hanna. Stay out of AI hell, y'all.