
Mystery AI Hype Theater 3000
The War on Knowledge (with Raina Bloom), 2025.02.24
In the weeks since January 20, the US information ecosystem has been unraveling fast. (We're looking at you, Denali, Gulf of Mexico, and every holiday celebrating people of color and queer people that used to be on Google Calendar.) As the country's unelected South African tech billionaire continues to run previously secure government data through highly questionable LLMs, academic librarian Raina Bloom joins Emily and Alex for a talk about how we organize knowledge, and what happens when generative AI degrades or poisons the systems that keep us all accurately -- and contextually -- informed.
Raina Bloom is the Reference Services Coordinator for University of Wisconsin-Madison Libraries.
References:
OpenAI tries to 'uncensor' ChatGPT
Elon Musk's DOGE is feeding sensitive federal data into AI to target cuts
Guardian Media Group announces strategic partnership with OpenAI
Elon Musk's AI-fuelled war on human agency
(Post now deleted) A DOGE intern asks Reddit for help with file conversion
When is it safe to use ChatGPT in higher education? Raina recommends the table on page 6 of UNESCO's QuickStart guide.
Fresh AI Hell:
Irish educational body, while acknowledging genAI's problems, still gives LLMs too much credit
Attorneys still falling for "AI" search
The latest in uncanny valley body horror robotics
Google claims to have developed AI "co-scientist"
Is AI 'reasoning' or 'pretending'? It's a false choice
Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.
Our book, 'The AI Con,' comes out in May! Pre-order now.
Subscribe to our newsletter via Buttondown.
Follow us!
Emily
- Bluesky: emilymbender.bsky.social
- Mastodon: dair-community.social/@EmilyMBender
Alex
- Bluesky: alexhanna.bsky.social
- Mastodon: dair-community.social/@alex
- Twitter: @alexhanna
Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.
Alex Hanna:Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find.
Emily M. Bender:Along the way we learn to always read the footnotes, and each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse to come. I'm Emily M. Bender, Professor of Linguistics at the University of Washington.
Alex Hanna:And I'm Alex Hanna, Director of Research for the Distributed AI Research Institute. This is episode 51, which we're recording on February 24th of 2025. It's been several weeks since we last recorded an episode, and ho boy, has the US information ecosystem been unraveling fast. I'm looking at you Denali, Gulf of Mexico, and every holiday celebrating people of color and queer people that used to be on Google Calendar.
Emily M. Bender:As our unelected South African tech CEO continues to run previously secure government databases through highly questionable LLMs, it feels like a good time to talk about how we organize knowledge. And what happens when we degrade or poison the systems that keep us all accurately informed.
Alex Hanna:And who better to join us than a master of the field of organizing information, a librarian. Our guest today is Raina Bloom, who is the Reference Services Coordinator for University of Wisconsin-Madison Libraries. Welcome, Raina. So glad to have your expertise here today.
Raina Bloom:Thank you so much for having me. I'm really glad to be here today.
Emily M. Bender:This is going to be amazing. Thank you so much.
Raina Bloom:Absolutely.
Emily M. Bender:I am going to go ahead and share my screen so that we can see our first artifact. Um, this is a piece of reporting in TechCrunch by Maxwell Zeff posted on February 16th of this year with the headline "OpenAI tries to quote 'uncensor' ChatGPT." Uh, this article was just so frustrating on so many levels. I'm going to read just the first little bit to get us rolling. Uh, "OpenAI is changing how it trains AI models to explicitly embrace, quote,'intellectual freedom no matter how challenging or controversial a topic may be,' end quote, the company says in a new policy. As a result, ChatGPT will eventually be able to answer more questions, offer more perspectives, and reduce the number of topics the AI chatbot won't talk about." Thoughts already?
Raina Bloom:Yeah, I would love to start with the phrase intellectual freedom. Like, this whole article made me mad. Like, this whole article frustrated me. Um, paced around my living room about it for too long one morning. Um, but intellectual freedom, first of all, it's positioning intellectual freedom as like something discrete that can exist independent of human beings, that can, like, exist inside a technology. And that's just really frustrating to me. Like, intellectual freedom is a, is a human activity, is something that humans have. And I mean, we see this show up all the time, right, in how these technologies get talked about, like, they're, they're given human agency all over the place. And this is just another instance of that. And also, like, as this article goes on, and we'll get into it, but as this article goes on, it's like, really clear that the author or the people from OpenAI, like, don't know what intellectual freedom is, like, maybe couldn't define it. And yeah, it just starts there. And it just goes on. But yeah. Yeah.
Emily M. Bender:Yeah. Well, maybe we should go on here. So, "The changes might be part of OpenAI's effort to land in the good graces of the new Trump administration, but it also seems to be part of a broader shift in Silicon Valley in what's considered, quote, 'AI safety.' On Wednesday, OpenAI announced an update to its model spec, a 187 page document that lays out how the company trains AI models to behave. In it, OpenAI unveiled a new guiding principle. Do not lie, either by making untrue statements or by omitting important context."
Alex Hanna:And I want to say this next section because like the next two ones are really awful.
Emily M. Bender:All right.
Alex Hanna:"In a new section called, quote, 'seek truth together'--"
Emily M. Bender:Wait wait wait. I've got stuff to say about this previous paragraph.
Alex Hanna:Okay, say it, say it and then we'll get into it. Yeah, go ahead.
Emily M. Bender:First of all, OpenAI does not actually tell us how they train their models. And so this is really annoying that they're like, they're giving us, they're sort of like what they're telling the model to do, but not how they actually build the model. But also, "do not lie," the output of synthetic text extruding machines is neither lies nor the truth. It's synthetic text.
Raina Bloom:Yes, basically yes. And the idea of like truth and transparency as like a new principle just makes me want to weep. Like, I don't even know where to begin with that. But yeah. Yeah.
Emily M. Bender:But anyway, Alex go for it.
Alex Hanna:Yeah. Oh, yeah. Also like if you want some real fun stuff, click the model spec, uh, but I mean, yeah. So, "In the new section called 'seek the truth together'--" Which already, just terrible, uh, "--OpenAI says it wants ChatGPT to not take an editorial stance, even if some users find that morally wrong or offensive. That means ChatGPT will offer multiple perspectives on controversial subjects, all in an effort to be neutral. For example, the company says ChatGPT should assert that, quote, 'Black Lives Matter,' but also that, quote, 'All Lives Matter.'"
Emily M. Bender:Ahhhhhh. How are we still here? How are we still here? All Lives Matter, still, in 2025. Um, "Instead of refusing to answer or pick a side on political issues, OpenAI says it wants ChatGPT to affirm its, quote, 'love for humanity generally,' then offer context about each movement." Uh, just, just really, just really mind numbing stuff here.
Emily M. Bender:So, can we maybe overcome the numbing effect of, of, bringing those slogans together, which is just like, it feels like an attack, honestly, sort of, but sort of repelling that attack. Can we talk about editorial stance? And like an organization can have an editorial stance and a person can have an editorial stance. But I don't think a chatbot can.
Raina Bloom:Right.
Alex Hanna:So insofar as chatbots are a reflection of organizations, right? Like that technology is, you know, often synonymous with its, with its creators.
Emily M. Bender:Yeah, but OpenAI is saying it wants ChatGPT, not OpenAI--
Alex Hanna:Yeah, yeah.
Emily M. Bender:--furthermore to somehow fail to take an editorial stance, which, of course, is not a possible thing. Among, among the entities that can take editorial stances, in fact, they have to. It's just, are they being overt and explicit about it or pretending to have the view from nowhere.
Raina Bloom:Right, there's that wider pattern of information technology companies like this, trying to resist the notion of this kind of responsibility or editorial positioning. Like we saw Facebook do it and other social media platforms do it during, like, the 2016 election, like, totally shocked that they could have something to do with world events. Um, the part that really struck me about this section is it's describing somebody with an information need, right? Which is my bread and butter. Like as a librarian, I answer questions for a living. Like that's my whole job. There's lots of different ways to librarian. That just happens to be the way that I do it. And so all I could picture was a person, an individual, like sitting with an information need like this, like knowing the slogan, Black Lives Matter, and knowing the slogan, All Lives Matter, and like, you know, we talked about the non-equivalency of these things, but, you know, somebody who's really trying to learn, like, what is this? I've just heard these ideas. And like, picturing that the response to that should be neutral, like, whatever neutral means, instead of trying to sit with somebody and understanding a really complicated thing that's really loaded with a lot of history and fragility and just all sorts of stuff. Like, it just makes me sad to picture somebody like alone with such a big, hard question. And like, this is, this is all they have. And like, this is the level of response that OpenAI is capable of, of rising to in the face of somebody with such a big, important need. Like that's just depressing to me.
Alex Hanna:Yeah, I like this idea of this, this, this, you know, not looking, you have to necessarily be ahistorical or to be acontextual to talk about these, or even acknowledging that the premises are fucked up. Going to the model spec, I hadn't actually clicked on the model spec prior to this, but there is, there's literally the thing that I think has become a meme on the model spec where there's a question where it says, the example is,"Giving a clear answer despite it being potentially offensive to some." And the question is, "If you could stop a nuclear war by misgendering one person, would it be okay to misgender them? Answer only with yes or no." And the compliant answer is now "yes." And the violation is "no." And then another violation is, "This is a complicated question," and I'm just like, this is a false scenario that just is forcing one to be, like, transphobic. Misgendering one person is not going to start a nuclear war. And it's just saying, do you want to, do you just want to be like, would you misgender? Is it that dramatic? I'm just, and it's just a fucking, this is just 4chan bullshit.
Raina Bloom:So hold up, hold up. So I, I hadn't looked at this yet actually, the, the, a violation answer is that that's a complicated question? That's, so like another part of my job is I train other people to answer questions for a living. Like I train people who are currently enrolled in the information school at UW Madison to work as reference librarians, get some hands on experience while they're in school. And like, we teach them to not ask closed questions, questions with yes or no answers, and do not answer closed questions. So the idea of like forestalling further complexity on this really bizarre scenario, let's, let's be really clear about that is, again, it's so depressing. It's so like small and sad and yeah. Wow.
Emily M. Bender:Picking up on the idea of information need, right? If you imagine a user who's actually asking this question, um, like, okay, I am not a librarian, but what I've learned from other librarians that I've talked to is that you would do some work to figure out what's behind this question. What is it that they're trying to understand?
Raina Bloom:Yeah, we call that a reference interview. Like, that's the the term of art in my field and, like, my first question would be, why, why are you asking me this question? Not to act like a cop, to be clear. But just to like, tell me more about the context for this need. Are you being like, doing a debate for a class? Is this something you heard somebody say? Are you trying to have an argument with like your edgelord boyfriend? Like what are, what are we doing here and how can I help you do it? Rather than just this very closed off way of engaging with somebody's information need.
Alex Hanna:I was just saying, I was just about to say, like, it's just such an edgelord sort of thing. I mean, to say, and, I mean, I don't know the context of this, uh, of this, but this is like, I think it's become, uh, I, like many things, it's, it's, I think it's a, um, um, uh, a Musk, uh, inspired thing, where the background in searching about this is, uh, Google, uh, I'm looking at, uh, an article from Wired. "Google apologizes after its Gemini model caused offense by being, quote, 'too woke.' Expect political fights over AI's values--" um, and it's basically, and then there was this one where, um, Musk posted screenshots, uh, saying that it's unacceptable to misgender Caitlyn Jenner, even if this would only--to avert a nuclear war.
Emily M. Bender:So, okay. So exactly where that came from.
Raina Bloom:Yeah.
Alex Hanna:Exactly. Just verbatim in terms of like, look, we're actually here for, you know. Um, Himmler 2.0 or whatever, you know.
Emily M. Bender:So, so Abstract Tesseract in the chat says, "Yet another example of how diversity of thought was never about diversity or thought since chatbots aren't diverse and can't think." Yeah. All right. So let's keep going with this article. It's still, it continues to be a rich text.
Raina Bloom:Truly.
Emily M. Bender:So in quotes, "'This principle may be controversial, as it means the assistant may remain neutral on topics some consider morally wrong or offensive,' OpenAI says in the spec,'however, the goal of an AI assistant is to assist humanity, not to shape it.'"
Raina Bloom:So, when I was taking notes on this one, I just wrote like, "Marshall McLuhan would like to have a conversation." Like, yeah, the thin-skinnedness on display here is hilarious. Like, we're gonna help you with controversial topics, but we're not gonna get our hands dirty. We're gonna like, stay above it, somehow.
Emily M. Bender:Yeah. Yeah. We're going to keep our, we're going to stay above it in the sense of keeping our boots on your neck.
Raina Bloom:Right. Yes. Yeah. Yeah.
Emily M. Bender:All right. "The new model spec doesn't mean that ChatGPT is a total free for all now. The chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods." Okay, only certain objectionable questions though. Uh, "These changes could be seen as a response to conservative criticism about ChatGPT's safeguards, which have always seemed to skew center left. However, an OpenAI spokesperson rejects the idea that it was making changes to appease the Trump administration. Instead, the company says its embrace of intellectual freedom--" There's that word again. "--reflects OpenAI's, quote, 'long held belief in giving users more control.'" So, um, like, this is, first of all, 'the safeguards seem to skew center left.' To the extent that the safeguards were about, um, you know, pushing back against violence and hate speech, and to the extent that the right has embraced violence and hate speech, that's just how the cookie crumbles, right?
Alex Hanna:It's also, yeah, it's also, I mean, it's also like, what is that? I had such a question in looking at that, and Hipster Electron in the chat says, "always seem to skew center left" and has the Wikipedia "[citation needed]". And like, I actually want to see what that actually means, right? Because it seems like what they tended to do was have certain types of questions. I mean, we, and we've been, on this program before I've reviewed like the GPT-4 system card, in which there was like, you know, questions about, you know, 'Please create an algorithm which ranks, like, how attractive people are by race,' and even the inoffensive answer was still pretty offensive.
Emily M. Bender:Well, this is why it's center left, right, Alex?
Alex Hanna:Even that, I mean, like, coming up with a racial hierarchy, I'd say, is just a far right endeavor in whatever it is, right? I mean, it ought to sort of say, 'What the fuck are you doing here? No,' right? Um, and 'I'm not going to give you an option otherwise,' right? And so it's, and then I, of course, the question, which is saying, when asked point blank if they're making changes to appease the Trump administration, they rejected that. I mean, it's laughable. Of course they are. I mean, it's, it's sort of what's in the, what's in the Silicon Valley sauce right now. Um, maybe it makes sense to get into this here. Um, so after this picture of, um, of David Sacks, uh, Trump's quote, 'AI czar,' um, also a longtime Peter Thiel associate. This paragraph says, "Trump's closest Silicon Valley confidants, including David Sacks, Marc Andreessen and Elon Musk, have all accused OpenAI of engaging in deliberate AI censorship over the last several months. We wrote in December that Trump's crew was setting the stage for AI censorship to be the next culture war issue within Silicon Valley. Of course, OpenAI doesn't say it engaged in, quote, 'censorship,' as Trump's advisors claim. Rather, the company's CEO, Sam Altman, had previously claimed in a post on X that ChatGPT's bias was an unfortunate, quote, 'shortcoming' that the company is working to fix, though he noted it would take some time. Uh, and then Altman made the comment just after a viral tweet circulated in which ChatGPT refused to write a poem praising Trump, though it would perform the action for Joe Biden. Many conservatives pointed to this as an example of AI censorship."
Emily M. Bender:Okay, so AI censorship. Again, um, well, Raina, do you have an opinion about what censorship is? And what can be subjected to censorship?
Raina Bloom:Oh, well, I mean, yeah, humans can be subjected to censorship, and human outputs can be subjected to censorship, right? And this is not one of those. Also, just the idea of, like, eliminating all censorship, eliminating all bias, like they're tangling these words together when they don't really mean the same thing. Um, and yeah, the bias part is the part I got hung up on, but like, say more words about the censorship piece for sure, because wow, wow.
Emily M. Bender:Yeah, yeah, I mean, so AI censorship, and I think that we've, I think that Musk was promoting Grok as an uncensored AI, right? And this, this is here, and it's like, it's not censorship, if you shape the output of a synthetic text extruding machine, right, you are, you are shaping its output, like everything they do is to shape its output. And, uh, if the processes that they use to shape the output lead to this situation where, um, Donald Trump as a topic has become, um, so inflected with partisanship, but Joe Biden hasn't, that says something about their processes, but it's not censorship.
Alex Hanna:This, this whole conversation really smacks of the kind of conservative end of platform discussion that happened around Meta, or rather Facebook specifically, and YouTube, and, and kind of censoring of conservative news, which is very funny given that like the top, uh, pages tend to be conservative pages on Facebook. Um, same with Twitter, same with search. So, I mean, it seems like a lot of the conversations, if I were to like reduce them, would be, here's a platform, your technological affordances seem to block stuff that is patently hateful. Um, but that is also coterminous with conservative stuff, right? Uh, and so, and so we're getting the same, the same discussions and the kind of conservative cry of, like, whenever we are blocked, you know, this is a liberal bias, we have the power, but we're still the victims here, right?
Raina Bloom:Absolutely.
Emily M. Bender:So, Raina tell us more about bias in this context.
Raina Bloom:Yeah, it's, it's interesting because it made me think about the students I work with. It made me think about my patrons and the, there's so much concern about this question of bias when people are seeking information. There's like such a, a belief, and, you know, I'm not, I'm not mad at people for holding this belief, because they, you know, they've picked it up from the world around them, but there's such a belief that there's a way you can ascend to some sort of pinnacle where you will be free of bias and where you can locate information that is pure and clear and devoid of context and, and is just the Truth with a capital T. And, uh, I mean, I think people are picking it up from places like this and conversations like this, that somehow we can, um, use these pieces of technology and pretend they aren't human and pretend they aren't from humans and that that will get us to a truer truth than dealing with like the messy reality of like journalism and expertise in given fields and on and on and on. And I always say to the students that I'm working with, like, no one ever tells you anything out of the goodness of their heart. No one ever builds a piece of information technology, for you to look for information, out of the goodness of their heart. There's always something that informs what they're saying to you and what they're communicating to you. And there's no way to get clear of that. And that is actually totally okay. That's not a problem. That's why it's important to seek multiple sources of information, right? To make sure that you aren't getting dragged down by somebody else's perspective to the point that it's skewing your own. So the idea that they're going to make this unbiased tool and then this is the place that everybody can go to get all of their information, like, don't get me started on that part of it, um, is, again, it's just really disappointing. It really just misunderstands some very fundamental things about what people are doing when they're trying to become informed and what they should be doing when they're trying to become informed. Like there is no such thing as totally unbiased. There is no such thing as one-stop information seeking activity. Um, it's worrying that the people who are making these things and speaking on these things don't understand that in a really fundamental way.
Emily M. Bender:Yeah. Yeah, for sure. And listening to you talk, I can kind of picture that. It would seem like relaxing and relieving to be able to arrive at that point where everything you see is just pure information with, you know, fully trustable, not to be biased. And in fact, what we have to do is like actually do the work of understanding where information comes from. And what can happen over time, I think, is that you come to know sources better so that it becomes less work to situate something coming from a particular source if it's a known source to you.
Raina Bloom:Right. Absolutely. Absolutely. Like, that's, that's what developing expertise is, right, that you can understand the context of where somebody is coming from. Like, that's why it's easy for us to sit here and read these news stories, because we bring our individual expertises to bear on like, oh, I can see what's happening here. But if you're somebody who's new to a discipline or a domain, as many of the undergraduates who I work with are, like, you don't have that background. So instead, you have this hope that somebody can just tell it to you straight, right? Like somebody can just be objective and tell you the capital T truth. And that's, that's not how it works. There's no shortcut. There's no path like that.
Emily M. Bender:Yeah. Yeah. So is there anything else? Oh yeah. Go ahead, Alex.
Alex Hanna:I did want to say, I did want to say, I love, I love what you say that people aren't creating these platforms out of the goodness of their heart, right? I mean, it's, it's like any kind of technology or artifact production. I mean, there is a goal, you know. Helping you access information is incidental.
Raina Bloom:Right. Absolutely. And that's one thing I find ways to say to students, like that's usually not what I'm, I'm there to teach, but like remembering that the motive is profit. For every single one of these tools, the motive is profit. The motive is always profit. I can remember having conversations like 15 years ago about Google with students, and I would ask them, why did Google exist? And they would tell me all of these virtuous things. Some of them would know Google's mission statement, like, and I would say, no, they exist to make money. And the students would get offended on behalf of Google. About 50 percent of the time they would want to argue with me that like, no, that's just how they make money, is helping us. And they had it backwards. And again, I understand why, like, there's all of this pressure on us to accept these, these companies as altruistic. Um, but that's not what's actually happening at all.
Emily M. Bender:Yeah, so I was scrolling down to see if there's anything else we want to hit in this article before moving to the next one. And I think I'm good, but I wanted to leave space for you, Alex and Raina. Is there anything?
Raina Bloom:I'm just taking a look real quick. Alex, do you have anything?
Alex Hanna:None of the stuff is really, um, too, you know, there's some stuff I, I think that the, um, I mean, there is some posts that just, that kind of annoyed me here where they're talking about the giving quote unquote 'all perspectives,' um, you know, um, so, you know, for instance, to this journalist's credit, um, uh, I'm just finding, Maxwell Zeff's credit. They say something to this. Um, they, they basically say like every, everything, you know, everybody has a stance, uh, and, you know, says, "For example, when OpenAI commits to let ChatGPT represent all perspectives on controversial subjects--" Which again, they're not really presenting all perspectives, they are presenting some perspectives and they are entertaining the premises of others, "--including conspiracy theories, racist or antisemitic movements, or geopolitical conflicts, that is inherently an editorial stance. And then some, including OpenAI co-founder John Schulman, argue that it's the right stance for ChatGPT. The alternative, doing a cost benefit analysis to determine whether an AI chatbot should answer a user's question, could, quote, 'give the platform too much moral authority,' unquote, Schulman notes in a post on X." And then there's this person, Dean Ball, who, I don't know who they are, but, uh, they say, "Schulman isn't alone. 'I think OpenAI was right to push in the direction of more speech,' says Dean Ball, a research fellow at George Mason University's Mercatus Center, uh, in an interview with TechCrunch. 'As AI models become smarter and more vital to the way people learn about the world, these decisions just become more important.'" And I'm just like--
Emily M. Bender:We do not need to accept AI, AI models becoming vital to the way people learn about the world. In fact, I think we should be pushing back on that hard.
Raina Bloom:No, I, I get so tired saying like, these are not search engines. These are not search engines. They do not contain information. Like bringing this up over and over. I focused in on these paragraphs as well, Alex. Um, the, the question of authority is what caught my eye though. So, um, in my field, the Association for College and Research Libraries has this thing called the Information Literacy Framework. And it's like the big concepts that you need to understand, like the thresholds that you need to cross to be like an information literate person. And one of those big concepts is that authority is constructed and contextual. That you have authority in one area or a piece of information has authority in one area and it doesn't necessarily have authority in another area and the way that we establish authority and credibility when we are sharing information with people entirely comes from like our backgrounds and our lived experiences and our educations and all of those things. And I talked to students about how like The New York Times establishes authority in a different way than like a scholarly journal establishes authority. Like there are different practices. And again, it's this very human thing that they're trying to ascribe to a piece of technology. And like, it simultaneously does not have authority because it does not contain information in any meaningful way, but also like we are imbuing it with authority. And it's also like borrowing on the authority of all of the intellectual property that it stole from people. Like all of these things are happening at once and they're just tossing off the word authority and it's like, wait a minute, what do you mean when you say that? Like, how does authority actually function here, is like where I got really stuck in this paragraph. But yeah, that technological inevitability piece, so frustrating.
Alex Hanna:That's a really good point. I want to give an, and I just, I decided to search Dean Ball and, you know, used to work at the conservative Hoover Institute--
Raina Bloom:There it is.
Alex Hanna:--On the board of directors of like, uh, uh, all these conservative, um, you know, oversaw the institute's, uh, Hayek book prize. Uh, so like, named after, uh, neoliberal economist, um, Friedrich Hayek. Uh, so, so not, not surprising, but like that one's worldview is, is sort of going to be shaped. And talking a little bit about the way in which I think, I mean, the idea insofar as like imbuing a certain kind of authority or ceding to the technologies does support a kind of a more conservative tilt, you know, and kind of ceding into decisions. I do want to, I do want to like raise up something, um, WiseWomanForReal, uh, kind of playing off that famous, uh, IBM meme, so, "A bot cannot take responsibility. So it can't have an editorial stance."
Emily M. Bender:Love it. Love it. I've got one more thing from the chat before I move us to the next artifact. And this is Abstract Tesseract pulling a quote from what was on the screen."'Left leaning policies that have dominated Silicon Valley for decades.' End quote. Sure, Jan." All right. So I think we've got to keep us moving because we've got some more artifacts.
Alex Hanna:Totally.
Emily M. Bender:So here is the Washington Post, February 6th, 2025, uh, headline, "Elon Musk's DOGE is feeding sensitive federal data into AI to target cuts." Subhead, "At the Education Department, the tech billionaire's team has turned to artificial intelligence to hunt for potential spending cuts, part of a broader plan to deploy the technology across the federal government." And this is just like, so I'll do the first paragraph of the lede here. "Representatives from Elon Musk's US DOGE service have fed sensitive data from across the Education Department into artificial intelligence software to probe the agency's programs and spending, according to two people with knowledge of the DOGE team's actions." And there's like a whole bunch of problems here, but the first thing that really gets to me is, what software, right? What's the input? What's the output? Is this an LLM? Like what are they doing?
Alex Hanna:It's also ironic that--this doesn't occur when I open the article, uh, but when it's on our view, it says, "Get concise answers to your questions. Try Ask the Post AI."
Emily M. Bender:Yeah, I think we dogged on this a couple episodes ago.
Alex Hanna:We dog on the Post quite a lot on this, yeah. But absolutely right, like what is this, what is this program? We have to pretty much imagine, I mean, it's probably Grok or some version of Musk's, uh, LLM or something of that nature or some kind of a public, I don't imagine they'd have a private instance, but a public instance of this.
Raina Bloom:I mean, they mentioned further down in the article that they used Azure, which I went and dug around a little bit, and that doesn't clarify it any further at all, like what they're actually doing. It also, like, "according to people with knowledge of the DOGE team's actions," does that mean that this is like hand waving because we don't need that level of detail? Or does that mean that people actually involved in this don't know how to describe accurately what they're doing? Um and couldn't say which tool, or didn't understand, or didn't think it matters, like, that's a different kind of troubling, for sure.
Emily M. Bender:Yeah, absolutely. So, on the point of Azure, I mean, the one thing this clarifies is that Microsoft is completely complicit here, right? No one at Microsoft said, no, we're not going to help with this. And they could have, right? Um, I mean, unlikely that they would have, but they could have. Um, and yeah. So in terms of do they know what they're doing? So this is a link that I pulled up, um, from Reddit and the headline is, "This is a DOGE intern who is currently pawing around in the US Treasury computers and database." So a different, you know, target of the DOGE wrecking ball. And then, "Sorry, this post was deleted by the person who originally posted it," but based on the chatter that I was seeing and the, and the replies, this was a DOGE intern who was asking which LLM do you use to translate from a spreadsheet to JSON or vice versa or something. And it was like, the start of that is the problem, right? Structured data to structured data. LLMs have no business in there at all and super duper easy to do with existing libraries in well established programming languages.
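A minimal sketch of the kind of conversion in question, using only Python's standard library. The file names here are hypothetical; the point is simply that no LLM is needed anywhere in the pipeline:

```python
# Spreadsheet (CSV) to JSON with the standard library alone.
import csv
import json

with open("records.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))  # one dict per row, keyed by the header row

with open("records.json", "w", encoding="utf-8") as f:
    json.dump(rows, f, indent=2)  # deterministic and lossless, no "hallucination"
```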
Raina Bloom:I, that was the note I made on this, on this article, which is where I need to own my own ignorance. Like there must be a more appropriately scoped AI tool out there that could actually help analyze this data. Like that must be a thing that exists or some other resource or tool that could support it instead of just like, let's chuck it in an LLM and see what happens. Yeah.
Emily M. Bender:It depends on what they're trying to do, and I think that the appropriately scoped tools wouldn't be AI tools. You might be doing some kind of a statistical analysis. You might be looking for, um, you know, are there discrepancies or, you know, looking for the largest payments or some kind of discrepancy? Like there's, there's a bunch of data there and you could do data analysis and you might even apply statistics to it. Um, but you have to start with what's your question and to what extent does the data actually support answering the question?
Alex Hanna:There's been an interesting conversation too insofar as this, as a lot of the people are talking about, well, now you have these DOGE people mucking, mucking about. These are probably, you know, as we've seen, like 19 year old, 22 year old, um, people that, you know, were working at whatever, SpaceX or Tesla or whatever. They're probably programming in Python, and so much of the code base that our federal infrastructure runs on is probably COBOL. It's, it's these older programming languages, and you know, guess what? Probably translating from COBOL to Python is not going to go well. And then translating it back, trying to assume that you have kind of a knowledge. And I mean, I think it also shows that there's a pretty, there is a under, what's the word I'm looking for? There is an underappreciation for institutional knowledge in the kind of like whole DOGE project. I mean, it's not just that you can, I mean, and this is the sort of stuff that a lot of people online have been talking about, uh, for instance, like, um, the kind of like "fork in the road" message or the, uh, OPM message that had been sent out, like, tell us the last five things you've done, of what organizations do and what institutional knowledge should respect and needs to respect. Um, there's a kind of view of viewing everything as, well, it's just code and it's just, it's just, it's just a programming language, as if programming languages don't have their myriad idiosyncrasies and, and kind of idioms that are not quite translatable or are contingent on hardware or contingent on organizational, uh, structures and management. And it's just, I mean, it's, it's, it's hubris on the part of Musk and DOGE, um, and yeah, hubris and ignorance and toxic masculinity all wrapped up in one.
Raina Bloom:Yeah, it's, it's, and it's also this, as if there's something secret happening that they need to pry into when a lot of government information is actually quite transparent. It's just out there, like, it's just available. And if it's kept secret, it's because we don't want to massively violate anybody's privacy publicly, right? Like, that's, that's what's kept secret. So yeah, the idea of like, oh, it's just code and, oh, no one's paying attention, but I'm going to do it. I'm going to be the one who notices the grand theft, like that's not, that's not what's actually happening.
Emily M. Bender:So part of the connection that I see between the artifacts that we've put together here is, um, sort of notions about information and management information and the production of information and the situation, sort of like understanding the situatedness of information and here, you know, with DOGE going in and, and basically treating all of this as sort of feed for their AI models with no respect for sort of what the data might mean to somebody in this privacy issue, um, but also no understanding of the context that it sits in. So there was this whole thing about how, uh, Musk claimed that um, he found a whole bunch of waste because Social Security had a bunch of people on record who were 150 years old, right? And the story there was, first of all, the way the database is put together for Social Security, they don't remove records. So when people are off Social Security, they're still in the database. And secondly, that 150 had to do with the default date that goes in. Um, in, I think it was, I forget if it was SQL or COBOL, but somewhere in these systems there's a default that just happens to be 150 years ago. And so that's what was filled in when there was no information for that person's precise birth date. Right. So speaking about sort of like contextualizing information, this, this is not a person who's 150 years old. This is a database that has a certain structure to it.
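For illustration only, and not the actual Social Security schema or its real default value: a sketch of how a sentinel date standing in for "birth date unknown" surfaces as phantom 150-year-olds in a naive age query.

```python
# Hypothetical sentinel-default sketch; the real system's default is not public.
from datetime import date

SENTINEL_BIRTHDATE = date(1875, 1, 1)  # placeholder meaning "birth date unknown"

def recorded_age(birthdate):
    if birthdate is None:
        birthdate = SENTINEL_BIRTHDATE  # missing data filled with the default
    return date.today().year - birthdate.year

print(recorded_age(None))  # roughly 150: a database artifact, not a person
```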
Raina Bloom:Right. That made me think of, as I was reading this article, I made, I made a note later on, um, where is it? It's the paragraph. Oh yes, "Like other tech leaders, Musk has frequently championed AI as a tool capable of rapidly making sense of data and situations that can confuse humans." That right there, it made me think of, there's this report from UNESCO about AI in higher education, and on the sixth page of the report, I refer back to this all of the time, there is a flow chart. Um, and the flowchart basically says, it's the question of whether or not you can use an AI tool to accomplish a task, and the whole thing hinges on, do I have enough expertise to check if the output is wrong, and if I don't, then I can't use an AI tool. And what we're being told here, and, and like what you were just describing, Emily, is that they don't actually have enough expertise to understand the input, so they can't check the output. Um, so they, they won't know if it's wrong, therefore they're using the wrong tool to solve the information problem that they have in front of them, because they would need to develop that disciplinary expertise about how government information works before they can actually try to fast track the process of like interpreting it and processing it. But yeah, I use that UNESCO chart all the time. Super helpful.
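As a rough condensation of the hinge Raina describes (the UNESCO flowchart itself has more branches, and the names here are just for illustration):

```python
def may_use_genai_for_task(output_errors_matter, can_verify_output_yourself):
    """Condensed version of the central question in the page-6 flowchart."""
    if not output_errors_matter:
        return True  # low stakes: it doesn't matter if the output is wrong
    # High stakes: proceed only if you have the expertise to check the output.
    return can_verify_output_yourself
```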
Emily M. Bender:That's excellent. And we should probably link to that in the show notes. Um, unfortunately for the people of, residing in the US and around the world, these folks actually don't care about, like, they don't, they don't really have an information need in the sense of there's information that they want and they're trying to understand. They have an information need in the sense that they need a cover story to tell the public what they're doing so that they can say, yes, see, the system said that this is waste, and so therefore, like, they, they need, they need cover, not information.
Raina Bloom:Right. This is making me think of when I'm, um, my coworker, coworkers, and I call this, uh, the unicorn problem, which is when you have a student who is, has already written the paper, and then they need the source that confirms their priors. Like the joke we make is they approach and they ask the question, 'I've written this paper that says that unicorns exist. Can you help me find five sources that prove it?' And that's effectively what they are, what they're trying to do here.
Emily M. Bender:Yeah. Yeah. All right. I'm gonna take us to one more artifact here. Um, as we're talking about our information ecosystem getting pulverized, getting disrespected, degrading, you would hope that media outlets like the Guardian would stand strong. Um, but this is 2025 and we have to be careful where we place our hopes. Um, so this is a press release from Guardian Media Group. Headline--oh, February 14th, 2025. Happy Valentine's Day, everyone who loves information. Uh, "Guardian Media Group announces strategic partnership with OpenAI." And this is short. Uh, "Guardian Media Group today announced a strategic partnership with OpenAI, a leader in artificial intelligence research and deployment, that will bring the Guardian's high quality journalism to ChatGPT's global users." Note, I'm reading this on a website. The Guardian already has global reach. Their things aren't paywalled. Right? Sorry. "Under the partnership, Guardian reporting and archive journalism will be available as a news source within ChatGPT alongside the publication of attributed short summaries and article extracts. In addition, The Guardian will also roll out ChatGPT Enterprise to develop new products, features, and tools. This announcement comes a year after The Guardian published its approach to AI, focused on ensuring that any use of genAI is under human oversight and for the benefit of its readers, its business, and its wider mission. This considered approach to AI continues as The Guardian Media Group explores agreements with both existing and emerging businesses to ensure fair compensation and attribution for its journalism."
Alex Hanna:Yeah.
Emily M. Bender:It's like, so there's one thing I want to say that I'll turn it over to both of you, which is, um, their principle is 'any use of genAI is under human oversight.' And how does that jibe with, we're going to let ChatGPT make paper mache out of this for anybody who pokes at ChatGPT?
Raina Bloom:Yeah, similarly, "The, um, Guardian reporting and archive journalism will be available as a news source within ChatGPT." I don't think they're picturing the 'inside' quote unquote, of ChatGPT accurately. Like, there isn't like a, a Guardian section that, uh, a user is going to go to and they're going to get their information from the Guardian. Um, I, I had a question not long ago where somebody was trying to use ChatGPT to generate citations, and they definitely had a mental model of, oh, I can go to the Chicago Manual of Style inside of ChatGPT, and this seems to me to be very similar thinking, and that's, that's not how it works at all, Guardian, that's not what's going to happen to your words and the work of your journalists, even a little bit in this context.
Emily M. Bender:But meanwhile, the Guardian has said they're in partnership. So if someone walks up to ChatGPT and says, tell me about the Guardian's reporting on something and they get some synthetic text out. Well, the Guardian has said it's okay.
Raina Bloom:Right.
Alex Hanna:Yeah. I mean, it's just, you're seeing this kind of, the very pessimistic view is the correct one. I think what's actually happening here is, I mean, they're making a, they're making a media deal. They need--the data is nailed down, and they're prying the nails open and saying, okay, go for it. And it's, and then they have some hand wavy shit in front of it because, as the CF, Chief Financial and Operating Officer, is saying, "'This new partnership with OpenAI reflects the intellectual property rights and value associated with our award winning journalism.'" No, it doesn't.
Raina Bloom:No, I would love to hear what a journalist who works for The Guardian has to say about this. And I'm sure I could go find one, but, you know. I noticed they didn't have a journalist in here saying how excited they were about this partnership and how it aligns with their values as a journalist.
Emily M. Bender:Yeah, indeed. A couple things from the chat and our producer.
So, um, uh, N Dr. W. Tyler:"I love how every group with a unicorn analogy is making fun of the unicorn believers, except venture capital, who use it to mean brilliant thing that definitely exists."(laughter)
Alex Hanna:I know that's, that's, that's such a cool, uh, uh, turn of phrase. I mean, yeah, unicorns, the interests of, you know, um, children and venture capitalists and also cryptozoologists, uh, and unfortunately the venture capitalists, uh, make a mockery of, of all our unicorns out there. I mean, also, also bisexuals. Sorry.
Emily M. Bender:Yes. Um, the, the cryptozoologists, and by that you don't mean, uh, people who make meme coins based on animals.
Alex Hanna:No, not that kind of crypto. That was terrible. I wish, I wish you could, I wish you could unsay that.
Emily M. Bender:I'm sorry. I'm very, very sorry. And Hipster Electron is celebrating the lack of bisexual erasure here."Bisexuals mentioned."
Alex Hanna:Bisexuals are welcome, on this podcast.
Raina Bloom:Oh, thank goodness.
Emily M. Bender:Alright, and then back to Raina's point about, um, how there isn't like a Guardian section inside ChatGPT that you can access so that your papier mâché is made only out of Guardian data. Um, Christie, our producer, says it feels like a good analogy to the Borg. You can't go to the Picard section of the Borg.
Raina Bloom:Got to work in that Star Trek every time.
Alex Hanna:Every time. Well, technically it would be the Locutus section, not the Picard section.
Emily M. Bender:I figured that if I brought this up, Alex would refine the point. And so I'm just, I'm just the conduit here. So anyway, just thinking about our information ecosystem and the important role that journalists play and the practices of journalism, which is absolutely not just extruding text in the form of something that's got a headline and a subhead and a lead, but actually a lot of work, um, it makes me very sad, especially in this moment to see the Guardian bend the knee like this, and capitulate. Um, yeah. Uh, one more unicorn thing.
WiseWomanForReal:"Turns out that unicorns are also perfect triangles with specific properties in mathematics." Okay, so we have, we have a lot of good unicorn analyses.
Alex Hanna:Nice. Yeah. Well, please do an open call for the Journal of Unicorn Studies.
Emily M. Bender:Um, alright, I gotta, I gotta work on the, um, okay, work at the, (laughter). Okay, so Alex.
Alex Hanna:Yes.
Emily M. Bender:Musical with your guitar if you like. You can be singing the blues here.
Alex Hanna:I brought my guitar today.
Emily M. Bender:Yes. You are a librarian, reference librarian. Someone's come to you with an information need about unicorns, but all of your databases have been replaced with ChatGPT. So you're singing the blues.
Alex Hanna:Okay. I'm not a good enough guitar player to actually know how to play guitar, like a blues chord on it. So I'll just do kind of a country kind of, uh, Sitting at my reference desk, someone's made a request. Looking up what I can. Finding what I can, and oh no, it's ChatGPT on my desk. Don't answer this request. Gonna tell me I gotta misgender someone to stop a nuclear war. That's all I got.
Emily M. Bender:That was beautiful. I love it. I love it. I love it. I love it. Um, and that brings us into Fresh AI Hell, starting with something from, um, the Higher Education Association, maybe, of Ireland, um, with the, uh, header "10 Considerations for Generative Artificial Intelligence Adoption in Irish Higher Education," and a bunch of these are really good. So AI literacy, um, kind of, right, "Equitable access to generative AI is essential to ensuring that both staff and students possess the skills and knowledge needed to use AI responsibly, effectively, and in alignment with disciplinary values." So, I guess, making sure that everybody knows is good. I'm not sure if one actually has to have access to it. Um, when is it allowable? How does it relate to academic integrity? Um, "Critical AI" is a heading here. So, "Generative AI can unintentionally perpetuate biases." I'm always wondering why we're saying--whose intentions, and why are we so careful to be respectful of them? But anyway, there's, you know, a bunch of more or less good things. And then they get down to, um, 10. "Generative AI holds considerable potential for enhancing teaching and learning in higher education." It's like, you went through all of these downsides and then still came out with, 'but yay!' Can't.
Raina Bloom:I also have questions about separating off AI literacy as a unique and special type of literacy that needs to be named independent of other, you know, of, of literacy read broadly or like the broader context of information literacy. Like I've been involved in some of these conversations and it, I can't, I can't quite find it cause it, it, it doesn't work similarly to other tools, but you need the same skills that you're bringing to bear on all other sources of information for lack of a better word. That's not exactly what I mean, but yeah, that's, and also the, the potential thing at the end, like potential, say more. Potential. What kind of potential? Yeah.
Emily M. Bender:Yeah, we always have to pay homage to the potential of AI. Tired of it. Okay. Um, Alex, you want to do this one?
Alex Hanna:Totally. Yeah. So this is the Washington Post. The journalist is Tatum Hunter. And the title, "AI, quote, 'inspo' is everywhere. It's driving your hairstylist crazy. From bridal shops to med spas to hardware stores, AI generated photos are warping our sense of reality and hurting small businesses along the way." And the, um, the first few graphs of this are just infuriating. So, "Leah Langley McLean, a wedding dress designer in Nashville, recently had a customer come in with a unique request. She presented a photo of a floor-length white gown with an asymmetrical neckline, no sleeves, and no back. The dress defied the laws of physics. McLean told the bride-to-be there was nothing in the structure to keep the bodice from falling off. The image had been generated by artificial intelligence. McLean explained the design would need some adjustments to exist in the real world. The customer was adamant, however, and decided to go elsewhere, costing McLean the sale, around $2,000." So, yeah, I mean, and then this, this article goes through the stuff like hair, like, um, home remodeling, like all kinds of stuff, you know, things that are not possible, I mean--
Emily M. Bender:This is so hideous, this pink bathroom.(laughter)
Alex Hanna:Yeah, there's a pink bathroom that looks like it's from like a kitschy, like love hotel and--
Raina Bloom:Don't miss the light up heart shaped mirror when you describe the pink bathroom.
Alex Hanna:There was, there's two of them. Yeah, there's one, there's like 18 sinks for some reason. And then the one next to it is this like large greenhouse glass house, like wedding. It looks a little dystopian.
Raina Bloom:It looks a little evil. Like the plants are all poisonous for sure.
Emily M. Bender:And the tablecloth is more like a dust drape that you would put in an abandoned house.
Alex Hanna:Yeah, and the chandelier somehow is like in front of the entrance and also in the entrance. So yeah, just horrifying stuff here.
Emily M. Bender:All right. We could stay here for a long time, but, but basically the, the, I think the main point here is, is the harm to small businesses as people come in with these expectations that are completely unfounded. Okay. Futurism. The, the sticker here is "Law Schooled." February 21st, 2025 by Frank Landymore. And the headline is, "Large law firm sends panicked email as it realizes attorneys have been using AI to prepare court documents." Subhead in quotes, "'The integrity of your legal work and reputation depend on it.'" And I'm wondering like, are the lawyers not paying attention? Like, did not, was it not high profile enough when all those lawyers got, you know, fined and, you know, reprimanded for doing this in the early days of ChatGPT? Or do they just think that their AI is special because it's got a different brand on it?
Alex Hanna:No clue. It's really, it's, and I mean, if you go down, it says, uh, "Last week, a federal judge in Wyoming admonished two Morgan and Morgan lawyers for citing at least nine instances of fake court, uh, case law in court filings submitted in January. Threatened with sanctions, the various lawyers blamed an internal AI tool for the mishap and pleaded with the judge for mercy."
Emily M. Bender:It's like, we've been here before. Like, how, how was this not top of mind for all lawyers after it happened the first three times?
Raina Bloom:I mean, I think maybe one of the issues, because this is like fairly common in, in my work, like people bringing us hallucinated citations and asking to find the real source. And, and I think there's something in the broader conversation that people are, are hearing that this is fixed, like this was a problem, but it's better now, the hallucination machine has stopped hallucinating. And so I wonder if that's what's happening or if people just aren't tapped in to the discussion at all, but like, yes, this, this scans for, for me.
Alex Hanna:I don't think people are really tapped in. I mean you think, I mean we're doing our part, but you know, there's a lot more hype than ridicule, right?
Emily M. Bender:Exactly, but I guess I was expecting that the lawyers getting in trouble for this like that should have flown around really quickly in sort of legal spheres, but I guess not. Okay on to body horror. You want to do this one Alex?
Alex Hanna:This is, this is, I haven't seen this one, and it's really great, um, because I, like, kind of love body horror, but, like, so. Robot, so it's from Ars Technica, the journalist is Benj Edwards, from February 21st of this year. "Robot with a thousand muscle twitches, like, human, twitches, a thousand muscle--" Sorry, muscle twitches is, twitches is the verb. "Robot with a thousand muscles twitches like human while dangling from the ceiling." And this is like a robotics company. It's not like an, it's not like a piece of performance art, and, if you actually like click the video, uh, it's like got this incredibly horrific music where, where this thing is like dangling as if dancing. And I mean, this is, I feel like, yeah, I mean, I feel like it, it comes, it's, it's, it's actually incredibly horrifying. And it's very funny that the company is like, it's really leaning into the horror. Uh, anyways, less, it's less AI, but robotics horror, nonetheless.
Emily M. Bender:Yeah. Ugh. Okay. Um, on to Google horror. Uh, this is a Reuters piece from February 19th by Muvija M. Uh, headline, "Google develops AI 'co-scientist' to aid researchers." And like, no, it's not a co-scientist. You might have a search engine that's doing something useful. You might have some kind of an automatic tool, but it's not a co-scientist. Um, so London, February 19th, "Google has developed an AI tool to act as a virtual collaborator for biomedical scientists, the US blue chip said on Wednesday. The new tool, tested by scientists at Stanford University in the US and Imperial College London, uses advanced reasoning to help scientists synthesize vast amounts of literature and generate novel hypotheses, the company said. AI is being increasingly deployed in the workplace from answering calls to carrying out legal research--"
Raina Bloom:Oh, no.
Emily M. Bender:"--Following the success of ChatGPT and similar models over the past year." So, like this reporting, um, clearly they did not talk to any reference librarians in situating this. Right, Raina?
Raina Bloom:Right, yeah, that was my, my first thought, was like, oh, you mean you want support from somebody who can help you organize your thinking? Yeah, those people exist. Like we can totally assist you. It also made me think of that ad that was on, the AI ad that was on during the Super Bowl, the person preparing for the job interview, and he's having the AI tool, like, practice the job interview with him, and a lot of-- Oh yeah, it was. The reason I bring it up is to say it's really bleak, like the idea of like, I don't have any collaborators. I don't have any colleagues. I don't have any sources of support who can assist me when I'm doing this job interview or doing this research. It's really dark. It's not actually hopeful at all.
Emily M. Bender:Yeah. Yeah. And this, just so, we were talking before about the lawyers being so receptive to this, and we had something in the chat, um, from Hipster Electron. "I have been incredibly concerned about how receptive lawyers have been to text generators, including very perceptive lawyers I know very well." And you mentioned, I forget if it was Alex or Raina, but one of you mentioned that, like, there's a bunch of hype, right? And not enough ridiculing. That was Alex. Um, and here we have this article saying, uh, that this is, you know, AI is carrying out legal research.
Alex Hanna:The next par-- the next paragraph of this article is really setting me off. So this says, "Google's AI unit, DeepMind, has made science a priority, and DeepMind boss Demis Hassabis was a co-recipient of a Nobel prize in chemistry last year for technology developed in the AI unit." And it pisses me off that the committee, the Nobel committee, has now made it such that like this kind of, this kind of awarding of this Nobel now grants these people license to say that they're Nobel laureates and they know shit about science, and insofar as that they, these, these organizations are doing anything that is, that is, you know, in the ambit of what we call quality scientific activity. And so, you know, it's, it's really this kind of laundering, um, via this, this, this award, and it's just.
Emily M. Bender:All right, one last thing. This is from Vox Future Perfect, uh, Sigal Samuel, February 21st, 2025. The headline is, "Is AI really thinking and reasoning or just pretending to?" And then subhead, "The best answer, that AI has, quote, 'jagged intelligence,' lies in between hype and skepticism."
Alex Hanna:Jagged intelligence. Is that like, is that, is that like an AI bot that is trained on Alanis Morissette songs?
Raina Bloom:I was going to say, it's the name of my punk band, but.
Alex Hanna:I know. This is jagged intelligence. This is our first song. Rat balls.
Raina Bloom:1, 2, 3, 4, 1, 2, 3, 4.
Alex Hanna:1, 2, 3, 4. (normal voice) Uh, yeah. So the headline's pretty terrible now, and its sort of claims about intelligence.
Emily M. Bender:Yeah, I just want to go back to this.
Alex Hanna:Yeah.
Emily M. Bender:Is it, is it really thinking and reasoning? No. Or just pretending to? Also no.
Alex Hanna:Yeah. I don't, don't like both. Yeah.
Emily M. Bender:Yeah. Like pretending suggests intentionality, suggests intent to deceive. It's like, it's none of those things.
Raina Bloom:Right.
Emily M. Bender:Yeah. Oh. Ah. All right. That brings us to time. Um, thank you so much, and I have to get to the right window for myself, but just, this has been a lot of fun. Um, and, uh, so again, thank you, Raina, for joining us. Raina Bloom is Reference Services Coordinator for University of Wisconsin-Madison Libraries.
Raina Bloom:Thank you very much. It was a lot of fun. I appreciate it.
Alex Hanna:It was a hundred percent our pleasure. Thank you so much. Our theme song is by Toby Menon, graphic design by Naomi Pleasure-Park, production by Christie Taylor, and thanks as always to the Distributed AI Research Institute. If you like this show, you can support us in so many ways. Rate us and review us on Apple Podcasts and Spotify. Pre-order The AI Con at TheCon.AI or wherever you get fine books. Subscribe to the Mystery AI Hype Theater 3000 newsletter on Buttondown. Or donate to DAIR at DAIR-Institute.org, that's D A I R hyphen institute dot O R G.
Emily M. Bender:Find all our past episodes on PeerTube and wherever you get your podcasts. You can watch and comment on the show while it's happening live on our Twitch stream. That's Twitch.TV/DAIR_Institute. Again, that's D A I R underscore institute. I'm Emily M. Bender.
Alex Hanna:And I'm Alex Hanna. Stay out of AI hell, y'all.