Mystery AI Hype Theater 3000

Episode 14: Henry Kissinger, Machines of War, and the Age of Military AI Hype (feat. Lucy Suchman), July 21 2023

September 13, 2023 Episode 14

Emily and Alex are joined by technology scholar Dr. Lucy Suchman to scrutinize a new book from Henry Kissinger and coauthors Eric Schmidt and Daniel Huttenlocher that declares a new 'Age of AI,' with abundant hype about the capacity of large language models for warmaking. Plus close scrutiny of Palantir's debut of an artificial intelligence platform for combat, and why the company is promising more than the mathy-maths can provide.

Dr. Lucy Suchman is a professor emerita of sociology at Lancaster University in the UK. She works at the intersections of anthropology and the field of feminist science and technology studies, focused on cultural imaginaries and material practices of technology design. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how and with what consequences for social justice and the possibility for a less violent world.

This episode was recorded on July 21, 2023. Watch the video on PeerTube.


References:

Wall Street Journal: Op-ed derived from 'The Age of AI' (Kissinger, Schmidt & Huttenlocher)

American Prospect: Meredith Whittaker & Lucy Suchman’s review of Kissinger et al’s book

VICE: Palantir Demos AI To Fight Wars But Says It Will Be Totally Ethical About It Don't Worry About It 


Fresh AI Hell:

American Psychological Association: how to cite ChatGPT
https://apastyle.apa.org/blog/how-to-cite-chatgpt

Spam reviews & children’s books:
https://twitter.com/millbot/status/1671008061173952512?s=20

An analysis we like, comparing AI to the fossil fuel industry:
https://hachyderm.io/@dalias/110528154854288688

AI Heaven from Dolly Parton:
https://consequence.net/2023/07/dolly-parton-ai-hologram-comments/


You can check out future livestreams at https://twitch.tv/DAIR_Institute.


Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Transcript


ALEX HANNA: Welcome everyone to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype. We find the worst of it and pop it with the sharpest needles we can find. 

EMILY M. BENDER: Along the way we learn to always read the footnotes. And each time we think we've reached peak AI hype, the summit of Bullshit Mountain, we discover there's worse  to come. I'm Emily M. Bender, a professor of linguistics at the University of Washington.  

ALEX HANNA: And I'm Alex Hanna, director of research for the Distributed AI Research Institute. This is episode 14, which we're recording today on July 21st of 2023. And we're here to talk about AI and war, and all the quote 'fun' hype around it. From a new book by Henry Kissinger--yeah that Henry Kissinger--to Palantir's marketing of large language models for war. 

EMILY M. BENDER: And I'm so excited to say that today we have with us Dr. Lucy Suchman, who's a professor emerita at Lancaster University in the UK. Her research is at the intersections of anthropology and the field of feminist science and technology studies, with a focus on cultural imaginaries and material practices of technology design.  

Her current research extends her long-standing critical engagement with the fields of artificial  intelligence and human computer interaction to the domain of contemporary militarism. She's  concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility for a less violent world.

ALEX HANNA: Dr. Suchman co-wrote with Meredith Whittaker a review of Kissinger's bellicist book about AI, which is called "The Age of AI," co-authored with Eric Schmidt, the former chairman of Google's board, and Daniel Huttenlocher. She's more recently been focusing on other more war--warmongering promises of AI systems, so we're incredibly excited to bring her onto the stream today. Welcome Lucy. 

LUCY SUCHMAN: Well, thank you so much. It's great to be here with you. 

EMILY M. BENDER: This is so great. As usual we have guests on the show who like raise the bar so high, I'm really excited but bear with us because we're irreverent and silly, also, even with topics like this. [ Laughter ] 

ALEX HANNA: Completely. Um all right and now I've got to get to the right window. There it is. 

EMILY M. BENDER: Okay so we're gonna start. Um we didn't--well, you had to read Kissinger's book. Alex and I were spared that but we did read this short piece based on the book which appeared in the Wall Street Journal and also is on Henry Kissinger's own webpage. Which, Alex you're the one who told me he had a web page, how did you find that out?   

ALEX HANNA: I well I was trying to find an unpaywalled version of this piece and I think I just searched the name of it and this came up. So unfortunately having to go to Henry Kissinger's web--website, something I never thought I'd have to do. 

EMILY M. BENDER: So-- 

LUCY SUCHMAN: And now you've been trapped, I'm sure so you know. 

ALEX HANNA: There's all kinds of cookies, I'm going to be added to the mailing list and never removed. Yes. 

EMILY M. BENDER: Lovely. Okay so let's let's dig into this a little bit it is just AI hype through and through. Um it's long enough that we could spend the whole hour, we're not going to because we've got to get to Palantir also um but let's let's do you know a few paragraphs at the top and then maybe um jump through a little bit um so. 

I will start us off with the first paragraph: "A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly, but new technology today reverses that process. Whereas the printing press caused a profusion of human thought, the new technology achieves its distillation and elaboration. In the process it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence."  

So they're framing it for us there, um uh. Lucy do you want to start us off with any reactions to  our first paragraph. 

LUCY SUCHMAN: Yeah so yes and I think this piece as you said is definitely a companion piece um to the book uh which which just to to give the book's title um by the same three authors. So the book is titled um, "The Age of AI and Our Human Future."  

Uh so I'll come back to that in a moment but let's just take this this piece uh and and I want to sort of draw a through thread here which is which is about the uh the figure of the the human.  

Um the the very kind of universalizing, "the age of AI and our human future," which of course is you know in in our understanding of how language is performative um the declaration of an age  um is you know an assertion that is part of the process of constituting something as, you know,  epical right. And then, "and our human future." Okay, so this is these are are people whose own locations are erased, who are now speaking on behalf of of everyone uh on the planet. And you know this is a very familiar move um of certain forms of white male Western uh intellectual you know enactments.  

Um and so then we have um you know, "ChatGPT Heralds an Intellectual Revolution," and in  that first paragraph that Emily just read uh I mean a couple of things. First of all um I think it's interesting to ask sort of what's happening to knowledge, because there's this somewhat strange kind of difference being made here between knowledge and understanding. And of course a lot of people have been pointing out the difference between um data and knowledge, or data, information, knowledge, right. That there that there are requirements for data--data themselves um don't constitute knowledge. Uh they're they're constructed out of knowledge practices and and then they're translated in various ways, so but now and and we you know and there have been there have been critiques I think based on the fact that um these systems don't really understand. 

And so I wonder here whether part of what's happening is a kind of absorption of that critique and a difference that's being made between knowledge and understanding. So that's kind of one move that I think is interesting. But the other thing that I think is really important is this reference to you know abstract modern human thought, which is again clearly valorized as the best kind of thought um and really here I think what's being cited is is rationalism.  

Um not reasonableness, although they make that difference further on but, rationalism uh rationalist knowledge practices and here and and further on in this piece, those practices are kind of part of what I would say is really a just so story about the enlightenment, about the scientific revolution um other familiar and you know really widely and deeply critiqued kind of periodizations of of history. 

And I'll just read one, further on they write uh they say quote, "Machine learning systems have already exceeded any one human's knowledge." Okay you know we could even stop there um but they go on. "In limited cases they have exceeded humanity's knowledge, transcending the bounds of what we have considered knowable." Uh you know, huh? I mean yeah you know I'm just really struck by the both the parochialism--parochialism dressed up as universalism, and the erasure of all of the other knowledge systems on the planet. 

Um so so those are you know those are some things that just really jump out at me from the very opening of this of this piece. 

EMILY M. BENDER: Yeah, go ahead Alex. 

ALEX HANNA: Well there's so much--yeah and and so much thank you for touching on so many of these things Lucy and I just want to deepen something you said or try to deepen--in this sort of um the kind of continual reference in this piece is referring back to Enlightenment, referring to this kind of stylized version of the scientific method, that the Enlightenment seemed to sharpen. 

You know they said um let's say uh in the uh a paragraph I think a little yeah we're already on it uh but, there're you know there're categorical differences: "Enlightenment knowledge was achieved progressively step by step, with each step testable and teachable. AI-enabled systems start at the other end. They can store and distill a huge amount of existing information, in ChatGPT's case much of the textual material on the internet, a large number of books, billions of items."  

And so first off, Enlightenment knowledge you know you know like I don't I don't know too much about the history of the printing press but I know enough to know that's that's that's probably a pretty revisionist history of how Enlightenment knowledge was achieved, that everything was tested you know in a measured sort of um you know uh kind of model of knowledge, everything was strictly you know Popperian in its testing, everything was falsifiable--like no that's not really how knowledge really progressed. 

And the kind of ideal of kind of Enlightenment as being this kind of point as you say rightly so there's been trenchant and critiques of the Enlightenment from from Black studies, from feminist studies, from anybody that wasn't considered human in the 15- 16-hundreds, uh you know for for centuries after. And so it's so fascinating but not surprising that this is sort of the referent here um, and it plays very well into you know the agenda of uh Kissinger, Schmidt and Huttenlocher that they're really using this as kind of the foil against you know you know against which Western civilization needs to be defended, which then sets the stage for their own kind of uh you know competition with China and Russian aggression and anything that's not considered uh in the traditional West, so it's very much very much within that frame. 

LUCY SUCHMAN: Exactly and of course the the title is a citation of that, right, the Age of Reason. And and it's also consistent in that the Age of Reason is a is a a moment where where  thought uh and reason are translated as as logic um, and that's of course absolutely fundamental to what's going on now with AI. 

EMILY M. BENDER: I've brought us to this thing about between the societies but I want to react to something you said earlier Lucy about which I really appreciated about how their idea of "this is exceeding knowledge of all humanity" is just erasing all of these other ways of knowing um and then on top of that it's taking the narrow notion of knowledge that they have and pretending  that the written word itself is that knowledge, that the form of the knowledge is the knowledge.  

Um so even taking the person out of the knowing um within the sort of Western conception of knowledge which is of a piece I think with rationalism, that somehow the knowledge is is external and just I'd say Kissinger's been around a good long time but he doesn't have personal experience of the Enlightenment, um he doesn't go back that far and-- 

ALEX HANNA: Maybe he's not, he's not 500 years, he's almost there-- 

EMILY M. BENDER: But none of these three are historians of science, right? 

ALEX HANNA: Right. 

LUCY SUCHMAN: Right. 

EMILY M. BENDER: So they and what are they what expertise are they speaking from when they're referencing the Enlightenment?  

LUCY SUCHMAN: Yeah I think that's so so important, I mean why you know how is this book published, how was this piece in the Wall Street Journal published? Um it's it's not an evidence-based analysis, um or even an argument. It's it's an opinion piece really full of a very very sort of sweeping and un unfound you know unbe--ungrounded assertions and so really it's about the social capital of the authors, right? 

ALEX HANNA: Yeah. 

LUCY SUCHMAN: That's really what's going on here and I think that's why um Henry the K is the lead author. I mean it would be really interesting to know how their writing process um is organized, um and but but having uh Kissinger as the lead author author and then Schmidt and then Huttenlocher, you know there's some really interesting kind of ordering of social capital  going on there um that I think is yeah I mean--and this is part of the the unaccountability of  of AI hype, right that your project here is is is struggling with and trying to to sort of interrupt.  

So. 

EMILY M. BENDER: Yeah and we interrupt it by just showing how ridiculous it is, and I've got some real howlers in here that I want to share. But before we get there um between the two of you you were talking about how there's like the you know raising the specter of a competitive or you know competition from abroad right which now in this space is is China and this paragraph had some of it right, so um somewhere in the middle of the article: 

"Using the new form of intelligence will entail some degree of acceptance of its effects on our self-perception, perception of reality and reality itself--" And I've got something on that reality itself which is hilarious to come to.  

Um but continuing: "How to define and determine this will need to be addressed in every conceivable context." So there the AI is totalizing, you know it's everywhere. "Some specialties may prefer to muddle through with the mind of man alone--" 

So we're framing human thought as sluggish compared to what the AI can do, human thought of course is male. "--the mind of man alone--though this will require a degree of abnegation without historical precedent and will be complicated by competitiveness within and between societies." 

And so there's like the specter of China, I think.  

LUCY SUCHMAN: Yeah, yeah. I mean I read that as first of all I sort of wonder you know that that um who that man is that that is the object of this but but they're presumably a sort of uh a Luddite uh humanist or you know some something of that of that genre, so these are the refuseniks who are going to try try to slow things down, try to pull the plug.  

Um so there's that threat on the one hand and part of the reason that such a threat, the implication is, uh is that we are locked in an existential kind of uh arms race um with with uh with other nations um some--you know China is often named explicitly as being the one that is the near peer competitor in AI, um but also there's the there's this uh this trope of democracy and authoritarianism that's kind of lurking in the background, and you know we we um, "we" in the West, we in the United States you know, we are on the side of of democracy and that's part of--and and this this extraordinary way in which um doubling down on investment of in AI has been completely sort of incorporated into uh the reproduction--and here Henry Kissinger is central--the reproduction of a new a new Cold War with you know with near peer competitors. 

Which which the United States needed um terribly in order to escape from the quagmires of its you know counter-insurgency operations, they get back into the good world stage you know of superpower conflict where investment in you know in weapon systems in in--where the whole the whole sort of figure of arms races and their investments you know comes back into the fore. Whereas when you're fighting counter insurgents it's just so much messier right. So so that's another thing that's being sort of reproduced anew here um yeah in really you know a stunning way. 

ALEX HANNA: Yeah and I mean I want to highlight the next paragraph here in which they state: "As the tech--as the technology becomes more widely understood, it will have a profound impact on international relations, unless the technology for knowledge is universally shared um imperialism could focus on acquiring and monopolizing data to attain the latest advances in AI. Models may produce different outcomes depending on the data assembled. Differential evolutions of societies may evolve on the basis of increasingly divergent knowledge bases and hence the perception of challenges." 

So this is there's a lot to unpack here, I mean just in this particular graf. First off this idea of international relations as being the central and one the kind of, "unless technology for for knowledge is universally shared," as the US will of course do as if you know the US is not you know, one of its main exports being uh you know imperialism and and the and the undermining of sovereign states. Imperialism could focus on that and I mean that poses to his threat, uh you  know Russia and the current moment and it's in its in in the Ukraine war, but also China's own  um aspirations. 

And so you're 100 percent right Lucy in sort of the kind of idea here of the reinscription of kind  of a Cold War kind of thinking, of this export of democracy um and that becoming kind of the  main kind of narrative compared to the sort of counterinsurgency metrics that they're sort of  you know they they are that that has been the um focus of U.S foreign policies since since 9/11.  

Um and then there's also this kind of then the other types of things here with the second part of this paragraph: "Models may produce different outcomes depending on the data assembled." And sort of you're getting now to just sort of um idea of this sort of development of societies types of things you know, um you have these closed sort of systems um based in authoritarianism, which are not going to advance as quickly, you know and it just I mean this is the type of um uh hogwash I guess, this has been the sanitized--we've already said bullshit to start the stream so it's whatever, we're not on YouTube we're not--no one's demonetizing us. But it's the same sort of bullshit you'd hear I mean in the kind of--it reminds me a lot of many of the types of ways comparative historical sociologists would be talking about the development of societies in the 50s. And even prior, I mean you know the kind of way of having a kind of a linear sort of development of societies in which you compare typically Russia and China um and you know that are effectively as you know echoes of um U.S. Cold War rhetoric, as it was back then. 

And it's sort of a an understanding of that frame in the current moment. So in this case it's not that the political system is necessarily dictating that but the sort of knowledge socio-technical scientific arrangement is going to dictate you know like if you're not accepting these AI tools, you're you're giving in to the Communists, effectively.

ALEX HANNA: Speaking of linear um development of things I want to point to the way that they've got the usual sort of eugenicist view of intelligence and artificial intelligence in here. And I have to find um yeah so um this is what we just were yeah okay. "Machines will evolve far faster than our genes will, causing domestic dislocation and international divergence." Like so this is this idea that that the machines I guess are evolving intelligence and we can't keep up? 

LUCY SUCHMAN: Yeah. 

EMILY M. BENDER: "We must respond with commensurate alacrity, particularly in philosophy and conceptualism, nationally and globally." And it's just so--but you know once you once you know it's there once you know that the project of building artificial intelligence as something that is quote unquote "superhuman" is underlies all this it's not hard to spot in this kind of a text. 

LUCY SUCHMAN: Absolutely yeah and and here we are back to the the race again, you know, using that term advisedly in this context.  

Um you know everything is is that--so it's re recreating um as you said this this developmental you know Alex you said this developmental narrative of a linear progression across you know all of humanity, and and now we also have this kind of evolutionary um competition going on, which divides humans into differentially evolving at different rates. I mean this is incredibly regressive in terms of how we understand um what it is to be you know the the the the--how humanness is constructed, how historically humanness has been been weaponized in the differential ordering um of persons and and places and--so all of that is being kind of regenerated here, and in the end AI, as incorporated into that, becomes the figure that's out in front, which challenges you know even us as as humans so. 

And this is the sort of technology succeeding humans in terms of its evolutionary progress. So there's just an enormous amount of really problematic um stuff packed into uh this paragraph and seemingly with very little um awareness on the part of the authors, there's sort of no indication that they recognize that that there's some really problematic ways of thinking that they're that they're regenerating here. So it's a bit like a paragraph that might be written by ChatGPT you know in in in in in this kind of pastiche of really problematic. 

ALEX HANNA: I really wonder how much how much ChatGPT, they had some explicitly written by ChatGPT in this piece but then they also didn't. I feel like they they might have actually had some ChatGPT writing. 

LUCY SUCHMAN: One does wonder, yes. 

EMILY M. BENDER: So there's one hilarious thing in the chat that I want to list up--lift up from Abstract Tesseract, who says, "How long do the authors think it takes for societies to evolve? oh shit we forgot to tune our hyper parameters, no golden age for us I guess." [ Laughter ] Which I thought was great. 

So one thing that I think it should be really clear is that these authors do not understand the technology they're talking about and you know they're using it as an excuse to do their usual thing, what they want the world to be, um but they talk about so that if the phrase "stochastic parrots" comes up kind of. They say "stochastic parroting," so: "The phenomenon that is known among AI researchers as quote 'hallucination' or quote 'stochastic parroting--'" which is actually  not the same thing? "--in which an AI strings together phrases that look real to a human reader but have no basis in fact. 

What triggers these errors and how to control them remain to be discovered." So throughout they're sort of acknowledging the fact that um large language models have no basis in reality, but at the same time they're holding them up as these like advanced synthesis of knowledge um sort of representation. So it's like they they pay lip service to the fact that yeah yeah it's it's just putting together, it's just pastiche um but also we have to worship it. 

I mean there's sort of this this back and forth that makes no sense and um I've never seen--so you know "stochastic  parrots," I'm responsible for that phrase I've never seen it turned into a compound verb like this, "stochastic parroting." 

LUCY SUCHMAN: Yeah and I thought that was really interesting because it seemed to be that  in conflating hallucination, which is of course a term we could you know we could-- 

EMILY M. BENDER: --also get into, yeah.  

ALEX HANNA: Oh yeah. We discuss that a lot. 

LUCY SUCHMAN: --hallucination and stochastic parroting, as we now say, um suggests that uh that there that they're the same thing and that they both refer to the production of of errors or of-- 

ALEX HANNA: That's right. 

LUCY SUCHMAN: Whereas whereas my understanding is that that stochastic parrot--the argument about stochastic parrots is that that is--those are the basic processes from which all answers are produced whether, they are then judged to be right or wrong. So they're kind of  suggesting then that there's another process, besides these two that produce errors, there's  another process running that isn't subject um to those problems, although that you know is left  implicit. But I I thought that that conflation of--that conflation was was doing some work um. 

EMILY M. BENDER: Absolutely and maybe trying to discredit um the the point from stochastic parrots. 

LUCY SUCHMAN: Yes or put it sort of limited put it in a more limited sort of box in terms of the critique, yeah. 

ALEX HANNA: I think they are. I would actually want to make one reference just to expand on that, because I mean I think a lot of a lot of ideas here is, you know there's people like Mustafa Suleyman and others that are saying oh we're going to solve the hallucination problem, we're going to have these kinds of checks on this, but then to kind of conflate hallucination as the act of just making up stuff, and stochastic parroting, which is a claim about how patterns are being  synthesized--it is suggesting that emerging from hallucination and stochastic parroting are the  same process, reduces the stochastic parrot problem to one of basically, oh we're actually going to get to a level of emergence you know in a in a level of kind of the development of new knowledge um that is not based on pattern matching and conflates these as problems in a really weird and what I mean I think is unfair to stochastic parrots, and I think is trying to limit it to it basically a problem of of of truthfulness, you know. 

EMILY M. BENDER: So there's one more thing that I want to bring us to pick because so much bullshit here that we could just keep going but there's one place where they I can really show their asses in terms of not understanding what they're talking about, while being sort of sciencey. Um so uh where I have to find this um let's see--yeah here we go. [ Laughter ] 

Sorry this is so ridiculous. "As models turn from human generated text to more inclusive inputs, machines are likely to alter the fabric of reality itself. Quantum theory posits that observation creates reality. Prior to measurement no state is fixed and nothing can be said to exist. If that is true and if machine observations can fix reality as well, and given that AI systems' observations come with superhuman rapidity, the speed of the evolution of defining reality seems likely to accelerate."   

LUCY SUCHMAN: Wow. 

EMILY M. BENDER: I can't even. 

ALEX HANNA: Um yeah those those are those are certainly words, as Abstract Tesseract said earlier in the chat.

LUCY SUCHMAN: Yeah no I think that's amazing, and then just the last one says, "The dependence on machines will determine and thereby alter the fabric of reality, producing a new future that we do not yet understand, and for the exploration and leadership of which we must prepare."  

So you know, so this is another reason why we have to be out in front. And but but I'm really struck you know, I am no--to say that I'm not a physicist is like the understatement uh, but the the idea that quantum theory posits that observation creates reality, I'm--and I wish you know our friend Karen Barad were here, um you know her book, "Meeting the Universe Halfway," her idea of the apparatus is so far from an understanding of quantum theory as as in the way that's represented here. It's it's really it's really a shocking kind of um caricature that erases uh the the relations with materiality um that are involved in the apparatuses as Barad understands it um through quantum theory. 

So so it's just a really bizarre kind of positing of of this um completely discursive reality construction uh that then somehow is going to um determine and alter the fabric of reality and produce a new future yeah it's it's it's pretty stunning.

EMILY M. BENDER: Yeah and to be clear when they're talking about reality here, they're imagining like physical reality and not social reality um. 

ALEX HANNA: Right. 

LUCY SUCHMAN: Right. 

EMILY M. BENDER: [ Laughter ] Yeah.  

So there's a couple chats that I want to bring up, one from SnoopJedi um reacting to this paragraph.  

"Allow us to briefly digress and demonstrate our misunderstanding of another unrelated field." And then A Helwar, "I'm a quantum computing guy and that is nonsense, LOL." So yeah, utter utter nonsense.  

Um and this thing is just full of it and I wanted to maybe move on now, but to have ended with that, so that people really know that, you know, this is only nonsense; it's only a platform which, as you say Lucy, based on their own social capital, they were using to further their own agenda. 

LUCY SUCHMAN: And I'm I'm frankly really puzzled by this because these are not uneducated people um you know and and Eric Schmidt and Dan Huttenlocher um they you know they're they're technologically sophisticated um and so I'm really puzzled about as I I said, sort of going back to the question of where these texts come from, kind of how they're produced um I I find it really puzzling. Um and you know it's so it's so easy to um to to locate agendas based on what we know about the positioning of the authors, and the you know what what's happening in the world but it's also extraordinary that that these uh you know these kinds of unfounded and and unaccountable um you know statements um pronouncements, stories uh get published get get widespread circulation and and and don't get challenged, um or certainly don't get challenged on on a regular basis, or sufficiently to shut them down. 

EMILY M. BENDER: We're working on it. 

LUCY SUCHMAN: Exactly and I appreciate that. 

ALEX HANNA: Truly, yeah. 

EMILY M. BENDER: And speaking of challenging I just want to let people know I think we're probably gonna move on to Palantir but I'm going to put in a plug for your review with Meredith Whittaker of their book. Um so that's what I'm displaying on the screen right now, we'll certainly have a link in the show notes. Um, "The Myth of Artificial Intelligence: 'The Age of AI' in quotes advances a larger political and corporate agenda." So um a great read for anyone who wants to sort of dig into more of the details with you. I recommend that. Don't bother with the book, read this review. 

ALEX HANNA: Yeah and just a just a shout out, just a few things I mean the things that I--a piece of this that I really love is the dissection of the rhetorical strategies here and so y'all write um, and this is an important background for uh Schmidt's own leadership on the National Security Commission on Artificial Intelligence or the N-S uh C-A-I in their advisory role, and this is before ChatGPT mind you that they wrote this uh book um, but the three rhetorical strategies in which uh, and I'll read this because I think it's worth, it's some great stuff. 

"First they position big tech's AI and computing power as critical national infrastructure across research and development environments and military and government operations. Second they propose quote 'solutions' that serve to vastly enrich tech companies helping them to meet their profit and growth projections while also funding AI-focused research programs at top tier universities. This serves to bring big tech and academia closer together, further merging their interests and deterring meaningful dissent by a new wave of researchers critical of Silicon Valley. And then third and most importantly they provide--by providing arguments against curbing the power of big tech companies, the book frames these companies as too important to the American national interests to regulate or to break up. These arguments could be read against the antitrust advocates and tech critics within the Biden Administration who have committed to checking the concentrated power of Silicon Valley." 

And that's definitely a set of discourses that they echo in the ChatGPT pieces, effectively saying this thing is world changing we can have no hope to regulate these, you know this is going to change the fabric of reality itself, um all we can do is effectively fund things like alignment research or have people in these um you know be involved in the government but there's no help in regulating these, the genie is out of the bottle, we already have a foregone future in which generative AI rules everything around us and we're going to try to put our guard--some some minor kind of scientific guide rails around them.  

So. Just shouting that out before we pivot to another--so read the whole thing. 

LUCY SUCHMAN: You can certainly hear Meredith's voice in there. 

ALEX HANNA: Yeah and I think we've talked a little bit on this show before on her article, which I think is in Communications of the ACM, on the steep cost of capture. 

EMILY M. BENDER: Right, yeah. All right so let's switch on over to Palantir and we've got two artifacts here. Um one is this Vice article about it, and then we have some slides Lucy that you put together from a demo, as I understand it? 

LUCY SUCHMAN: Yes. 

EMILY M. BENDER: Um so should we um should we just go to the slides, should we look at--I haven't read the Vice article which is why I'm looking to you two about where to go next. 

ALEX HANNA: We could read we could read the top of the Vice and then I think transition directly into the artifacts which, Lucy you've also uh written uh an excellent amount about.  

Um but at least to foreground it, maybe we can read the Vice article because I think the the title is great. So this is a Vice article by Matthew Gault um in Motherboard, the kind of a sub um sub-publication of Vice. The title is, "Palantir demos AI to fight wars, but says it won't--it will be totally ethical don't worry about it."  

Um so great great title here um and so-- 

EMILY M. BENDER: Subtitle, "The company says its artificial intelligence platform will integrate AI into military decision-making in a legal and ethical way." 

ALEX HANNA: Yeah so sure yeah. Uh and so the first the first graph of this uh says, "Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform or AIP, software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos Palantir demos how a military might use AIP to fight a war. In the video the operator uses ChatGPT--a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communication." That's some slight stumbling there--hard word. But I think you know like this is sort of talking about it and I mean I think we could go into the slides from that, just because the you know it's pretty short on on details but the the slides that you picked out of this, Lucy, I think are really indicative of the whole project.  

LUCY SUCHMAN: Yeah I mean I'm really just starting um to to to write on this um and I found the demo--the demo is embedded in the Vice article so it's really easy to find um and you know it's--and and the slide that you're showing at the moment is from a talk that I put together um for a couple of conferences recently um which I titled "Militarism's Phantasmatic Interfaces" to emphasize the the fantasy um aspect here, but you know I say that and then what I'm really interested in is to try to understand more deeply the relations between the kind of operational infrastructures that are--that exist and that are being developed and elaborated on one hand, and you know what we call the imaginaries or the aspirations um that that uh that inform and and animate um those those projects and those investments on the other and this--you know this Palantir interface, I was just looking at Scale AI um has a platform called Donovan, I don't know how it got that name but it's it's a very it's a very similar kind of um platform to integrate multiple data sources, uh and then create uh present a a chat style interface to various kinds of operators, users um within the military imagined through through various scenarios. 

Now you know so this is a demo, we know about demos, you know it's it's really uh carefully staged and set up um but it is uh it is revealing of um you know--certainly revealing of the aspirational side and then you know the trick is, how do we kind of slow things down and read through that and try to think about where it's referring to, again sort of actually existing infrastructures and where it's imagining things that that um we might argue are not going to exist.  

And you know so just looking at this this slide that I pulled out um for the opening and this is part of this larger project of joint all-domain command and control, um which is which is the latest in a, uh quite a quite an extended genealogy um going back at least to Vietnam--Operation Igloo White in Vietnam. But it's it's the export of of um managerial uh information uh systems onto command and control. And so these are conceptualized as battle management systems um and the the you know this this quote that I've pulled out um refers to uh "single pane of glass situational awareness for interoperability between joint forces and partners." 

So this I would say is the the aspirational phantasmatic interface, you know the whole idea that we're we're looking through this window, there's the world inside the military machine, then there's the world outside and the commanders inside the machine are looking through a single pane of glass to get an unobstructed objective view of everything that's going on outside. So that's the kind of starting premise that we could spend a lot of time kind of you know just taking that apart um. But but that's that's a good place to start I think.  

ALEX HANNA: Yeah and I I want to go to the, you know, before we run out of time, kind of the third and fourth slide. But the third slide--yes so one of the things, so this thing you pull out is this quote from Alex Karp um the head of Palantir, which you know he says, "AIP will allow customers to leverage the power of our existing machine learning technologies alongside... large language models directly in our existing platforms." And the little thing in the corner which really gets me is this quote that says, "Monitor and control LLM activity and reach in real time to help users promote compliance with legal, data sensitivity, and regulatory audit requirements."  

So that I mean the kind of promise here is there's effectively you know this there's this um there's this uh you know this single interface which is going to be this way of having kind of a  360 sort of view of war and planning and that--the thing that really gets me about the Palantir system is that the use of LLMs is going to ensure sort of that there's compliance with kind of regulatory and legality of war making. Um so effectively um effe--effectively LLMs are going to ensure that you're not doing war crimes uh or something of that nature.  

Um and I mean that--and then the next slide here, which is very funny, and your pull quotes are spot on here, effectively where--because I watched this video and there's a big uh disclaimer just from the outset where you know your pull quotes say, "Coming soon. Any data shown is nominal." And then another one that says, "Forward-looking statements should not be read as a guarantee of future performance or results. They will not necessarily be accurate indications of the times at or by which such performances will be achieved, if at all."  

Um just really incredible statements here you know and and so-- 

EMILY M. BENDER: Yeah give us lots of money and give us all the data. 

LUCY SUCHMAN: And and I think both of these are of a piece. Um we've heard we've heard  the regulators, we've heard the concerns. So the first one you know it's really about where the  keywords are "monitor and control large language models," so the premise here you know and Alex Karp is like Mr. Mr. Ethics. The premise here is that built in to these platforms are are all  of the rules and regulations that you need in order to keep you know legal, data sensitivity and regulatory audit requirements right so we we've got we've covered it, we've covered our  butts here and and it's all going to be you know--the rules are going to be built in. 

We could talk about the problems with that. And then when we go to the disclaimer, of course, you can recognize that it's the kind of obligatory um you know required by uh by the finance financial regulators um disclaimer so you know you shouldn't go out and buy or sell stock based on what you see in this demo. Right so that's you know we start with these acknowledgments um that we've heard it right, we're taking this on board and that's where one of the places where I think ethics um have become just a kind of part of the landscape that you have to build in to your system, right and you have to you have to it's it's like this disclaimer um about about your you know the the what--the status of of what you're seeing here um the kind of investments that should and shouldn't be made on the basis of it um and now that disclaimer has been extended um to cover questions of ethics law etc, and so that's another thing you know we need to keep pushing back on. Like what does that actually mean, how is that actually going to be implemented? Yeah. 

EMILY M. BENDER: Yeah so I want to take us to the next part of the slide deck because I think what you've put together here is really interesting. So this is some images of the um uh the total mock-ups, right, of what the user would be seeing, um and um I was wondering Lucy if you could comment some on sort of what kind of a scenario they've painted here and what's being exposed through the pane of glass? 

LUCY SUCHMAN: Right, right. And maybe it's a good it's important to note here that Palantir of course you know has has a long track record um building these kinds of systems right you know we we can look at uh Sarah Brayne's pretty amazing book, "Predict and Surveil," which is all about Palantir's uh you know infrastructures for for predictive and now precise precision policing because predictive policing is too controversial.  

Um so so they're behind here there is there are back ends that are well I'm assuming that are well established, right sort of data data management, data integration backends. Uh so what's now what we're seeing is you know we've got a ChatGPT style interface and we're and and there's a lot of discussion about how these you know foundation models or large language models or the platforms, um are they're going to share these these back ends, and then they're going to be this sort of fine-tuned customized--Scale AI talks about its systems in the same way. And for me there's a lot of questions there about how that actually works and um but but to come back to this scenario, you know it's really interesting again you know as I said it's a demo, um so we could think about how how it's being staged, how it's being framed and the the thing that's most striking is that we have this incredibly orderly um scenario uh located in some place like the Ukraine but but completely you know with all of the the bodies and the chaos erased. 

Um you know we have we have an aerial view uh and we are picking up on some um you know on what gets labeled here as a possible armor attack battalion, okay so we're back to good old war fighting where people are identifying themselves with their uniforms, with their equipment and so forth. 

So that's a really important condition right.  

Um and then we're doing various forms of um of of intelligence-based um investigation and analysis.  

So in this frame you know the the uh the operator has asked about the options for getting a closer look at this possible armor attack battalion and has been given several options, and picks the MQ-9 Reaper that's in the in the area and and then the system you know it it it the operator can simply uh instruct the system to put the next action into place which is to get this MQ-9 Reaper to to fly over this battalion. So it's all it's all sort of pre-staged to provide just the conditions that can be addressed through the sort of intelligence apparatus--surveillance and intelligence apparatus that's available to understand what's going on, to decipher what's going on in the ground. 

And again with all of the things that couldn't be datafied or can't be captured by that by that  infrastructure just sort of erased from the picture. 

EMILY M. BENDER: And I certainly wouldn't trust the US military to not be like oh okay well there's no people here, there's just these you know enemy targets that we will target. 

LUCY SUCHMAN: Exactly. Yeah, exactly exactly. 

ALEX HANNA: Yeah and it's I mean and I love the way you put that, Lucy, kind of the removal of kind of bodies and this kind of reality of of warmaking but that's sort of you know it it makes sense as sort of this extension of kind of you know warfare by drone and and kind of remote operators. The thing that I found shocking about this video, and some of it is um--you have a few things here in which the operator is is typing these commands in, and it responds with a set of different um different orderings; I think on the slide it has three courses of action--I think it's the next slide Emily--uh yeah here. 

So it says yeah so course of action one, "target with air asset," uh and it has uh you know the time required, the distance to target, the uh the assets and the arm the armament and how many personnel it's involving. Uh the um course of action two, "target with long range artillery," uh and then uh course of action three "target with tactical team."  

Um and then it's effectively showing you know what what it would what it would take to do these different things um and the kind of thing that really you know boggles my mind is not necessarily the remoteness of warmaking which I think has been a reality uh you know in the post-World War II, post-Vietnam era, but I think the way that this large language model um you know presents these sort of options as these clear cut and dried types of operational courses um that have been cleared of all kinds of uh ethical commitments with different lawyers um the amount of kind of pre-clearance one needs to effectively not do a war crime, so that it matches with with wartime law um and as if the large language model can do that type of reasoning. So it provides not only this kind of sanitized pane of glass as they put it but they kind of sanitize sort of the ethical and regulatory uh uh um uh kind of circumstances in which to make decisions in warmaking. 

Yeah so yeah that's that's the thing that floors me about this and is and and of course it would be Palantir to develop a technology like this. 

LUCY SUCHMAN: Right and even if the models themselves can't do everything um the the scenario here that's that's shown in the in the video is that this is also a communication system. So that all of the relevant parties can be are integrated into this system. So if we need to consult with it with the JAG lawyers, that can also be done through this interface, right, they can be consulted because there are various points at which the operator you know sends messages gets approval. 

Um in fact here they send these three courses of action out for for approval, um you know they so so there's a lot of of indication of the ways in which the vetting of things also can be done through this system and the but the most important thing is that it is an integrative interface um that allows this kind of of command and control, which is you know always a um a view view from above, um so it it enable it facilitates that uh in unprecedented ways. And um you know now you can see in this slide that that the suspected armor attack battalion is now confirmed armor attack battalion. 

Right so again so there's there's no ambiguity here there's no sort of messy uncertainties and you know the more we learn about what's going on in Ukraine, um it's unbelievably messy and com--I mean not even you know it's not even clear who all of the various troops are or who they work for or who pays them or you know--Zelensky doesn't even really know. And of course systems like this have perfect knowledge of their own assets and their own personnel but we know in Ukraine that's so far from the case, there there are people operating on the ground who like it's not even clear like who whose orders they're following right?  

Um yeah so that's all that's all nicely-- 

ALEX HANNA: Yeah imagine imagine this kind of--and one claim, getting back to the sort of move here, in a sort of Kissingerian sort of view, you know the kind of, we live in this world in which--I mean if you were trying to take something like this to, let's say, Syria, well I mean that you know becomes much more complicated. I mean the fighting there is still going on with kind of so many different forces and so many different mercenaries and interests at play. Then I mean you know this anticipates a kind of perfect underlying data stream to come in and you know really understand who this is um. 

So it's just incredible stuff and it's adding more and more I mean it's kind of supercharging this kind of notion of the automation bias embedded in LLMs into a very very messy uh theater of operation. 

EMILY M. BENDER: So I want to take us to a little bit of Fresh AI Hell because I have such a  great improv prompt for you Alex, are you ready? 

ALEX HANNA: Oh my gosh all right I'm doing it I'm excited because our producer Christie Taylor also uh revamped our screen but yeah. 

EMILY M. BENDER: All right so here's the prompt. Musical theater or not, your choice. You are operating a system like this to target fresh AI hell, but it's not working. Go. 

ALEX HANNA: All right so I've got a I've got a thing, I've got an aerial view of all the fresh AI hell and I say in a kind of Minor--Minority Report thing and I'm gonna use Anna here and I'm going to pretend like Anna's operating the interface. Zoom in on AI Hell--oh no what is that?  

It looks like some fresh AI Hell. Why aren't you working computer? All right that's that's all I got.  

EMILY M. BENDER: [ Laughter ] Right thank you. So we're not going to get through all of these um but I just wanted to bring up a couple um--and for people who are listening, Anna is a kitty. 

LUCY SUCHMAN: And a very lovely very lovely one. 

EMILY M. BENDER: All right so. Fresh AI Hell number one uh the APA style blog had a thing on how to cite ChatGPT.  

This is a little ways back and it didn't say, "Don't use ChatGPT as a source of information," it said, "If you're using it, you should cite your sources and here's how you would cite it." And I just I felt so disappointed in--I mean APA is not my professional society but we also in linguistics look to them as how to cite things and that was frustrating. 

So that was number one. What do we have for number two. Um actually well okay-- 

ALEX HANNA: This is this is from a mutual of Christie and I so let me-- 

EMILY M. BENDER: Go for it. 

ALEX HANNA: So it's uh so uh Millbot, or Emily Mills, um is quote tweeting a tweet. And the tweet says uh, "This week on Behind the Bastards I investigated the seedy world of online entrepreneurs flooding Kindle with AI generated children's books. It's bleak, off-putting, and a potential danger to child literacy." And then linked to the article. And then uh Emily, who now works at a um a kind of environmental um conservation uh organization in Wisconsin, says, "It's everywhere. Came across a guy clearly using AI to write Google reviews of state parks with loads of glaring inaccuracies and like he's not even making money off that. 

This shit is a curse we put on ourselves." 

So yeah incredible, incredible, terrible flooding of any kind of text with AI language. 

EMILY M. BENDER: That's just polluting the information ecosystem everywhere um. Okay Rich Felker, um oh I really like this thread. "AI is a lot like fossil fuel industry--seizing and burning something, in this case the internet and more broadly written down human knowledge, that was built up over a long time much faster than it could ever be replenished." And he talks in this thread, which we'll put a link in the show notes, about how what really matters, what's really important in this context, is provenance.  

Um and he draws a connection to um somewhere in here um medicine right. So if you've got um  a batch of medicine that costs a million dollars and then you learn that somewhere in there, and I can't find it quickly, um some of the doses were contaminated the whole thing becomes worthless.  

And he's saying if we don't actually pay attention to tracking the provenance of written information um in the sources that we get to it, it's all just going to become untrustable because it's all mixed in with this synthetic nonsense. Um we're out of time um Alex your Fresh AI Hell detector system was malfunctioning and it actually turned up a friendly--um do you want to share this one? 

ALEX HANNA: Oh my gosh, gosh bless Dolly Parton, who can never--can do no wrong. So this is uh from uh Consequence of Sound and the headline is, "Dolly Parton not interested in AI hologram," quote, 'I don't want to leave my soul--I don't want to leave my soul here on Earth' unquote. Uh subtitle, "Country legend believes she has already left quote um 'a great body of work behind.'" So a nice breath of air um Dolly Parton as usual um just being the best human being that uh that one could imagine. 

EMILY M. BENDER: Yeah. All right we'll shave the rest of that--save the rest of that hell for next time or maybe we'll have to do another "it's all hell" episode 

if too much of it backs up again. 

ALEX HANNA: Oh my gosh. 

EMILY M. BENDER: Lucy this was so enlightening and so wonderful, thank you so much for coming and joining our funny little stream. 

LUCY SUCHMAN: Thanks for the invitation thanks for the invitation, it's great to to think about these things with you. 

ALEX HANNA: It's been such a pleasure uh that's it for this week.  

Um our theme song is by Toby Menon, graphic design by uh Naomi Pleasure-Park, production by Christie Taylor. And thanks as always to the Distributed AI Research Institute. If you like this show you can support us by rating and reviewing us on Apple Podcasts and Spotify and by donating to DAIR at DAIR hyphen institute.org. That's d-a-i-r hyphen institute.org. 

EMILY M. BENDER: Find us and all of our past episodes on PeerTube and wherever you get your podcasts. If you like this episode like we said please rate and review and share it with your friends, you can watch and comment on the show while it's happening live on our Twitch stream. That's twitch.tv slash DAIR underscore institute. Again that's d-a-i-r underscore institute. I'm Emily M. Bender. 

ALEX HANNA: And I'm Alex Hanna. Stay out of AI hell y'all.