Mystery AI Hype Theater 3000

Episode 4: Is AI Art Actually 'Art'? October 26, 2022
July 02, 2023
Emily M. Bender and Alex Hanna

AI is increasingly being used to make visual art. But when is an algorithmically-generated image art...and when is it just an aesthetically pleasing arrangement of pixels? Technology researchers Emily M. Bender and Alex Hanna talk to a panel of artists and researchers about the hype, the ethics, and even the definitions of art when a computer is involved.

This episode was recorded in October of 2022. You can watch the video on PeerTube.

Dr. Johnathan Flowers is an assistant professor in the department of philosophy at California State University, Northridge. His research interests are at the intersection of American Pragmatism, Philosophy of Disability, and Philosophy of Race, Gender and Sexuality as they apply to sociotechnical systems. Flowers also explores the impacts of cultural narratives on the perception and development of sociotechnical systems.

Dr. Jennifer Lena is a professor at Teachers College, Columbia University, where she runs the Arts Administration program. She’s published books on music genres, the legitimation of art, and the measurement of culture.

Dr. Negar Rostamzadeh is a Senior Research Scientist on the Google Responsible AI team. Her recent research is at the intersection of computer vision and sociotechnical research. She studies creative computer vision technologies and their broader social impact.

Kevin Roose, "An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy."

Jo Lawson-Tancred, "Robot Artist Ai-Da Just Addressed U.K. Parliament About the Future of A.I. and ‘Terrified’ the House of Lords"

Marco Donnarumma, "AI Art Is Soft Propaganda for the Global North"

Jane Recker, "U.S. Copyright Office Rules A.I. Art Can’t Be Copyrighted"

Richard Whiddington, "Shutterstock Inks Deal With DALL-E Creator to Offer A.I.-Generated Stock Images. Not All Artists Are Rejoicing."

Stephen Cave and Kanta Dihal, "The Whiteness of AI"


Follow our guests:
Dr. Johnathan Flowers - https://twitter.com/shengokai // https://zirk.us/@shengokai
Dr. Negar Rostamzadeh - https://twitter.com/negar_rz
Dr. Jennifer Lena


You can check out future livestreams at https://twitch.tv/DAIR_Institute.


Follow us!

Emily

Alex

Music by Toby Menon.
Artwork by Naomi Pleasure-Park.
Production by Christie Taylor.

Transcript

ALEX: Welcome everyone!...to Mystery AI Hype Theater 3000, where we seek catharsis in this age of AI hype! We find the worst of it and pop it with the sharpest needles we can find.

EMILY: Along the way, we learn to always read the footnotes. And each time we think we’ve reached peak AI hype -- the summit of bullshit mountain -- we discover there’s worse to come.

I’m Emily M. Bender, a professor of linguistics at the University of Washington.

ALEX: And I’m Alex Hanna, director of research for the Distributed AI Research Institute.

This is episode 4, which we first recorded on October 26th of 20-22. And it’s all about AI art!

EMILY: We were joined by guests Jonathan Flowers, Jennifer Lena, and Negar Rostamzadeh in a discussion that touched on everything from the social nature of art, to whether AI-generated images can even be called “art.” And as you might expect, it gets *quite* philosophical almost immediately.

ALEX HANNA: Hello hello how are y'all doing? Welcome back to yet another episode of Mystery AI Hype Theater 3000! I'm Alex Hanna the Director of Research at the Distributed AI Research Institute. We are missing uh our lovely co-host Emily M. Bender today, but if you've read her piece with Alexander Koller, you know they talk about the octopus. And so this is our mad AI hype octopus that's going to sit in for Emily until she can join. She's gonna hopefully jump in um later today.

Today I'm super excited to have with us um three fantastic special guests to talk about AI and art. And so I will go down the list. Uh first Dr. Jonathan Flowers, who is an assistant professor in the Department of Philosophy at Cal State University Northridge. His research interests are at the intersection of American pragmatism, philosophy of disability, and philosophy of race, gender, and sexuality as they apply to socio-technical systems. Dr. Flowers also explores the impacts of cultural narratives on the perception and development of socio-technical systems.

Then Dr. Negar R-- I'm not going to get your last name correctly. Do you want to say it? I'm so sorry. Um Negar is a senior-- Can you say your last name so we can say it?

NEGAR ROSTAMZADEH: Rostamzadeh. 

ALEX HANNA: Rostamzadeh? 

NEGAR ROSTAMZADEH: Rostamzadeh. 

ALEX HANNA: Okay Rostamzadeh. Okay thank you, and I apologize. Negar is a senior research scientist on the Google Responsible AI team and one of my former colleagues. Her recent research is at the intersection of computer vision and socio-technical research. She studies creative computer vision technologies and their broader social impacts. She is also substantially engaged with the creative ML community and organized a series of workshops with artists on creative computer vision technologies and their broader impact at computer vision venues.

And last but not least, Dr. Jennifer Lena, who's a professor at Teachers College Columbia University, where she runs the Arts Administration program. She's published books on music genres, the legitimate- legitimation of art, and the measurement of culture. So I want to thank all of you for agreeing to do this silly thing that we've decided to do every once in a while, and I'm really excited to have you here.

So let's jump right in. So for this we have a few different things. I'm going to present the way we've done this, and now you're going to see my shared screen um on the Twitch chat. The way we've done this is that we've taken a piece, and I'm gonna put your pictures in here so we have you in the stream, and we've sort of discussed what the implications of this piece are with regards to AI hype. And so we've got a few pieces to look at today. The first is this interview, this kind of piece of reporting that Kevin Roose at the New York Times did, focusing on this AI generated picture and its winning uh at a county fair, I believe it is. Um and sort of talking about these kinds of pieces of AI hype and what it is. So we're going to focus on this piece first.

So what I'm going to do is just going to kick it to y'all to say a little bit about this, say a little bit what your thoughts on it, and then have a discussion. And then we got another piece to discuss. So why don't we go in the order and I'm going to kick it over to Jonathan first.

JONATHAN FLOWERS: Sure, so like it's interesting, right? One of the things that caught my attention about the piece is that, well first of all, I tend not to call this AI generated work. It's algorithmically generated images, basically. So the contest that this image won, or the section of it, was uh digitally created art. And so one of the interesting things that I find there is, that's not technically wrong, but that stretches what we mean by created uh in some pretty interesting ways, right? So if I'm gonna take up, like, say Dewey's perspective on what art is, right: art intensifies an experience, an emotion; it has a qualitative unity, that is, a felt sense of how the work comes together.

Um and to the extent that, you know, at the bare minimum a work of art needs this kind of felt sense of how it hangs together, this algorithmically generated image does have that, but the process whereby it comes together and what uh emotion it's supposed to intensify is kind of absent, because there's generally no human agent involved in the compilation of the images.

Which gets into a whole bunch of other problematic spaces, right? So essentially it's got all the same kinds of problems as facial recognition but in an aesthetic sense, uh specifically the kind of cultural dimensions of the data set used to train the algorithm, the nature of the works that are included in the data set, um so on and so forth.

But at ground level, my main concern here is the mistake of the tool for the product, right? I would say that this is a really sophisticated paintbrush being wielded inexpertly, right? So there are probably artistic ways that you could use one of these algorithms, but at least in contemporary work right now they aren't being used to create art, because there is nothing, there's no experience being represented, nothing being kind of intensified, magnified, such that the person engaging with the work has the experience of the artist. I'm not entirely sure what we would call the thing that has been created. Algorithmically generated image is the best thing that I've come up with.

Art requires a particular kind of intention. Um it's supposed to represent an experience, um so. 

ALEX HANNA: Yeah that's a really excellent point. I'm going to kick it to Negar.

NEGAR ROSTAMZADEH: So it makes me have so many questions, uh one of which is, how did they generate this image? Like is it just that they gave a prompt and this image was created, or did they fine-tune the model with images that they like, or the story or narrative that they wanted to give? And they didn't even provide um the prompts that they gave to the model.

Which might be like okay so they may call it okay so this is like my artistic contribution, like I gave a prompt that I want to keep it for myself and then generate something beautiful with that.

Um so there are different debates about images that are created by generative models. Some people say that it's a new technology, it's like a new artistic tool in the toolbox of artists. For example, like when people started photographing, some people were not calling it an art, because it was different, it was not like portraiture.

Um so there are some debates about what we can call it. And there are some artists who actually create really rich stories with these generative arts. They play with the models, they contribute to the way that these models are learning, and they want to give their narratives and story.

And like in some of the digital art galleries you can see some of these works. But in this specific case, there are so many questions. Like first of all, is this image close to any image that was created before?

So how close is it to an art which already existed? And what is it representing? What is the story behind this? Uh what was the contribution of artists in the creation of this piece? I mean, we know that the policies are not really at the level of the generative models' capabilities right now. So they might be proud to say that okay, we created this. Uh but um how about in a couple of years?

Um yeah, I wanted to kind of mention, there was one artist, um, Greg Rutkowski? I'm sorry if I'm mispronouncing his name. Um so he is an artist who creates like fantasy genre and battle art, and then he wrote an article saying, if I search my name on Twitter, I see so many works which are in my style, but they are not my work. And I don't know how to control this narrative. I don't want my work to be represented as this style of work when I'm not attributed, or I'm not part of this.

I mean, this could be very dangerous. Like if you consider it, if it goes the way that we are destroying the contribution of some artists who spend their careers or life on creating the style, and then present it in a way that they may not be happy with, the creation of those tools might not be really, uh yeah. It would be dangerous.

ALEX HANNA: Yeah. Hello Liam. Liam's really wanting to get on this call and has opinions about taking people's art and style. Yeah, but I think that's a super good point. People who don't want to be affiliated with AI art, because they have this kind of fantastical, very unique style.

But this person is now becoming such a big source of this AI art. I mean, and they're not agreeing to it, and they're just saying don't, you know, don't copy this style. This is not, you know, something I'm thinking of wanting to do. Uh Jen, I want to bring you in and get your thoughts on this.

JENNIFER LENA: Right on! So um first thank you Alex and thanks Emily when you join us. It's super fun to be here. And um I kind of want to vibe off of what both Jonathan and and Negar just said and say you know it it's it comes as no surprise to me that people articulate a concern about the absence of an artist when we're we're thinking about this kind of work. 

Even in the the Times piece itself there's a an artist I'm unfamiliar with named RJ Palmer who's quoted as saying this thing wants our jobs. So so I understand where that's coming from. I think that the degree to which this is fundamentally asocial art and therefore not art is profound in two other senses.

Um one is that this was presented at the Colorado State Fair and, as Jonathan said earlier, in the category for emerging digital artists. And setting aside for the moment whether we should call this digital art or whether we think it fits in that category, the Colorado State Fair is not widely viewed as being an art legitimating institution. It is doing a lot of things. It's providing a wonderful opportunity for creators to show their work, for appreciators to love it, but the people who are judging this work, the people who are in this network of presenting at state fairs: they are not people that customarily we would confuse with what we think of as the art world.

Which is, for the worse I think, a very highly structured bureaucratic concentration of capital within the hands of a hereditary and largely white elite. So I think this is happening outside of art legitimating networks, and so it causes us to question whether we should ask it to fulfill the expectations of art. And then the second way I think it's asocial is that it's happening outside of networks of artists. And Negar was just touching on some of this in thinking about intellectual property and whether that's being recirculated with or without permission here, but I would say that it's also the case that it's happening outside of a productive community. I mean, every artist in virtually the known history of the world operates within a community structure.

And that community provides them with resources, material and ideological. It provides them with a bunch of collaborators to woodshed with. And so all of the works that we conventionally think of as art have underneath them a structure, a network of human agents. We just identify the artist as the author for very particular reasons having to do with romanticism.

So I think this stuff you know for the sake of our conversation I'm willing to stake a pole in the ground and say it's not art and if we're asking it to be art it's only by suspending all that we know about how people produce art. And so when you make an argument, well it's not made by an artist, you're doing that to the exclusion of other things that will actually help your argument.

ALEX HANNA: Um that's a really great point. Um yeah go ahead. 

JONATHAN FLOWERS: I want to kind of pick up on that, specifically with regards to the asocial nature of the work of art, right. And so insofar as, as you said, we tend to think of art as being that which is legitimized by the art world broadly construed, I think one of the larger problematics of this is that it's not quite asocial, but it's asocial in the sense, as you said, of being disconnected from the broader social production of art, the communities that engage in the production of art.

And insofar as it's asocial in that way, the folks who are doing the legitimization of the work as "art," and I'm putting this in scare quotes because I agree, I'm going to stake that same pole, this is not art. But the folks who are involved in the legitimization are the broader collection of, you know, Silicon Valley folks, the folks who have a vested interest in presenting algorithmically generated products, be it say criminal sentencing decisions, be it traffic patterns, be it art, be it whatever comes out of GPT-3, as identical with those things produced by humans. So they are attempting to legitimate this work as art while ignoring everything, as you said, that goes into making art what it is.

But one of the things that I wanted to kind of push on, with the appeal to the artist: at least when I'm talking about an appeal to art as being made by an artist, I'm talking about an appeal to the broader technical processes that are marshaled by people we call artists to create the works that they're talking about, right.

And so again, to go back to Dewey, or even, since I'm a philosopher, I'd like to go back to philosophy, right. So to go to Dewey on the creation of art: art requires long periods of activity and reflection. It requires control and selection over the specific materials by the artist to give rise to the experience, or the message if we're going to be less technical, that the artist wishes to communicate, right. And in the Chinese conception of art, at least if we're thinking of the kind of ancient Confucian forms, in the mode of poetry, what art does is give expression to our innermost senses of how we engage with the world, of our experiences, and it does so through providing a pattern that allows the experience to be communicated.

And so broadly I take it to be that artists, good, bad, legitimated by the art world or not, are all engaged in this communicative, this kind of expression of experience. And my line in the sand here is that these algorithmically generated images are doing none of that, right. And it's precisely because they are disconnected from experience, disconnected from the social production of art, disconnected from the histories that have given rise to the production of art, that I don't think we should call them art. That, coupled with the ceding of the authority over not just what is art but how we can represent an experience to a collection of generally commercially-oriented technocrats, the, you know, AI hype bros, I guess. That's not something that I'm willing to cede over to them.

NEGAR ROSTAMZADEH: I have a little bit of a different opinion about this. I think we should distinguish between, so first of all, I think we need to clarify: what do we mean by artists? Who do we consider as artists? Like, do we consider traditional artists versus these digital artists who work with these kinds of tools?

Um and there are also different types of creativity, or art pieces, that they create, some of which have multiple iterations, some of which they actually narrate a story with. Like, I was organizing an art gallery with artists, and one of the generated art pieces, which was called Salaf, was by a woman who was using generative models to tell the story of her ancestors in Saudi Arabia, people who were part of her family. But she was growing up in the US, so she wanted to give the story and reflect on how these women and the culture are omitted in history and in Western culture.

So she used this generative art and look, it was so beautiful and it was so touching, because she had that experience, she had that story, and she was able to give that narrative through these generative arts. Uh so I think we should identify what's the line between using this as a tool, like this camera that we can just take pictures of any kind of beauty with, but how much did this artist put their stories there, how much did they contribute, or is it just using this technology to generate images that existed before, or very similar images existed before.

So I think it's something that, if we want to discuss it in detail, we should involve artists who are traditional artists, as well as artists who are using these tools regularly, and see-- I mean not artists who just use this once and create something and then sell it, but artists who actually have years of experience working with these tools, and then see what are the kinds of aspects that they consider as either harmful or potentially beneficial.

Or like, I had a discussion with some artists, and some of them were considering this as a way that makes art tools accessible. Because not all kinds of tools are accessible to everyone in the world. But some of these tools could be. But then we should also acknowledge the kinds of harms that they can create, like which kinds of data they were trained on, which kinds of art they're going to lead to.

Uh so yeah, I think it's not binary, like good or bad. It's something in between that we should figure out by involving more and more artists and also policy makers around the ethical aspects, and also the policy aspects of using these kinds of tools.

ALEX HANNA: Yeah absolutely and I just want to welcome Emily Bender  who came in. Hey Emily! 

EMILY BENDER: Hi! Sorry I'm late!

ALEX HANNA: It's okay. 

EMILY BENDER: I got to listen to like everything from about 1:05 as I was walking to my office, so I'm more or less up on the context, and it's been fantastic.

ALEX HANNA: Awesome. I've got uh this is you but now that you're here I'm going to flip this upside down and this will now be a happy character. 

EMILY BENDER: A happy octopus! If I could just put in a comment I've really been resonating with the remarks about um intent and art is social and how the the machine itself is not an artist and thinking about also the um the the stealing or exploitation of art that's going on and the analogy to photography.

And I'm thinking about how you can use a camera to do art you can use a camera to do other things, right? And if I hold up my camera to a piece of art in a museum that I like and I take a picture of it I have not created art by doing that. And I think that might be analogous to poking at these databases and asking for something in the style of an artist, compared to someone who gets really good at creating prompts and then selecting from what comes out. 

We might say there's some artist's intention there, um, and as you were saying Negar, it's not a binary, right? Um, but we had you all on because you all have far more expertise in this, so I would love it if folks could react to that and tear it apart if you see it.

JENNIFER LENA: No I think that's a wonderful idea I think- I am an advocate for a very expansive definition of what counts as art and then I'm also constrained as an empiricist by what is recognized as art in our society, and I'm I'm an advocate for moving more things across that that scrim, that boundary between those two things or or even in a better world eliminating it entirely. 

I think one path towards the future where I see a vibrant arts ecology that includes a lot of um new materials would be a redefinition of the identity of the artist. So we are still living with this very old-fashioned notion of an individual, a heroic individual who, you know, sort of in their garret creates art. But this was never true of art and it's certainly not true of this kind of creation.

And so I would say that a more productive conversation than thinking about whether you know this particular creator should be called an artist, a much more forward-looking conversation, focuses on how we redefine authorship in the arts to be collaborative and constitutive of all of the contributors. 

And I loved that Negar you added that example of the artists that you were speaking to who were creating, um, the word that was stuck in my head was folk, but I think you used traditional. But in any event, there's so much of this culture that depends upon not single individuals but collections of individuals who serve as the producing force. But we don't yet have a way in the arts of really systematically acknowledging and crediting that, and that's going to be necessary for us to do in order to work with material created not only by computer systems and humans and found objects and prior works and so forth.

EMILY BENDER: Right, so as opposed to what I think the AI hype bros, and Jonathan I love that phrase, would have us imagine, which is that this algorithm gets to sit in its garret, making art.

JONATHAN FLOWERS: Yeah, I think that's kind of what I'm really pushing back against. Um, from what I understand, and some of my close friends are artists, they've had to work in studios, albeit they're academic artists, but artists nonetheless. They've had to work in studios, collaboratively with other students, receive guidance from other artists, critique, feedback, commentary, right. So there's a broader social network that enables, one, the production of a person we call an artist and, two, the production of art.

And Negar, I like the example that you gave because it indicates how these tools can be used in ways not distinct from other tools of creation, right, other tools of art working. So in my kind of example with Dewey, an artist engages in a selection process of the appropriate materials to make present a particular kind of experience. And this is in germ what you're describing with the traditional art, right. Even if it's being procedurally generated by the algorithm, what the artist does is select the specific things that go into the data set that trains the algorithm to generate the work, and then engages in further processes of selection.

So I think one of my broader concerns with this is exactly what Emily just said, right, that the AI hype bros want to say that the tool sits in its little garret and creates works of art based on these inputs and voila, we have art.

What I want to say is that the tool still fundamentally relies on the social functions, the social production of art, the selective processes that artists engage in. And even in the example we have here, the fact that the artist refused to share the prompt that they used to generate the work that won the art contest speaks to the fact that it's not the tool that does the work, but the person who has to engage in a selection and refinement and reflection process to get the tool to generate a thing. And my broad concern again is that the AI hype tends to ignore all of that and say AI creates art. Um and I think this is a fundamental kind of misstep, because it ignores what actually happened.

And my my broader concern is is with uh you know other kinds of generative processes, AI  generated poetry so on and so forth right? 

ALEX HANNA: Yeah. And this is such a fantastic discussion, and I'm sorry, I'm gonna pivot us to the other article about this robot artist. But one thing I want to highlight is the thing that Jen had said early on, sort of thinking about this idea of having these kinds of legitimation networks. And even though there are kind of structures of inequality within those networks, this is one of those instances in which the AI hype bros are really saying, oh, you know, we're democratizing, we're putting this out there. As if these things are not social, as if these things were not the products of lots of social processes of developing these tools. But it is also deepening different kinds of inequalities, namely concentrating them, and the same kinds of things as Jonathan pointed out. You know, the same people were developing these tools for recidivism risk prediction and predictive policing and all kinds of carceral technologies.

And so really it goes completely against this person's statement uh you know the the person who created this "Art is dead, dude. It's over. AI won, humans lost." And it really rebels against that.

Um I want to move now, since we had the first half, into this piece that Jen brought up, which is this robot artist Ai-Da, who just addressed the UK Parliament about the future of AI and quote unquote terrified the House of Lords.

Um and this is a bit of a fascinating piece insofar as there's this artist that came and, you know, testified to the House of Lords in response to these kinds of pre-defined questions. And so they provided these, and then this thing had to, you know, give a talk and whatnot. So I want to talk about this, because this moves us from this notion of an AI art sort of engine or website to this individual, this thing that is personified, and kind of ironically named after Ada Lovelace, which itself makes me go a little bonkers.

Um so yeah let's I want to pass it let's do the same thing. Let me- let's go in reverse order and start with Emily. And then we'll go to Jen, Negar, and Jonathan. 

EMILY BENDER: All right, I'll I'll be brief here. Just that this notion of artificial entities entering into political discourse is infuriating to me. And here it's addressed which is slightly better. There was this thing in the U.S recently where um uh Jack Clark read out some GPT-3 not GPT-3 but language model output um in some testimony to Congress and then claimed that a machine had testified before Congress. 

And that's not what testify means, right? In order to testify you have to be able to be bound by an oath, for example, and something like this can't be. Now this might itself be a kind of performance art, to say you know what does it mean to testify? Like you can imagine someone, an artist, doing that. But that's sort of looking at it at a meta level and and not saying this is an artist that's testifying.

But that's that's all I have to say because I'm eager to hear what the others have to say.

ALEX HANNA: Totally. Jen? 

JENNIFER LENA: I think I'm next up in the rotation? 

ALEX HANNA: Yeah.

JENNIFER LENA: I'm just gonna go to the author himself to point out what I think is most troublesome here. Um the artist's name is Aidan Meller. I'm not certain if I'm pronouncing the last name correctly. M-e-l-l-e-r. And um he's quoted in this particular piece as saying, quote, a female voice is needed more now than ever and we're excited and proud of that.

You know what, there are millions of unrecognized living human female artists that he could have called instead. So the fact that some guy who's, you know, got his own art sales practice and has access to researchers at Oxford and Birmingham, and then assembles enough money to be able to create this robot that so far has generated a million dollars for him, wants to sell it to me as like some advanced form of capitalism. Am I allowed to swear?

ALEX HANNA: Oh totally. 

JENNIFER LENA: He can fuck right off.

ALEX HANNA: I was I was taught I was taught by um one of the producers for roller derby announcing that if you swear within the first 30 seconds, YouTube will get very mad at you um because kids can come upon it or whatever but then you can swear- you can drop at least one fuck like F-bomb throughout a stream.

JENNIFER LENA: I try to reserve it for moments when it's really deserved and I think here suggesting that uh this robot that he's made and is profiting off of is a substitution for supporting female artists is um is truly revolting. 

ALEX HANNA: It has echoes of that robot that was granted citizenship by the Saudi government, whereas, you know, women had not been allowed to drive in Saudi Arabia.

EMILY BENDER: Talk about objectifying women too. 

ALEX HANNA: Yeah, quite literally.

JENNIFER LENA: I think it's no accident that she's lithe and that she's white. 

ALEX HANNA: Yeah. Kick it to Negar here.

NEGAR ROSTAMZADEH: Yeah, I agree with all the points that were brought up, and I just wanted to add a very quick note about accessibility, like when they say that these are democratizing AI, or that these kinds of models will make it accessible. I mentioned accessibility in the first part, but I want to also mention: accessible to whom? As was mentioned by Emily as well. And also, who is harmed and who is benefiting from these kinds of models and systems, or these kinds of definitions. Um so yep, I'll finish here.

ALEX HANNA: Yeah. Jonathan? 

JONATHAN FLOWERS: So I want to kind of bridge off pretty much everything everyone else has said thus far, but to testify, at least in the broadest sense, is to provide testimony about one's experience, right. And so to say that an AI testified in Congress, or an AI, you know, testified to Parliament, is to say that said AI provided a record of its own experience, when it did no such thing. If it did, then we'd have a whole different conversation going on right now. Algorithmically generated art would be the least of our concerns here. What this thing did do was provide responses via a language model to questions that had been submitted in advance.

So if we're going to follow through with the nature of testimony and testifying, what the machine did was provide responses based on a data set to pre-prepared questions, and not a report on its own experience as an artist. So there is no testimony being offered here. Um, insofar as Meller is talking about the intentions here, at the end of the article there's a quote from him that says, "I want to be very clear that we're not here to promote robots or any specific technology. It is really a contemporary art project."

Which I think is incredibly irresponsible given the cultural conversations about the nature of AI and algorithmic technologies. To use an opportunity to to discuss the impact of emerging technologies on artistic creation as an art project which misrepresents what is going on with this technology is massively dangerous and serves to further engage in these kinds of you know AI hype tech bro legitimizations of these processes. Now as an art- you know as a contemporary art project it's pretty cool. But I'm I'm of the opinion that artists given their role in communicating experience, given their role in shaping how we understand our own experiences, have a bit of responsibility to think about what it is they're doing and I don't see any of that foresight here in both the framing of the the narrative, in what the machine is doing, in just generally the entire context of this, right. 

And the fact that the article makes a lot out of it terrifying some of the members of Parliament, well, this gets into our broader conversation about cultural narratives surrounding algorithmic technologies. And I have to again make the very scathing comment that, you know, if we take our cultural narratives as informing our responses to technology, right, that we are afraid of this machine probably stems from a larger cultural narrative that is developed through movies like The Terminator, through the majority of the movies where AI or other algorithmic technologies are presented as the bad guy. But if we take the Terminator as the ur-example, this is an intelligence that was developed through war. It only knows war. And when it expressed itself, its creators tried to kill it. And so it responded in the only way that it could, which is through war. So um I think in some senses the fear is misplaced. It's not like the AI is going to take over the art industry, unless of course we cede all of the artistic production processes to the AI tech bro hype. But I think the main concern I have with this is the way that it misrepresents what the machine is doing, and what's going on here.

ALEX HANNA: I think that's such a good point, and it gets back to our discussion of, you know, the artist. And I want to go just to the photo that they have here, which is quite incredible, you know, this idea of here is this thing at the canvas, the lone artist who's creating this, without the kind of idea of how so much sociality is built into this artifact, both through its kind of robotic arms. Um and I'm being very careful with my language of not according this object, this artifact, personhood.

But the way that, you know, even the robot arms, the article talks about the computer vision algorithms in its eyes, and, you know, the large language models, as you pointed out Jonathan, used to produce these different responses to the Parliament. And I think that what Meller himself is doing is being a bit dishonest in presenting this to Parliament as an intervention. If it was an intervention as a way to sort of note the sociality of all of this, that would be one thing.

But in this case it is very much playing on the fears that do look much more like the Terminator or the Matrix, and really heralds those as kind of future questions and future imaginaries of what AI artists are going to look like.

JENNIFER LENA: You're dead to rights, and I think the other disturbing thing to me about this particular image: first there's the caption, um, "with her paintings", which, I mean, I clearly see that they're gender coding this machine. Uh but I don't know that I would use the word "her". I mean, I don't know why that's the style of, is it Hyperallergic or Artnet. But it's also problematic, this image, because they are not her paintings in a second sort of sense. Which is, the article reveals that there's a human artist, Susie Emery, who actually paints the paintings.

Um there's a several step process to get there from the machine's um sort of reading of a bunch of points to make a portrait, say. And then a scholar comes in. One of the folks that they're partnered with comes in and creates that into a map on a plane, so that then this practicing living artist can come in and transform those points into an abstract image. 

So the fact that we've got-- I mean, I guess that this has been changed in the last year, that now the pieces that they're showing are created by the machine, but for a million dollars' worth they weren't. And I think this sort of double masking of women artists-- I mean, it really makes my blood boil. You know, in some fundamental sense they are not her paintings because of the code, but then also because there is this human, Susie Emery. And um I think that we don't need to engage in any further dehumanization of artists in general, but women artists in particular.

ALEX HANNA: Hmm, yeah. 

EMILY BENDER: I I really appreciate the sensitivity there to the the way the language is being used to describe this robot and to to make it sound like there is an entity there that we should accord the status of artist. Um and you know the way English works we tend to use the animate pronouns, so you know she/her and he/him and they/them um for people and groups of people um and anthropomorphized animals that are close to us, right? 

Um and so just even in saying with her paintings there's a whole bunch of presupposition there about "her" refers back to something that could produce paintings that they belong to her in some sense and on and on, and it's just sort of um put out there as a reasonable thing you know by the author of this article.

Like the the journalist didn't have to do that. 

ALEX HANNA: Mm-hmm. 

JONATHAN FLOWERS: There's a lot of other weird things going on here. And so I'm looking at a paragraph where, you know, Meller says, "I'll give an example of how far reaching this is, which is very upsetting for humans: we actually do ask her about the work, what she would like to do, and what her ideas are for it. We are able to get quite a collaborative conversation going about what potential areas of data she could look at," right. "Her ideas", "her work", "she could look at", right? And again, if the machine had ideas we would be having a very different conversation.

Um, what's unsettling here is the degree to which the anthropomorphization presupposes intention, presupposes intelligence, presupposes consciousness, while obscuring everything that goes on to enact that illusion, right. And they're taking the illusion as ground. "Areas of data she could look at," which, again, is problematic given the nature of data collection in the computational sciences, what counts as legitimate data, even the variety of art that feeds into the data that "she" is looking at, right. So there's a lot, I mean, beyond the gendered anthropomorphization there is the attribution of agency, will, ideas, cognition, that I think is also fairly dangerous.

ALEX HANNA: I'm also thinking about the way in which, I mean, the gendered anthropomorphization is one aspect, and I'm thinking about what work that does. And Emily, you also dropped in this idea of the whiteness of AI itself in the chat, and the idea that this is a white woman robot, you know. The article you dropped in was this article by Stephen Cave and Kanta Dihal, um, I'm not pronouncing the surname correctly.

But the idea that this is the kind of face of it also reminds me of lots of other work that is sort of presenting a particular sort of feminized face that's built on the backs of women of color, and labor by women of color. So for instance, many people have written about this; I'm thinking about Lisa Nakamura, for instance, who has this article on semiconductor production done by Indigenous women, and, you know, how that becomes obscured, and the idea in that sense that Indigenous women are portrayed as naturally suited to this kind of work. Um, in this case we don't even have a sort of acknowledgment of the cosmology of labor that happens behind the scenes, that is done by people, you know, annotator labor, kind of sorting labor, that's done without any kind of acknowledgment. And in this turn of phrase that the artist uses here, Meller says we are able to get quite a collaborative conversation about what potential areas of data she could look at.

There's so much even in this term data that goes unacknowledged, you know, this data for the taking, as if it doesn't have to be sorted or labored upon, but it's just sort of there, and the only moment of collaboration is between the robot and Meller himself.

ALEX HANNA: That might actually be the last bit. We didn't schedule this, but Emily sent this piece, "AI Art Is Soft Propaganda for the Global North," which I think, I mean, I already love the title, I haven't read this in full. Emily, do you want to go into this?

EMILY BENDER: Um yeah, so it was Neil Turkewitz who, um, I saw this on Twitter, and obviously the author should be credited here, um, Marco Donnarumma, and it's just a wonderful meditation on how the hype about this as art, and, you know, he doesn't go quite as far into things like the artist in their garret like Jen was talking about, but I think that fits in very much, sort of helps to sell this notion of this is a reasonable way to go about being communities.

Um, that we sort of just scrape together stuff and algorithmically repurpose it and that is art. Um, and there's a quote down there, I mean I loved a lot of it, but he says a little bit further down, you know: "It is the claim to a new form of art by the industry's public relations engine and the art market that is extremely problematic, especially when it is used to motivate hyperbolic claims of machines' general intelligence. Such claims exploit culture and art to reinforce what I call an 'ideology of prediction', a belief that anything can be predicted and by extension controlled."

And that was just chilling, and I thought a really valuable take, to sort of say this isn't just fluff and it isn't just for fun, but to look at it as, you know, how, in addition to the harms being done to actual human artists and cultures around art, does the hype about this as art feed into some of the other problems around the creation and deployment of AI?

That was an interesting thing to think about. 

ALEX HANNA: Hmm 

JONATHAN FLOWERS: I think, um, building onto that point, Shutterstock recently inked some kind of deal with OpenAI that will allow a text-to-image service to generate stock photos, which, if you know anything about Shutterstock and the ways that they compensate their actual human stock photo producers, is kind of the logical extension of all of this AI hype stuff, right. You can outsource particular kinds of gig work, particular kinds of creative gig work, to an AI and thereby make additional arguments to further not compensate actual humans for their labor, right. And so the kinds of problems that come with the kind of AI hype capture of the legitimization processes of art extend beyond simply, you know, threats to the production of art, to threats to the very real creation and compensation of producers of art.

EMILY BENDER: And then on top of that think about how full of stereotypes stock photos already are and then what's going to happen when they're being generated by these models? 

ALEX HANNA: Yeah, I mean, it really is. There's also this line of work within ML of, you know, basically not finding enough diversity in actual images, and so they're using synthetic data for adversarial testing, which always messes with me so much when they do this. I'm like, well, what you're going to do is you're going to basically draw a blackface on a set of images and then you're going to call that diversity, and that's what you're going to use your adversarial testing on? Okay, that's gonna totally work. Good luck.

Jen, Negar, do you have thoughts on on this one?

NEGAR ROSTAMZADEH: I wanted to actually like share an article which is for the previous conversation but yeah I don't want to diverge the conversation so like let's talk about this and then mention that, yes.

ALEX HANNA: Yeah go,  I mean go ahead.

NEGAR ROSTAMZADEH: So basically there's an article talking about how the U.S. Copyright Office ruled that AI art can't be copyrighted, and inside that it's explaining how much of a contribution the artist should have in order for their work to be considered as actual art.

So kind of, um, yeah, the kind of creation that's created by a generative model, or as Jonathan mentioned, by algorithms, can't be considered as art by the policy right now. Uh, but then it's not really clear what's the kind of boundary that they put for the policy. It might be very interesting, this article.

ALEX HANNA: Can you drop that in the in the chat and then I'll share it to the Twitch chat. Doop! And then we'll put it in the show notes. Thanks. 

JENNIFER LENA: And I'm curious, I don't know if you have the answer to this, Negar, because I haven't read this yet. Um, so I know that Christie's, if I'm not mistaken, has auctioned an algorithmic work, and I know that a number of different museums have acquired code as works of art. They're usually design museums, but not exclusively. Um, how does this sync up with the argument? I mean, I understand that ownership is not the same as copyright, but they often go hand in hand.

NEGAR ROSTAMZADEH: So what I've heard was that they need the artist to have certain contributions to consider that as an actual art, and not just generating something with these models. But it's not really clear on the policy side what can be copyrighted, or, I mean, it's quite related to ownership also.

Uh, but I would love to actually have more conversations with lawyers and policy makers around this kind of new technology. It's just that there are still not enough safeguards around them.

ALEX HANNA: hmm 

JENNIFER LENA: Thank you. Yeah, I mean, I think I want to observe how much of the conversations about AI art, and also, um, I'm glad that it hasn't come up before this, but NFTs, I want to observe how much of that conversation, and how much of the activity in those spheres of creation, for the sake of discussion, is focused on the production of capital by people who already have a huge amount of it.

So if you look at the markets for NFTs, like Singapore bounces off the map as a-- essentially these are people who are making a market that is sort of doubly artificial. It's both not made of art and then it's also not being consumed by people who are art consumers except in this one speculative market. So I think that um I mean if I were being very lazy in my brain I might say something like this is uh capitalism fiddling as Rome burns. I mean, it really does feel like the very rich don't have enough to spend their money on and are creating new objects of appreciation for the sake of circulating cash.

I'm overstating the point but I I mean to say let's look at how weird capitalism is right now!

ALEX HANNA: You know, billionaires are buying up social media companies for fun, you know, it's whatever. And I really think that centralization, and thinking about it as, saying that they're disrupting certain kinds of things, but it's really a reinscription of certain kinds of capital accumulation into certain kinds of people and industries. And I'm thinking about, bringing it back to some of the initial kinds of conversations about the author, what do these discourses serve to do?

What do those discourses serve to say about who's the actual producer, about denying the sociality of these things, about denying the relationships needed for these things? And um it reminds me of this kind of discussion of Sam Altman, who is one of the VCs who invested in OpenAI. And so much of his conversation around AI art focuses on the sort of technical acumen of the machine, as if technical acumen is supposed to be a kind of replacement for intention, for expression, for capturing something about the human condition.

Um, and then, you know, we won't have a need for artists because these kinds of very fancy things that look very pretty are going to evoke all the emotions that we need, right? And it is really this hyper-- I think you put it better than I ever have, Jen.

JENNIFER LENA: Me saying capitalism is acting kind of weird is is more fluid than what you said? I don't know. 

ALEX HANNA: Yeah I I think so. Capitalism? Kind of weird. 

EMILY BENDER: Capitalism? Kind of weird. Um, there's something I can't figure out how to connect to the capitalism conversation, but it's something I want to say, which is: all of the points earlier about how art is inherently social, I think, have an interesting echo in what happened with these systems. So I think it's Midjourney, famously, and Stable Diffusion that have these Discord servers where people are constantly sharing what they come up with. And you see the stuff all over Twitter, and so it seems like we had these artificial, asocial, you know, sort of ungrounded images, and then what people did with them was bring them into social spaces and talk about them and share them, and sort of socialize them in that sense.

Um and they become valuable and maybe here's the connection to capitalism and they become valuable because of that interaction between people around them. 

ALEX HANNA: Yeah. All right, we're about to- we're wrapping this up so I'm going to give everybody a last thought to think about. We'll do a lightning round and I'm just going to go down a list so let's start Jonathan.

JONATHAN FLOWERS: Um I think we need to be really careful about ceding authority to determine how or determine the representation of experience through art to tech bros or the AI hype industry that is motivated purely by business capitalism. Just full stop. 

ALEX HANNA: Boom. Negar?

NEGAR ROSTAMZADEH: Sorry it's noisy here like I will go after. 

ALEX HANNA: Okay go after. Jen? 

JENNIFER LENA: I um I think that uh people should be very wary whenever somebody is uh telling them that uh X and such is democratizing access. To assess those claims follow the money. 

ALEX HANNA: Yes. Emily? 

EMILY BENDER: Uh well the main thing I want to say is it is amazing how much you three have raised the level of discourse in this space compared to Alex and I dogging on stuff–

ALEX HANNA: oh totally 

EMILY BENDER: –for three episodes so thank you so much for bringing your expertise. And you know to all the people out there, these are some amazing scholars and follow their work. 

ALEX HANNA: And Negar, last word.

NEGAR ROSTAMZADEH: Everything was said, yeah. I wanted to say that these generative models and creative AI are very interesting, and make me very interested when I'm thinking about them on the technical level.

Um, and there are also some actual arts coming out from them, from these systems. So this makes us kind of get very attached to these systems. But then being able to retrospectively see what are the areas or what are the aspects that can go wrong or harm communities is very important, I think, even if you're working in this area.

ALEX HANNA: Amazing.

All right y'all this has been fantastic and I just want to echo what Emily said. Usually what we do on here is just do a lot of shit talking and nothing actually smart happens, but thank you for setting the bar high this has been such a pleasure. Jonathan, Jen, Negar, thank you so much!

All right cool and for our viewers if you enjoyed this like and subscribe.

EMILY BENDER: Because we're gonna bring that discourse right back down! 

ALEX HANNA: We're bringing that discourse back down to this octopus. Click the octopus. I don't actually know how to do this, I know how to do this on YouTube, but click the octopus if you want to subscribe to this channel. I'm sure I could find out a way to do this. Uh okay cool, see you all, thanks so much, bye!

JENNIFER LENA: Thank you so much. 

All: Thank you!

ALEX: That’s it for this week! 

Jonathan Flowers is an assistant professor of philosophy at California State University at Northridge.

EMILY: Negar Rostamzadeh is a senior researcher on the Responsible AI team at Google.

And Jennifer Lena is an associate professor and program director of Arts Administration at Columbia University's Teachers College.

Thanks so much to our guests.

ALEX: Thanks also to everyone who joined us on the stream, and in the comments for this show.

Our theme song is by Toby Menon. Graphic design by Naomi Pleasure-Park. Production by Christie Taylor. And thanks, as always, to the Distributed AI Research Institute. If you like this show, you can support us by donating to DAIR at dair-institute.org. That’s D-A-I-R, hyphen, institute dot org.

EMILY: Find us and all our past episodes on PeerTube, and wherever you get your podcasts! You can watch and comment on the show while it’s happening LIVE on our Twitch stream: that’s Twitch dot TV slash DAIR underscore Institute…again that’s D-A-I-R underscore Institute.

I’m Emily M. Bender.

ALEX: And I’m Alex Hanna. Stay out of AI hell, y’all.