
Art Is Not a Thing
Art Is Not a Thing is a podcast series about art as a practice of critical inquiry, knowledge production and world-building. From media art, bio-art and sound art to digital activism, speculative design and data storytelling, the series delves into artistic work that reflects on, questions, and reimagines our practices in and of the world.
The series is developed in collaboration with Radio Ö1.
Host: Hannah Balber
Producers: Ana-Maria Carabelea, Christopher Sonnleitner, Marlene Grinner
Editing: Hannah Balber, Ana-Maria Carabelea
Music: Karl Julian Schmidinger
Art Is Not a Thing
Web of AI: Linking Homes and Battlefields
In today's episode, we talk to Sarah Ciston about her award-winning project, AI War Cloud Database. The project visualises the links between everyday technologies and military infrastructure, and reflects on the increasing automation of war.
Sarah Ciston: One that was really compelling to me: the iris scanners that are used in refugee camps in Jordan. And then these iris scanners become used for crypto wallet technology. And you can follow this thread and threads like it on the map.
Ana-Maria Carabelea: Welcome to Art Is Not a Thing, the podcast series about art as a practice of critical inquiry, knowledge production and world building.
My name is Ana Carabelea, and together with my colleague Hannah Balber, I talk to artists and researchers whose work questions and reimagines our practices in and of the world.
In this episode, Hannah talks to artist Sarah Ciston about artificial intelligence and war. Sarah's project, AI War Cloud Database, shows how closely technologies from our everyday lives are linked to military infrastructure and how wars are increasingly being automated. For this work, Sarah received the European Commission S+T+ARTS Prize, which annually honours innovative projects at the nexus of science, technology and the arts.
Enjoy the conversation.
Hannah Balber: How do you explain to someone that their robot vacuum is connected to war?
Sarah Ciston: This is a great question. Well, I made the project because I was not seeing those connections in the first place, I was not seeing these conversations happening in the same room. I was seeing them happen in parallel, but not together. So I was seeing a lot of headlines about AI and warfare on the one hand, and I was seeing a lot of headlines about the next AI product, and then every week the next thing coming out in this space. And yet I knew from my research and my art practice that these were using the same materials and the same tools and the same processes. It didn't feel like people understood that these are the same conversations. But the fact is that the robots in the vacuum cleaners in our houses are mapping those spaces, and that same technology, designed by the same companies, is then cross-applicable to a task like mapping terrain on a battlefield or looking for bombs in an area where they don't want to send humans.
HB: What exactly is the AI War Cloud Database?
SC: The AI War Cloud Database is a description of different projects that are used in AI warfare, and it's also a visualization of how those projects connect across different nations and different corporations, trying to visualize how power and technology move independent of borders, legislation and regulation. So it's a description of over 50 different AI systems specifically. It's not necessarily AI systems that are automated drones or weapons; it's the systems that are designed to make decisions in automated ways. So it's the tools that are looking for people, collecting data to decide who is a good target, which places should be bombed. It's the systems that are helping battlefield commanders make decisions about where to put troops and things like this. So it's more about the intellectual systems than specifically the weapons. But it's a collection of just over 50 systems from the year 2000 to the present.
HB: What exactly do I see when I look at this map?
SC: So I imagine this can be an adaptable project that might grow with different researchers who might take it and do other things with it. But in the current version, you see a spider web of clusters of nations, corporations and AI systems, each with small icons, and you can hover over the icons and get a pop-up of information about what kind of machine learning they use, the risks involved and what kinds of speculated civil applications those might have. And then each of these has lines that connect from them. So an AI system like the Roomba vacuum might have a line that goes from it to the company that made it, which then pivoted to making robots for seeking bombs, and that would connect to the nations that use it and the companies that designed and deployed it. So you can follow trails from one nation to all of the different tech companies that it affiliates with, from one tech company to all of the different items that they've developed. And you might also see some products that you use, things you'd find in your own phone, like the Llama chatbot, and then you could follow those a couple of steps away and see devices that are used in ways that maybe you wouldn't agree with.
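To make the structure Ciston describes a bit more concrete, here is a minimal, purely illustrative sketch of how such a map could be represented as a graph of nations, corporations and AI systems, with metadata attached to each node for the hover pop-ups. All names and details below are hypothetical placeholders, not entries from the actual AI War Cloud Database.

```python
# Purely illustrative: a tiny node/edge structure for a map like the one described,
# where AI systems link to the companies that build them and the nations that deploy them.
# All entries are hypothetical placeholders, not data from the AI War Cloud Database.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                                      # "system", "company" or "nation"
    details: dict = field(default_factory=dict)    # what the hover pop-up would show

nodes = {
    "home_robot": Node("Home mapping robot", "system",
                       {"ml_type": "computer vision / SLAM", "risk": "dual use"}),
    "robot_co":   Node("Robotics company", "company"),
    "nation_a":   Node("Nation A", "nation"),
}

# Each edge is one line on the map: (from, to, relationship).
edges = [
    ("home_robot", "robot_co", "designed by"),
    ("robot_co",   "nation_a", "supplies"),
]

def trail(node_id):
    """Follow every edge touching a node, i.e. one hop along a trail on the map."""
    return [(a, b, rel) for a, b, rel in edges if node_id in (a, b)]

print(trail("robot_co"))
```

Following a trail across the map amounts to repeatedly taking such hops from node to node, which is how a household product can end up only a few edges away from a battlefield system.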
HB: What inspired you to create this project?
SC: It's a really personal project for me because I was feeling a lot of despair at the state of the world, and I didn't have another place to put that feeling. But I realized in all of the stories and the news I was seeing that I could take that energy and the knowledge that I had about how machine learning systems worked and I could start researching around this. So it was actually very much coming from a place of how do I make something productive out of something really awful and negative, and how can I kind of channel this horrible feeling that I have about what's going on in the world?
HB: You told me before that oftentimes those connections were not something you were aware of, or whose gravity you understood. What were some of the most uncomfortable connections you found while mapping the database?
SC: To me, the most uncomfortable aspects are reading about how instrumentalized warfare is, not only because of AI but assisted by AI, that it allows for the acceleration and the scaling up of horrific things. And one might think that it makes it so that war could be more precise and maybe avoid casualties. But in fact, it makes it so that so much more damage is done. So the shocking things were reading that the United States, for example, was able to increase the speed at which they were selecting targets to eliminate, from, I believe, 20 minutes down to 20 seconds. So this means that it's a huge number of targets that they're able to acquire, and the amount of time that they're spending considering whether this is somebody to kill is really negligible. So things like that are really upsetting and, I think, highlight how little AI is being considered as a fallible instrument and a tool; responsibility is being deflected onto the system instead of it being a last resort, with the understanding that these are quite new, quite imperfect instruments.
AMC: Sarah Ciston holds a PhD in Media Arts and Practice from the University of Southern California and is currently a research fellow at the Center for Advanced Internet Studies in Bochum. Ciston's artistic practice looks at large-scale datasets and machine learning. Their focus is on how queer, feminist and intersectional perspectives can be brought into these technologies.
Their projects include an interactive AI application and art installation that asks how artificial intelligence can be used to rewrite one's own inner critic, and a chatbot that attempts to explain feminism to misogynists online.
Sarah has also founded the Creative Code Collective, an approachable interdisciplinary community for co-learning programming.
HB: Was there a moment that's changed how you see the connection between technology and power?
SC: Probably so many moments, repeatedly, over and over. The biggest thing I keep learning is that this sort of AI hype is very much an illusion. As much as I struggled to learn to code and to have access to these spaces, every time I pushed a little bit past that intimidation I felt and found a side route into understanding it, those technical barriers turned out to be a smokescreen. It's very much not as complicated, in many ways, as it's been designed to have us believe. And so one of the things that I'm really trying to do through my work is create ways that help explain, in different kinds of language and with different metaphors, what's happening, in a way that doesn't undermine the complexity but is more inviting, to help people realize this is just math. This is not magic.
HB: The AI War Cloud Database catalogs and maps connections across AI systems used to make deadly decisions in warfare and how the same systems are used in commercially popular devices. So how are AI systems changing the nature of the battlefield today?
SC: So that is one major thing: the orders of magnitude by which it scales up damage are horrific. And the other thing that is interesting and really problematic to see is who is actually designing these systems. So instead of your giant aerospace companies, we're seeing a lot of small startups and tech companies. So along with Google and Microsoft and Amazon in this space, which has been in the news a lot, there are these small startup companies who are experimenting with venture capital to design weapons and AI systems. And to me, that can be really scary, because we're looking at this as an incubator, an experimental space, but it's a very high-risk, high-human-impact space. So this is really frightening. And it's also really difficult to regulate as a policy space, because tech notoriously moves fast and breaks things, and it is not happening under the umbrella of regulations or specific nations. It is potentially extrajudicial. And it's changing the shape of how warfare is done and how AI and tech research is done.
HB: Systems like Palantir's Meta Constellation in Ukraine, or Lavender, which the Israel Defense Forces have allegedly used, as first reported by the Israeli online news outlet +972 Magazine and the Guardian: how do they work, and what role do they play in military decision-making on the battlefield?
SC: A system like Lavender is designed to generate targets, in the military parlance. And it does this based on collections of data that have been pulled from other systems, other databases, that it puts together and analyzes. So as a recommendation system, it is looking at finding a match to other, similar kinds of people who have previously been a target. So if you think about something like a movie recommendation system, that's like: if you liked this film, you might like this film. It's saying: if this person was a target, then a person who has a similar kind of template would also be a target. And there are so many ways that this could go wrong, because you have to look at what were the original criteria by which it was trained to decide who is a target, and then were the data that were put into it accurate? And there are so many problematic things about this. But when you think about something like this being used in a space where the stakes are that high, it's completely terrifying.
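As a concrete illustration of the recommendation-system analogy, here is a minimal sketch of generic similarity matching: a new profile is scored against previously flagged examples. The feature vectors are invented for the example, and this illustrates only the general technique; it says nothing about how Lavender actually works, since its inner workings are not public.

```python
# Purely illustrative sketch of similarity-based matching, the generic technique behind
# "if this profile resembles previously flagged profiles, flag it too".
# The feature vectors are invented for the example; this is not the Lavender system.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

flagged = [[0.9, 0.1, 0.7], [0.8, 0.2, 0.6]]   # profiles previously labelled as targets
candidate = [0.85, 0.15, 0.65]                  # new profile to score

score = max(cosine_similarity(candidate, f) for f in flagged)
print(f"similarity to nearest flagged profile: {score:.2f}")

# Everything downstream hinges on how the "flagged" examples were chosen and how
# accurate the input data is -- exactly the failure points described above.
```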
HB: And the other example, Palantir Meta Constellation in Ukraine. What does that do?
SC: Meta Constellation and Palantir are a really interesting case to me, because when the war in Ukraine started, Palantir specifically set up an office in Ukraine because they saw the opportunity for investing in this space and developing tools there. So the Meta Constellation tool is for mapping enemy territory and gathering information on troop positions, using the intelligence and reconnaissance that they have to feed back information to battlefield commanders about where the enemy positions are. So it's synthesizing location data; it's synthesizing all kinds of sensor data that is being brought in from satellites, from heat sensors and things like this. And then it's using AI to analyze that data and identify probable positions. And what's also interesting to me about this is that on their website, they advertise versions of all of their military software as commercial products for things like supply chain management and business management as well. So it's things like monitoring your competitors' activities or making sure your shipments get across the world and things like this.
HB: Many AI systems tend to amplify and accelerate existing human biases. What are the dangers when AI systems help make decisions about who or what to target?
SC: There's all kinds of dangers to take into account from how was the data gathered and what was the original purpose of the data set. Because oftentimes the data sets that are being gathered in order to train AI are coming from things that were intended for completely different purposes in the first place. To how is that data being labelled, what's the goal or the perspective of the group that's labelling that data? So all of it is coming down to: how is it designed by the humans behind it and what are the choices that those humans are making along the way? But the issue is that then because the computational system can move so quickly and do so much, all of that is getting compounded and amplified and sped up.
HB: What can be done? Should AI be banned in warfare? Or is there any way to make it more ethical? What do you think?
SC: I have not yet seen what I would call any ethical applications of AI in warfare. And part of why, I think, is that there is no hypothetical version of a system that would be trustworthy enough, however optimized, to be used in such a high-stakes situation. It would be a very long time, and we would need very, very different systems designed by completely different processes, before we would be at a place where I would say: here's an example of an AI system that would be ethical to use in warfare.
HB: How do you see this relationship between technology, corporate power and warfare evolving in the near future?
SC: Not only in the warfare domain but with all kinds of AI technology, and technology in general, we're seeing a lot of power being concentrated in big tech companies, and also just a lot of energy being put into the question of: where will technology go next? And I think that means we've got regulation chasing behind that. Rather than technologists asking, in a slow, methodical process, what people need and how they can be in service of that, it's very much: what is it possible to make? And the assumption is that we should go ahead with it, instead of a more creative, imaginative kind of thinking that might not even include AI.
HB: Is there anything that we as users can actually do? What responsibilities do we have when using or creating AI tools?
SC: That’s the most important question. I think there's two things we can do. The first would be to try to not be intimidated and be willing to understand a little bit more about how the systems work. So there are resources out there to help understand more and understand that these are not magic systems, that they do rely on our data and they are processing data that already existed. They're not generating from scratch, for example. The other thing that we can do is to be very selective about which tools we work with. And if we don't like the values behind a tool, we can find another tool, we can create another tool or collaborate with someone. Or we can ask our governments and our communities to create other tools.
HB: What would you like people exploring the AI War Cloud Database to take away from it? What do you want to spark in the audience?
SC: I hope that people exploring the database come away with a little bit of a feeling that the tools being used in warfare have a connection to them personally, that they find something in the database that they can trace back to themselves or to someone they know or a tool they use. And I don't want anyone to come away feeling bad, because there's enough in the world to feel bad about. But I hope that it makes them reflect on how they select tools and how they understand the way systems work.
HB: Thank you very much for the conversation.
SC: Thank you so much for having me. I appreciate it.
AMC: That’s it for today. Thank you for tuning in! Today’s episode was brought to you by Radio Ö1 and Ars Electronica. Join us next month for a new episode, and in the meantime, follow us or share the show with someone you think might like it.