Implausipod
Art, Technology, Gaming, and PopCulture
E0028 Black Boxes and AI
How does your technology work? Do you have a deep understanding of the tech, or is it effectively a "black box"? And does this even matter? We do a deep dive into the history of the black box, how it's understood when it comes to science and technology, and what that means for AI-assisted science.
Dr Implausible can be reached at drimplausible at implausipod dot com
Bibliography:
Bijker, W., Hughes, T., & Pinch, T. (Eds.). (1987). The Social Construction of Technological Systems. The MIT Press.
Koskinen, I. (2023). We Have No Satisfactory Social Epistemology of AI-Based Science. Social Epistemology, 0(0), 1–18. https://doi.org/10.1080/02691728.2023.2286253
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Latour, B., & Woolgar, S. (1979). Laboratory Life: The construction of scientific facts. Sage Publications.
Pierce, D. (2024, January 9). The Rabbit R1 is an AI-powered gadget that can use your apps for you. The Verge. https://www.theverge.com/2024/1/9/24030667/rabbit-r1-ai-action-model-price-release-date
rabbit—Keynote. (n.d.). Retrieved February 25, 2024, from https://www.rabbit.tech/keynote
Sutter, P. (2023, October 4). AI is already helping astronomers make incredible discoveries. Here’s how. Space.Com. https://www.space.com/astronomy-research-ai-future
Weitzman, M. L. (1998). Recombinant Growth. The Quarterly Journal of Economics, 113(2), 331–360. https://doi.org/10.1162/003355398555595
Winner, L. (1993). Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology. Science, Technology, & Human Values, 18(3), 362–378.
On January 9th, 2024, rabbit Inc. introduced the R1, a handheld device that would let you get away from using apps on your phone by connecting them all together through the power of AI. The handheld device is aimed at consumers and is about half the size of an iPhone, and, as the CEO claims, it is, quote, "the beginning of a new era in human-machine interaction," end quote.
By using what they call a large action model, or LAM, it's supposed to interpret the user's intention and behavior and allow them to use their apps quicker. It's acceleration in a box. But what exactly does that box do? When you look at a new tool from the outside, it may seem odd to trust all your actions to something when you barely know how it works.
But let me ask you: can you tell me how anything you own works? Your car, your phone, your laptop, your furnace, your fridge, anything at all. What makes it run? I mean, we might have some grade-school ideas from a Richard Scarry book or a past episode of How It's Made, but what makes any of those things that we think we know different from an AI device that nobody's ever seen before?
They're all effectively black boxes. And we're going to explore what that means in this episode of the Implausipod.
Welcome to the Implausipod, a podcast about the intersection of art, technology, and popular culture. I'm your host, Dr. Implausible. And in all this discussion of black boxes, you might have already formed a particular mental image. The most common one is probably that of the airline flight recorder, the device that's embedded in every modern airplane and becomes the subject of a frantic search in case of an accident.
Now, the thing is, they're no longer black, they're rather a bright orange, much like the Rabbit R1 that was demoed. But associating black boxes with the flight recorder isn't that far off, because its origin was tied to that of the airline industry, specifically in World War II, when the massive amount of flights generated a need to find out what was going on with the planes that were flying continual missions across the English Channel.
Following World War II, the use of black boxes expanded as the industry shifted from military to commercial applications. I mean, the military still used them too; it was useful to find out what was going on with the flights. But the fact that they became embedded within commercial aircraft, and were used to test the conditions and find out what happened so engineers could fix things and make flights safer and more reliable overall, meant that their existence and use became widely known. And by using them to figure out the cause of accidents and increase reliability, the industry was able to increase trust, to the point that air travel was less dangerous than the drive to the airport in your car, and few, if any, passengers had many qualms left about the safety of the flight.
And while this is the origin of the black box, it can have a different meaning in other areas, in fields like science or engineering or systems theory: something complex that's judged solely by its inputs and outputs. Now, that could be anything from something as simple as an integrated circuit or a guitar pedal, to something complex like a computer or your car or furnace or any of those devices we talked about before, but it could also be something super complex like an institution or an organization or the human brain or an AI.
I think the best way to describe it is an old New Yorker cartoon that had a couple of scientists in front of a blackboard filled with equations, and in the middle of it says, "Then a miracle occurs." It's a good joke. Everyone thinks it's a Far Side cartoon, but it was actually done by Sidney Harris. The point being that right now, in 2024, it looks like we have that miracle.
It's called AI.
So how did we get to thinking that AI is a miracle product? I mean, aside from using the LLMs and generative art tools, things like DALL-E and Sora, and seeing the results, well, as we've spent the last couple episodes kinda setting up, a lot of this can occur through the mythic representations of it that we often have in pop culture.
And we have lots to choose from; there's been no shortage of representations of AI in media in the first nearly two and a half decades of the 21st century. We can look at movies like Her from 2013, where Joaquin Phoenix's character's virtual assistant becomes a romantic liaison, or at how Tony Stark's supercomputer JARVIS is represented in the first Iron Man film in 2008.
Or, for a longer, more intimate look, the growth and development of Samaritan through the five seasons of the CBS show Person of Interest, from 2011 through 2016. And I'd be remiss if I didn't mention their granddaddy, HAL, from Kubrick's 2001: A Space Odyssey in 1968. But I think we'll have to return to that one a little bit more in the next episode.
The point being that we have lots of representations of AI or artificial intelligences that are not ambulatory machines, but are actually just embedded within a box. And this is why I'm mentioning these examples specifically, because they're more directly relevant to our current AI tools that we have access to.
The way that these ones are presented shapes not only their cultural form but also our expected patterns of use. And that shaping of technology is key: by treating AI as a black box, something that can take almost anything from us and output something magical, we project a lot of our hopes and fears onto what it might actually be capable of accomplishing.
What we're seeing with extended use is that the capabilities might be a little bit more limited than originally anticipated. But every time something new gets shown off, like Sora or the rabbit or what have you, then that expectation grows again, and the fears and hopes and dreams return. So because of these different interpretations, we end up effectively putting another black box around the AI technology itself, which to reiterate is still pretty opaque, but it means our interpretation of it is going to be very contextual.
Our interpretation of the technology is going to be very different based on our particular position or our goals, what we're hoping to do with the technology or what problems we're looking for it to solve. That's something we might call interpretive flexibility, and that leads us into another black box, the black box of the social construction of technology, or SCOT.
So SCOT is one of a cluster of theories or models within the field of science and technology studies that aims at a sociological understanding of technology. Originally presented in 1987 by Wiebe Bijker and Trevor Pinch, it was part of a lot of work being done within the field throughout the 80s, 90s, and early 2000s, when I entered grad school.
So if you studied technology as I did, you'd have to grapple with SCOT and the STS field in general. One of the arguments that Pinch and Bijker were making was that science and technology were both often treated as black boxes within their fields of study. Now, they were drawing on earlier scholarship for this.
One of their key sources was Layton, who in 1977 wrote, quote, What is needed is an understanding of technology from inside, both as a body of knowledge and as a social system. Instead, technology is often treated as a black box whose contents and behavior may be assumed to be common knowledge. End quote. So when the object of study was the field of science, the science itself was irrelevant; it didn't have to be known. It could just be treated as a black box and the theory applied to whatever particular thing was being studied. The same went for people studying innovation, who had all the interest in the world in the inputs to innovation but no particular interest or insight into the technology on its own.
So obviously the studies up to 1987 had a bit of a blind spot in what they were looking at. And Pinch and Bijker are arguing that it's more than just the users and producers: any relevant social group that might be involved with a particular artifact needs to be examined when we're trying to understand what's going on.
Now, their arguments about interpretive flexibility and relevant social groups are just another way of saying "the street finds its own uses for things," the quote from William Gibson. But their main point is that even during the early stages, all these technologies have different groups that are using them in different ways, according to their own needs.
Over time, it kind of becomes rationalized; it's something that they call closure, and the technology becomes, you know, what we all think of it as. We could look at, say, an iPhone, to use one recent example, as being pretty much static now. There's some small, incremental innovations that happen on a regular basis.
But, by and large, the smartphone as it stands is kind of closed. It's just the thing that it is now, and there isn't a lot of innovation happening there anymore. But perhaps I've said too much; we'll get to the iPhone and the details of that at a later date. The thing is that once a technology becomes closed like that, it returns to being a black box.
It is what we thought it is, you know? And so if you ask somebody what a smartphone is and how it works, those are kind of irrelevant questions. A smartphone is what a smartphone is, and it doesn't really matter how the insides work; its product is its output. It's what it's used for. Now, this model of a black box with respect to technology isn't without its critiques.
Six years after its publication, in 1993, the academic Langdon Winner wrote a critique of SCOT and the works of Pinch and Bijker. It was called "Upon Opening the Black Box and Finding it Empty." Now, Langdon Winner is well known for his 1980 article "Do Artifacts Have Politics?", and I think that that text in particular is, like, required reading.
So let's bust that out in a future episode and take a deep dive on it. But in the meantime, the critique that he had with respect to social constructivism is in four main areas. The first one is the consequences. This is from page 368 of his article, where he says the problem is that the social constructivists are so focused on what shapes the technology, what brings it into being, that they don't look at anything that happens afterwards: the consequences.
And we can see that with respect to AI, where there's been a lot of work on the development, but now people are actually going, hey, what are the impacts of this getting introduced at large scale throughout our society? So we can see how our own discourse about technology is actually looking at the impacts, and this is something that was kind of missing from the theoretical point of view back in 1987. Now, I'll argue that there's value in understanding how we came up with a particular technology, how it's formed, so that you can see those signs again when they happen. And one of the challenges whenever you're studying technology is looking at something that's incipient or under development and being able to pick the next big one.
Well, with AI, we're already past that point. We know it's going to have a massive impact. The question is, what are going to be the consequences of that impact? How big of a crater is that meteorite going to leave? Now, for Winner, a second critique is that SCOT looks at all the people that are involved in the production of a technology, but not necessarily at the groups that are excluded from that production.
For AI, we can look at the tech giants and the CEOs, the people doing a lot to promote and roll out this technology, as well as those companies that are adopting it, but we're often not seeing the impacts on those who are going to be directly affected by the large-scale introduction of AI into our economy.
We saw it a little bit with the Hollywood strikes of 2023, but again, those are the high-profile cases and not the vast majority of people that will be impacted by the deployment of a new technology. And this feeds right into Winner's third critique of SCOT: that it focuses on certain social groups but misses the larger impact, or even the dynamics of what's going on.
Technological change may reach much wider across our, you know, civilization, and by ignoring these larger-scale social processes, what Langdon Winner calls the "deeper cultural, intellectual, or economic regions of social choices about technology" remain hidden. They remain obfuscated; they remain part of the black box and closed off.
And this ties directly into Winner's fourth critique as well: that when SCOT is looking at a particular technology, it doesn't necessarily make a claim about what it all means. Now, in some cases that's fine, because it's happening in the moment; the technology is dynamic and it's currently under development, like what we're seeing with AI.
But if you're looking at something historical that's been going on for decades and decades, like the black boxes we mentioned at the beginning, the flight recorders that we started the episode with? That's pretty much a set thing now, and the only question arises when, say, a new accident happens and we have a search for one. But by and large, that's a settled technology. Can't we make an evaluative claim about what it means for us as a society? I mean, there's value in an analysis maintaining some objectivity and distance, but at some point you have to be able to make a claim. Because if you don't, you may just end up providing cover by saying that the construction of a given technology is value-neutral, which is basically what that interpretive flexibility is saying.
Near the end of the paper, in his critique of another scholar by the name of Steve Woolgar, Langdon Winner states, quote, Power holders who have technological megaprojects in mind could well find comfort in a vision like that now offered by the social constructivists. Unlike the inquiries of previous generations of critical social thinkers, social constructivism provides no solid, systematic standpoint or core of moral concerns from which to criticize or oppose any particular patterns of technical development.
End quote. And to be absolutely clear, the current development of AI tools around the globe absolutely constitutes a set of technological megaprojects. We discussed this back in episode 12 when we looked at Nick Bostrom's work on superintelligence. So as this global race to develop AI or AGI is taking place, it would serve us well to have a theory of technology that allows us to provide some critique.
Now, that Steve Woolgar guy that Winner was critiquing had a writing partner back in the seventies, and together they started looking at science from an anthropological perspective in their study of laboratory life. That partner was Bruno Latour. And Bruno Latour was working with another set of theorists who studied technology as a black box, under an approach called actor-network theory.
And that had a couple of key components that might help us out. Now, the other people involved were John Law and Michel Callon, and I think we might have mentioned both of them before. But one of the basic things about actor-network theory is that it looks at the things involved in a given technology symmetrically.
That means it doesn't matter whether it's an artifact, a creature, a set of documents, or a person: they're all actors, and they can be looked at through the actions that they take. Latour calls it a sociology of translation; it's more about the relationships between the various elements within the network than about the attributes of any one given thing.
So it's the study of power relationships between various types of things. It's what's sometimes called a flat ontology, but I know as I'm saying those words out loud I'm probably losing, you know, listeners by the droves here. So we'll just keep it simple and state that a person using a tool is going to have normative expectancy about how it works. Like, they're gonna have some basic assumptions, right? If you grab a hammer, it's gonna have a handle and a head, and depending on its size or its shape or material, that might, you know, determine its use. It might also have some affordances that suggest how it could be used. But generally, that assemblage, that conjunction of the hammer and the user, I don't know, we'll call him Hammer Guy, is going to be different than a guy without a hammer, right? We're going to say, hey, Hammer Guy, put some nails in that board there, put that thing together, rather than, you know, please hammer, don't hurt 'em, or whatever. All right, I might be recording this too late at night, but the point being is that people with tools will have expectations about how they get used, and some of that goes into how those tools are constructed. That can be shaped by the construction of the technology, but it can also be shaped by our relation to that technology.
And that's what we're seeing with AI. As we argued way back in episode 12, AI is an assistive technology. It does allow us to do certain things and extends our reach in certain areas. But here's the problem. Generally, we can see what kind of condition the hammer's in, and we can have a good idea of how it's going to work for us, right? But we can't say that with AI. We can maybe trust the hammer, or the tools that we become accustomed to using through practice and trial and error. But AI is both too new and too opaque; the black box is so dark that we really don't know what's going on. And while we might put in inputs, we can't trust the output.
And that brings us to the last part of our story.
In the previous section, the authors that we were mentioning, Latour and Woolgar, like Winner, Pinch, and Bijker, are key figures not just in the study of technology but also in the philosophy of science. Latour and Woolgar's Laboratory Life, from 1979, really sent shockwaves through the whole study of science and is a foundational text within that field.
And part of that, recognizable even from a cursory glance once you start looking at science from an anthropological point of view, is the unique relationship that scientists have with their instruments. The author Inkeri Koskinen sums up a lot of this in an article from 2023, terming the relationship that scientists have with their tools the "necessary trust view."
Quote: Trust is necessary because collective knowledge production is characterized by relationships of epistemic dependence. Not everything scientists do can be double-checked. Scientific collaborations are in practice possible only if their members accept each other's contributions without such checks.
Not only does a scientist have to rely on the skills of their colleagues, but they must also trust that the colleagues are honest and will not betray them, for instance by intentionally or recklessly breaching the standards of practices accepted in the field, or by plagiarizing them or someone else.
End quote. And we could probably all think of examples where this relationship of trust is breached. But the point is that science, as it normally operates, relies on relative levels of trust between the actors that are involved, in this case scientists and their tools as well. And that's embedded in practice throughout science: the idea of peer review, or of reproducibility or verifiability.
It's part of the whole process. But the challenge is, especially for large projects, you can't know how everything works. So you're dependent, in some way, on the materials or products or tools that you're using having been verified or checked by at least somebody else that you have that trust with. And this trust is the same as a mountain climber might have in their tools, or an airline pilot might have in their instruments.
You know, trust but verify, because your life might depend on it. And that brings us all the way around to the black boxes that we started the discussion with. Now, scientists' lives might not depend on that trust the same way they would for airline pilots and mountain climbers. But, you know, if they're working with dangerous materials, it absolutely does, because, chemicals being what they are, we've all seen some Mythbusters episodes where things go foosh rather rapidly.
But for most scientists, Koskinen notes that this trust in their instruments is really kind of a quasi-trust, in that they have normative expectations about how the tools they use are going to function. And moreover, this quasi-trust is based on rational expectations; it's rationally grounded.
And this brings us back full circle. How does your AI work? Can you trust it? Is that trust rationally grounded? Now, this has been an ongoing issue in the study of science for a while, as computer simulations and related tools have become a bigger and bigger part of the way science is conducted, especially in certain disciplines.
Here, the philosopher Paul Humphreys argues that, quote, computational processes have already become so fast and complex that it is beyond our human cognitive capabilities to understand their details, end quote. Basically, computationally intensive science is more reliant on its tools than ever before, and those tools are what he calls epistemically opaque.
That means it's impossible to know all the elements of the process that go into the knowledge production. So this is becoming a challenge for the way science is conducted, and it goes back well before the release of ChatGPT. Much of the research that Koskinen cites comes from the 2010s: research that's heavily reliant on machine learning, or on, say, automatic image classifiers. Fields like astronomy and biology have been finding challenges in the use of these tools.
Now, some are arguing that even though those tools are opaque, they're black-boxed, they can be relied on, and their use is justified, because we can work on the processes surrounding them. They can be tested, verified, and validated, and thus a chain of reliability can be established. This is something that some authors call computational reliabilism, which is a bit of a mouthful for me to say, but it basically says that the use of the tools is grounded through validation.
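To make that validation idea concrete, here's a minimal Python sketch (all names and numbers are my own illustration, not from any cited author): we never look inside the opaque tool, we only check that its outputs stay within an acceptable accuracy boundary on independently labeled data.

```python
import random

def opaque_model(x):
    # Stand-in for an epistemically opaque tool (say, a trained
    # classifier): we only observe its inputs and outputs.
    return x >= 0.5

def validate(model, labeled_data, threshold=0.95):
    """Accept the black-boxed tool only if its accuracy on held-out,
    independently labeled data meets the field's acceptable boundary."""
    correct = sum(1 for x, y in labeled_data if model(x) == y)
    accuracy = correct / len(labeled_data)
    return accuracy >= threshold, accuracy

# Held-out examples whose true labels we know by independent means.
random.seed(0)
data = [(x, x >= 0.5) for x in (random.random() for _ in range(200))]
ok, acc = validate(opaque_model, data)
print(ok, acc)
```

The point of the sketch is that the chain of reliability is built entirely around the box, never through it: swap in a different model and the same validation procedure still applies.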
Basically, the tool is performing within acceptable boundaries for whatever that field is. And this gets at the idea of thinking of the scientist as not just the person themselves, but also their tools. They're an extended agent, the same as, you know, the dude with the hammer that we discussed earlier. Or Chainsaw Man; you can think about how they're one and the same. One of the challenges there is that when a scientist is familiar with the tool, they might not be checking it constantly, and so, again, it might start pushing out some weird results. So it's hard to reconcile the trust we have in the combined scientist using AI.
They become, effectively, a black box. And this issue is by no means resolved. It's still early days, and it's changing constantly. Weekly, it seems, sometimes. To show what some of the impacts of AI might be, I'll take you to a 1998 paper by Martin Weitzman. Now, this is in economics; it's a paper titled "Recombinant Growth."
And this isn't the last paper in my database that mentions black boxes, but it is one of them. What Weitzman is arguing is that when we're looking at innovation, R&D, or knowledge production, it's often treated as a black box. And if we look at how new ideas are generated, one way is through the combination of various elements that already exist.
If AI tools can take a much larger set of existing knowledge, far more than any one person or even a team of people can bring together at any one point in time, and put those pieces together in new ways, then the ability to come up with new ideas far exceeds anything that exists today. This directly challenges a lot of the current arguments going on about AI and creativity, suggesting that those arguments completely miss the point of what creativity is and how it operates.
Weitzman states that new ideas arise out of existing ideas in some kind of cumulative, interactive process. And we know that there's a lot of stuff out there that we've never tried before, so the field of possibilities is exceedingly vast. The future of AI-assisted science could potentially lead to some fantastic discoveries.
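Weitzman's combinatorial point is easy to make concrete with some back-of-the-envelope arithmetic (illustrative numbers only, not from his paper): the number of possible recombinations of existing ideas grows far faster than the stock of ideas itself.

```python
from math import comb

# Count the distinct 2-idea and 3-idea combinations drawn from a
# stock of n existing ideas.
for n in (10, 100, 1000):
    print(n, comb(n, 2), comb(n, 3))

# Growing the stock of ideas 100x (10 -> 1000) grows the pairs from
# 45 to 499,500 and the triples from 120 to 166,167,000: the space
# of possible recombinations explodes much faster than the stock.
```

That combinatorial explosion is why a tool that can survey and recombine a vastly larger body of existing knowledge matters so much for idea generation.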
But we're going to need to come to terms with how we relate to the black box of scientist plus AI tool. And when it comes to AI, our relationship with our tools has not always been cordial. In our imagination, in everything from The Terminator to The Matrix to Dune, it always seems to come down to violence.
So in our next episode, we're going to look into that, into why it always comes down to a Butlerian Jihad.
Once again, thanks for joining us here on the Implausipod. I'm your host, Dr. Implausible, and the research, editing, mixing, and writing has been by me. If you have any questions or comments, or there's elements you'd like us to go into additional detail on, please feel free to contact the show at drimplausible at implausipod dot com. And if you made it this far, you're awesome. Thank you. A brief request: there's no advertisement, no cost for this show, but it only grows through word of mouth. So if you like this show, share it with a friend, or mention it elsewhere on social media. We'd appreciate that so much. Until next time, it's been fantastic.
Take care, have fun.