MINDWORKS

The Ethics of AI with William Casebeer and Chad Weiss

March 16, 2021 Daniel Serfaty Season 2 Episode 1

From the Golem of Prague, to Frankenstein’s monster, to HAL and the Terminator, literature and film are full of stories of extraordinary artificially intelligent beings that initially promise a brighter future but in the end tragically turn against their human masters. Beyond the Hollywood lore, what are really the issues? Why is it important now, at this juncture, to study the ethics of artificial intelligence? At stake is no less than the future of work, of war, of transportation, of medicine, of manufacturing, in which we are blending two types of intelligences, artificial and human. Join host Daniel Serfaty at the intersection of technology and philosophy as he explores the ethics of artificial intelligence with Dr. William Casebeer, Director of Artificial Intelligence and Machine Learning at Riverside Research Open Innovation Center, and Mr. Chad Weiss, Senior Research Engineer at Aptima, Inc.

Daniel Serfaty: Welcome to the MINDWORKS Podcast. This is your host, Daniel Serfaty. This week, we will explore the far boundaries of technology with a topic that is familiar to all of the audience, because literature and the film industry are full of those stories. Usually those stories turn tragic, from the Golem of Prague to Frankenstein's monster, to HAL in 2001: A Space Odyssey, to the Terminator. There is always this extraordinary artificially intelligent being that at some point turns against the human, and there is hubris and fear, and folks have this image of uncontrolled, and that's an important word, robots or artificial intelligence. There is a lot of positive in all that, but also a lot of warning signs. And we will explore that with my two guests today. We are lucky, we are really fortunate, to have two people who've been thinking about those issues while pursuing very successful careers in technology.

And my first guest is Dr. William Casebeer who is the Director of Artificial Intelligence and Machine Learning at Riverside Research Open Innovation Center. He has decades of experience leading interdisciplinary teams to create solutions to pressing national security problems at Scientific Systems Company, the Innovation Lab at Beyond Conflict, Lockheed Martin Advanced Technology Lab, and at the Defense Advanced Research Projects Agency or DARPA.

My second guest is Mr. Chad Weiss. And Chad is a senior research engineer at Aptima Inc. So full disclosure, he's my colleague and I work with him on a weekly basis. Chad focuses primarily on user experience and interaction design. He comes to us from the Ohio State University where he studied philosophy and industrial and systems engineering, focusing on cognitive systems engineering. Now that's a combination that is going to be very useful for today's discussion.

So Chad and Bill, we're going to go into exploring unknown territories. And I know you probably have more questions than answers, but asking the right questions is also what's important in this field. Both of you are very accomplished engineers, scientists, and technologists, but today we're going to talk about a dimension that is rarely approached when you study engineering. It's basically the notion of ethics, the notion of doing good as well as doing well as we design those systems.

And specifically about systems that are intelligent, that are capable of learning, of initiative. And that opens a whole new domain of inquiry. And our audience is very eager to understand, even beyond the Hollywood lore, what are really the issues. So what we're talking about, generally speaking, is really the future of work, of war, of transportation, of medicine, of manufacturing, in which we are blending basically different kinds of intelligences, artificial and human. And we are at a moment of a perfect intersection between technology and philosophy. Let's call it by its name, ethics. The ancient Greek philosophers studied ethics way before formal principles of engineering and design. So why now, why is it important now, at this juncture, to understand and study the ethics of artificial intelligence? Why now? Bill?

Bill Casebeer: I think there are really three reasons why it's so important now that we look at the ethics of technology development. One is that our technologies have advanced to the point that they are having an outsized effect on the world we live in. So if you think over the span of evolutionary timescales for human beings, we are now transforming ourselves and our planet in a way that has never been seen before in the history of the earth. And so given the outsized effect that our tools and ourselves are having on our world, now more than ever, it's important that we examine the ethical dimensions of technology.

Second is that while we have always used tools, and I think that's the defining hallmark of what it means to be a human, at least in part, that we are really good tool users, we're now reaching a point where our tools can stare back. So the object stares back, as the infamous saying goes. So in the past, I might've been able to use a hammer to drive a nail into the roof, but now, because of advances in artificial intelligence and machine learning, I can actually get some advice from that hammer about how I can better drive the nail in. And that is something that is both qualitatively and quantitatively different about our technologies than ever before.

Third, and finally, given that we are having dramatic impact and that our technologies can talk back, if you will, they're becoming cognitive, there's the possibility of emergent effects. And that's the third reason why I think that we need to think about the ethics of technology development. That is, we may design systems that, because of the way humans and cognitive tools interact, do things that were unintended, that are potentially adverse, or that are potentially helpful in an unexpected way. And that means we can be surprised by our systems, and given their impact and their cognition, that makes it all the more important that we think about those unanticipated consequences of these systems that we're developing. So those are at least three reasons why, but I know Chad probably has some more or can amplify on those.

Chad Weiss: Yeah. So I think it's an interesting question; perhaps the best answer is, if not now, when? But I also don't see this as a new phenomenon. I think that we have a long history of applying ethics to technology development and to engineering specifically. When I was in grad school, I joined an organization called the Order of the Engineer, which I believe some of my grad school mates found a little bit nerdy at the time, but it fit very well with my sort of worldview. And it's basically taking on the obligation as an engineer to operate with integrity and in fair dealing. And this dates back to, I believe, the 1920s, after a bridge collapse in Canada, when it became readily apparent that engineers have an impact on society.

And that as such, we owe a moral responsibility to the lives that we touch. In the case of AI, I think that the raw power of artificial intelligence, or these computational methods, presents some moral hazards that we need to take very seriously. And when we talk about ethics in AI, one thing I've noticed recently is that you have to be very deliberate and clear about what we're talking about. When we say artificial intelligence, if you read between the lines of many conversations, it becomes readily apparent that people are talking about vastly different things. The AI of today, or what you might call narrow AI, is much different from the way that we hypothesize something like an artificial general intelligence that has intelligence closer to what a human has. These are very different ethical areas, I think. And they both deserve significant consideration.

Daniel Serfaty: Thank you for doing a 360 on this notion, because I think the definitions are important, and those categories, Bill, that you mentioned are very relevant. I think what most people worry about today is your third point. Which is: I can design that hammer, and it may give me advice on how to hit a nail, but can the hammer suddenly take initiatives that are not part of my design specification? The notion of emergent, surprising behavior.

I mean, Hollywood made a lot of movies and a lot of money just based on that very phenomenon of suddenly the robot or the AI refusing to comply with what the human thought should be done. Let's start with an example, perhaps. Can you pick one example that you're familiar with, from the military or from medicine, it can be robotic surgery, or from education, or any domain that you are familiar with, and describe how the use of it can represent an ethical dilemma?

I'm not yet talking about the design principles, we're going to get into that, but more of an ethical dilemma, either for the designers who design those systems or for the operators who use those systems. Could you share one example? I know you have tons of them, but pick one for the audience so that we can situate at least the kinds of ethical dilemmas that are represented here. Who wants to start?

Bill Casebeer: I can dive in there, Daniel, and let me point out that Chad and I have a lot of agreement about how the history of technology development has always been shot through with ethical dimensions. And some of my favorite philosophers are the ancient virtue theorists out of Greece, who were even then concerned to think about social and physical technologies and how they impacted the shape of the polis, of the political body.

It's interesting that Chad mentioned the bridge collapse. He might've been referring, correct me if I'm wrong, Chad, to the Tacoma Narrows bridge collapse, where a change in the design of the bridge, eliminating trusses from the design, was what actually caused the aeroelastic flutter that led to the bridge oscillating and eventually collapsing. There's dramatic footage that you can see on YouTube of the collapse of Galloping Gertie.

And so that just highlights that these seemingly mundane engineering decisions we make, such as "I'm going to build a bridge that doesn't have as many trusses," can actually have a direct impact on whether or not the bridge collapses and takes some cars with it. So in a similar fashion, I'll highlight one technology that demonstrates an ethical dilemma, but I do want to note that I don't know that confronting ethical dilemmas is actually the best way to think about the ethics of AI or the ethics of technology. It's a little bit like the saying from, I think it was Justice Holmes, that hard cases make bad law. And so when you lead in with a dilemma, people can immediately kind of throw up their arms and say, "Oh, why are we even talking about the ethics of this? Because there are no clear answers and there's nothing to be done."

When in fact, for the bulk of the decisions we make, there is a relatively straightforward way to design and execute the technology in such a fashion that it accommodates the demands of morality. So let me throw that caveat in there. I don't know that leading with talk of dilemmas is the best way to talk about ethics and AI, just because it immediately gets you into Terminator and Skynet territory, which is only partially helpful.

Having said that, think about something like the use of semi-autonomous or autonomous unmanned aerial vehicles to prosecute a conflict. So in the last 20 years, we've seen incredible developments in technology that allow us to project power around the globe in a matter of minutes to hours, and where we have radically decreased the amount of risk that the men and women who use those systems have to face as they deliver that force.

So on the one hand, that's ethically praiseworthy, because we're putting fewer people at risk as we do what warriors do: try to prevail in conflict. It's also ethically praiseworthy because if those technologies are constructed well, then they may allow us to be yet more discriminate as we prosecute a war. That is, to reliably tell the difference between somebody who's trying to do us harm, and hence is a combatant, and someone who isn't, and is just a person on the battlefield.

And so those are two ethically praiseworthy dimensions of being able to drop a bomb from afar: you put fewer lives at risk, you put fewer warriors at risk, and you potentially become more discriminate, better able to tell the difference between combatants and non-combatants, as morality demands if we are going to be just warriors.

However, the flip side of that is that being far removed from the battlefield has a couple of negative effects. One is that it makes you less sensitive as a human being, potentially, to the damage that you're doing when you wage war. So when you are thousands of miles away from the battlefield, it's a little bit harder for you to see and internalize the suffering that's almost always caused whenever you use force to resolve a conflict. And that can cause a deadening of moral sensibilities, in such a way that some would say we perhaps become more likely to use some of these weapons than we otherwise would if we were allowed to internalize firsthand the harm that can be done to people when you drop fire from above on them.

Secondly, if we delegate too much authority to these systems, and they're made up of autonomous, semi-autonomous, and non-autonomous components, then there's the likelihood that we might miss certain dimensions of decision-making that are spring-loading us to use force when we don't necessarily have to.

So what I mean by that is that there are all kinds of subtle influences on things like the deadly force judgments and decisions that we make as warriors. And let me use a homely example to drive that home. When I was teaching at the Air Force Academy, we have an honor code. The cadets all swear that they will not lie, steal, or cheat, or tolerate amongst the cadet body anyone who does. And you might think that it is a matter of individual judgment to do something that you or I might later regret when it comes to, say, preparing for a test. You might make that fateful decision to cheat on an exam, in a way that ultimately serves no one's interests: neither those who want people who know the subject matter, nor those who want individuals to be people of integrity, who don't cheat or lie.

But it turns out that when you look at the data about what leads cadets, or really any human being, to make a bad decision, a decision they later regret, there are lots of other forces that we need to take into account. And in the case of those students who cheated, oftentimes there were precipitating conditions like a failure to plan, so that they had spent several sleepless nights before the fateful morning when they made a bad decision to cheat on an exam. And so the way to build a system that encourages people to be their best selves was not necessarily to hector or lecture them about the importance of making a decision in the moment about whether or not you're going to cheat on the exam. It is also to kit them out with the skills that they need to be able to plan their time well, so they're not sleepless for several days in a row.

And it also consists in letting them know how the environment might exert influences on them that could cause them to make decisions they would later regret. So we should also be thinking about those kinds of things as we engineer these complicated systems that deal with the use of force at a distance. So I consider that to be a kind of dilemma: technologies that involve autonomous and semi-autonomous components have upsides, because they put fewer warriors at risk and allow us to be more discriminate, but they also may deaden us to the consequences of a use of force. And they might also unintentionally cause us to use force when we would otherwise decide not to, if the system took into account all of the social and psychological dimensions that support that decision.

Daniel Serfaty: Thank you, Bill. I was listening to you very intently, and this is the most sophisticated and clearest explanation of how complex the problem is, from the designer's perspective as well as from the operator's perspective. It is not just an issue of who has control of what; there are many more contextual variables that one has to take into account when even conceiving of those systems. Chad, do you have an example you want to share with us?

Chad Weiss: Yeah. So first of all, Bill, great answer. I would advise anybody who is going to do a podcast with Bill Casebeer not to follow him. The point that you bring up about remote kinetic capabilities is an interesting one. I think Lieutenant Colonel Dave Grossman covers that in his book On Killing, about the history of humans' reluctance to take the lives of other humans. And a key variable in making that a possibility is increasing the distance between the trigger person and the target, if you will. One thing that strikes me in the military context is that what we're talking about today is not new in any way. As we stated, it goes back to ancient Greece. It goes back to Mary Shelley and all of these different cultural acknowledgements of the moral hazards that are presented by our creations.

And the history of technology shows that as much as we like to think that we can control for every eventuality, automation fails. And when automation fails or surprises the user, it fails in ways that are unintuitive. You don't see automation fail along the same lines as humans; it fails in ways that we would never fail. And I think that probably goes vice versa as well.

So something that keeps me up at night is the idea of an AI arms race with military technologies, that there is an incentive to develop increasingly powerful, automated capabilities faster than the adversary. We saw this with the nuclear arms race, and it put the world in quite a bit of peril. And what I am a little bit fearful of is the idea that we are moving towards AI superiority at such a pace that we're failing to really consider the implications and temper our developments in such a way that we're building resilient systems.

Bill Casebeer: Yeah, that's a really critical point, Chad, that we need to be able to engineer systems in such a way that they can recover from the unexpected. From the unexpected behavior of both the system that it's part of and unexpected facts about the environment it's operating in. And that's part of the reason why, in the United States, our doctrine presently, and praiseworthily, requires that a soldier be involved in every use-of-force decision.

Just because we're aware of these unknown unknowns, both in the operation of the system and in the environment it's working in, bringing human judgment in there can really help to tamp down the unintended negative consequences of the use of a piece of technology. And now the flip side of that, of course, and I'd be interested in your thoughts on this, Chad, is that as we use autonomy, and I agree with you that there is almost a ratchet, a type of inexorable increase in the use of autonomy on the battlefield because of its effect, you can act more quickly and perhaps deliver a kinetic solution, if you will, to a conflict quicker than you could otherwise. So for that reason, autonomy is going to increase in its use on the battlefield.

What we might want to consider, given that the object stares back, is how we engineer some of that resilience into the autonomous system itself, even if we're not allowing deadly force judgment and decision-making to take place on the autonomy side. And I think that's one reason why we need to think about the construction of something like an artificial conscience. That is, a moral governor that can help some of the parts of these complex and distributed systems consider and think about the ethical dimensions of the role they play in the system.

And I know a lot of people have a negative reaction to that idea that artificial intelligence could itself reason in the moral domain, and perhaps for good Aristotelian or Platonic reasons, for good reasons that stem from the Greek tradition, in which usually we only think of people as being agents. But it may very well be that as our tools start to stare back, as they become more richly and deeply cognitive, we need to think about how we engineer some of this artificial conscience into the system, the ability to make moral judgments, the ability to act on them, even independently of a human, so that we can give these systems the requisite flexibility they need.

Chad Weiss: Yeah, that's a great point. It strikes me that we've really been discussing this from one side, which is what our ethical responsibilities are when developing and using artificial intelligence. There's also a question of not only what our responsibilities are towards the AI that we're developing, if in fact there are any, but what the way that we think about AI says about the human animal.

Bill Casebeer: Yeah, well, that's a really interesting point. Maybe we're spring-loaded to think that, "Oh, a robot can't have a conscience." I think that would be too bad. I think this requires a more exacting analysis of what it means to have a conscience. So we should probably talk about that, which I think of as being something like the capability to reason over and to act on moral judgments. And of course the lurking presence here is to actually give some content to what we mean by the phrase moral judgment. So what is morality? And that's the million-dollar question, because we've been around that block for a few thousand years now, and I suspect that Daniel and Chad, both of you could probably give some nice thumbnail sketches of what the domain of morality consists in, but I'll give that a go, because that might set us up for more questions and conversations.

So I think of morality or ethics as really consisting of answers to three questions that we might have. We can think that any judgment or action I might take might have positive and negative consequences. So that's one theory of morality: what it means to be ethical or to be moral is to take actions that have the best consequences, all things considered. And that comes from a classic utilitarian tradition that you can find in the writings of folks like John Stuart Mill, probably the most famous proponent of the utilitarian approach to ethics.

On the other hand, folks like Aristotle and Plato were concerned to think not just about consequences, but also about the character of the agent who is taking the action that produces those consequences. So they were very focused on a character-oriented analysis of ethics and morality. And in particular, they thought that people who have good character, so people like Daniel and Chad, are exemplars of human flourishing, that they are well-functioning, well-put-together human beings. And so that's a second set of questions we can ask about the morality of technology or of a system. We can ask what its function is, and whether it is helping people flourish, which is slightly different from the question of what the consequences of enacting the technology are.

And then finally, we can also think about ethics or morality from the perspective of whether we have obligations that we owe to each other, as agents, as people who can make decisions and act on them, that are independent of their consequences, and that are independent of their effect on our flourishing or our character. And those are questions that are generally ones of rights and duties. So maybe I have a right, for instance, not to be treated in certain ways by you, even if it would be good for the world if you treated me in that way, even if it had good consequences.

So that's a third strand or tradition in ethics; that's called the deontic tradition. That's from a Greek word that means the study of the duties that we have towards each other. And you see this in the writings of somebody like Immanuel Kant, who can be difficult to penetrate, but who really is kind of carrying the torch in the Western tradition for thinking about rights, duties, and obligations that we have independent of consequences.

So those three dimensions are dimensions of ethical evaluation: questions about the consequences of our actions, questions about the impact of our actions on our character and on human flourishing, and questions about rights and duties that often revolve around the notion of consent. So I call those things the three Cs: consequence, character, and consent. And if you at least incorporate those three Cs into your questions about the moral dimensions of technology development, you'll get 90% of the way toward uncovering a lot of the ethical territory that people should discuss.

Daniel Serfaty: Thank you, Bill. I'm learning a lot today. I think I should listen to this podcast more often. As an aside, I know that you're a former military officer because you divide everything in threes.

Bill Casebeer: Right.

Daniel Serfaty: That's one of the definitions. Thank you for this clarification, I think it's so important. We've ordered that space a little bit; we understand those dimensions a little better. I've never heard them classified the way you just did, which is very important. I want to take up your notion of an artificial conscience a little later, when we talk about possible approaches and solutions to this enormous, enormous human challenge of the future. I would go back now to challenge you again, Chad. You keep telling us that these are problems that have been with us almost since the dawn of humanity, that the ancient Greek philosophers struggled with these issues. But isn't AI per se different? Different qualitatively, not quantitatively, in the sense that it is perhaps the first technology, or technology suite, or technology category, that is capable of learning from its environment?

Doesn't the learning itself put us now in a totally different category? Because when you learn, you absorb, you model, you do all the things that you guys just mentioned, but you also have the ability to act based upon that learning. So does AI represent a paradigm shift here? You're welcome to push back and tell me it is just on the continuum of developing complex technologies. I want to challenge both of you with that notion that we are really witnessing a paradigm shift here.

Chad Weiss: You know, it's interesting, I would push back on that a bit. Certainly the way that AI learns and absorbs information, modern AIs, is different from traditional software methods. But the ability for a tool to learn from the environment, I don't think, is new. I think that if you look at a hammer that you've used for years, the shape of the handle is going to be in some way informed by the shape of your hand, which is certainly a very different kind of learning, if you're willing to call it learning at all. But ultimately I think that what we're seeing with AI is that it is shaping its form, in a sense, in response to the user, to the environment, and to the information that it's taking in. So I don't think that it's unique in that regard.

Daniel Serfaty: Okay. I think we can agree to disagree a little bit. This podcast, by the way, for our audience, was prompted by a question that Chad asked me several months ago. Members of the audience probably listened to the first and second podcasts, which focused on this artificial intelligence employee, so to speak, called Charlie at Aptima. And there was a moment in which Charlie was fed a bunch of rap music by different artists, thousands of pieces of rap, and then came up with, that's a she, her own rap song that did not just mimic the rap songs or even the rhythms that she had heard before, but came out with a striking originality, almost.

So the question is, okay, what did Charlie learn? And by that, I mean, this goes back to a point that Bill mentioned earlier about this notion of emergent behavior, surprising things. Did Charlie just mimic, taking some kind of algebraic sum of all the music, and come up with the music? Or did she find a very hidden pattern that is opaque to our human eyes, but that she was able to exploit? That's why I believe that AI is changing things: because we don't know exactly what it learns in those deep learning schemes. We think we do, but from time to time we're surprised. Sometimes the surprise is very pleasant and exciting because we have a creative solution, and sometimes it can be terrifying. Do you agree with me or disagree with me, for that matter?

Chad Weiss: I hope you don't mind if I shirk your question a little bit, because you brought up a couple of things in it that make me a little uneasy, not least of all that I think that my rap was objectively better than Charlie's. It had more soul in it. But in all seriousness though, the concept of the artificial intelligence employee is something that gives me pause. It makes me uncomfortable because this is one of those areas that I think we have to take a step back and ask what it reflects in the human animal.

Because if you look at the facts, Charlie is here at Aptima through no will of her own. Charlie is not paid, and Charlie has no recourse to any perceived abuse, if in fact she can perceive abuse. If Charlie starts to behave in a way that we don't necessarily like, or that's not conducive to our ends, we will just reprogram Charlie. So the question that raises in my mind is: what is it in the human that wants to create something that they can see as an equal and still have control over, still have dominion over? Because the characterization that I just laid out of Charlie doesn't sound like an employee to me; it sounds a little bit more like a slave. And I think there's some discomfort around that, at least in my case.

Daniel Serfaty: Very good point, Chad, and that's something that you and I and other folks have been thinking about. Because suddenly we have this, let's call it a being, for lack of a better term, we don't have exactly the vocabulary for it, that is in our midst, that participates in innovation sessions, that writes chapters in books.

And as you said, the anthropomorphization of Charlie is a little disturbing. Not because she's not embodied, or doesn't have a human shape, but because we use a word like employee. She has an email address, but she does not have all the rights, as you said, and all the respect and consideration and social status that other employees have. So, a tool or a teammate, Bill?

Bill Casebeer: These are great questions. And I think that I come down more like Chad on this topic in general. I don't think there's anything new under the sun in the moral and ethical domain, just because we have several thousand years of human experience dealing with a variety of technologies. And so it's hard to come up with something that is entirely new.

Having said that, I think there is a lot of background that we take as a given when we think about the human being, when we think about ourselves. So if I just, from a computational perspective, consider the 10 to the 14th neurons I have in my three-pound universe here atop my spinal cord, and the 10 to the 15th connections between them, and the millions of hours of training, experience, and exemplars I will have seen as I sculpt that complicated network so that it becomes Bill Casebeer, there's a lot of that going on too.

I don't know exactly how Charlie works; she may be a more traditional type of AI. But if Charlie learns, if she has some limited exposure in terms of training exemplars and sets, if she has some ability to reason over those training sets to carry out some functions, then I think Charlie might be more akin to something like a parrot. Parrots are pretty darn intelligent. They have language, they can interact with people. Some parrots have jobs, and we don't accord the parrot full moral agency in the same way that I do a 20-year-old human.

But we do think that a parrot probably has a right not to be abused by a human being or kept without food and water in a cage. And so I don't think it's crazy to think that in the future, even though there's nothing new under the sun, our AIs like Charlie might reach the point where we have to accord them parrot-like status in the domain of moral agency. Which really leads to the question of what makes something worthy of moral respect.

Daniel Serfaty: Yes, the parrot analogy is very good, because I think it reflects more the place where Charlie and its cohort of other AIs, like the modern new generation of AI, are standing. And we need to think about that. We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS, but don't have time to listen to an entire episode? Then we have a solution for you: MINDWORKS Minis, curated segments from the MINDWORKS Podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis, along with full-length episodes, under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

So artificial intelligence systems, whether they are used in medicine, in education, or in defense, are very data-hungry. At the end of the day, they are data processing machines that absorb what we call big data, enormous amounts of past data from that field, find interesting patterns, common patterns among those data, and then use the data to advise, to make decisions, to interact, et cetera.

What are some of the ethical considerations we should have as data scientists, for example, when we feed those massive amounts of data to the systems and let them learn with very few constraints on those data? Do we have examples in which the emergent behavior from using those data for action has led to some questions?

Chad Weiss: That's a great question. And there are a lot of issues here. Some of them are very similar to the issues that we face when we are dealing with research on human subjects, things like whether the humans that you're performing research on benefit directly from the research that you're doing. I've used the phrase moral hazard a few times here, and it's probably good to unpack that. So when I say moral hazard, what I'm referring to is when an entity has an incentive to take on higher risk because it is not the sole holder of that risk; in some sense the risk is outsourced, or something of that nature.

So some specific examples we have are things like image recognition for the purpose of policing, where we know that, because of the data sets that some of these things are trained on, they tend to be much less accurate when looking at someone who is African-American or, in many cases, women. As a result of being trained on a data set of primarily white males, they are much less accurate when you're looking at some of these other groups.

And there are some very serious implications to that. If you are using something like image recognition to charge someone with a crime, and it turns out that your ability to positively identify from image recognition is significantly lower for certain demographics of people, then you have an issue with fairness and equity. I believe it was Amazon that was developing an AI for hiring, and they found that no matter what they did, they were unable to get the system to stop systematically discriminating against women.

And so I think after something like $50 million of investment, they had to pull the plug on it, because they just could not get this AI to stop being chauvinist, more or less. So I think those are examples where the data sets that we use and the black-box nature that you alluded to earlier come into play and present some really sticky ethical areas in this domain.
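The kind of disparity Chad describes can be surfaced with a simple audit of per-group accuracy on a demographically annotated evaluation set. Here is a minimal sketch, assuming such a set exists; the data, group labels, and disparity threshold are hypothetical placeholders, not drawn from any of the systems mentioned above.

```python
# Hypothetical audit of a classifier's accuracy across demographic groups.
# 'records' stands in for a held-out evaluation set; in practice it would be
# a demographically annotated benchmark, not hard-coded tuples.
from collections import defaultdict

records = [
    # (true_label, predicted_label, group) -- toy placeholder data
    ("match", "match", "group_a"),
    ("no_match", "no_match", "group_a"),
    ("match", "no_match", "group_b"),
    ("match", "match", "group_b"),
]

def accuracy_by_group(records):
    correct, total = defaultdict(int), defaultdict(int)
    for true_label, predicted_label, group in records:
        total[group] += 1
        if predicted_label == true_label:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
# A large gap between the best- and worst-served groups is the red flag that
# should block deployment in high-stakes uses such as policing or hiring.
if gap > 0.05:  # threshold chosen arbitrarily for illustration
    print("Accuracy disparity across groups:", scores)
```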

Daniel Serfaty: These are very good examples. Bill, can you add to those law enforcement and personnel management and hiring examples? Do we have other examples where the data itself is biasing the behavior?

Bill Casebeer: I think we do. One of the uses of artificial intelligence and machine learning is to enable prediction, and the ethical dimensions of prediction are profound. So you and Chad have both alluded to the possibility that your training data set may perhaps unintentionally bias your algorithm so that it makes generalizations that it shouldn't be making, stereotypes, classic stereotypes. I know Professor Buolamwini at MIT has done studies about the bias and discrimination present in face recognition algorithms that are used in surveillance and policing.

I think that same kind of use of stereotypes can, for example, lead, as it has with human doctors, to medical advice that doesn't work well for certain underprivileged groups or minorities. So if your medical research and experimentation to prove that a certain intervention or treatment works began mostly with white males, then whether or not it will work for the 25-year-old female hasn't really been answered yet, and we don't want to over-generalize from that training dataset, as our AIs sometimes can do.

Another example that comes to mind for me, like the ones Chad mentioned, is the Tay bot. Tay was an AI chatbot that was released by Microsoft Corporation back in 2016, and its training dataset was the input that it received on its Twitter account. And so people started to intentionally feed it racist, inflammatory, offensive information. It learned a lot of those concepts and stereotypes and started to regurgitate them back in conversation, such that they eventually had to shut it down because of its racist and sexually charged language and innuendo. So that's a risk in policing, in some defense applications, if you're doing security clearances using automated algorithms, if you're determining who is a combatant based on a biased training dataset, in medicine, in job interviews, really anywhere where prediction is important.

The second thing I would point out, in addition to data sets that can cause bias and discrimination, is that people like Nicholas Carr and Virginia Postrel have pointed out that sometimes you get the best outcomes when you take your native neural network and combine it with the outputs of some of these artificial neural networks. And if we over-rely on these AIs, we may underuse or shirk this very nicely trained pattern detector that has probably a lot more training instances in it than any particular AI, and an ability to generalize across a lot more domains than a lot of AI systems. And so Nick Carr makes the point that one other ethical dimension of prediction is that we can over-rely on our AIs at the expense of our native prediction capabilities. Every day, AI is making people easier to use, as the saying goes.
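Bill's point about combining the native neural network with the artificial one can be illustrated with a deliberately simple sketch: keep the human's own estimate in the loop rather than deferring entirely to the model. The weighting scheme, names, and numbers below are hypothetical; real human-machine teaming designs are far richer than a weighted average.

```python
# Hypothetical blending of a human expert's probability estimate with a
# model's probability estimate for the same question.
def blended_estimate(human_prob, model_prob, model_weight=0.5):
    """Weighted average of two probability estimates.

    Setting model_weight to 1.0 would mean relying on the AI alone,
    which is the over-reliance failure mode Carr warns about.
    """
    if not 0.0 <= model_weight <= 1.0:
        raise ValueError("model_weight must be between 0 and 1")
    return model_weight * model_prob + (1.0 - model_weight) * human_prob

# Example: the human expert estimates 0.3, the model estimates 0.8.
print(blended_estimate(human_prob=0.3, model_prob=0.8))  # 0.55
```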

Daniel Serfaty: Yes, well, that's a perfect segue into my next question, which has to do with, as we move towards the future and towards potential solutions to the many very thoughtfully formulated problems that you shared with us today, the major recent development in research: applying the knowledge that we have acquired over many years in the science of teams and organizations to understand the psychology and the performance of multi-person systems, and I use that term deliberately. Because now we use it as guidelines for how to structure this relationship you just described in your last example, Bill, by combining basically human intelligence and AI intelligence into some kind of [inaudible 00:42:11] intelligence that may be better perhaps than the sum of its parts, in which each one checks on the other, in a sense.

And as a result, there is some kind of an [inaudible 00:42:20] match that will produce higher levels of performance, maybe safer levels of performance, maybe more ethical levels of performance. We don't know; all these are questions. So could you comment for a second on both the similarities and differences between classical teams that we know, whether they are sports teams, or command and control teams, or medical teams, and these new, we don't have a new word in the English language, we still call them teams, of humans and artificial intelligences blended together? Similarities and differences: what's the same, what's different, what worries you there?

Chad Weiss: This is another interesting area. A lot of this hinges upon our use of language, and this is the curse of really taking philosophy of language at a young age. There's a question here of what we mean when we say teammate, what we mean even when we say intelligence, because machine intelligence is very different from human intelligence. And I think that if you are unfamiliar with the domain, there may be a tendency to hear artificial intelligence and think that what we're talking about maps directly to what we refer to when we talk about human intelligence. Very different.

Daniel Serfaty: Language is both empowering but also very limiting, Chad. That's true. We don't have the new vocabulary that we need, so we use what we know. That's the story of human language, and then eventually it evolves.

Chad Weiss: Thank you.

Bill Casebeer: Language generates mutual intelligibility and understanding. So if you're interacting with an agent that doesn't have language, mutual intelligibility and understanding are really hard to achieve.

Chad Weiss: Yeah. And then, when we're talking about teammates, when I use the word teammate, it comes packaged with all of these notions. When I consider a teammate, I'm thinking of someone who has a shared goal, who has a stake in the outcomes. If I have a teammate, there's a level of trust that this teammate, one, doesn't want to fail, that this teammate cares about my perception of them and vice versa, and that this teammate is going to share in not only the rewards of our success, but also the consequences of our failures.

So it's hard for me to conceptualize AI as a strictly defined teammate under those considerations, because I'm not confident that AI has the same sort of stake in the outcomes. Often you hear the question of whether it's ethical to unplug an AI without its consent. And I think that it's very different, because what we're doing there is inherently drawing an analogy to depriving a human of life. Turning off an AI is not necessarily the same as a human dying: you can switch it back on, you can copy and duplicate the code that runs the AI. So there's a really interesting comparison between the stakes of a set of potential outcomes for a human and for an AI.

Daniel Serfaty: I appreciate the richness of your perspective on this notion, Bill, especially the ethical dimension of it, but I am very optimistic, because of those very questions that we're asking right now when we pair a radiologist, for example, with an AI machine that has read millions and millions of MRI pictures and can actually combine that intelligence with that of the expert to reach new levels of expertise. As we think through this problem as engineers, as designers, it makes us understand the human dimension even more deeply. What you reflected on right now, Chad, about what it means to be a member of a team and what a teammate means to you, that thinking has been forced on us because we are designing artificial intelligence systems and we don't know what kind of social intelligence to embed in them. So my point is that there is a beautiful kind of going back to really understanding what makes us humans special, unique. What do you think about that?

Bill Casebeer: That's really intriguing, Daniel. I mean, when I think about the similarities and differences between AIs and people on teams, some similarities that we share with our artificial creations are that we oftentimes reason the same way. So I use some of the neural networks I have in my brain to reason about certain topics in the same way that a neural network I construct in software or in hardware reasons. So I can actually duplicate things like heuristics and biases that we see in how people make judgments in silico, if you will. So at least in some cases we do reason in the same way, because we're using the same computational principles to reason.

Secondly, another similarity is that in some cases we reason in a symbolic fashion, and in some cases we reason in a non-symbolic fashion. That is, in some cases we are using language and we're representing the world and intervening on it. And in others, we're using these networks that are designed to help us do biological things, like move our bodies around or react in a certain way emotionally to an event. And those may be non-symbolic; those might be more basic in computational terms, if you will.

And I think we actually see that in our silicon partners too, depending on how they're constructed. So those are a couple of similarities, but there are some radical differences, as you were just picking up on, Daniel, I think. One is that there is a huge general-purpose AI context that is missing. You and Chad are both these wonderful and lively people with these fascinating brains and minds. You've had decades of experience and thousands of training examples and hundreds of practical problems to confront every day. That's all missing, generally, when I engage with any particular artificial intelligence or cognitive tool; it's missing all of that background that we take for granted in human interaction.

And secondly, there's a lot of biology that's just missing here. For us as human beings, our bodies shape our minds and vice versa, such that even right now, even though we're communicating via Zoom, we're using gestures and posture and eye gaze to help make guesses about what the other person is thinking, and to seek positive feedback, and to know that we're doing well as a team. And a lot of that is missing for our AI agents. They're not embodied, so they don't have the same survival imperatives that Chad mentioned earlier. And they also are missing those markers that can help us understand when we're making mistakes as a team, markers that for us human beings have evolved over evolutionary timescales and are very helpful for coordinating activity, like being mad or angry when somebody busts a deadline. So these are all supremely important differences between our artificial agents and us humans.

Daniel Serfaty: So taking off on that, are you particularly worried about this notion of, it's a long verb here, but basically anthropomorphizing those artificial intelligences and robots by giving them names, giving them sometimes a body? The Japanese are very good at actually making robots move and blink and smile like humans, for example, or maybe not quite like humans, and that's the issue. And are we worried about giving them gender, like Charlie, or other things like that, because it creates an expectation of behavior that is not met? Tell me a little bit about that before I press you about giving us all the solutions to solve all these problems in five minutes or less. But let's explore that first: anthropomorphizing.

Bill Casebeer: I'll start. It's a risk for sure, because of that background of our biology and our good general-purpose AI chops as people. We take that for granted, and we assume it in the case of these agents. And when we anthropomorphize them, that can lead us to think that we have obligations to them that we actually don't, and that they have capabilities that they don't actually possess. So anthropomorphization can help enable effective team coordination in some cases, but it also presents certain risks if people aren't aware of where the human-like nature of these things stops. And before we think, "Oh, this is something that rebuts Chad and Bill's assumption that there's nothing new under the sun," I would say we actually have a body of law that thinks about non-human agents, our obligations to them, and how we ought to treat them. And that's corporate agency in our legal system.

So we have lots of agents running around now, taking actions that impact all of our lives daily. And we have at least some legal understanding of what obligations we have to them and how we ought to treat them. So IBM, or name your favorite large corporation, isn't composed exclusively of people. It's this interesting agent that's recognized in our law, and that has certain obligations to us, and we have certain obligations to it. Think of Citizens United. All of those things can be used as tools, as we work our way through how we treat corporate entities, to help us maybe figure out how we ought to treat these agents that are both like and unlike us too.

Daniel Serfaty: Thank you. Very good.

Chad Weiss: Yeah. I think I'm of two minds here. On the one hand-

Daniel Serfaty: Something an artificial intelligence will never say.

Chad Weiss: On the one hand, as a developer of technologies, and because of my admittedly sometimes kooky approach to collaborative creativity, I think that there is a sense of value in giving the team a new way to think about the technology that they're developing. I often encourage teams to flip their assumptions on their heads and to change the frame of reference with which they're approaching a problem, because I think this is very valuable for generating novel ideas and remixing old ideas into novel domains.

It's just the key to innovation. On the other hand, I think that as shepherds of emerging and powerful technologies, we have to recognize that we have a much different view and understanding of what's going on under the hood here. And when we are communicating to the general public, or to people who may not have the time or interest to really dive into these esoteric issues that people like Bill and I are sort of driven towards by virtue of our makeup, I think that we have a responsibility to them to help them understand that this is not exactly human, and that it may do some things that you're not particularly clear on.

My car has some automated or artificial intelligence capabilities. It's not Knight Rider or KITT, if you will. But it's one of those things where, as a driver, if you think of artificial intelligence as like human intelligence that can fill in gaps pretty reliably, you're putting yourself in a great deal of danger. As I'm driving to the airport, I know there's one spot right before an overpass where the car sees something in front of it and slams on the brakes. This is very dangerous when you're on the highway. And if you're not thinking of this as something with limited capabilities to recover from errors or misperceptions in the best way possible, you're putting your drivers, your drivers' families, your loved ones at a great deal of risk, as well as other people who have not willingly engaged in taking on the artificial intelligence. There are other drivers on the road, and you're putting their safety at risk as well if you're misrepresenting in any way, whether intentionally or unintentionally, the capabilities and the expectations of an AI.

Daniel Serfaty: It's interesting, guys, listening to these examples from inside your car or in war and combat situations, et cetera; I cannot help but go back to science fiction, because that's really our main frame of reference. Quite often in discussions, even with very serious medical professionals or general officers in the military, they always go back to an example of a scene in a movie, because they want a piece of that, or because that becomes a kind of warning sign, whether it's about autonomous artificial intelligence or some interesting pairing between the human and the artificial intelligence system. Many people cite the Minority Report movie, in which there is that interaction between Tom Cruise, I believe, and the system. Do you have a favorite one, a kind of point of reference from the movies, when you think about these issues? A quick one each; you're only entitled to pick one each.

Bill Casebeer: Well, that's tough. So many great examples, ranging from Isaac Asimov and the I, Robot series of stories on through to probably my favorite, which is HAL 9000 from the movie 2001, the Heuristically programmed ALgorithmic computer. And it's my favorite not only because it was built at the University of Illinois, where my son's finishing his PhD in computer science, but also because it highlights both the promise and the peril of these technologies. The promise that these technologies can help us do things we can't do alone as agents, like get to Jupiter. But the peril also that, if we don't build them with enough transparency, intelligibility, and potentially with a conscience, they might take actions that we otherwise don't understand, like murdering astronauts en route to the planet. So I think of HAL when I think of AI and its promise and peril.

Daniel Serfaty: It's striking, Bill, isn't it, that the movie was made more than 50 years ago, five zero, and the book even earlier than that. It's pretty amazing the foresight these folks had about the danger of the artificial intelligence, HAL in that case, taking things upon itself because it decided that it knew what was best for the collective. That's interesting. Chad, any favorite one?

Chad Weiss: Well, the same one actually, and for similar reasons, I suppose, but having to do with the Ohio State University. It is, I think, attributable to Dave Woods, who's a professor there. Not only because Dave sees 2001 as seminal in its connection to cognitive systems engineering, but also because of my propensity to say, "I'm sorry, Dave, I'm afraid I can't do that." What I really like about this is that it's not Terminator. I have zero fears about an eventuality where we have the Terminator outcome. The reason Terminator works is because it's great for production value. I don't think autonomous armed rebellion is what we need to worry about here. I think it's a little bit more about the imperfection, I guess, with which humans can actually predict the future and foresee all of the potential outcomes.

Daniel Serfaty: So let's go back to that, because I, and members of the audience, would like to know: okay, there are a lot of these dilemmas, these almost-disasters, these predictions that things are going to turn into a nightmare. There was a recent article in the Washington Post entitled "Can Computer Algorithms Learn to Fight Wars Ethically?", both praising the capabilities, as you did earlier, Bill, and also warning about unexpected behaviors.

Well, that makes good storytelling, but what can we do as engineers, as scientists, as designers to ensure that the AI of the future, or the AI being designed now, will behave according to the design envelope we have engineered it for? You brought up that brilliant idea earlier about this notion of an artificial conscience, maybe a metacognition of sorts on top of the artificial intelligence that can regulate itself, maybe independent of it. What else? What can we do, even practically? What guidance do we have for the engineers in our audience to minimize the occurrence of unpleasant surprises?

Bill Casebeer: It's more than a million-dollar question. You've been asking a series of those, Daniel. That's a $10 million question. Like Chad, actually, I'm not worried about Terminators. I'm not worried about Cylons from Battlestar Galactica. I'm more worried about systems that have unintended emergent effects, or miniature HAL 9000s, that is, systems that are designed to reason in one domain that we try to apply in another, and they break as a result.

So in order to prevent that kind of thing from happening, I think there have to be three things. I'm thinking in PowerPoint now, as you mentioned earlier. First, I think better self-knowledge will help us. So it's not necessarily a matter of engineering as such, but rather a matter of engineering for the types of human beings we are. The best way to engineer a hammer that doesn't hit my thumb when I strike a nail is just for me to know that I don't use hammers as well when I'm tired. So maybe I ought to put the hammer down when I'm trying to finish my roof in the middle of the night. So first, better self-knowledge.

Second, better modeling and simulation. So part of validation and verification of the use of technologies is to forecast performance at the ragged edge, if you will. And I think we're only now really getting to the point where we can do that, especially with human-machine teams. And so part of what we're doing in my lab at Riverside is working on virtual testbeds that let us put algorithms into partnership with humans in ecologically valid, or somewhat realistic, environments so we can stress-test those in the context of use. I think that's very important: better modeling and simulation.

Finally, third, I think we do have to be sensitive to how we build capacities into these machine teammates that let them reason in the moral domain. Not necessarily so they can strike off on their own, but more so they can be mutually intelligible with us as teammates. So they can say, "Hey Bill, I know you told me to take action X, but did you really intend to do that? Because if I take action X, I might unintentionally harm 10 innocent non-combatants in the target radius." And I would say, "Oh, thank you. No, I'm task-saturated as a human right here. I didn't have that context, and I appreciate that you surfaced that." That's why I think it's so important that we design into our AI agents some type of artificial conscience, that is, the ability to know what is relevant from a moral perspective, the skill to act on moral judgments, the ability to make the moral judgments themselves, and the ability to communicate with us about what those judgments consist in.

So that framework that I told you about comes from a moral psychologist, a friend who I should acknowledge, Jim Rest, who talks about moral sensitivity, moral judgment, moral motivation, and moral skill, all as being necessary parts of being the kind of creature that can make and act on moral decisions. And so along with Rest and people like Paul and Patricia Churchland, my mentors at the University of California, I think we should think about giving our tools some of those capacities too, so that they can be effective teammates to us human beings.
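One way to picture the artificial conscience Bill describes is as a pre-action gate that surfaces a morally salient consequence and defers to the human partner rather than acting unilaterally. The sketch below loosely mirrors the sensitivity, judgment, motivation, and skill components he cites; every name, threshold, and harm estimate here is hypothetical and purely illustrative, not a description of any fielded system.

```python
# Hypothetical "moral governor" gate in front of a proposed action.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str
    estimated_noncombatant_harm: int  # assumed to come from an upstream model

def conscience_check(action, harm_threshold=0):
    """Moral sensitivity and judgment: flag actions whose estimated harm
    exceeds what the human operator has authorized."""
    if action.estimated_noncombatant_harm > harm_threshold:
        return False, (f"Estimated harm to {action.estimated_noncombatant_harm} "
                       "non-combatants; requesting human confirmation.")
    return True, "No morally salient concerns detected by this check."

def execute_with_governor(action, human_confirms):
    approved, rationale = conscience_check(action)
    if approved:
        return f"Executing {action.name}."
    # Moral motivation and skill: communicate the concern and defer to the human.
    if human_confirms(rationale):
        return f"Human re-authorized; executing {action.name}."
    return f"Withholding {action.name}: {rationale}"

# Example: the human, given the surfaced context, declines to re-authorize.
action = ProposedAction(name="action_x", estimated_noncombatant_harm=10)
print(execute_with_governor(action, human_confirms=lambda rationale: False))
```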

Daniel Serfaty: Fascinating. That's super. Chad, do you want to add your answer to Bill's? Especially since I know how much you care about design, about thoughtful design. You are the evangelist, at least at Aptima, for putting design thinking, or thoughtful design, into everything we do. What guidance do you have for the scientists and the designers and the data scientists and the engineers in our audience, adding to what Bill said, to prevent those surprises, or at least minimize their occurrence?

Chad Weiss: I can't tell you how much I wish I could give a clear, satisfying, and operational answer to that. What I can give you is what I see as one of the biggest challenges here, and I think that is that we need to pay particular attention to the incentive structures. We need to convince the developers of technology, because I think that we often rely on external bodies, like government, to step in and legislate some of the ethical considerations, and certainly in free-market capitalism there is an incentive to operate as well as you can within the confines of the law to maximize your self-interest.

In this arena, government is not going to be there. It's not going to catch up; it moves too slowly, and technology moves too fast. And so we have a unique responsibility that we may not be as accustomed to taking on when we're talking about these types of technologies. We need to find ways, as leaders within organizations, to incentivize some degree of sober thought, I think I've used the phrase with you before, tempering our action with wisdom. And consideration of what happens when something that we produce fails, when it has adverse outcomes. And I don't mean to talk only about adverse outcomes, because a huge part of this discussion should be the positive outcomes for humanity; this is by no means a bleak future. I think that there's a massive amount of potential in artificial intelligence and advanced computing capabilities. But we have to be aware that we bear responsibility here, and we should take that with great seriousness, I guess. I don't even know the word for it, but it's critical.

Bill Casebeer: It's precious. I mean, to foot-stomp that, Chad, that is a beautiful insight and a significant piece of wisdom. If we can rely on our character-development institutions, our faith traditions, our families, so that we push responsibility for moral decision-making down to the individual level, that's going to be the serious check on making sure that we don't inadvertently develop a technology that has negative consequences, so that we can harvest the upside of having good artificial teammates, all the upsides that Chad just mentioned. Such a profound point. I am in debt to you, sir, for bringing us to it.

Chad Weiss: Part of the reason that people like me exist, that you have user experience designers, is because there is a tendency when we're developing things to externalize the faults, the blame. Something you're building doesn't work? Maybe we blame the users: they don't understand it. What is it? PIBKAC, the problem exists between keyboard and customer. This is really dangerous when you are talking about something as powerful as AI. And so, knowing that tendency exists, even with UX being as big a field as it is, I think we need to give this special consideration here.

Daniel Serfaty: I really appreciate both your words of wisdom and the return to basic human values as a way to think about this problem. And I certainly cannot thank you enough for having shared these insights, really new insights that make us think in different directions. Many folks in the corporate environment are thinking about adding certain roles, either a chief AI ethicist or a chief ethics officer, or even having subspecialties within engineering focused on the ethical and societal responsibilities of AI and computing in general; MIT has a new school of computing, for example, in which that particular branch is being emphasized.

I believe, like you, that we need to go back to first principles as innovators, as inventors, as scientists and engineers, and consider ethics the same way we consider mathematics. When we do our stuff, it's part of what we do. It's not a nice-to-have; it's becoming a must-have. So thank you very much, Bill. Thank you, Chad, for your insights and your thoughts and your thoughtful considerations when talking about these important topics of technology and ethics, and AI and ethics.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.