MINDWORKS

Mini: Socially Intelligent AI (Jared Freeman and Adam Fouse)

February 01, 2021 Daniel Serfaty

Sometimes people just don’t get it! But will our AI teammate? Join MINDWORKS host, Daniel Serfaty, in this MINDWORKS Mini as he talks with Dr. Jared Freeman and Dr. Adam Fouse about the importance of mutual understanding between humans and their AI counterparts, which will be crucial to the success of human-AI teams.

Daniel Serfaty: Let's continue on that. I mean, you talked about this project earlier today and you mentioned the term social intelligence. I would like to know: what is it? Are we trying to make AI aware of society? Aware of the microcosm of people and other AI that interact with it? What is it?

Jared Freeman: Even a three-year-old human has social intelligence. This is labeled in the philosophy of science as theory of mind, right? An ability to watch Mom enter the room with a pile of clothes and then open the door to the washing machine so she can put them in, inferring that what she wants to do is put those clothes into the machine. The ASIST program out of DARPA aims to imbue AI with a theory of mind, meaning, a little more specifically, an ability to infer what humans know and believe to be true about the world, to predict what actions they might take, and then to deliver guidance, advice which humans will listen to because it aligns with their knowledge, which they might comply with more readily or be able to critique more easily.

Daniel Serfaty: So in a sense, are you trying to develop empathy in Artificial Intelligence? I mean, is that really what we're trying to do? Basically the ability not only to infer the actions of others, but also to understand why others are taking certain actions?

Jared Freeman: Yes. I think people generally associate empathy with emotion. And certainly AI that can appreciate the emotional state of its human teammates will get farther than AI that doesn't. But here we need to expand the meaning of empathy a bit to say that it also denotes understanding the knowledge that others bring with them, the beliefs about the state of the world that others bring with them, their understanding of what they can and can't do in the world, right? So there's a distinct cognitive component, as well as the affective component.

Daniel Serfaty: I certainly want to explore that toward the end of our interview, really these outer limits of AI: social intelligence, emotional intelligence, creative intelligence, things that we attribute to the uniquely human. And hence my next question to you, Adam, is this: wouldn't it be easier to say, "Okay, let's look at what humans are best at. Let them do that part. Let's look at what machines are best at. Let them do that part, and just worry a little bit about some interface in between." I remember that being called the MABA-MABA approach, "men are best at, machines are best at," as an approach to design. Why isn't that sufficient? Is there something better that we can do by, in a sense, engineering that team?

Adam Fouse: Well, we certainly need to be thinking about more than just [inaudible 00:28:43]. And the theoretical answer to your question is that I think that's a bit reductive, in that just trying to break things up is awfully limiting to what teams of humans and AIs might do in the future. It partitions things in a way that doesn't let the team adapt to new things, but also doesn't really take advantage of what some of the real strengths are. That MABA-MABA type of philosophy is a very task-oriented way of thinking about things: what either side is doing, the people or the machines, is just about accomplishing tasks. And going back to Jared's point about the importance of social intelligence, a lot of the strength of human teams comes from the interaction between the team members, not just, "Well, you've got someone that can do this thing and someone that can do that thing, and they can each do their own thing, and then we get the end result."

But they're going to work together to figure it out. And if we can add AI into that mix of being able to work together to figure it out, then I think there are going to be a lot more opportunities opened up beyond just crunching a bunch of numbers really fast.

Daniel Serfaty: That's interesting. So you trust our ability basically to build those teams the very same way we build work teams, or sports teams for that matter? Not as a collection of individual experts, but maybe a collection of individual experts that are brought together by some kind of secret or hidden sauce? Teamwork aspects, like a particular quarterback working best with a particular wide receiver in football because they work together well and can anticipate each other. Is that your vision of what eventually those human-AI teams are going to be like?

Adam Fouse: Down the road, absolutely. In sports, you can have people that are glue guys that are going to bring the team together. You can imagine that same type of thing happening with the right type of AI that's brought into a team.

Daniel Serfaty: So Jared, one of the big questions that folks are discussing in our field, based on what Adam just told us, is fundamentally this: should we, we humans who are still in charge, kind of in charge of our world, consider AI as a tool, just like a hammer or a computer or an airplane? Or should we consider AI as a teammate, as another species that we have to be a good teammate with?

Jared Freeman: The answer depends on the AI. There will always be good applications of AI in which the AI is a tool, a wrench, a screwdriver for a particular job. But as AI becomes more socially enabled and as humans learn to deal with it, I think that AI will become more and more capable of being a teammate. And this means a few things, right? It means that we might have human-AI teams that can collaboratively compose the best team for a job, right? The leader and the AI pick out three good officers, 16 cadets, and a bunch of machinery that can do a job well. It means that we'll have human-AI teams that can work with each other to orchestrate really complex missions as they're being executed. And it means that we will have AI helping us to look into the future of a mission, to discover where the risks lie, so that we can plan the present to meet the future well.

Daniel Serfaty: Okay. So that's a pretty optimistic view of the future of AI, I think. Adam, tool or teammate?

Adam Fouse: I think that I would give a very similar answer to Jared's. When we were talking about empathy, Jared made the comment that we need to think about how we're defining that and maybe expand it. And I think how we define a teammate is something that we're going to need to grapple with. I think we shouldn't be afraid of taking a look at how we define that and expanding it, or maybe taking some different takes on it that are broader and encompass different ways that AI might fit into a team, ways that go beyond a tool but that maybe don't come with the same presumptions that you would have of a human. It's not that you interact with it in exactly the same ways you interact with a human, as if making an AI teammate means making a virtual human. And so we need to be unafraid of thinking about what we mean when we say "AI teammate."

Daniel Serfaty: Okay. I like that, unafraid. That's good for our audience, who may think, "Okay, are we entering a totally new era where those machines are going to dominate? Where is the center of gravity of the initiative or the decision? We've seen enough science fiction movies to scare us." Okay Adam, so you have been talking about the need not just to design superior AI with the new techniques of deep learning and natural language understanding, and to have experts interact with the AI, but also to look, in a sense, at how to build an expert team of both sides: being aware of each other's capabilities, of each other's perhaps even weaknesses, and adapting to each other in order to form the best team. Jared, are you aware of a domain, or could you share with us an example, where these well-intentioned systems, automation that is well-designed to supplement the humans and humans that are well-trained to operate the system, fail because the two of them are not well-designed together? The human side and the AI side.

Will you share that with our audience? And I want you to extrapolate, because I know that you have been very concerned about that, about the measurement aspect. How do we test that those systems are actually going to behave the way we expect them to behave?

Jared Freeman: Let me draw on the most horrific recent event I can think of, and that's the Boeing 737 Max 8 disasters, multiple plane crashes. There in the Max 8 was a piece of Artificial Intelligence meant to protect the aircraft from, among other things, stalling. And when you look at the news reports from that event, you see that the Max 8 systems read some system data, predicted a stall incorrectly, took control of the flight surfaces, and then effectively pitched the aircraft into the earth. 346 people died, if I recall.

Daniel Serfaty: But is that without telling the pilot that it is actually taking over?

Jared Freeman: Right. Yes, that was part of the problem. And so you can imagine part of the solution. Imagine if the 737 Max 8 was able to infer the pilots' belief that they were in control of the aircraft. The pilots were wrong, the Max 8 had taken control of itself, but that was not the pilots' belief. Imagine if that system could predict that the pilots would apply the manufacturer's procedures to restore that aircraft to stable flight. Even though those procedures would fail in that circumstance, the AI could then guide them away from the wrong actions. But the AI had neither of those abilities, not the ability to infer the pilots' current beliefs, nor the ability to predict what the pilots might do next. And so it was in no position to work as every human teammate should, to guide the teammates toward correct understanding and correct behavior.

Daniel Serfaty: Hence your call earlier today about social intelligence, which is kind of an early form, if you wish. For a human team, it's a pretty sophisticated form of human intelligence, but for AI it is still something that is a little beyond the reach of current systems.

Jared Freeman: There are a couple of very basic metrics that fall out of that story. One is simply: can AI infer the knowledge and beliefs of the human? Experimentally, we can fix that knowledge and those beliefs, and then test whether the AI can accurately infer them. The other: can AI predict human behavior at some level of granularity? Imagine, as we're doing in ASIST, having humans run their little avatars through a search and rescue scenario in a building. Can the AI predict where the human will turn next? Which victim the human will try to save next? We can measure that against what humans actually do. If the AI can make those predictions successfully, then it can guide humans to better actions where that is necessary, and let the human do what they're planning where that's the most efficient, effective course of action.
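To make those two metrics concrete, here is a minimal sketch of how one might score them against logged behavior from a search and rescue run. This is not the ASIST instrumentation itself; the belief labels, action names, and function names below are illustrative assumptions.

```python
# Illustrative sketch only: belief labels, action names, and function names
# are assumptions, not the actual ASIST testbed interfaces.
from typing import Dict, List


def belief_inference_accuracy(inferred: Dict[str, bool],
                              ground_truth: Dict[str, bool]) -> float:
    """Fraction of experimentally fixed beliefs that the AI inferred correctly."""
    if not ground_truth:
        return 0.0
    correct = sum(1 for k, v in ground_truth.items() if inferred.get(k) == v)
    return correct / len(ground_truth)


def action_prediction_accuracy(predicted: List[str],
                               actual: List[str]) -> float:
    """Fraction of steps where the predicted next action matched what the
    human actually did (e.g., which victim they tried to save next)."""
    if not actual:
        return 0.0
    correct = sum(1 for p, a in zip(predicted, actual) if p == a)
    return correct / len(actual)


if __name__ == "__main__":
    # Experimenters fix what the participant knows/believes about the building...
    ground_truth_beliefs = {"door_A_blocked": True, "victim_in_room_3": False}
    inferred_beliefs = {"door_A_blocked": True, "victim_in_room_3": True}

    # ...and log which moves the participant actually made next.
    predicted_actions = ["save_victim_2", "turn_left", "save_victim_5"]
    actual_actions = ["save_victim_2", "turn_left", "save_victim_4"]

    print(f"belief inference accuracy:  "
          f"{belief_inference_accuracy(inferred_beliefs, ground_truth_beliefs):.2f}")
    print(f"action prediction accuracy: "
          f"{action_prediction_accuracy(predicted_actions, actual_actions):.2f}")
```

Under this kind of scoring, an AI that clears some accuracy threshold on both measures would be the one trusted to intervene with guidance, and otherwise it stays quiet and lets the human proceed.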

Daniel Serfaty: Thank you, Jared. That's a pretty insightful, albeit horrifying, example of what happens when that last piece of design, the teamwork aspect of the design of human-AI systems, fails.