MINDWORKS

Mini: Why should we study teams? (Nancy Cooke and Steve Fiore)

February 03, 2021 Daniel Serfaty

Since the age of AI is upon us, it begs the question: “how will humans and AI work together?” Before we can answer this question, we must understand how teams work in the first place. Join MINDWORKS host, Daniel Serfaty, as he talks to Dr. Nancy Cooke, the Director of Arizona State University’s Center for Human, AI and Robot Teams, and Dr. Stephen Fiore, Director of the Cognitive Sciences Laboratory at University of Central Florida, about the importance of studying teams.


Daniel Serfaty: So I'll follow up with a question for you, Steve. Why is it important to understand and study how teams, groups, and collectives work? Is there something magical there that we need to find out? Is there something that is qualitatively different from just studying individual behavior, individual decision-making?

Stephen Fiore: I definitely think there is something that's meaningfully different but also practically important. And I believe that there's no way we're going to successfully solve the world's problems through reliance on individuals. I believe there is way too much knowledge produced for any lone individual to understand, comprehend, and apply by themselves. So I think that collaborative cognition, that is, the integration of knowledge held by multiple people, is really the only way we're going to address the kinds of problems that the world is facing. So a significant part of my research is this notion of knowledge integration, that is, how do you bring together people who know different parts of a problem and produce solutions from that complementary knowledge. How do they integrate and produce something that they could not have come up with on their own, which is essentially why we bring teams together in the first place. But my interest is specifically in collaborative problem solving.

Daniel Serfaty: Nancy, can you add to that? All these years studying teams, is there something still magical, still mysterious about how teams perform?

Nancy Cooke: In the field we distinguish between teams and groups. Teams are a special kind of group with each member having different roles and responsibilities, being very interdependent, but working toward a common goal. And so when I look at teams, I see them as a social system and looking at them from the systems perspective kind of brings in some of my background in human systems engineering, human systems integration.

Daniel Serfaty: Perhaps for you, I'll start with a basic principle. You already made a distinction, Nancy, between teams, which are a particular structure, and groups of individuals in general. What is a team? Is any group of collaborating individuals defined as a team? Are teams uniquely human structures, or do we find them in other places: in society, in nature, with animals? Tell us a little bit about the ABCs of teams.

Nancy Cooke: I don't think teams are uniquely a human structure. I think one of the key aspects of teams that's important is the interdependence of the individuals on the team. So when we're constructing a team task, we have to keep in mind that we need to have interdependence in order to have a team. Steve and I are on an ISAT Study Group that's looking at teams of humans and animals as an analog for human-AI teaming. And so I've been talking to lots of people in the military working dog program. We've been talking to people in the Marine Mammals Program, looking at how teams of humans and animals can work effectively and trying to take some of that information about how the dogs understand the intent of a human and think about crafting AI in that image.

Daniel Serfaty: That's fascinating. So interdependence is a key component here, and any other society or collective that does not have interdependence, or truly explicit interdependence, is therefore not a team. It's something else.

Nancy Cooke: I think that's right. And there is some controversy over whether a machine can actually be a teammate because a lot of people view teaming as something that's uniquely human. But I think as long as we have interdependence, there's no reason that a machine can't be a teammate. That doesn't mean that you lose control of the machine. You still can have control just like a team leader can have control of the subordinates. So I think definitely human AI robot teaming is a thing.

Daniel Serfaty: Yes, it's also because maybe in the popular literature, especially in American society, there is a lot of value in individuality, but also a lot of value in teams. A lot of the sports culture is about doing something together in ways that perhaps produce an outcome that is superior to any outcome an individual would have produced by herself or himself. Is that right?

Nancy Cooke: It can go either way, right? We always say a team of experts doesn't make an expert team. So you can't just bring people together. There has to be this interdependence and cohesion and synergy. And I use the example of the 2004 Olympic basketball team that was called the Dream Team, made up of very professional, stellar players who came together and ended up losing at the Olympics. And the speculation is that they didn't have the kind of cohesion and interdependence that allowed them to play together. This team of experts was not an expert team. Contrast that with the 1980 hockey team that went to the Olympics made up not of professional hockey players but of college-level hockey players. They were good, but they weren't professional. And by virtue, people think, of very good coaching, they came together and beat the Russians at the Olympics. They were the underdogs. So that's an example of how it's not so much the parts of the team, although there has to be some kind of prerequisite for being on the team, but it's the interaction that's important.

Daniel Serfaty: Steve, from your perspective, again, looking at it maybe from an angle that is slightly different from that of Nancy, in what specific way is a team better than the sum of its parts? What are the internal mechanisms, perhaps taught or innate, or developed or supported maybe by technology, that make a team better than the sum of its parts?

Stephen Fiore: In addition to interdependencies as one of the core features or defining characteristics of teams, we've talked about the task-relevant knowledge and the specialized roles that come out of that task-relevant knowledge. So there's an important degree of complementarity, and once team members know what makes them special, then there are the emergent processes that occur, or hopefully will occur, when you bring these people together. And probably the easiest way to think about this is something referred to as knowledge co-construction, where you bring people together who know different things, and based upon their focusing on a particular problem or particular decision, they co-construct a solution based upon what they individually know. And they piece it together to produce some kind of solution that they would never have been able to come up with on their own.

So that kind of synergy, that kind of synthesis, is what makes teams more interesting. An individual can't necessarily do that unless they spend a lot of time reading multiple literatures, becoming familiar with different domains. And as I said, there's too much knowledge produced. There was actually an article written by Ben Jones, an economist, who referred to it as the burden of knowledge. He specifically talked about the proliferation of scientific and scholarly papers over the last few decades and how there's no way you could be the kind of renaissance man who knows everything about everything. And because of that, we need to team more. And the special properties of teams are such that we hope they can come together with their specialized knowledge, have meaningful interdependencies, communicate appropriately, and then construct solutions or make superior decisions that they couldn't have otherwise.

Daniel Serfaty: But that construction that you talked about requires basically one member of the team to know something about what the other member does. That's part of the interdependence, but also part of that knowledge construction. The quarterback needs to know something about the job of the wide receiver, but not all of it. How do we determine the right amount of knowledge about the others in order to optimize how a team works?

Stephen Fiore: Now you're touching upon the area referred to as team cognition, and what you're essentially describing is part of a transactive memory system. What we know about that is that effective teams know who knows what. That's probably one of the necessary features: you have to know who is an expert in what area. But you added an additional dimension, and that has to do with how much you need to know about another person's position, role, or specialty. And honestly, that's an empirical question. Nancy and her colleagues did a lot of research on that, on cross-training, to see the degree to which you needed to know the knowledge, the roles, and the responsibilities of other team members, and I think it's going to be contextually dependent.

So I might be a better team member if I knew how to code, but I'm not going to learn how to code. I'm going to go to the people on my team who know how and say, "Hey, can you do this?" And they'll say, "What do you need?" And then we'll talk about it. It's about identifying the appropriate level of knowledge. And in the case of scientific teams, I think the bare minimum is understanding the lingo, that is, the specific terminology, and trying to use the terminology appropriately. You can be on different teams and hear people with different areas of expertise use concepts differently than you do, and that's an important characteristic, so you can coordinate that knowledge among members much more effectively.

Daniel Serfaty: Nancy, we're going to use the rule of presidential debates here. When one candidate mentions the other person's name, the other person is entitled to an addition or a rebuttal. Steve mentioned your research here. Can you add to that answer? Because I think it sounds like the core of what makes teams function well.

Nancy Cooke: I think I can best address it by talking about my work with a synthetic teammate. We've done work in a simulated unmanned aerial vehicle ground station, a three-agent task, so you have a pilot, a photographer or sensor operator, and a navigator or mission planner. And we worked with the Air Force Research Lab to put a synthetic teammate in the seat of the pilot. We tested how teams with a synthetic teammate did at this task compared to all-human teams. And it was really interesting that the synthetic teammate was really good at doing its own task. It knew how to fly the vehicle.

Daniel Serfaty: So a synthetic teammate is fundamentally a computer program that has been designed to behave a certain way?

Nancy Cooke: That's correct, yes. It was designed to know how to do its task. But what it didn't do is anticipate the information needs of the other teammates. When humans are on a team, they realize there are others on the team, and that those others probably need something from them and they need something from the others. The synthetic teammate did not anticipate the information needs of the human teammates. And so they would always have to ask for the information they needed instead of having the synthetic teammate give it to them ahead of time. As a result, and this is really interesting, the team coordination suffered. But what happened eventually is that the two humans on the team also stopped anticipating the information needs of others. So everybody became like one for themselves.

And so some people ask me whether it's that the synthetic teammate doesn't really have a theory of mind of the human teammates and therefore doesn't know what they need. And that may be true, but I think it's probably a very simple theory of mind that it needs. What it's missing is: what's going on in this larger task, what are the roles of these other agents on the team, what do they need from me, what can I give them, and when do they need it?

Daniel Serfaty: It's fascinating. And I don't want to extrapolate here and say something that you haven't said, but in a sense it's almost like the synthetic teammate didn't have empathy programmed into it, and therefore didn't make the effort to guess or to anticipate or to predict what its teammates wanted. Is that something we should expect naturally from a human teammate, or is that something that we train?

Nancy Cooke: Well, even human teammates differ in their ability to be a good team player. It could be just this idea of empathy, but people do seem to come to our experiments knowing what it means to be a teammate.