The Magic of Teams Part 1: The ABCs of Teams with Nancy Cooke and Stephen Fiore

November 24, 2020 Daniel Serfaty Season 1 Episode 7

Much of our work today gets carried out by teams. Teams of humans, teams of humans and machines, distributed teams, virtual teams—almost all of us operate in teams. But what exactly is a team? What differentiates high performing teams from low performing ones? Can advances in team science help improve team performance? Join MINDWORKS host Daniel Serfaty for the first of a five-part series on “The Magic of Teams.” In this first episode exploring “The ABCs of Teams,” Daniel talks to Dr. Nancy Cooke, the Director of Arizona State University’s Center for Human, AI and Robot Teams, and Dr. Stephen Fiore, Director of the Cognitive Sciences Laboratory at University of Central Florida, leading experts in the field of team science. Nancy and Steve are not only very well versed in the research and science of teams, but they also come from very multidisciplinary backgrounds, ranging from philosophy to psychology to engineering and everything in between. And perhaps this is our first insight into this complex domain of teams—that we basically need many arrows in our quiver to take advantage of the opportunities that working in teams presents to us.


Daniel Serfaty: Welcome to the MINDWORKS Podcast. This is your host, Daniel Serfaty. This week we are starting a series of five podcasts focused on the magic of teams. As we talked about in last week's episode on learning and training, much of our work today gets carried out by teams. Teams of humans, teams of humans and machines, distributed teams, virtual teams. Almost all of us operate in teams. In fact, one could say that teams are the foundational building blocks of any human society. And I couldn't have dreamt of a better way to kick off this series than to talk to my two guests today because they can give us both a retrospective and a prospective view on the science of teams, what we know about teams and to frame what the rest of what this podcast series on teams is going to be about.

Both of them are true thought leaders in the field of team science. It is my hope that what we talk about today will give you the incentive to go on the web, Google them, and learn more about the work that they are doing. So let me introduce them briefly. Dr. Nancy Cooke is a professor of Human Systems Engineering whose research centers on how humans and technology work together in teams. She is the director of Arizona State University's Center for Human, AI and Robot Teams, which, in addition to researching human and non-human interaction, is addressing the potential legal and ethical issues expected to arise as artificial intelligence and robots are assigned increased autonomy. Dr. Cooke is also the past president of the Human Factors and Ergonomics Society and the recent past chair of the Board on Human-Systems Integration at the National Academies of Sciences, Engineering, and Medicine.

My other guest is Dr. Stephen Fiore, who is a professor in the University of Central Florida Cognitive Sciences Program in the Department of Philosophy and the School of Modeling, Simulation and Training, and the director of the Cognitive Sciences Laboratory at UCF. His primary areas of research are the interdisciplinary study of complex collaborative cognition and the understanding of how humans interact socially with each other and with technology. Dr. Fiore is the president-elect of the International Network for the Science of Team Science. He has also authored over 200 scholarly publications in the areas of learning, memory, and problem solving in both individuals and groups.

As you just heard, my two guests today are not only very well versed in the research and science of teams, but they also come from very multidisciplinary backgrounds, ranging from philosophy to psychology to engineering and everything in between. And perhaps this is our first insight into this complex domain of teams that we basically need many arrows in our quiver to take advantage of the opportunities that working in teams present to us. 

Welcome, Dr. Nancy Cooke and Dr. Stephen Fiore. So first a word, if you don't mind, to introduce yourself, but also what made you choose this particular domain, teams, as a field of endeavor. Nancy, would you like to start?

Nancy Cooke: That's a really good question. I started out as a cognitive psychologist studying individual knowledge elicitation: how do we find out what's the right stuff behind the expertise of an individual? And about, I guess it was 25 years ago now, Eduardo Salas, who's a leader in the field, came to me and said, hey, we need to bring some cognitive psychology into the study of teams, given the Vincennes incident, a disaster that involved lots of team decision making under stress. The disaster was that the USS Vincennes by mistake shot down an Iranian Airbus, killing all the passengers on board. Given the Vincennes incident, they realized that a lot of the team issues had to do with decision-making under stress, for instance. And so he got me interested in trying to measure what team cognition is, and I've been pretty much doing that ever since.

Daniel Serfaty: Well, I'm glad you did, because your contributions to the field certainly have been game changing. Steve, how about you? Of all the things you could have chosen to focus on, why focus on collaboration and teams?

Stephen Fiore: For me it started in graduate school. I was doing research on the influence of verbalization on cognition. That is what happens when you speak aloud about various cognitive processes. And I happened to be taking a seminar with Jim Voss on cognition, decision making, problem solving, and we had to choose a paper topic. And one of the notional topics was group problem solving. I was studying individual problem solving at the time. And part of what we were studying is what happens when people verbalize their problem solving processes and how that may interfere with things like the moment of insight.

So I said, "Huh, well, when you're in a group, you're always talking. It might be the case that groups actually interfere and hinder your ability to collaborate and solve problems." So I made that my paper topic, ended up digging into the literature. This was in the early '90s before team cognition started taking off. I was reading a lot of old literature that indirectly talked about cognition in groups and teams, even before it specifically became its own area of inquiry and ended up doing my dissertation on individual versus group problem solving and how working in a group actually interferes and hinders the kind of collaboration process. I've spent the rest of my career trying to fix the problems that can occur when people try to work together. 

Daniel Serfaty: So I'll follow up with a question for you, Steve: why is it important to understand and study how teams, groups, and collectives work? Is there something magical there that we need to find out? Is there something that is qualitatively different from just studying individual behavior, individual decision-making?

Stephen Fiore: I definitely think there is something that's meaningfully different but also practically important. And I believe that there's no way we're going to successfully solve the world's problems through reliance on individuals. I believe there is way too much knowledge produced for any lone individual to understand, comprehend, and apply by themselves. So I think that collaborative cognition, that is, the integration of knowledge held by multiple people, is really the only way we're going to address the kinds of problems that the world is facing. So a significant part of my research is this notion of knowledge integration: how do you bring together people who know different parts of a problem and produce solutions from that complementary knowledge? How do they integrate and produce something that they could not have come up with on their own, which is essentially why we bring teams together in the first place? But my interest is specifically in collaborative problem solving.

Daniel Serfaty: Nancy, can you add to that? All these years studying teams, is there something still magical, still mysterious about how teams perform?

Nancy Cooke: In the field we distinguish between teams and groups. Teams are a special kind of group with each member having different roles and responsibilities, being very interdependent, but working toward a common goal. And so when I look at teams, I see them as a social system and looking at them from the systems perspective kind of brings in some of my background in human systems engineering, human systems integration.

Daniel Serfaty: Talking about that, actually your background, what do you do? Could you describe for our audience, what do you do in your day job? What is it that you do truly? I know you're a professor, but also you're managing all these students and the labs. So what do you do?

Nancy Cooke: A lot of Zoom meetings. I mean, that's how we interact these days with COVID. But I do have a lot of meetings. There are meetings with my individual research teams, with other collaborators, with program managers for the various grants that I'm working on. And we've also pivoted now to collecting data remotely, which has been quite interesting. So how do you collect data from teams and do it remotely? And so that's been a real challenge, but I think it presents some opportunities too.

Daniel Serfaty: What about you, Steve, what do you do? The reason I am asking is because some team researchers look at a blank piece of paper and invent new models, new theories. Others go to the lab and observe how teams are put together and come up with behaviors and performance, and others do other things. How do you approach your work?

Stephen Fiore: Primarily by reading as much as possible. That includes not just what gets published in the literature but I accept way too many reviews and I do that because it forces me to read articles and learn about areas that I wouldn't otherwise. And there's also grant reviews. But a big part of what I do that may be slightly different than what Nancy does is what I call practicing what I preach and preaching what I practice. I try to work with other scientific teams and people who are trying to develop their ideas. And it's kind of a route facilitation around research where I'll just meet with people who know they have an area of interest but really haven't jelled what kind of questions they want to pursue. 

To me, that's really the fun and exciting part of what I do: this kind of knowledge elicitation, not unlike what Nancy described earlier, but in the context of a research group, where you're pulling knowledge from these different people. And I use a lot of cognitive artifacts in these kinds of meetings. So I externalize that, use the whiteboard liberally so that people can see the ideas instead of just trying to remember what was spoken, and help them integrate those ideas and produce something like a notional research framework and notional hypotheses that they could pursue, for example, with a grant or in the development of an experiment.

I think it's important to, like I said, help people who are working on these complex problems, because that's a focal area of research for me, but it's also something I want to do for those other researchers. They're working on very important problems, so I want to make sure that they can wrap their heads around the kind of problem space with which they're dealing.

Daniel Serfaty: That's fascinating because, in a sense, that is what the science of team science is: team scientists collaborating, forming teams, in order to better understand how teams work. Perhaps, Nancy, I'll start with a basic principle. You already made a distinction between teams, which have a particular structure, and groups of individuals in general. What is a team? Is any group of collaborating individuals defined as a team? Are teams uniquely human structures, or do we find them in other places: in society, in nature, with animals? Tell us a little bit about the ABCs of teams.

Nancy Cooke: I don't think teams are uniquely a human structure. I think one of the key aspects of teams that's important is the interdependence of the individuals on the team. So when we're constructing a team task, we have to keep in mind that we need to have interdependence in order to have a team. Steve and I are on an ISAT Study Group that's looking at teams of humans and animals as an analog for human-AI teaming. And so I've been talking to lots of people in the military working dog program. We've been talking to people in the Marine Mammals Program, looking at how teams of humans and animals can work effectively, and trying to take some of that information about how the dogs understand the intent of a human and think about crafting AI in that image.

Daniel Serfaty: That's fascinating. So interdependence is a key component here; any society or collective that does not have interdependence, or truly explicit interdependence, is therefore not a team. It's something else.

Nancy Cooke: I think that's right. And there is some controversy over whether a machine can actually be a teammate because a lot of people view teaming as something that's uniquely human. But I think as long as we have interdependence, there's no reason that a machine can't be a teammate. That doesn't mean that you lose control of the machine. You still can have control just like a team leader can have control of the subordinates. So I think definitely human AI robot teaming is a thing.

Daniel Serfaty: Yes. It's also because, maybe in the popular literature, especially in American society, there is a lot of value placed on individuality, but also a lot of value placed on teams. A lot of the sports culture is about doing something together in ways that perhaps produce an outcome that is superior to any outcome an individual would have produced by herself or himself. Is that right?

Nancy Cooke: It can go either way, right? We always say a team of experts doesn't make an expert team. So you can't just bring people together. There has to be this interdependence and cohesion and synergy. And I use the example of the 2004 Olympic basketball team that was called the Dream Team, made up of very professional, stellar players who came together and ended up losing at the Olympics, and the speculation is that they didn't have the kind of cohesion and interdependence that allowed them to play together. This team of experts was not an expert team. Contrast that to the 1980 hockey team that went to the Olympics, made up not of professional hockey players but of college-level hockey players. They were good, but they weren't professional. And by virtue, people think, of very good coaching, they came together and beat the Russians at the Olympics. They were the underdogs. So that's an example of how it's not so much the parts of the team, although there has to be some kind of prerequisite for being on the team, but the interaction that's important.

Daniel Serfaty: Steve, from your perspective, again, looking at it maybe from an angle that is slightly different from Nancy's, in what specific way is a team better than the sum of its parts? What are the internal mechanisms, perhaps taught or innate, that can be developed or supported, maybe by technology, that make a team better than the sum of its parts?

Stephen Fiore: In addition to interdependence as one of the core features or defining characteristics of teams, we've talked about the task-relevant knowledge and the specialized roles that come out of that task-relevant knowledge. So there's an important degree of complementarity across what team members know. What makes teams special are then the emergent processes that occur, or hopefully will occur, when you bring these people together. And probably the easiest way to think about this is something referred to as knowledge co-construction, where you bring people together who know different things and, based upon their focus on a particular problem or a particular decision, they co-construct a solution based upon what they individually know. And they piece it together to produce some kind of solution that they would never have been able to come up with on their own.

So that kind of synergy, that kind of synthesis, is what makes teams more interesting. An individual can't necessarily do that unless they spend a lot of time reading multiple literatures, becoming familiar with different domains. And as I said, there's too much knowledge produced. There was actually an article written by Ben Jones, an economist, who referred to this as the burden of knowledge. He specifically talked about the proliferation of scientific and scholarly papers over the last few decades and how there's no way one could be the kind of renaissance man who knows everything about everything. And because of that, we need to team all the more, and the special properties of teams are such that we hope team members can come together with their specialized knowledge, have meaningful interdependencies, communicate appropriately, and then construct solutions or make superior decisions that they couldn't have otherwise.

Daniel Serfaty: But that construction you talked about basically requires that one member of the team know something about what another member does. That's part of the interdependence, but also part of that knowledge construction. The quarterback needs to know something about the job of the wide receiver, but not all of it. How do we determine the right amount of knowledge about the others in order to optimize how a team works?

Stephen Fiore: Now you're touching upon the area referred to as team cognition, and what you're essentially describing is part of a transactive memory system. What we know about that is that effective teams know who knows what. That's probably one of the necessary features: you have to know who is an expert in what area. But you added an additional dimension, and that has to do with how much you need to know about another person's position, role, or specialty. And honestly, that's an empirical question. Nancy and her colleagues did a lot of research on that in cross-training, to see the degree to which you needed to know the knowledge and the roles and responsibilities of other team members, and I think it's going to be contextually dependent.

So I might be a better team member if I knew how to code, but I'm not going to learn how to code. I'm going to go to the people on my team who know how and say, "Hey, can you do this?" And they'll say, "What do you need?" And then we'll talk about it. It's about identifying the appropriate level of knowledge. And in the case of scientific teams, I think the bare minimum is understanding the lingo, that is, the specific terminology, and trying to use the terminology appropriately. You can be on different teams and hear people with different areas of expertise use concepts differently than you do, and recognizing that is important, so you can coordinate that knowledge among members much more effectively.

Daniel Serfaty: Nancy, we're going to use the rule of presidential debates here: when one candidate mentions the other person's name, the other person is entitled to an addition or a rebuttal. Steve mentioned your research here. Can you add to that answer? Because I think it sounds like the core of what makes teams function well.

Nancy Cooke: I think I can best address it by talking about my work with a synthetic teammate. We've done work in a simulated unmanned aerial vehicle ground station, which is a three-agent task: you have a pilot, a photographer or sensor operator, and a navigator or mission planner. And we worked with the Air Force Research Lab to put a synthetic teammate in the seat of the pilot. We tested how teams with a synthetic teammate did at this task compared to all-human teams. And it was really interesting: the synthetic teammate was really good at doing its own task. It knew how to fly the vehicle.

Daniel Serfaty: So synthetic teammate is fundamentally a computer program that has been designed to behave a certain way?

Nancy Cooke: That's correct, yes. It was designed to know how to do its task. But what it didn't do is anticipate the information needs of the other teammates. When humans are on a team, they realize there are others on the team who probably need something from them, and that they need something in return. The synthetic teammate did not anticipate the information needs of the human teammates, so they always had to ask for the information they needed instead of having the synthetic teammate give it to them ahead of time. As a result, and this is really interesting, the team coordination suffered. But what happened eventually is that the two humans on the team also stopped anticipating the information needs of others. So everybody became like one for themselves.

And so some people ask me whether it's that the synthetic teammate doesn't really have a theory of mind of the human teammates and therefore doesn't know what they need. And that may be true, but I think it's probably a very simple theory of mind that it needs. And that is it's missing what's going on in this larger task, what are the roles of these other agents on the team, what do they need from me and what can I give them and when do they need it?

Daniel Serfaty: That's fascinating. You are saying, and I don't want to extrapolate here and say something that you haven't said, but in a sense it's almost like the synthetic teammate didn't have empathy programmed into it and therefore didn't make the effort to guess or to anticipate or to predict what its teammates wanted. Is that something that we should expect naturally from a human teammate, or is that something that we train?

Nancy Cooke: Well, even human teammates differ in their ability to be a good team player. It could be just this idea of empathy, but people do seem to come to our experiments knowing what it means to be a teammate.

Daniel Serfaty: So perhaps as a way to go back to the different models of teams in the literature: there are models that come from systems engineering, models that come from organizational psychology, others from cognitive science, even sociology. Steve, there are many models of teams in the literature. I know you're a cognitive scientist and you have your own biases, or preferences, let's call them that. But which one do you prefer? In a sense, which one tends to explain more team phenomena?

Stephen Fiore: The concept of a model is one of these nomadic concepts, meaning it travels across disciplines and means different things to different people. And so with that caveat, I know that the way social scientists use the word model tends to be very different than the way computer scientists or engineers use it. And I'd say that what a lot of the team research produces is primarily frameworks: organizational relationships among concepts that try to say how they're associated and how some processes relate to certain outcomes.

So I'll speak of frameworks, and I think the initial frameworks that were useful for team researchers were something like the IPO, input-process-output, models. From the input standpoint you'd look at things like the composition: who is on the team. You'd look at the task characteristics: what did you need to do, what was the interdependence? Then you looked at the process factors: the communication going on, the backup behavior. And then you would look at the various outcomes that were possible. You could look at changes in knowledge, at real performance outcomes, or at outcomes like whether they were more cohesive or learned something.

So these initial ways of thinking about teamwork were useful because, again, this organizing framework helped researchers parse the many different concepts that they were trying to understand. And then when the field got more sophisticated, they started talking about moderators and mediators. So it became the IPMO, input-process-moderator-mediator-output, where you would look at things like, oh, you can't simply look at communication. You have to look at something like an attitude such as psychological safety because that's going to moderate the amount of communication that happens in a team, which then is going to influence the kind of output.

So again, these are models in the way you would think of them in engineering, where you have not simply descriptions but also predictions that you can quantify and test. I'd say these frameworks are still not models in that sense, and what I've been trying to do is develop a more niche kind of model in collaborative problem solving: the macrocognition in teams model, where we move beyond this kind of input-process-output to take into account different factors of a collaborative problem-solving scenario, where you have individual- and team-level cognition. So there are these individual- and team-level knowledge-building processes, and there are these extended or external cognitive processes.

Those interact in specific ways, and we can generate hypotheses about how they produce different kinds of outcomes, such as the particular kinds of discussions teams may have, or how they'll argue about solutions and try to identify them. And we can generate specific hypotheses that say the more a solution is interrogated, the more a solution is argued about, the better the solution will be and the more likely they'll come up with a good outcome. So the short answer is that right now I'm trying to develop a precise model, but doing so in a narrow way, in the area of collaborative problem solving, which is just one facet of teamwork. And this is an attempt to integrate different factors from cognitive and organizational science into a model that has some kind of practical utility as well.

Daniel Serfaty: I wanted to force you into a trap, to make you choose which one you prefer, but you didn't fall into it; you like all your children equally, I understand that. Nancy, perhaps you can illustrate, add to what Steve is saying. The study of groups and collective behavior has been around for more than a century, but in the past, I would say, 30 years there has been more of a focus on understanding teams the way you defined them earlier. Can you point our audience to one or two, if not breakthroughs, then at least key insights that this research has produced and that were actually implemented in management teams, in law enforcement teams, in military teams, in medical teams? Something that we didn't know, or didn't articulate as well, maybe 30 years ago, but that today we know.

Nancy Cooke: Well, I think we now have a better handle on what goes wrong in teams. When we go out to industry and look at some of their teams, there are certain things that come up again and again, and in my experience they have to do with two things: communication, and usually it's the lack thereof, inadequate communication, or role conflict. Not knowing who's supposed to do what, not knowing what you're supposed to do, and that wreaks a lot of havoc in teams.

But communication is definitely a big one and that's what a lot of my research has focused on. How do we measure team communication and say something about that communication? We look at the communication dynamics and not just the content of what's being said, but who's talking to who. And looking at those patterns over time provides a good way to measure team cognition without getting in the way, without having to stop them and ask them to fill out a survey. But we've really made a lot of progress, I think, in that area and more generally in just how we measure teams and team cognition.

Daniel Serfaty: That's fascinating. Let's talk a little bit about communication, because if you open the popular literature, or even sometimes a management kind of publication, not a scientific publication, there is a thing that people preach: more communication is better. Is more communication always better in teams?

Nancy Cooke: Not always, no. Sometimes communication that is either meaningless or maybe even destructive is not better. So it's definitely not more is better. In fact sometimes you want to be as succinct as possible. In the military, for instance, we don't want to go on and on because you don't have time and you have to convey exactly what the intent is or what the next action is as clearly as possible.

Daniel Serfaty: That's interesting. So selective communication rather than more communication. I'm sure this is a key debate in the team research literature. Steve, do you want to add something to what Nancy just said about communications in particular?

Stephen Fiore: The study of communication is one of these areas that co-evolved with the study of groups over the 20th century. And it's an important area because studying communication helped create other concepts that are really important for the understanding of teams. A specific example is related to your question about whether more is better. Some of the early research looking at expert teams showed that, no, good teams do not communicate as much. They only communicate when needed. And independently we had Judith Orasanu and Jan Cannon-Bowers develop the shared mental model concept based upon that kind of research, and the inference they were drawing is that team members knew what each other knew, and therefore did not have to engage in explicit communication about everything.

They could be brief and they could speak about only something that they knew was relevant to the task at hand, so they didn't have to explain everything all of the time because they knew their teammates knew these kinds of interaction patterns, their roles, and they would look at a situation and identify what was going on and then speak about only the important components of that. So they didn't talk as much as poor teams.

Daniel Serfaty: That's an interesting insight, actually. Nancy, is it linked to the concept of role conflict that you mentioned, in the sense that, in addition to knowing what the other person knows or doesn't know, I also need to know what the other person does and doesn't do? It's not just about knowledge.

Nancy Cooke: Exactly. This is also linked to a very famous article, Entin and Serfaty, that talks about implicit communication, and the idea that when you know more about what everybody else is doing on the team, you don't have to communicate as much. You communicate implicitly.

Daniel Serfaty: So as we explore basically what we know about teams, I want to ask a question. In your experience, you've observed several kinds of teams and you've studied many kinds of teams, certainly in very different work environments, some mission critical, some other just regular work teams. What stresses a team the most? I'm an evil person on the outside and I want to disrupt the work of a team. How can I stress that team the most?

Nancy Cooke: Good question. I think I would interfere with their communications. If the team can't communicate, then all you're left with is the implicit part, and they'd have to be really well-trained in order to keep going.

Daniel Serfaty: Steve, if you are that nefarious agent from the outside who wants to disrupt the working of a team, what would you do?

Stephen Fiore: I would look at some of these frameworks, like the ABCs, the attitudinal, the behavioral, and cognitive features of teamwork. I could mess with the attitudinal component, such as trust, and do something to diminish the trust on that team; therefore they won't communicate as much, or they'll be deceptive in their own communications because they don't trust each other. Another attitudinal component would be psychological safety. I could disrupt that by insulting members of the team so they're not wanting to speak out anymore. We could look at the behaviors. We could increase the performance monitoring that goes on so they'll be worried that they're always watched. That may cause them to choke.

We could influence the leadership, the shared leadership on that team such that one person may be more dominant than another and create this imbalance in coordination. You could interfere with their cognition where you could change the transactive memory system or the shared mental model through something like membership change. So pull a member out of that team and put someone into that team. Those are all the kinds of features we know from studying teams in multiple domains that will produce some kind of process and outcome disruption. 

Daniel Serfaty: I'm impressed you thought about many, many ways to disrupt things, Steve. But in fact, I know that you are not a nefarious agent and you won't do that. But in a sense, working in different organizations, sometimes the organizational climate around a team actually induces all the effects that you just described, or many of them. Teams don't work in a void. They usually are part of a larger organization. To what degree do the variables of that larger organization surrounding the team, the other teams, enterprises, departments, affect the performance of the team itself? Because in the lab quite often we isolate those variables in order to manipulate just the variable that we want. But those stressors, or any other variable that we apply to a team, sometimes come not from a manipulation but just from a climate or a culture or some external event. Nancy, you want to comment on that a little bit?

Nancy Cooke: I think that's exactly right. We do have research out there on multi-team systems, but I think what you're talking about is maybe a little bit different. So it's the climate surrounding the team. I know in one petroleum organization I visited, it turned out that there was some bad teamwork, and part of it boiled down to the climate and what individuals were rewarded for. They were told that safety was most important, but they were really rewarded for productivity. And so this whole climate really created a lot of friction on the team, because when people had safety issues, those issues would just conflict with their goals to be more productive. So yes, it can have a huge effect.

Daniel Serfaty: You mention multi-team systems. I'm sure our audience is not necessarily familiar with that concept. What is that?

Nancy Cooke: Some people call it a team of teams and we do have this a lot. In the military you'll have platoons and squads and everybody is interacting at some level. We're actually developing a testbed to try to study this where we're going to be interacting using our UAV testbed, unmanned aerial vehicle, with a similar testbed at Georgia Tech and one at the Air Force Research Lab also connected to a ground battlefield kind of simulation. And so we're hoping to do research that looks more at these really complex interactions. 

Daniel Serfaty: And I'm sure your systems approach to understanding those complexities helps here because in a sense it's a system of systems. We talked quite a bit about how teams think, how teams work, how teams solve problems together. Steve, what do we know, if anything, Steve and Nancy actually, this question is for both of you, about the way teams learn? How do they learn together, or is it just the sum of the individual learning? Are there particular ways teams acquire skills and knowledge, learn in a sense, that are different from the way individuals learn?

Stephen Fiore: I think I'll reframe it as how can you facilitate learning in teams? And I don't know that it's necessarily different, and one key example that comes to mind is the process of reflection and feedback. Debriefing is really a crucial part of any team, and the military has done debriefing. They do pre-briefing, they do debriefing. Sports teams do this as well, where they'll engage in these kinds of preparatory activities where they'll watch game tapes and prepare for an opponent, but then they'll watch the game tapes after a game and reflect on what went well and what went poorly. And this is an area that I'd say is one of the more robust findings in team research, because there's a lot of evidence to show that debriefing, this reflective activity after some performance, can facilitate learning.

You have to put the right structure in place, meaning it has to be facilitated, it has to be guided or else there's going to be potential group dynamics that interfere with a good discussion. People might be intimidated, people might be afraid to speak clearly and honestly about what went wrong. But when you have this structure in place, you know they can identify, hey, you did that poorly, or I did that poorly. When you have things like psychological safety, when you have trust on that team, you can speak that way to each other. You can communicate in effective, explicit ways where you can identify where the team did poorly and where they did well. So that reflective activity produces the kind of learning that they then take into the next performance episode.

Daniel Serfaty: Reflection and feedback certainly. Thank you. As we are looking, again, in a sense in the rear view mirror before we move to the second part of our podcast which is going to look at the future of teams, and both of you started planting some seeds for that one. I want to ask the question, if you look back at your career in fact or at least the part of your career where you focused on teams and teamwork, was there an aha moment at some point besides the one you had maybe in graduate school, or maybe that one too that you described earlier, an aha moment, suddenly an insight that you gained when you grasped something about teams that you didn't know before?

Nancy Cooke: One aha moment was when I was thinking about team situation awareness and what that means and how we would even measure it. Is it the idea that everybody on the team knows the same stuff about the whole situation, that everybody knows everything? I didn't think that sounded right. But I was in a parking garage. I was at a conference with my graduate student or postdoc at the time in a rental car. And I was backing the rental car up, this is kind of an embarrassing story about my driving skill, and I almost backed it right into a cement pole. But I didn't. And why didn't I? Because my postdoc did his job and pointed out, "Oh, don't back up. You're backing up into a pole." And at that moment I thought, well, this is what team situation awareness is: conveying information on the team to the right person at the right time in order to avoid, in this case, a disaster.

Daniel Serfaty: That's a great example. So you had the perfect mental model of the absent-minded professor at that point. Steve, can you share with us one of those insights or aha moment in your quest to understand teams better? 

Stephen Fiore: Sure. One would be a project I was working on with Nancy and a number of other people in the early 2000s and we were trying to develop the very complicated research proposal of a center, a $25 million center funded by NSF. We had a small grant called a planning grant to develop the concept. And with that grant, we were supposed to be spending time thinking about what would you do with $5 million a year to study? And in our case it was expertise. And in that project, we were trying to coordinate among a number of different scientists with different kinds of specialties. And in my role as a co-PI on that project, I was struggling with how do we do this better? So I said, well, what does the literature say about how you coordinate science teams? And the aha moment was, hell, we've never studied scientific teams.

So anyone who had looked at it was not what we would call an organizational scientist or training researcher. There had been some people in policy who had looked at it, but certainly not the way we team researchers study teams. So that was the aha moment: there was this huge gap in the study of teams, where we had never really looked at scientific teams and how to improve teamwork in collaborations in science. So that kind of changed my career, but I didn't really do anything about it for a few years, and then wrote a paper in 2008 that said this is an important topic. And there were enough people in Washington interested in it who were also pursuing it. So I started collaborating with people at the National Institutes of Health on what we now refer to as the science of team science. We spent a lot of time trying to cultivate this, so that people like you, people like Nancy, people who study teams will recognize this as an important area of inquiry.

Daniel Serfaty: Thank you for sharing that moment with us. This is a perfect segue in fact into what I would like to explore with you in the next discussion, which is basically the future of teams. And I would urge you to talk about either teams that you study, teams in which you perform yourself as a teammate, or teams of teams. With the COVID-19 pandemic forcing enterprises, whether they are corporate or academic enterprises, into distributed and remote work situations, are we witnessing an evolution of the definition of how teams perform and what teamwork is in those situations? Nancy, you want to take that on, and then Steve, I would love your perspective on that.

Nancy Cooke: On the one hand, because we are distributed and don't have to commute to a meeting place, we can have more of these meetings, an almost infinite number of meetings, and that may improve teamwork because there's more communication. On the other hand, there are some things that we know about good collaboration that we're missing. So I think COVID is taxing the teamwork for that reason. And the two things that come to mind are food and serendipity. A lot of good collaboration happens when we share food with one another, when there's a common break room or you go out for pizza or drinks after work. That's when a lot of the collaboration happens and people relax and open up their minds a bit.

But the other thing is serendipity. A lot of good collaboration happens because we run into each other in the hallway or on our way to the restroom or at a particular meeting that we didn't expect to both be at. So we're not doing either one of those things. We're not sharing food and we're not being serendipitous. And people try to use breakout rooms, I think, to get at some of that. But I don't think it's sufficient. So I think maybe we're improving the number of meetings we can have and maybe teamwork because there are so many meetings, but also we're taxing the teamwork. 

Daniel Serfaty: That's very interesting. I would have thought about the serendipity, but the food is certainly an intriguing variable here. Steve, how about you? How do you think this new world in a sense induced by COVID is affecting the way teams work together?

Stephen Fiore: I'd like to think it's calling attention to the need for better technology to facilitate this kind of collaborative distributed work. Virtual teams have been a subarea of research for a number of years now, and there are fairly sophisticated frameworks for looking at them. Part of the problem is that the people who study technology are in computer science, and there's a field of computer-supported cooperative work that overlaps somewhat but not completely with researchers in teams. And because of that disconnect, I think that the people who are building technologies to support distributed work may not be collaborating with the right people, may not be aware of some of the teamwork needs when it comes to this kind of distributed work.

So I think the limitations are becoming much more apparent because we're forced to use some of these technologies. I won't name any particular companies. There's certainly a lot of variability in the effectiveness of these different platforms that we're now using. And some of the bigger names are really bad, surprisingly bad, at developing these collaborative technologies. So my hope is that this is a kind of use-inspired science and engineering where, because of the tremendous increase in collaborative work, they're going to be developing better technologies. And it's also up to the users to make the problems more apparent and to inform the designers about what we don't like about the technologies. And I see some adapting fairly well to this, but others are too rigid in their technology and they're not changing.

Daniel Serfaty: It is fascinating along those lines, or at least I can observe that in my own organization, how people spontaneously have been trying to reconstruct what has been deconstructed or destroyed by the remote work situation: the equivalent of the water cooler, the serendipity. They almost seek to artificially induce or promote the conditions for serendipity. And I'm witnessing that not because it's something that's decided from corporate, from the top, but rather maybe it's a collective subconscious effort to make up for what the technology, as you say Steve, is not providing us. And I think there is research that is screaming to be performed here to see exactly what those shortcuts are, those additional channels of collaboration that people have created around those tools. Steve, you wanted to add?

Stephen Fiore: The serendipity, this has been studied for example in the science of team science and the label for that is referred to as productive collisions where you run into somebody in the hallway, "Hey, what are you working on?" And they share what they're working on and you say, "Oh, that sounds similar to something I'm doing. We should get together." Or just the ambient awareness of what people are doing. So if they're working on a whiteboard somewhere, someone may witness that and say, "Oh, that looks interesting." And go and talk to them about whatever is the model or the data, whatever they're graphing on that whiteboard. 

Those kinds of informal interactions are really critical to any organizational innovation, and I don't know how well we can engage in social engineering to produce that. The only example I can think of is that a lot of us who run scholarly organizations are running virtual conferences now. And for one that we ran in June, we specifically tried to mimic the chats that happen during coffee breaks. We know that's where the real action happens. It's not necessarily during the talks, it's after the talks, in between the sessions. So we set up Zoom rooms that anyone could go to and said, "Hey, if you want to meet up with someone, go check out this Google Sheet and go to this Zoom location and have an informal conversation."

And it turns out, I found out a couple of months later, some company had developed an app for that to try to foster these kinds of informal get togethers at virtual conferences. And as you well know, my favorite part of conferences is hanging out at the bar at the end of the day where you share a drink with friends and you just sit around and talk about what you're doing, what are the interesting things you've learned. We're trying to mimic that. There are these virtual happy hours, but it's really not the same thing. I have no solution to it, but you're right. This is a significant problem that we're going to have to figure out how to overcome.

Daniel Serfaty: I'm glad you added the drink portion of the food hypothesis that Nancy proposed. Nancy, you want to add something?

Nancy Cooke: Yes. I meant to include the drink in the food. The other thing I think that's really difficult and maybe hampers communication is this lack of co-presence. I can't see what's going on around you except directly behind you, and the same with Steve. And so there may be some distraction happening in the background that I can't see that maybe changes what I say. So a lot of communication happens in context, and we're pretty impoverished in the context that we share right now.

Daniel Serfaty: Very insightful. And again, you guys are giving me the softball here because this is a perfect segue into the next area. I would like to explore with you this notion of knowledge, implicit knowledge sometimes, of the context in which teamwork is happening. Can we engineer it? That's really the question. A major recent evolution of the study of teams is to apply our theories, science, and models of teams to the design and evaluation of hybrid teams. We are making a jump that is not just metaphorical but real. We are looking at teams of humans and artificial intelligences, which can be software, which can be robots. And both of you are leaders in thinking about this problem. Could you share your perspective and experience with this area, particularly for our audience who certainly want to hear what you are doing in exploring these human-AI, perhaps futuristic teams, perhaps current teams, but also highlight the similarities and the differences with human-only teams. Who wants to start with that?

Nancy Cooke: I'll go.

Daniel Serfaty: Okay, Nancy. I know that one of the centers that you're managing at Arizona State University is actually called the Center for Human, AI, and Robot Teaming. That's very brave to use that term there. Tell us about it.

Nancy Cooke: Considering that some people don't think that robots and AI can be teammates, it is. But we emphasize that we're not an AI center, we're not a robotics center; we are emphasizing teaming. So we're about the relationships between humans, AI, and robots. And I think one mistake that people make is to try to replicate humans in the AI or robots. So you make robots that have a face on them, or you try to develop the AI so that it has, say, the kind of theory of mind that a human would. And I think that's just counter to this idea of teaming. So in teams, as we were talking about, they're heterogeneous, with different roles and responsibilities, and I think that argues against replication of humans in AI or robots.

That AI should do what it does best, and humans should do what they do best. AI should also do what the humans don't want to do because it's dull, dirty, or dangerous. That's the principle that I've been acting on, and trying to make this team work, I think, is going to be very different from making a team of all humans work, because we're now teaming with a different species.

Daniel Serfaty: Thank you. And I certainly want to explore that further. Steve, you want to add to that?

Stephen Fiore: Sure. One distinction that I have found useful when we think about technology writ large integrated with teams is the difference between teamwork and task work. Task work is the reason you're brought together: you're trying to solve some problem, you're trying to make a decision, you're trying to meet certain objectives and goals. But teamwork is the process you engage in in order to accomplish that task, to meet that objective. So by differentiating between those, you can think about how and what you are designing. Are you designing a technology to support the task work, or are you designing the technology to support the teamwork?

And the argument that my colleagues and I have been making is most AI, most technology has focused on the task work. And we're now moving into this new realm where AI is potentially capable of supporting the actual teamwork. And like Nancy mentioned, that gets us into more human kinds of cognitive processes. Theory of mind is merely a label for a particular kind of social cognition and that particular kind of social cognition is necessary for team members to, for example, anticipate each other and engage in what we would refer to as something like backup behaviors. 

So you need to have enough of a shared mental model that you can say, "Oh, Daniel's in trouble. I know Daniel's supposed to be doing this right now. And because I have an appropriate theory of mind, I can tell he's stressed, but I'm going to step in and help him accomplish his task at this moment." So that's where something like theory of mind is going to be needed. And again, it's just a label to describe what we used to refer to as shared cognition. So it's these more collaborative components of teamwork that are the next generation of technology. 

And again, I just use the term technology. It could be a robot, it could be a decision support system. It doesn't have to be an embodied agent. So it could be a disembodied agent. You all worked on decision support systems, and you were trying to develop intelligent decision support systems back in the '90s. So in that case you were trying to facilitate both task work and teamwork with technology. So the larger point is this is really not that new. We have always been trying to develop technologies to augment individual and collaborative cognition. The only thing that's new are the capabilities of some of these technologies. And it's our job as social scientists to point the technology in the right direction.

Daniel Serfaty: That's very interesting and also adds complexity and texture to the problem. So for our audience, both of you are talking about technology to enhance teamwork and task work. But it's not necessarily, or is it? Are we talking about imagining a team in which one member of the team will be replaced by an artificial member? Or are we talking of a team where artificial intelligence can be that, can be a node in the team if you wish, but can also have other functions to facilitate the team processes. Are we talking about both or are we talking about the former because our audience is going to think, okay, they are talking about replacing a job that is currently being accomplished by an expert or by a human with a machine. Which one of these two paradigms are we discussing here or maybe were discussing both?

Stephen Fiore: It's definitely going to be both and more just referring back to the point Nancy made about we shouldn't be thinking about AI to be just like a human. The point is to develop the technologies that can do more than humans can do. It's just that when a new technology comes along, we tend to think about it as a substitute. But the real goal is to think about how it can augment capabilities, how it can produce something greater than just a team of humans could produce. And to your other point, one of the distinctions we're making is, is the AI a member of the team or is it more like a coach or a facilitator where it's continually in the background monitoring what's going on and intervening when necessary? Or like you said, is it a member of the team where it actually has specific roles and responsibilities?

And as I said, we're really talking about both of these. And I think we will see both of these. In fact, there are certain companies making claims now that they have this ambient AI that can monitor meetings and facilitate meetings. DARPA tried to do something like that a couple of decades ago. So this is recognized as a continuing need, but I think Nancy's point is the critical one: we need to think of AI like we think about teams, with complementary skills, complementary knowledge. AI can do things humans cannot do. Do not look at it as merely a surrogate; look at it as this kind of cognitive, collaborative amplification.

Daniel Serfaty: Nancy, you have been a leader in the field in a sense that your lab has been specifically designed to study that. You start having empirical evidence and publication about results regarding this notion of hybrid teams. Can you tell us a little bit about that and expand on this notion of where is the AI in the team? 

Nancy Cooke: Yeah. I've been spending a lot of time developing testbeds in which to study human, AI, robot teaming. I think they're really important because it's hard to do this out in the real world, first of all, because we don't see a lot of good examples, so testbeds allow us to study these teams in a controlled laboratory. So we rely heavily on testbeds. We set up testbeds to look at tasks like unmanned aerial vehicle ground control, cybersecurity, and urban search and rescue. We also make use of the Wizard of Oz technique a lot. And for those who don't know what that is, it's based on The Wizard of Oz, one of my favorite all-time movies, where Dorothy unmasks the wizard and it's really just a guy who's playing a wizard.

And so in our experiments we will have a human experimenter play the role of the AI, or in some cases even a robot, a physical robot. That way we can have better control over what the AI and the robot are going to do, how they interact, how they make mistakes, in order to get a leg up on the design of AI and robots. Without using the Wizard of Oz technique, you'd have to wait until the AI is developed and then go test it and tell the developer, that didn't work, do it again. And so this way we can get the human factors information to the AI developers just in time, at least, or maybe before it's time.

Daniel Serfaty: I had a wonderful ongoing debate, and maybe that will be the object of a future podcast on MINDWORKS, with Professor Ben Shneiderman, whom both of you know, and who recently wrote an article called Human-Centered AI. It's a concept that is not technology centered but rather looks at the human first and then looks to augment the human. And he and I have had several discussions at different conferences and even in private. I have the honor to count him as a member of the Scientific Advisory Board where I work.

We had this debate about why AI is exceptional. I argue that we should talk about multi-species systems, human-AI teaming, and he says AI is a tool. So if AI is just a tool in the hands of the humans, either the designers or the teammates, what is exceptional here? Is there some kind of a unique exceptionalism with respect to past human-machine interaction? Is AI exceptional in the sense that there is something unique about pairing humans and AI that we need to pay particular attention to, or is AI just a very sophisticated, capable machine that we fit into a typical human-machine design? Steve, I want the philosopher in you to come out.

Stephen Fiore: I do think that it is something exceptional, and I take seriously this idea that Nancy has brought up about it being alien. I think what we're seeing is something that's thinking in ways that we can't comprehend, and I'll give you a specific example. I first saw this when I was reading an article about AlphaGo. And what was intriguing about AlphaGo is that it was not simply a human versus an AI; they also had what we refer to as centaur teams, where a human paired with an AI played another human paired with an AI. And when they had experts review the games, that is, masters at the game of Go, they literally referred to it as kind of alien. They couldn't comprehend what the decision-making process was, yet they recognized that it was superior, and it had something to do with the capability of AI to think in multiple dimensions that we are not able to.

Another area where I've seen this occur is in cybersecurity, where part of the problem is that humans can't really comprehend the physics of networks, the speed with which things happen, and the rapid distribution of information across these multiple networks. And that's where I think AI has to come in, where AI, like Nancy said, can do things we can't do, and we shouldn't lose sight of that fact. It's not artificial intelligence, because that means it is mimicking our intelligence. This is why we're calling it alien intelligence. It's foreign to us.

Daniel Serfaty: Nancy?

Nancy Cooke: I'd like to add to that and talk a little bit about autonomy. For the Starbus study that I mentioned earlier, I've been talking to a lot of people who work with military working dogs. And I had an interesting conversation with one person yesterday who said, and several people have actually said this, that really the problem, the weakness on the human-dog team, is at the human end of the leash. Because what the human wants to do is take the dog around and tell the dog where to go sniff for this target. Sniff here, sniff here, sniff here. What the dog does best is to run way ahead. They can smell their target from very far away; to be off the leash, to run ahead and find the target very quickly.

The dog should have more autonomy, in other words, but people aren't comfortable with giving the dog that kind of autonomy. They want to be able to control the dog. And I see AI in kind of the same light, in that people aren't very comfortable calling AI a team member or letting AI do what AI does best. I think it can be a team member by definition: if it's working on team tasks together with humans, then it's a team member. It's a tool that can also be a team member.

Daniel Serfaty: That's a beautiful example. Thank you for sharing that. And certainly the analogy of human-animal teams, this notion of multiple species collaborating, is not just multiple humans with different expertise. We're in another dimension here, and that's fascinating, I think, for us as researchers as well as for the field. In addition to the gaming example you just gave with AlphaGo, Steve, are we aware of some domains in which this collaboration between humans and AI has already been implemented? Could you share with our audience whether these are already part of the way a particular domain works today?

Stephen Fiore: One current example is in software development, like in the GitHub repositories, where you have communities collaborating on these repositories to produce code and reuse code to come up with new applications. And because these are very smart coders, what they'll do is recognize how to automate certain elements of these tasks, and they'll create bots that will do things that they don't want to do anymore. So there's been kind of this organic emergence of artificial intelligence, these kinds of bots that are supporting the software development. So this is an interesting area of research where you see people paying more attention to it from the teamwork standpoint. And the question is, how do you study the degree to which the bots in these collaborations are altering what's going on in the teams?

And we're working on a paper right now where we studied some GitHub repositories and compared repositories that had bots with those that did not have bots, to at least try to understand what was changing. And we've seen changes in productivity for the humans, where the bots are facilitating the productivity; that is, the humans are able to get more work done. One way to think about this is that they've offloaded some of the work to the bots. But we're also seeing some complicated changes in efficiency. This is one of the challenges with this kind of research: really understanding why there are changes in efficiency.

And in the case of software development, it's how quickly they respond to issues. So when a bug or a flaw or a change is requested in the software, an issue is created, and then the teams have to respond to that issue. And we're seeing that the teams with bots tend to take longer. And we're not quite sure why they're taking longer. One concern is it may just be that the bots are helping them identify more issues than human-only teams are able to identify. So this is the kind of field research where we don't have complete control, and without complete control we're not quite sure what's going on there. But I think this is a very important example of where bots are being organically created, and we're already seeing a change to the way work is getting done.

Daniel Serfaty: Interesting. So software writing software basically in collaboration with humans. Nancy, do you have other examples you share with your students where those teams are already starting to find their way into the marketplace? 

Nancy Cooke: Yeah. I think this whole idea is pervasive across many different domains. So you see it in transportation, with the self-driving cars. We see it in medicine, robot-assisted surgery, in manufacturing. Looking at the Amazon distribution center, you would see lots of humans running around with robots and tablets with AI on them. You have it in space exploration and certainly in the defense industry. So I think there's no shortage of examples. And you could argue about how intelligent the AI is and how autonomous it is, but certainly we're headed in that direction or we're already there.

Daniel Serfaty: Well, that's certainly a future we can all aspire to. But before I ask you my last question about your prediction, for our children maybe if not for us: all this introduction of these intelligent, initiative-taking, sometimes maybe emergent behaviors that we may or may not be able to control. Are there ethical considerations emerging from this teaming of human intelligence and AI intelligence that we should consider seriously, even as scientists and engineers? What should we worry about in terms of ethics, if anything?

Nancy Cooke: I agree with Ben Shneiderman that we always want to maintain control. And I think especially in some situations like when you're deploying weapons, you want the human to be in control. Maybe there's other situations like riding on an elevator where you don't need that much control. So I think control is important. And part of my answer too is that we've been developing technology for centuries. And every time we develop something, somebody can use it for good or evil. And I guess we just have to try to be ahead of the curve and try to make it so that that is harder to do.

Daniel Serfaty: That would be wonderful, if science and engineering certainly have a say in that and we don't leave those very tough ethical decisions to folks who do not understand deeply what the technology, and perhaps even what team science, is about. Steve, what's your take on ethics?

Stephen Fiore: Well, I think it is the complement to what you just said, because the arguments and the debates are that the people developing the technology don't understand the implications of the technology that they're creating. So there is a lot of discussion, a lot of hand-wringing, around how algorithms are being created. Another part of the ISAT community, with whom Nancy and I are working, is actually looking at the inequity in what we would think of as algorithm creation. So who are the people making these algorithms, and do they really understand the implications of the algorithms for the end users, and how they're affecting the end users' lives? So there's essentially a lack of diversity in the community making the technology that is impacting everybody, and that's an ethical problem.

Another significant ethical problem has to do with data privacy and how the AI is learning by continually monitoring what our activities are. Everyone knows Google is watching us; everyone knows Facebook is listening and monitoring us. We've seen the implications of that with filter bubbles. And it seems that people don't really care as much as they should about this monitoring that's going on. So we tend to be concerned in the US about government monitoring, but we don't care about private sector monitoring. And that's something we're going to have to address, because it's affecting our lives in very real ways. And the more things change, the more they stay the same. We've always been manipulated by companies, told what products to buy, and influenced by advertising, and now we're being influenced by algorithms. The speed and the reach of this influence is what's new, what's different. And I don't know the degree to which people are paying enough attention to that kind of influence.

Daniel Serfaty: And there is certainly an instantiation of this worry when we look at a more intimate relationship with this technology at the team level. I understand that at the societal level that's something we're all worried about. I wonder, actually, in the future, when we work very naturally in small teams in which AI is very pervasive, either as an actor or as part of the coordination mechanism, whether that notion of privacy is going to be even more critical.

Thank you so much for taking us all the way back to the origin of team research and all the way into the future. But I want your prediction now. I want you to think about the next 10 years, focusing on team science, not so much on teams in society: what would you say is the top challenge, or the top two or three challenges, in the continuing expansion and sophistication of team research and team science? I know that's a big question and you could write several papers on that, but could you share with the audience just a couple of nuggets, things that you tell your graduate students, in particular, that they should worry about?

Stephen Fiore: I'd say that we need to look at it from multiple fronts. From the science standpoint, it's the increasing recognition of the need for a multidisciplinary, interdisciplinary approach. I think there are a couple of fronts that are important for team researchers. One is big data and the degree to which team researchers are at least collaborating with, if not understanding, the amount of data out there in the real world that we now have at our disposal. I used GitHub as an example. So that's a new area where people are studying teams, but it takes a particular set of skills to look at that kind of data.

Another component of that is the technology being used to monitor teams. There is developing research in cognitive science, which I refer to as neo-behaviorism, where they're monitoring the interactions of bodies through computer vision and through physiological sensors, and making inferences about the collaboration and about the cognition. These are new technologies that people who are trained in traditional behavioral research, with things like subjective self-reports, may not be used to. So the next generation really needs to understand the suite of technologies that they're going to have at their disposal to really get a deeper understanding of the various levels of collaboration that are emerging from the brain, the body, and the ambient environment in which they're collaborating.

So I think that's going to be the next part. And I think that we need to create something like a PhD in teams. Not a PhD in psychology, not a PhD in organizational behavior, but a PhD in teams where you'll take courses on the technology, you'll take courses on the various theories, you'll take courses on the suite of methods and measures that are potentially available to you so you study this problem of teamwork from this already multi-dimensional perspective.

Daniel Serfaty: Thank you very much, Steve. That's certainly an ambitious vision. Nancy?

Nancy Cooke: I agree with what Steve said, and I think the assessment piece of it is really important. That has changed over the last 20 or so years, and I think it will keep changing as we have more technology that can do the sensing of the teams and more analytic techniques to make sense of those big data. Also, the heterogeneity of teams will keep increasing, I believe, and we'll have more and more multi-team systems. And so we need to get a handle on what that means. By heterogeneity I mean science teams will have multiple disciplines; we may be working with different species, like robots and AI, and maybe we would even have a human-animal-robot-AI team. And so trying to get those kinds of teams to work effectively, I think, is a big challenge. I think we have tools in human systems integration that can help us understand those systems, and we probably need more such tools.

Daniel Serfaty: Well, thank you, Dr. Steve Fiore; thank you, Dr. Nancy Cooke. You've been inspiring, both in deepening our understanding of the whole trajectory of team science, and in giving us a taste of what the future is like when we embed intelligent technologies into our very own social structure called teams.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast or email us at [email protected] MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.