MINDWORKS

Mini: If it works for humans, will it work for human-AI teams? (Eduardo Salas, Scott Tannenbaum, and Kara Orvis)

January 06, 2021 Daniel Serfaty Season 1

As we evolve toward a future of multi-species teams made up of human intelligences and artificial intelligences, what's going to happen to team science? Should we blindly apply what we know about teams at work and say, "Well, it works with humans. There is no reason it shouldn't work with artificial intelligence and human teams"? Or is there a possibility for developing a whole new science? Join host Daniel Serfaty for a MINDWORKS Mini featuring Eduardo Salas, Scott Tannenbaum, and Kara Orvis as they explore how the addition of AI is going to change not only the nature of teamwork, but our understanding of ourselves as humans.

Daniel Serfaty: As you know very well, in the more futuristic domains, though it's already happening in some professional domains, we're introducing new forms of intelligences, and I'm using the term in the plural on purpose, into human teams. We're introducing robots that work with humans. We're introducing artificial intelligence bots, artificial intelligence beings literally, that observe how humans are doing things, learn from it, and change and adapt as a result.

And I wonder, as we evolve toward that future of, literally, multi-species teams, what's going to happen to team science? Should we blindly apply what we know about teams at work and say, "Well, it works with humans. There is no reason it shouldn't work with artificial intelligence and human teams"? Or is there a possibility for developing a whole new science, a whole new insight perhaps? Kara, you want to start?

Kara Orvis: I've been thinking about this recently. First, I think we have to understand these nonhuman teammates and aspects of them that may or may not be different than human team members. But earlier, Ed and Scott were talking about this idea of generic skills that an individual brings to a team. And I believe, and I just wrote a paper with a colleague of mine, Sam Dubrow, where we took a look at some of those generic teamwork skills and we considered what these machine teammates were like and what made them special.

Daniel Serfaty: What's a generic teamwork skill for example?

Kara Orvis: A generic teamwork skill or trait, we were looking at traits too, like communication, the ability to communicate with others, or tolerance for ambiguity, which was one of the traits we looked at. We took a look at some of those generic teamwork skills and we made a case in our paper that some of those skills probably do transfer over to human-machine teams. They're just as important in a human-machine team as they are in a human-human team. But some other skills may become more important in a human-machine team. And then some other skills might not be as important in a human-machine team. And so I believe that we can take things from the teams literature and it will apply to those kinds of teams. Do I think everything will apply? Probably not. But that's an example of, if we're going to design humans to work in human-machine teams, what are those skills and traits that we're going to want to train and select for that are going to allow them to deal well with those nonhuman team members?

Daniel Serfaty: And I want to hear from Eduardo and from Scott on that, but I think it's very important that your community take the lead on that, because left to their own devices, artificial intelligence developers, who work very fast and don't wait several years to have the right p-values, will actually design an artificial intelligence system without taking into account the treasure trove of insight that our community, your community, can give them. You're nodding, Scott. Do you agree? Tell me more about those future teams.

Scott Tannenbaum: Yeah. So if we think about them, the teams you described, [inaudible 01:06:33] think about them as, let's say, hybrid teams, right? It's a mix of human and other intelligences. Let's first start with the assumption that we're talking about, in this case, intelligences that are somewhat visible to the other team members. They don't have to be physically visible, but they're robotic or virtual. They're not so deeply embedded that we don't even know they're there. So in those cases, you almost naturally as a human think about them in some ways as a team member. So it makes me think about analogous phenomena in hybrid teams versus all-human teams. And I can point out some of them, but it also tells me there's some research that's needed.

So what do we know with human teams? Trust matters. And we know that in judging whether we trust another human, there is a judgment made about ability. Like, do I think you can do what you said you're going to do? And character, like, do I think you're going to do the right thing for me, that you care about me, et cetera? So what are the equivalent phenomena? Do those apply directly or differently when we start talking about a teammate who is not human? We know role clarity, for example, matters a lot in teams. So, Daniel, are you responsible for this? Am I responsible for this? What's the equivalent when we've got a hybrid here? Is it programmed in? Does the AI just make a decision to fire, to clean, to do? Who owns the decision? Is that clear and transparent? We know backup matters.

Daniel Serfaty: What's backup?

Scott Tannenbaum: A backup is, I am monitoring, I see that you need some assistance, so I offer help. I fill in for you either partially or fully in some ways. In human teams, that's kind of a characteristic of high-performing teams that have interdependency. So how and when do human and AI back up each other? What are the implications for team composition? Can I compose a team where I know AI is able to step in and do some other things even if it's not their primary task? And can I as a human serve as backup for the AI? You think sometimes, "Oh, the intelligence can run on its own." But are there times when I should be monitoring and seeing this is now evolving into a space that the AI was not programmed for and I need to back up? So I share some of those as examples that we should use what we know about team science, and we should probably study those phenomena in these hybrid teams.

Daniel Serfaty: Yes. Eduardo, if you can take on that topic, and also maybe expand on that notion of training as a team. On the training part, how do you develop those teams? Are there totally new kinds of competencies that need to happen, or are they just variants of what we know?

Eduardo Salas: Let me make, I think, maybe a bold statement on this. I don't think we need to be afraid of human-automation, human-AI teams. I think the way to tackle this is to stick to the basics like we always have. So instead of studying teams, we need to study the nature of teamwork. And so I don't care whether you have automation or a robot as your teammate. I want to understand what is the nature of your interaction. If we take what we know in team science into a team task analysis, you look at coordination demand analysis; if you focus on understanding that, then I think you will get the kind of competencies, the kind of needs, that they have. And so I think it's that. We stick to the basics. And for years, at least the 40 years since we started all this [inaudible 01:09:57] movement and [inaudible 01:09:57], it has served us well. So that's what I will focus on. So to answer your question about training, training may or may not look any different.

But I'll give an example that made me think about this. Scott and I were asked by a manufacturer to look at a new kind of team that they were forming, which was a robot, a human, and an algorithm, automation. They used to work as a three-person team, all humans. And now that has changed, so they have all kinds of problems. In the end, to me, what I got out of that was it's the nature of the teamwork that matters, not whether the one next to you is a machine or a robot. And that's what we need to do, I think. And so in the work that I've been ... once in a while I get asked to consult on human-robot or robot-to-robot teams. In the end, we talk about the same stuff. Backup behavior, information exchange. We talk about the same stuff.

Daniel Serfaty: Thank you. I think going back to first principles will be very important here, but also being open-minded enough to understand that, because we don't have enough words in the English language, we still call that intelligence, artificial intelligence, and we still call that a team, maybe because we don't have a word for that new form of social structure. And there is a lot of controversy in the human-machine community about whether that AI is a tool of the human or a teammate. And we're going to have, as part of this series of podcasts, debates specifically about that: tool or teammate. And the issue here is also about ... maybe one of the differences I want to offer to you is about that transparency or that trust. At that level, AI behaves sometimes in a very unpredictable way. And not because it's capricious, it's because it absorbs an ungodly amount of data. And from time to time a behavior emerges because of some deep structure that the AI has learned.

And the AI learns not only deep structure about the task and the task interdependence, but also about the human teammate. And therefore, that kind of unpredictability is really interesting because it forces us to have a level of explainability and transparency perhaps for each other that occurs very naturally with humans because that's our DNA perhaps, but doesn't occur naturally between humans and intelligent machines.

Eduardo Salas: That's a great point, Daniel, because what I do worry about with all this stuff, really it's not a team issue per se, or maybe it is, I don't know, but what I think about is ethical issues. For example-

Daniel Serfaty: Tell me more about that. What are you worried about?

Eduardo Salas: Well, I'm worried that these things will have a dark side as they're interacting where there are no boundaries. What I'm afraid of sometimes is when they're confronted with ethical issues. Healthcare is going in that direction a little bit with robotics and all this kind of stuff, and they're beginning to look at the ethics of this. Because can the AI, can the automation, can the robot detect who they have in front of them, what kind of person they have, what kind of history they have, all this kind of stuff? So you're right, they have a [inaudible 01:13:11]. What they're more worried about, the trouble, if you will, with more AI, is ethics and who monitors it.

Daniel Serfaty: Is our field, Kara, Scott, equipped to deal with the ethical considerations that come with this introduction of new learning, intelligent machines into our work? What does the IO psychology community have to offer in that area, or should we leave that up to the philosophers? I'm asking the tough questions.

Scott Tannenbaum: So we can't leave it to the philosophers, although they have a role in this. We can't leave it to the technologists, although they have a role in this too. In some ways, psychologists, IO psychologists, can sort of bridge there. Historically, we have worked on the man-machine interface. Historically, we have asked questions about ethical and appropriate behavior at work. We do interface with technology. So we're not the right people to program it. We're not the right people to ask the big questions. But maybe, sort of where the rubber meets the road, we're the right folks to be able to facilitate and ask the right questions. Earlier, Daniel, when you were talking about what implied to me this kind of emergent learning that's inherent in some forms of AI, that's where some of the risk points occur, because they're quantum leaps or they're divergent. And they could be much better or they could be much worse in some ways. It made me think of a parallel in some of the research that we've been doing on informal learning in humans.

So informal learning is in contrast to training, where there's a preset objective, a preset curriculum, a preset group of experiences to learn X. Informal learning occurs very naturalistically. Humans do this all the time. The vast majority of learning in organizations is informal learning. So as we try to prepare people to be better, faster informal learners, one of the risk factors is they're going to try things that they probably shouldn't try and they get in trouble. So we've been coaching organizations to think about red, yellow, green charts. You're going to take this person, they're relatively novice, they're starting to learn, and we're going to put them to work. What are those things that, if they get a chance to do it, just run, green, don't ask? What are those things that are yellow, like, "Do it, but only if there's some backup there"? And what are those things like, "We don't want you touching this thing in the nuclear power plant facility and testing it," red? Is there an equivalent to that in the case of emergent intelligence?

Daniel Serfaty: There is an equivalent. But what would worry me as both a technologist and a practitioner of teams is not the red, the yellow, or the green, it's the color that I haven't designed the system for.

Scott Tannenbaum: That's good.

Daniel Serfaty: That's really the known unknown, so to speak, that we have to worry about.