MINDWORKS

The Magic of Teams Part 3: The Future of Teams with Steve Kozlowski, Tara Brown, and Samantha Perry

January 12, 2021 Daniel Serfaty Season 1 Episode 10

Data is currently outpacing the theories on how teams work—is that a problem? Can AI help? What will it take for NASA to send a team on a multi-year mission to Mars? MINDWORKS host Daniel Serfaty talks with team science experts Dr. Steve Kozlowski, Dr. Tara Brown, and Dr. Samantha Perry on the future of teams to address these and other questions in this groundbreaking five-part series.

Daniel Serfaty: New year and welcome back to the MINDWORKS Podcast. This is your host, Daniel Serfaty. This episode is part three in our highly successful five-part series exploring the magic of teams.

In part one, we learned about the ABCs of teams, and in part two we talked about teams in the wild. If you haven't listened to those episodes yet, you'll want to do that after you listen to this one.

For this part three, in which we'll explore the future of teams, we have a particular treat, three special guests who have a secret common past.

Dr. Steve Kozlowski is a world-class scholar, a professor at the University of South Florida, and, until recently, a professor at Michigan State University. Steve is a recognized authority in the areas of organizational system theory, team leadership, and team effectiveness, as well as learning, training, and adaptation. He has published more than 500 articles, chapters, and books on the topics of teams and learning, and his research is supported by key agencies, such as NASA, the Department of Defense, and the National Science Foundation.

Dr. Tara Brown is a senior scientist and leads Aptima's Instructional Strategy and Support Capability. Tara has been studying teams for more than a decade in both the lab and real world environments. Her more recent work focuses on how teams evolve over time. Tara completed her PhD at Michigan State University under, wait, who else? Dr. Steve Kozlowski, who is here with us today.

Last but not least, my third guest is Dr. Samantha Perry. We call her Sam. She's a scientist and leads Aptima's Team and Organizational Performance Capability. Sam has more than 13 years of experience with the Air Force, the Army, and NASA, as well as emergency medical teams. Her expertise is in the adaptation and motivations of teams, as well as the unobtrusive measurement of team performance. Sam completed her PhD at Michigan State University under, who else, again Dr. Steve Kozlowski.

So with us today, we have a Grandmaster with two former apprentices that are now becoming masters in their own right, all of them in the field of high performance teams.

Steve, Tara, Sam, welcome to the MINDWORKS Podcast.

Stephen Kozlowski: It's great to be here and great to be with former proteges who are now consummate professionals.

Daniel Serfaty: Let us start perhaps, and I will start with you, Steve, if you don't mind, if you can perhaps let us know, why did you choose this particular domain of teams and teamwork as your field of endeavor? What attracted you to this field?

Stephen Kozlowski: A lot of things, Daniel. I think it wasn't so much an active choice as something that I sort of gravitated to. When I got into graduate school and I went into organizational psychology, I was very interested in the idea of organizations and systems, and how people, and groups, and the organizational whole all functioned.

I had some jobs, and I won't say they were great jobs, but just trying to understand how it all worked was one of the things that attracted me to the field. And when I got to graduate school, I discovered it's just individuals and outcomes. IO psychology 40 years ago, when I was a grad student, just looked at individual differences and some kind of outcomes. Performance, for example. There were no teams. They weren't even interested in the organization.

I was still interested in the systems part, so a lot of my own personal effort as a graduate student went into learning how to study organizations in a more systemically oriented way, which is kind of how I got to be somebody, if you will, because I got interested in the systems aspects and the methods required to really be able to do that from a scientific perspective, rather than just writing about it narratively as theory.

At some point, if you're going to study systems, you need a unit of analysis, a unit that you can study. So there's a reason why a lot of psychologists study individuals, they're easier to study in a way. You ask them questions, they give you answers, we've got data. But when you start to talk about studying organizations, well, now it gets way more challenging. How do you study a whole organization? Or really you need multiple organizations if you're going to study them.

Teams are kind of right at that sweet spot. You're going to study collective phenomena. Teams are right there in the middle. I describe it as the crucible. There's the individual, there's organizations, or everything above the team, and the team is where the rubber meets the road. So I kind of, after a decade of getting out of graduate school and kind of finding my way, as you really kind of have to navigate as an academic, where's your expertise going to lie? I landed on teams.

Daniel Serfaty: Then, you see, we are all fortunate that you migrated there, because I personally studied your papers even as an engineer, precisely because they have that system flavor to them. It was very attractive, the precision of it, and the methodical approach to the study of what is fundamentally a small but complex system, which is a team.

Stephen Kozlowski: Exactly.

Daniel Serfaty: Let me ask a similar question. Tara, here you are, a graduate student at Michigan State, and you can study anything. You decide to focus on teams. Why? In addition to wanting to study with Professor [crosstalk 00:05:39].

Tara Brown: That's honestly one of the reasons. My interest in and fascination with teams research really was inspired by the work that Steve was doing and the labs that I got to be a part of while at grad school. I actually came into the program at Michigan State with a focus on individual differences, and more focused on the selection side of IO psychology, and spent a lot of my first couple of years focused on the individual.

It wasn't until I really started to think about adaptation at the individual level, which was what my master's thesis was, and really starting to think about the context within which individuals perform and how the team impacts the individual and their processes, that I really started to expand my focus to understanding team dynamics.

I was able to be a part of Steve's labs, where we did really interesting cutting edge research with NASA and emergency medical teams, where we really got to see some real world implications of what happens when teams break down or what happens when their processes fail. Getting to talk with emergency medical doctors and look at how teams are trained within simulated environments and really seeing how the dynamics play out in those environments and how it can lead to life or death types of consequences in those environments, made it even more important to me to really understand that.

Then obviously moving into the work at Aptima, where we're working with primarily military teams now, and, again, the life or death kind of consequences to teams that might have highly capable individuals but cannot function effectively as a team, and trying to understand how to intervene and anticipate when that might happen to prevent disasters.

Daniel Serfaty: I'll hold that thought because I want to go way deeper into this situation of mission critical teams and the consequences of not performing well as a team. You started to tell the story about how even at work you find yourself not just studying teams, but being in teams, and I think that's probably the best lab one can dream of. Talking about labs, Sam, tell us your story. How did you get into teams and teamwork and their study?

Tara Brown: I'm sure I inspired her, right, Sam? That was me, for sure.

Samantha Perry: Yes, definitely. It actually started far before I even got to Michigan State. It definitely grew there, but my dad actually is a psychologist himself and he was a professor at Fordham University studying motivation, communication, and leadership. And so I wanted to be like him and I wanted to be a psychologist before I could even spell it.

I engaged in research with him and teaching, and it was just something I was always very interested in, but it grew. So in undergrad, I had the opportunity to study more of the leadership and motivation side with Steve Zaccaro, and through that, I was able to work with the Army Research Institute and be a fellow for them, a rare opportunity as an undergraduate, and I got to work with Jay Goodwin and got exposed to the team element of IO psychology. That really motivated me to focus less on motivation and leadership theories, and more on teams and team dynamics.

At the completion of that, I of course knew of Steve Kozlowski and I had the opportunity to go to Michigan State, and that was where I really focused and really got deep in my knowledge of team processes, performance, and unobtrusive ways of measuring these phenomena.

Daniel Serfaty: It's interesting, because at different levels, at different times, over the last several years, the three of you migrated to teams for almost different reasons, but you were all still fascinated by that organizational unit we call a team.

My next question is addressed to you, Steve. After all these years studying teams and being probably one of the top world experts on that notion of team, is there something magical about teams as opposed to any kind of other organizational form? What is it about teams? Is that a uniquely human system or are we seeing other kinds of teams in nature?

Stephen Kozlowski: Well, the way we talk about it, I think, for most of us here, organizational psychologists, it sort of gets defined as uniquely human, but certainly you can see this kind of collective organization take place in higher order animals. Animals that we think of as not having... I'm not an expert so I hope I don't offend some animal expert out there, but you can see predators that hunt in packs. They certainly have roles. They have strategies in how they play that out. Or you can look at insects, maybe that behavior is programmed in and it's probably bigger than a team, but clearly there's a lot of collaborative, coordinated, specialized functioning and behavior that has to take place for those collectives to be successful.

I'm not sure that what we're seeing is uniquely human. Certainly we have the capacities to communicate and to convey other kinds of responses, liking, disliking, in somewhat less obvious ways, perhaps. But I do think there's something [inaudible 00:11:02] to just kind of go to your notion of magical, which is not really a scientific term.

Daniel Serfaty: Not yet.

Stephen Kozlowski: [crosstalk 00:11:08] person who forms teams. I'm having to build research teams or I'm on some team. So as a participant, or former, or what have you: when it's all working, it feels really magical, and when it's not working, it feels not very good at all. You can tell. It's very visceral. It's different.

I think about trying to understand workers in organizations, there's a lot you can learn studying individual characteristics, but people don't work in a vacuum. COVID has separated us, but I spend almost as much time on Zoom as I do trying to write or read or do the other things that I would do as a professor. And so there's this interactive component, this exchange component, that I think is really important.

The team puts some boundary around it, so it's not just free floating, but we've got a common purpose, you're trying to achieve something, often it will be specialized in some way so we've got to be able to get that expertise to fit together.

When you get that happening, you create a winning performance if you're a sports team, or you create new innovation if you're an entrepreneurial team, or as a science team you make a discovery, you've made it through a bunch of challenges and you find something unique. It feels really cool and it's something that's a shared experience. I think that's harder to feel in that visceral, palpable way when you talk about the success of the organization, and you would know that. It's a lot easier to feel and to share when it's 5-10 people.

Tara Brown: We've seen that in some of the Army work that we've done. We've been talking about climates within teams and what the right level is to really have somebody talk about the climate that they're in. So [crosstalk 00:12:50] inclusion is one of the focal areas.

It's really interesting to think about what really constitutes the team, because within the Army, there's a hierarchical nested organization of teams [crosstalk 00:13:04].

Stephen Kozlowski: It's classic [crosstalk 00:13:06].

Tara Brown: What's the right level of team to talk about? And so we've had a lot of discussions, with Army leaders and with soldiers at various levels within the organization, about who they identify as their team, and I think it comes down to what Steve just said, where we typically end up around the squad-size element, which is small enough to feel like you get enough interaction with everybody to be able to really know them and know their role and know their personality and develop some cohesion with them, but big enough that it's a meaningful team that has a goal that they're working toward.

It's really fascinating to think about that nested piece of teams as well. And I think the magical parts, and why teams are the unit that we have been focused on, is exactly what Steve said. We have to identify the level at which people are doing most of their day to day interactions, the group with which they identify or with which some of their identity is associated, and who they have some shared common goals with.

I think you can have an organizational identity and there's obviously organizational level practices and systems in place, but I don't think people typically identify as strongly with their organization on a day to day basis as they do their smaller team unit, who they've really had a chance to develop some of these critical states with, trust and cohesion and all of those things. I think that there's a sweet spot there at the team level, which is why it's, I think, the focus of our study.

Samantha Perry: I also think there's a good example within just any organization of this "magical" phenomenon of teams, which is, for me, in brainstorming. So when we're kicking off a project or when we're developing a proposal, getting a few people around a whiteboard and seeing the ideas bounce around in conversation and seeing how they flourish and grow from one person's initial concept to what comes out of that conversation, even if it is just a few hours, is really something unique, because it's not something that would happen asynchronously in the same way.

I could send Tara an idea, and then Tara can send both of our aggregated ideas to Steve, and then we can workshop it individually, and it wouldn't be the same as if the three of us came together and bounced ideas off the whiteboard in real time. There's something unique about that phenomenon, even applied to just a normal organization. And I think that's a really critical aspect of teams, breaking that down. And why does that happen?

Daniel Serfaty: Yes. I would like eventually to explore that as we move to a different paradigm of co-location. I know that the reason we say magic is because maybe we are genetically primed to interact with only a handful of individuals and feel part of a living organism in a sense, but maybe the next generation, my kids, who are teenagers, are very comfortable having dozens and dozens of people in their immediate circle. Many of them they've never met. And that comfort with connectivity is really something that is generational.

I think we are observing a change in that, but let's leave that as we speculate about the future a little later in our discussion. Before we dive really into the core of this session, I would love for you to think of an example for our audience of the best team you have ever observed, or you've ever been part of. Something in which you were impressed by the, let's call it right now, in a non-scientific way, until we dive later, the teamness of it.

And also perhaps on the other extreme of the spectrum, perhaps the worst team, on a non-attributional basis, that you've ever been part of or you have observed. And why do you think they were best and what do you think they were worst? Who wants to pick that up?

Tara Brown: I can provide a very salient example of a best team experience, and that's honestly a team that I'm currently working with at Aptima, which was nominated for Best Teamwork Award and did not win, although I'm challenging the vote. But it's, ironically enough, a project team that's working on a contract on teams research. And again, we're our own kind of nested team structure where we have an Aptima team along with five university teams and a team from [Go 00:00:17:28], along with ARI, all kind of working together on pushing the future of teams research.

Daniel Serfaty: ARI is the Army Research Institute, yes? [crosstalk 00:00:17:39]. For Behavioral and Social Sciences. [crosstalk 00:17:41].

Stephen Kozlowski: And it's a multi-team system because [crosstalk 00:17:44] different organizations involved in contributing those team members.

Tara Brown: Exactly. I think there are multiple levels of goodness of our multi-team system, but really focusing in on our Aptima team, one of the things that has stood out to me from the beginning is we exhibit the team processes and team states that make up a good team. We are a very cohesive unit, both socially and task oriented. We have very shared goals. We're all on the same page in terms of the vision for our team that we execute. We actually like being around each other and like meeting twice a week to contribute ideas. And there's also a very strong level of trust that's developed in our team.

It's a very complex project with multiple moving pieces and different levels of expertise, different types of expertise. We've got mathematical modelers and engineers and UI developers and psychologists and all of those people coming together that have their responsibility and role that they need to perform for our team to succeed, and all of our pieces and parts have to come together. And there has been a development of trust over time that people will complete their tasks. They will do it well. And you can count on people over time.

I think the other piece that's made it a really good team is that we back each other up and provide support in an anticipatory fashion, so we have developed a shared mental model enough that we can kind of anticipate or predict when somebody is overloaded or when somebody might need help or are struggling.

Stephen Kozlowski: Implicit coordination.

Tara Brown: Yeah, it's implicit coordination.

Stephen Kozlowski: Somebody here might've been somewhat responsible [crosstalk 00:00:19:33].

Tara Brown: I don't know. There might've been a paper out there with somebody's name on it somewhere. But we have been able to get to the point where we do implicitly coordinate and we back each other up proactively and continue. We have excellent communication, and as a result, we have navigated a lot of bumps along the way, a lot of external factors and constraints.

You throw COVID into the mix of field teams research, where you're trying to develop a paradigm to collect data with in-person teams during a pandemic, and that's an external factor that you have to consider and adapt to, and I feel like we have been, because of the shared mental models we have developed, because of the cohesion that has built and the trust that has built, we have been able to weather that storm and the challenges that have come up very gracefully and very productively over the course of the last year.

Stephen Kozlowski: It's really cool to hear, even anecdotally, that what the research literature would suggest after 75 years of research on small group and team effectiveness, seems to work.

Tara Brown: That's amazing.

Stephen Kozlowski: That's very comforting. It's nice to know that science works.

Tara Brown: It's almost like the leaders in this field who have done all that research knew what they were talking about.

Daniel Serfaty: It's a disruptive idea, sometimes science works. That's wonderful to hear. Steve, Samantha, do you have examples, either on the positive side of things or even on the difficult side of things, when teams that you have observed tended to break down or to not work?

Stephen Kozlowski: I'll give a short example. I mean, I would echo a lot of the things that Tara says. I have a much more focused team, so it's not a multi-team system or people from other... Well, I guess technically they are. So I have a research group that's three of my former grad students, my wife and colleague, who's also an organizational psychologist, and me. So we're a core five group and we've been working together for about a decade. About the time these two guys were at Michigan State, those folks were there too.

We do a different sort of brand of team research. We'll probably talk about it later. But we've been a very productive, cohesive, and innovative group. And it's all the things that Tara talks about. It's also our specialization and our ability to coordinate and, I don't want to say optimize because that sounds a little too engineering like, and we can't prove it, but to really kind of try to maximize what each individual is really good at in terms of our collective product or the effort that we create. And we've gotten really good at that.

Usually as graduate students graduate, they don't work with their former professors anymore. They're discouraged from doing so, and there are other impediments. But we have created such a great team that we're all motivated to keep working together on this team and just kind of manage any of those negatives from other views.

I would say when a team doesn't work... I have a different research team and it began to break down. And the breakdown was basically when people stopped communicating and started making decisions without collaborating. Then trust gets undermined, you no longer have that sense of cohesiveness, you don't have the common mental model, the shared goals begin to break down because it feels like someone's pursuing their own individual goals at the expense of the collective. And so then you begin to do what you're required to do, in some professional sense or some contractual sense, no more, no less, and, when you can, you exit. I've recently exited that team.

Daniel Serfaty: Maybe that's something we will pick up in a second, because teams being lifeforms, in a sense, have the beginning of their life and the middle of their life and even an end-of-life. We don't talk very much about sometime the need, not just the happenstance, but the need to let the team go.

Tara Brown: [crosstalk 00:23:19]. I was going to say, sometimes that's the most adaptive strategy, is to let the team fall apart.

Stephen Kozlowski: Yes.

Daniel Serfaty: Sam.

Samantha Perry: I was [crosstalk 00:23:28] in my example, it wasn't one specific moment, but I observed many medical teams. Some work extremely well, some not as well. And it's not necessarily because of a lack of outcomes: the team process component can sometimes break down, but patient outcomes still remain stable.

Stephen Kozlowski: Or not.

Samantha Perry: Or not, [crosstalk 00:23:53] I've seen really bad team processes where the patient didn't suffer, which is excellent, but you could see that they weren't a very good team.

Now, a lot of medical teams in different fields, like emergency medicine, are very ad hoc. They come together, they do a task, they treat a patient, they break apart. You get different people from different specialties who respond, but it's a very short-lived team. So, thinking about those teams is completely different than thinking about a project team or a business team or something within an organization. How you frame team dynamics in those ad hoc teams can be very different and distinct from how I would expect the people I'm looking at here to act like a team.

I wouldn't expect the same assumptions on cohesion and mental models, except as it relates to my tasks and my job. And I'm anticipating that Tara, being another organizational psychologist, can come onto my team and have a certain basis of knowledge that I can rely on.

And so, when you have these ad hoc teams that are functioning versus non-functioning, it's interesting to me to look at the reasoning why and where the breakdown is in the team, and how does that perpetuate through the different tasks that they're trying to accomplish? So, that's both positive and negative examples in my mind.

Tara Brown: I was thinking about medical teams as my example of poor as well. And I remember reviewing videos of emergency medical teams, students or residents who were training through these simulators, and one of the things that you saw break apart or fail was monitoring and backup behavior.

Specifically, being willing to correct somebody else's mistake if that person was viewed as higher in the hierarchy. So, if a nurse noticed the doctor making a mistake, they weren't always willing to vocalize that, to the detriment of the patients and the breakdown of the team processes. And so, really-

Samantha Perry: And you see that in air crews as well, where there's a lack of backup behavior and speaking up, and there isn't that psychological safety [crosstalk 00:01:45].

Tara Brown: So you see that a lot. And I think that's something that needs to be at the forefront of people's minds as they're examining the reasons that teams break down. I think oftentimes it's that lack of psychological safety or that lack of trust in that context where people see something that's not right and are afraid to speak it.

Stephen Kozlowski: And that brings in leadership, right?

Tara Brown: Yeah.

Stephen Kozlowski: Leadership becomes another one of those things. Everyone needs it, but not everyone has it. You know, if you have good leadership and the team is doing well, the leadership should basically be very unobtrusive, because the leader already did his or her job by getting the team to that place. And if the team's not doing well, the leader is stepping in to help get it back on track, whatever that might be. Certainly in the case of what Tara and Sam are raising, in terms of the medical teams, that creation of psychological safety, that's basically the leader not having done what needed to be done sometime in the past. The team now has this "I feel I can't speak to power" and say, "You just sewed an instrument inside that patient. I think I learned in medical school, that's not good." Or "It looks like that heart has stopped. Do you think maybe we ought to take care of that?"

Daniel Serfaty: Thank you for those stories. I think they are very useful for our audience to situate a little bit the space of teams. And I knew that having three brilliant scientists in front of me, I wouldn't be able to keep you away from science for too long. I already heard the terms shared mental models and [inaudible 00:27:17] and backup behavior and compensatory behaviors. Let's dive into that. The next few questions will be about the science of teams, because teams are being studied, as Steve reminded us earlier.

Teams are complex systems. They are not easy to study, but it's a very rich area of study. So perhaps for our audience, let's start from basic principles. This is something that has been discussed, I know, in the field: what is the definition? What is a team? Is any group of individuals together defined as a team? Steve, can you share with us your definition of teams?

Stephen Kozlowski: I don't have my well-cited definition.

Daniel Serfaty: Okay, but for our audience. The way you teach your students, the way you have written about it. What is a team?

Stephen Kozlowski: So, I can start checking off features, but I think the important thing would be to distinguish a group from a team, and not to imbue these words with, or reify them with, great meaning. But you know, there are a lot of social groups, voluntary groups, groups of friends. They're not teams, at least as I would define them, or at least as we try to make some distinction in science. They may share common goals. Let's have a good time, or let's be fashion forward, let's be whatever. Whatever it might be that brings them together. But they're there because they like each other and they pursue some common interests. So that's a social group. Teams would share that. We hope that they like each other. We want them to communicate and interact, right? You need more than one person, and you can find some debates. Is a dyad a team? Do you need three?

Tara Brown: You need three.

Stephen Kozlowski: Three is more interesting, but you need more than one. Let's just say that's a big distinction: common goals. And now you start to get into, well, what makes it a team? Well, they're there because they have some skills. They're there for some organization; the Army put them together in a squad, or Aptima hired them to be on this research project, or I put them on a research project in my area. Right there, you're there for a particular reason. Usually there's some expertise or skill, or at least a role, that drives staffing, which now begins to distinguish you from more of a social psychological "let's look at group members interacting." And then, they're embedded in a broader organization or in some kind of task environment. The work that they're accomplishing, there's a context that surrounds it. They may need to communicate with other teams or with higher echelons in an organization. There are some boundaries, although with virtual teams, project teams, those boundaries might shift and change over time.

In the moment, there's some makeup of the team. Who's on that team? Who's core to getting things done? So, take these emergency medical teams. We have a lot of people coming in to do things that are part of treating the patient. I wouldn't define them as team members; they are [inaudible 00:30:02] providing information back to the core trauma team, but they're not really team members per se.

There's some degree of persistence for some identifiable period of time. It might just be a few moments, but there's some boundary around which this group is doing this thing. That's the common goal I would say.

Daniel Serfaty: That is very useful, to put this envelope of definitions, because then we can distinguish, as you say, teams from other social structures that wouldn't qualify. So if a team is really a collection of two or more individuals, but structured or constrained in a sense by the components that Steve just shared with us: expertise, skills, boundaries, perspectives, common goal. In what way is a team better than the sum of its parts? Can a team be worse than the sum of its parts?

Samantha Perry: Yes.

Daniel Serfaty: Yes [crosstalk 00:30:53] Okay. So tell us a little bit about that Samantha and Tara, and notice that you will be graded on this particular subject.

Tara Brown: Well, I'm going to go back to the interjection I made earlier about whether it requires two or three people, at least, to make a team. I am a strong proponent of teams being three or more people, not two or more people, because I feel like a two-person team is a dyad, and the dynamics between two people are very different than when you introduce a third person, where there can be a two-versus-one situation or other kinds of in-group/out-group types of behaviors that I think just add a layer of complexity to teams.

Stephen Kozlowski: I would point out that underneath that structure are dyadic linkages. So this point of two versus three, I get all of that. And I've seen these debates go on in the literature; I think it's kind of a dog chasing its tail. [crosstalk 00:31:46] At the end of the day you've got social linkages, and it's how those social linkages play out over time that's really important. So you do get more complexity if you have more than two.

Tara Brown: You get more complexity, which I think is where the real fun stuff happens. So, anyway, I'll answer your question. I think a high-functioning team, what I would call an effective team, is always greater than the sum of its parts. I think there is a synergy that happens when you bring individuals together. And I think that synergy is at its greatest when the task or the activity that the team is performing requires a high degree of interdependence. And so, when a team needs to draw on expertise from multiple individuals and really coordinate to get activities done, I think there's the potential for that synergy to come out, like Sam was saying about the brainstorming activity. There's something that happens, that magic that happens when you bring multiple minds together, that should generate something that's greater than the sum of its parts. And I think that comes through those unique elements of being a team, like cohesion, something that you can only experience in teams; you can't experience that as an individual.

So these emergent states that evolve are kind of the unique qualities that I think create the synergies of a good team. But equally, if you have a team that's not high-functioning or effective, where these processes and states are deficient or even toxic at times, I think the team environment, the team context within which individuals are working, can actually impact individuals' performance to the point where they no longer contribute as effectively as they would outside of that team context. And so, I think there are definitely examples of both.

Samantha Perry: That made me think of this individual-level construct that my dad, Paul Barton, introduced me to when I was little: psychological flow, Csikszentmihalyi's idea. You have this ease of thinking, you just get in the zone, if you will, when you're writing or doing something at the individual level. And that's kind of what I was thinking at the team level: when you're brainstorming, you're in this flow with the other people in your group, and ideas are just bouncing off of each other. It's different. It's different than just sitting there writing a paper. If you're not in this flow, the psychological flow, it's like the white page phenomenon. You're just staring at the screen, you're staring at your notebook, and you have nothing going on and you can't think about what to do next. But sometimes, if you're in that flow, ideas just stream.

And I think that's the same when you're in the team. And it helps you get out of that when you are working with other people. And so it's not just, I could sit there looking at a page or I could just call Tara up and say, this is what I'm thinking. We'll talk for five minutes and then I can go and write for two hours. And it's just that different, it kind of spurs your thinking, there's something else that happens. So I do think dyads can be teams, but you know, I think there are key benefits to be able to talk with other people because it gets you out of your own head. And it has you articulate concepts that help you be more effective.

Tara Brown: Yeah. I think there's a knowledge generation that happens as different individuals provide pieces of knowledge and different perspectives. It's not necessarily that you just have the sum of all the different perspectives and all the different ideas, but those ideas merge and blend and get refined. And over time, the end result produces something that no one individual could have produced by themselves.

Stephen Kozlowski: It can be behavioral too. I don't like using sports as exemplars because I think they get overused, but for this particular question, they're apt. So, famously, I don't remember the year, but there's the US hockey team and [inaudible 00:35:30], you know, basically a bunch of rookies, Olympians, but they beat the Russian national team because they collaborate better. They interact better. They coordinate their play better. And then, famously, we have the US basketball dream team at the Olympics, all NBA stars, all individually fantastic, but they're basically just there to be in the Olympics. They don't coordinate very well. They don't play well together. They're just playing as individuals and they get defeated. So they were less than the sum of the parts, and the other team was more than the sum of the parts.

Tara Brown: Yeah.

Stephen Kozlowski: And therein lies the magic.

Daniel Serfaty: And that's why we need the science, to understand really how to extract that extra energy and minimize the waste of energy that comes from a team that is not well [inaudible 00:36:20] adjusted, coordinated, cohesive, et cetera. Steve, I'm going to ask you to do something, if possible, which I love to ask people to do, because usually they come through. You publish so much, and your research is so rich and also varied each time. Can you share with the broadcast audience some of the key milestones in your research, which covers several decades of team research? Some of the key milestones, the key moments, when you say, "Wow, that changes a paradigm. That changes my understanding of teams." Can you pick two or three of those out of the 500-plus publications that you have?

Stephen Kozlowski: So remember, I mean, I got into teams because it was a way for me to understand systems, and, not to over-jargon this, what we would describe in organizational psychology and in management as multilevel theory. So trying to understand individuals, groups, organizations. Theoretically, how do these constructs work, these concepts that we talk about that are in your head, right? Cohesion is a perception that you have, or a mental model; that's something inside your head. How can we talk about that and measure it as a collective construct? Right? So thinking about that theoretically, and helping to think about how we do this methodologically. I'm certainly not the only one, but it was a small group of people interested in doing that for about 20 years. And we were at best a boutique area in science. It was not mainstream. Most people didn't get it and didn't want to get it.

But by the turn of the century, we were able to get some traction. I worked on editing a book with Katherine Klein, which really pushed that out. I mean, it basically laid out a set of principles for how you could do this. And again, it was not us just writing it, but mostly synthesizing and being able to take some risks about thinking about things in new ways. So that made it possible for team research, which a few people were pursuing, but not many, to go really mainstream in organizational science across the board. I mean, if you just look at how much team research was being done, or multilevel research, because it tends to overlap a lot, it just takes off at the turn of the century. And a lot of that was really fundamentally founded around building these principles, giving people some tools; reviewers then knew what to look for when you sent in your paper, instead of rejecting it.

So it really changed the nature of research, and I did research in there as well. I want to talk about methods when we get to dynamics, but I think methods are really a big part of advancing the science, and that's often not recognized. To me, that was a big milestone. And I would say, a decade later, looking back on this, I'm like, "Gee, that was cool. I had some good points there." But I also recognized that half of what I wanted to do was getting done and the other half was not. The half that was getting done was the easier half. We could learn how to think about these collective constructs and have some, I'll call them measurement principles, good rules of thumb for how you can collect data from individuals and aggregate that up to represent something like a team or even a larger social unit.

So if you follow the rules, you get a means by which you can create data, but it limits what you can study. What you're studying are mostly statics rather than dynamics, and mostly how higher-level phenomena, how the organization, limit or constrain the team, how things at the organizational level influence the team. So the nature of the organizational structure, how flexible or how rigid it might be, influences workflow systems and technology design, which influences me, the person, because my job is a piece of this, right? So we could see how the top influences the bottom. We didn't get to look at how the bottom percolates up and becomes collective, or comes back and influences the top. And so that part was not being done. And it's kind of interesting, because that's about the time that Tara and Sam and this other group of students got to Michigan State, and it's not like I was unaware, but I was thrust into a position.

There were opportunities to get funding where we could take some risks and begin to do the hard work of, how do you study team dynamics? And so, I would say writing about and thinking about, and really helping to think through, new methods for collecting data, high-frequency data, where you can begin to capture some of the dynamics of how these phenomena unfold over time. How do they start from a thought in my head to something that is now this tangible collective construct? This is what I've been doing for the last decade or so, and for me, at least, it's a paradigm shift. It's less productive, by the way, too, because it takes longer and we're trying to pioneer new methods. So these guys were involved in medical team research, which is very laborious. You have to create scenarios, you would know, Daniel, because you've done this as well, but you've got to create the scenarios.

You've got to get behavioral markers, you've got to train coders, you've got to extract the behavior from video. So you create simulated situations, you put people in them, you're looking for particular behaviors, and then you can extract them, and they show you a story over time of how the team performed. You know, what did they do? And for the medical teams, did the patient live or die? They're simulated folks, so it's safe, which is where you want to do this kind of research.
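The composition rules Steve alludes to, collecting ratings from individuals and aggregating them up to the team level only when members agree enough to justify it, can be sketched in a few lines. This is a generic illustration, not code from any of the studies discussed; the rWG-style agreement index and the 0.70 cutoff are common conventions in the multilevel literature, used here as assumptions.

```python
from statistics import mean, pvariance

def rwg(ratings, scale_points=5):
    """Within-group agreement in the spirit of the rWG index:
    1 minus the ratio of observed variance to the variance of a
    uniform ("no agreement") null distribution."""
    expected = (scale_points ** 2 - 1) / 12.0  # 2.0 for a 1-5 scale
    return 1.0 - pvariance(ratings) / expected

def team_cohesion(ratings, cutoff=0.70, scale_points=5):
    """Compose individual ratings into a team-level score only when
    agreement justifies aggregation; otherwise return None."""
    if rwg(ratings, scale_points) >= cutoff:
        return mean(ratings)
    return None
```

So `team_cohesion([4, 4, 5, 4])` composes to the team mean, while `team_cohesion([1, 5, 1, 5])` refuses to aggregate because the members plainly disagree.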

Daniel Serfaty: We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS but don't have time to listen to an entire episode? Then we have a solution for you: MINDWORKS Minis, curated segments from the MINDWORKS Podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis along with full-length episodes under MINDWORKS on Apple, Spotify, Buzzsprout or wherever you get your podcasts. I want to go back to the notion of methods, because for many members of our audience it's kind of a mystery: how do you go and study teams? They understand the concept, because they are all parts of teams. And there is as much innovation in the methods themselves as there is in the findings those methods led to.

And I really want to explore that. But before we get into that, I wanted to ask both Sam and Tara: it's like in political debates, once you mention somebody's name, they get another minute to talk. So since you mentioned both of them, is there one particular key idea, kind of an "Aha!" that you had as you were reading all this rich literature, but also participating and generating your own? Is there a particular concept, one concept, that really appeals to you in this whole theory of teams, study of teams, methods to study teams? Sam.

Samantha Perry: So I'm going to maybe answer your question, but first I wanted to point out one of my fondest memories, maybe the first time that Steve and I talked at MSU. I had just arrived, and he told me he wanted me to be a part of his NASA team. And he promptly said that he wanted me to work on figuring out unobtrusive methods to measure team dynamics in these NASA teams. And I remember being kind of overwhelmed, but kind of excited. Like, this is a new way of thinking about team dynamics and what kinds of behaviors we can capture. And it was kind of my introduction to a new methodology, to really dig deep and think about this from a new perspective, and that perpetuated a line of different ideas and thinking that I engaged with, with Steve and the other students in his lab.

And I remember, maybe not one specific construct, I know we worked a lot on the construct of cohesion, but the comparison to known methods like self-reports. We struggled a lot with how to understand behavioral metrics and the associated self-reports, usually with construct and criterion-related validity. Anyway, those are science-y words, but basically you want to make sure that you're measuring what you think you're measuring. And so, you need to use established methods to do that. But the problem is, you're in a different headspace when you're answering a question. If I ask Steve how cohesive we are as a team, it's going to be different than looking at the behaviors of whether or not we hang out, whether or not he comes and talks to us when he has a problem. Those types of behaviors are different in kind than asking for a summation of his approach to our team.

And so, it's not necessarily construct [inaudible 00:44:50]. You know, I may or may not be answering your question, Daniel, but that's something that has been highlighted to me over the years, and why, Steve, I think it's so hard to be productive in this literature, because there are so many barriers to get over. How do we describe to the academic community that this is a meaningful, purposeful way in which to pursue the dynamics?

Daniel Serfaty: I think, if I hear you well, that first challenge in that first conversation, when you entered the professor's office and he challenged you like that, I am amazed that after such a high challenge you stayed at Michigan State.

Tara Brown: It was literally in the first five minutes.

Daniel Serfaty: Okay. But this notion of looking again at a team as a living organism with observables, and thinking, I can actually measure those things that Steve told us are literally in the heads of people, cohesion or mental models. We can actually, in an unobtrusive way, without asking people's opinions, measure something, and from that measurement infer, basically, the hidden variable, in a sense. I think that's a key idea, a transformative idea, at least for teams. Tara, one key idea.

Tara Brown: I think one of the things that caught my attention, and that I've been grappling with ever since, is to really distinguish between longitudinal studies of static snapshots of cohesion and other states versus really studying the dynamics and the emergence process of those states. And so, one of the things that I've really been thinking a lot about since my time at Michigan State, and even as we've studied cohesion and other team states within the context of some of our work at Aptima, is, what is the right way? And what does it really look like to actually study the dynamics of the emergence process? I think oftentimes we don't consider the temporal nature of teams and really think about where they're at in that emergence process when we're measuring their cohesion level or their trust level or other things.

Stephen Kozlowski: Yeah, just as a clarification. So say a team performs a two-hour mission, a military team as an example. Are you talking about that timescale, or are you talking about where they are in their life as a team, over the years?

Tara Brown: The life cycle of the team. So they're [crosstalk 00:47:21].

Stephen Kozlowski: They're different. The methods that are dominant have been dominant for a century, and I'm not slamming the methods. They've been very effective and very productive, but that's essentially it. You know, we might think about diversifying a bit. So the method is asking questions, and having a lot of rules to make sure that the answers to those questions tap the concept that you're interested in. Because often it's not an observable; it's something in the head, right? So we have a lot of rules on that in psychology, and we've got that down. But once you have that measure, you can correlate it with other measures, and you can use some statistical techniques, and they're very fancy techniques, but underneath they're still correlational. So you're looking at relationships. That's important; it tells us that this is related to that, but it doesn't necessarily tell us why it's related. And so at some level the methods...

... Why it's related. So at some level, the methods have to advance, and in some ways let go of some of the rules, because if you want to measure more frequently to begin to unpack it, then you have to measure things that are quick. So they're either single-item questions, if you're still stuck on questions, or they're the behaviors that you extract from video, or in a laboratory we can look at button pushes: this sequence of buttons means they were trying to accomplish this, so now I know what that is. Or, you know, we do some modeling as well, where there are software agents that behave according to theoretical mechanisms and we can study them at scale.

But this is all removed from the dominant "let's ask people questions and correlate those data," and beginning to move into, how do we actually capture these snapshots? And I think Tara raised a really key point: what is the timeframe? Well, the timeframe is really, what's the phenomenon that we're trying to capture? Some might require days, months, years, and some might be 20 minutes and I can get it. The method has to match that unfolding, and I think that's not well appreciated. And of course part of it is, the toolbox is under construction.

Tara Brown: Yeah. There's no real theory that tells you what the right temporal dynamics are. I think that's where the theory is lagging behind the methods, and it could inform the methods. And so we're developing theory as we go. But I think the other thing that I'll just touch on quickly before we move on is that it's commonplace, in the way we study cohesion and other emergent states, that we look at the average of the team and we say, are they high or low on cohesion? Then, does that average change over time, if we sampled them this month and then next month, or this hour and the next hour?

But what we don't really look at is, have they been together long enough that cohesion should have emerged? Or are they still early enough in the process where it is still emerging? So we're really not looking at the variability across individuals and their perceptions of cohesion. Is that variability growing or shrinking? Are they converging on a shared state, a shared perception of cohesion? Or are they getting further apart? It's really understanding not only the strength of their cohesion, but also the agreement on those states and whether they've converged or not.
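Tara's distinction, tracking not just a team's average cohesion but whether members' perceptions are converging on a shared state, can be illustrated with a small sketch. The function names and the use of dispersion (standard deviation) as the convergence signal are illustrative choices here, not a published measure.

```python
from statistics import mean, pstdev

def emergence_trajectory(waves):
    """waves: one list of individual cohesion ratings per measurement
    occasion. Returns (team mean, within-team dispersion) per wave;
    shrinking dispersion suggests convergence on a shared perception."""
    return [(mean(w), pstdev(w)) for w in waves]

def is_converging(waves):
    """Crude emergence check: is dispersion lower at the last wave
    than at the first?"""
    trajectory = emergence_trajectory(waves)
    return trajectory[-1][1] < trajectory[0][1]
```

A team whose ratings move from [1, 5, 3] to [3, 3, 3] keeps the same average throughout, which a mean-only snapshot would miss, yet it is clearly converging on a shared perception.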

Daniel Serfaty: So the three of you are actually bringing in this notion, which I guess is quite new in team research, of the life cycle of a team, again, of a living organism. That brings up two of the sets of questions I had for you, which you've started to answer. The question of methods: how do I study a team? Maybe we know how to study a team like an emergency medical team, whose task probably lasts a few hours. I can perhaps compress it in a lab into a couple of hours and be able to extract the results, or observe it in the wild while they are actually doing their emergency care. But how do I study a team that evolves over ten years, or five years? What are the scientific methods to do that? In a sense, once you see that team, you don't have access to its electronic team history, like we have an electronic medical record when we see a patient.

Stephen Kozlowski: Well, organizations do. I mean, to some extent you're pointing to what the future may look like for research, when these kinds of digital traces or behavioral traces can be fused across different platforms, with some degree of tracking over lengthy periods of time. I'll go back to: most of this research, 70-some-odd years of research, is static. That correlation is captured at one period in time, or maybe with two or three time slices. What we're trying to pioneer are techniques that allow you, even over short timeframes, to be capturing data every second, let's say, or once every couple of minutes for a longer timeframe, where you can actually begin to see how things play out, again, scaled appropriately to the phenomenon.

We did a project with my medical colleague where we were filming [inaudible 00:52:16] in a regional medical center. That's about 20, 30 minutes total. They're in, they're stabilized, and they're off to the ICU. So it's a very definable timeframe, and you can study what happens to the team. In this instance, we were focused on leadership in that kind of compressed timeframe. For some of the other phenomena, we've been studying analog teams for NASA, where you can look at a team for upwards of a year.

But we look at one team at a time, potentially. If you want to look at large numbers, you've got to use this technique; we do computational modeling. If we want to do something else, if I want it more in the wild and I want it naturalistic, you can do this other technique, but you're not going to look at as many teams. Every method has some offsetting liability in terms of the advantage or strength that you get. It's really important, at least I like, to have lots of different tools in my toolbox, to be able to use the different tools to understand phenomena that better fit one approach or another. I still do research that asks people questions, because there are some things you can't get any other way.

Daniel Serfaty: Sure. But it's an evolution. Sam, if we take the example that Steve just shared with us, because again, when you share that visual picture of the emergency care team coming in, stabilizing the patient, and moving on after 20 minutes, they have a whole lifetime. I don't know whether or not they've worked as a team before they entered that room, but this is essential data to understand their dynamics during those 20 minutes. Sam, how do we capture that?

Samantha Perry: The idea of digital traces is something that I've been pursuing a lot, up to my leading a technology called TeamVitals, which has about a decade of data that's gone into its development. Basically the idea is, how can we understand how individuals are interacting with other individuals by capturing emails, chats, any kind of interaction-based, communication-based, or even just behavioral-based data? The concept is utilizing some social network theory, which is basically the classic spider web of who's talking to whom. But how can we use that data to understand what events are happening, what outcomes are superimposed on different interaction patterns? In that example, Daniel, which teams have worked together in the past can be tracked and traced by pulling in historical records, by pulling in which patients individuals have worked on and correlating which patients have had the same temporal overlap. If you saw patient X at time one, but Steve saw him at time two with Tara, then we'll be able to know this is the history of how people have worked together, which can inform longevity, which can inform common knowledge bases.

Then our job as IO psychologists is to figure out to what degree that information helps us understand that team in that moment. So if Steve and Tara had worked together in the past on patient X, they share common knowledge about that patient X, Daniel. But what if there was a particular conflict or a negative thing that happened during Steve and Tara's interaction with that patient that caused a rift between those two people, and we never saw them work together again, whereas in the past they worked together every week? Those are different pieces of information that we can capture from just that behavioral data source or historical data source, that we can aggregate and understand without having to ask them any questions. And there are all sorts of caveats and things that we need to think about, but that's kind of how I would start it.
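TeamVitals itself is proprietary, but the kind of co-work history Sam describes, who has worked with whom and how often, inferred from shared patient records, can be sketched generically. The event format and function name here are assumptions for illustration, not the tool's actual interface.

```python
from collections import Counter
from itertools import combinations

def cowork_history(events):
    """events: (person, patient_id) pairs pulled from historical records.
    Returns a Counter mapping unordered person pairs to the number of
    shared patients: a crude who-worked-with-whom network."""
    by_patient = {}
    for person, patient in events:
        by_patient.setdefault(patient, set()).add(person)
    edges = Counter()
    for people in by_patient.values():
        for pair in combinations(sorted(people), 2):
            edges[pair] += 1
    return edges
```

On a toy log where Steve and Tara both saw patients X and Y, and Daniel only saw X, this yields an edge of weight 2 between Steve and Tara and weight 1 for the other pairs; a real system would layer timestamps, roles, and communication channels on top.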

Stephen Kozlowski: And you can never get that data by asking people questions. It's not the kind of question we'd ask. It's this idea of whether your dominant methods are snapshots, mostly just one, sometimes a few strung together, versus making a movie. And it's either one of those old-fashioned movies, kind of slow and clunky because you don't have that many data points, or it's literally high-def and you can just see everything unfolding.

Daniel Serfaty: But what it brings actually is extraordinary complexity. Perhaps even more than complexity, it's an extraordinary amount of data that is necessary to really understand in this new way of looking at teams, which is exciting. I wish I was a team researcher again.

Samantha Perry: [inaudible 00:56:23] be a CEO.

Daniel Serfaty: To have that amount of data that can actually inform even the question I ask at the moment of observation or the construct of the variable, I am interested at the moment of insalvation, It means that we have to carry those data with us for a lifetime in a sense, since we're going to be part of different teams. What do we do with all this data? Is the data essential to construct the theories, Steve now.

Stephen Kozlowski: At one level I want to say, each team is unique, each team has its own ecology, and if I really want to be that predictive, then yeah, I probably need to follow those people working together as long as possible. So NASA sending people to Mars, that team will be metricked to the gills, or should be, and we should be able to do predictive modeling of that team. And we can talk about it, but at least in some of the work we did, we can do that now with, I'll say, relatively primitive tools. Twenty years from now, it ought to be a piece of cake. But to think about it from a more generalizable perspective, like, what's more typical? I don't know that I want to be trying to sift through all of that data. I think that's where how to use methods intelligently quickly becomes useful. And I think these guys, Sam and Tara, are in a really great position to be at Aptima, because one of the neat things about Aptima is that you have this range of skills and capabilities.

It's not just IO psychologists and the typical statistics we'd use; you have data scientists, computer scientists who are using analytic tools that are designed to unpack the dynamics. They're not commonly known in organizational psychology, because they're a little bit alien, but these are the people I collaborate with in order to have the capacity to do the things I want to do. I would say, at scale, we rely on agent-based modeling to see what can happen to teams and to identify, I would say, promising targets for when I want to do the very costly research with humans, rather than asking some what-if question: well, what if we did this?

Well, I can run 200 teams over a two-year period to see if I can figure out what happens if. Or I can run a simulation with millions of agents and say, here's what can happen, here's what's likely to happen, here's what can happen in these dark corners that I could never get data to really see in the real world, or only with great difficulty. And here's where I really want to verify a finding, where I really want to see how robust this effect is with real human data. It's easy to get overwhelmed with data. Even though I would like to see more and more data, there's a point at which you'll say, okay, enough. How do we think about using data intelligently? When do I need the actual data from the actual people, in that great deal of density, because I really want to predict this one team that we're sending to Mars, versus, "I need to know what happens on average and I need to know where this team might fit in that distribution."

Then I have a better sense of what's likely to happen to them, and if I'm trying to lead them, if I'm trying to do some intervention for the team, that would narrow it down and give me a lot of guidance. I would say that in the work that I'm doing, at least in pieces, we're trying to really think about how you appropriately use methods to do really different things, while understanding that, to me, it doesn't have to be one size fits all, where we need every data point from every thought you've had about every team you've been on to figure out how you might behave on this next team.
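The agent-based modeling Steve describes, running many simulated teams under a theoretical mechanism to see the distribution of what can happen before committing to costly human studies, might look like this toy sketch. The specific mechanism (each agent drifts toward the current team mean, plus noise) and all parameter values are illustrative assumptions, not any model from his lab.

```python
import random
from statistics import mean, pstdev

def simulate_team(n_members=5, steps=50, pull=0.2, noise=0.3, seed=None):
    """Toy mechanism: each agent holds a cohesion perception and, at
    every step, moves a fraction `pull` toward the current team mean,
    plus Gaussian noise. Returns the final within-team dispersion."""
    rng = random.Random(seed)
    perceptions = [rng.uniform(1, 5) for _ in range(n_members)]
    for _ in range(steps):
        m = mean(perceptions)
        perceptions = [p + pull * (m - p) + rng.gauss(0, noise)
                       for p in perceptions]
    return pstdev(perceptions)

def run_experiment(n_teams=200, **kwargs):
    """Simulate many teams and summarize the range of what can happen."""
    finals = [simulate_team(seed=i, **kwargs) for i in range(n_teams)]
    return min(finals), mean(finals), max(finals)
```

With the noise turned off, every simulated team converges toward a shared perception; with noise on, the experiment exposes the spread of outcomes, including the outlying teams that would be hard to catch with real human data.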

Daniel Serfaty: I want to elaborate a little bit on that, because I think, like in many other fields, we are seeing a major shift between data and theory. It's almost like the data, or the data that is possibly available, is getting ahead of the theory. Before, we needed a theory first, to go and generate hypotheses and collect data in order to test those hypotheses. Now the data is here, and we are trying to build, in a sense, the theory or the guidance, the prescription, because the data is already here. That's an interesting tension that is pervasive these days in many scientific fields involving human behavior and performance.

Samantha Perry: When I think about some of [crosstalk 01:00:34] that we have, and some of the psychological phenomena that we've measured and metricked with self-reports, I feel like we've taken it for granted that that is how each construct is measured. But I think some of the theories lend themselves to more in-depth data, and we haven't considered that in some of our theoretical development. We laid out what cohesion is, and it has an inherently longitudinal, inherently behavioral component to it as it emerges over time, but we measure it with snapshots, and that's something that we're comfortable with. But thinking back on the theory, can we not associate it with the data that we're capturing now in a much more realistic way? It's difficult to make that case in the literature, even though it really does tie to the original theory. I just thought that that was-

Stephen Kozlowski: I would comment but I know Tara has something that she wants to say.

Daniel Serfaty: I would love to hear your comment after Tara's, Steve, by all means, Tara.

Tara Brown: I have a lot of thoughts on the data-theory balance. My personal experience is that we are a bit ahead of the theory in terms of the data that's available to us now. And what that puts us in danger of is, I think, becoming too atheoretical. We're in this interesting tension of having to create theory, but as Sam said, I think there are concepts and conceptual information within the existing theory that we can't lose track of. One of the ways that we handle it, in terms of thinking through these unobtrusive, novel measurement approaches for cohesion and other team states, is really taking a top-down and bottom-up approach to it. So really grounding what behaviors or characteristics exist that align with how we conceptualize something like cohesion, and then matching those to the data that's available, to make sure that the indicators and the unobtrusive data that we are pulling into our measurement of cohesion are at least grounded in theory, even if the way that we compile them into the assessment is more data-driven or diverges from what we typically do within the literature.

I think having that grounding within the literature, within the theory, is really important. If we become too atheoretical and say, we have all of this data available from teams, we're going to just throw it into some machine learning algorithms, see what spits out, and call that cohesion, I think we are in danger of ending up in a place where we can't really explain what we're finding. But the other challenge is, even when you go through that process of developing these theoretically driven indicators and gathering data on them, it's still an extremely rich but complex set of data that, as organizational psychologists, I don't think we have a way of making sense of without bringing in data scientists and folks, like Steve was saying, who can help us think about that data in a way that would allow us to think outside the box analytically.

But I think there's a decision point and assumption upon assumption that has to happen when you get that kind of data about, how do you aggregate it? Not just to the team level, but across time. And what are the assumptions you're making that help make those decisions? I would say it's easy for us to fall back on, that's too complicated. And so that's why I think our field continues to stick with the tried and true. But it's also the fact that we're bringing these novel methods and novel approaches that are theoretically driven, but at a different level of granularity than we typically have measured these constructs. Therefore there's a lot of resistance in the journal and publication outlets of, are we really getting at the same construct? Is what we're getting at really cohesion or is it some behavioral result of cohesion that shouldn't really be called cohesion? I think we open up Pandora's box in a good way, but there's a lot of questions that emerge as soon as you start going down this innovative path.

Stephen Kozlowski: I like to think of myself as a theorist, so theory should reign supreme. But I would also point out that methods constrain theory. Most of the theory in my field is basically constrained by: you're going to turn your thinking into a hypothesis with core measures of those constructs, and then you're going to use some correlationally based techniques. And the ability to correlate the data is, at base, limiting the way I think about how things work, which is why most of our theories are static and really don't think about how things play out over time.

How does a phenomenon emerge? That's not a correlation that you can examine. You really have to look at the underpinnings, and some different ways of visualizing that data, or that phenomenon, as it manifests. Rather than saying theory has to lead everything, you have to appreciate where theory comes from. Where did Darwin come up with his theory of evolution? Not sitting in a chair, drinking a scotch or whatever, and coming up with evolution. He observed, he collected a lot of data, and then he tried to make sense of the data.

I agree with Tara, there's a danger of relying too much on machine learning techniques where we don't know what the machine knows, so we don't know how it came to its conclusion. Of course, the quality of the data then becomes really critical. But there's value in having that data and using those and other techniques to try to figure out what in fact is going on, to begin to inform theory and, quite frankly, to get theorists thinking more dynamically. Because most of the theories are really static. Even when people think about dynamics, they think, here's the theory at time one, and at time two, and at time three, which is not dynamics. We have complex connections, feedback loops, things of that nature.

I really think that the methods and the data can help push theory to begin to catch up with these techniques. And we're at that point; we're at the point where that needs to be happening. Yeah, I think it's an exciting time if you're interested in the dynamics of phenomena and systems, because we're now beginning to see this kind of cross-informing between different disciplines that really helps each other out, in ways that I certainly didn't get when I was trained as an I/O psychologist.

Daniel Serfaty: Yeah, I think the three of you make excellent points. This is a debate that is certainly not just for team research, it's not even just for psychological research; you see it again and again now in pharmaceutical research and other places, where the data advocates, not the theorists, say quantity will trump quality. And there is an elegance, at least for those of us who got educated in the classical way, an elegance in theory, that you don't have in a massive amount of data. But that tension is, as you said, Steve, very current. We can turn it into a creative tension, and it's a very exciting time to be a scientist, because now you actually have multiple tools at your disposal. You have the data and you have the theories and you have the models and you have the methods. And all of these together can lead to a deeper understanding of teams.

Well, all this discussion about the preponderance of data and the need for theory to balance those data brings us basically to the last portion of this discussion, which is the future. And I'm going to play a little game first with the three of you, with your permission. I'm going to challenge you with a little problem, and we can discuss it for a few minutes. Let's imagine hypothetically, and from Steve's remarks we know it's not fully hypothetical, that NASA comes to this team today and says, "Hey, you guys are experts. We are sending a team to Mars in a few years and trying to bring it back alive to Earth. And we're going to ask you, as experts, questions. How do we compose or form that team? How do we select the particular individuals within that team? How do we train them? How do we keep the team there, the cohesion of the team, over a long period of time? What is the worst thing that can happen to us from a teamwork perspective over a long period of time?"

Can we answer all these questions now? And if yes, let's start answering. Here you have unlimited budgets, limited time though, to basically start thinking about that. Let's assume that for reasons that are outside of the scientific realm, NASA decides that we need to send five people in that first mission. What kind of questions, if not the answer, are you going to start asking about how to compose that team?

Stephen Kozlowski: I'm going to offer an opinion here. I would say probably the biggest challenge here is how to compose the team. Selecting, there are going to be certain skill sets or experience profiles; NASA has that down, they've been selecting astronauts for a long time. But composing a team, of all the questions that people might ask science to answer, here's a pool of folks and we want to allocate them to teams, or we want to build army squads, or organizations want to know who should go together, theory is lacking and the data is lacking, really sorely lacking. It's because people differ on an extraordinarily wide range of things that are potentially important to composing the team. We don't know what they all are; we can't measure them all, or it's expensive. But more importantly, because of that, there's very little science. The database by which one could inform theory and help build theory out is really not there.

I would actually say, one of the things that my team and I are doing, and this is where modeling comes in, is composing teams on a fairly wide range of characteristics, certainly many more than you can study with real people in the real world. Then we can run simulations and see what happens to those team members. That's a work in progress, so I want to answer a couple of the others. Almost everything else here is tractable. The other one that's really difficult right now is, how do we keep cohesion over long periods of time? As I said, Sam and Tara were at Michigan State when, I think, I had just gotten a NASA grant. We were just getting into doing some research with an engineering group that had built a sensor platform we were using that could track interactions, who you're interacting with.

It's only now, almost a decade later, that we're working with data where we have teams, I'll call them in the wild, but it's a controlled wild, where people are in a mission simulator. They've spent anywhere from eight to 12 months together, living in a habitat, a facility with limited opportunities to explore the external space, pretending to be on the surface of Mars. We don't have many teams; we had to collaborate with lots of people to get access. But I can tell you what happens to cohesion in these teams over time: it breaks down. Teams start high, they like each other, they trust each other, they're glad to be there. And basically four to six months into the mission, though it varies a little bit, for reasons of its own, for each of the, again, three teams, not a lot of data, cohesion begins to destabilize.

Daniel Serfaty: Is that because they're isolated for a long period of time?

Stephen Kozlowski: It's an immersive experience-

Daniel Serfaty: So they're isolated for a longer period of time?

Stephen Kozlowski: It's an immersive experience. You live, sleep, and work with the same people, and you have limited ability to email or communicate with your friends and family outside, because NASA wants to simulate a mission to Mars. So your social world is very small, this is a team of five or six people, and it begins to destabilize. Usually just one or two people start to feel less cohesive than the others, and then it spreads, like a contagion, across the rest of the team, and by the end of the mission, y'all just want to get away from each other and go home.

Now, this is eight or 12 months. So if we're going to send people to Mars, that's about 33 to 36 months. This is about the time they would be getting there. So the answer to your question, how do we maintain it? Well, we don't have the answer yet, but we have indications from the data we collect that we can detect it from the sensor platform. So if you know that things are beginning to go bad, what do you do? And I think a lot of it has to do with how you communicate that information to help team members maintain their cohesion, assuming that that's something they want to be able to do, because once they fall into conflict, it does not get better.

Daniel Serfaty: So basically you want to use the tools at our disposal now to remote diagnose the onset, or to have a leading indicator, that we tell you the team is about to lose something, and intervene at that point?

Stephen Kozlowski: What we were proposing to NASA was not reporting back to big brother and having someone communicate with the team. I mean, you could have a range of interventions, but basically, how do you help team members self-regulate their social cohesion with each other? If you've ever had a conflict or a problem with somebody, you might not know it, right away, that you got somebody angry, you did something, said something, didn't do something, didn't say something that you were supposed to say. And so right now, there's something going on and you don't know about it. The longer you don't do something to resolve it, the worse it's going to get. Like all of us vary on our social perceptiveness, some people are really good at this because they monitor a lot and know what to do, and a lot of people, well they don't monitor or they don't know what to do.

So if you could provide feedback and some guidance, and think about how this could roll out across the team, and make it a self-management tool, not a big brother tool; that's what we had proposed as a kind of architecture, an asset. [crosstalk 01:14:26] that latter part, but we do have the sensor platform, and that technology got transferred to NASA. So it's theirs to figure out what they want to do with it, but we have data showing that, using very simple algorithms that track how frequently you interact with people and how that changes over time, we can predict social cohesion, and in particular, breakdowns.
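The "very simple algorithms" Steve describes, tracking how often pairs interact and flagging when that frequency drops, could look roughly like the sketch below. This is a hypothetical illustration, not the actual NASA-funded system; the event format, the seven-day window, and the 50% drop threshold are all invented for the example.

```python
from collections import defaultdict

def interaction_counts(events, window_days=7):
    """Bucket pairwise interactions into windows of window_days.

    events: iterable of (day, member_a, member_b) tuples, a stand-in
    for the kind of log a wearable sensor badge might produce."""
    windows = defaultdict(lambda: defaultdict(int))
    for day, a, b in events:
        pair = tuple(sorted((a, b)))          # pair is unordered
        windows[day // window_days][pair] += 1
    return windows

def flag_declines(windows, drop_ratio=0.5):
    """Flag any pair whose interaction count falls below drop_ratio
    times that pair's own best prior window, a crude leading
    indicator of a cooling relationship."""
    flags = []
    best = {}
    for w in sorted(windows):
        for pair, count in windows[w].items():
            prior = best.setdefault(pair, count)
            if count < drop_ratio * prior:
                flags.append((w, pair, count, prior))
            best[pair] = max(prior, count)
    return flags
```

For example, a pair that interacted ten times in the first week and only twice in the second would be flagged in window 1. A real system would need far more care (time of day, task context, normal weekly rhythms), but the shape, count interactions, compare to the pair's own history, flag departures, matches what the transcript describes.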

Daniel Serfaty: That's great, and here I was trying to give you a hypothetical problem, and you've been working on it for years. Let's take the next step. Samantha or Tara, pick up the answer. Let's assume that those sensors are on people, they collect data, and the data and the theory collaborate to tell us that something is about to go awry. And it lets the team members know, in month seven, somewhere, getting closer maybe to Mars, that things are not going so well with the team. Two questions: how would they know that, hypothetically? And second, if you were there to advise on an intervention that they could do, what would that be?

Samantha Perry: I think it depends on the nature of the situation, whether it's task-based or not task-based. I don't know if we covered this, but task cohesion and social cohesion are complementary in nature, but based on different behaviors. Task is really focused on what the job is, and social is how much you want to spend time with people outside of the task. We tend to focus on task cohesion in particular when we're at our workplace and in our jobs, but in these environments that are so interconnected, if you ignore the social cohesion part in the selection or team-building part of establishing these teams, it can be catastrophic, because these people are your social network in addition to your work. There is no work-life balance; it is the same. And so I think not recognizing that can be really quite terrible. To answer your question-

Daniel Serfaty: I won't let you get away with that answer. Let's assume that you can actually measure the task-based versus [crosstalk 00:01:16:30].

Samantha Perry: Exactly, yeah, I just wanted to highlight that in case our listeners aren't used to hearing task and social.

Daniel Serfaty: How would you do the hard work of suggesting or provoking an intervention? Even without the big brother model, so that the people can self-diagnose and then self-remedy.

Samantha Perry: You can give individuals feedback, like when you have a Fitbit or something, and it says, "You're only at 8,000 steps and you have an hour left of your day." It highlights, like, "Hey, you have so much time left to fix this." In a conflict situation, you might say, "Hey, a couple of days ago, you had this point of conflict. You haven't talked to that person since. You might want to go talk with that person because you haven't engaged after that."

Maybe a high-arousal situation, you both had really high blood pressure or something, let's say there was a sensor that captured that. Maybe, and I'm just making this example up off the top of my head, if there was this non-interaction from that time point, perhaps there's a system that's able to predict: "Hey, we've found in past data that when you have a conflict event and you wait three or four days, that becomes a problem. But here you have a little bit of time to fix this before it becomes a problem." You can give that to an individual, and if three or four or five days go by, maybe the system is able to tell the leader of that small team: "Hey, let's have a team meeting, maybe do a team activity. Let's build some social cohesion again. Let's go have some space drinks."
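Sam's Fitbit-style nudge, a detected conflict, no contact since, escalating feedback after a few days, reduces to a rule like the one below. Everything here, including the day thresholds and the return labels, is made up for illustration, exactly as she frames her own example.

```python
def conflict_nudge(conflict_day, last_contact_day, today,
                   warn_after=3, escalate_after=5):
    """Decide what feedback to give after a detected conflict event.

    The thresholds mirror Sam's hypothetical: a few days of silence
    earns the individuals a nudge; longer, and the team lead is
    prompted to organize a team activity."""
    if last_contact_day > conflict_day:
        return "ok"  # the pair has already reconnected
    days_silent = today - conflict_day
    if days_silent >= escalate_after:
        return "alert_team_lead"
    if days_silent >= warn_after:
        return "nudge_members"
    return "monitor"
```

A real system would fuse many more signals (interaction frequency, affect in communications, physiology), but the shape, detect, wait, nudge the individuals, then escalate to the leader, is the one described in the conversation.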

Daniel Serfaty: So your suggestion is to have some kind of a presence, I dare say, artificial intelligence, that is there monitoring, roaming around, knowing the theory about team cohesion, collecting the right data, and then suggesting some solution. Basically it becomes not truly a team member because it doesn't replace the function of a team member, but it becomes kind of a rolling attendant that is there trying to help the team. Tara, you agree, or you think that's absurd?

Tara Brown: I guess it depends on how you talk about artificial intelligence and what that is, and that's a whole other discussion that we've had. But I think, for me, what is key is collecting the right data that can identify when cohesion might be going in a downward trend, and then providing some sort of feedback via some display, some alert, something at the individual and team level, to make them aware of it. Because I think there's different levels of intervention that can be used to help guide, as Steve said, this team self-regulatory process.

There might be situations where cohesion is declining and the team is not aware of it. So the simplest intervention might simply be making them aware that their cohesion is declining, and I think you do that by providing them with specific indicators from the data that help them understand what is changing. We're seeing less frequent interactions, or there seems to be more negative affect being displayed in communications, or whatever. Letting them know what you're seeing in their interaction patterns that indicates cohesion might be off helps them, at a very initial level, be aware that there's a problem and understand what it is that's indicating there's a problem.

Beyond that, there are times where they might be very aware that cohesion is going down the tubes, and so it's a different type of intervention, it's providing them with strategies for how to start repairing that. And frankly, helping them see why it's important for that cohesion to be repaired, and reminding them of the consequences of that going beyond repair, and is there a point of no return of that cohesion? So reminding them of their mission, what is their shared mission? How is this getting in the way of them accomplishing that? Having them come back around that shared mission and reuniting around that, even if there are maybe social frictions, helping them come back around the mission that they signed up to go accomplish.

So I think there are interventions all along that spectrum, including giving them strategies for how to engage in conflict resolution. Some people don't know how to resolve conflict. And so, if there is a conflict that is causing the cohesion decline, I think there are conflict resolution strategies that could be provided, and making them aware of that.

And I think just having that kind of real-time feedback, allowing them to see trends, not always having to push an alert, but providing a system where they can monitor. Very simple: things go from green to yellow to red. "Oh, that's a very simple cue that something needs my attention." And I think it can't go through a big brother system down on the ground, because one thing we know about a Mars mission is they're not always going to have communication with the ground, and if they do, it can be significantly delayed. So I don't think that's the right path. I think it's got to be feedback directly to the team, and the whole assessment, feedback, and intervention needs to be built around them learning to self-regulate, resolve conflicts, and repair cohesion.

Stephen Kozlowski: And this is where it's not just "the team," because the conflicts are often between a person and the team, or two people, or two on one. Now you're getting into really having to differentiate, not just individual and collective, but where has the friction developed in the network? One of the things we can see with these data, because we can look at interaction patterns now, basically for 16 hours a day over months and months and months, is when there are these friction points. It's not the collective; it really starts with a couple of people, and it changes the structure of those interaction networks.

And the danger is when you have a structure that's more or less persistent, and then you've had a conflict that changes the network forever beyond that conflict point. The network does not recover. If you're not able to detect that that could happen, or intervene quickly after it does, then there's a change in that social system that, it's got a new attractor, so it has its own equilibrium now. It doesn't go back to this, "Hey, we're all interconnected." It's more differentiated. And so you've lost the opportunity to fix it.

Daniel Serfaty: It's almost as if you've created a new structure by then; it evolves into a new structure that is not reversible, in a sense. So I like this multiple-visuals idea. Maybe NASA will have, the same way they will have a status board to tell them how fast they're going, or how close to Mars they are, or the health of the hydraulic system in the capsule, a board for team health.

Stephen Kozlowski: Dashboard.

Samantha Perry: Or TeamVitals, if you will.

Daniel Serfaty: Yeah, or TeamVitals. But in space, it's kind of the Fitbit on steroids, in a sense. Joking apart, one thing you're pointing out is that just a static snapshot is not going to do it. You're going to have to have a sense of the dynamics and the evolution, and maybe stop the team disease, basically, before it metastasizes into the entire team body.

Stephen Kozlowski: It's an instance, too, where you're going to need a lot of data on that particular team. We're not interested in a hundred teams; NASA in particular is interested in this one team for the next three years, and for the three years before that when they're training, and the three years after that when they're debriefing and training other people, or what have you. So you're really looking at the ecology of a single team, and this is where some of the deep data techniques, the artificial intelligence based algorithmic techniques, really have a lot of promise. Because one of the things that we observed, again with small numbers of teams, is some people interact a lot, that's their normative behavior, and you see that reflected in what those patterns look like within a team. In others, not so much, but that's normal. So there's no one-size-fits-all here, no "gee, they didn't hit what we think is the mean or the average." It's got to be calibrated on that particular set of teams, and those people, how they're uniquely interacting when you're doing it well, and then you mark departures from that.
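Steve's point that measures must "be calibrated on that particular set of teams" amounts to scoring departures against a team's own baseline rather than a population average. A minimal sketch (the numbers are invented, and a real system would use far richer signals than a single daily count):

```python
import statistics

def departure(history, current):
    """Z-score of today's interaction level against this team's own
    history, not against an average across teams."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history) or 1.0  # guard a perfectly flat baseline
    return (current - mean) / sd

# The same raw level, 20 interactions a day, is perfectly normal for
# a quiet team and a sharp departure for a chatty one.
quiet_team = [18, 22, 20, 21, 19]
chatty_team = [60, 55, 65, 58, 62]
```

Flagging a team only when it departs from its own norm is what keeps a naturally quiet crew from being mislabeled as unhealthy, which is exactly the "no one-size-fits-all" point in the transcript.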

Daniel Serfaty: That's a sense of the future here, Steve, because in a sense it's new, perhaps individual differences, but for teams: understanding that each team is different, that each team should be treated differently, and that each team should be trained, augmented, and taken care of over time differently. That is perhaps the ultimate goal here. It's really a very exciting, even ambitious perhaps, but exciting view of how to deal with team research in the future.

Stephen Kozlowski: This is a high-value team, right? It's going to cost bazillions of dollars to send a crew to Mars, so we're very, very interested in the outcome. It's worth that investment. And there are going to be other high-value teams in organizations and government where it really, really matters that they're working together optimally. And so you want to put in all the resources you can afford to invest to help them function as effectively as possible.

There are other instances where maybe an off-the-rack solution is going to be really helpful and fine, but I think there are some gradations in this. But yeah, I think the future is customizable.

Daniel Serfaty: Maybe that is exactly what my very last question is about, for which I will require a one-minute answer from each one of you, and we will end with you, Steve. We tackled a few things, and perhaps what is now a super designer solution for an elite team of astronauts going to Mars will, 20 years from now, basically be the popular solution for management teams in corporations, or in the army, et cetera. That will be something.

I'd like you to think for a second, given what we know today, given that shift that you have pointed towards the theory and looking at teams over time, and looking at this interplay between the data that we collect and the data we use to optimize the performance of those teams. Close your eyes and open them, and it's now 20 years from today, the end of 2040. What have we done in the past 20 years that are really transforming our ability to understand, and maybe even help teams?

Samantha Perry: So the thing that comes to my mind is being able to metric not only the people, but the context, and to understand what components are really critical in those contexts. The requirements for effectiveness in an astronaut team are entirely different from those of an executive team. There may be shared components, but the core attributes that a team must embody to be successful are likely going to be different, and require different understanding and measurements, in order to track whether or not that team is hitting the milestones of effectiveness that its context requires. And so, whereas our entire conversation has not been about context, I think that's going to be a critical thing for us to really capture in our measurements, and even in our theory of teams. How do we establish the effectiveness of teams, now that we have so much data, given the different contextual environments in which those data are being captured?

Daniel Serfaty: Thank you, Sam, for that. Yes, I think we talked implicitly about context, in a sense. As a systems engineer, I cannot think of work that doesn't take context into account. I know it's new for a lot of disciplines, but you cannot just design a controller without understanding the context in which that controller, that decision-making operator, actually operates. You need that.

Samantha Perry: I think a lot of the teams literature attacks it, but not necessarily directly.

Daniel Serfaty: Yes.

Samantha Perry: [crosstalk 01:28:29] bounds the theories based in context, as opposed to articulating the role of context in the theory, and that's more of the distinction I'm trying to make here.

Daniel Serfaty: Which will, again, increase significantly, maybe drastically, the amount of data you have to carry with you in order to truly understand what the team does and why it does it that way. Tara, your prediction?

Tara Brown: I second what Sam just said. And I think one of the important things about contextualizing is interpretation. What good and bad look like in these team states and processes might vary significantly based on context. So I think context is important for interpretation, and for prediction as well.

One of the things that I keep thinking, just given the conversation with you and Steve a few minutes ago, is that teams are as unique as the humans that comprise them. And one of the things that we've seen in the individual learning and training literature is this move toward personalized, tailored learning and training experiences. And I think, as we gain all this data about teams, and we will, I think, have environments and people more fully equipped with sensors that provide a more consistent, continuous assessment of them, we will be in a position where the way we train and augment teams becomes very personalized. Whether that is through artificial intelligence, what we call sidekicks or digital assistants, something that comes alongside the team and provides tailored or personalized augmentation depending on what that team needs and what context it's operating in, or maybe it's infusing the team with, maybe at that point, actually intelligent artificial intelligence that takes on some characteristics of a human.

And I think we will be to the point where maybe the AI is actually more intelligent and able to function more as a team member in another 20 years or so. So I think the nature of teams is going to change in terms of bringing more artificial intelligence into them. And because of the wealth of data, and the assessment we're going to be able to create from that data, I think there will be much more tailored and personalized training interventions and training opportunities.

Daniel Serfaty: Thank you, Tara, for that vision of high personalization and individualization of teams. Steve, take us home, and give us your 20-year prediction.

Stephen Kozlowski: Well, I'm going to build on both Tara and Sam, because I think these trends have certainly become more salient from COVID, but all it's done is accelerate them. So what trends do we see? Scientifically, we've been studying teams for like three quarters of a century: they're co-located, they live together for a long time; that's the science base that we have. But if we look at what's happening, I'm on multiple teams, there's a lot of churn in team membership. So really, when I think about teaming in the future, it's not going to be that I'm on one or two teams, more or less constantly, for long periods of time; I'm part of a reconfigurable network. I'm connecting with lots of different people to perform a variety of different tasks and projects that are in different stages of completion.

I mean, this is organizational life now; it's just going to accelerate. So I'm going to go from a team-centric view to more of an individual one: I'm a person who's got to manage these relationships on multiple teams. I'm going to need a teamwork coach. I want that artificial intelligence, but not focused on me on one team; I want it helping me work with a variety of teams. I'm on so many that my ability to maintain all those relationships, and remember all the things I need to remember, might get somewhat challenged. I mean, we build trust when we're with people a lot, and together we get to know their unique characteristics, and that won't completely go away. But if I'm on 50 or 100 teams, I need some help here.

So I see where you'd be looking to have teamwork coaches that are helping the individuals and talking to each other, so that it's not an individually based solution, but a more collectively based or dyadically based one. It's much more about, not just the individual, but the networks that create the various teams that I'm on, and helping me manage changing relationships. Because the way I deal with Tara in a conflict is probably really different from the way I might have to deal with Sam or somebody else.

So understanding a little bit about what are the best styles to use, et cetera, et cetera. So I would see it as this kind of customizable solution, really on steroids. And when I talk about some of the NASA research that we've done, or the sensors, there's invariably somebody who brings up the ethics of all of this. And I just bring out my smartphone and I say, "Do you have location services enabled? Because right now you are providing data, data about where you are and who else is around you, to a variety of entities who are using that data right now to draw inferences about you." And maybe they're anonymizing the data, so they don't care about you per se, but those data are being used to understand the dynamics of group behavior, so they can sell me things mostly.

But I'm a social scientist, I want to be able to use this computational social science to understand human behavior in ways that heretofore have not been possible, and to use it to help people do what they want to do, accomplish what they want to accomplish. So I don't think it has to be where this is some kind of big brother, but these are tools to help you navigate this much more complex social world that we're going to inhabit in the future.

Daniel Serfaty: Well, bravo. What an exciting vision, using all this technology and data for good, as opposed to just for selling or commercial purposes. I think this is the mission of the social scientists of tomorrow. And thank you, Samantha and Tara and Steve, for sharing all your thoughts with us. You really have increased our knowledge in a very meaningful way in our exploration of the magic of teams.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast, and tweet us @mindworkspodcast, or email us at [email protected] MINDWORKS is a production of Aptima Inc, my executive producer is Ms. Debra McNeely, and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.