Police In-Service Training

Delayed Decisions in Policing: Choosing the Least Worst Option

Scott Phillips Season 1 Episode 25

Send us Fan Mail

If a police officer is facing a critical incident, they may delay their decisions because of something called “decision inertia.” Officers don't freeze up, but they delay or fail to make decisions due to uncertainty. Paradoxically, that uncertainty can be the result of training or agency policies that are intended to guide behavior. Dr. Brandon May explains the issue and discusses his research, which found that when officers are offered a least worst option, they will make a decision to resolve a complex choice.

Main Topics

  • Redundant deliberations and the “deliberation loop” can delay decisions.
  • A good decision can simply be the least worst option.
  • Police agencies need to accept that officers must be flexible in order to make good-faith decisions.

Don't forget to like, FOLLOW, and share.  Sharing this podcast or an episode is one of the best compliments I can receive, and it will help grow the show.

And don't forget to provide a review.  Giving five stars is never a bad idea.

Feel free to email me your comments using the "send us a text" option (above), or at the following email address: policeinservicetrainingpodcast@gmail.com 

You can also contact me at: Bluesky: @policeinservice.bsky.social

The views and opinions expressed in this podcast are those of the author and guests, and are not authorized by and do not necessarily reflect those of the New York State Division of Criminal Justice Services or the State of New York.

SPEAKER_00

Welcome to the Police In-Service Training Podcast. This podcast is dedicated to providing research evidence to street-level police officers and command staff alike. The program is intended to help the police and law enforcement community create better programs, understand challenging policies, and dispel the myths of police officer behavior. I'm your host, Scott Phillips. While the job of a police officer can be kind of boring, a sudden, unexpected critical incident requires a decision, and often those decision-making situations do not allow the officer the luxury of time. Unfortunately, for the officer or police supervisor, that point is irrelevant. As we've seen over the past few years, a delayed response is considered a professional failure. But as we have seen from some of the research presented in the past on this podcast, there are psychological aspects of policing that are never considered before the officer's behavior is judged. First, there can be a lack of clarity in many critical incidents, so officers may not really understand what's expected of them. Second, officers never really know what other officers will do, which then impacts their own decisions. Finally, as suggested a moment ago, there may not be a lot of time to understand what the proper decision should be. All of this can lead to decision inertia, which is a delay or failure to make an acceptable decision. Today we are joined by Dr. Brandon May from the Florida Institute of Technology, where he is an assistant professor of forensic psychology. Before joining Florida Tech, Dr. May served as an applied cognitive psychologist with the UK Ministry of Defense's Defense Science and Technology Laboratory. Thanks for joining the podcast. No, no, I appreciate it. Thanks for the offer. I'm glad you were able to make it. So before you joined Florida Tech, you earned a degree in applied psychology from the University of Portsmouth in the UK. What got you interested in applying your research area to the policing world?

SPEAKER_01

It's a great question, and it's a pretty long story, so I'll give you the brief overview of how that worked. Sure. So I started in forensic psychology. My master's degree led me to explore opportunities to work with offenders and work alongside policing, in part to reduce recidivism and risk, and a few other points. And long story short, that wasn't for me. I didn't enjoy that as much as I thought I would. So I joined, well, the University of Portsmouth, but simultaneously, the UK Ministry of Defense. And the majority of that work specifically required me not just to think conceptually and theoretically and apply that psychological knowledge I'd built up.

SPEAKER_00

Right.

SPEAKER_01

Um, but to work alongside practitioners to try and resolve real-world problems, whether it be with the army, the navy, the air force, or, more interestingly, with security and law enforcement. So that interest stemmed from working alongside practitioners, not least the fact that you get to see direct impact, right? You're not just working with the police or with security or with intelligence services and then sitting in the background while something happens that you never observe. You're seeing it in real time. So whatever we were doing at that particular point resulted in some sort of real-world, immediate impact. And that's really where it stemmed from.

SPEAKER_00

Right. It's actually nice to see when your work is being used. So with respect to decision inertia and moral dilemmas, which we'll get into in a few minutes, why is this area of inquiry of value or relevance to the police?

SPEAKER_01

Yeah, so again, I'll give you a very brief backstory behind that. At the time, I was working as a research assistant for Professor Becky Milne. And one of the things we were exploring at that point was how we elicit information in high-stakes scenarios. This was just before the Grenfell Tower incident in the UK; for your listeners, that was a big tower block fire which killed many people. But soon after that, we saw a big terror incident in Manchester. And one of the observations from that, as we were doing this information elicitation project, was that many officers who were in attendance at that scene wanted to respond, but often delayed their decision. So there was a sort of redundant deliberation over all these decision-making pathways. And what we saw as a consequence of that, and this is not specific to the police, but my interest was very much towards the police, was: why do people who want to make a decision, who are trained to make very reactive and proactive decisions very quickly, under time constraints and under uncertainty, often delay the decision? They eventually do make a decision, but why do they do that? And in terms of relevance, for us that was an important question: what is driving that sort of cognitive delay in decision making? So we applied that originally to the Manchester Arena incident, and that stemmed further and further into, I guess, the cognitive architecture of what makes good decision making, or in these cases, what makes least worst decision making. There's always going to be a bad outcome, but it's the best outcome given a set of bad circumstances.

SPEAKER_00

Okay, yeah. How do you minimize the bad that's probably gonna happen? Sure. Yeah. Right. In your paper, you first discussed various stressors in critical incident decisions, such as lack of time, as we mentioned, or different people working together and not knowing each other's moves. Then you wrote, and I'm gonna try to quote this here: "In response to these challenges, strategic decision making often resorts to alternative coping strategies." So that's the quote. Can you clarify some of those alternative coping strategies?

SPEAKER_01

Yeah, and I think it's easy to assume that when individuals are faced with ambiguous, uncertain, high-stakes consequences, they resort to what I would term maladaptive behaviors. In other words, they will essentially just not make a decision. And that's not the case; in many of these instances, they do. So among the strategies we often see, one is this sort of status quo maintenance, where decision makers will resort to what they already know. So there's a specific policy they may have. Let's say they've been trained, and we saw this in Manchester, for example, where we had what's called Exercise Winchester Accord, where they were trained in how to respond to counterterrorism incidents. So they resort to that default knowledge, that status quo, for decision making. And that doesn't substitute for inaction, I should say; it just delays action, because they're going through all of these scenarios of what they could do, but it doesn't quite match up to what they're seeing. As a consequence of some of that, and I extended some of the literature here, we get this temporal displacement. So it's not that they're deferring decision making to another time, but they're sort of deferring decision making to others. So, let's say you have a command structure in place, and you're on the ground as an officer. You see something which requires maybe a little bit of clarity, maybe it's authorization, but you are sort of deferring decision making to someone else. So: I'm gonna confirm with my, I don't know, commander at the scene, saying, I'm gonna do this, could you just clarify that's the best type of response? So they're getting all these strategic decision-making alternatives from other people. It's a way of getting accountability off-loaded to someone else, or clarifying a particular decision. And that gets that sense of, I guess, responsibility sharing, right? If we've displaced it, we've given it to someone else; it's sort of more horizontal decision-making. And in part it reduces liability, because if I've confirmed it with my commander and that goes to inquiry, because we get bad outcomes in many of these scenarios, I can say it was a joint decision. We get this in interoperability scenarios where it is a collective responsibility of the organizations. And if I follow what I'm gonna refer to as SOPs, or standard operating procedures, I'm gonna be in a position where I'm pretty resilient to any potential negative consequence. I'm not accountable criminally, I'm not accountable civilly, because I've just followed what the procedures tell me to follow. And what we talk about in this paper, in particular for counterterrorism, is that that's potentially problematic, because it can lead to what I would say are suboptimal decision outcomes. They're still bad outcomes, but they're probably the worst of the worst outcomes.

SPEAKER_00

So, in other words, you're not avoiding or minimizing the worst outcome; you're actually getting it from those kinds of things. I thought it was interesting also, I think it was in that part of the paper, where you were talking about heuristics, the shortcut methods for decision making that were studied by Amos Tversky and Daniel Kahneman. And I did some work on that as well: you know, you kind of rely on, okay, I've done this before, and this is what the outcome was, so I'll rely on the same approach to dealing with it.

SPEAKER_01

Right. And I think Roosevelt made a really good observation in one of his very famous quotes, and I'm paraphrasing slightly: the best thing is the right decision; the wrong decision sort of sits in the middle; and the worst decision in totality is not making any decision at all. And that's one thing we want to try to avoid: officers, first responders, whether it be policing or military, being in this sort of deliberation loop, as I like to refer to it, just continuously looping around to the point where you just don't make a decision. And when we've got school shooters or we've got marauding terrorists, and we can apply this in many different contexts, right? Just staying in that loop of decision making becomes problematic.

SPEAKER_00

Yeah, it doesn't get you anywhere.

SPEAKER_01

Exactly. We try to avoid that.

SPEAKER_00

Now, I also found it interesting that you mentioned individuals exposed to stress may encounter fragmented or distorted memories, due in part to intentional rehearsal that inadvertently introduces misleading details. Now, I recently read a paper that indicated that officers who were trained in video simulations were no less likely to fire their weapons on the street. And the goal was to get them to, you know, make better decisions in the use of force. But what the research suggested was that the simulations often ended in a shooting situation, so the officers simply thought that a street encounter would end the same way. I was wondering if you had any thoughts on that.

SPEAKER_01

Yeah, and this probably leads to a later point we'll discuss. We talked about this in terms of Bayesian updating and Bayesian paralysis, where individuals will often rely on a schema: I have done this scenario before, and ergo the context or situation I'm faced with now either matches that or is very similar to that, so that should be the best approach. And that can be a distortion to some extent, because what we see in particular incidents, which are by definition ambiguous and uncertain, defined by a lack of information, is that they don't map to what we already hold in memory or schema or anything else. So what we recommend in these cases is that you shouldn't rely on a base-rate memory of a particular training experience or a scenario which may be somewhat similar, because these incidents are so unique. And we talk about this later, not just in this paper; this is part of a bigger PhD thesis that I wrote as a consequence of these studies: we can understand this via Bayes' theorem. So we have a prior, and we'll define the prior in the context of, let's say, training exercises that I've participated in and learned from, and that's now my memory of what I should be doing. But I'm then faced with information on a situation which may not map very well to that. Now, if I apply that memory of what I should do to that situation, that may result in a bad outcome, and therefore we get severe consequences. What I need to be doing is looking at what information is coming in that would change that memory or change that response. An example I often give is if someone's reaching for their waistband, my training as a police officer may tell me that they may be reaching for a weapon. My reaction is not to immediately pull my weapon and say, okay, I'm going to draw and potentially shoot or do something else. It is: okay, let's see what other information or cues are available in order for me to make a better decision. It may be that the context actually tells me they're reaching for their wallet, because their driver's license is in there and that's what the situation calls for. They're not reaching for a weapon. Or, if it's a context with a lot more hostile threat and they're threatening to shoot me, then you can see I get a different posterior from the information, so my response is slightly different. And when we see this in, I guess, more high-stakes contexts, where we get the stress element, we're actually more likely to fall back on our heuristics. So if we take, for example, the counterterrorism incident again: there's high stress, lots of casualties, lots of ambiguity, and it makes sense that I'm going to resort to my training, because that's the easiest, most immediate schema available to me. But it may not be the most appropriate action. And we call this the grim storytelling approach; a later paper I published spoke about this and talked about action orientation. We need to overcome that distorted view of what we should be doing and instead provide a new narrative that says: actually, this is going to be a really bad scenario. I use Uvalde as an example of this, because it was a terrible school shooting over in Texas. And what we argue in those contexts is that there are going to be situations or scenarios where children may be killed. It's a terrible outcome. No one wants that outcome, but it's a guaranteed outcome.
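To make the Bayes' theorem framing concrete, here is a minimal sketch in Python. The numbers are invented for illustration, not values from Dr. May's research; the point is only the mechanics of updating a prior belief as cues arrive.

```python
# Minimal sketch of the Bayesian updating described above, with made-up
# numbers. P(threat) is the officer's prior belief; each observed cue
# updates it via Bayes' rule:
#   P(threat | cue) = P(cue | threat) * P(threat) / P(cue)

def update(prior: float, p_cue_if_threat: float, p_cue_if_benign: float) -> float:
    """Return the posterior P(threat | cue) after observing one cue."""
    numerator = p_cue_if_threat * prior
    evidence = numerator + p_cue_if_benign * (1.0 - prior)
    return numerator / evidence

# Training-shaped prior: a hand moving toward the waistband is far more
# likely if the person is reaching for a weapon, so belief rises.
p_threat = update(prior=0.10, p_cue_if_threat=0.90, p_cue_if_benign=0.30)
print(f"after waistband cue: {p_threat:.2f}")  # 0.25, up from 0.10

# Context cue: at a routine stop where a license is expected, the same
# movement is far more likely under the benign hypothesis, so belief falls.
p_threat = update(prior=p_threat, p_cue_if_threat=0.20, p_cue_if_benign=0.80)
print(f"after context cue:   {p_threat:.2f}")  # about 0.08
```

The same mechanics run in the other direction: hostile cues such as explicit threats would push the posterior up instead, which is why the appropriate response differs with context.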

SPEAKER_00

Right.

SPEAKER_01

So your job is not to save everyone; your job is to minimize the risk as best you can, and that may result in the inverse trolley problem, where in order to get an optimal outcome, it's save the one and sacrifice the five, rather than save the five and sacrifice the one. So we get the sort of, I guess, somewhat consequentialist, or maybe virtue-based, approaches to decision making.

SPEAKER_00

This can get really complicated, because it's, you know, the utilitarian idea of weighing the needs of the many against the needs of the few. And again, you mentioned the heuristics and lowering the risk. The question then becomes: lowering the risk for whom? The officers or the students?

SPEAKER_01

Right. And we talk about this to some degree. I remember this from when I was doing first responder training with what you would refer to as the Coast Guard; over there it's the lifeboat service. I was part of that. And I remember going through the training, and the first thing we were always told is that preservation of life always starts with the individual, in other words, the officer responding. So make sure you protect yourself. Then it's protecting those around you, particularly those who are also first responders, so we're protecting our crew. Then it's protecting the assets we have; in our case that was the boat, but it may be, for example, the weapon systems or the fire engine, whatever makes the capability possible. And then right at the end of that list, which seems somewhat counterintuitive because we're trained to think preservation of life, is the casualty or the victim, the person we're trying to help. You sit right at the very end of that hierarchy, because without me, without the team around me, without the capability, I'm useless to you, and therefore I can't respond. So getting people in that mindset is pretty, pretty tough.

SPEAKER_00

Yeah, it's, again, a complicated topic. But back to your research now. Can you clarify, because I just brushed over it briefly, what decision inertia is, particularly with respect to policing?

SPEAKER_01

Yeah, so the simplest way of thinking about decision inertia is what the literature calls redundant deliberation. I like to describe it in more layman's terms as deliberation loops. You do make a decision, so it's not decision inaction. When you're in this sort of inertia phase, your intention is to make a decision, and you will eventually make a decision, but you're just going through different hypotheses or pathways to get to that point. So it may be that you are in this loop of: well, I want more information first. I will make a decision, but I want to be sure that it's the right decision, so I'm gonna sit there and wait until I get more information. Or it may be, for example, that you defer to someone else. Again, you're gonna make a decision, but you want clarification that, again, it's the right decision. So you are redundantly deliberating. You're constantly going through loops when actually the best course of action is just action. Do something now. Not do something wrong or criminal, but do something which, given the circumstances and your situational awareness, is the best possible outcome. So you're not deliberating redundantly; you are going to do something which you know has consequences but is most beneficial for that circumstance.
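As a rough illustration of that loop, here is a small Python sketch. The threshold, information gain, and loop counts are invented, and the model is far simpler than anything in the research; it only shows how "just one more piece of information" postpones commitment.

```python
# Toy model of a deliberation loop: the responder intends to act but keeps
# waiting for more information until confidence crosses an action threshold.
# All numbers are illustrative only.
import random

def loops_before_acting(confidence: float, threshold: float = 0.80,
                        max_loops: int = 100) -> int:
    """Count deliberation passes before the responder commits to action."""
    loops = 0
    while confidence < threshold and loops < max_loops:
        # "I will make a decision, but I want to be sure first."
        confidence += random.uniform(0.0, 0.05)  # each new cue adds little
        loops += 1
    return loops

random.seed(1)
print(loops_before_acting(confidence=0.50))                  # many passes
print(loops_before_acting(confidence=0.50, threshold=0.60))  # commits sooner
```

Accepting a least worst option is, in effect, lowering the threshold: the second call commits in far fewer passes.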

SPEAKER_00

Okay, now an important rule I've always had on this podcast is to avoid jargon. I always talk to people beforehand: avoid jargon, avoid technical terms that result in listeners deleting the episode rather than keep going. But I'm actually going to break that rule, because I think this is important for everyone, myself included. So, can you talk about Bayesian updating, again in layman's terms? How does this apply to policing decisions, critical or non-critical?

SPEAKER_01

Yeah, so it's always hard when you talk about probability theory; this is just statistics. And I'll caveat this from the start: we do not think in this way. Cognitively speaking, we don't expect officers to go in and say, well, the likelihood of this based on this factor is this percentage. There is no way anyone is doing that. So when I speak to police officers, and we train some of this, we always get this: well, we don't actually think that way, so why are you talking about it? But it's a really nice way of thinking about the cognitive architecture of how we make decisions given prior information. I'll give you an example, because it probably explains this a little better. Say you're an individual who has attended, let's say, a firearms incident, and the resulting circumstance was a de-escalation, or maybe even a scenario where someone has shot someone else, so the police have responded and they've shot the suspect. Now, because that is your experience of what's happened, a week later you attend a very similar incident. So you have this prior estimation that, based on these circumstances, I can apply the same action and I get the same outcome, right? Whether it's de-escalation or neutralizing a particular target, you can apply the same thing. So you have a prior assumption of what you should do given this set of circumstances. Now, what we want officers to do in many of these contexts is to understand that every scenario is unique. There are circumstances or factors which may change that outcome or change that action, and therefore you need to update based on those circumstances. As new information comes in, you should be updating whatever action you need to take. And we refer to this as the action threshold. Everyone has this sort of internal, probably somewhat unconscious view that in order to do something, I need to be confident that it's the right decision. So we say you have an internal threshold for action, and we can categorize that, somewhat arbitrarily, as a percentage. Let's say, for argument's sake, that for me, in order to de-escalate a situation using lethal force, I need to be 80% confident that that is the right outcome. And the reason I'm at 80% is that my prior experience tells me I need to meet that; it may be training, it may be having done it before, it may be an organizational cultural thing. And unless I have all of the information that tells me that that's the right action, I'm not gonna commit. I'm gonna stick in this deliberation loop until I've got information that tells me either to commit or to walk away. Right? So I've got these thresholds. Now, with the threshold value, and we see this in some of the research that we put forward, and we're writing about this at the moment in our book chapter, we can artificially inflate that threshold, so that even with all of the updating, all of the information available, we may never reach the action threshold. And we saw Uvalde as an example of that. It may be because of, say, organizational culture, lack of training or too much restrictive training, standard operating procedures, a toxic culture in the organization, things like toxic bosses, or indeed accountability. If I do something and I'm going to be heavily accountable for it, then I'm going to raise that threshold.
So we can artificially raise that threshold based on all sorts of factors. But simultaneously, we can lower that threshold. I use the case from Minnesota as an example of a lowered threshold for action. Based on the individual's experience and what we know from some of the cases, and I'm not going to go into too much detail about this, but in principle, because that officer had experienced something negative, having, for example, been hit by a vehicle previously, his threshold for action was going to be lower, on the basis that he knew what it was like to be hit by a car. So his action threshold from that moment was artificially lower. He was going to commit to action, but his perspective had a different value from someone who perhaps hadn't been hit by a car, say, six months earlier. So we get these different updating perspectives based on previous experience, and based on what we expect to happen post-incident, because obviously there are inquiries and accountability, we may see that threshold inflated or, again, as we say, deflated. So it's all about: what is the likelihood of action given all of these circumstances?
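As a sketch of that inflation and deflation, here is a toy Python model. The factor names and weights are illustrative assumptions, not values from the research or the book chapter.

```python
# Toy model of an action threshold that organizational and personal factors
# artificially raise or lower. Factor names and weights are invented.

BASELINE = 0.80  # e.g., 80% confidence required before committing

ADJUSTMENTS = {
    "toxic_culture":        +0.10,  # fear of blame raises the bar
    "heavy_accountability": +0.07,
    "restrictive_sops":     +0.05,
    "prior_negative_event": -0.15,  # e.g., previously hit by a vehicle
}

def action_threshold(factors: list[str]) -> float:
    """Combine the baseline with whichever contextual factors apply."""
    t = BASELINE + sum(ADJUSTMENTS.get(f, 0.0) for f in factors)
    return min(max(t, 0.0), 1.0)  # clamp to a valid probability

# Inflated: even with all the information, action may never be triggered.
print(action_threshold(["toxic_culture", "heavy_accountability"]))  # 0.97
# Deflated: prior negative experience lowers the bar for committing.
print(action_threshold(["prior_negative_event"]))                   # 0.65
```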

SPEAKER_00

Okay, so your paper developed a fairly straightforward question. You wanted to examine how decision inertia manifests within an immersive environment and explore its impact on decision-making outcomes in a critical incident. How did you go about getting the data?

SPEAKER_01

Yeah, so this was a big project with multiple different facets. At first, we wanted to identify: well, what do we know about this topic? What don't we know about this topic? And to cut the story short, what we found, in principle, was that we know a lot about how organizations work, we know a lot about the policy and the practices, and we know very little about why we as individuals struggle to make decisions in these contexts. So we did a big systematic review identifying that. We then went into real-world practice and explored a real-world incident to say: okay, we know this in principle, this is what the data tells us, this is what the theory tells us, but what actually happens in the real world? So we took the Manchester Arena incident, the big terrorist incident at the Ariana Grande concert, and we said: given these factors, given what we know about decision inertia and what we know about operations, how did people actually make real-world decisions? And again, to cut the story somewhat short, we essentially mapped what we found in the first study. We found that people at a micro level, a cognitive level, struggled to make decisions and were often stuck in this deliberation loop, often just wanting more information. They eventually do act, but they're stuck there for quite a while. Then we wanted to see, given all of these factors, how can we experimentally test this? And in short, virtual reality, specifically extended reality, is a very good way of testing decision inertia, because we can't wait for a terror attack and we can't create a terror attack, but we can immerse someone in what is at least close to an affective, emotional mechanism for, say, terrorism or a critical incident. So we did that, and we found that, one, we get the same affective response for the most part, but also we got to see decision inertia in action, when people were just stuck in this deliberation loop over and over and over until eventually they were either told to make a decision, or they didn't make a decision, which is somewhat tangential to decision inertia, or in principle they deferred or they delayed, or other factors. So we observed that. But we felt that wasn't enough; we felt we could go a little further than that and extend decision inertia a little further. So we took it a step further, and we started to present all sorts of different philosophical problems against that baseline morality and baseline decision making, and then looked at ways in which we can overcome that inertia effect. So this is that sort of deontological piece, where we assume the majority of people want to minimize harm and preserve life. There are a few jokers in any study who say, okay, yes, fine, I'm gonna kill the five and save the one, and we expected that anyway. But the majority followed that classic trolley problem response: I'm gonna kill the one, save the five. So we mapped that deontological view and said, okay, in this real-world scenario, you are faced with these contexts. Now we present what's called a grim story narrative. It's a very pessimistic view going into these incidents, and we say: it's okay, you are likely going to see this; if you need to sacrifice more, that is okay; and these are the reasons why it's okay. So we present that grim narrative.
And what we saw from that is that those presented with the narrative were significantly more likely to select the least worst optimal outcome, which we predefined going into the study, so we already knew what that was, compared to those who didn't have the narrative, who often made a sub-optimal decision. So it was the worst of the least worst outcomes; for example, they don't make any decision at all, they just stay there for the entire study and get stuck. So we often got better outcomes when we presented those narratives. So it seems, albeit in an empirical context, that if you present people with a grim story narrative in an extended reality environment where there is a time-critical aspect, they are more likely to reach optimal outcomes.

SPEAKER_00

So they took advantage of the knowledge that you gave them, that it's okay if this is not a perfect outcome.

SPEAKER_01

Yeah, yeah. So they essentially took that narrative and used it as their Bayesian framework. They can go: okay, I'm gonna be more likely to update using these sort of Bayesian perspectives, and I know that that's okay. It's still not great; there are still problems as a consequence of it. But I know, given the circumstances, that is the best response.

SPEAKER_00

What are two or three implications for police practice, particularly with training?

SPEAKER_01

Yeah, so the first, I think, is towards action orientation. And we see this in training more generally: we lack a perspective towards what I refer to as cognitive flexibility. In high-stakes, ambiguous scenarios, we need people to be flexible in how they make decisions, and to understand that the least worst outcome is okay, right? You may need to do that. And to be in a position whereby you're not stuck on operating procedures, you're not stuck on what your previous training told you, because it doesn't always map, right? We saw this again in Manchester and London as things evolved; you just may not have that opportunity. And I guess the last point is accountability. We always need to hold people accountable for their decisions, but we need to be mindful that these operators, these police officers, are working in crisis scenarios and having to make split-second decisions that the majority of people will never experience. So it's about training and preparing people to do that, while also being mindful that they shouldn't be held criminally culpable for a scenario where that probably was the best outcome given all of the information and all of the circumstances. It's still sad, it's still devastating for families and communities, but given the circumstances, that really was the best outcome.

SPEAKER_00

Okay, in like two minutes, can you tell us how this might help us understand the actions in Uvalde?

SPEAKER_01

Yeah, so part of that, and I speak about this in the book chapter briefly, is understanding that going into an incident is gonna be challenging, and that there is no optimal outcome that is going to save all children, particularly in school shooting incidents. But you go in with that sense of flexibility: I'm gonna do the absolute best I can. The other part is not waiting for a command structure to be put in place before you respond. I think nearly 400 officers attended Uvalde while waiting for an outcome that was optimal. There are circumstances where you just need to go in, address the situation, and understand that you will be audited on that, but again, be able to talk about why you did what you did. And I think that, at an organizational level, may have limited a good, effective outcome. There are lots more factors; I will suggest your listeners read the book chapter, because we talk about this, and it's quite a complex issue.

SPEAKER_00

Oh, very much so. That's kind of the thing I stumbled into with what's called terror management theory, which might help explain it. And again, I wrote an article, so people can track it down. But, you know, the idea is that these officers had never been exposed to those kinds of things. So first they got shot at, and then everybody was like, okay, I'm not supposed to die. That's what the training in the academy has been: don't die. Anybody that's been through a police academy has been trained to minimize their own risk. But the idea of terror management is that if you really want officers to go in the door in exactly that situation, you have to train them to go in the door. Constantly train them to do it, because that's what the terror management does: it minimizes the terror. I've done this before, I can do it again.

SPEAKER_01

Yeah, exactly. And they just delayed. Again, they wanted to action something, but it was just redundant deliberation, constantly thinking through every possible pathway, every scenario available, and that's not optimal. We want to avoid that.

SPEAKER_00

Yeah. Brandon, I appreciate your time. This was great, great information. I really do thank you for coming on the podcast. I appreciate it, Scott. Thank you. You have a great day. That's it for this episode of the Police In-Service Training Podcast. I want to thank you, the listener, for spending your valuable time here. If you like what you've heard, please tell a friend to subscribe on Apple Podcasts or wherever they get their podcasts. And please take a moment to review this podcast. If you have any questions or comments, positive or negative, or if you think I should cover a specific topic, feel free to send me an email, which you can find in the show notes. Or you can find me on Bluesky using the handle @policeinservice.bsky.social. Thanks very much and have a great day.