Evidence-Based Management

Module 4: Appraise evidence from practitioners

December 13, 2021 Season 1 Episode 4

This episode accompanies Module 4 of the course, which is all about assessing the quality and reliability of the evidence from practitioners – people who have experience with the problem we are tackling, or the solutions we are considering. 

Modules 4 and 3 are difficult to separate fully, so please refer to episode 3 of the podcast for the rest of the discussion about practitioners. In this episode we discuss the impact of the most important biases and talk about how to be alert to our own biases and those of others. 

We consider the role of the evidence based practitioner and the challenges of getting a clear understanding of problems and solutions from practitioners who may not always be as clear or succinct as we might wish. This is where careful listening and questioning are absolutely critical, together with challenging assumptions - both our own and other people's. 

Finally, we discuss the three criteria used to assess the reliability and trustworthiness of practitioners and experts, including consultants who present themselves as experts in particular problems and solutions. 



Host: Karen Plum

Guests: Eric Barends, Professor Rob Briner, Dr Christina Rader

Find out more about the course here:   https://cebma.org/resources-and-tools/course-modules/

00:00:00 Karen Plum

Hello and welcome to the evidence-based management podcast. This episode accompanies Module 4 of the course, which is all about appraising evidence from practitioners - the people who have experience with the type of problem we're trying to solve, or the types of solutions we're considering. 

Having identified potential practitioners from whom to gather evidence, which was the focus of module 3, we now need to determine how reliable and trustworthy they are as we move towards the identification of the problem, or the likely solution. Of course it's difficult to fully separate these two stages, but in this episode, we focus on the need to guard against our biases and any assumptions in terms of trustworthiness and reliability. 

I'm Karen Plum, a fellow student of evidence-based management and to discuss this module I'm joined by two of our podcast regulars, Eric Barends and Professor Rob Briner; and in addition, we're joined by Dr Christina Rader from Colorado College in America. Here we go. 

 

00:01:18 Karen Plum

So let's consider the issue of bias. I suppose because of the negative connotations about being biased, we'd all like to think that only other people are biased, not us. We don't like to think that we fall prey to those unconscious thoughts that drive our behavior. After all, we're smart and intelligent and would surely notice if we had biases? But that's why they're referred to as unconscious biases. 

Often we aren't aware of what's driving our behavior, and indeed not all biases are bad or negative in nature. Our brains are constantly seeking ways to conserve energy and anything that we don't have to start from scratch constitutes a saving. 

Neuroscience suggests that the brain works by using prediction - constantly comparing current situations with previously experienced ones to predict likely outcomes. If you don't have to think about how to react in each new situation, this saves energy. So many of our biases could be thought of as just the brain's way of saving you effort, and indeed keeping you safe, which is its main role in life. 

Many of them are helpful, therefore, particularly in threatening or dangerous situations where you don't have the luxury of thinking something through from first principles. You just need to, for example, run away from the ferocious animal that might eat you; or in modern day terms, to back away from a fire. 

I asked Eric Barends how we should think about biases and approach them with others, so that we don't suggest that being biased makes us bad people. 

00:02:57 Eric Barends

Well, maybe not use the term bias as a starting point! And explain maybe the type of bias - so rather than saying, “well, you're probably biased in favor of this”, say, “well, people are of course inclined to look at information or evidence that supports their prior beliefs because…”. That may be a better approach. I don't think I've ever used the term bias when I ask people about their experience, or why they think this may work, or why they think this may be a problem. But if your colleagues are trained in evidence-based practice, then you can just drop the term bias into the conversation, because they know exactly what you're talking about. 

So when we train larger groups within an organization, then the term bias is used frequently and it doesn't need explanation and it's not threatening. It's more like, oh yeah, you're right, good point. 

00:04:04 Karen Plum

The better approach is to ask questions to explore their thinking, particularly because in thinking they're biased you may actually be falling for your own bias, about their bias! So always explore further to see what lies behind what they say. 

Clearly there are tons of biases - I found an infographic online that identifies 188 - so there are lots of traps that we and others can fall into. And simply knowing about these biases doesn't make us immune. Even those who have studied them for decades, including Nobel laureate Daniel Kahneman, still fall prey to them. More about him later. 

Eric explains that the course concentrates on some key biases - patternicity (recognizing patterns and assuming causality), confirmation bias and group conformity. 

00:04:56 Eric Barends

These three biases - recognizing or assuming there's a pattern (a connection between A and B: A happened and afterwards B happened, so they must be related), confirmation bias and group conformity - are the most important biases. And if you don't have that much experience in an organization, keep these three in mind, look around and see what's happening, and I'm sure you will recognize them. 

00:05:27 Karen Plum

I think that's true. The course also talks about availability bias, authority bias, and outcome bias. I asked Dr Christina Rader, who teaches evidence-based practice as part of courses in management at Colorado College in America, if her students struggle with understanding any particular bias. 

00:05:49 Christina Rader

In every class I teach, I try to talk about outcome bias. Because a lot of times in management and related courses, maybe we do a case and then they want to hear what happened. And I also have to tell them - what happened is not ‘the answer’. It doesn't tell you whether a good process was used. 

One of the things I try to help students get is the idea that you could be a success in spite of yourself. Things can be a success because you got lucky or because the economy was great. And then along with that, the outcome bias, there can be a tendency to focus just on a single firm or a single instance and not compare to the other firms or the other time periods. Maybe something looks great, but then you look and see well, the other firms outperformed by double. So those are things that we look at. 

00:06:51 Karen Plum

I found outcome bias difficult to get my head around and I thought it was worth looking at it again. So I asked Eric to explain it for me. 

00:07:00 Eric Barends

Now the outcome bias is indeed that you judge a decision based on the outcome. If the outcome is good, the decision must have been good. If the outcome was bad, the decision must have been bad. And that is of course a very serious fallacy, because if the outcome of a project is successful, that doesn't mean that the project was run in a successful way. Or when a surgeon, for instance, operates on a patient and the patient dies, that doesn't mean that the surgeon made mistakes, or that it was a bad surgeon. 

It's not a good indication of the quality of the decision maker. For instance, in the example of the surgeon, it could be that this surgeon gets the cases that are in a very bad condition precisely because it's the best surgeon - and of course, if you get a lot of patients who are in a bad health condition, probably more patients will die. It's not because the surgeon is not good; no, the surgeon is actually the best surgeon we have, and therefore we give him all these hopeless cases. 

So the outcome is not always a good indicator for the quality of the decision. But we tend to have a look at outcomes and then say, well this person was involved in a merger two or three times and the outcome was good, so probably this is a good manager to manage the merger. Nah, not necessarily so. You really need to take a deeper look and see exactly what happened. 

I mean, just by sheer random chance, things can come up black, or come up white, four times in a row. Just by sheer chance something can go wrong, or have a great outcome, without the decision maker having anything to do with it. And when it's three times a positive outcome, you say this must be a great decision maker - and then you look into the decision-making process and you ask, how did you do it? 

“Well, I have a crystal ball here and I….” What? Oh my God, that's actually dreadful. Well, you know the outcome was good. Well, there was more luck than wisdom! So that's why outcome bias is also something you need to take into account. 
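A quick back-of-the-envelope illustration of Eric's "sheer chance" point (my own sketch, not from the course): if outcomes were nothing but 50/50 luck, roughly one in eight managers would still rack up three good outcomes in a row. The numbers below are hypothetical.

```python
import random

# Assumption: success is a pure coin flip and skill plays no role at all.
random.seed(42)

num_managers = 10_000   # hypothetical population of managers
outcomes_each = 3       # e.g. three mergers per manager
p_success = 0.5         # probability of a "good" outcome by luck alone

# Count managers whose every outcome happened to be good.
lucky = sum(
    all(random.random() < p_success for _ in range(outcomes_each))
    for _ in range(num_managers)
)

print(f"{lucky / num_managers:.1%} of purely lucky managers look 'great'")
# Roughly 12.5% (0.5 ** 3): a good outcome alone says little about decision quality.
```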

00:09:24 Karen Plum

I found that a helpful explanation and I think it's such an easy trap to fall into, perhaps particularly relating to practitioners or so-called experts. Experts themselves also have their own biases as Professor Rob Briner points out. 

00:09:40 Rob Briner

I think I would always say when it comes to experts, I think experts themselves are terrible sources of evidence because they have just as many biases as anyone else. However, experts are probably pretty good at helping you understand the evidence for yourself and helping you make judgments about how trustworthy or reliable or relevant it is. And that's quite a big distinction actually – it’s not what they say, but they may be able to help you make more sense of it. 

00:10:07 Karen Plum

I think that's great advice, and if I summed it up at this stage it's - never take anything as read and respectfully probe and challenge everything, to satisfy yourself that the evidence is sound. And then even if the evidence isn't sound, learn from that outcome to help in your evaluation of other sources of evidence. What does Christina advise her students when addressing bias with practitioners? 

00:10:33 Christina Rader

I think really I just come back to confirmation bias a lot, yeah that one! Which has been named the most pernicious of the biases. So it's one reason that I keep coming back to it. 

00:10:48 Karen Plum

I think confirmation bias just saves us such a lot of energy, which is why, as Christina says, it's so pernicious. It may be helpful here to talk about whose biases we're trying to trap - our own or those of other people? Knowing we have biases is a good start. And naturally by taking an evidence-based approach, we're already on a good path in terms of the biases we encounter. As Eric explains, using multiple sources of evidence is critical. 

00:11:18 Eric Barends

The main answer is of course - use multiple sources of evidence, not only your personal experience, but use evidence from the organization, other colleagues and the scientific research. But if you are a change manager or project leader, and in this particular case in this module you want to draw on evidence from practitioners, you need to take into account that these practitioners could be biased and therefore you have these three criteria. 

Does this person have a lot of experience with the situation? Did this person receive objective feedback? And the third one: was this person in a regular and stable environment? Because that makes his or her experience more valid and reliable. In all other cases, or in most cases, there may be biases that affect the experience or the judgment of the practitioner. 
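As a rough aide-mémoire (my own sketch, not course material), Eric's three appraisal criteria can be treated as a simple checklist: all three should hold before a practitioner's judgment is weighted heavily, and even then it remains only one source of evidence. The field names below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class PractitionerAppraisal:
    """Illustrative checklist of the three appraisal criteria discussed in the module."""
    many_opportunities_to_practice: bool   # lots of experience with this type of situation?
    direct_objective_feedback: bool        # received hard, objective outcome feedback?
    stable_predictable_environment: bool   # regular context in which that experience stays valid?

    def likely_reliable(self) -> bool:
        # Treat the judgment as reliable only when all three criteria hold;
        # even then, combine it with evidence from other sources.
        return (self.many_opportunities_to_practice
                and self.direct_objective_feedback
                and self.stable_predictable_environment)

# Example: a change manager in a dynamic organization rarely meets the third criterion.
print(PractitionerAppraisal(True, True, False).likely_reliable())  # False
```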

00:12:20 Karen Plum

It also occurs to me that as an evidence-based practitioner, my role is partly to be on the lookout for biases, but also that in order to ensure some separation of powers, as it were, perhaps I shouldn't be part of the actual decisions that are being taken. Christina explains that the role her students take is very clear. 

00:12:40 Christina Rader

The advisory role that they play is very clear - that their job is to be able to say here's what the evidence says. One thing that I work with them about is how do you separate - they always want to put in their own ideas from somewhere. And clients often like hearing those, so what do you do? How do you separate? 

Do I allow them to just throw in their ideas? Because it's less satisfying for my students when they don't get to throw in - “well, based on this you could do all these things” - and I'm like, we haven't looked into any of those things, right?! And it just shows how used we are, as a society, to doing things that way. 

You know, we just can't help ourselves, and it's funny. So I guess I need to spend some time talking more about overconfidence too. But it's funny how students can experience underconfidence and think they're not prepared to do this job - but then once they get an idea, they're easily overconfident and ready to share it. 

00:13:50 Karen Plum

I think this is such a human response. We want to solve problems. We want to be helpful, to show we have good ideas; but the separation is really important, particularly until you've evaluated possible solutions. Here's how Eric describes the role. 

00:14:06 Eric Barends

Ideally, as the evidence-based change manager or project manager / decision maker, you should be in charge of the process rather than... I mean, you can make the final decision at the end based on all the evidence that's brought forward, but you're more like a judge in a court of law. You invite people to bring forward evidence, and that can be evidence from the organization, or it could be a witness statement - in our situation, that would be a practitioner who has experience with the problem and was, you know, affected by the problem, or people who have a solution and have worked with it elsewhere, in other organizations. 

So you make sure that all the evidence is brought to the table and you are the judge to determine whether the evidence is admissible, yes or no. Or whether it's biased and very subjective and you dismiss it and say, well, very nice, thank you, but that's your opinion and we're actually looking for more rigorous evidence than opinions. 

00:15:11 Karen Plum

And presumably I'm responsible for being the bias monitor. Clearly, if you have colleagues who are versed in some of the same concepts of evidence-based practice, they will also be alert to the typical tendency to cherry-pick the data. So that will help if, as a group, we are all on message. 

00:15:29 Eric Barends

Daniel Kahneman said that if you are aware of your own biases, that's not enough. You will never, ever be able to neutralize your own biases - it's how your brain is wired. However, in a larger context in an organization, where multiple sources of evidence are brought together, there are multiple people, and you also have the role of bias detector, and you can make sure that questions are asked in a non-leading way, and that the evidence is gathered in an objective and reliable way. And you use these three criteria to judge whether the experience of a person is indeed valid and reliable. Then it's easier to overcome these biases and make an unbiased decision. 

00:16:24 Karen Plum

These three criteria are very interesting. Numerous opportunities to practice, direct objective feedback and a regular, predictable work environment. I wondered in the world of knowledge work, who could satisfy those three? 

00:16:40 Eric Barends

That's hard - that's why, in an organization, evidence from practitioners is almost never enough. We do need evidence from multiple sources, because in the domain of management the organization is by definition dynamic and not predictable. So yeah, that's an issue. But there are some situations where you could argue that maybe it's a little bit more stable than usual, and I think we gave the example of the sales manager, who is already in a bit more stable situation, although you could argue that there are still influences and dynamics going on there. 

But, for instance, when you look at the more procedural things - like a surgeon doing a procedure over and over again, or an engineer coming up with a solution, or if you get into the lean management area where a process is redesigned and run over and over and over again - and there's hard outcome information, then that probably is a more stable situation. 

But you are right, these are three criteria and they're hard - like how much experience, how many opportunities to practice, what should your hit rate be in terms of change? We make the comparison between the baker and the orthopedic surgeon, I believe. That is of course a kind of silly comparison, because an orthopedic surgeon does five or six or eight or twelve operations per week, while as a manager you come across such situations maybe five or six times a year - so it's by definition already quite a challenge. 

00:18:37 Karen Plum

As a consultant, I've worked hard to be a reliable and trustworthy source of guidance and expertise to clients. I think many aspects of the process I've delivered would stand up to scrutiny. Many opportunities to practice and outcomes measured, certainly in terms of stakeholder feedback. But a reliable and predictable work environment, when I work in change management? That's a tough one to meet. 

There's also the fact that as a consultant, I advise, but the client decides. They may not take all of my advice and follow my process, so when measuring outcomes, can I really either claim credit for success, or take the blame if objectives haven't been met? When clients are choosing a consultant, they should presumably exercise these criteria in making their choice. 

That's quite a sobering thought, not because it would be difficult to put a case forward to say how I would meet the criteria, but that I suspect most clients make their choice based on gut feeling, whether they like the people and whether the company has a big name or a good reputation. Naturally, where there are fixed procurement processes in place, one of the objectives is to avoid biased choices or gut feeling choices, which of course is good. But not every client has such a process. And if you're going for a big name, then you're absolutely giving in to authority bias. What does Eric think about this? 

00:20:05 Eric Barends

Many times a consultant is not able to judge the outcome of his or her recommendation because the client did not follow up the recommendation 100%, or left things out, or did something differently. So in terms of hard outcome feedback, it's not there because, as I said, the recommendation was not followed up. That's number one. So the consultant in this situation can't learn from his or her experience. 

The second one is that consultants are of course often also driven by commercial targets and commercial interests. So as a client, if you hire a consulting firm, you want to know what this consultant's professional experience actually is in terms of hit rate, opportunities to practice, feedback and a stable environment. 

Well, maybe this person has a lot of experience, and maybe the environment was kind of stable, although we just explained that's very hard. But measuring the outcome, the hard outcome - that's the starting point, and it's often not there. And second, there may be other reasons why this consultant recommends this specific solution or project - commercial reasons - so that makes it hard. 

00:21:36 Karen Plum

Hard or not, this is the real world. So as a consultant my instinct is to reverse engineer some evidence-based decision criteria into consulting proposals, so that clients are educated in a subtle way, giving them more evidence upon which to make their decision. Look, we can show you that we meet these three important criteria that you will be concerned about, because you want to make the best decision. So I'll let you know how that goes! 

I asked Christina about how we might go about assessing the reliability and trustworthiness of consultants. 

00:22:10 Christina Rader

Oh my gosh! I mean - so difficult because so often consultants treat things as proprietary and there isn't a lot of background that you can get. But it just depends, right? Sometimes a consultant will say I've done this number of projects and here's the results. But of course - you're not hearing about the things that didn't work or that it would have turned out that way anyway, so. 

So I think that a lot of times you're left with logic. So are they saying things that you can see - does the logic model even make sense for what they're proposing? And then so often we just go off of word of mouth and things like that. 

00:22:59 Karen Plum

Not as comforting as I was hoping for, but that's the world we live in. Maybe that's enough about consultants for now, but it is worth thinking about: at some stage you'll probably consider using a consultant to provide advice and guidance, and you'll want to ask them some probing questions about their expertise. And on the subject of questions - ask lots and lots of them, and don't take anything as read. 

People in organisations have their own language, and I don't just mean acronyms and technical terms. They use terms like culture or engagement, for example, in a way that implies that we all know what they're talking about. And even if you think you know what they mean - ask. Can you explain it to me? What do you mean by that? What does that look like? Can you give me an example? 

Forget about trying to appear smart, as if you get it. Wanting to look as though we understand is a very human response - it helps us feel connected to the person, maybe even part of their gang - particularly if we're more junior than the person we're questioning and feel vulnerable and uncomfortable asking what could be awkward questions. But when we're looking for evidence, we need to be sure we understand what they mean by the words they use. That's far more important than looking smart. Here's Eric. 

00:24:23 Eric Barends

If you're junior and you have this big-shot executive, and you ask this executive: what is your experience? What do you think is the problem? And then there's a whole story and you don't have a clue what this person is talking about, but it sounds very inspiring and very knowledgeable. You will maybe be hesitant to say, sorry, but I'm not sure whether I get it - could you maybe explain, or be clearer about what exactly the problem is? 

And if you're not impressed by all the BS in your organization and you’re an experienced evidence-based manager, you will cut to the chase and say, very nice, but it's very unclear to me what exactly the problem is. 

00:25:06 Karen Plum

And let's be clear, even if you aren't junior, senior people can be intimidating, and they're usually short on time, sometimes lack patience and may get irritated that you don't understand what they've said.

But think of it this way. If they can't explain it in a way you can understand, maybe they aren't clear about it themselves. Albert Einstein said if you can't explain it to a 6-year-old, you don't understand it yourself. So continue to probe, respectfully, because otherwise you'll make assumptions about what they mean and maybe you'll get it wrong. As a test, after you've listened to the practitioner, the expert, or the senior leader stakeholder, can you summarize the problem or the solution in a minute or so? If not, and you're still not sure, then again, maybe it's time to ask more questions. 

That's all for this episode about appraising evidence from practitioners. Let me leave you with a summary from Eric and the two guiding questions that he recommends we all regularly adopt. 

00:26:10 Eric Barends

One of the best questions you can ask when someone says I think we have a problem with X or Y, specifically when they use typical managerial talk and jargon, (there's a lack of engagement or people don't take ownership for da di da), you would ask, can you give me an example? Can you give me an example where that was the case? And then probably it becomes more clear. 

“What is the problem you're trying to solve?” and “Can you give me an example?”