Evidence-Based Management

Module 3 Acquire evidence from practitioners

Season 1 Episode 3

This episode accompanies Module 3 of the course, which is all about acquiring evidence from practitioners – people who have experience with the problem we are tackling, or the solutions we are considering. 

Modules 3 and 4 are somewhat intertwined, so the rest of the discussion about practitioner evidence will be in Episode 4. In this episode we consider the choice of practitioners (people with expertise, not just opinions) and how we gather information from them. Asking questions (the focus of Module/Episode 2) is critical here, and emphasis is again placed on not jumping to solutions, even though it's so tempting!

There is also discussion about the use of questionnaires, including some guidance about where to start (at the end) and how to get the best results: keeping the questions simple, testing understanding of the questions before launch, and ensuring you know exactly how you're going to use the data.


Host: Karen Plum

Guests

  • Eric Barends, Managing Director, Center for Evidence-Based Management 
  • Denise Rousseau, H J Heinz University Professor, Carnegie Mellon University 
  • Dr Christina Rader, Associate Professor with Tenure, Department of Economics and Business at Colorado College 
  • Dr Lisa J Griffiths, CEO, OzChild National Support Office

Find out more about the course here:   https://cebma.org/resources-and-tools/course-modules/

00:00:02 Karen Plum

Hello and welcome to the evidence-based management podcast. This episode accompanies Module 3 of the course, which is all about gathering evidence from practitioners, the people who have experience with the type of problem we're trying to solve, or with the types of solutions we're considering. Here we're keen to identify the right people to talk with, not those with strong opinions but with reliable and trustworthy experience in the area we're interested in. 

There are different ways to gather the information we need through interviews, questionnaires, focus groups, all of which we'll look at in a bit of detail. 

I'm Karen Plum, a fellow student of evidence-based management and to discuss this module I'm joined by podcast regulars Eric Barends and Denise Rousseau, and in addition, my guests are Dr Christina Rader from Colorado College in America and Dr Lisa Griffiths, CEO of OzChild in Australia. Let's get going. 

00:01:09 Karen Plum

When I did this module, I was struck by how important it is to choose your practitioners wisely, while of course trying to avoid any bias in their selection. It feels important to ensure that there's diversity in terms of their expertise and the evidence we gather from them. But it's also important to remember that we aren't seeking diversity of opinion, because we aren't seeking opinions! We're looking for experience with the problem or solution that we're working with. 

We'll be interested in opinions from stakeholders, but they are a different group with a different focus for our evidence gathering. For the purpose of this discussion, we'll assume that the problem is already defined and agreed, and now we are looking for people who have experience with the problem. Here's how Eric Barends explains it. 

00:01:59 Eric Barends

We're looking for people that have experience with the problem or the suggested solution, and the problem is of course that a lot of managers and practitioners will have an opinion, will have an idea about the issue at hand so you don't need those. 

They are important but as stakeholder evidence, not as professional expertise, professional experience. So we're looking for people that have experience with either the problem or the solution. You'll find them probably in the department or at the place where the problem occurs, or the issue occurs, so that means it's a good idea to go to the department for instance, if it's assumed that a specific department is working very inefficiently and the performance could be improved.

It would help if you talk to the people there and the managers and the employees and say, do you agree we have a problem with the performance here in this department? How serious is this problem? What do you think causes this problem etc etc.

When the problem is clear and people have a good idea about what the problem exactly is and the underlying goals, then you move forward to the solution and then you ask people about the solution, but preferably people that have experience with that kind of solution, and that's probably a different group. So the people that have experience with the problem are probably located centrally in your organization, and the people that may have experience with the suggested solution may be scattered all over the place. It could be a manager from a different department who worked with the solution at a previous organization, or who, in another capacity or in another context, has some experience with the solution. 

So the first step is to differentiate between experience and opinion. 

00:04:03 Karen Plum

I also talked to Dr Lisa Griffiths, CEO of child care organization OzChild about the importance of practitioner evidence. You may remember Lisa from the episode on Module 2. 

00:04:15 Lisa Griffiths

I think with practitioner wisdom, or tacit knowledge, it's really fundamental in being respectful and valuing that. Because without appreciating what somebody brings to the assessment of evidence, then you could fundamentally miss a really big context piece, and practitioner wisdom in particular really grounds you in the context, is what I would say. 

And we're often really attracted to the scientific data, and to the fact that somebody has gone to the trouble of being peer reviewed and appearing in a published journal, but if you don't check in with a practitioner, asking could this work in practice on the ground, then you're not going to get very far. 

So it really is fundamental to not ignore that, and I think in doing the course you learn some fabulous skills of assessing the scientific research, and it's very easy sometimes to get some really rich organizational data, but until you check in with your practitioners at the frontline, then you can't really quite translate that as effectively as what you would like to. 

00:05:38 Karen Plum

I think that's a great point. It seems to me that another good practice is to ensure that when you're choosing practitioners to talk to, you don't just select the people you know, or worse still, the people you know and like, as opposed to people that you don't like! It's often suggested that when trying to work through problems, you include people who are least like you, from a diversity and creativity point of view. 

That may not feel comfortable, but it'll help ensure that selection bias is less likely to creep in. If it does, and I populate my practitioner community with people who think like me, then we're likely to come to the same conclusions about the problem that we're working on. 

I asked Eric about the importance of getting different perspectives. 

00:06:25 Eric Barends

What I think is also important is indeed maybe different perspectives and experiences, but also keep in mind that you have employees, you have supervisors, you have managers, you have different disciplines, different professions. And it is important when you look into practitioner evidence, to consult, especially when it's about the problem diagnosis, all these levels in your organization. 

So ask the employees who have experience with this problem; ask the managers; and also ask different disciplines or types of profession. Ask the engineers or the physicians; ask the nurses when you work in a hospital, so as to have a broad representation of the people that have some experience with the problem. 

00:07:16 Karen Plum

So the important thing here is how much experience do people have with the problem, rather than how they feel about it, or what they think about it. Some may not experience the problem as anything very significant because it doesn't impact the work that they do. But others may be able to explain why the problem is more urgent and serious for them, by telling you how their work is affected and how much impact it has. 

The other thing that I think has an impact here is who asks the practitioner for their input. We'll come back to this in module 15, right at the end of the course, but for now I think it's important to bear in mind that seniority is a very real concept in organisations, and if you're a junior person with relatively low standing in the organization and you want to ask questions of a senior leader about an organizational problem, then you may not even have your request taken seriously. 

So it's important to think about how you and your assignment will be viewed by senior or influential practitioners, and if you feel that you lack sufficient standing to be taken seriously, then make sure you find someone else to approach the senior person to ask for their input, even if they just make an introduction. 

I discussed this with Christina Rader from Colorado College in Colorado Springs. Christina arranges for her management students to practice gathering and assessing practitioner evidence in local nonprofit organisations, or for assignments within the college. 

00:08:47 Christina Rader

What I think my students find challenging in this area is questioning practitioners. Because my students are undergraduates and have limited work experience, they find it very hard to question people that they may think of as an authority. Combine that with confirmation bias, and when the authority shows up and says, oh, this is the problem, it's very easy to start agreeing.

However, over the course of a 30-minute prep and an hour-long conversation, they get dramatically better at it. I think there's a tension because you want to please the authority figure and you want to feel like you've answered the question as they've posed it, even though they haven't given you enough to work from. 

It's interesting how often we want to just fill in the gaps for them, rather than push them. One strategy we use with my students is to use the fact that they are students and allow that to help them ask quote 'dumb' questions, which makes it easier to get at the real answers to their questions. And one thing I think my students learn pretty quickly is how willing people are to receive questions. 

And it's scary at first, it feels like you're challenging authority, but once you get the conversation going, your questions can be received as an expression of interest and also it's a way that they can show off their smarts by asking questions as well. 

00:10:21 Karen Plum

I agree that even when you have experience, it can still be really intimidating to sit in front of those powerful people, wanting to be liked and to appear smart and that you get it. Christina explained that it can be very challenging to get to the problem, as often people find it difficult to talk about problems rather than symptoms. 

00:10:41 Christina Rader

One thing that I think is challenging is when the practitioner has a very long answer to a short question. I find it challenging to know what to do, let alone students, because I want to be respectful, I don't want to cut the conversation short, but I'm on a mission. I personally do enjoy hearing all the background, but it's really important to keep the thread of 'what is the problem'. What is the evidence for the problem? It is hard when you get a long answer. I am practicing, and my students are practicing, what questions or statements we can say after a long answer to direct us back, or to say: so, we're hearing that the problem is <fill in the blank>. But the difficulty is that in the moment it's hard to take all this long thing you've heard and distill what you think the problem is. Which is why taking breaks in the middle of the meeting has been very important. 

00:11:46 Karen Plum

I love the idea of taking a break, and if you can set that up in advance, that's great. It may be difficult to get a follow-up meeting depending on the seniority of the person, so you want to get as much as you can in one go. But asking if you can come back if you have further questions in the future is always a respectful way of leaving the door open. I asked Christina why she thought people found it difficult to cut to the chase when asked about the problem. 

00:12:15 Christina Rader

Probably two reasons. One, it's hard to be clear about a problem. So even in their own minds, they may not be fully clear about what they think the problem is. It's easy to identify a symptom or identify something that someone else has thought is a problem, but not be really clear about the problem.

And then the second thing I think is that we tend to think we're being helpful - that the more we talk and the more we share, the more we're adding richness. And sometimes people are, but other times it can be very hard to follow the thread of the problem. 

The other thing that makes it difficult is the students, I, we're all listening, trying to listen at a higher level because we're trying to listen for what is the logic model? What are the assumptions? And so everything that gets said - you're trying to listen through that lens and the longer it goes that things are said before you get to stop and reassess where we are and what we've established, the harder it is to keep track of it all. 

00:13:29 Karen Plum

This can be very tricky. You ask a question. You're listening for the answer you need. You're processing and you're thinking about the next question. And when you don't hear the answer you need, I think you start to panic a bit, because you know you're going to have to probe in a respectful way, as you realize that the person isn't very clear about the problem that they're facing. 

Christina mentioned the logic model and if you'd like to recap on that, it's covered in module 2 or episode 2 of the podcast where Eric explains it. It's about 20 minutes into that episode. 

Next I wanted to explore the usefulness of looking outside your organization when seeking evidence from practitioners. In my experience as a consultant, there's a conundrum in that clients want to know what other organisations are doing, but if they don't like what they hear or feel threatened by it perhaps, they typically say, 'Oh well, they aren't like us.' And the implication, therefore, is that that doesn't count. 

In terms of seeking expertise with the particular problem you're faced with, it certainly makes sense to seek out those that have experience of implementing specific solutions. How does the solution work? Were they happy with the results? And actually did it solve the problem they were trying to fix? Having said that, Eric warns about the dangers of confirmation bias. 

00:14:52 Eric Barends

One of the things that is a serious bias is confirmation bias - that you pick out organizations that have implemented or applied or used your solution that you think is a great idea and would solve your problems, and you pick out only those organizations that have positive experiences with that solution. 

So what you should do, is also have a look at the organizations that may have chosen that solution and are maybe not so happy about this, or had outcomes that were actually negative or unexpected. So that is something that you absolutely need to keep in mind. 

00:15:42 Karen Plum

And at the risk of repetition, it's vital that we distinguish between practitioner expertise and stakeholder evidence. Stakeholders will be asked about feelings and opinions because they're going to be affected by the decision and the solution chosen. This is subjective and we'll discuss it further when we reach the modules on stakeholders, but for now we're talking about practitioners. 

00:16:06 Eric Barends

This is about people that have experience with the problem or have experience with the solution. So if someone says, oh, I think this is a great solution because da di da di da, and you ask, so do you have experience with the solution? Have you seen this work elsewhere? And they say no, but I'm sure this will work. That is an opinion. 

It may be a clever, insightful judgment and maybe this person is correct, but that's not what we mean by professional or practitioner evidence from experience and expertise. 

00:16:45 Karen Plum

Once we've identified our expert practitioners, we can set about gathering evidence from them. This can be done via interviews, survey questionnaires, and focus groups, and I wanted to explore these different methods with our experts. I asked Denise Rousseau what general advice she would give people when they're putting a questionnaire together to gather practitioner evidence. Where should we start when designing a questionnaire? 

00:17:10 Denise Rousseau

Begin with the end in mind. What information do you need to have? What's your primary purpose in gathering this information and which population or groups of people can best give you that information? 

The second issue is less is more - first because questionnaires take time for people to fill out, and because in our busy, time-starved age they need to be short and easy to answer, so that it's not difficult for people who are trying to respond to give you a valid response. So you need a good design going in - what is it that I want to know with this? 

And then the next part of that is what do I really need to know with this? Because nice to haves probably don't belong in your questionnaire because it's better to have a few questions that are on point and a higher response rate (more people filling it out) than it is a lot of questions scattered on different aspects of an issue and a very low response rate 'cause that low response rate is a deal breaker. Your information isn’t representative of the group you're trying to learn from, and you don't know what it means - you can't interpret it. 

00:18:23 Karen Plum

Denise and I discussed doing whatever it takes to encourage people to complete your questionnaire. She mentioned donuts - apparently that's the best way to get doctors to do your survey! But clearly, that's not always possible. 

Eric explained that designing questionnaires is hard, so if there's any possibility of using an already scientifically validated questionnaire, then that's by far the best option. Naturally there aren't questionnaires on all topics, but at least have a look, particularly if your topic relates to something that has been widely studied, like trust for example. But what if there isn't anything suitable? 

00:19:01 Eric Barends

You should realize designing questionnaires is hard. You are your own enemy in that respect, in that you will have a look at your question and say, well, I think it is pretty good. Yeah, it's clear, you know. I mean, we can send this out. No, no, no, no! Try it out with a small pilot group and ask them to think out loud when they answer a question, and they will probably say: OK, this is the question - I'm not sure what they mean. Do they mean this? I think they mean this, so this would be my answer. 

So what seems obvious for you, what seems obvious in terms of wording that it's clear, is not always the case when you try it out with some of your colleagues. 

00:19:47 Karen Plum

I love this idea. I've tested questionnaires in the past but never in this way and it seemed a brilliant way to really understand whether people interpret the question in the way you intended. Sitting with a small group of people and asking them to think out loud - to tell you what's going through their mind as they decide what the question is asking and how to answer it. 

As a consultant, I've used questionnaires to gather data from populations of staff to identify the ways in which their current ways of working are helping or hampering their ability to be productive and effective at work. I use a standard questionnaire for consistency and the ability to benchmark across organisations. 

In my experience, everyone is an expert at questionnaire design, and clients always want to fiddle with the wording, add questions that interest them, or remove ones they think aren't desirable or necessary. I love Denise's suggestion here. 

00:20:44 Denise Rousseau

I think the client might be given a 'gimme' - maybe room for three questions. Which three are important to you? So they can feel that they have customized the survey, and also, if there's some kind of heartfelt important issue, it's reflected. But only three, because otherwise we reduce the likelihood of getting a high response rate, and you want a high, representative response rate from your people, right? 

00:21:09 Karen Plum

In discussing the questionnaires that are used in organisations without the benefit of thorough testing and consideration, Eric explained that once you understand the typical issues with these tools, you can easily spot the questions that are going to be impossible to interpret because the question is double-barreled, i.e. it links two concepts or issues and asks the respondent to give just one response. 

So 'do you feel energized and fulfilled at work?' only allows people to answer about both states together, not each one separately. When you analyze that data, you have to conclude that people are responding about both states equally, whereas they may feel very differently about how energized they are, as opposed to how fulfilled they are. 

So we should avoid double-barreled questions at all costs, as the data they deliver is pretty meaningless. Here is Denise explaining the best approach. 

00:22:04 Denise Rousseau

Questions are best short, simple, one key idea, easily responded to based on the rating scale that you use (the anchors for response). And in the less-is-more idea, what I typically suggest is: if we're talking about more objective information, like facts about how old am I, or where have I worked before, you only need a single question to get a particular fact. 

But if you're looking for attitudes or perceptions of fairness or supportiveness of a boss, you need several survey questions to tap that one attitude, because people provide information on ratings of an attitude or perceptions of the environment in ways that contain some error or unreliability. And so you want to improve the reliability and trustworthiness of your items by asking questions about the supervisor's style, for example, in two or three ways. 

And so maybe at least three questions for a quality or a perception, so that you can average those and also test the inter-item agreement - if inter-item agreement is high, we have a lot more confidence that they understood the question and are reporting on something consistently. 

So because we might need three items or so for perceptual or opinion measures, we have to be really careful about which opinions and which perceptions we go after, which gets me back to the point about beginning with the end in mind. Nice to haves are not enough. 

Make sure that you get the core information you're seeking in a reliable way, and that often means simple questions, not double-barreled, and several variations of a question so you can test reliability. 
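To make the averaging and inter-item agreement idea concrete, here is a minimal sketch in Python. This is an illustration added to these notes, not something prescribed by Denise or the course: the three-item, 1-to-5 rating scale and the responses are invented, and Cronbach's alpha is used as one common way of checking whether items hang together.

```python
# Minimal sketch (illustrative only): averaging a three-item scale and
# checking inter-item agreement with Cronbach's alpha.
from statistics import variance

# Each row is one respondent's ratings (1-5) on three items that all
# target the same perception, e.g. "my supervisor is supportive".
responses = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
    [4, 4, 4],
]

# Scale score per respondent = average of the items.
scale_scores = [sum(row) / len(row) for row in responses]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
k = len(responses[0])
item_variances = [variance(col) for col in zip(*responses)]
total_variance = variance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

print([round(s, 2) for s in scale_scores])  # e.g. [4.33, 2.33, ...]
print(round(alpha, 2))  # values above ~0.7 suggest the items hang together
```

With made-up data like this, the alpha comes out around 0.9, which would give you confidence that the three items are tapping the same perception and can reasonably be averaged into one score.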

00:23:58 Karen Plum

We also discussed the use of focus groups, which Denise explained need to be carefully constructed in terms of the people participating and what you're asking of them. For example, what is their role - back to the practitioner vs stakeholder point. While there may be benefits to be gained by having a group of people together, if you mix ranks and expertise, the quality of the conversation could be skewed. 

00:24:24 Denise Rousseau

We want people to tell truth to power and sometimes to make that happen, power does not need to be in the room. 

00:24:31 Karen Plum

Eric explained that focus groups are more typically used for gathering stakeholder input, and the important thing is to ask open questions about how people experience things, how they feel about them, how they would use them, etc., so we'll come back to focus groups in a later episode. 

That's all for this episode. I hope you've enjoyed learning more about practitioners, who they are, how to choose them wisely and how to gather good evidence from them. I'll leave you with this advice from Denise Rousseau. 

00:25:02 Denise Rousseau

Plan your questionnaire as if you were going to use it over time. Because one of the real advantages from questionnaire assessment is trend tracking. And in order to do that, questionnaires should be short, should be targeted, not fatiguing to people - so that you can monitor from month to month, quarter to quarter, because you're getting so much more value out of that survey that you've carefully designed if you use it over time and spot trends. 

And if you use it over time and are able to do some breakouts of what's going on in parts of the organization that have gone through some event, versus others that were protected from it, and whether it made a difference, it's really invaluable. I think we don't do enough targeted assessment, short sweet assessment over time, that lets us track what's going on in an organization and see patterns that are kind of invisible when we're on the ground.
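As a closing illustration of that trend-tracking idea, here is another minimal sketch in Python with invented data (the quarters, departments and scores are assumptions, not from the episode): the same short survey repeated each quarter and averaged per department, so a dip in one part of the organization becomes visible over time.

```python
# Minimal sketch (illustrative only): tracking a short survey over time and
# breaking results out by department.
from collections import defaultdict

# One record per response: (quarter, department, score on a 1-5 item).
responses = [
    ("2024-Q1", "Operations", 3.8), ("2024-Q1", "Finance", 4.1),
    ("2024-Q2", "Operations", 3.2), ("2024-Q2", "Finance", 4.0),
    ("2024-Q3", "Operations", 3.0), ("2024-Q3", "Finance", 4.2),
]

# Collect scores per (quarter, department), then report the averages.
grouped = defaultdict(list)
for quarter, dept, score in responses:
    grouped[(quarter, dept)].append(score)

for (quarter, dept), scores in sorted(grouped.items()):
    print(quarter, dept, round(sum(scores) / len(scores), 2))
```

In this made-up data, Operations drifts downward across the three quarters while Finance stays steady, which is exactly the kind of breakout Denise describes.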