Evidence-Based Management

Module 9 Appraise evidence from the organisation

March 24, 2022 Season 1 Episode 9

This episode accompanies Module 9 of the course, which is about the appraisal of the data, information and evidence gathered within the organisation. The acquisition of this evidence is covered in Module 8 and its corresponding podcast episode.

In this episode we continue to consider some of the wider aspects associated with the use of organisational data: why we can't take its quality for granted; how it can be highly misleading if we don't know where it comes from or how it's created; and how people can continue using it for years in ignorance of its shortcomings.

The excitement around Big Data and data analytics is also discussed, to see whether these are all they are cracked up to be! And we look at the complexity associated with key performance indicators (KPIs), which again can lock us into repetitive behaviour if we don't question their value.

Once again, the importance of taking decisions based on multiple sources of evidence is reinforced. 

Link to the video mentioned by Jeroen Stouten

Host: Karen Plum

Guests:

  • Eric Barends, Managing Director, Center for Evidence-Based Management 
  • Martin Walker, Director of Banking & Finance at Center for Evidence-Based Management
  • Jeroen Stouten, Professor of Organisational Psychology, KU Leuven University
  • Ravishanker Jonnalagadda, Senior Expert Data Science, People Analytics, Novartis Healthcare 

Find out more about the course here:   https://cebma.org/resources-and-tools/course-modules/ 

00:00:00 Karen Plum

Hello and welcome to the Evidence-Based Management podcast. This episode accompanies Module 9 of the course, which addresses the critical appraisal of organizational evidence. There's more on organizational evidence in Module 8 and its corresponding podcast episode. 

Module 9 considers the data that we've gathered, suggesting ways of examining it to ensure it's accurate and reliable for our use, so that we don't fall into the trap of assuming the data is free of misleading presentation, statistical inaccuracy, or wrongly applied metrics such as meaningless KPIs. 

It also reinforces, yet again, the importance of looking at multiple sources of evidence. 

I'm Karen Plum, a fellow student of evidence-based management, and in this episode I'm joined by Eric Barends, Managing Director of the Center for Evidence-Based Management; Martin Walker, Director of Banking and Finance at the Center for Evidence-Based Management; Jeroen Stouten, Professor of Organizational Psychology at KU Leuven University in Belgium; and Ravi Jonnalagadda, a data scientist and people analytics expert at Novartis Healthcare.

Let's dive in and find out what they had to say. 

 

00:01:28 Karen Plum

I've spent a lot of time working with organizational data, and I've seen how it can be revered and go unchallenged. And as discussed in Episode 8, I sense there can be a tendency to set too much store by it. 

Just to recap on this before we move on with this episode, I asked Martin Walker if he agrees that people overestimate the power or the quality of their organizational data. 

00:01:52 Martin Walker

Well, historically, any kind of vaguely organized organization relies on data in its decision making. I think the difference in recent years is there has been a huge build-up of excitement, if not hype, around data - people talking about big data, many organizations having chief data officers, data lakes, et cetera. 

And I think the general excitement about data has actually distracted people, in a way, from asking some of those fundamental questions: where's the data coming from? What's the quality? How relevant is it to the decision-making process? 

I think this is one of the things organizations actually need to come back to basics on, because if you look at types of data which have historically had a huge impact in driving decision making, like financial data - even with financial data, over the last few decades we've seen many accounting scandals and the like. And people are very conscious that the quality of financial data has to be good, and that so many bad things can happen if it's wrong.

But I don't think that mindset has fully caught up with the other types of data that organizations are now busily collecting and using in decision making. 

00:03:08 Karen Plum

Being reminded of the financial and accounting scandals really should give us all pause for thought, and cause us at least to satisfy ourselves, as far as we can, about the reliability of the data we're using. But what of the emphasis these days on big data and data analytics? Is it too easy to get swept up in the excitement and/or the potential hype without understanding the nature of the data being collected and analyzed? It turns out that big data is one of Martin's bugbears. 

00:03:40 Martin Walker

One of my bugbears is about big data. Big data is a very, very popular concept nowadays, and the whole idea of big data is something that's come from internet-based firms and social media type firms, where they collect absolutely vast datasets using very specialist tools. 

But this idea of big data has spread way beyond those kinds of technology firms, to the point where everyone is talking about big data. And I think the endless discussion about big data has made people forget about good data, because the good data is often in much, much smaller datasets than what you're talking about with big data. You're not talking about hundreds of millions or billions of records.

It's actually about, are my 50,000 customer records correct or not? And that has a far, far bigger impact than whether you record every mouse click from every single one of your customers. So I think it's been a distraction for many organisations. 

00:04:46 Karen Plum

So here is the next potential fad or fashion that could be distracting people from things that are meaningful for their business. And rather than fixing the data problems of their existing systems, they move on to the next big idea. 

I discussed this with Jeroen, who I knew was also interested in this, because he'd recently talked to a number of experts in pursuit of a better understanding of what all the fuss is about. Are people really being fooled into being more excited about this than is warranted? 

00:05:15 Jeroen Stouten

Yeah, I think so, and that's probably the definition of every fad or hype, you know: it's glossy, it's shiny, it's new - we should probably have that as well; not quite sure why, but we should.

And that has the disadvantage that people don't really know what they're getting into. They just start looking into the data randomly, finding patterns or trying to see things in what they have, without questioning what they're actually looking for. Not questioning what the quality of the data is either - you know, what did you ask, the phrasing of the different questions, the sort of data you work with. That does have an effect on the answers you will find. 

So there is a particular risk in just adopting the hype without really thinking it through. 

00:06:13 Karen Plum

This rings alarm bells for me. The notion that people will go through the data looking for trends and associations, and in a sense, looking for problems to solve. This could be a very good way to create solutions to problems that nobody else has yet identified. I asked Eric about big data and data analytics. 

00:06:32 Eric Barends

It's very popular now, which is good. I mean, when you look 5, 10, 15 years ago, there was hardly any appetite for using data from the organization - evidence from the organization - as a base to build your decisions on, and that is changing, which is a good thing. 

So I'm absolutely in favor of data analytics. When organizations say, yeah, we're going to have a more critical look at the data and we're going to collect data - great, go for it.

However, keep in mind that you should take into account some of the principles we point out in our Modules about what the barriers are: inaccurate data, unreliable data, missing contextual information; there are always measurement errors; there's the small-numbers problem. All these barriers we mention - and it's only one source of evidence. You need multiple sources of evidence. 

00:07:34 Karen Plum

And so we keep coming back to the fundamentals. Consult multiple sources of evidence. I discussed this with Ravi, a data scientist at Novartis Healthcare and a past student of our course. Since he's at the sharp end of analyzing data for his organization, I was interested to know what attracted him to evidence-based management. 

00:07:55 Ravi Jonnalagadda

As a data scientist, I am, day in, day out, working on hypotheses. There is a hypothesis that we have, and we want to see whether the null hypothesis holds or whether we can reject it. 

And most of the time, the results come in favor of what we want to prove through the data analytics work. We get so carried away - we are so happy that this is going to make my manager, or his manager, very happy - and we fall in love with what we have achieved there. 

And the biggest problem is, we are never asking the critical questions, because those are going to come up when the study goes to the next levels, and when they ask us those questions, most of the time my team and I end up not being so prepared for them. And this is where I believe evidence-based management really helps, because even if you got a favorable result for the hypothesis you are working on, is there more evidence you can gather? 

Like, can you go and check whether any similar work has already happened elsewhere, and have they found something similar? What kind of studies were they - was it a controlled experiment, was it a cross-sectional study, or is it something that has happened across industries, or a specific set of industries? Anything specific to the pharma industry? 

See, there are so many options available to you, and when my Learning Head comes back and says, Ravi, are you sure learning has an impact on a person, I say yes, it does. And it is not just in Novartis; there are these kinds of organizations which have also done this, and they have found the same. So we are definitely in the right place with respect to the analysis. 

The data is always there; we are not indulging in heavily manipulating the data, or even slightly manipulating it, to prove the hypothesis our way. But you are still very uncomfortable with the results when facing decision makers if you don't have these kinds of alternate sources that you can look into and say, OK, fine, beyond what I have done, there are studies like this that have happened. And yes, this hypothesis has come out to be true across places. 

And trust me, I have actually used evidence-based management for a couple of my hypotheses, and that is where I really enjoyed it, yeah. 
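For readers who want to see the mechanics Ravi is describing, here is a minimal sketch of a null-hypothesis test in Python. The scores, group labels and significance threshold are invented for illustration; nothing here is Novartis data.

```python
# A minimal, illustrative null-hypothesis test of the kind Ravi describes.
# All numbers and group names are invented for the sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical performance scores for associates who did / did not complete a course
trained = rng.normal(loc=72, scale=10, size=500)
untrained = rng.normal(loc=70, scale=10, size=500)

# H0: training makes no difference to the mean score; H1: the means differ
t_stat, p_value = stats.ttest_ind(trained, untrained, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("H0 rejected - but, as Ravi says, corroborate with external studies before acting.")
else:
    print("H0 not rejected on this sample alone.")
```

Even when the test comes out "favorable", the point of the episode stands: the result is one source of evidence, to be checked against the literature and other sources before it drives a decision.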

00:10:43 Karen Plum

Ravi told me he doesn't always have the luxury of asking lots of questions when he's asked to test a hypothesis, but by searching for other sources of evidence he can make sure that what he's providing to his senior managers is robust and isn't drawn from Novartis’s organizational data alone. 

Fortunately for Ravi, he has a database of over 100,000 people to draw conclusions from. But he's still keen to validate it with other data. 

00:11:09 Ravi Jonnalagadda

Absolutely - and an accuracy of 80% or 75% is there as a testimony to what you have achieved, on data from close to 110,000 associates. But, that kept aside, examples from the outside world give me that additional 25% to 30% weightage and bring in that sense of, what should I say, assurance to my study - and not just for me, but for every other stakeholder who's hearing the results. 

They feel the same way, and then they are much more willing to buy into what's there, and also to ask us whether we have tested this and that. But to make them buy into our study in the first place, get them interested and have them ask us questions, I believe these examples from the outside world are a critical part of it. 

00:12:04 Karen Plum

So students and practitioners of evidence-based management find their own ways of improving the evidence sources that are used to make decisions in their organization. 

When I talked to Jeroen, he had clearly formed a similar view about the use of multiple sources of evidence, which is vital if you are to avoid issues around the transparency and privacy of some of the data, particularly data associated with people analytics. 

00:12:28 Jeroen Stouten

There have to be a number of safeguards in analyzing that data. Transparency is one part - you need to know what you're doing. Transparency to yourself, as the person doing the analysis: not randomly running very advanced statistical analyses without really knowing what you're doing. And second, transparency to employees, because they would like to know how you got that pattern. Is there sufficient reason to consider the analysis reliable? 

Analysis is one part, but you need to verify what you found. So looking for other additional sources of evidence, that's something that is important in the course as well, right? 

So look into the literature. Does the literature concerning your question already point to certain factors - factors you have available in your data - that seem relevant to your question? Maybe you can just test those and see how that works in your organization. Maybe it works the same way, or maybe not. Well, that's something you could find out. 

So you have additional evidence to test your model, to test the factors that you think are important, rather than just randomly throwing in factors and then, well, there's a good chance you end up with correlations that are not meaningful. 

00:13:58 Karen Plum

I think that's a really important point. There's a danger that when you have powerful analytical tools at your disposal, it's easy to go looking for correlations, which you can then convince yourself are meaningful and need action. By the same token, if you don't have a clear idea of what the problem is that you're trying to solve and what you anticipate that the data might tell you, then there's every likelihood that you won't spot potential shortcomings or inaccuracies in the data. 
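To make that danger concrete, here is a small simulation - the "factors" and "outcome" are invented, not course material. Screen enough unrelated variables against an outcome and some will pass a conventional significance test purely by chance.

```python
# Illustration of Jeroen's warning: dredge enough unrelated factors against an
# outcome and a few will look "significant" by chance alone. All data is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_employees, n_factors = 200, 50

outcome = rng.normal(size=n_employees)                # e.g. an engagement score
factors = rng.normal(size=(n_factors, n_employees))   # 50 factors with no real link to it

false_hits = 0
for i in range(n_factors):
    r, p = stats.pearsonr(factors[i], outcome)
    if p < 0.05:
        false_hits += 1
        print(f"factor {i:2d}: r = {r:+.2f}, p = {p:.3f}  <- spurious")

print(f"{false_hits} of {n_factors} random factors passed p < 0.05 by chance")
```

With a 5% threshold you should expect roughly two or three such "hits" out of fifty even when nothing is really going on - which is exactly why starting from a clear question and relevant factors matters.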

The course gives us the list of barriers that Eric mentioned earlier, and this is a helpful discipline to try to trap errors or issues that we might not otherwise recognize instinctively. As Jeroen suggested, don't rely too much on the data and too little on your own analytical skills. Does your data actually make sense? 

00:14:46 Jeroen Stouten

Yeah, that's true. You're relying too much on the data and too little on your own analytical skills. You have to have the human factor involved; it shouldn't be just a machine that gives you the results. You have to interpret what the results might mean in terms of the sort of data that you used. 

Maybe there are limitations to that, or to the quality of the data that you used, and the interpretation follows that quality evaluation. But also, for the results that come out of it, the human factor can interpret those results in light of other evidence, and see whether the results matter. 

For example, one of the people in the video said, well, significant is not necessarily important, right? Something can be significant, but whether it's important or not you have to judge - obviously in terms of the statistical analysis, but also in terms of whether your data shows something that is relevant and a priority for you as an organization. 

00:15:55 Karen Plum

And that resonates with what was emphasized when we looked at the scientific literature: something might be statistically significant, but in practical terms it might be irrelevant.
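A quick simulated illustration of that point (the numbers are invented, not real study data): on a sample of the size Ravi mentioned, even a trivial difference between two groups comes out as highly significant, while the effect size shows it is practically negligible.

```python
# Illustration: with a very large sample, a practically trivial difference is
# still "statistically significant". Numbers are simulated, not real study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 110_000  # roughly the headcount Ravi mentions

group_a = rng.normal(loc=50.0, scale=10, size=n)
group_b = rng.normal(loc=50.2, scale=10, size=n)   # a 0.2-point true difference

t_stat, p_value = stats.ttest_ind(group_a, group_b)
cohens_d = (group_b.mean() - group_a.mean()) / 10.0  # effect size in standard-deviation units

print(f"p = {p_value:.2e}           (highly 'significant')")
print(f"Cohen's d = {cohens_d:.3f}  (trivially small in practical terms)")
```

The p-value only tells you the difference is unlikely to be pure noise; it says nothing about whether the difference is big enough to matter to the organization.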

If you're interested in the video that Jeroen mentioned, there's a link in our show notes. It's 18 minutes of insightful discussion, with a number of big data and data analytics experts. Jeroen was also keen to stress that while data gives us understanding, it doesn't give us solutions. 

00:16:22 Jeroen Stouten

Well, the data gives you a sort of understanding, but it doesn't give you a solution. And I think everyone I talked to about the topic said, well, that is the trickiest part: relating the solution to the problem. At best you know how it works, you know the different mechanisms - for example, there's meaningfulness in people's jobs, and that seems very low in terms of the descriptives, and low meaningfulness also triggers slower performance. You know, just as an example. 

But then, how do you resolve that situation? What does meaningfulness mean to employees? It might be very different for one employee and the other, so the solution will be different as well. For one, it might be that they sit in a very loud environment; the other person thinks the work isn't meaningful because the results or the projects he or she is working on get thrown away all the time. 

So the analysis in itself doesn't necessarily give you a solution, and I think that is something that's overlooked quite a bit. People believe that if they have the results of the analysis, they also have the solution, and it's just a matter of fiddling with the buttons a bit - you know, meaningfulness is low, so we should raise it. 

00:17:52 Karen Plum

So even with all the data nicely analyzed, we could still completely misdirect ourselves if there are factors that we missed. And so monitoring the solution over time is also vital, so we can pick up on things that might be in play, that we didn't monitor or measure. 

Maybe we just didn't have all the data we needed, or didn't cross-refer to other sources of data. This links nicely to another area I discussed with my guests - key performance indicators, or KPIs. In the general sense of "what gets measured gets managed", we set a lot of store by KPIs. Here's Martin. 

00:18:28 Martin Walker

I'll give you an example. In one organization they had a dashboard with six KPIs, relevant to interaction with their clients. I looked into it in more detail and found several of them were just completely subjective. Not that there's anything wrong per se with subjective data, if you actually understand and treat it correctly, but these were just presented as numbers. 

But what I also did, in addition to looking at where the data came from, was build an MI (management information) system that used the real, relevant data about costs and efficiency coming out of the systems. And then I basically correlated these KPIs, which people had been using for years, against the real data.

And I found, I think, four of the six metrics basically had no relationship whatsoever to what they were trying to measure. One of the metrics was consistently giving the wrong correlation - so it was saying things are good when they're bad, or things are bad when they're good. 

And I think just one of the six actually had any kind of power as a tool to understand what was going on. And people had been churning this stuff out for year after year and looking at it and sometimes making decisions off the back of it. And it looked nice! 
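Martin's validation exercise boils down to a few lines of analysis. Here is a minimal sketch, with invented KPI names and simulated monthly figures standing in for the MI system he built:

```python
# Sketch of the kind of check Martin describes: correlate each dashboard KPI
# against the "real" outcome it claims to track. KPI names and data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
months = 36

# Outcome pulled from core systems, e.g. monthly cost per case handled
cost_per_case = rng.normal(100, 5, size=months)

kpis = {
    "subjective_score":  rng.normal(4, 0.5, size=months),                 # unrelated to the outcome
    "efficiency_index":  200 - cost_per_case + rng.normal(0, 2, months),  # genuinely tracks it
    "quality_rag_count": 0.5 * cost_per_case + rng.normal(0, 2, months),  # correlates the wrong way
}

for name, series in kpis.items():
    r, p = stats.pearsonr(series, cost_per_case)
    print(f"{name:18s} r = {r:+.2f}, p = {p:.3f}")

# A KPI with no meaningful correlation to the outcome it claims to track - or one
# whose sign runs the wrong way - is telling decision makers nothing, or the
# opposite of the truth, no matter how nice the dashboard looks.
```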

00:20:01 Karen Plum

When people are busy, which they seem to be all of the time, it's very easy to just keep doing what we do - not questioning the data, just using it because someone, at some time, must have ensured that it was accurate and reliable. 

Who has time to take a step back and ask that awkward question - are we happy that these KPIs are what we think they are? If not the decision makers, then who? 

00:20:24 Martin Walker

And there was an industry based around creating it. That's why I think a lot of the education CEBMa tries to do, and particularly the chapters in our book and in our courses about organizational data, really need to get through to the decision makers who are using KPIs and receiving these dashboards, so that they can then ask the right questions. Because there are layers and layers of people below them whose job it is just to produce this. But for the management decision makers, doing their job well depends on having the right data. 

And if you empower them to ask the right questions, and they then realize that the data they're receiving every day is either simply wasting their time or is misleading, et cetera, then that's a really, really powerful motivation for organizational change, because they realize that bad things are going to happen eventually if they keep looking at data where people have not actually considered the relevance or the quality. 

00:21:27 Karen Plum

My experience of KPIs is that a lot of times, particularly in service delivery areas, they're purely there to ensure the service provider isn't penalized for below standard service. Rarely are the KPIs associated with how the service helps the organization meet its objectives, and as such they become an end in themselves, not a means to a greater organizational end. 

00:21:51 Martin Walker

I do think it's got out of control, in that people think they need KPIs for the sake of having KPIs. But designing KPIs properly actually requires a lot of thinking. You need to understand the business domain, to understand what's actually relevant; you need to understand the data, where it comes from and its quality; and very often you need basic statistics to design the KPIs. It's not an easy job to do. 

You could end up with very similar KPIs in the same organizations, or the same functions doing the same role, but yeah, it's just turned into an industry for its own sake in many cases, and people just don't realize the amount of thinking they need to do before they actually come up with useful KPIs. 

00:22:33 Karen Plum

I think it's important to question the usefulness of KPIs. What are we really measuring, and to what end? They're certainly a feature of corporate life, but they also drive aspects of policy in other areas, as this example from Eric about the opioid crisis in the US underlines. 

00:22:52 Eric Barends

People will use the KPIs to build their practices on and to make decisions. An example: in the American health care system, insurance companies really valued customer experience and customer value, and they found out that one of the things that really matters to patients is whether a procedure is painful - whether they experience a lot of pain. And when procedures or treatments are really painful, patients would not recommend the treatment, or maybe the hospital, which of course is something the insurers wanted to avoid.

So they started to point out to the physicians that it's really important to take a patient's pain seriously, do something about pain reduction, and make sure that people don't experience too much pain. And the result - I mean, the answer is obvious: prescribe a lot of painkillers, because that prevents patients from having too much pain.

The problem, however, with a lot of painkillers is that they are very addictive. Physicians started prescribing painkillers to patients, and the patients got addicted to them. And when the physician said, well, you know, I'm going to stop this prescription now because the treatment is done, they were confronted with the withdrawal effects and started searching for alternatives. 

Like buying painkillers on the black market, or maybe using fentanyl, or, if you're really desperate, trying opiates like heroin or whatever to get relief from the withdrawal effects. 

So one of the drivers - I mean, there are many other reasons why there's an opioid crisis in the United States, but certainly what contributed was having pain as a KPI for the surgeons or the doctors in the hospitals. So you need to really think this through. And when you set a KPI, evaluate it after a while: is this actually working out the way we planned, or are there nasty side effects? We should really discuss it with all the people involved, all our stakeholders, and see whether this is a clever KPI or whether there are side effects we did not take into account. 

00:25:36 Karen Plum

And we'll be talking much more about stakeholders in the next couple of episodes. Before we finish, there's one final area I wanted to touch on, and that's the role that regulators play in prescribing the metrics that organisations need to monitor - whether KPIs, or their twin the key risk indicator, are in fact the tail that's wagging the dog. I'll let Martin explain. 

00:25:57 Martin Walker

It's almost like the regulators' need to get information and data from you is such that - in the pharmaceutical industry in particular, I think - you almost design your organization not just to run as an organization, but actually to be able to produce the data that the regulators need. 

So if regulation and providing data to regulators is a key part of your organization, you actually need to think, in the design of your systems, about how you can effectively collect the right data and do it efficiently and accurately.

What I've seen a lot in banking, unfortunately - particularly in the light of the waves of financial crises over the last 20-plus years - is that a new wave of regulation comes along requiring more data to be sent to regulators. But instead of actually thinking, how should my organization, its systems and its data processes be structured to be able to quickly, accurately and cost-effectively produce the right data, it's been very much: I need to produce something for the regulators. 

Not looking at your core systems and processes, and simply trying to bolt a reporting framework onto the end of processes which can be somewhat dysfunctional and broken in the first place - that can actually make things worse. 

I've seen it myself in some organisations, where the desire just to tick the regulatory reporting box has actually distracted and taken resources away from fixing the fundamental processes and systems so that the data is right in the first place. It's not 100% consistent, but that can and has happened. 

And that is very problematic, and I think many of the regulators actually need to understand that if someone is just getting you data, but they're not fixing the underlying processes, that's actually a big problem. 

00:27:53 Karen Plum

Clearly this is a big issue for organisations with legacy systems. The need to ditch those and go back to basics would be an enormous challenge, but Martin feels this is vital if accurate data and efficient processes are to be delivered. If the systems and processes are good, then the reporting is straightforward. 

00:28:12 Martin Walker

You would then have the information that senior management needs to make the right decisions and not take big risks, et cetera. If you haven't fixed that, if you haven't made the investment, then it's not just that the regulators get information that isn't very accurate; it also means the management are getting data which is not of sufficient quality for them to actually make the right decisions. 

00:28:34 Karen Plum

So it becomes very circular. Poor data leads to poor decision making, leading to a layering of flawed conclusions and outcomes. Which brings us back to the point Martin made in Episode 8 about the need for data intelligence as a skill for managers, along with processes and systems that support the critical decisions that will be taken using that data. 

As with so much of our journey in evidence-based management, there are few shortcuts that are worth taking. You have to do the work, invest your efforts in the right places and do your utmost to think things through fully. Not easy, particularly if you're surrounded by a legacy of - we've always done things this way. 

And that's why change is hard. So often we embark on change based on the experiences of senior leaders without any real evidence, let alone organizational data. So organizational evidence can be flawed, but the course gives us many ways to assess its trustworthiness. 

And as we know, a multiple-source decision has more potential to lead to a better outcome than a single-source decision, particularly if that source is the personal experience of the CEO. Here's Eric to wrap up this episode, and we'll move on to stakeholders in the next episode. 

00:29:51 Eric Barends

So higher up in the organization, stories are being told - compelling stories, because it's a communication strategy to the employees in the organization. But sometimes CEOs or executives get carried away and, based on their personal impressions or feelings or what they see, they can come up with claims and stories and statements that are not supported by the data in the organization.

So I would argue for collecting data, information or evidence on issues that are relevant for the organization, because they support or help decision making on issues that are routine, that come back often, or that are important from a strategic point of view. 

So it's a safeguard to collect data and information from the organization, because you can be led astray by the impressions or experience of executives - oh, we all know that, oh, we can see, hahaha. Yeah, maybe, maybe not, but there's too much uncertainty in the personal or professional perceptions or experiences of your executives. 

You also need the data, the evidence from the organization, to back it up. Again, a single-source decision, regardless of what the source is, is often a bad idea. You need multiple sources.