Evidence-Based Management
Module 12 Aggregate - Weigh and pull together the evidence
This episode accompanies Module 12 of the course, which explores how to bring together the various sources of evidence gathered throughout the process. It’s the stage where the question becomes “What does this really tell us?” and where ideas about confidence, belief, and openness to new evidence come to the fore. The episode looks at why existing beliefs can be so sticky, how confidence shifts as new information arrives, and how a little Bayesian thinking can help keep our perspectives flexible.
There are also practical stories from the field, including how asking “How certain are you?” — or even framing a claim as a bet — can reveal far more than expected. The purpose of an evidence-based approach is to reduce uncertainty in decision making by examining likelihoods and probabilities, and this episode explores both how Bayes’ rule can support that and what to do when evidence appears to conflict. Contradictory evidence turns out to be far rarer than many students assume, and the discussion highlights how confidence levels can be surfaced and constructively challenged, and how cross-validation helps build shared understanding and ownership.
Aggregation is reframed not as a technical exercise but as a human one: a process of dialogue, reflection, and sense-making. The episode considers how to handle myths and “zombie ideas,” and how to craft an evidence story that is both accurate and memorable. Above all, the message is to slow down, check assumptions, and involve others — because good decisions depend on understanding the evidence together rather than rushing to action.
This episode was updated in 2025 to reflect changes to the online course relating to how best to aggregate the evidence (via parallel or serial approaches) and the importance of cross-validation with the parties contributing to the evidence.
Further reading / sources mentioned during the episode:
- Nate Silver, The Signal and the Noise (discussed in the episode)
Host: Karen Plum
Guests:
- Eric Barends, Managing Director, Center for Evidence-Based Management
- Denise Rousseau, H J Heinz University Professor, Carnegie Mellon University
Additional material with thanks to:
- Julia Galef - President and co-founder of the Center for Applied Rationality - YouTube videos
Find out more about the course here: https://cebma.org/resources-and-tools/course-modules/
UPDATED EPISODES
In 2025 we updated two episodes to reflect changes in CEBMa's online Evidence-Based Management course:
- Episode 5 - acquiring evidence from the scientific literature (updated in June 2025); and
- Episode 12 - aggregating the evidence sources (updated in December 2025)
00:00:00 Karen Plum
Hello. Welcome to the Evidence-Based Management podcast.
This episode accompanies module 12 of the course, which is all about how we bring together the various sources of evidence that we've been busy gathering, and how we make sure we don't allow bias or irrational thinking to creep in at this next critical stage.
Since this episode was first published, the online course has been updated, to explore how best to aggregate the evidence, depending on what it looks like. Are the sources fully consistent? Partially consistent? Or do they point in different directions?
Students of evidence-based management often get quite concerned about what to do when the evidence is contradictory, but as we'll see, that's actually quite rare. We'll also look in more detail at the cross-validation process, but more of that later on.
For now, I should introduce myself. I'm Karen Plum, a fellow student of Evidence-Based Management and in this episode, I'm joined by Eric Barends, Managing Director of the Center for Evidence-Based Management, and Professor Denise Rousseau from Carnegie Mellon University. I'll also share a short insight from Julia Galef, co-founder of the Center for Applied Rationality.
And I'll be honest, when I first studied this module, Bayes hurt my brain. So, if that was your experience too, at least you're not alone.
Let's start with something really reassuring. Bayes' rule sounds mathematical, but at its heart, it's about how confident we are and how our confidence changes when we receive new evidence. Here's Julia Galef explaining how Bayes changed the way that she thinks.
00:02:01 Julia Galef
So how has Bayes' rule changed the way I think? Well, first of all, it's made me a lot more aware that my beliefs are greyscale. They're not black and white. I have some level of confidence in all of my beliefs between 0 and 100%. I may not be consciously aware of it, but implicitly, I have some sort of rough confidence level in my belief that my friend likes me, or that it will rain tomorrow, et cetera. And it's not 0%, and it's not 100%, it's somewhere in between.
And more importantly, I'm more aware that that level of confidence should fluctuate over time as I learn new things about the world, as I make new observations. And I think that the default approach that I used to have, and that most people have towards the world, is you have your beliefs, you're confident in them, and you pretty much stick to them until you're forced to change your beliefs if you see some evidence that absolutely can't be reconciled with them.
But implicitly, the question you're asking yourself is, can I reconcile what I'm seeing or what I've learned or heard? Can I reconcile that with what I believe to be true about the world?
00:03:17 Karen Plum
Essentially, she's arguing that our default position is that we generally stick to our beliefs unless we're forced to change them, perhaps because we really just can't reconcile an existing belief with what we've just learned or heard about the world.
It helps you realize why the process of sharing evidence can be tricky if people, by default, don't want to consider something that could challenge and upend their existing beliefs.
I also love Julia's example about whether your friend likes you because it's so relatable. Over time, you might learn things that make you feel more confident that your friend likes and values you. Or then again, you might learn things that make you revise that confidence down a bit. It's such a simple way of showing that Bayes isn't really about maths, it's about being open to learning and letting our confidence shift as new evidence comes along.
And that brings us to a great example that Eric often uses when he talks about confidence, because he wants to know how certain someone is about a belief or a claim that they've made. Certainty is very important from an evidence-based decision-making perspective, and people make claims, sometimes quite extravagant ones, all the time.
00:04:33 Eric Barends
It's easy to make a claim; claims are made so easily, and people don't like to be more specific. But it's worth stopping there and saying, wow, that's a claim you're making. How certain are you?
This example comes from Nate Silver, from his book The Signal and the Noise. Nate Silver is an interesting guy because he is one of the most knowledgeable people when it comes to putting a probability on an outcome. He worked for the New York Times, determining whether polls and things like that are actually accurate.
But he pointed out that people are a little bit hesitant to put a number on how likely it is that they are correct, or how likely it is that if we do A, B will follow. People feel uncomfortable and go, yeah, man, I don't know, et cetera.
But what he says is that it helps to turn this into a bet and ask, are you prepared to put money on that? I adopted that, and I noticed that it works. It's slightly provocative, but at some point they get the point you're trying to make: if a claim is being made by someone, informally over lunch or during an official meeting, it helps, of course, to ask, how certain are you?
That's a very important question from an evidence-based decision-making perspective. How certain are you? And then the answer is usually like, pretty certain or quite certain or something like that. It's not very informative.
So if you would then say, would you be able to bet your annual salary on this outcome? Then they go like, wow, I don't know, that's a little bit extreme. And you go, okay, how about a mid-sized car?
And they go like, sorry, this is getting a bit weird, what are you heading for? And then, well, how about a bottle of wine? So the value of the bet is an indication of how certain they are.
And when you ask those questions, it's usually a bit tongue in cheek. But at some point they get it, and they'll be a little bit more detailed about how certain they are. And 9 out of 10 times, it turns out they're not very certain.
00:06:51 Karen Plum
This is so important because the types of claims people make can lead to a lot of work and effort, when their foundations might actually be quite shaky.
When we bring all our evidence together in this aggregation stage, we're really talking about how much confidence we have in what the evidence collectively suggests.
But before we combine it, we need to understand something that surprises many students. So when you first learn Bayes, it's easy to think – oh no, do I have to do this every time I bring evidence together? The good news is, no, you don't.
Here's Eric explaining why.
00:07:30 Eric Barends
To be honest, it doesn't happen that often that you take this Bayesian approach to aggregating all the evidence. In 9 out of 10 times, the evidence tells the same story.
It's not that you show the evidence in a cross-validation session, as we call it, to all the participants and everyone just says, oh yeah, that makes perfect sense. They absolutely have critical questions, but they're mainly about contextual factors.
So the aggregation phase is very valuable from that perspective, bringing together all the different sources of evidence with their own perspective and their own insights. And it creates a full picture rather than the parts.
00:08:15 Karen Plum
So that's the good news. While we might imagine aggregation as a battlefield of conflicting evidence, Eric says that's not typically the case. Most of the time, aggregation isn't about probability formulas. It's about sense-making.
And the best way to do that is not alone in a darkened room, but together. Which brings us to one of the most powerful parts of the whole process. Eric mentioned presenting the evidence in a cross-validation session. And that essentially means bringing people together, the practitioners, the stakeholders, the people who contributed to the evidence, and inviting everyone to take a critical look at the emerging picture.
So how does that work?
00:09:01 Eric Barends
The purpose of the cross-validation is to compare the different sources of evidence. It's a process of dialogue and reflection, as we state in the module. It's not me telling, hey, this is what we found, are you okay?
No, it's like, hey, this is what we found, does this resonate with you? Do you recognize this? Are there any situations where you feel this is not applicable? And 9 out of 10 times they come with examples or situations. They say, yeah, I agree, however, I had a situation where... and you go, oh, that is interesting.
And you get good questions. People ask questions about the research. Did the researchers find anything about this? Did they find anything about this type of organization?
So it is very valuable. And often, when you agree on the outcome and there are no surprises and we're all in agreement, you can do this very fast.
It's usually one meeting with people and we all agree and we have the questions and we bring in the contextual factors. So it really helps to ask the practitioners in particular to have a look at the findings and reflect on that.
00:10:16 Karen Plum
Without realizing it, I've been doing this for many years in my consulting practice. And rather than taking all the evidence and presenting it to clients and stakeholders as the last word on the situation, we've shared the results with them as emerging findings, so they could feel able to contribute, to pick up on things we might have missed, and, as Eric said, to ask questions about the data or challenge it.
In my world, this was part of the change process, and a critical part of that is building trust with the parties and building understanding and ownership of the findings.
Another reason for doing this, and one that has caught me out in the past, is the set of beliefs that stakeholders themselves might hold without our even being aware of it. And this is an opportunity to surface those beliefs and to share other evidence that might provide a different perspective or interpretation.
Here's an example about a stakeholder who had read something about open plan office space and was absolutely against the idea.
00:11:20 Eric Barends
So stakeholder evidence, even when it's subjective and based on misinformation, that's your starting point. You invite them to a cross-validation session and say, hey, from the research, actually, this is what we see. The research tells us that if we just knock out the walls and make this one big floor with desks and open office systems, then you're in for trouble.
But this is also what the research tells us: you need no-interruption zones, and you need special facilities for people who have to work in silence because otherwise they can't concentrate. And more confidential meetings, between a client and, for instance, a lawyer in a legal firm, you can't hold those in an open office.
So it gives way more nuance. However, there are things that you can do, and these are interventions or design features that would really be helpful.
And then they trust that we had their interest at heart, that their voice was heard regarding their critique, and that their critique actually makes sense, but we have a good answer.
00:12:32 Karen Plum
So this isn't about proving people wrong. It's about helping everyone see a clearer picture. The cross-validation is a vital opportunity to look more critically at all the evidence.
But even in the face of all the evidence, people may still hold the same view about their claim or belief. And so that's when we need to see how strong that evidence is and how much they are prepared to bet on the outcome, as we explored earlier.
And once we've made sense of the evidence together, we can start thinking about how to combine it. There are two main ways to aggregate the evidence: in parallel, when the evidence broadly aligns, and serially, when it doesn't.
And this is where Bayesian reasoning comes in. I'll let Eric explain the difference.
00:13:23 Eric Barends
A parallel approach is, as we discussed in the module, when the evidence is more or less aligned and more or less tells the same story.
While the serial approach is more like a probabilistic approach. That's where you really do the math and think a bit more about whether the scientific evidence could be wrong. This is a cross-sectional study, so yes, they found that X is correlated with Y, but could there be other reasons why X is correlated with Y? You know, what we call confounders, as we discussed in module 7?
You think a little bit more, you invest a little bit more time on the evidence from the organization. Could there be a reason why these data are as they are? Can we be a little bit more critical? I mean, you always need to do that.
That's why we do the critical appraisal, but this is an extra step, where you do it in a probabilistic, almost mathematical way, to calculate the odds that the outcome is not exactly as it appears.
And it is a situation where the strength of the evidence is taken into account more than during the parallel approach. In a parallel approach, you take a more remote view and say, well, we're broadly in agreement, and here are some exceptions. Here, in the serial approach, you say, OK, the managers we interviewed all say that working from home reduces performance because people are easily distracted. However, we have three large randomized controlled trials that suggest that is not the case. So could it be that these managers are biased?
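As a rough sketch of the "almost mathematical" step Eric describes (an editorial illustration; the numbers below are hypothetical and not from the episode), the serial approach can be written in the odds form of Bayes' rule, where new evidence multiplies your prior odds by a likelihood ratio:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}$$

So if you start roughly 1:1 on the managers' claim that working from home reduces performance ($H$), and three large randomized controlled trials showing no such effect ($E$) are judged about four times more likely if the claim is false than if it is true, the posterior odds become $1 \times \tfrac{1}{4}$, or 1:4, roughly 20% confidence in the claim. The strength of each evidence source is what determines the size of that likelihood ratio.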
00:15:22 Karen Plum
So after the cross-validation session, we're reaching an overall aggregated judgment on the evidence. And that brings us back again to the question which I and lots of other students ask, what do I do when the evidence contradicts?
I found the answer surprisingly comforting.
00:15:40 Eric Barends
It's the first question we get from people who are, you know, introduced to the idea of evidence-based decision making, an evidence-based approach. They always ask, what do you do when the evidence contradicts?
And then you explain, well, that doesn't happen that often. I mean, it happens, of course, in particular where the evidence is still emerging, because it's new.
00:16:09 Karen Plum
So that's reassuring. If there's a new emerging method that everyone's implementing and my bosses are keen on, then I need some strong evidence if I'm going to persuade them that this might not be a rock-solid approach.
The people marketing this novel approach have vested commercial interests in selling it, after all, don't they? And don't get me started on the zombie ideas.
00:16:33 Eric Barends
But those are the only places where you have contradicting evidence: where the evidence is emerging, or where the research, the evidence, is well established but there are myths, ideas that are still very strong in the domain, in the discipline.
I think Ben Goldacre calls them zombie ideas. They were debunked years and years ago, but people are still doing this; it's still being taught. So there you have conflicting evidence: some people say, but I use this, it's actually very helpful. And you say, well, actually, the research on the MBTI, a personality test where you get a type code, is not that strong and it's not taken very seriously.
00:17:22 Karen Plum
So beware of those zombie ideas, things that have been consistently disproven but which refuse to die. Anyway, having compared and combined the evidence, the next step is to tell the story of what it all means.
And no, this isn't about creating a fairy tale. It's about presenting the evidence in a way that shows why we looked at the question, what we found, what changed our minds, and what the evidence suggests.
The reason we do this is to recognize that not everyone is trained to deal with numbers or responds well to receiving data in that way, but a good story will stick with people.
And as throughout all the steps of the evidence-based decision-making process, this is another opportunity to check in with our data, our logic, and whether any bias might have crept in unnoticed as we document the story.
Here's how Denise Rousseau describes it:
00:18:23 Denise Rousseau
I think, in the process of building a logic or a framework (here's what I think will work, here's what my evidence says, here's why I think it works) and then doing a test, one of the most important ways in which people can de-bias themselves is in conversation with others.
And having, let's call it a logic model or a theory of change, gives you a kind of picture: here's what I think is going to happen under these conditions, here's how I'll get there, and here's the evidence that supports it.
You are then in a position to ask other people: check my assumptions here. Is this making sense? Check my assumptions here. Does this fit with the data? Don't ask them what they think.
Ask them, check my assumptions, because that's what we're trying to de-bias and to get a better handle on.
00:19:09 Karen Plum
I love her phrase, check my assumptions, because that's exactly what a good evidence story does. It shows your reasoning clearly enough that other people can question it, test it, improve it, and all of that builds trust in the process.
It's important that we keep in mind that our brains naturally look for patterns and that confirmation bias is strong.
Taking those extra steps to consider whether there are other plausible explanations for the presence of the evidence we've identified seems to me pretty critical. Here's another thought from Julia Galef.
00:19:47 Julia Galef
It's that extra step where you're asking yourself, suppose I was wrong, what would that world look like, and how consistent is the evidence with that world? That's the crucial step, that's the active ingredient.
Because if you're not doing that, if you're just asking yourself, is this evidence consistent with what I already believe, then it's just so easy to stay entrenched in your pre-existing beliefs, because there's almost always a way to make the evidence consistent with them.
So that extra step is what sometimes forces you to notice, oh, this evidence doesn't really support what I believe, or maybe it does, but only a little bit.
00:20:26 Karen Plum
People typically react to evidence by trying to find a way for it to be consistent with what they already believe. And because we're creative, we can generally do that.
But if we adopt a little Bayesian thinking, we can challenge ourselves and ask, as Julia alluded to, how likely is the evidence assuming that my belief is true, versus how likely is it if my belief is false?
Given that we're typically looking for shortcuts, confirmation and patterns, considering whether our belief might be false is a difficult and challenging step to take.
And that's what sets evidence-based practice apart from other forms of decision-making, I guess.
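To make that question concrete (an editorial illustration with made-up numbers, not from the episode): if some evidence $E$ is almost as likely when the belief is false as when it is true, say $P(E \mid H) = 0.6$ versus $P(E \mid \neg H) = 0.5$, the likelihood ratio is only $1.2$ and your confidence should barely move. If instead $P(E \mid H) = 0.6$ versus $P(E \mid \neg H) = 0.06$, the ratio is $10$ and the same evidence should shift your confidence substantially. Evidence that is easy to reconcile with both worlds tells you very little.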
Anyway, back to the point about creating a narrative, a story to explain the evidence we've found, something that'll be emotionally engaging, make sense and be memorable.
If our audience isn't going to thank us for a PowerPoint deck crammed with effect sizes, measurements and methodology, and we aren't sharing a fairy tale, then what would this story sound like? Here's an example.
00:21:35 Eric Barends
So there was this organization, you name it, Google or Novartis, that had an issue with its innovation capacity. And their idea was that the root cause was a poor innovation culture.
And then you tell the story that actually it turned out it was not the innovation culture. It had to do with psychological safety within teams, people feeling reluctant to speak up. That's a story that sticks. People remember that.
00:22:08 Karen Plum
As we saw in the module, the story is told in a logical and coherent way, to organize and share information so that each element follows on smoothly from the previous one, carefully explaining the reasoning and the steps taken.
And this is where aggregation becomes communication, making the evidence accessible, transparent, and meaningful. But there is a health warning here. It's all about the audience.
And while it helps to have a story, don't exclude data points the audience might be looking for. Academics would likely expect to see effect sizes and measurements, for example. As with any presentation, know your audience.
So we're almost at the end, but there's one more boundary that really matters. It's very tempting, once you've brought the evidence together, to jump straight into deciding what to do, but that's the next step.
And mixing the two makes everything messy. Here's Eric.
00:23:08 Eric Barends
Well, it's important to keep every step separate from the other and to take a structured, stepwise, systematic approach. And it all comes down to one of the most important insights of evidence-based decision-making: slow down. Slow down.
Don't try to solve your whole problem in one session, where you present the findings and also do the aggregation and the application and the assessment. No, just slow down. The basic message here is take your time and breathe.
Don't cram everything into one session. It is important to keep those steps apart because first we need full consensus and agreement: do we think we should do this? Is this worth it? And then the question is, how are we going to do this?
And of course, everything that is brought in during the session, you take with you. But it should be separate sessions because they're different questions.
We need time to process this information, and we need time to go from one part to the other part, and the acceptance and the uptake will probably be better when you really paid attention to all these steps rather than rushing through it.
When I say slow down, I'm not saying that you should take two months in between or whatsoever, you can start the next day and wrap it up in two days.
00:24:36 Karen Plum
Slowing down might be the hardest part of the entire evidence-based management process, especially when everyone else seems to want the answer now. As Eric rightly points out, the pressure to go fast is always there.
00:24:51 Eric Barends
This is the demon you fight actually through the whole process. We started with module one about what exactly is the problem you're trying to solve. And the fact that you asked this question already slows the process down a little bit.
You create a little bit more headspace. Oh, actually, that's a good question, why are we doing this, et cetera. But the inclination is to go fast, like, hey man, we all know what needs to be done. Let's do it and get on with it and get results.
So that pressure is always there, the pressure to go fast, the sense that there's no time. And that is actually the biggest adversary of an evidence-based approach. We're always fighting this: oh, we need to go fast. But it doesn't take that much time. When you say slow down, people hear: stop doing anything and do nothing. No, the moment you stop, your brain keeps processing the information and looks at it from a different angle. You go to lunch, you have a talk with someone, et cetera, and then the creative process starts in your brain, which is what you need to solve a problem.
00:26:01 Karen Plum
In fact, taking time to reflect is one of the core principles of an evidence-based approach. Ask yourself, when do you have your best ideas and insights? When you're in the shower, out for a run, walking the dog? Or when you can bounce them off other people? And then remind yourself of that when there's a pressure or temptation to just go fast.
So that's aggregation. Not proving, but aligning. Not scoring, but understanding and making sense. Not rushing, but thinking and reflecting. And doing it with people, not on your own.
In the next episode, we'll look at how to move from understanding the evidence to applying it in practice. And I'll leave you with another reminder to take your time and never be pressured to rush to judgment.
00:26:56 Eric Barends
That's the difference between an evidence-based decision maker and maybe a junior MBA student. If you ask a junior MBA student with limited experience, like, what do you think is wrong here and what should we do? You get an answer - they say, oh, I think this is wrong. We should do that.
And an evidence-based decision maker would say, I don't know. Can you give me some time? I've only just heard about this issue, so I need to think about it and see what more evidence is out there. It's the same with a physician: if you go to your doctor and present your symptoms and say, so what's wrong with me and what should be done?
The doctor will look at you like, okay, now I first want to do some tests. I need some more information, et cetera, unless it's clear.
But this whole idea of slowing down, taking a little bit more time to reflect is one of the core principles of an evidence-based approach.