Behavioral Science For Brands: Leveraging behavioral science in brand marketing.

Kentucky Derby: gambling and the danger of overconfidence

Consumer Behavior Lab Season 1 Episode 35

In this episode, we explore how more data doesn't necessarily mean better results, and what marketers can do instead to reach successful, metric-based outcomes.

[00:00:00] Welcome back to Behavioral Science for Brands, a podcast where we connect academics and practical marketing. Every other week, we sit down, look at some of America's greatest brands, and investigate the behavioral science that powers them. I'm Michael Aaron Flicker. And I'm Richard Shotton. And today, Richard, we're talking the Run for the Roses, the most exciting two minutes in sports: the Kentucky Derby.

Fantastic. I know nothing about this; I didn't even know it was a thing until you brought it up. Yeah. Well, let's get into it. So Richard, horse racing is something with a long history in Britain. Yeah, it's still a very popular sport. We've got big races like the Grand National and the Derby, and the Cheltenham Festival has just happened, so horse racing remains a popular sport.

And connected with horse racing is gambling. Oh, absolutely. It's massive for [00:01:00] gambling. Most people who go to the event live will be betting; very few people, I think, would go and just watch the horses. Yeah. As an American, exercising my natural-born right to assume we invented everything ourselves, I had no idea that the Kentucky Derby was inspired by the Epsom Derby in England.

And I say Derby, you say Derby; same word, different pronunciation. Yeah. And we think this is just the difference between American English and British English? Yeah, I think so. Derby's an interesting word in Britain. There's the horse race, which we'd often just call the Derby rather than the Epsom Derby; it does take place in Epsom, but that never seems to get mentioned.

But it's also a town in Britain, and the word is used for any rivalry. So if Man U played Liverpool, their closest rival, it would be called a derby. It's become a general term for local teams who play [00:02:00] each other, their big match of the year. Oh, okay. I did not appreciate that about the term.

A little history of the Kentucky Derby. In 1875, Meriwether Lewis Clark, grandson of William Clark of the Lewis and Clark expedition, is the head of the Louisville Jockey Club, and he's looking for a way to put on the national stage just how great Louisville and Kentucky horse breeding and horse racing are.

He conceives of the idea of the Kentucky Derby. The first race is held in May of 1875 with 10,000 spectators, and it has been running ever since. So it's a major lineage by American standards; 150 years is a long time. And [00:03:00] what started with 10,000 spectators has grown: Churchill Downs now welcomes over 150,000 spectators each year.

Millions watch on TV. In 2023, at last year's Kentucky Derby, which I attended, $188 million was wagered on that one two-minute race. Wow. Okay. So this is big business. This is big business. And it's not just big business for horse racing and gambling. It's big business for Louisville, Kentucky, and for the tourism industry.

It brings people from all over the world for one week, one weekend, really focused around this race. So it's a big deal for the local economy, and it's a big deal for horse racing. It's the first race in what Americans call the Triple Crown, three big races, and every so often one horse will win all [00:04:00] three

and be named the Triple Crown winner. Horse racing is nice, but gambling is very interesting, and gambling really reveals a lot of human psychology: a lot about how we view ourselves and how we view chance in the world around us. So we thought, let's dig into this; this will be fun for a podcast episode.

Yeah, I think it's a brilliant area to explore. And again, just because the studies are run on gambling doesn't mean they only apply to casinos or bookies. Brands can take the underlying findings and apply them in lots of other areas. Which is something you and I like to talk about a lot: just because a study was done in one area doesn't mean we can't apply it broadly.

So for the two studies we're going to talk about today, you and I have worked on pulling out findings that we think are not only super interesting and penetrating for the gambling industry but [00:05:00] also have wide application outside it. The first one is one of my favorites. It's a Paul Slovic study. He's probably better known for work on ideas like the identifiable victim effect, which we've talked about in other episodes, but in this study from 1973 he was working with professional gamblers. A funny name in and of itself.

Yeah, people who spend all day, every day, gambling. Yes. So these were either professional gamblers or professional tipsters, and he asked them to predict the outcomes of various horse races. The twist in the experiment is that sometimes he gives them five key variables. That might be the last five

results for that particular horse; whether they favor firm ground or, and this is where my gambling knowledge gets shaky, wet ground, which is called heavy; whether the track is grass [00:06:00] or clay; whether they're running in wet or dry conditions. He gives them, purportedly... Exactly, all the data on how the horses perform in those different situations.

Yes. They then make a prediction, their gamble, their bet. When they are given five bits of data, their accuracy and their confidence are roughly matched: they expect their chance of winning to be about 18 percent, and they come in pretty close to that, I think around 15 percent. He then gives them 10 bits of data.

On the next occasion he gives them 20 bits of data, and on the next, 40 bits of data. Now, you would think that as people get more information about the horse, they should get more accurate in their predictions. But that does not happen. You see essentially the same accuracy regardless of the amount of data.

But what does change is their [00:07:00] confidence. Each time they get more data, they become more confident. And remember, they started with a slight degree of overconfidence; each time they get more data, they become more and more overconfident. We'll put the chart in the show notes, but it's a pretty dramatic straight line up.

Dramatic. When they get the 40 bits of data, people now estimate their chance of winning at 32 percent. So it's not quite a doubling of confidence, but they have become massively overconfident while their accuracy hasn't moved. That, I think, is a fascinating finding with much, much larger implications.
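To make that gap concrete, here's a minimal Python sketch using only the figures quoted above; the results for 10 and 20 bits of data aren't stated in this episode, so they're left out rather than guessed at.

```python
# A rough sketch of the Slovic result as described in this episode.
# Only the quoted figures are used; 10- and 20-cue values are omitted.

stated = {
    5:  {"predicted": 0.18, "actual": 0.15},  # confidence roughly matches accuracy
    40: {"predicted": 0.32, "actual": 0.15},  # accuracy stays roughly flat
}

for cues, v in stated.items():
    gap = v["predicted"] - v["actual"]
    print(f"{cues:>2} cues: predicted {v['predicted']:.0%}, actual ~{v['actual']:.0%}, gap {gap:.0%}")

# Confidence rises from 18% to 32%: a big jump, though not quite a doubling.
print(f"confidence ratio: {0.32 / 0.18:.2f}x")
```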

It's this idea that more data doesn't necessarily improve our predictive ability; what it does is create a sense of overconfidence, and that's dangerous. Yeah, and so when we were talking about this study, we said: aha, for marketers, all the rage is [00:08:00] data-driven marketing, getting as much data as we can to make the best-informed decision.

Yeah. This study would tell us our confidence continues to go up the more data we have: of course, how could we not be making the best marketing decisions? But the study says accuracy does not necessarily go up just because there's more data involved. Exactly. So you probably want to be wary of a data-rich environment.

If there's been one big change in marketing in the last 10 or 15 years, it's that more and more data is available. What this suggests is that we are becoming overconfident in our ability to influence people and to sell. Therefore, maybe we don't spend enough budget on our ads because we're so confident in the creative output.

Maybe we don't push ourselves as hard as we should to keep improving the ad because we're overconfident. [00:09:00] Or maybe, if we're overconfident, we don't look to our competitors to try to learn from them, because we believe we've already done a brilliant job. There's a pernicious downside to increased confidence.

The challenge of this confidence is that we draw the stories we want from the data we have. Any agency, if it's being honest about reporting data, will say there's so much data that I could tell any story I want. So I think one of the challenges of all this data is not to let it obscure what was actually effective, or give you a false sense of confidence that, well, with all this reporting coming in, there must be good news.

Instead of asking: were we correct or incorrect in our [00:10:00] wager? Was this piece of marketing effective or not? And it's not costless either. Set the confidence issue aside: there's a downside to all this extra data and the overconfidence, but secondly, the financial cost to agencies and brands is huge.

The more bits of data you are using to make predictions, the more time those predictions will take, the more hours it will take, and that has a financial cost. And I would have thought, in preparing for this podcast, that there would be a lot of writing about this, but there's not so much in our industry press.

Are people talking about it? There were articles about the downside of collecting too much consumer data, the privacy arguments, but there weren't a ton of articles I could find talking about the downside of all this data. But in 2020, Gartner did a study of marketing and data [00:11:00] analytics leaders.

And the question was: what is the number one reason data is not used more effectively in your organization and in your marketing decisions? The top reason stated was, quote, data findings conflict with the intended course of action. Now, wait a second. What are we looking for the data to do? Is the data leading us to what we should do next?

Or is the data explaining, supporting, propping up decisions we've already made? And I think we see this experience among marketers: there's so much data that they don't know what the right data to look at is. Supermetrics, and I'll put these sources in the show notes, found that one out of two marketers surveyed said there's too much data for their companies to properly analyze.

And from the same [00:12:00] source, 64 percent of managers said that managing data was harder than managing people. I think we live in a time where, because there's so much data, not only can it make us overconfident, not only can it be more expensive, but it almost makes us less able to see the simple story.

And the argument from Slovic would be: if you are putting a campaign into practice, rather than collecting 50 different metrics to judge the success of that campaign, have the awkward, tough conversation before it goes live: okay, what are the five or six things we think really matter? Rigorously debate those, pick those five or six, and maybe ignore the long tail of data, because I think you're absolutely right.

Not only does analysing data take huge amounts of time and cost lots of money, you end up with so many conflicting [00:13:00] stories that there's a danger you can prove anything. And some of that is a mirage. The long tail of metrics you're collecting is probably irrelevant; they spin a good story, but they don't actually affect reality.

So pick the five or six things that matter and then base your next steps off those. I can just imagine mid-level marketing leaders, mid-level brand managers, and mid-level agency people saying: well, this all sounds very nice, Michael Aaron, very nice, Richard, but I'm worried my boss is going to say, what about the data you didn't collect?

What about the data you didn't investigate? I think our push to those leaders would be: push the teams that are evaluating success to agree on the success metrics before you go to market. If you can get agreement on what success looks like before you go out, [00:14:00] then you're not collecting data, or swimming in long-tailed big data sets,

just in case you need to tell a different story. And I think you're absolutely right. If an individual goes off and makes a unilateral decision not to collect a piece of data, that's not going to end well. It has to be a... Could be a dangerous decision. ...a joint decision. Yeah. But when it comes to getting that joint agreement from all your team members that they should be more selective in the data they collect, that's where the Slovic study helps, because what it does very clearly is demonstrate, in an experimental setting, that there is a downside to each extra piece of data.

Yes. It's very easy as an organization to just keep adding extra things in because you think it's all upside. Slovic shows you that there will be a negative effect. And making that clear makes it easier for teams to rigorously police the number of different [00:15:00] data sources they use.

Most of what we've discussed so far is still slightly abstract. Are there any practical tactics for puncturing this overconfidence? Yeah. What we're looking for is a way to address this worry that we're just swimming in data, and to bring our confidence back in line with reality.

Gary Klein, in 2010, conducts a study on something he calls the premortem technique. Everybody listening may be very familiar with postmortems: you do a campaign, you do something, and then afterwards you pick apart what worked and what didn't. A premortem is imagining, before you do the thing, what will have failed.

So he recruits 178 participants and breaks them into five groups, and all of the groups are [00:16:00] evaluating a fictitious plan for an H1N1 flu epidemic and what the university is going to do to protect against it. Before the experiment begins, each group is asked to rate how confident they are in the plan's effectiveness at containing the virus on a scale of 0 to 100.

Group one acts as the control; group two critiques the plan; group three is asked to imagine the plan has failed and write down the explanations for why it failed, and this is the premortem group; group four lists the plan's pros and cons; and group five lists only the plan's cons. We'll put the chart in the show notes, but when they were then asked again to rate how confident they were in the plan on a scale of zero to a hundred, group three, the group that did the premortem, showed a 25 percent [00:17:00] reduction in confidence, meaning they were 25 percent less overconfident compared to the control, while those who only imagined what might go wrong showed only a 12 percent reduction in overconfidence.

So what can we learn from this? If we just ask, we're about to launch this campaign, what might go wrong, we reduce our overconfidence by 12 percent. But if we say, before we launch the campaign, the campaign has failed and these were the things that led to its failure, we get a 25 percent reduction in overconfidence.
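As a rough illustration of what those two reductions mean in practice, here's a small Python sketch contrasting the two framings; the starting confidence score of 80 out of 100 is purely hypothetical, since the episode quotes only the percentage reductions, not a baseline.

```python
# The two framings discussed above, with the confidence reductions quoted in the episode.
# The baseline of 80/100 is illustrative only; just the reduction percentages come from the study.

framings = {
    "What might go wrong with this campaign?": 0.12,            # standard critique framing
    "The campaign has failed. What led to its failure?": 0.25,  # premortem framing
}

baseline = 80  # hypothetical pre-exercise confidence, out of 100

for prompt, reduction in framings.items():
    after = baseline * (1 - reduction)
    print(f"{prompt}\n  confidence {baseline} -> {after:.0f} ({reduction:.0%} reduction)\n")
```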

So just the way we phrase it and position it helps us think about it in a different way. I think that's a great tactic. Firstly, it shows both approaches can help reduce overconfidence; there's a positive there, in getting us more aligned with reality. But there is a fascinating boost from using [00:18:00] the premortem technique.

And I wonder if it's better because it puts you in the shoes of what might happen; it's a little more concrete. That statement of 'it did fail, now tell us why' is slightly more concrete than 'come up with some cons.' And it's an interesting idea for marketers, because I don't think many people apply it.

Most people would follow the more standard technique of asking what might fail. Yeah, what might fail, not 'it has failed, explain why.' And I use this study as a jumping-off point to ask: it's one thing to reduce overconfidence, but can imagining that an event has occurred actually increase accuracy? There's a 1989 study by Deborah Mitchell at the Wharton School.

And she found that imagining that an event has already occurred increases [00:19:00] the ability to correctly identify the reasons for future outcomes by 30 percent. So not only is it a technique for reducing overconfidence, it can also be a technique for greater accuracy in the end. So again, a small change of language, not just what might go wrong, but something did happen,

why did it happen? It engages our brains in a different way and gets us thinking about things differently. I think that's a lovely, simple technique that could be easily applied. Yeah. So let's take a break, and when we come back, we're going to dig into a second behavioral science insight around gambling and how we can apply it to brand marketing.

Behavioral Science for Brands is brought to you by MethodOne. MethodOne is a team of modern marketers that practices the art and science of behavior change to fuel growth for indulgence brands. We do this by building interconnected marketing ecosystems that place the human experience at the center of brand building strategies across owned, earned, [00:20:00] and paid media.

To learn how to leverage behavioral science in your marketing or advertising, visit us at www.method1.com. Welcome back to Behavioral Science for Brands. Today, Richard and I are talking gambling, confidence, overconfidence, and of course their applications to brand marketing. So Richard, we wanted to jump into a second study now that takes a slightly different look at confidence in gambling.

Yeah, so this one is a 1968 study by Robert Knox at the University of British Columbia. It's not a very well-known study, but I think it has some interesting applications. Knox goes up to people at a racecourse, 141 people, and just before they place their bet on a horse, he asks them how confident they are.

How confident are they that they're going to win? He gets them to rate their confidence on a seven-point scale, and on average [00:21:00] the answer is 3.48. Then he takes another group of people just after they've placed their bet. So, seconds after they've placed their bet, he asks them the same question, and now you get a very different answer.

You get a 38 percent increase in confidence: 4.81 on the seven-point scale. Once the money's down, they know they're going to win. They know they're going to win. Yeah. And what he argues is that this shines a light on human behaviour, and it suggests that how we think people behave, the interaction of attitudes and behaviour, isn't necessarily the whole story.
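For anyone checking the arithmetic, here's a quick sketch using the two means quoted in the episode.

```python
# Quick check of the Knox racetrack figures: mean confidence of 3.48 before betting
# versus 4.81 just after betting, on a seven-point scale.

before, after = 3.48, 4.81
increase = (after - before) / before
print(f"relative increase in confidence: {increase:.0%}")  # ~38%, matching the quoted figure
```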

So Knox's argument is that most people think attitudes come first: we weigh up which horse is going to win, then we pick the horse, and it's the attitude that leads to our gambling behaviour. But what Knox adds is that the behaviour also influences the attitude. And [00:22:00] that's fascinating, because it's completely counter to how most people think we operate.

We think attitudes come first and behavior follows. What this suggests is that behavior also influences our attitudes. And from a human perspective, from the psychology, you can feel it. If you never make a bet on something and it doesn't go the way you thought, you say, well, I guess not. But when you make a bet and you have something on the line, whether it's money or a bet with friends, you're much more certain it's going to happen, and surprised when it doesn't.

So you can just feel in human experience that that's happening. When you've made an actual, concrete estimation, a bet, a gamble, you're more surprised when it doesn't happen. Potentially this affects us in all sorts of ways. The argument from [00:23:00] Knox would be that we use our past behaviors as a way of explaining to ourselves who we are.

So we look back at the things we've done and assume that we did them for a sensible reason. Right. So if, for example, we gave lots of money to a charity in the past, we think to ourselves: well, I must care about that charity, I must be a charitable person. What we forget is that at the time it might have been a random set of coincidences: you gave the money because your friends were there, or you felt pressure to impress somebody.

Exactly. We forget the nuance, the context, and the chance that often informs behavior, and instead we look back and create this kind of golden thread through our lives that tries to explain who we are because of what we've done. We tell the story of our own self-perception through the actions we've taken.

Exactly. Yeah. So when you and I were talking about these studies, we said: is there [00:24:00] another study that would help make this more practical for brand marketers? Because here we're talking about the confidence of gamblers, and their confidence increases after they've made a bet. Okay, so far so good. Then you and I found a second study, by Jack Brehm, a psychologist at Yale University, from 1955, in which he studies the desirability of products before and after you've decided to take them.

He has 225 participants and a set of home gadgets that they're looking at on a table. He asks them to rate the desirability of each item first, then decide which one they'd like to take home, and then rate the desirability again after they've decided what they're going to take home.

After you've chosen the home gadget you want, your [00:25:00] rated desirability for that product goes up four and a half percent. And interestingly, your desirability for the products you did not choose goes down seven percent. So it shows us that with physical items, once you've made a decision that you actually want something, you put more stock and more desirability into it, and you discount the ones you did not choose.
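As a tiny sketch of what those two shifts add up to, using only the percentages quoted here:

```python
# The post-choice shifts quoted above: the chosen gadget's rated desirability rises
# about 4.5%, while the rejected gadgets' desirability falls about 7%. The spread
# between the two shifts is roughly 11.5 percentage points.

chosen_shift, rejected_shift = 0.045, -0.07
spread = chosen_shift - rejected_shift
print(f"spread opened by the act of choosing: {spread:.1%}")  # ~11.5%
```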

It gives us a sense that for brand marketers, if we can get folks to actually choose it, whether through trying it or even through the act of saying 'I will take this home,' we're going to make the product more desirable. We're going to make it more likely that they'll buy and repeat purchase.

Yeah, absolutely. To me, it flips our approach to marketing on its head. What we normally think is: we run brand advertising, that creates positive perceptions of the brand, it creates desire for the brand, and then people purchase it. What this study [00:26:00] argues is not that that isn't true.

But what's also true is that if you simply get people to try your product, get them in by any means necessary, then the attitudes will also improve. So it gives you a different route for boosting some of those brand metrics. And I think we've seen lots of innovative brands in the last 10 years really taking this idea and running with it.

Warby Parker famously mails you five free frames; you try them on one at a time, mail them all back, and then order the glasses you'd like. So even though they're a DTC e-commerce brand, they have you actually putting the frames on at home. And lots of other brands have virtual try-ons where you use AR

to see the product either on yourself or in your space. Apple now lets you use AR with every one of their products to see it on your table, right in front of you. I think this [00:27:00] is all playing towards imagining it being yours. It's not just an advertisement, not just something I can imagine. With this technology, whether Warby Parker is sending you the physical glasses or AR is letting you see the product in your own space, it's helping you imagine what it would be like if it were yours.

And you're almost choosing it before you make the purchase. Yes, I think you're absolutely right. Those are innovative, lateral ways of applying the principle. I think there are also some really basic, simple ways: promotions, sales, trials, sampling. Tactics that were once seen as a slightly grubby way to just generate short-term sales.

Arguably they also affect brand expectations. So I think there's an interesting angle there: not only can you generate sales by changing attitudes, you can also change [00:28:00] attitudes by generating sales. It's interesting. As you were saying that, I was thinking of The Long and the Short of It, the paper by Binet and Field.

Binet and Field. In that, they talk about the right balance of long-term advertising, which is brand building, aspirational, emotional connection, and short-term advertising, which is promotional, getting you to try it, and can use pricing discounts and tactics like that. But what's interesting in their work is that they talk about the balance between them.

That you need both sides. You can't just do emotional advertising, and you can't just do price promotion, short-term advertising. And it made me think that while it's true that you need both and that there's an interplay, it's not as if each tactic exclusively serves one goal or the other.

Having people try it, having people hold it in their hands, reinforces a story in their head that this is the type of buyer they are, these are the types of things they want to have, and it can affect the overall premiumness [00:29:00] of the brand as well. If I say to myself, I'm a Coca-Cola buyer, I drink Coca-Cola, Coca-Cola is what I drink,

it self-reinforces the idea that that's the premium product for me. Yeah, we look back at our past behavior and we assume that we are a rational, sensible person. So if we've bought Coke repeatedly, we think to ourselves: well, I had all those other soft drinks available to me, and I picked Coke.

It must be the best one out there. So I think there are opportunities in this study. With all of them, though, you can push a principle too far. Yes, there is value in getting people to purchase your product, but if you become so desperate to get that purchase that you slash your prices, for example, you then open a kind of Pandora's box of other biases.

For example, we know really steep discounting damages brand perceptions, [00:30:00] because there are studies showing that people use price as a gauge of quality and assume high price equals high quality. So if your tactic for getting people to try you is to radically reduce prices, you've also sent a signal through the price that you aren't very high quality, and studies like those by Baba Shiv and Dan Ariely show that this can become a self-fulfilling prophecy.