Research Matters

Gordon Pennycook on why smart people believe dumb things - Research Matters S2E16

Cornell University Season 2 Episode 16



In this episode of Research Matters, psychologist Gordon Pennycook dives into the surprisingly relatable science of why smart people believe dumb things, from conspiracy theories to “deep” nonsense that only sounds profound. Blending humor with research, the conversation reveals how our fast, distracted brains and overconfidence make us easy targets for misinformation, and why the real fix might be as simple (and as hard) as pausing to think. Watch here.

Gordon Pennycook:

We ran a field experiment on Twitter where we created a bunch of, like, cooking bots. So just generic, random bots, and we followed, like, many, many, many people with these bots. And then some people follow you back when you follow them on social media. And so the people who followed us back, that allowed us to send them a direct message, slid into their DMs, as the kids would say, with a question: How accurate is this headline? And then everyone ignored it, because it's a weird message to get from a cooking bot. But still, because we asked about accuracy, the first sentence was about accuracy, the quality of the news content they shared on Twitter in the 24 hours that followed was improved: more stuff from the New York Times, less stuff from Breitbart.

Laura Reiley:

Welcome to Research Matters, a podcast from Cornell University where we talk with researchers who are tackling real-world problems and finding surprising ways to fix them. I'm your host, Laura Reiley, and today's episode is one I've been looking forward to: the science of human stupidity. Now, I know what you're thinking of: my boss, my cousin on Facebook, the guy who's constantly sending me those conspiracy memes. But it isn't about calling people dumb. It's about understanding why all of us, smart people included, sometimes believe and do stupid things. My guest today is Dr. Gordon Pennycook, professor of psychology at Cornell University. His work explores why our brains fall for misinformation, how overconfidence warps our thinking, and how better reasoning might just save us from ourselves. Gordon, welcome to Research Matters.

Gordon Pennycook:

Thanks for having me.

Laura Reiley:

So, I guess let's start. So why do smart people believe dumb things?

Gordon Pennycook:

Well, in many ways, the way that our brains evolved made us not very attuned to the modern context that we happen to inhabit. We're really good at drawing out and paying attention to things in a specific kind of context. Once there's a lot of stuff going on, it's pretty easy to get distracted.

Laura Reiley:

We were better when we were hunter-gatherers.

Gordon Pennycook:

Maybe. I mean, we were maybe more attuned to that context. I think the other thing is that we are very good at making snap decisions and thinking quickly. We did evolve a capacity to stop and to reflect, but it's something that we kind of save for particular contexts. The modern context is one where we need more of it, but we haven't caught up yet, basically.

Laura Reiley:

So one of the things you said to me in an email that I found interesting was that motivated reasoning is overrated as a concept. I'd love for you to just unpack what you mean by that.

Gordon Pennycook:

Right. So my general take is that the biggest reason we make so many mistakes is that we don't spend enough time thinking about things. Basically, we rely too much on our gut feelings and our intuitions. Motivated reasoning is this idea that people are thinking too much, that they're spending time convincing themselves that the things they want to believe are true. And I don't really think we spend that much time doing that. I think we don't spend enough time thinking. And one of the reasons this idea of motivated reasoning is so popular is that it uses the language of partisan animosity to explain why people disagree. You say, that person over there who believes something different from me, they don't even care about the truth, they're just deluding themselves into believing these things. It's because it's so hard for you to understand that someone would actually believe something different from you, when in reality they're probably just getting different information than you are. Their intuitions are different from yours. You know, they've been raised by different people, and so they have different views on what seems true to them. It's less about self-delusion, and more about lazy thinking.

Laura Reiley:

So I think one of the papers that certainly got a lot of traction for you, and that you're known for, is, I guess we can say it here, the psychology of bullshit: the kind of word-salad experiment of giving people an array of buzzy words, and intuitive people saw profundity. So here's an example that you gave of a buzzy word salad: "Hidden meaning transforms unparalleled abstract beauty." A lovely, lovely bunch of words. But you hand that to someone and they see profundity, not word salad. And why is that?

Gordon Pennycook:

I mean, it's a great example of what happens when something feels intuitively right to us. And by the way, bullshit is a technical word. It's from Harry Frankfurt's excellent essay called On Bullshit. So bullshit is different from lying, right? If you lie, that means, or implies, that you care about the truth. You care enough about the truth to try to subvert it or to convince someone else to believe something else. Bullshitting is basically defined as something constructed without regard for the truth.

Laura Reiley:

So is it like disinformation versus misinformation, kind of?

Gordon Pennycook:

Sort of, there's a similar... Yes, because disinformation is you're deliberately trying to convince someone of something that's false, or at least false by your own perception. Misinformation is just something that's false. Bullshit is a little bit different. It can be misinformation, but it doesn't have to be false. It just means that you don't care about the truth when you've made it. And so it can be very common in advertising, or in that case, the word salad. It's not false. There's no meaning. It's just a random sentence with buzzwords. And even though it's a random sentence, people find profundity in it. And when I say find, I don't mean that they're searching for it and thinking about it. It just seems profound because they haven't thought about it. It's fancy words. People who are more intuitive, because they're accepting things in a more automatic way, are less able to distinguish between what is and what is not BS. So they're less discerning.

Laura Reiley:

So is that why, with certain politicians, it takes parsing it after the fact for us to recognize there's no actual content, or...

Gordon Pennycook:

That's... I think it's a thing that happens.

Laura Reiley:

It's the Peter Sellers, Being There, kind of effect, yeah. So you've written about this. Another thing that happens in politics is platitudes. It's not abstract jargon, where it's hard to know exactly what it means; a platitude is something that seems profound because it's so simple, but when you think about it, not a lot has been said. And I think that's something you've written about quite a bit. And I do think it kind of sums up the moment that we're in: a distracted world, more information, more distractions, the thing in our hand all the time. How do you unpack that in terms of what your research has found?

Gordon Pennycook:

Right. So if we go back to the primary point I was making, about how we essentially rely too much on our gut feelings and intuitions, add that into a context that is now basically selecting for information that appeals to our feelings and our intuitions. There are some cases where that's not a big deal. If you're decorating your house, you want things that catch your eye in certain ways, things you find appealing, and there's no consequence to that. But in the context of social media, when it's news that's made up, or outrageous, like videos of political others that make you angry, these are the things that will take our underlying psychology and basically weaponize it against us. And the only way to get around that is you have to slow down, you have to stop, and you have to actually think about the things that you're engaging with, and ideally modify your feed so that it doesn't require so much work to parse through. You have to make better decisions about what you're engaging with.

Laura Reiley:

So modify how? Like, give us a tool. What's a modification we can make?

Gordon Pennycook:

So if you are on social media and you're following somebody who is taking advantage of outrage and fear and anger, you know, get rid of that. You don't have to follow that person, or...

Laura Reiley:

I mean, you have to have the, I don't know, clear-minded wherewithal to recognize that they're a polemicist, or that they're, you know...

Gordon Pennycook:

Or even more than that: have a pretty low bar, a low tolerance, for people who are misleading you and making mistakes. Because there are a lot of great journalists and sources of information; that's the other side of the coin. It's the age of information and also the age of misinformation, but there are lots of great sources you can draw from, and so you just have to be discerning in how you're selecting the feed that you're engaging with. The problem is that repetition increases how much people believe things, even for people who are extremely intelligent. That is equally true for people who are smart and people who are more intuitive, and so the way that you curate your online experience is extremely important.

Laura Reiley:

So I mean, is it fair to say that misinformation spreads because people don't slow down, or is there something more malicious in the world that we live in?

Gordon Pennycook:

I mean, it could be both things. The reason it spreads is the underlying psychology. The malicious part is the people who are making it, right? People who are deliberately trying to inject bad information into our information ecosystem. But they don't have the tools to spread it themselves. There are bots and such to try to amplify it, but it's basically up to people to figure that out, and it's people who spread it. So it's both things at once.

Laura Reiley:

So I imagine that mainstream media and organizations like PolitiFact are huge fans of your work, because in some ways you legitimize what they do, right? You call attention to how fact-based some traditional, hierarchical news entities are, maybe because there's more traceability, I guess, with the information. So how do we explain the kind of epic free fall that mainstream media is in, and the rise of deepfakes, and our own president putting out fake videos? How does that work?

Gordon Pennycook:

Well, I mean, there are different elements of that. Some of it is, like I said, the top-down part. There are explicit cases of, you might say, political elites, that's one of the terms often used in political science to refer to the people who are trying to generate the ideas and the narratives that will influence people. And there has been, in my own view, a lack of regard for the truth. I think in a certain sense we have, like, a...

Laura Reiley:

By whom? By rank and file?

Gordon Pennycook:

It's political elites in particular.

Laura Reiley:

But that's always been the case, right? I mean, I just watched the Garfield documentary, or show, and it was all about spreading misinformation.

Gordon Pennycook:

But they don't have, they didn't have the tools that we have now.

Laura Reiley:

It was the megaphone problem.

Gordon Pennycook:

Exactly. So then we have online influencers who have gigantic followings and don't really have any constraints on what they say, in terms of, like...

Laura Reiley:

The Joe Rogan kind of thing. That's not a journalist, that's a... something else.

Gordon Pennycook:

And I think the key thing that I often tell people is that journalists make mistakes, and there's, of course, a wide range of what counts as legitimate versus illegitimate journalism, yellow journalism, all that kind of stuff.

Laura Reiley:

Probably it's a continuum.

Gordon Pennycook:

Yeah. But people at reputable places, if they were to make something up, they could get fired for that. I mean, they would be fired for that.

Laura Reiley:

Oh yeah, with a correction, you get called on the carpet, and you have to fill out forms and mea culpa.

Gordon Pennycook:

Exactly, and the actual journalistic outlets have a brand around trustworthiness. Whereas if you're a social media influencer just making YouTube videos, if anything, you're incentivized to make things up, because you get more clicks that way. There's no additional constraint on whether you tell the truth or not.

Laura Reiley:

Is there a problem, in terms of mainstream media, that newsrooms are divided between the news side and the opinion side, and that those are blurry now? We've seen the rise of the opinion side, and at the New York Times, the Wall Street Journal, the Washington Post, you're seeing the creep of opinion above the fold, as we used to say. Is that problematic in terms of normal people parsing the news?

Gordon Pennycook:

Yeah, people don't know the difference between them, and it's a gigantic difference, because opinion is not news. With an opinion, you're allowed to say what you want. It doesn't have to be fact-based, necessarily. And at good outlets, they have some editorial guide on what should be published as an opinion piece, but it's not the same as the news. And part of the other problem, the thing that led to that, where most of what people think of as news is actually opinion, is 24-hour news stations. There's not enough news to fill 24 hours. So what it is, is punditry, and so it's opinion. Within an hour block of time, you have maybe 10 minutes of news.

Laura Reiley:

A paucity of real news gets filled in with... All right, well, so I guess: can things like AI help, or is it going to hinder? What's your take on antidotes to some of this, or at least ways of pushing back against it?

Gordon Pennycook:

Right, so people are concerned about AI, and I think that's totally legitimate. One of the things that we've done is we've tried to use it to improve things. One thing that AI is good at is forming good, coherent arguments using lots of evidence and facts. And so we wanted to test this underlying idea. What I was talking about before was that people rely too much on their intuitions, and that kind of leads them astray. But another version of that is, if people aren't stopping and thinking, then they're not really evaluating evidence. But if you're able to give people a huge amount of really high-quality evidence that specifically addresses their idiosyncratic beliefs, then it's possible that you might actually change their mind. And so that's what we were doing in these experiments, where we had people who were down the conspiratorial rabbit hole talking with an artificial intelligence. They knew what they were doing; not literally talking, but having a typed conversation with a computer. And what we had the AI do was give them specific counter-evidence that addressed the unique beliefs of that person. So the person might think 9/11 was an inside job. They would give some reasons for that. Then the AI would debunk all their reasons, they would respond, the AI would debunk those, back and forth a few times. And after about eight minutes of conversation, we found that about a quarter of the people who believed a conspiracy didn't believe it anymore.

Laura Reiley:

Does this mean that AI could be taught to do that more generally, and that it's...

Gordon Pennycook:

Yeah, oh yeah, that's just prompting.

Laura Reiley:

They're going for engagement, rather than veracity.

Gordon Pennycook:

Exactly.

Laura Reiley:

So how about the just, you know, we have this sense that that AI is only as good as the information it scrapes. How do we winnow what it's scraping so that we can be more confident in the truthfulness?

Gordon Pennycook:

Well, in this case, they're trained on the internet, and one thing the internet is good at is conspiracies. We had fact-checkers evaluate the quality of the evidence that the AI was using, and it was actually nearly perfect in this case. There's a lot written about most conspiracies, so it's able to do that in terms of search. Now, we also had a case where we did it following the first Trump assassination attempt. There were a bunch of conspiracies immediately after it that were, as is often the case, just kind of growing. And we did that same experiment. In this case, the AI didn't have any information about what actually happened. But it was still effective, because it basically argued to people that they didn't have information either, which was true, and it was able to increase people's skepticism. They were more uncertain about whether the conspiracies that they had come up with, or were just hearing, were true.

Laura Reiley:

So you're saying increasing people's skepticism is a way to push back against just digesting things without, I don't know...

Gordon Pennycook:

In an engaging way, with good arguments, it's possible to really convince people to change their minds. And I think this is counterintuitive for people, because, you know, say you have a relative at Thanksgiving or something, and they...

Laura Reiley:

A lot of us are thinking about a couple weeks from now and what might happen, sure.

Gordon Pennycook:

And then you have a conversation where they believe something that seems pretty dubious to you, and you try to tell them why they're wrong, and they don't believe you. One of the reasons that doesn't work is that maybe your arguments aren't very good, you know. And it's the same in our experiments: not everyone changes their mind. It's not like you can just flip a switch. But if you think about it, all the information that sent them down the rabbit hole... in theory, you need almost as much, and maybe a little bit more, to get them back out. And so one conversation is rarely enough to convince someone to basically deconvert. But in this case, using AI, we can give them so much information that it actually pulls them pretty far out of the rabbit hole.

Laura Reiley:

Wow. All right, so if we're going to bring it back to our theme, kind of research that fixes real world problems, what does that look like for your work, and how can studying human stupidity actually make the world smarter?

Gordon Pennycook:

Right. So some of it is that, the power of facts and evidence, but there are simpler things too. Going back to the social media context, that is a case where people are essentially inattentive. And one thing that we found is that when people share misinformation online, they often don't even consider whether it was true or false before they share it. They're not thinking about it because...

Laura Reiley:

They're sharing entertainment or...

Gordon Pennycook:

They share because it's...

Laura Reiley:

Just saying, hi...

Gordon Pennycook:

if it's news, it might be that you share it because it's like, oh my God, this is crazy, people need to know this, and it didn't occur to them to ask, wait, is it true? And so we've done these experiments. What we find, actually, is that people share more false things than they would believe if we asked them directly whether it's true. So there are cases where people are sharing things that they would be able to identify as false if they thought about it. So what we do is we have these simple interventions. It's called an accuracy prompt, or an accuracy nudge. It's just a reminder to think about the truth, to think about whether things are accurate.

Laura Reiley:

What does that look like? So it just pops up on their screen?

Gordon Pennycook:

Yeah, it could be like an advertisement; we've done ads. It could be what's called an interstitial, something that just kind of pops up. It could say any number of different things.

Laura Reiley:

So what does it say exactly?

Gordon Pennycook:

It just has to be a subtle reminder of whether something is true. I'll give you a more extreme example. We ran a field experiment on Twitter where we created a bunch of, like, cooking bots. So just generic, random bots, and we followed, like, many, many, many people with these bots. And then some people follow you back when you follow them on social media. And so the people who followed us back, that allowed us to send them a direct message. And so from a cooking bot, we sent them a DM, slid into their DMs, as the kids would say, with a question: How accurate is this headline? And then everyone ignored it, because it's a weird message to get from a cooking bot. But still, because we asked about accuracy, the first sentence was about accuracy, the quality of the news content they shared on Twitter in the 24 hours that followed was improved: more stuff from the New York Times, less stuff from Breitbart. Just that little reminder, it was in their head now. People need facts and evidence that can fight against all the other things that are diminishing those in our online environment, in the current context that we live in.

Laura Reiley:

Wow. So is there any other advice that you have? I mean, it sounds like you're saying pause, take things a little slower. Think slower.

Gordon Pennycook:

Yeah, put the truth first, stop and think. I mean, that's basically it; you basically summarized it. You sound like my mom. Yeah, exactly. None of it is really groundbreaking stuff. It's just that, in the context of the way people have been talking about the rise of political polarization, a lot of it has been under the guise of people believing false things because they want to, and I don't think that's true. I think people are being misled, and they want the truth, and if they had time to stop and think about it, they would probably do better.

Laura Reiley:

So what are the next big questions that you're asking in this realm?

Gordon Pennycook:

Well, there's the version of our interventions that is scalable: simple reminders about the truth. That's easy to do; you can put that in ads, you can do stuff like that. Then there are the things that have large effects, like these conversations where people sit down and get dense counter-evidence for beliefs that are dubious. We want to try to bring those together: what's the scalable version of people getting good facts and evidence in a personalized way? We haven't cracked that nut yet, but we're working on it.

Laura Reiley:

Yeah, I'll take that. I'll take two. Well, I think that's about all the time we have. And Dr. Gordon Pennycook, thanks for joining us and for giving us all hope that a little reflection can go a long way towards making the world less stupid. If you want to learn more about Dr. Pennycook's work, you can go to psychology.cornell.edu and I am Laura Reiley, and this is Research Matters, where Cornell researchers are working to fix real world problems one good idea at a time.