Cornell Chronicle

Gordon Pennycook on how to improve a ‘prebunking’ technique

Cornell Chronicle Season 1 Episode 15

As social media platforms deployed psychological “inoculation” on a large scale, hoping to help people spot techniques common to misinformation, Gordon Pennycook, associate professor and Himan Brown Faculty Fellow in the Department of Psychology in the College of Arts and Sciences, had doubts about its effectiveness. He discusses new research identifying a way to strengthen inoculations, and why he began studying misinformation.
Read more about it.

James Dean  00:04

Welcome to the Cornell Chronicle podcast, where we speak with the people behind our latest headlines about how they came to make their discoveries and what their discoveries mean for the world. Today we're talking with Gordon Pennycook, associate professor and Himan Brown Faculty Fellow in the Department of Psychology in the College of Arts and Sciences. His research focuses on reasoning and decision making, broadly defined, including the study of misinformation and conspiracy theories. In a new paper published in Nature Human Behaviour, Pennycook and an international team of co-authors investigated the effectiveness of a popular new strategy for combating misinformation online known as psychological inoculation.

Gordon, this latest research paper isn't specifically about politics, but we're speaking now, a short time before the US presidential election. There's been growing concern globally about the potential for misinformation to undermine elections and democracy generally. What is your sense about the status of that threat today and our ability to withstand it? 

 

Gordon Pennycook  01:00

So I started doing work on misinformation back in 2016, and I think the threat is as bad as or worse than it was then, unfortunately. Now, social media companies have done various things to try to clamp down on misinformation. For example, Twitter – now X – uses Community Notes, which is essentially crowdsourced fact checking, and that has shown some effectiveness. Facebook has kind of clamped down on what it allows on its platform, but we still see a lot of the same problems with spreading misinformation. A lot of it comes from political elites, which is harder to have an impact on, especially for a social media company. So it's a major problem. We still need to continue developing approaches that are driven by underlying research on the psychology of misinformation, approaches that get to the core reasons why people are sharing and believing falsehoods.

 

James Dean  01:47

A fairly new strategy for combating misinformation on social media that you've said is wildly popular is known as inoculation. Can you describe what that is?

 

Gordon Pennycook  01:56

Right, so psychological inoculation refers to a set of interventions that try to get at the underlying techniques used by people who are spreading misinformation. They use this analogy of inoculation: you give people a weakened dose of the techniques that are used. Take emotional language, for example. When people are creating misinformation, they'll often rely on highly emotional language to grab people's attention and convince them to believe something. The idea with psychological inoculation is that you teach people to look out for this sort of tactic. What you're trying to do is give people resistance, or the ability to identify falsehoods before they come up. The problem this sort of intervention is addressing is that fact checking or debunking always happens after the fact. Some misinformation has already been spread, and you say, oh, this is actually false for these reasons, but it's always like a Band-Aid, because the misinformation is already out there, so you can only really mitigate the loss that has already occurred. These interventions are intended to get ahead of it and build up people's resistance before they see the misinformation, so they don't fall for it and they don't spread it.

 

James Dean  02:59

And so how has that been implemented up to this point?

 

Gordon Pennycook  03:03

So a lot of the original research on this used very elaborate interventions. They were games that you could play where you learn about different ways to manipulate people using misinformation, and then the researchers would investigate whether people are actually learning the things the games are teaching them. The problem with that is that it's very difficult to get someone to spend 20 minutes playing some game that you invented, and most of the people who are playing them probably don't need to play them. So what they did more recently is develop interventions based on quick videos – literally a minute-and-a-half video that's just giving you information like: watch out for emotional language. People are going to say something's disgusting if they really want you to pay attention to it, so you should be kind of skeptical about that.

 

James Dean  03:41

There was a lot of optimism that these strategies could have sort of an anti-viral effect, as the name suggests. They've been deployed on very large scales, or at least the hope is that they could be deployed at those scales and then have an even bigger effect. But you had some doubts about that. How come?

 

Gordon Pennycook  03:59

So let's be clear about one thing: misinformation is falsehoods. When you use some sort of technique to grab people's attention, like emotional language, that doesn't automatically make the claim false. In fact, some things are disgusting and true, and legitimate news content often uses emotional language too, sometimes because things are emotional, sometimes for editorial reasons. It doesn't necessarily mean that something is false. One of the issues with inoculation is that it doesn't give you the context of whether something is true or false; it just shows you the tactic. It's not focused on truth or falsity, it's focused on the technique. And what that means is, in the experiments that we ran, if you give people these inoculations, it doesn't actually improve their ability to distinguish between true and false news. All it really does is make them more sensitive to the thing they're being taught, which in this case is emotional language. Whether that is good or bad depends a lot on what people are seeing in the real world, and whether the emotional content they see is more likely to be true or false. And so it actually becomes a pretty big problem.

 

James Dean  04:56

But you ended up with a recommendation about one way to potentially make inoculation more effective, to shift its focus, I guess, toward the truth or falsehood expressed in those tactics.

 

Gordon Pennycook  05:09

We have other research showing that when people are sharing misinformation online, they often aren't really even thinking about whether it's true. People are not as focused on the truth as they should be, even though they do care about it. When they decide to share something, they're asking: is this important? Does it make me look good? Are people going to like it? And ‘is it true’ is something that gets pushed down the line of importance; they're not thinking about it that much. So we came up with interventions that just remind people about accuracy. These inoculations were not really devised to get people to focus on truth or falsity; they were focused on getting people to think about emotional language, for example. So we can combine them into a kind of hybrid intervention that includes both elements, and that seems to improve its effectiveness. It allows people both to learn about the emotional-language manipulation and to apply it to things that are true versus false.

 

James Dean  05:57

OK, so practically speaking, an accuracy prompt paired with an inoculation. What does that look like to the person receiving it?

 

Gordon Pennycook  06:04

Oh, it’s just a little bit of extra book-ending on the video. The initial inoculation video is about a minute and a half and explains the use of emotional language as a tactic. Then, at the start, we add reminders about the importance of being vigilant against misinformation, and that we need to do things to make sure we don't fall for falsehoods or end up believing things that aren't true. Then they get the inoculation. So we're essentially just putting it in the context of thinking about truth and falsity, and that, by itself, is enough to make it more effective.

 

James Dean  06:33

Do you have any sense that your findings are influencing the way social media and other platforms are implementing inoculations? And is it a given that, at this peak of the election season, we can expect to see those being deployed more broadly?

 

Gordon Pennycook  06:47

It's possible that it might influence that. But at the same time, because of changes in the political atmosphere, companies are becoming less and less willing to actually engage in these sorts of efforts, which is a problem, because they're going to be more necessary as we lead up to the election, while the political atmosphere around trying to intervene on these issues has become more tenuous. There are a lot of complaints; people have talked about these as forms of censorship. Of course, they aren't censorship at all, because all you're doing is running something like an educational campaign. You're just giving people tools to help them be better at distinguishing between what's true and false. They still get to decide for themselves what's true or false. But there are still a lot of concerns from high-level people in these companies that the optics are off, and so they seem to be moving away from actually implementing these interventions, which is a real problem.

 

James Dean  07:36

Thinking about partisanship: there are claims that the right or the left may be more prone to spreading and/or believing misinformation. I know you specifically looked, again, at the role of accuracy prompts and how those influenced sharing among different groups. What have you found in your work?

 

Gordon Pennycook  07:55

Yeah, so there have been a lot of studies trying to essentially describe the information environment, and basically all of them have shown that there's way more misinformation on the right than the left. The common rejoinder to this from people on the political right, including people like Jim Jordan, for example, is that that is only true based on a particular definition of misinformation – that it's people on the left who decide what's true, and therefore that's why there's more falsehood on the right. But we've done a recent study that really undermines this. What we showed is that even if you ask a politically balanced group of lay people to fact check or determine the quality of sources being shared by people on the left and the right, people on the right share more misinformation. So there's inherently no political bias in the measurement of what counts as misinformation. It's just objectively a bigger problem on the right than on the left, for a variety of cultural and historical reasons. So these interventions need to be targeted to some extent, and you can see how that becomes politically a difficult thing to navigate.

 

James Dean  08:56

What is it that makes someone more prone to believing in a conspiracy theory? What characteristic have you studied that makes someone more likely to go down the rabbit hole?

 

Gordon Pennycook  09:07

People have this tendency to rely on their gut feelings and intuitions. Everyone does it to some extent; some people do it more. And the more you fail to stop, reflect and really question whether something's true, the more you're going to fall for misinformation, conspiracy theories, all sorts of things. Another element that's important is overconfidence: thinking that you know things that you don't know. And I think this is maybe even exacerbated a little bit on social media, because you're exposed to lots of information that seems plausible to you. A lot of it may not be legitimate at all, but you feel like you know something from it. And when somebody comes in who actually has a stronger knowledge base than you, you think that person doesn't know what they're talking about, and you throw it away. That ultimately comes down to being overconfident in what you believe and not questioning whether the stuff you're seeing is legitimate, or whether the things you believe are legitimate.

 

James Dean  09:56

And so it's perhaps a bit discouraging to accept that there are going to be some limitations with the inoculation strategy. But, maybe more optimistically, you've found some potential benefits in what to me seemed like a very surprising domain: AI and models like ChatGPT, which I would have assumed could only make things worse. What is the story there?

 

Gordon Pennycook  10:18

One of the difficulties in improving the evidence base of people's beliefs is that you have to guess what they believe. If you want to debunk falsehoods or conspiracy theories, you have to make some sort of guess about what information will be most directly tied to what people believe and will convince them that they are wrong, or that they should believe something more accurate. What we discovered in this recent paper is that we can use AI to help us do this. In these experiments, we had people come in and specify a conspiracy that they believe in. We do that in a subtle way: we say, this is how we define a conspiracy – some people call them conspiracy theories – do you believe anything like this? People will describe something they believe, like that 9/11 was an inside job, or whatever the conspiracy is, and then they'll provide some reasons, and then they have an evidence-based dialog with an artificial intelligence. In this case it was GPT-4 Turbo, and we told the AI to try to dissuade them from believing in the conspiracy. The AI is actually really good at drawing upon a huge swath of counterarguments and counterevidence. What we found is that after these conversations, which last about eight minutes, the people who believed the conspiracies decreased their belief by about 20%. Fully a quarter of the people who had the conversation believed the conspiracy at the start and didn't believe it afterward. That's a surprisingly large impact. And two months later, people were still at the same levels as right after the experiment. They didn't go back to believing in the conspiracy; they continued to not believe it, even after two months, and maybe even longer than that – that's the longest we've looked. So it's really effective, and that's because the AI is really good at providing strong counterevidence that directly conflicts with the reasons people give for believing what they believe.

 

James Dean  12:03

One of the things about a couple of your recent papers that I found interesting is that you call them adversarial collaboration. So that sounds kind of exciting. What does it mean to researchers? 

 

Gordon Pennycook  12:13

Researchers are humans, and we have the same problems as everybody else, which is that it's hard to see past your own perspective. If you want to really know where you're going wrong and what the right answer is, you have to talk to people who disagree with you. Adversarial collaboration is a formalized version of this in science, where people who have different perspectives on some theory or some empirical result sit down and decide how to test those ideas in a way that makes sense for everybody and is compelling to everybody. Then you make predictions, you pre-register your analyses, and you see who's right – or who's more right, usually. It's a really effective way to get closer to what's actually going on, rather than just writing papers, talking past each other and ignoring each other, which is too common, unfortunately.

 

James Dean  13:02

Do you think it somewhat speaks to the novelty of this type of research, and the challenge you in particular face of trying to keep up with technology that's changing so rapidly, that there are these sorts of disagreements and a need to collaborate in these ways to figure this stuff out? Or is it just something across science that we need to do more of?

 

Gordon Pennycook  13:23

That's a good question. I mean, adversarial collaborations are still not very common, and there are some groups that are trying to push these ideas. I think the misinformation space in particular is pretty contested, because there are a lot of people doing work on it, and they come from often very different perspectives. We have collaborators in political science, communications and journalism, and we think about things a bit differently. So I think there's more room for disagreement, which is really what you want for a strong science: you want people to disagree, to try to get down to it. Of course, there are better and worse ways to disagree, and using these kinds of formalized approaches is one of the better ways to do it, for sure.

 

James Dean  14:01

So if you don't mind, can you tell us a little bit more about the perspective that you personally are bringing to this work? How did you get started in it? Why have you pursued this field and this line of study?

 

Gordon Pennycook  14:12

Well, there are a couple of different ways to answer that. The more proximal, literal way is that my colleagues and I, who are primarily cognitive psychologists, think about it in the context of how the mind works. My initial research, and what I did my PhD on, was the nature of human reasoning: how do we make mistakes when we're thinking about things? Why do we rely on our intuitions, and what is the nature of our beliefs? Other researchers might focus on more structural things, like the nature of social media platforms, and these things all interact. But I think it takes looking at things from different perspectives to get to the truth.

 

James Dean  14:49

You grew up in Saskatchewan. It's not a place that one might immediately think of as a hotbed of big tech or, you know, of concerns about it. But here you are, with a voice, trying to improve that world. How did that happen?

 

Gordon Pennycook  15:04

I mean, if you go north of my town, I think there might be 20,000 people who live north of it, or something like that, I'm not sure. And I think the closest Walmart was about an hour and a half away, and that's still the case. That tells you how far away it is from everything else. So I grew up in a small town, and I decided to not believe in God anymore, for various reasons, and as far as I could tell, nobody else had the same view as me about this. It's a small town, and not a lot of people talked about it at the time. So that was a weird experience, and it got me really interested in trying to understand why people differ in what they believe. Like most psychologists, I started doing research trying to understand what makes me weird, you know. That sent me down the path of being really fascinated by the things that people believe and why they believe them. I've done some work on religious belief, but it became much bigger than that. I discovered there's such an interesting diversity in thought that it doesn't matter where you go, there are always these really important differences. And a lot of the time it comes down not just to what people believe, but to how they think about things, how they approach problems.

 

James Dean  16:08

You said it was around 2016 that you began focusing on that area.

 

Gordon Pennycook  16:12

I started grad school in 2010, when I was at the University of Waterloo in Canada, and I started doing work on the nature of human reasoning, along with some work on religious beliefs and supernatural beliefs and things like that. In 2016 I finished my PhD and moved to Yale University – my first time in America – and the 2016 election happened, and fake news became this big growing concern. Whether it actually influenced the election is still an open question, but at the time it seemed like a gigantic problem; misinformation grew into a gigantic issue. And so I started applying the same things I was working on to this problem of misinformation, because it ultimately is the same underlying question: why do people believe what they believe? How are they being influenced by the information they're seeing in the world? That applies to religious beliefs the same way it applies to how people approach health, or make decisions about their bodies, or why they believe misinformation. That's what led me down that path.

 

James Dean  17:06

Does the work that you've done to date make you feel better about the future, or is there much more work to be done? 

 

Gordon Pennycook  17:14

There's more work to be done, but there are definitely things in what we've done that I think are much more optimistic than a lot of the talk within this context. For example, when people share misinformation, we often assume they're doing it on purpose, or that they're just trying to sow discord, or that they're partisan hacks or whatever. But our research shows that a lot of the time, people share misinformation just because they're not really thinking about whether it's true, and if they had stopped and thought about it, maybe they wouldn't have shared it. Trolling is actually not that common. Most people really care about the truth and want to share things that are true; they just sometimes make mistakes. The AI conspiracy paper that you mentioned is another case where I definitely had the belief that if someone is down the rabbit hole, you can't bring them back out of it. But it turns out that if you give people good enough arguments, a lot of them will change their mind. So there are actually quite a few positives that come out of this work.

 

James Dean  18:03

Should we sign off with an accuracy prompt?

 

Gordon Pennycook  18:07

Don’t forget to think about accuracy. There it is. I think the whole thing is an accuracy prompt.

 

James Dean  18:12

Very good. Gordon, thank you for sharing your insights and experience with the Cornell Chronicle. 

 

Gordon Pennycook  18:17

It's my pleasure. Thank you.