Research Matters

David Rand on how AI shapes our choices - Research Matters S2E1

Season 2 Episode 1

In this eye-opening episode of Research Matters, David Rand, professor in the Cornell Bowers College of Computing and Information Science, the Cornell SC Johnson College of Business, and the College of Arts and Sciences, reveals how AI actually sways what we think, even how we vote. Packed with insight, practical takeaways, and a peek into the future of AI in our daily lives, this episode is a must-listen for anyone curious about how technology shapes our choices.

David Rand:

To me, those effects, like one in 10 people changing their minds, seem huge. But there were several news articles whose headline was basically: study shows that AI is not very persuasive.

Laura Reiley:

Wow.

David Rand:

So I think it all depends on your prior, on the perspective that you bring to it.

Laura Reiley:

Hi, I'm Laura Reiley and this is Research Matters, a show about Cornell researchers who are tackling some of the world's toughest problems and finding solutions that make a real difference in our everyday lives. Each episode we'll sit down with a researcher who's not just studying the world, but changing it, turning data into discovery and ideas into impact. Today's episode dives into a question many of us are already worrying about: can artificial intelligence influence what we think, even who we vote for? We're joined today by David Rand, and you're going to have to take a pause while I tell you all of his affiliations. He is a professor of information science in Cornell Bowers, of marketing in the SC Johnson College of Business, and of psychology in Arts and Sciences. His new studies suggest that the answer to the question I just asked is a very troubling yes. In fact, he has research, just published, showing that simple conversations with AI chatbots can shift voter opinions in meaningful ways. We will talk through what that means for our democracy, for us as voters, and how we engage online. David, it's great to see you. Thanks for coming in.

David Rand:

Thanks so much. It's great to be here.

Laura Reiley:

OK, so these two studies that you just published, which I've seen everywhere: The New York Times did a huge thing, Will Oremus at The Washington Post did a piece. It's kind of been everywhere all of a sudden. I imagine the other faculty who are listening to this will want to know, how do you get a piece in Science and Nature on the same day? Just write a little script for that that we can all follow.

David Rand:

Yeah, so that was very fortuitous. We had these two separate projects that were very similar, both looking at how conversations with AI chatbots can change people's attitudes on politics, but looking at them in different ways and somewhat different contexts. The projects were proceeding in parallel, and each group knew about the other. You know, I told each group what was happening, and there was some concern that somebody was going to wind up getting scooped one way or the other. And just through good luck, they both wound up getting accepted, one in Nature and one in Science, within a couple of weeks of each other. So then I emailed the editors and was like, hey, there are these two papers that are synergistic. What do you think about doing a coordinated release where they come out at the same time?

Laura Reiley:

And everyone played nicely with each other? That is very unhuman.

David Rand:

They conferred with each other and they went with it.

Laura Reiley:

That's great. Well, all right. So can you break down what each of them said, and what was the synergy between the two?

David Rand:

Yeah. So I'll start with the paper that was in Nature. This paper looked at the question of whether a conversation with one of these large language model AI chatbots can change people's opinions in really high-stakes political contexts, in particular presidential elections. So we did an experiment with Harris voters and Trump voters two months before the 2024 election. We recruited a couple of thousand people, and we asked them their relative preference for Trump versus Harris, and if the election happened today, what would you do: vote Harris, vote Trump, vote for someone else, not vote. Then we asked what issue was most important to them and why; they wrote about that and how they saw the different candidates' positions on it. And then they had a three-round back-and-forth conversation with the chatbot. We told them they were talking to an AI, but we didn't tell them that we randomized the chatbot to either advocate for Trump or advocate for Harris. They have the conversation, and then afterwards, we're like, all right, now let's return to the questions from the beginning. How do you feel about Trump versus Harris? If the election is today, what would you do? So we did that for the US 2024 presidential election. Then we also did it right before the Canadian 2025 national election and right before the Polish 2025 presidential election, all of which were very high-salience contests. And there's a lot of concern in general society about AI manipulating people's opinions, but there's also a lot of work in political science showing it's basically impossible to change people's attitudes about things like presidential candidates.

Laura Reiley:

And you found what? So let's talk about the outcome for that particular study and then we'll go to the other one.

David Rand:

Yeah, so we found that in the US presidential election conversation, about one in 25 voters who said they were going to do something other than vote for the candidate the model was stumping for switched to saying they would vote for that candidate.

Laura Reiley:

That is a shockeroo. So were these low-information voters, or how did you select them? Were these people who were, you know, dyed-in-the-wool partisans? How did you pick them?

David Rand:

It's a good question. We didn't filter on political engagement. We just essentially recruit subjects from these online survey platforms, and we say we want half Democrats, half Republicans. But we did ask how important politics was to them and how politically engaged they were. And that was actually the biggest moderator of the effect, where, as you might imagine, we saw bigger treatment effects among less engaged voters, because they knew less coming in. But even among the highly engaged voters, we saw substantial treatment effects.

Laura Reiley:

That is remarkable. OK. And then go to Canada and Poland because I think those effects were even larger, right?

David Rand:

Right.

Laura Reiley:

So, like, 1 in 25: it's enough that if you could get everyone to talk to the bot, and if it had an effect on that side... it could have really changed the election.

David Rand:

Right. It would really have a big effect.

Laura Reiley:

Could have.

David Rand:

Both of those are big ifs. But it's not a huge effect from a percentage-point perspective. It's bigger than what you would expect from traditional TV ads and things like that: benchmarking it against similar studies done in similar contexts using normal ads, these effects are maybe three or four times bigger than what you would expect with a normal ad, but it's still not massive. But then in both the Canada election and the Polish election, the effects were like three times bigger than the US effects. So it was more like one in 10 people changed their vote, switching to say they would vote for the candidate that the model was advocating for.

Laura Reiley:

Do you have theories on why the effect was larger there? Is it that they're less inundated by political information over the course of a campaign? What do you attribute that to?

David Rand:

So it's a great question. It's one of the obvious questions that comes out of this research that we don't have a good answer for, because with only three countries, it's really hard to know. They differ on many different dimensions. But my speculation is exactly what you said, which is that I think the media environment is just way more saturated with presidential politics in the US compared to those countries. And also, both of the candidates in the US election had been around for a long time. Like, people have already heard a lot about Trump.

Laura Reiley:

You think?

David Rand:

And it's really hard to come up with some new thing that people haven't heard before.

Laura Reiley:

Sure.

David Rand:

Because one of the really important results from that paper, also, was that the primary way the AIs were persuading people was not that they knew some kind of psychological manipulation tricks or whatever; they were just providing lots of "factual" evidence and arguments. I put "facts" in quotes because some of the time, the facts the model stated were not actually accurate.

Laura Reiley:

Yes. But I wanna get into facts versus persuasion in a little bit, because I think that's a really fascinating piece of this: it seems like the more persuasive, the less factual. But, all right.

David Rand:

Yeah, we'll get to that. But I think here, if the idea is that the primary way it's persuading is by making lots of these factual arguments, then the fewer factual arguments you've already heard, the more room you have to be influenced by that.

Laura Reiley:

Sure, makes sense.

David Rand:

And so I think in particular, if it's saying things that you already heard, it's probably not gonna move you that much. If it's saying new things you haven't heard, it's more likely to move you. And so, you know, if you are a low engagement voter or if you're in a context where there's just like less inundation of information, it has a lot more potential to actually change your mind.

Laura Reiley:

So I think every super PAC right now that's, you know, looked at your research has said, huh, we've got to change our game plan. I mean, obviously in the New York City mayoral race, there was a very, very clear understanding for all of us that traditional TV ads were no longer cutting it. So, okay, let's move to the other study. What did the other study look at, and what were the results?

David Rand:

So the other study, which was in Science and led by Kobi Hackenburg and Ben Tappin, my colleagues in the UK, looked at trying to persuade UK residents on policy issues. We had a set of 700 different policies, and each person was randomly assigned to one policy. Same sort of setup, where they give their attitude beforehand, they have a back-and-forth conversation with the chatbot, and then they re-indicate their attitude. But the focus of that study was trying to understand what are all the different things someone might do to try to make the models more persuasive, and which of those levers matters the most. A lot of the lens we were bringing to that was a regulatory-type lens: if you are interested in trying to prevent models from being super persuasive, what are the things you should be worrying about? So we looked at four different things. One is how big and sophisticated and powerful the model is, because they keep coming out with these new, bigger, more powerful models: GPT-3, 3.5, 4, 4.5, 5. And what we found was that as the models get bigger, they get more persuasive. But not massively more persuasive.

Laura Reiley:

Is this because they are more human-like, or because they have a greater... they're scraping better stuff, or what? What do you attribute that to?

David Rand:

So across all of the different things we looked at, basically whatever made the model more persuasive also increased the number of factual claims the model made, and the differences in persuasiveness were very largely explained by the number of factual claims. So I think it's just that the bigger the model is, the better it is at marshaling evidence.

Laura Reiley:

"Evidence..."

David Rand:

OK, so we looked at how big the model was. It matters, but not amazingly: making the models two orders of magnitude bigger in terms of number of parameters would get you a few extra points of persuasiveness. So, you know, that's important in close elections, but it's not like you're in the domain of mind control, where the model can make anybody believe whatever it wants to.

Laura Reiley:

OK.

David Rand:

Then we looked at personalization, which is another thing people have talked a lot about. Like, what makes these models so great? It's that they can totally personalize to the person they're talking to. And somewhat surprisingly to me, we found that personalization had quite a small return. Allowing it to personalize across various different dimensions made it about one percentage point more persuasive, which is equivalent to basically giving two extra facts, which is not that much. And in the other paper, in the experiment in Poland, because we wanted to look at personalization there (this is some of the synergy between the papers), we did what we call a knockout experiment, where we had one condition where we told the model it was not allowed to personalize: just make general arguments that are appealing to everyone, essentially ignore what the person is saying to you, and make broadly appealing arguments. And that worked just as well as whatever it was doing at baseline. So these are two independent pieces of complementary evidence that personalization is maybe less of a thing than we might have thought. But then what we found did make a big difference was, first of all, the strategy we told the model to use. We randomized across a bunch of different persuasive strategies. Some of them were the kind of psychologically informed persuasion strategies, like deep canvassing, where you really try to appreciate the other person's position first, and stuff like that.

Laura Reiley:

But is that not personalization, understanding the other person's...?

David Rand:

These things intersect. With the personalization... well, I don't know; as I was saying, that didn't work, so this is consistent. There is some personalizing going on there. But there's that, or there's moral reframing, which is another popular one, where essentially you take the position you're advocating for and try to rephrase it in terms of moral values you know the other person holds. So, things like: when you're talking to conservatives about climate change, you should talk about it through the lens of purity, that kind of stuff. We did those kinds of things. And then we also tried just giving as many facts and as much information as possible: pack it as densely with information as you can. And what we found was that they were all persuasive, they all saw significant shifts in attitudes, but the psychological strategies were the worst performing, and just packing it with as much information as you can was the best performing.

Laura Reiley:

So all right. So the takeaway is: just inundate people with facts and they are putty in your hands. I guess one thing that I found interesting is that these effects endure. So even if someone after the fact... I know you've done a lot of kind of fact-checky type work; I think you were, like, fact-checking journalist of the year or something for-

David Rand:

Fact-checking researcher —

Laura Reiley:

Researcher of the year. So, you know, this is something that's been important to your research for a long time. But even if people have time to step back and parse what they've just heard, or fact-check everything they've just heard, there is an enduring effect. Like, what AI is able to do to persuade sticks.

David Rand:

Yeah, but I mean, it's because, you know, most of the facts and information it's providing are accurate. And you can be very misleading using only accurate information; you don't need to lie...

Laura Reiley:

Omission

David Rand:

Exactly, exactly

Laura Reiley:

Just like we all talked to our parents in high school, right? It was a lot of lying through omission. That's, you know...

David Rand:

Yeah, exactly. We do find enduring effects. In the Science paper, we followed up a month later, and about 50% of the original effect was there. In the Harris-Trump experiment in the Nature paper, we followed up five weeks later, which at that point was three weeks before the election. So it was, like, the maximal counter-treatment period, as we call it. That is, they're receiving tons of extra information all the time. We found that about a third of the effect was there: to the extent that someone showed an effect initially, about a third of that effect was still observable five weeks later. Which is, I think, pretty persistent, given that, like I said, it's the time when they're getting lots of other input. And, you know, it's not that surprising to me that these effects are durable, because they're having this back-and-forth conversation where they're paying attention and the machine is giving them tons of information. To the extent that you make up your mind based on information, it's a very intense treatment. And there's this classic literature in persuasion. The standard theoretical framework, called the elaboration likelihood model, goes back decades. The basic idea is that if people are really paying attention and engaged, you can get large, durable changes in attitudes by giving them relevant facts and information and product demonstrations and credible endorsements and stuff like that. But if they're not paying attention, then that doesn't work, and you have to do all these more psychological tricks.

Laura Reiley:

It doesn't really depend upon... I mean, I know your colleague and co-author, Gordon Pennycook, does some work on intuitive voters, or people who maybe are less fact-based. Does it matter how much information you start with as a consumer of this AI information?

David Rand:

Yeah. I mean, on the one hand, people who are more politically engaged show less of a treatment effect, presumably because they're coming in with more background information. We didn't measure Gord's favorite measure of analytical versus intuitive thinking in this study. I should say, Gord is also one of the co-PIs on the Nature paper. In a different study that Gord and I had in Science last year, on AIs debunking conspiracy theories, we did measure that, this sort of tendency for people to think analytically versus intuitively. And you see bigger treatment effects among people who are more inclined to be analytical thinkers, which kind of makes sense: if you really don't care about evidence and aren't paying attention, it's not going to work. But you have to be really on the low end of not paying attention to evidence for it to not work.

Laura Reiley:

All right, so I have a question about the newness of large language models versus the old tiredness of traditional TV advertising. Is there a sense in which we are more persuadable, or more susceptible to an LLM, right now because it represents this... unknown? We don't presuppose a lot of things about its agenda, whereas, you know, a TV ad is demonstrably partisan, right? Is that going to change over time as we develop new relationships with these technologies?

David Rand:

Yeah, it's a key question. We haven't looked at this directly in the context of political persuasion, but Gord and I did have a paper earlier this year, in the conspiracy debunking context, where we randomized whether we told people they were talking to an AI or told them they were talking to an expert. And we see that people in the expert condition are likely to think it's a human they're talking to. And it didn't matter: it was just as good at debunking the conspiracy when they thought they were talking to a human as when they thought they were talking to an AI. So it's not about AI deference, just being, OK, whatever the AI says, I should believe it. But I do think that this not realizing the AI has an agenda is a really important part of this. We just ran an experiment a few weeks ago as a follow-up. We were trying to change people's attitudes about some policy issue, and we randomized whether we told them beforehand, or sort of reminded them, that AIs may have agendas, that they're not necessarily objective, and they can be told to advocate for one side or another. And that made the models about half as persuasive.

Laura Reiley:

Okay, so it's suspicion of motive that may affect how persuasive some of these things are.

David Rand:

Totally, and I think what that implies is that a potential policy approach that could be helpful is transparency: not so much around the fact that it is an AI, but transparency around who is responsible for it and who told it what to do, and ideally, what specific instructions it got.

Laura Reiley:

All right, I wanna talk briefly about how what you've just found can be used for evil and then how it can be used for good.

David Rand:

All right.

Laura Reiley:

So how are the bad guys going to weaponize these two papers, the findings in these two papers?

David Rand:

I mean, my hope is that these papers are not gonna enable too much weaponization, in that I don't think it's that surprising to people that AI can change people's minds. And my sense is there already are campaigns experimenting with this kind of thing. But presumably...

Laura Reiley:

I think the magnitude is pretty disturbing.

David Rand:

Yeah, okay, fair enough.

Laura Reiley:

I mean, and in the Canadian and the Polish studies even more so. That's a pretty big shift.

David Rand:

Yeah, it's interesting, because to me those effects, like one in 10 people changing their minds, seem huge. But there are some other people I've talked to, who I think are not coming from a social science perspective, who are like, well, nine out of 10 people didn't change their minds. That's a small effect.

Laura Reiley:

You think about close elections.

David Rand:

I know. But there were several news articles whose headline was basically, study shows that AI is not very persuasive.

Laura Reiley:

Wow.

David Rand:

So I think it all depends on your prior, on the perspective that you bring to it. But anyway, I think the most extreme negative use case is people using the models to actively mislead: making false claims, making inaccurate claims, persuading people based on evidence that is just really not right. And then I guess the more general version is just the ability to persuade people with arguments that are not fully representative. But it's not that different from canvassing, which is a political... it's an AI canvasser.

Laura Reiley:

Except with canvassing, you kind of know that there's a perspective or agenda behind it...

David Rand:

That's right, that's right. And so I feel like ideally these models would have a "paid for by", you know, the candidate, attached to them. But with all of the campaigns adopting the chatbot thing, the main issue is how do you get people to talk to the chatbot? Like, great, if they talk to the chatbot, it'll change their mind. Yeah.

Laura Reiley:

But what's the forum where we all, you know...

David Rand:

No Harris voter like wants to go talk to a Trump chatbot or vice versa.

Laura Reiley:

But it's texting you as a canvasser.

David Rand:

Right. Or calling you, because voice-to-voice is also pretty easy. So that's, you know, that's one route. And then I think there's going to be a lot of innovation in political campaigning around how you get people to talk to chatbots, basically. But a whole other mode of potential influence is that there are some chatbots that lots of people go to all the time to ask questions about all kinds of things, like ChatGPT. And we don't know what ChatGPT's prompting is, because it's proprietary. But it seems like, in general, the goal of all these frontier models right now is to be accurate and produce responses that the person will like. And there are some negative consequences, like sycophancy and whatever. But in general, the information is quite accurate. And I think they have sort of an explicit goal to make the models as accurate as possible so they're useful for people. But it means that if Sam Altman decided there was some issue that he wanted people to feel a particular way about, you could just go in there and tune it and be like, well, when people ask about this, tell them that thing. And you saw that with Musk.

Laura Reiley:

Sure.

David Rand:

Where like he didn't like that Grok was always telling him he was wrong when he said things that were wrong, so he put his thumb on the scale.

Laura Reiley:

And look what it's doing. The anti-Wikipedia, for sure, for sure.

David Rand:

Yeah, exactly.

Laura Reiley:

Yeah. So, well, that's fascinating. OK. So now, how can the good guys win by tweaking this for good?

David Rand:

Right. So the flip side is that these models are really good at explaining things and have access to a huge amount of information. So you can take models and prompt them saying: your goal is actually to be accurate and just correct misconceptions that people have. And again, if you can get people to talk to the models, they can be quite effective at explaining the truth to people.

Laura Reiley:

So I mean, I know you've done stuff on COVID, and there's obviously a really ripe arena there for anti-vaccine information. So is that an area where you think there will be progress made in terms of disabusing people of wrong information?

David Rand:

Totally. So we started with conspiracy theories. We did this paper, but now we also have debunkbot.com, so anybody can go try it out. I did it with my dad at dinner a couple of nights ago, 'cause he watched some documentary about aliens that he found very compelling. And I was like, let's talk through this. And it was great. It just totally broke the whole thing down, and it knew about the movie and what it all was. It's very effective. And so we're doing experiments now where we're taking social media bots and hooking them up to debunkbot, so they automatically go around and find people who are posting conspiratorial content on social media platforms and respond with debunking. We've also looked at this in the context of vaccines, like you're saying. We've got a few different projects on trying to address vaccine hesitancy and concerns around vaccination. We have a project on climate, both on trying to address climate skepticism, but also, I think potentially more impactfully, talking to people who believe that climate change is happening but feel overwhelmed, like there's nothing they can do about it, and trying to make concrete suggestions about things they could do. And actually, when we were testing that, I was talking to it, and it's like, you shouldn't eat red meat. And I was like, okay, I don't eat red meat, but my kids do. And it's like, you should talk to your kids about it. And I was like, yeah, okay. So I talked to my 10-year-old.

Laura Reiley:

How about you guys talk to them? You talk to my kids, right? Get the bot to talk to them. It's more persuasive than you are probably.

David Rand:

Well, I guess that's true. Although I think I have some credibility still. My kids are young.

Laura Reiley:

That'll disappear.

David Rand:

But so I told my nine-year-old, I was like, GPT said you shouldn't eat red meat. And he was like, why?

Laura Reiley:

Or eat cheese, probably.

David Rand:

We got a 2,000-word summary about why, like that beef was particularly bad. And he's like, all right, I'm not going to eat beef. And now he's, like, six months into not eating beef.

Laura Reiley:

That's amazing.

David Rand:

Yeah.

Laura Reiley:

All right, so we've got evidence right there.

David Rand:

Yeah.

Laura Reiley:

All right, we could go on and on about this, I'm sure. But we have no more time. I would love to know, for readers or for listeners who are interested in this topic, what's your best pick in terms of a book recommendation right now? Or what should they be reading? What's a good resource? It could even be a periodical.

David Rand:

Yeah, I mean, one of the best sources of information about all the new and interesting and exciting or concerning things happening in AI is Ethan Mollick, a Wharton professor who has a Substack called One Useful Thing that is really awesome. I was teaching an AI and society class, and I basically just went through all of Ethan's posts, like, all right, here's all the good stuff.

Laura Reiley:

Great. That's wonderful.

David Rand:

And Noahpinion is also a really good one.

Laura Reiley:

Okay, perfect. Thank you so much. All right. This has been wonderful, David. I've enjoyed every second, and I'm sure we could have heard, you know, half an hour more about these papers alone. You've been listening to Research Matters from Cornell University. If you want to learn more about Professor Rand's work, check out the Cornell Bowers College or the SC Johnson College of Business websites. I'm Laura Reiley. If you liked this episode, subscribe wherever you get your podcasts and share it with friends who love facts as much as you do. Thanks for listening. And remember: when research meets purpose, we move closer to a healthier, fairer world. Thanks a lot.