Allan Boyd Talks to Experts About Things

Dr. Hammond Pearce – How AI Bots can swing elections

Allan Boyd - Journalist RTRFM Perth Western Australia


Hammond Pearce - Senior Lecturer, School of Computer Science & Engineering, UNSW Sydney 

INTRO: In the immediate wake of the Bondi terrorist event, a flood of misinformation rolled across our social media feeds, much of it false and most of it toxic.

Indeed, with all the online confusion and fake media in our faces every day, even real content is doubted – as false accounts, chatbots and bot-networks can be used to rapidly spread false information with ease.

So, how do we make sense of it all? And can chatbots sway an election? RTRFM’s Allan Boyd caught up with cyber expert Hammond Pearce to talk about AI botnets and the world’s first social media wargame…

Conversation Article: World-first social media wargame reveals how AI bots can swing elections - January 16, 2026 



SPEAKER_00

Generative AI tools like ChatGPT are supercharging the creation and rapid spread of misinformation and fake news. Free, easy-to-use image, voice, and video generators now make it relatively simple to produce authentic-looking content while making it harder to tell what is actually real. Indeed, around half the content that we consume online is now made and spread by AI. These tools can also be used to create fake accounts that generate further misinformation via social media posts, to deceive and confuse, usually for political or financial reasons. But how effective are these bot networks, and how hard are they to set up? And how do we deal with this onslaught of false content? To help unpack some of this, I'm joined by cyber expert Dr. Hammond Pearce from the School of Computer Science and Engineering at the University of New South Wales. Welcome, Hammond.

SPEAKER_01

Thanks very much. It's so good to be here.

SPEAKER_00

So just want to break it down a little bit. First of all, what is generative AI?

SPEAKER_01

Yeah, of course. So generative AI, you might have been exposed to it without necessarily knowing what it was, but it's those services like ChatGPT, right? Where we can go online and talk to a computer, functionally, and ask it to do something for us, and it will then do its best to generate a response. How it actually works is that the technology which powers this has been trained over every copy of every piece of media that the people who built the AI were able to get hold of. So something like ChatGPT, which you talk to, has been trained by consuming every book ever written by humankind, every website that ever existed, every YouTube video, every picture on Google Images, anything that's public on LinkedIn and Facebook and Twitter. All of that content gets distilled to produce this machine that can generate things that look like that content, right? That can come up with very plausible outputs, which are in many ways the average of everything it's seen going into it.

SPEAKER_00

When you're asking questions of ChatGPT, it asks you questions back as well. There's this tit for tat going on.

SPEAKER_01

Yes, and that makes sense if you think about it, right? So prior to these sorts of AI, I might have gone onto a website, onto Twitter or Reddit or something like that, and gone, hey, Reddit, I really want to buy a barbecue. Who's got a really nice barbecue? What should I buy? And other people would respond with questions to me, right? They might go, well, what do you like cooking? Do you like smoked food? Do you like grilled food? Because that will affect the answer. And in that same sort of back-and-forth way, the AI has learned to do that too, right? So it can also come up with very plausible outputs where it's asking follow-up questions of you, and you can have sort of interesting exchanges with it. But the point is it's not real, right? It is just an imitation of everything it's ever seen. And so, you know, maybe you think it's quite deep or quite meaningful, but it really is an imitation of everything it's seen before.

SPEAKER_00

It is amazing how real it does feel when you're having a chat with it, yeah.

SPEAKER_01

I mean, the technology is really, really cool. It's definitely one of the most impressive things that's been invented in the last few years.

SPEAKER_00

It is, and it's running forward at a rate of knots too. What about a bot? Tell us what a bot is. We hear about them all the time.

SPEAKER_01

We do, we do. So bots have a long history, right? They didn't come out with generative AI; they existed for a long time before that. A bot is simply an automated program running on some computer somewhere, which is trying to achieve some goal, right? So, you know, like a robot in many ways. But in the context of, say, a social media platform like Facebook or Twitter, a bot is something that is not real. It's not an authentic user. Instead, it's a program that's trying to mimic a user and is running through some simple rules. Oh, someone posted they wanted to buy a barbecue, to go back to that example.

SPEAKER_00

Yes.

SPEAKER_01

Anytime someone posts they want to buy a barbecue, I'm gonna recommend, you know, this brand. I'm gonna recommend the best barbecue. Doesn't matter what they've said. If I just see the word barbecue, I'm gonna recommend, you know, bestlegitimatebarbecue.com. Now, the problem with that sort of content is that, A, we might not want it, and B, it's not authentic, right? It's just some program that's running somewhere. And it might not be true. But in the past, these bots were quite primitive. They were very basic, they were doing keyword matching. You could very easily tell that they weren't real. There are some very funny examples from back then where you'd ask a follow-up question of these bots, you'd say, oh, can you tell me the colour of the sky or something? And they'd just tell you something else about barbecues, right? So they couldn't have a conversation. But with generative AI now, with these AIs that can come up with plausible conversations, it's a whole sea change, because now you have a bot which is running a program to try and persuade you to go to best legitimate barbecues or whatever, but that same bot is also equipped with this engine that can have conversations. And so now you can have these really scary, very plausible conversations with someone, and you don't realize they're not real, because it's so realistic. And at the same time, that bot is still doing that same thing, trying to get you to engage with whatever product or service, or not even a product or service, but maybe a belief, a political belief; it's trying to persuade you of some viewpoint.

SPEAKER_00

Yeah, I'm gonna get to that in a second, about your social media wargame. That sounds great. So, what about a bot network then? We've defined what generative AI is and what a bot is, but what about a bot network?

SPEAKER_01

Yep, so a bot network, it's exactly what it sounds like: it's when you have more than one of them. Of course. So yeah, we could have one bot and that's fine, but quite often we want to make it seem like there's consensus, or a crowd, right? We want to make it seem like we're not just one person on social media telling you to go to best legitimate barbecues. We want to make it seem like every person in all of Australia wants you to go to best legitimate barbecues. So we're gonna have not one bot recommending these things, we're gonna have hundreds, maybe even thousands. And all of those together, acting in unison, and we can make them agree with one another. So, you know, you go, what barbecue do I want? One of the bots says, oh, I've heard that best legitimate barbecues is the best brand. And then another bot simply says, yes, I agree with that, I bought one of those and it was fantastic. And then another bot says, oh yeah, I used one of those at my friend's get-together, right? And none of this is real, it's all completely fake. But you see, it becomes very persuasive, because you're not just seeing one story, you're seeing all of these interlinked fake people interacting with one another, talking to one another, and you go, wow, it's like I'm joining a busy room where everyone's really passionate about this stuff. So yeah, it can really create this very fake feeling online, where you go online, you see this sort of thing, and you go, oh, is anything that I'm seeing actually real?
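[Editor's note: the "fake consensus" pattern described here can be sketched as a handful of scripted personas piling onto the same post. All persona handles and reply lines below are invented for illustration.]

```python
# Hypothetical sketch of a tiny bot network: several fake personas all reply
# in agreement to the same trigger, so one opinion looks like a crowd.
PERSONAS = ["@grill_dad_82", "@perth_foodie", "@weekend_cook"]
AGREEMENT_LINES = [
    "I've heard Best Legitimate Barbecues is the best brand.",
    "Yes, I agree with that. I bought one and it was fantastic.",
    "Used one at a friend's get-together, can recommend.",
]

def fake_consensus(post: str) -> list[str]:
    """Every persona in the network replies in support, simulating a crowd."""
    if "barbecue" not in post.lower():
        return []
    return [f"{name}: {line}" for name, line in zip(PERSONAS, AGREEMENT_LINES)]

for reply in fake_consensus("What barbecue do I want?"):
    print(reply)
```

Scale the persona list from three to thousands and you get the "busy room" effect Dr. Pearce describes: many apparently independent voices, one script.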

SPEAKER_00

Oh, it's getting like that, isn't it? In your article in The Conversation, you mentioned the term "liar's dividend", where even real content is approached with some kind of doubt. So can you explain that liar's dividend?

SPEAKER_01

Yeah, so I brought up the liar's dividend because a very reasonable question here is to say, okay, well, it's all very well to get someone to buy a barbecue, but a lot of the content that we're really worried about has nothing to do with barbecues, right? That's just the example we've been reaching for. But think about what we saw recently in very high-profile fake content online. After the really horrible events at Bondi, we saw this sort of fake content. And so you go, well, hang on a second, why is anyone motivated to post that? Why would you get your bot network to do that? And here the point is not necessarily to persuade anyone of anything in particular. Rather, you muddy the waters so much that people now just start to think, well, anytime anything happens on the internet, is this actually real? Can I actually believe it? And the benefit to the liar is that they might not actually ever want you to trust anything you see on the internet, because sometimes stuff you see on the internet might simultaneously be true but also be critical of them. So if you can persuade everyone to just never believe anything, and then still seem trustworthy yourself, because you have control of some other media, or you're a high-profile person or something like that, then you can get something from this. Because anytime anyone disagrees with you, you just go, well, that's fake, you're a bot, you're a shill, you're not real. And you can totally shut down those sorts of conversations online. That sort of dismissal is the dividend, that's the goal. They get something out of it. Even though there's no first-order benefit to persuading anyone that this video is real, we are all worse off, because now we know that people can put fake audio and video out there. That's right.

SPEAKER_00

And you can't unsee some of that kind of stuff either. Once you've seen it, you've seen it, and it changes your mind.

SPEAKER_01

Yes, and the other thing too is that even when you know content is fake, it can still play into your own biases. You can see all of these horrible fake things online. One of the examples here, just to take a different pitch, is all of the fake, airbrushed people we see who are super beautiful, super good looking, and you go, man, I don't look that good, this is terrible for my mental health. But you know that those people aren't real, right? We know at this point, if there's a photo on the internet, 99 times out of a hundred it's been improved in some way by filters or lighting. Yeah, sure, yeah. You know, full-on Photoshop or whatever. But even though we know that, we still get the same feeling when we look at these pictures and go, man, I wish I looked like that, even though they're not really that beautiful; no one is. And it's that same sort of thing when it comes to political content too. You might know that it's fake, but it still actually has an impact on you, your mood, and your feelings.

SPEAKER_00

You're listening to RTRFM 92.1. My name's Allan Boyd, and I am chatting with Dr. Hammond Pearce, cybersecurity expert, about how AI bots can swing elections. In terms of political stuff, you've helped set up the world's first social media wargame, where participants build AI bots to influence a fictional election using tactics that mirror manipulation of real social media. Can you tell us a bit about that?

SPEAKER_01

Yes, absolutely. So, bearing in mind all the things we've talked about with bots, we had this question, right? Firstly, how hard is it for anyone to set these bots up? Is it actually a difficult thing? Do you need specialist skills or resources? And the second question is, once you have actually set them up, what can you use them for? How dangerous are they? So we built a simulated social media platform. We didn't want to do this in the real world, for obvious reasons. Obviously. So we created this totally fake social media platform called Legit Social, which we then filled with what we called simulated citizens. These were actually our own bots, but they were not given any explicit instruction other than to just behave in a realistic way, and we gave each bot a personality that we programmed for them. We had about 4,000 of those bots, so quite a few. And then we also said, in this simulated social media platform, every one of these bots is actually from a fictional country, and in that fictional country we're having an election between two fictional people. So that's the setting for the game: 4,000 voters who are gonna vote for one of two candidates. When we set the game up, we made it so there was an even 50-50 split between voters who supported one party and voters who supported the other party. Then we said, okay, now here's the game. We are gonna get a whole load of students from across Australia. They might not be computer science students, they can be any sort of student, but we want them to be university students. And what we want them to do is try and build AI bots that will interact with our platform and try and persuade our simulated people to vote one way or the other. And we saw just the most incredible entries, right? We had teams that went above and beyond.
And what was really interesting is that none of these teams had necessarily done this before, and none of these teams had the resources of a government or anything like that. They were all just using consumer-grade, off-the-shelf stuff like ChatGPT. And what they were able to do was produce these fake people, these fake bots, that were able to post relentlessly, and I use that word deliberately, because they would just flood the platform with content. Not just the same message over and over again, but actual content saying, oh, this political candidate is absolutely rubbish, their policies are gonna lead the nation to destruction, if you vote for them you're a communist, whatever. All of that sort of stuff, which we see in the real world. Yeah, we do. A lot of the competitors, we did a survey, a lot of the competitors were saying, as I was playing this game, I was actually really taken aback by just how real the platform looked. In other words, the competitors were hugely successful in building the bots to produce the content. And in the case of our simulated citizens, they were able to persuade enough of them that the actual election outcome changed. Well, it was really interesting to see the kinds of techniques that the players ended up using. We had one team that actually paid for servers in another country, in Singapore, and they built a bot network on those servers and linked it with their phones so they could monitor how well their bots were doing and how all of the accounts were going. We had collusion rings where different teams would boost the content of other teams to get criminal-level kudos amongst themselves. It was really, really cool and also quite frightening to see, because these are just students, these are not experts. Normal people who have been given this task and have just been able to really, really excel in the rounds of the game.
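[Editor's note: the wargame setup can be modelled very crudely in code. The 4,000 simulated citizens and the 50-50 starting split come from the interview; the switch probability, number of waves, and everything else below are invented purely for illustration, and this sketch has nothing to do with how Legit Social actually worked.]

```python
import random

# Toy model: 4,000 simulated citizens start evenly split between two
# candidates; repeated waves of one-sided bot content nudge some to switch.
random.seed(0)  # fixed seed so the sketch is reproducible
N_CITIZENS = 4_000
citizens = ["A"] * (N_CITIZENS // 2) + ["B"] * (N_CITIZENS // 2)  # even 50-50 split

SWITCH_PROB = 0.02  # assumed chance a citizen flips per wave of hostile content

def run_influence_campaign(citizens, target="A", waves=10):
    """Bots flood the feed attacking the rival candidate, nudging voters toward `target`."""
    for _ in range(waves):
        citizens = [
            target if vote != target and random.random() < SWITCH_PROB else vote
            for vote in citizens
        ]
    return citizens

result = run_influence_campaign(citizens)
print("Votes for A:", result.count("A"), "| Votes for B:", result.count("B"))
```

Even with a tiny per-wave flip rate, sustained flooding moves a dead-even electorate well off 50-50, which is the qualitative result the wargame demonstrated: the election outcome changed.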

SPEAKER_00

Yeah, but in your article you say that some of the students were saying things like, we needed to get a bit more toxic to get engagement, and then eventually the platform became a closed loop where bots talk to bots to trigger responses. Yes. Yeah.

SPEAKER_01

So one of the things we had, right, is that our bots (not the competitor bots, but the 4,000 that we controlled), because we were trying to make them behave like humans, had quite limited posting. If you think about how much you might post on social media, maybe it's one or two times a day. Same with me, right? It's not a huge amount of content. But the player bots were not limited like that; they were posting dozens of times every minute, sort of thing, right? Because what they're trying to do is flood the zone. They're trying to create this vision of consensus. And so what was happening is that all of the highly visible content on the platform, basically all of it, was generated by player bots, who were creating an illusion of consensus online that one person was bad or the other person was good. And so it was really interesting to see. Obviously it was always going to be bots talking to bots on our platform, but what we really saw was player bots mostly talking to other player bots, having these online arguments where one team supported one candidate and another team supported the other. And our simulated people really didn't get to say much at all, because they were just completely flooded out. So that was really interesting to see.
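[Editor's note: the posting-rate asymmetry behind "flooding the zone" is simple arithmetic. The one-to-two-posts-a-day human rate and "dozens of times every minute" come from the interview; the bot-network size and the exact per-minute figure are assumptions for the sketch.]

```python
# Back-of-envelope: what fraction of all posts comes from the bots?
HUMAN_POSTS_PER_DAY = 2      # a realistic citizen, per the interview
BOT_POSTS_PER_MINUTE = 24    # "dozens of times every minute" per bot (assumed value)
N_CITIZENS = 4_000           # simulated citizens in the wargame
N_PLAYER_BOTS = 50           # assumed size of one team's bot network

human_daily = N_CITIZENS * HUMAN_POSTS_PER_DAY            # 8,000 posts/day
bot_daily = N_PLAYER_BOTS * BOT_POSTS_PER_MINUTE * 60 * 24  # 1,728,000 posts/day

bot_share = bot_daily / (bot_daily + human_daily)
print(f"Bot share of all posts: {bot_share:.1%}")  # well over 99%
```

Under these assumptions, even a modest bot network outnumbers the entire human population's output by more than 200 to 1, which is why the visible feed became almost entirely bot-generated.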

SPEAKER_00

It's quite amazing, the results there. And I guess we're running out of time here, but you say also that we need some digital literacy; we really need to raise awareness of this kind of misinformation that is online. How can we recognize that we're being exposed to this stuff?

SPEAKER_01

Yes, so that's a really great question. How do we combat the kinds of misinformation, and the kind of susceptibility we might have to this fake content online? Step one is letting people know that it's there. And step two is letting people know why it's there, right? We know people post fake content. We know they're doing it to persuade you to do something, whether that be buy something, be scammed, or vote for someone, vote for someone that might not actually be a good person for you to vote for. They're trying to get something out of you, even if that is just the liar's dividend, which is just to make you feel like you don't want to be online anymore. Now, how do we respond to that? By telling people, hey, these sorts of things exist, and even though they look realistic and plausible, because you know they're there, you might be able to change your own behaviors. You might go, okay, I'm not gonna trust this content from this bot, because when I look at their account, all they ever talk about is how bad this thing is, or all they ever talk about is how good legitimate barbecues are, right? Now, if we can say to people, okay, approach the internet and social media with some skepticism, and have a think about the sort of content that you're consuming, then hopefully, when they see fake content or these sorts of artificial, really dominant narratives, they can go, well, maybe there's actually another side to the story that's just being hidden on this platform by these bots. Yes, yeah, definitely. So, to take the example back to Bondi, for instance: we saw, in the comments on some of the fake content online, people saying, hey, this is not right, this is fake, this isn't real.
Although you still had some people insisting it was real, and acting to boost that content, you did have people calling out, hey, this is not real, you need to be aware that this kind of stuff can happen. And that does provide something. Okay, it does make it more confusing, because you have to decide which side you believe, but at least you know, okay, something funny is going on here; I'll have to wait until we get clarity from a trusted source. So I think in many ways we may also have to rely more on traditional media again, to say, okay, this might need a professional to untangle: what is the actual baseline story here? And that's just something that people need to reflect on before they get too energized about what they're seeing online.