Inside Geneva

Is AI a risk to democracy?

March 19, 2024 SWI swissinfo.ch

In 2024, four billion of us can vote in elections. Can democracy survive artificial intelligence (AI)? Can the UN, or national governments, ensure the votes are fair? 

“Propaganda has always been there since the Romans. Manipulation has always been there, or plain lies by not very ethical politicians have always been there. The problem now is that with the power of these technologies, the capacity for harm can be massive,” says Gabriela Ramos, Assistant Director-General for Social & Human Sciences & AI Ethics at UNESCO.

Analyst Daniel Warner continues: “I’m worried about who’s going to win. But I’m also worried about whether my vote will count, and I’m worried about all kinds of disinformation that we see out there now. More than I’ve ever seen before.” 

Are deep fakes the biggest danger? Or just not knowing what to believe? 

“I think the problem is not going to be the content created, the problem is going to be the liar’s dividend. The fact that everything can be denied, and that anything can be questioned, and that people will not trust anything,” says Alberto Fernandez Gibaja, Head of Digitalisation and Democracy at the International Institute for Democracy and Electoral Assistance (International IDEA). 

Laws to regulate AI are lagging behind the technology. So how can voters protect themselves? 

Host: Imogen Foulkes
Production assistant: Claire-Marie Germain
Distribution: Sara Pasino
Marketing: Xin Zhang

Please listen and subscribe to our science podcast -- the Swiss Connection. 

Get in touch!

Thank you for listening! If you like what we do, please leave a review or subscribe to our newsletter.

Speaker 1:

This is Inside Geneva. I'm your host, Imogen Foulkes, and this is a production from SwissInfo, the international public media company of Switzerland.

Speaker 2:

In today's program: Earlier this week, as the 2024 election campaigns picked up steam, Meta announced it would start labeling AI-generated images to help users better judge what they're seeing.

Speaker 3:

I'm worried about who's going to win, but I'm also worried about whether my vote will count, and I'm worried about all kinds of disinformation that we see out there now more than I've ever seen before.

Speaker 5:

I'm going to show you two videos. One is real, one is altered by AI. Can you spot the deep fake? The problem is not a particular false statement or semi-false statement. The problem is that once you make people question people and institutions and processes they can't trust, then there's no way back. The British government must do more about deep fakes, which he believes are a clear and present danger to UK democracy.

Speaker 1:

If they see something that's released to them and then, uh-uh, this has been manipulated, they say: we're not using it. It's been manipulated. The real picture did not look like this. The Republican National Committee imagines a dystopian future if President Biden is re-elected, but all isn't as it seems: all the images in that ad were actually created using AI, artificial intelligence.

Speaker 2:

Oh my goodness, wow. There is also a need for the government to step up their information campaigns. It's super important that they create the awareness for people to think twice before believing what they see.

Speaker 1:

Hello and welcome again to Inside Geneva. We've got a very interesting program for you today, particularly for those four billion of you who've got the right to vote in an election this year. I'm joined by our analyst, Daniel Warner, and the first thing I'm going to do, Danny, is give you a little test. Who is this?

Speaker 4:

You know the value of voting Democratic when our votes count. It's important that you save your vote for the November election. We'll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.

Speaker 1:

Any idea? Does it sound familiar?

Speaker 3:

It sounds familiar. It's not a New Yorker like me, that's true, and it's a Democrat, so it's not Donald Trump. Let me guess that it's Joe Biden. But is it Joe Biden? I can't see him, so I don't know. And even if I saw him, I would have questions.

Speaker 1:

Yeah. So, hint hint, some of you may already know: it sounds like Joe Biden, but in fact that is a deep fake of somebody purporting to be Joe Biden, telling Democrats not to vote. The clue's not in the way he sounds, but in what he's saying, which might tell you that's not actually Joe Biden. That is the nature of our program today. It's 2024, a massive election year, and probably the first where we really will see the influence of artificial intelligence: deep fakes, manipulation of voters. Danny, you're going to be voting this year, I guess. You're American, I'm British. I'm going to be voting. Are you worried?

Speaker 3:

Well, I'm worried about who's going to win, but I'm also worried about whether my vote will count, and I'm worried about all kinds of disinformation that we see out there now more than I've ever seen before.

Speaker 1:

That's the thing, it does seem to be around. We've seen a few examples, and we're going to discuss some of them. I've been talking to two different people, one from the United Nations, another from an NGO, who are also pretty concerned about the influence AI, artificial intelligence, could have on this big year for democracy, and whether we're actually prepared to recognise it, to cope with it; we're going to hear from them over the course of this podcast. Now, the first of them is Gabriela Ramos of UNESCO. She is in charge of a department looking at new technology and how it affects our rights: basically, how it affects our rights as voters, as citizens of democratic countries. So my first question to her was: Gabriela, are you worried?

Speaker 2:

Yes, yes, we are, because there is a conjunction of probably two trends that we have not seen in previous periods of democratic processes. One is the scaling up of the capacities of artificial intelligence with generative AI, which has exponentially increased the tasks and the work that these technologies can deliver for people, for good and for bad. And the other part is the fact that a large number of countries are going to go into electoral processes. That concerns almost half of the world population. So yes, this is something that we should be very cautious about, and that has been the object of many discussions, in terms of how much the new technologies can help us improve the quality of the democratic dialogue and democratic processes, or will they just spoil it all? So I think this is the challenge that we are confronting.

Speaker 1:

It's interesting. She says it could improve the democratic process. Do you share that hint of optimism?

Speaker 3:

The ideal, Imogen, is always a free and fair election. But we know in history, in democratic countries, that has not always been true. There have always been problems. But when she says 'scaling up capacity', it seems to me it cuts both ways: positively, to get more people voting, more information; but on the other hand, what we're talking about is the negative things that can happen. And when she talks about scaling up capacity, I think we're mostly worried about the negative.

Speaker 1:

I think I am, yeah. I mean, maybe I'm a victim of sensational headlines, but I'm not seeing how this stuff can be used positively. I mean, the hint's in the name: artificial intelligence. Maybe I'm making a bit too much of it, but it does seem to me: we've got this stuff here, we're not quite sure what it is and what it can do, and we're all going, four billion of us, to vote. One thing that I wanted to know was: what should we be looking for? Are you looking out for particular things that you need to be wary of before you go to the ballot box?

Speaker 3:

Well, I think, first of all, within the country, within the United States, we've had funny things happen before. When Kennedy was elected, we talked about votes in Chicago, in Illinois: suitcases full of them, and dead people voted. Exactly, the dead people voting. But in this particular election, it seems to me, there's also an emphasis on foreign interference, and that's something that we have to look at, probably for the first time.

Speaker 1:

Well, interesting you should say that, because I also talked to Alberto Fernandez Gibaja. He's part of an organization dedicated to democracy and supporting democracy, and he's been taking a very long, hard look at artificial intelligence and the threats to democracy on a broad spectrum in this year of 2024. And, as you will hear, he too highlights foreign interference.

Speaker 5:

I am worried about the role digital technologies are going to play, or are already playing, in 2024 elections. The first one is an increased risk of foreign interference because of the current geopolitical scenario. We have seen this recently with the French government unearthing an allegedly Russian disinformation campaign. Because of the geopolitical scenario, because of the tensions between NATO countries and Russia, the US and China, we can see an increase in foreign interference. Foreign interference takes a lot of shapes, but it is also important to realize that foreign interference only works when there is a soil in which it can grow. It exploits the vulnerabilities of the existing system. The second risk that I see is an increase in what some academics call participatory disinformation. What is participatory disinformation? Well, we tend to think that disinformation comes from a hidden corner of the internet and grows organically. But it does not. It comes from leaders, it comes from active politicians with hundreds of minutes of live TV, and it becomes participatory because they spread narratives, and then some people pick up those narratives and reinforce them.

Speaker 1:

Basically, you're saying that people who are standing for democratic election are lying.

Speaker 5:

Not all of them.

Speaker 1:

Not all of them, but some of them. A false narrative is a lie really, isn't it?

Speaker 5:

It's not necessarily a lie, and I think that's also one of the problems that we have with this issue. Let me put it this way. This statement: President Barack Obama was not born in the US. That's something we can check. We can check that he was actually born in the US. There is proof of it. But what about this: the leader of the GOP says that he's not convinced that President Barack Obama was born in the US. Can you check that? Can you say if that's true or not? The statement is true. He says that he has questions.

Speaker 5:

This is not about one particular statement. This is about the narrative, the way of understanding things. What is the intention of this narrative? They didn't try to disqualify President Barack Obama; they were trying to erode his legitimacy as the legitimate President of the United States. It was all about people considering he might not have the legitimacy to be the president, and that's participatory disinformation. Once the narrative is out in the wild, people pick it up and find whatever evidence they can find, or make up, to reinforce this narrative. That's why I think it is an increased risk, these political leaders throwing these narratives to the population.

Speaker 1:

So that was Alberto Fernandez Gibaja. Just to give him his full title, he's Head of Digitalisation and Democracy at International IDEA, an organization dedicated to supporting democracy. Interesting things he said. Which worries you more: the foreign interference or the participatory disinformation?

Speaker 3:

I thought the second one, and I love the expression 'the soil to grow'. And as an American I was thinking of Roy Cohn in the 1960s, who taught Richard Nixon many of the things that were then taken up by other people. And I come back to a speech a public relations person, John Rendon, gave at the Air Force Academy in 1997. He said: I am the first information warrior.

Speaker 1:

Way back then.

Speaker 3:

Way back then. And I mean, I think Rendon meant something relatively simple. But I do know that the implications of information warfare have now evolved to be much more sophisticated than they were before.

Speaker 1:

One thing that strikes me is Alberto said that he's worried about foreign interference. What I think we as voters can perhaps trust is that our governments are looking at that. They are looking for foreign interference, perhaps, and trying to clamp down on it. But at the same time, in many countries around the world, these are the same governments who are standing for re-election, and this is where we get to the in-house misinformation, which was Alberto's second point. They might be quite keen on using some of this new technology to swing some votes their way.

Speaker 3:

I think his use of the word legitimacy is crucial, because what is legitimate and who should we believe? And I think that becomes more and more problematic.

Speaker 1:

I think this is the challenge voters are faced with, and we hope, over the course of this podcast, to give you voters out there some advice about what to look for. One thing Gabriela from UNESCO pointed out to me is that the technology may be new, but leaders, dictators, people standing for election even in democracies, who want to tell us things to persuade us to support them: that is not new.

Speaker 2:

Propaganda has always been there since the Romans. Plain lies by not very ethical politicians have always been there. The problem now is that, with the power of these technologies, the capacity for harm can be massive, and therefore people need to be really, really careful about what sources they use to inform their choices, to ensure that they can distinguish fact from falsehood. Because what has happened is that the deep fakes and misinformation and disinformation are micro-targeted.

Speaker 1:

You're right when you say propaganda has been around for thousands of years, but to a certain extent, because it's been around so long, we as voters in democracies are a bit wise to it. AI-generated stuff, we are not; it's too new. Do we need civics lessons in schools, for example, for young people, to train them to start verifying information better?

Speaker 2:

First and foremost, we need to steer the development of these technologies ethically. Because if you go down immediately to say citizens should protect themselves, which is true, citizens should educate themselves, yes, but you need to cut the bad guys off at the beginning, not at the end.

Speaker 2:

It's not only the downstream that we need to address. There are also technological tools that we can use to counter these algorithms, to fight algorithms, to detect when something is false and not let it be exposed in cyberspace.

Speaker 2:

Those things are good, but I think that it's very important that we also tackle the upstream.

Speaker 2:

Those that are developing these technologies with this malicious intent should be framed in a way that there will not be impunity, that there will be accountability, that there will be responsibility, but also that we will have the legal frameworks and the liability regimes that will increase the cost of doing so. Because what I think is that now there is very low cost to producing this misinformation and misleading people, and therefore we need to go back to the rule of law, and even go further, because we are now talking about neurotechnologies. Without the neural data that these malicious actors collect, or that the big platforms collect but that can then be used for the wrong reasons, the manipulation and the kind of misleading information that people fall for will not thrive, because they will not be targeting you. And therefore we also need to tackle the question of neural data and the ethics of neurotechnology: deep information about your identity, your emotions, your cognitive biases that can then be used for these misinformation purposes. This is the whole system. We need a systemic approach.

Speaker 1:

So a systemic approach, an upstream approach, sounds great to me. Apart from the fact that it isn't happening, and hasn't happened, with big tech and new technology.

Speaker 3:

Well, I think she does mention, in the end, legal accountability. And my point, I always use the example of the law of the sea: by the time the lawyers and the countries agreed on the law of the sea, the technology had already evolved. So if we count on states, or international, multinational, multi-stakeholder bodies, to pass rules about what goes on in AI, by the time they do that the AI has probably gone to other places, and it's going to take a long time in that sense. So we're looking at 2024, and it seems to me almost impossible for either the tech companies or the countries to get something in place to make sure that we have free and fair elections, or freer and fairer elections.

Speaker 1:

Yep, I agree with you. I think in 2024, to carry on with your maritime metaphor, that boat has sailed. But also, you said for the tech companies to pass rules: do they even want to? Do they want to comply? Because how many times have we seen Mark Zuckerberg answering questions in Congress or the Senate? He never really promised any firm commitments, but argued that they can police it themselves, when a lot of the evidence, if you look particularly at Facebook and the way it was used in Myanmar or Ethiopia for incitement to hatred, shows a terrible track record. I don't think we can trust the big tech companies to police themselves. I put this to you.

Speaker 3:

I don't see that. She mentions low cost; it seems to me it would be hugely human-resources intensive. And again, it may get into certain kinds of subjective things, as with Elon Musk: what you put on, what you don't put on, something only a government, we would hope, would be objective about. So I think it's very expensive, and I don't think it's for tomorrow.

Speaker 1:

So there is one place on our currently pretty benighted planet where laws are being passed to protect us, and indeed have been passed, and that is Europe. Alberto sees some hope in that. He thinks that Europe's keenness to protect its citizens from the downsides of AI and big tech could be a template for the rest of the world.

Speaker 5:

If we look at the European Union, they have been regulating. Users of social media companies in the European Union now have the option, for instance, not to get their feed run by an algorithm, but rather just ordered by time. That's a fundamental shift.

Speaker 1:

And it's really only the European Union, isn't it?

Speaker 5:

But many of these rules are.

Speaker 1:

It's the only part of the world that's done this.

Speaker 5:

True, but many of these rules are trickling down to other countries slowly, and of course the companies are spending billions on lobbying to make sure that that doesn't happen. But it's happening; slowly, but it's happening. The best example we have is, again, the data protection regulation of the European Union, the GDPR. This has become the global standard, and it's been adopted by many, many different countries.

Speaker 1:

Can we just move on then? I mean, I'm quite heartened by what you've said, that there is some regulation; we know about the European Union, and that we can protect our data from being harvested here and there when we're online. But that was going on for a very long time, and meantime we have the new development of artificial intelligence, and we played catch-up with social media. Are we playing catch-up now with AI? For this year, is it too late to prevent that kind of technology being used to manipulate democracy?

Speaker 5:

Artificial intelligence fundamentally doesn't change this, with one caveat, which is deep fakes. We have seen the example in the United States of how they tried to convince people that they didn't have to vote in a primary election with a deep fake of President Biden's voice. It was a robocall, and it really sounded like President Biden. I am assuming people recognize that the chances of President Biden calling you personally are very, very low.

Speaker 5:

And it was actually called out very, very quickly. But I wouldn't say the American elections cannot be affected by this. And I think the problem is not going to be the content created; the problem is going to be the liar's dividend: the fact that everything can be denied, and that anything can be questioned, and that people will not trust anything. We keep thinking that we can fight that by trying to correct falsehoods, and that's not the problem. The problem is not a particular false statement or semi-false statement. The problem is that once you make people question people and institutions and processes they can't trust, then there's no way back.

Speaker 1:

That statement, I think, hit the nail on the head. I mean, that chills me, actually. We know that Steve Bannon, for example, a kind of communications advisor to Donald Trump on the far right of American politics, has said openly that his role is to disrupt and sow doubt. And this is a real problem in an election year, if people are being encouraged not to trust any of the institutions, including even the UN, for example.

Speaker 3:

Most people get their information today from social media, and social media can be easily manipulated. So the question is: do you go to newspapers? Which newspapers, which institutions do you believe? And there has been a devaluation of institutional integrity, so that there's more power with more people and more situations, which is a good thing. On the other hand, there's less trust for certain people in certain institutions. And once that trust is gone, and I think Alberto was spot on, how do you come back to trust certain things? And the sophistication of the artificial intelligence just continues to improve, which makes it more and more difficult to check.

Speaker 1:

And this is the problem we keep coming back to. At the moment, perhaps we're exaggerating. Perhaps there are all sorts of good uses of AI benefiting us all over the place that I personally have unfortunately not paid attention to. What I see is bad actors who, if AI didn't exist, would be using something else: the old kind of old-school social media, Facebook or something. But now they have this new tool, and it is being used to make us doubt all the time, make us not trust, make us believe in conspiracy theories.

Speaker 1:

You see it writ large in America. You're starting to see it in the United Kingdom, with a big election there too this year. So we come back to regulation, and this is a question I put to Gabriela. Because, you know, the other thing that's happening is an unwillingness for countries to work together on big issues, and the UN is the body that's supposed to bring countries together on big issues, like trying to regulate AI for good, which is the UN slogan: AI for good. And she's in charge of this department, where she is getting the Chinese and the Americans and other countries in the same room to talk about it. And she was, I have to say, surprisingly optimistic, but maybe it's her job to be that.

Speaker 2:

You don't release these products to the market without having done the due diligence. And the core of the matter is bringing us to something that has its level of complexity but is not impossible to do, which is to increase the capacities of governments to understand the technologies and rule the technologies in a way that they deliver good outcomes: ensuring that we have the means to understand how it works and, when something goes wrong, to have the liability regimes to sanction, and therefore to establish again the good incentives for people to be careful when they are using these technologies. Companies are not bad guys. Companies always look for loopholes, and if you have a free-flow market in these technological breakthroughs with no ethical variables, they will use it. But then it is for us, for the international institutions and for the governments, to redress that situation, and we can do it. It's another sector. It's another economic sector, like the financial sector, like the pharmaceutical sector, like the very complex, innovative sectors that we have been able to frame. And when we didn't frame them, in 2008, you have a crash. Therefore, we cannot have a crash with AI.

Speaker 1:

Interesting comparison with 2008.

Speaker 3:

She's very positive, which is nice to hear. And I think of the recent situation with the photo of Kate, and how quickly she said that she had made a mistake. They haven't explained exactly why they did it, but it came out very quickly and was changed.

Speaker 1:

Well, what struck me, as a journalist, was that it was the picture agencies who withdrew it. They put what we call a kill notice on it, and that was interesting, because this was a harmless photo. They had probably fiddled with it to make her look better, or to make the kids all smile at the same time, or something. You know, it came from the royal family; everybody wanted to see how she was. But clearly the picture agencies are, in this big election year, wise to this and are looking for it, and obviously the editors there said: look, we can't put kill notices on doctored pictures of politicians which violate our guidelines and then let the royal family through. But it was a sign to us that they are watching.

Speaker 3:

But what's interesting in the situation in England, Imogen, is how quickly there was a recognition. That's what I'm saying. So in that sense, what Gabriela is saying, to be slightly optimistic: here is a case study where it was immediately said there's an error, and there was a change.

Speaker 1:

And I think we can be positive about that. This is all quite legally unregulated, but the media outlets have got their rules and they are playing by them, and if they see something that's released to them and then, uh-uh, this has been manipulated, they say: we're not using it. It's been manipulated; the real picture did not look like this. So, yeah, I think that's quite positive. At the same time, what I wanted to ask you was: when you go to vote, because the rules that Gabriela has been talking about throughout this program, as we agreed, are never going to come in in time for you and me going into the ballot box, what are you going to be looking out for? What are you going to do to prepare yourself to vote?

Speaker 3:

Well, I mean, I read about the candidates. I try to see when they present themselves in the media, to hear how they sound. But basically, as I remember when I started voting in New York, people brought a newspaper and, depending on whether they were Democrats, Republicans or other, they read in the newspaper what it suggested they vote. So it was based on a certain degree of institutional trust. But it is true that I am much more skeptical now when I see and read certain things, because I want to know what's behind them.

Speaker 1:

Yeah, I mean, you seem a little bit more optimistic than me. I look back to 2016 in America, and I guess there's no point in me pretending that Donald Trump would have been my candidate had I been able to vote in the United States. But you saw all sorts of conspiracy theories about Hillary Clinton, about whether Barack Obama was really American, and this did seem to influence some voters. This worries me.

Speaker 3:

And there was certain news that came out about Clinton, about different things about Obama, that painted a picture which raised questions. You were less sure about things than you were before, and that is a difficult situation to be in, when you're not trusting the sources of information.

Speaker 1:

And on that, let's go to Alberto again, because that was my question to him. His organization is supporting democracy in this new digital age, so I asked him: what should voters be looking for? How can we protect ourselves, since the law this year is not going to protect us?

Speaker 5:

The first thing is common sense. We live in a world where almost anything that happens is recorded. There is always a CCTV camera or somebody recording with their phone. It's almost impossible to do anything, especially if you're a politician, that is not recorded publicly. So if you see something that is extremely surprising, let's say President Biden cancelling the elections, use common sense: is this really, really happening? And if it's happening, you are going to see a lot of different trustable media outlets confirming that it is happening. And that takes me to the second thing they can do: trustable media.

Speaker 5:

I think a lot of people have lost the capacity to discern what is trustable media. That's homework we have. But you can trust media. And trusting media doesn't mean you need to agree with the media; you can trust their ethics. You don't need to agree with the Financial Times, but you can trust their ethics. They are not going to lie; they're not going to manipulate with false information just to convince you to vote for one party or another. Same for the New York Times, same for the BBC. People need to understand which media they can trust. And in general, you can trust real newspapers, but don't trust the YouTubers you're following who are saying things that are a little bit difficult to believe. That's somebody maybe you shouldn't trust.

Speaker 1:

Well, I for one, being from one of those trustable media, feel quite heartened hearing that. Having said that, I do know that YouTube, and influencers and so on on YouTube, are incredibly popular with younger voters.

Speaker 3:

Incredibly popular. People make their living now as influencers. The second point about what Alberto said is that the number of people reading the New York Times or listening to the BBC is not very high.

Speaker 3:

No, it's not. We see fewer and fewer newspapers being published. So to say, you know, read or listen to the trustworthy: what does that mean? And the second point, which I really didn't understand, was his emphasis on common sense. What I think is common sense might not be what you think is common sense. So the concept of 'common' is something that, in a polarized world, becomes more and more problematic.

Speaker 1:

I mean, I think he's trying to get into the head of, you know, the horrible Pizzagate rumor about Hillary Clinton: that she was the head of some pedophile ring run out of a pizza restaurant in Washington. What he's trying to say is: use your common sense. How can that possibly be true?

Speaker 3:

Some people believed it so much they went in and shot at people in this restaurant. On the other hand, Imogen, there's an expression: where there's smoke, there's fire.

Speaker 3:

But there was no fire there, but there was plenty of smoke, and people bought it. And the question is: why do people buy into this? Why does someone have 260 million people on their site? It's appealing to certain things about people that are not necessarily the common sense that we would agree with, and the influencers clearly have more influence today than the New York Times.

Speaker 1:

Yeah, they do. And so that brings me to the very last bit of interview we have here, which is from Gabriela, because she also took on board what Alberto said: you know, use your common sense, use trustable media. But she said that in this election year, even if there are no laws in place yet, governments have a responsibility to tackle disinformation.

Speaker 2:

There is also a need for the government to step up their information campaigns. It's super important that they create the awareness for people to think twice before believing what they see. They need also to be able to ask the big platforms to have the right elements there, to avoid content going viral when we know it is wrong. So there are ways to do it, and therefore I feel that this is also a responsibility. And then a big message to civil society: I think that you have a lot of institutions in civil society that are also informing, and media. And, because you are a journalist, I think it's important that media plays its role to warn, to prepare, to increase the capacities of individuals, also to be sure that they protect themselves, first and foremost.

Speaker 1:

There's an awful lot of weight being put on the shoulders of the media in the last couple of interview clips there. I mean, I'm glad that we are being shown as a good example of how you can get proper information in a functioning democracy. How would you sum this up? Do you feel more confident about going into the ballot box now, or not?

Speaker 3:

No, I feel more confident that people are trying to do something, but I'm worried that, between now and November in the United States, or the other elections coming up in 2024, we will have too little, too late.

Speaker 1:

Yeah, I think that would be my way to sum this up too. I'm worried that in the big Western democracies, the US and the UK, we're going to see elections that I think are going to be the ugliest, dirtiest ever. And in a climate like that, it's ripe for lots more disinformation. People might turn away from the main news and go to their social media feeds to get away from the ugliness that's actually real, and I think we're going to be flooded with information. So it's going to be hard for voters, it's going to be hard for the media, it's going to be hard for responsible governments to call out things that are not true. Now, I'm sorry, listeners out there, to be pessimistic. I think it's good to know, I think it's good for us to be aware, it's good for us to discuss it, and it's good that there are organizations like Alberto's, and UNESCO, part of the United Nations, looking at this. But be very alert to what you consume in this election year. That's it from me and Danny from Inside Geneva. Thanks to all for listening.

Speaker 1:

If you have comments on Inside Geneva, don't hesitate to contact us at insidegeneva@swissinfo.ch. You can find us, subscribe to us and review us wherever you get your podcasts. A reminder: you've been listening to Inside Geneva, from SwissInfo, the international public media company of Switzerland, available in many languages as well as English. Check out our other content at www.swissinfo.ch. I'm Imogen Foulkes. Thanks again for listening, and do join us next time on Inside Geneva.

Chapter markers:
AI Deep Fakes in Elections
Regulating Technology for Good Outcomes
Navigating Disinformation in Elections