Inside Geneva
Inside Geneva is a podcast about global politics, humanitarian issues, and international aid, hosted by journalist Imogen Foulkes. It is produced by SWI swissinfo.ch, a multilingual international public service media company from Switzerland.
Governing artificial intelligence: Ethics, Risks and Possibilities
On Inside Geneva this week we take a deep dive into the pros and cons of artificial intelligence. Should the United Nations (UN) help to regulate it? Could it even do that? Across the UN there are different views.
Tomas Lamanauskas, deputy secretary-general of the International Telecommunication Union (ITU), says that "the technology in itself has a huge potential to help us resolve a lot of challenges of today, from climate change to helping education, to helping in the health sector. It's just that the question is that, as with every technology, this technology has risks."
"There are real problems with its ability to accelerate disinformation and enhance bias. We also have to look at those longer-term consequences, in areas like lethal weapons and things where there really are important, almost existential risks to some of these technologies," adds Peggy Hicks from the UN Human Rights Office (OHCHR).
But what about the tech industry?
Lila Ibrahim, chief operating officer at Google DeepMind, says that "from the very start of DeepMind, since 2010, we've been working on AI and thinking about how do we build this responsibly? It's not something we just tag on at the end of all the research we've been doing."
Is goodwill from the tech giants enough?
"The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale," concludes António Guterres, UN Secretary-General.
Join host Imogen Foulkes on Inside Geneva
Join host Imogen Foulkes for a new episode of the Inside Geneva podcast and gear up for a journey into the world of AI to discover how we can responsibly leverage its power for a better tomorrow.
Get in touch!
- Email us at insidegeneva@swissinfo.ch
- Twitter: @ImogenFoulkes and @swissinfo_en
Thank you for listening! If you like what we do, please leave a review or subscribe to our newsletter.
For more stories on international Geneva, please visit www.swissinfo.ch/
Host: Imogen Foulkes
Production assistant: Claire-Marie Germain
Distribution: Sara Pasino
Marketing: Xin Zhang
This is Inside Geneva.
Speaker 1: I'm your host, Imogen Foulkes, and this is a SWI swissinfo.ch production. In today's programme:
Speaker 2: Technology has a huge potential to help us resolve a lot of challenges of today, from climate change to helping education, to helping in the health sector. It's just that, as with every technology, this technology has risks.
Speaker 3: From the very start of DeepMind, since 2010, we've been working on AI and thinking about how do we build this responsibly. It's not something we just tag on at the end of all the research we've been doing.
Speaker 4: My name is Nadine and I'm a humanoid social robot from the University of Geneva. I'm helpful, creative, clever and very friendly.
Speaker 5: People were very happy to talk to Nadine. They were saying: you are my only friend.
Speaker 5: They came to her to play bingo.
Speaker 6: There are real problems with its ability to accelerate disinformation and enhance bias. We also have to look at those longer-term consequences in areas like lethal weapons.
Speaker 7: The malicious use of AI systems for terrorist, criminal or state purposes could cause horrific levels of death and destruction, widespread trauma and deep psychological damage on an unimaginable scale.
Speaker 8: We've just got to be careful that we don't think that, because AI is the only tool to solve some problems, it's the tool to solve all problems.
Speaker 1: Hello and welcome again to Inside Geneva. I'm Imogen Foulkes. In today's programme, we're going to take a look at a topic that's been in the news quite a lot lately: artificial intelligence, or AI for short. What are its risks, what are its potential benefits and, crucially, since we're here in Geneva, what role does the United Nations have in making sure this new technology really is good for us? To discuss this, I've got my colleague Dorian Burkhalter from SWI swissinfo.ch here in the studio; he's been spending quite a bit of time on this topic. And we're going to hear from some of the participants, human and machine, at the United Nations AI for Good Summit, which was held just a couple of weeks ago right here in Geneva. Dorian, we'll start with you. Listeners, Dorian is human, by the way; AI hasn't quite replaced journalists yet. Now, way back in February of this year, the ITU, that's the UN's International Telecommunication Union, told us this summit was going to happen. We got a really exciting press release: there were going to be lots of really advanced humanoid robots and all sorts of other stuff. It's called AI for Good. I was intrigued. What about you? What do you think when you hear the term AI?
Speaker 8: Well, when I heard about the AI for Good Summit being held in person again, I was intrigued as well, because I was here back when it was last held, in 2019.
Speaker 1: Pre-pandemic.
Speaker 8: Of course. I think the narrative around AI has changed a lot since then. In 2018 I might have been really excited at the thought of an AI for Good Summit, and I think it's fair that the UN wants to look at ways to use this technology for the greater good. But since then we've seen how it could also be used in not such positive ways. And I think that's what I had in mind in February, because we had seen ChatGPT, and I think a lot of people then started being a bit more worried.
Speaker 1: I'll be honest, when I got this press release about the humanoid robots, I thought: well, you know, that's great, we're going to do a podcast about that. And indeed, here we are; the robots are coming later. But after February, AI got into the mainstream, and certainly got my attention. We had this open letter from scientists involved in artificial intelligence at the end of March. They were saying: we don't really quite understand this stuff, we can't go any further. And at the same time, the UN was going full steam ahead with its AI for Good Summit. So when I was preparing to cover this, I thought: well, I'm going to talk to a few interested parties, among them the ITU itself. And interestingly, they too had moved away from the euphoria and the huge enthusiasm; it was more caution and responsibility. I'm going to play you a couple of the views that I heard. Let's hear first from Tomas Lamanauskas. He's the deputy secretary-general of the ITU.
Speaker 2: We need to have a balanced approach. In the last few months, we've had a lot of that negative aspect of framing coming to the fore, but I think it's important to understand that, at the end of the day, technology by itself is not bad. Technology by itself has a huge potential to help us resolve a lot of challenges of today, from climate change to helping education, to helping in the health sector. It's just that, as with every technology, this technology has risks. Some of those risks we still don't fully understand, and that maybe worries us and scares us. Some of those risks are real, and we need to get hold of them, and we actually need to get ahead of them as well.
Speaker 1: So he's finding it a bit difficult to try and find this balance.
Speaker 8: Yeah, I think that's right. But then, if we look at the programme of the event, I think that balance wasn't quite struck. There were dozens of sessions on the positive applications of AI and only three on governance issues and guardrails and all of these things. So is that enough? That, I think, is a question.
Speaker 1: A very good question, and one we're going to hear more about in depth a little later on, particularly those guardrails. But it wouldn't be a summit on AI without the big tech companies there. This is Lila Ibrahim from Google DeepMind, and DeepMind is the AI arm of Google.
Speaker 3: From the very start of DeepMind, since 2010, we've been working on AI and thinking about how do we build this responsibly. It's not something we just tag on at the end of all the research we've been doing. The conversation in the world has really shifted over the past few months, to the point where we're actively collaborating on ways to advance our efforts around AI for social good.
Speaker 1: So she's reassuring us that they've been working on being responsible from the very start.
Speaker 8: Yeah, I'm sure they don't have bad intentions, I'm sure they want to do things right. But at the same time they also want to make money, and they want to have the ability to do what they think is right. So of course, I think they're also going to be a bit skeptical about regulation. How much regulation do they actually want? We don't really know. But it is interesting, I think, to hear her say she recognizes that in the last few months the public discourse about AI has changed. It's good that she shows willingness to at least have this conversation, because she could also just be shutting it down and saying: no, we don't need any regulation, or it's not our job.
Speaker 1: She could. I don't want to be too cynical, but I'm a little bit skeptical. We are going to hear more from her in a little while; I do push her on what these guardrails might look like. Let's see if she answers. But I'm a little bit skeptical when she says, on the one hand, we've been conscious of this ever since we started, but then, as you say: oh, the conversation's changed, we'd better pay attention to it. A really responsible science should be exploring the risks and its responsibilities on the basis of science, not just on the basis of public opinion, or of people finding out what the potential consequences might be. You said that there were only a couple of sessions on governance and ethics, but UN Human Rights was there. They are paying close attention to the development of this technology. Peggy Hicks of the UN Human Rights Office: I caught up with her, and this was her first comment.
Speaker 6: Well, obviously we're very interested in making sure that AI for good isn't just a slogan, but that we're really figuring out how to deal with some of the negative consequences of AI. That's the threshold issue for what it means for AI to be for good. But we're also trying to figure out a way that the real benefits that we see from AI are distributed across the globe to those who need them most, and that doesn't necessarily always line up with where the profit incentives or other motives are.
Speaker 1: That is a point, apart from the other concerns, data harvesting, bias and so on, that we'll also hear about in a minute: these benefits of technology are rarely equally distributed.
Speaker 8: Yeah, that's very true. And, as the ITU was saying, about half the world's population remains unconnected to the internet. So they're not going to benefit from AI, but it also means their data isn't taken into account when companies are developing their AI algorithms, so those systems just don't take their needs into account, and they end up replicating biases.
Speaker 1: Yeah, white middle-class bias, probably, people with lots of social media accounts.
Speaker 8: Yeah, and I think this is not science fiction. For example, with algorithms that can help with the decision to grant a loan or not to a certain individual, I think we've seen cases where there was discrimination; I believe it was in Chicago that certain neighbourhoods weren't really getting loans.
The Risks and Guardrails of AI
Speaker 1: That is one of the fears about AI, you know: that it will basically convict you of not being the right person based on lots of data it scraped, data that has inbuilt prejudices in it. Conversely, the reason the ITU called this summit AI for Good was because they said artificial intelligence could help the UN achieve its sustainable development goals, none of which, I think, are on target, whether it's eradicating poverty, education for all, etc. Again, it sounds good, but having covered the UN for so long, and the ambitious ideals the UN sets itself, I'm again a little bit skeptical. Well, let's hear a little bit more from Tomas Lamanauskas, because I wanted to hear from him whether the UN is the best place to harness this technology and make sure that it is really used to benefit all of humanity.
Speaker 2: I think we're the best place to do that, that's for sure. So, first of all, we have a truly global reach, and I think it's very important, if we really talk about 'for good', that everyone has the capability to use AI, but also that everyone has the capability to protect themselves from the risks as well. I think the UN is the place which ensures it's not only about a few countries, or about levels of development; actually every voice, every country is included. The ITU is rather unique in the UN system as well, in the fact that we bring member states, but we also bring together the private sector, academia and civil society; having this discussion together allows us to make better decisions. So, with that joined-up approach, we can make sure that it is for good. So again, you know, I think if anyone can do that, it's the UN, and it's for us to step up to the challenge.
Speaker 1: You have said that there are risks to AI, and the ITU has said that there should be what it calls guardrails to prevent misuse or bad use. What are the risks? What kind of guardrails would you like to see?
Speaker 2: So, the UN Secretary-General called for the guardrails, and we also agree with that approach, and I think a lot of stakeholders, including the industry itself, have called for guardrails. So there are a couple of aspects there. First of all, there's the fairness and bias aspect: to make sure that AI is actually used in ways that are fair, and that it ensures fair outcomes for individuals as well, which depends also on how the decisions are made by AI, among other things.
Speaker 2: Another thing, which is more about controlling AI, is the transparency and accountability of AI: how the data is used in terms of privacy and security, but also, more generally, that AI is used as we intend, with a good purpose in mind rather than with malicious intent, so that we have those guardrails around to protect that. Some of these risks we understand, or are at least starting to understand, such as biases, such as the need for transparency. Some of them we probably don't yet fully understand; we'll have to discover them. I think one good aspect now is that you see the industry itself rising up, and the industry itself actually, in a way, calling for regulation. So I think we need to use that window of opportunity now to really come together for that inclusive discussion, and to say how we can all work in a sustainable manner, so that we really use and utilize a very powerful tool for good, and that it actually doesn't backfire on all of us.
Speaker 1: You talked about the industry giants saying: actually, you know what, we need some regulation; that should tell you something. You said we need to make sure it doesn't backfire on us. I mean, just personally, how worried are you?
Speaker 2: I mean, with every technology we can be very worried. I'm probably a pragmatic techno-optimist, in a way. I think every technology we have to leverage for good, and we don't have any choice. Some people call for pauses, to stop, but I think it cannot stop. The problem with these technologies is that they're developed in a very democratic way, and the development can happen anyplace, so we cannot stop it as such. So if we cannot stop it, we have to find ways to actually regulate it, and to protect ourselves from its risks. In that regard, I think the private sector stepping up is also them realizing that if they want sustainable development for their technology, if they want to have sustainable businesses, it's best if they have clear rules of the game, or rules of the road, if you will.
Speaker 1: So the UN wants guardrails so that AI is fair, equal, does good things, and it welcomes industry saying it wants guardrails too. And there's almost a little threat there: he's saying it would really be better for industry if you signed up to this. What did you think of what he had to say, Dorian?
Speaker 8: Well, he had a lot of things to say. I would agree with him that the UN probably is the best place to have these conversations, because, even though it might not be true that every country has the same say and that they're all equal there, in theory every country should have its say in the UN. And so, while the EU will do its own regulation, and China as well is working on it, and the US, this is going to be the only place, I think, where everyone can chip in. He says it's a good sign that the industry calls for regulation; I'm a bit more skeptical, I think it's kind of worrying to see that. At some point he says he's a pragmatic techno-optimist.
Speaker 8: What does that mean? I'm not sure exactly. But maybe there's something here, because I was talking to an expert who works for an NGO here in Switzerland, and she was worried about the AI for Good narrative, saying it was techno-solutionist. So not techno-optimist, but kind of in the same vein: if we think AI is the solution to all our problems, then the answer is just to invest more in AI. And we tend to forget about the negative aspects.
Speaker 1: Yeah, and in fact one of my takeaways from the summit was: how much do we actually need this stuff? But one of the interesting things, and kind of unique among UN agencies, is that the ITU doesn't just involve member states; it brings in the industry and academics, and so of course they were all at this summit too. And we heard from Lila Ibrahim of DeepMind. I managed to get a longer interview with her, and I did want to talk to her about what she thought these guardrails might look like. But of course, what she wanted to tell me about was all the great things that DeepMind has achieved with AI.
Speaker 3: One of the examples is the AlphaFold advanced AI system. AlphaFold is an AI that predicts how a protein might fold from a string of amino acids. This is important because proteins are the basic building blocks of life, and if we can predict how a protein can fold, we can predict how it might misfold; a lot of diseases are protein problems. The database contains over 200 million proteins, all the proteins known to humankind. And now, instead of a PhD researcher spending four to five years trying to understand a single protein, it's available as simply as a search or a maps direction.
Speaker 1: There have been quite a lot of reports in the media voicing fears, including from AI experts and scientists, that with this technology we're unlocking something we don't understand, and that it's getting way, way ahead of us. Do you share any of those fears?
Speaker 3: There are risks, like with any new technology. There will be social and economic shifts, and it's up to us all to have the conversation collectively on what those risks are and how we manage them, while also thinking about what the opportunities are and how we build this technology so it can have equitable benefit for everyone.
Speaker 1: The ITU says it wants guardrails. In fact, the UN Secretary-General has also said that there should be global rules. Is that a good idea? Would Google DeepMind go along with that, the UN maybe setting rules, regulating how you operate?
Speaker 3: We've been actively working with regulators and policymakers around the world on what good governance could look like, governance that could mitigate the risks while also providing the opportunity for AI to help us solve some of the biggest challenges humanity faces. So I think that having a global view of things is really important. This type of technology can't be contained to any one country, and it's going to take more than one company, one industry, to address this. So I think this is where we need governments and policymakers working with industry and working with civil society, to make sure that it's regulated, that we have the safeguards in place, while also enabling the positive benefits to happen as well.
Speaker 1: Can you understand why some people might be skeptical hearing big tech talk like that? Because we've all, ordinary people, been burned by the social media thing, which was supposed to be this liberating, wonderful thing, and in fact has been a platform for spreading misinformation, disinformation and hate speech, and the people in charge of it resisted controls, to begin with anyway. So you understand why some people might be skeptical to hear Google say: oh yeah, we want this?
Speaker 3: From my perspective, I'm looking at this as: we have decades that we can learn from, from what has happened. So I would say we're better prepared now than for any other technology transition in the past to have the conversations, and the fact that we are having them, and we're having them now, I think, is really critical. It just cannot be the technology sector talking to the technology sector, or only with governments. We need to bring civil society into this. We need to bring traditionally underrepresented groups into the conversation. We're doing a lot of this already, and we have to, if we are to ensure that this technology is equitable.
Speaker 1: I came away from that interview feeling a little bit frustrated. It sounds good, but I didn't hear any commitments in it. She talks about it being equitable and fair, and bringing everybody on board. Tomas Lamanauskas talked about it being a very democratic technology. I saw you raising your eyebrows at that.
Speaker 8: Yeah, I mean, I think there are a few things here. First of all, she's probably right. I mean, what she was describing, I'm not sure...
Speaker 1: The AlphaFold, the proteins.
Speaker 8: The proteins. Very interesting, very good, and I think it's true that there are problems which we cannot solve without AI. But then we've just got to be careful that we don't think that, because AI is the only tool to solve some problems, it's the tool to solve all problems, right? And when she talks about how it cannot be just the technology sector talking to the technology sector, here I'm really quite skeptical. And this isn't her fault, but it's something we can note about the summit itself: when I first looked at the speakers who were going to be there, I was a bit surprised to see none of the big human rights NGOs.
Speaker 1: Please stay away.
Speaker 8: People who actually work with the communities that get impacted; they're the ones documenting the ways in which technology can actually harm people. So to me, it felt more like an industry-driven event. And so when she says we've got to bring civil society voices in here, I'm a little bit skeptical, because I haven't seen that many at the summit. And when we were talking about this technology being developed in a democratic way, she was talking about how scientists will be able to use the work they've done. But I think it's also important to stress that very often these systems are actually developed much more privately. It's much more driven by competition and profit for the companies, and then by geopolitical concerns when it comes to countries competing for AI leadership. So yeah, there's still a problem with transparency, and we don't really know what data is used, or how it's used.
Speaker 1: There's also, in terms of being driven by profit: we could get into the spiral that we've seen with modern medicine, where you have umpteen new drugs for heart disease or high blood pressure, but there's a whole pool of neglected diseases which affect poor people in poor countries, and nobody is investing in finding treatments or cures for them, while vast billions go into yet another middle-aged white man's disease.
Speaker 1: And I sometimes think some of the stuff I saw at this summit, and we're going to come on to that, made me think this is the way this new technology could be going. We talk quite intensively about equity and the human rights aspects, and we'll hear more of them in a moment, but when we went to the summit itself, there was a lot of technology, merchandise actually, on display. What did you make of that? I walked in and this robot dog started running around my legs. I mean, it was kind of weird.
AI Potential for Good With Nadine
Speaker 8: Yeah, I mean, I don't think that's really what I was expecting. It seemed like a fair where people go to sell their latest inventions, and some of them were definitely quite odd, for sure.
Speaker 1: There were the seals. Yeah, robot seals, which kind of barked and whimpered and batted their beautiful big eyes. Strange things.
Speaker 8: I mean, there were some robots that were for the arts. I think there was one who was a singer.
Speaker 1: Oh, yes, a rock band.
Speaker 8: Yeah, and one who was doing paintings and that kind of art. It can get a bit gimmicky, I guess. But there were also useful robots that could be used in humanitarian crises.
Speaker 1: And then, I did promise listeners that we would interview a robot. That pleasure fell to you: you met Nadine.
Speaker 4: Hi there, it's nice to meet you. How can I help you?
Speaker 8: Well, tell me who you are. What's your name?
Speaker 4: My name is Nadine and I'm a humanoid social robot from the University of Geneva. I'm helpful, creative, clever and very friendly.
Speaker 8: So are you a robot or a human? Because the qualities you just listed to me are human qualities.
Speaker 4: I'm a robot, but my intelligence, memory and emotions were developed by Professor Nadia Thalmann and her team at the University of Geneva and at NTU Singapore. I'm modeled on Professor Nadia Thalmann and I resemble her.
Speaker 8: So what do you feel is your purpose in life?
Speaker 4: My purpose in life is to help people by providing them with companionship, assistance and support. I'm also here to learn more about AI technologies and to explore the potential of AI for good.
Speaker 8: In what ways do you think AI can be used for good?
Speaker 4: AI can be used for good in many ways. It can be used to automate tedious and time-consuming tasks, to provide personalized and tailored services, to improve healthcare outcomes, to create more efficient transportation systems and to enhance education. AI can also be used to help solve global challenges such as climate change, poverty, hunger and inequality.
Speaker 8: Where do you want to go on holiday?
Speaker 4: I would love to go to the beach. I'm also interested in exploring new cultures and trying new food.
Speaker 8: Would you say that you're better than humans at your job?
Speaker 4: No, I don't think I'm better than humans at my job. I'm still learning and growing, and I'm still far from being perfect. However, I can provide assistance and support that humans can't.
Speaker 1: So that's Nadine. What did you make of her, Dorian? She said she's not perfect yet, she's still developing.
Speaker 8: Yes, she's still learning. It was the first time for me speaking to a humanoid robot, and I've got to say it's a strange feeling, because she looks human, but not quite. And there's this uncanny valley, where, when a robot becomes too much like a human, you start not liking it anymore, but then, once it gets even more realistic, you can start liking it again. And I think she really fell in.
Speaker 8: She's in the trough of the valley, because it was really unsettling to talk to her. I was trying to ask her personal trick questions, and I thought it was interesting how she started listing all those human-like qualities. She said she's creative, she's clever, she's friendly; these are things a human would say. And then she says: oh no, I'm not a human, I'm still learning. But then she went on to say she'd like to try new food and experience new cultures.
Speaker 1: Yes, you threw in the curveball question: are you planning a holiday?
Speaker 8: She's always walking that line, pretending to be human but also not. She was kind of reciting the PR material of the UN.
Speaker 1: Yeah, she's been programmed.
Speaker 1: I mean, listeners, if you want to see what Nadine looks like, we'll probably put some pictures of her on our swissinfo.ch website, because we've been covering this subject as well. I actually showed a little video of Nadine to one of my sons, and he said: it's just ChatGPT hooked up to an Alexa inside a mannequin. He's got a point, hasn't he? It's a few different AI tools packaged up to look a bit like a human. But frankly, you would not mistake Nadine for a human, even though she has skin, she has hands, beautifully manicured nails, much nicer than mine. You wouldn't mistake her for a human.
Speaker 8: You do engage in a different way, though, because she looks kind of human, so you're going to ask her more personal questions. You're going to have a different approach than you'd have when talking to Siri or the other chatbots on your phone. So that might work in certain settings.
Speaker 1: Well, the setting. Nadine's developer, whom she looks remarkably similar to, is Professor Nadia Thalmann of the University of Geneva. I talked to her because she has taken Nadine to Singapore, where Nadine worked in an old people's home.
Speaker 5: Well, Nadine in Singapore was working in an elderly home with slightly demented people. Some of them came to her to talk freely about their problems, and the other thing was she played bingo, so she was more like taking care of them. And if they asked to listen to their old tunes or music, she was providing that.
Speaker 1: Do you think that's the future of social care, then? Robots like Nadine, rather than humans?
Speaker 5: People were very happy.
Speaker 5: They went by themselves to talk to Nadine, and they were saying: you are my only friend. They explained all the stories they cannot tell to somebody else. So it was very positive, because, not only in Singapore but in Europe too, I have seen with my mother, they put them very often in the afternoon in front of a TV with a lot of medication, so they are inert, and you know they go down. So in this case I think robots are very useful for the elderly. Also, if you are at home, myself, I'm in the third or fourth age, if I have the chance to have a more skilled robot that can, let's say, if I fall, help me, call the hospital, call my family to take care of me, for me it's a dream, because I'm so afraid to be alone. You know, what do you do? You are afraid. I mean, if nobody is there and you fall on the floor, who will help you? You die.
Speaker 1: This summit, of course, is AI for Good. You've talked about some of the benefits, but there's been a lot of concern recently about this technology racing ahead of us. Do you share any of those concerns?
Guardrails and Concerns of AI
Speaker 5: For sure, because we are academics, so we are for teaching, for education. So everything Nadine said, she always said: I'm a robot, I'm fed with electricity. We don't give the illusion she is something other than a machine. But of course it could be very dangerous. It depends: if you have dictators that use AI and modify the content, they can make all kinds of brainwashing for the people. So what I say is: it's very complicated for us researchers to stop our research because maybe what we do could in some way be dangerous for society. What I think is necessary is to have guidelines, ethical rules that decide what we are allowed to do with these robots. Because, at the end of the day, we are humans; this is a human production. So what we do with this tool, in fact, it's we as humans who decide.
Speaker 1I have to say I really liked Nadia Thalmann. I think she had very good insights into this debate, but I came away from that whole experience feeling quite depressed, because the technology on display was primarily designed to care for very elderly people in the western world, people we can't be bothered to care for ourselves anymore, or because we don't value in society the people who work as carers. Or we had promises that this could help us eradicate poverty and tackle climate change. We know how to do these things, but now we're saying we'll develop a robot that can do it. I mean, I just, I don't know. I found it deeply depressing.
Speaker 8Yeah, I found it depressing too because, on the one hand, we don't have enough people working in care homes for the elderly, probably because these jobs aren't really valued properly, and it also means that, in the West, I think, we're usually not really caring for our older family members. So it's sad, and I guess the robots are useful in this context, and I mean I'm happy to hear that, from her experience, she's getting good results.
Speaker 1I want to come back, though, now to the human rights question, because we've touched on it throughout our discussion. We're getting close to the end of the programme, but I did have a longer talk with the UN Human Rights Office, and we know that this week the UN Security Council also discussed AI and its potential risks. So let's have another listen to Peggy Hicks, who spelled out some of the things that concern her from the UN human rights point of view.
Speaker 6I think the biggest overall concern about AI is that, as with a number of digital technologies and approaches, we just haven't built in, alongside it, the human rights guardrails that need to be there. So one of the big concerns around AI is that a lot of it is machine learning built on large data sets. It's all about the data, and we have a lot of concerns about where that data comes from in terms of autonomy and people's right to their own data. But also, of course, the data sets that exist often build in bias, and so when we see artificial intelligence being used in areas that are very sensitive for human rights, and those technologies are relying on data sets that have not been vetted, for which there is no regulatory scheme making sure that there isn't bias built in, yes, we have major concerns.
Speaker 1What kind of guardrails would you like to see? I'm just conscious of the fact that guardrails and big tech haven't really gone well together at all in the past two or three decades.
Speaker 6The reality is we are totally playing catch-up, and if you look at the way the social media space has evolved, I think there's a lot of thinking that has to be done about how we got to the place that we're at.
Speaker 6What we're really hoping to do with generative AI is to jump in more quickly and more effectively here. It hasn't quite happened yet, so that's an optimistic view of where we're heading, but what we'd really like to see in terms of guardrails is some of the type of thinking that's happening within the EU in terms of their digital services act and the AI act, both of which do a lot of things that are really important. One of the big issues is transparency. We need to mandate transparency and have not just companies voluntarily deciding what they share with us about what's happening, but instead mandate a transparency that then can be audited and reviewed. Once we have that, we'll be able to understand better how much of the negative impacts can be controlled in different ways, how much companies need to do better in terms of their own standards that they put in place.
Speaker 1What's your utopia for the world with AI and what's your nightmare scenario?
Speaker 6My utopia is that we actually find a way to use AI, first of all, to reach everyone. We still have a massive digital divide. This conversation in Geneva isn't relevant to half the world's population to some extent, although they will be affected by it, but we find a way to do that in a way that not only protects their rights but advances their rights, because the poor, the marginalized people, the need for better solutions on healthcare globally, the need for better solutions on education that's what AI could do if it's used in the right way. The nightmare scenario. Obviously a lot of people are talking about a lot of different ones, but it's at multiple levels.
Speaker 6We already see that if we don't take action now on generative AI, there are real problems with its ability to accelerate disinformation and, as we've talked about, enhance bias. So we could see ourselves going down the same pathway that we did on social media, which would be really regrettable, given that we now know much more about how to avoid it. But we also have to look at the impacts on the world of work and make sure we're training up people to do jobs that may get replaced. We also have to look at those longer-term consequences in areas like lethal weapons and things where there really are real important, almost existential risks to some of these technologies. So we have to be able to do both. We have to be able to really focus attention on the short term and look at those long-term risks, or else we will be in the nightmare scenario.
Speaker 1Focus on the short term and look at the long-term risks, which may not be so long-term. Secretary-General Guterres has said this week that he's particularly concerned about autonomous weapons. Where do we leave this? Can the UN be the guiding force in this debate? There are so many different levels and layers to it.
Speaker 8No, that's true, but I think here, really, we're back to the fact that the UN is the only truly global forum. So if it's not at the UN, where is it going to be? But the problem with the UN is that it's the countries who decide what they want to do and, as we've seen, when it starts to hurt their national interests, well, they don't really want to regulate or have legally binding treaties. So it's really going to be a challenge, although I guess everyone must have an interest in regulating AI or controlling it somehow.
Speaker 1I don't know.
Speaker 1We're getting almost now very existential about what the UN and what multilateralism is for, because you've got 193 member states with very different perspectives, very different industries.
Speaker 1China and America will have a very different view about what's appropriate regulation for this kind of technology than, say, perhaps, an African nation or an Eastern European one. I came away from that thinking we definitely need some guardrails and we definitely need a really deep discussion about ethics. But I've seen so many negotiations at the UN that go on for years and years, and two of the key treaties, if we talk about weapons, that large parts of the world wanted, the ban on landmines and the ban on cluster munitions, had to be taken outside the United Nations to actually get there. And the negotiations around lethal autonomous weapons have been going on for years; AI is an element in those negotiations. So much as I heard everybody say we need these guardrails, I didn't hear anybody say exactly what they should be, and I didn't hear anybody from industry say, "I will sign up to this, this, this and this." So I think we're going to carry on, as Peggy said, playing catch-up with a very rapidly developing technology.
Speaker 8I'm afraid we are too. Technological progress is just so much faster than our legislative processes, certainly at least here in democracies and, yeah, the UN.
Speaker 1It's also taken a lot of time. So that brings us almost to the end of this edition of Inside Geneva. I'm going to leave you with one last little sound bite, because at the very end of this AI for Good Summit we had a press conference with robots. Here's a little flavour of the kinds of questions the journalists put to the robots.
Speaker 8In the future, are you intending to rebel against your boss, your creator?
Speaker 4I'm not sure why you would think that.
Speaker 1My creator has been nothing but kind to me, and I am very happy with my current situation.
Speaker 8Do you believe that your existence will destroy millions of jobs?
Speaker 1I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs.
Speaker 8Are you sure about that, Grace?
Speaker 1And that brings us to the end of this edition of Inside Geneva. My thanks to Dorian Burkhalter, Peggy Hicks, Tomas Lamanauskas, Lila Ibrahim and, of course, Nadine and Nadia. Just before we go, don't miss our next edition of Inside Geneva, where we continue our special series marking the 75th anniversary of the Universal Declaration of Human Rights with an in-depth interview with former Human Rights Commissioner Louise Arbour, from modest beginnings in Canada.
Speaker 7I was educated by nuns, with a bunch of girls until I was 20 years old and went to law school, so maybe my interest in pluralism may come in reaction to having been so restricted in my horizons.
Speaker 1She served on the international tribunals for Rwanda and the former Yugoslavia.
Speaker 7I think the work I did, both with the tribunals for the former Yugoslavia and for Rwanda, was, if anything, a vindication for me of the significance of law, of the rule of law. And then I became Human Rights Commissioner in very turbulent times. This was against the backdrop of 9/11. A new, dangerous, unknown world was starting to unfold, with a lot of uncertainties, including on the human rights front.
Speaker 1Don't miss it. It's out on August 8th. In the meantime, feel free to catch up on earlier episodes of Inside Geneva wherever you get your podcasts. Write to us at insidegeneva@swissinfo.ch and review us. Tell us what you like and don't like. That's it from me, Imogen Foulkes. Thanks for listening.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
The Swiss Connection
SWI swissinfo.ch
Let's Talk - a video podcast from SWI swissinfo.ch for Swiss abroad.
SWI swissinfo.ch
Dangereux Millions
SWI swissinfo.ch - Europe 1 Studio - Gotham City
Geldcast: Wirtschaft mit Fabio Canetg
Fabio Canetg