British tech advocacy consultancy, Milltown Partners, recently completed a first-of-its-kind study on international attitudes toward artificial intelligence. Director of Technology Policy, Leo Rees, says where you live - U.S., UK, France, or Germany - has a lot to do with how you feel about AI. Bonus: Stay tuned until the end of the episode, when Leo tells Niki about his adventure buying his first NFT.
Niki: I’m Niki Christoff, and welcome to Tech’ed Up. Today’s guest in the studio is Leo Rees, whose company recently completed a first-of-its-kind study on international attitudes towards artificial intelligence. We’re breaking down the study’s most surprising findings and what they reveal about ourselves, how our personal human experience influences the way we feel about AI.
As a bonus, stay tuned until the end of the episode when Leo will walk us through his experience buying a Damien Hirst NFT.
Niki: Leo Rees, thank you for coming on the podcast.
Leo: It's a pleasure, Niki. Great to be here.
Niki: People might wonder how I recruit guests [Leo: mm-hmm], and the answer is we met a couple of days ago and you made the mistake of telling me about an NFT. [Leo: mm-hmm] And I said, ”You have to come on the podcast now.” [Leo: mm-hmm] And you're here!
Leo: No, you're welcome. And I'm glad the bar is that high. [both laugh]
Niki: It's a high bar! So, we're not talking NFTs, although at the very end, you're going to do me a solid and walk-through- you own an NFT, and people have no idea how to buy one. [Leo: Sure] That'll be two minutes. [Leo: Sure] The point of the podcast today is artificial intelligence. [Leo: mm-hmm] AI. You've just completed, kind of, a first-of-its-kind study of international attitudes about it. [Leo: Yep] So, let's dig into it. What, what was the study? How did you tee it up?
Leo: Sure! Um, so what we wanted to understand is, I think artificial intelligence is an area that's got a lot of assumed truths about it. A lot of myths. Most of them, very fear-mongering or blindly optimistic. And, we wanted to understand when you dig into the attitudes of the people who are informed on this topic, what do they actually feel about it? And, at the moment, it's a big white space in regulatory terms, so we wanted to understand where do people see the benefits of this technology? Where do they, where are they concerned, and what might society's response to that look like?
Niki: And you had described to me, which you kind of just said, but I'm going to repeat it [Leo: mm-hmm], the concept of doomsday [Leo: mm-hmm!] AI positions, and then, evangelists. [Leo: Sure!] Of the people you surveyed, did they have pre-existing perspectives? Did you screen for that?
Leo: Yeah. So, what we did was we tried to recruit people using actually a panel, uh, which my consultancy Milltown Partners has built with YouGov, which is basically a panel made up of people who work in and around technology policy issues and have a specific interest in AI. But just to be clear, these aren't, like, the developers that are building it, the engineers, or, like, really specialist professors at Stanford, although they may be captured by our panel. They’re people who have a generalist interest in technology and AI issues and are therefore informed, but not developers themselves. If that makes sense.
Niki: Yeah, which is a little bit like the people who listened to this podcast [Leo: Right] So, people who are informed, but want to learn more, have an interest in tech.
Leo: Exactly. And also, it's important to say we recruited from around the world. So, we've got German, French, British, and American respondents on there. And, I think, that's really important in understanding where the kind of differences and alignments are in global attitudes to AI.
Niki: So, let's dig into it. The Americans. The Brits. [Leo: Yup] The French and the Germans. How do they feel about AI?
Leo: Sounds like the start of a joke! Doesn't it? [Niki: It does sound like- I know, yes!] Well, I think we see a surprising amount of commonality and I think to, your point on, like, headlines, I think when people are primed with things that they've read before in the newspapers, like, AI creates doomsday, Terminator-style scenarios, or AI is gonna solve climate change, there's actually a lot of consensus around some of those topics because people are familiar with them and have seen them before and are happy to kind of agree with them on the basis that they're in the discourse.
However, we see, like, some serious regional differences on a number of topics, and a lot of those have to do with AI, um, and how they see the use cases of it in their markets. But, I think, a lot of them are also about faith in the state and faith in society to deal with the implications of those technologies. So, the French and the Germans are generally much more trusting in the state, generally much more enthusiastic about the ability of policy to cater for new and fresh challenges. I won't go too far in hypothesizing why that might be, but they are in a position of much more political stability than some of the recent British and American history, I'd guess.
Niki: Yeah, I think we're in- I don't want to say, like, meltdown mode [Leo: chuckles], but y'all had Brexit, we've had 2016 through now. So, I do think that there's, certainly in the United States, a distrust, or it might not even be distrust, but, just, a lack of faith that the government can get its act together in regulating things. [Leo: Sure] I don't want to speak for the Brits, but it makes sense that we might feel skeptical that our government can handle it.
Leo: Absolutely. But I think we just see that there are different levels of trust in the ability of regulation to bite and in the ability of society to hold some of these technologies to account. And the Brits and the Americans are, definitely, are slightly more cynical on that, than the, than the French and the Germans. Now that doesn't necessarily translate into optimism for the technology, because the British are actually quite optimistic about tech. It's just that they don't think, on this topic, that regulation is going to do a great deal.
Niki: I think that’s a really good point. And another thing, obviously we're known for this as Americans: we don't want to be over-regulated. We have states that want very little regulation, and then we have other states, like California, that are highly regulated [Leo: mm-hmm], but there is sort of a contrarianism. People don't want to have too many rules here, but I do think we're optimistic about technology and technology solving challenges, so [Leo: hmm] Okay. So what were the optimistic findings that you found?
Leo: Well, I think that people definitely saw AI as a game-changer. We actually did one really quite fun, uh, test on the survey, where you were confronted with an option of two statements: one which was exciting and the other one which was scary; one which was ethical, one which was unethical. And we asked respondents to just, without thinking, answer as quick as they could. And the idea of this is you get people's instinctive reaction to something when they haven't thought about it too much. And really surprisingly, there was overwhelming optimism and overwhelming, sort of, positivity in the answers. So, people's starting point is actually quite optimistic. And, I think that people are really optimistic about the potential of AI when they think about challenges that they know are currently too big for us with our current tools to take on and take forward. So, things like genetics and medicine that people have seen over the course of the pandemic, things like climate change, which I think we accept as, a, an existential threat to us all. Um, by contrast, I think people saw a lot more pessimism in areas where it was seen that AI might have a potentially invasive or very, sort of, manipulative role in their personal well-being and agency. Or in areas where there was more perceived downside than upside, unlike with something like climate change.
Niki: So, just to back up [Leo: Sure] a smidge. An example might be, you would quickly ask them positive or negative about using this to sequence a gene- I mean, I have no idea actually, but-
Leo: Sure. Actually, it was even a step further back than that. It wasn't about a specific instance. It was just: when you think of artificial intelligence, do you think it's exciting or scary? Do you think it's ethical or unethical? Is it hype or is it legit? And, like, on that, people genuinely seem to think that it's a legitimate thing, and that it's very exciting and has a lot of potential. Um, which is interesting because if you just read, you know, POLITICO or you just read the kind of, I guess, the investment theses of lots of these companies, there's a lot of, like, cynicism about what is AI, is there actually anything in it or not? But I think this audience generally thinks that there is.
Niki: I am optimistic in any use case where it makes my life easier. [Leo: mm-hmm] If AI is adding my flight to my calendar [Leo: chuckles] without me having to do that myself- I’m for it! If it’s helping, once we get to self-driving cars, if it's going to help them scan what's happening and make them safer and we have fewer accidents or deaths because of that- we're not there yet, we're really far away [Leo: Sure], but I think that's good. The idea that we're using AI…Well, I'm interested in how China might play into this [Leo: mm-hmm] I think for me, where I get freaked out, is the idea that the Chinese government is using AI for facial recognition [Leo: mm-hmm], for controlling its population, and, I'm sure, for cyberwarfare [Leo: mm-hmm] and cyberespionage [Leo: mm-hmm]. This podcast drags China a lot [Leo: mm-hmm], but, this is where I get freaked out. Did you dig into that at all?
Leo: Yeah, definitely. And, I know, I heard that you've spoken about this on your previous episodes, I think [Niki: laughs] [Niki: It’s a go to] Um, no, but it's, it's an important thing to be talking about. Uh, I, we did try to dig into this a bit, because I think when you talk to rule makers about this, there are kind of two things going on in their head at the same time. There's firstly, what do we do about this, quite, y’know, exponential technology that is going to impact every level of our lives and how do we ensure that it reflects societal expectations of what technology should be able to do and respects the law. On the other hand, we have this geopolitical dynamic where AI supremacy, if you will, or securing geopolitical advantage by having the most powerful AI is a real concern. So, there's a tension there between wanting to constrain AI in some ways to ensure the innovation is responsible, perhaps a bit slower paced and works for society versus ensuring that, particularly Western liberal states of the kinds that we surveyed, can keep up with where China is with its, kind of, no-limits approach to AI.
And, what we found is, when you trade those things off against one another, people are actually more concerned about ensuring they do limit the use cases of AI in a kind of domestic setting for society's benefit than they are about throwing all the rules off the table and allowing unbridled innovation to keep up with China. But, I think, what we saw was that that is a real, y’know, a real schism in the minds of people thinking about this. And depending on how you ask the question, you might get a different answer.
Niki: So to recap what you just said, in these Western liberal democracies, US, UK, France, Germany, people, in general, these kind of informed influencers paying attention to it, would rather we take the time to make sure we take bias out of it, respect privacy, have some transparency, probably, around the algorithm. So, these are examples of policies that we might put in place to just roll it out slowly, methodically, and in a way that is consistent with whatever our culture is and our laws. But then, that is something people care more about than “Just give the AI whatever we need to do to keep pace or outpace the Chinese.”
Leo: The reasons why they might be concerned about, kind of tethering AI a little bit might change based on the individual that you ask. But I think, over a survey of a thousand people from these countries, we found that when you directly measure up those as motivations for why we should be looking at regulating or not regulating AI, people are much more concerned about ensuring that it's held to account in a responsible way for society than they are about securing geopolitical advantage by allowing innovation at all costs.
Niki: I think it’s an important insight and maybe a way that, and this is the next question for you; is how regulators and policy makers think about it? I mean, here, we're sitting in Washington, DC. We've had some resignations of people from our defense agencies [Leo: mm-hmm] saying “I'm resigning because we're so far behind [Leo: mm-hmm] in cybersecurity, in understanding this tech, like, we're so far behind [Leo: mm-hmm], we may never, we may never catch up.” This has happened recently in Washington.
That's a subset of people who their daily life is around that. And, is there a way we can kind of bifurcate consumer apps versus how the government uses it [Leo: hmm] in a way that kind of squares this, right? So that you take into account people's concerns. So who's going to benefit the most from AI?
Leo: Okay! So, that's a different question, but let's start with that because that's really interesting.
So, I think what we found is that the biggest beneficiary of AI across the piece, without any shadow of a doubt, is big business. And 80% of people think that big business is going to have a huge net benefit from this. By contrast, minority groups are seen as people that do not stand to benefit very much from this technology. I think it was something like only 23% of people saw net benefit for minority groups from the application of AI. So, there's a real, pressing question there about where power from these systems ends up. And that has got to, to your point, feed into the regulatory conversation. Now, the second point, I think that you had, was like, what do the rules look like and how should we think about making them?
And actually, when we did this research, we did it in partnership with Clifford Chance, um, who’re obviously a global law firm, because I think they're increasingly being asked questions that relate to this space that are not so much about compliance with the letter of the law, but about thinking, well, what's the spirit of policy-making? Where is it likely to take us? And there's a whole bunch of data in the report about this. I encourage anyone nerdy or sad enough like me to go read it [Niki: chuckles]. But, like, I think in general terms, what we found is that there's not a great degree of preference as to what form the rules take, but there is a lot of strength of feeling that something should be done.
And one thing that became clear from the options we presented to respondents on different types of regulation that might come forward is that the era of self-regulation, of companies taking governance on themselves, is over. While it's important, it's not sufficient on its own.
Now, interestingly, when you start looking into sectors in, like, the traditional economy that we understand and ask how important it is to regulate AI in those sectors, we saw a much more varied picture. So in things that are traditionally very regulated, like defense or financial services or healthcare, people saw that actually those were real priorities for AI regulation, versus those areas where perhaps they're sort of new sectors or there's a perceived upside to what AI might add, like climate, and therefore innovation should be allowed where possible, which attracted lower levels of support. So, to answer your question, it might be that thinking about AI as this, like, intangible blob that we just need to do something about might not be the way that people are accustomed to thinking about rules.
It might be better to think about the use cases, the applications, because that's where people can get their heads much more clearly around the impetus to regulate or not.
Niki: I think this is so important. I actually just had this conversation with one of my clients [Leo: mm-hmm] last week where I said, you know, when I think about AI and where we need to focus on a framework: insurance companies making decisions about your coverage [Leo: Right], that concerns me. Giving credit or a mortgage based on AI in ways that could hurt minority groups or people who are protected under the law, we just have to be so careful [Leo: mm-hmm], same with financial regulations and just the security of your money. And then, you're right! Climate models? Figuring out crops? Using it for any kind of analysis that's going to help us have a healthier world, I'm for it. So this is the takeaway, I think we just got there, which is, as people think about how to regulate it, if you separate it by sector, it doesn't feel as- you're not boiling the ocean.
Leo: Exactly! And I think, like, to your point on insurance, there are lots of companies and industries that have been making these kinds of data-informed judgment calls for a really long time, and doing so with very little oversight outside things like financial services regulation. And now this kind of new fad of “it's AI, we need to regulate it” is taking what that product is to a whole new place of understanding and thinking about what that is. And I think there's a lot of impact that this conversation will have on how we conceive of products. But I think you're right that, actually, the way in which it can affect society is often through the application. And that's, I think, where it's easiest to start focusing efforts.
Niki: And so, if we can get across these liberal democracies, if we can start to move more quickly by separating out the higher risk [Leo: Right] areas, we might get there faster, [Leo: Breaking down the conversation] Exactly! And then we can work on our geopolitical dominance, which [Leo: Exactly] I, I'm not on your panel [chuckles], but I'd probably be the outlier.
Leo: Okay. Maybe, we do that next time! [both chuckle]
Niki: We'll do that next time! Ok! So, we'll end on that. You have been brought into this podcast studio for one other reason [Leo: Sure], which I really appreciate, ‘cause you don't want to do this necessarily, but you own an NFT. [Leo: mm-hmm] We've done a crypto 101 series. [Leo: mm-hmm] What's an NFT? What is blockchain? Last episode, we did web3. Can you just quickly explain your personal NFT experience?
Leo: Sure, and just for your listeners’ benefit, I'd like to caveat this by saying that I am one of you on this one. So, I've struggled my way through this process. I am not an authority on this. But last summer, I entered into a raffle, which was launched by the British artist Damien Hirst, and Hirst's work generally revolves around the idea of art and value, because it's obviously one of these spaces where you get monstrously inflated valuations based on a kind of collective assumption of value because one artist wrote their name at the bottom of it. So, he's done various things in the past, from formaldehyde sharks to diamond-encrusted skulls, and all of these works at their heart have the idea of “what is value when you make a piece of art?” Now, his latest, uh, sort of project was producing 10 thousand A4 paintings, which are his kind of dot paintings, quite famous from the start of his career, and basically saying, “If you have $2,000, you can enter the raffle, and from that, we'll pick 10,000 winners.”
So, I thought “Why not? It'd be a laugh.” And so I entered in, and was fortunate enough to win one. Now, what the competition then did was say, “Right. You have two options. You can either take your winnings as the physical A4 painting, or you can take an NFT of the same image, and you have one year to decide. And at the end of the year, we're going to destroy whichever option you don't take. So, if you take the hard copy, we’ll destroy the NFT. If you take the NFT, we'll shred the hard copy.” And I think the idea is to understand how markets and the idea of value play out when you have to choose. Like, if 9,000 people take the hard copy, do the NFTs gain in value? If everyone takes the NFT, the one person left with the hard copy is, y’know, suddenly a lot richer than the $2,000 they put in. So, that's kind of the idea at the heart of it.
Niki: So, you have a year to decide…
Leo: I’ve actually already decided. I have to admit, um, I've actually decided to take the hard copy. [Niki: You took the hard copy, you didn’t take the NFT!] Yeah. I didn't take the NFT, but I did enjoy the NFT while I could because I had access to it in the year preceding my decision.
Niki: Oh! So, you made your decision. Ok, and then quickly, and I only want to go through this because it is complicated and I want people to feel- this is a shame-free space. I tried to get onto OpenSea and I just gave up, I thought it was too hard to buy an NFT. So can you just, very quickly, explain what you had to do to get an NFT?
Leo: Sure. And, so again, I’m no expert on this. And I think this process was probably made a little easier by the fact that the gallery exhibiting the works, which is quite a centralized institution in the world of all of this, partnered with a provider that helped with this. Damien Hirst's team partnered with a group called HENI, which I think is a gallery that works in producing NFTs and showcases. They, in turn, basically gave me a link to my token, which I then had to access by downloading a wallet, which could host the token. So, I had to download something called a MetaMask wallet, which was insanely complicated- that's probably me pleading ignorance rather than anything else. [Niki: No! They are complicated.] And, once I'd done that, I then had to pair that with the link that HENI sent me and approve the transfer in order to get it across. From there, in my wallet, I then had to view my piece in a viewer, which was separate to both programs, in order to see the piece. And that was my journey. I have to say that with the friction of some of that, and not having an immersive enough space for me to enjoy it, I couldn't picture what I was going to do with it. And maybe it's just me being horribly traditional and British, but I prefer to have the original piece and stick it in my bathroom, to be honest.
Niki: It’s so British to be so self-eff- [Leo: chuckles] I mean, I just think you don't have to be so self-loathing about it. [Leo: laughs] I would have done the same thing. I'd take the hard copy. [both laugh] I'd take the hard copy because we just don't know what's going to happen with the NFT market. [Leo: Right] And now you have this amazing story. [Leo: Right] Who cares that they trashed your NFT? I mean, maybe Damien does. [Leo: chuckles]
Leo: Yeah, maybe he does! And maybe I won't be laughing so much when the people who kept the NFTs are multi-millionaires, but… [Niki: oh my gosh, and what if it was worth 11 million dollars?] I know! Well, I'll probably change careers and get out of tech, I think, Niki.
Niki: Leo, thank you so much for coming into the studio. You're on a work trip. I really appreciate you taking the time. This is super interesting.
Leo: Pleasure to chat, thanks for having me.
Niki: Next week, we’re talking about space, satellites, and smartphones. To make sure you’re staying on top of the conversation you can follow Tech’ed Up wherever you get your podcasts. New episodes come out every Thursday. You can also learn more about the show at TechedUp.com