
Aiming for the Moon
129. AI Needs You: Verity Harding (director of the AI & Geopolitics Project @ the Bennett Institute for Public Policy at the University of Cambridge | Founder of Formation Advisory)
With artificial intelligence on the rise, we are at a crossroads. How will we continue to innovate and regulate this new technology? But this is more than a technological question. As my guest Verity Harding states, “AI needs you.”
In this episode, I sit down with Verity Harding to discuss her book, AI Needs You: How We Can Change AI’s Future and Save Our Own.
How we apply AI is a multi-disciplinary issue. We need everyone, from tech people to teachers, students, nurses, doctors, and everyone else.
Topics:
- Why AI Needs Everyone
- Technology's Shadow Self
- The Socio-Technical Approach to AI
- "What books have had an impact on you?"
- "What advice do you have for teenagers?
Bio:
One of TIME’s 100 Most Influential People in AI, Verity Harding is director of the AI & Geopolitics Project at the Bennett Institute for Public Policy at the University of Cambridge and founder of Formation Advisory, a consultancy firm that advises on the future of technology and society. She worked for many years as Global Head of Policy for Google DeepMind and as a political adviser to Britain’s deputy prime minister.
Socials -
Lessons from Interesting People substack: https://taylorbledsoe.substack.com/
Website: https://www.aimingforthemoon.com/
Instagram: https://www.instagram.com/aiming4moon/
Twitter: https://twitter.com/Aiming4Moon
Speaker 1: Alrighty, well, welcome to the interview. Thank you so much for joining me today.
Speaker 2: Thanks for having me.
Speaker 1: Yeah, you published a fascinating book, AI Needs You: How We Can Change AI's Future and Save Our Own. And to start off with kind of the obvious question here: why does AI need me?
Speaker 2: Well, AI needs all of us, because it's a really important and pervasive technology that has the potential to influence lots of different aspects of our lives, whether that be people who are at school, people who are in work, people with families, people in the creative world. AI has the potential to have a huge impact. But at the moment, the conversation about whether AI should be used here or there, how we think it's going to benefit us or how we think it might hurt us, is dominated by quite a small group of people, and I think it's really important that that conversation is broadened and that many more people have their say when it comes to what the future looks like. What the future looks like should be up to all of us, not just the people building and creating AI.
Speaker 1: In the introduction of your book, you propose the idea of AI's shadow self, the idea that technology mirrors us. A big promotional argument for AI has been, well, it'll all work out in the end. And you say, well, no, not necessarily. We have to be very intentional about the way we develop this. So what is the shadow self that we should be thinking about as we involve ourselves in AI, and why should we get involved?
Speaker 2: Yes, and you're right to use the word intentional. That's the word I think about when I think of AI: how can we be really intentional and aware about what we're doing? Now, when you think about big technological changes in the past, it feels like they were just always there, or that what happened was in some way inevitable. But what I show through my research in the book, looking at the history of transformative technologies, is that actually that's not the case.
Speaker 2: Technology is hugely influenced by the sort of society and culture and politics and values of the time.
Speaker 2: So while, of course, we think about things like the Industrial Revolution coming along and changing how we live and work, and that's true, actually that technology, and all technologies, have been deeply influenced the other way as well, by humans: not only in terms of what gets built, but what gets funding and what doesn't, and who gets funding and who doesn't.
Speaker 2: Those are all very political decisions, or very human decisions, as are how that technology is used and how it is regulated. Throughout the book I show all these different examples of how we could have made different decisions, and things might have gone a different way. That might be good, that might be bad, it might not be obvious whether it's good or bad, and people might disagree about whether it's good or bad, because of course everyone has different viewpoints.
Speaker 2: So when I talk about the shadow self, it's to say: if technology is just us, if AI is just us, and that technology, no matter how innovative and new, still represents and reflects the societies that we're living in, then it's going to represent all the great things about humanity, the inventiveness and creativity and all the wondrous things that we can do, but it's also naturally going to represent some of the more disturbing aspects of human nature. And so I argue that we need to be intentional about trying to push that technology towards those better qualities.
Speaker 1: You propose an interdisciplinary, democratic approach, not just to managing the regulations around AI but to shaping the future of the technology itself, expanding beyond just tech experts. And at first the reason might not be apparent: why, in a highly technical field, would you want people, high school students, for example, or philosophy majors or history majors, who maybe don't feel equipped to deal with a big technical computer question like this? Why should they be involved as well?
Speaker 2: Well, there's this word, socio-technical, and what that means is an approach to technology that is not technical only, but wraps in the social sciences and wider society's needs, thinking about technology issues not just as technology issues but really as human issues. So an example might be: when it comes to AI, do we think it's okay to have AI mark a student's term paper? Someone might say, yeah, of course, if it can do a good job, then why not? And others might say, no, if a student's worked really hard on something, they want to know that a human being has looked at it and used their human judgment on it; it doesn't really matter about the outcome, the process is really important. Those are human questions. You can't answer something like that just with a technical "can it do a good job or not?"
Speaker 2: But you will see that a lot in the AI debate. People will say, well, AI is more likely to make a neutral decision than a judge, and so why not just create a program that can do it?
Speaker 2: But then I would say, well, actually, we take it really seriously when we're taking away somebody's life or liberty in a criminal justice setting, and that deserves to be a human interaction based on centuries of human evolution in terms of the law, and it doesn't matter whether the technology is accurate or not. So that's a socio-technical approach, and when you have that, then of course it can't just be technical people making those decisions, because they will be expert only in their own area of science or technology. You will have a brilliant computer scientist, for example, who's incredible at building AI programs, but what do they know about the criminal justice system? Or a school, or a hospital, or the creative industries? Not very much. And so what I say to people is: you don't have to be a deep AI technical expert to be involved in AI, but you are an expert in something, and whatever that thing is, it's really important to working out the socio-technical questions when it comes to AI.
Speaker 1: It's fascinating, because when we think about papers being graded or AI involved in criminal justice, we're not making arguments based only on the accuracy of the decisions, though we are in part, and there are terrible examples of how AI has gone awry in some of the algorithms and the data they've been trained on. We're also appealing to something more. For example, writing a paper: even as a high school student, I spend so much time writing the paper that I want someone to experience it with me. I don't just want the grade itself; I'm trying to convey an experience. When you listen to a podcast, it's not just the information you're going after, it's the experience and interaction between the guest, the idea, the host, as well as the listener, and you're engaging in the conversation as well through that. There's something that we seem to be missing when we replace people with computers in some of these instances. Now, I'm personally...
Speaker 2: I think you're thinking about it in that really smart, holistic, socio-technical way: what actually do I want here? What actually are we trying to build as a society? Are we trying to build more human connection or less? And that doesn't mean that AI will always lessen human connection. There are some incredible examples of AI where I think it can really help us a huge amount and not detract from our experience in any way. But there's not going to be a one-size-fits-all approach, and so I think you're completely right to be thinking about it in that way. What society do I want to live in first, and then, can AI help me get towards that society? And if it can't, then maybe we don't actually need AI in that case. And if it can, then great, let's be really thoughtful about it.
Speaker 1: The computational and technical side of this is absolutely fascinating to me. I love programming and analyzing algorithms themselves. So the solution isn't, well, we should just ban AI or not do anything with AI. When people tell me that, the case study I always point to is: what about stroke victims who have lost the ability to speak, and now, with algorithms, we can reproduce their voice and allow them to speak again? That's incredible, and it furthers human interaction and human connection. As you repeatedly point out in your book, it's the intentionality behind it, and technology is not always a replacement for progress. In fact, sometimes progress is through humans as well, it seems.
Speaker 2: Absolutely, and you're right, that's a great example. I think there's a lot of exciting potential for AI in healthcare. We know that there are AI programs now that can analyze retina scans or mammograms to detect cancers, perhaps at an earlier stage, or do that at a scale that we're not able to when it's purely a human review. But that won't replace the doctor or the nurse, because of course they bring so much more than just looking at that one image. What it might do is enable them to do their work in a more efficient way.
Speaker 2: And that's why it's really important to look at these things on a case-by-case basis, and think very carefully about whether it's appropriate or isn't appropriate, and, as I say, think about what type of society you want to live in. Then: is the AI program helping us get towards that, or is it detracting from that? And try to manage those effects in the best possible way that we can. What comes across, I hope, in the book is that we really do have the potential to influence this technology. It's very human decisions that end up deciding which way these things go. And that's again to your point earlier about why it's important that there's a diverse representation in those discussions and debates.
Speaker 1: I think we've made a pretty good case for why people should be involved, and why the public outside the technical areas should be involved as well. Now let's get to the pragmatics: how do we actually do this? You propose, and you can correct me on this, basically bipartisan support, if you think about it from the American perspective, towards an intentional future, as well as debates and having to compromise to make good policy. Now, it could just be that I'm a teenager and an American growing up amidst an election cycle, but it doesn't feel like we have a lot of bipartisan talking about anything. It's pretty chaotic from a political perspective, and people have a lot of deep-seated hate towards the other side of the aisle. How do we then propose something as big as a policy about AI and AI's future? How do we get through this, essentially?
Speaker 2: Well, it does feel very divided and polarized at the moment in lots of places, not just in the US, and I think that does make it more difficult to find a political solution. I would like to see political leadership that says, you know, we are going to do this in a consensus-driven way. But it's harder to do that through a political process, and sometimes, and this is one of the examples used in the book, it's best to almost outsource that to trusted experts. So rather than this being something that's decided at the political level, politicians appoint somebody neutral and independent who brings trusted experts together, and even if they disagree, they debate and discuss in good faith and produce a report. That's what happened in the UK back in the 1980s, in the early stages of biotechnology, when we had to reckon with a lot of these ethical questions as well. And it worked very, very well. So, yes, I would like to see it done politically.
Speaker 2: I think it is harder with the polarisation that we see today. But if it's not going to be done in a political process, of course, there are lots of areas where we don't need politicians to be involved. We can have what they call permissionless policymaking, where informal coalitions come together. You might see lots of heads of schools come together to decide how to tackle something, and we have seen examples in Hollywood, where the unions negotiated directly with the studios and made some decisions about AI through that process. So it doesn't always have to be political. And if, as I know you are in the US at the moment, you're struggling with that, then I think it's all the more important that more people take it upon themselves to say, hey, I might not be an AI expert, but I have a view on this, and let's try and pull some people together to think it through.
Speaker 1: In preparation for an interview with former NIH director Dr. Francis Collins, I've been reading about vaccine hesitancy and mistrust of experts as well. How do we deal with something that sensitive? A lot of people in the US feel as if experts sometimes either don't represent them or are after them, and that sentiment runs through political rhetoric. I'm not sure exactly where it all originated, but it's definitely something that my generation is growing up amidst. How do we go about talking about AI in a way that both explains it to people and also proposes these policies?
Speaker 2: Yes, it's a very good point, and I think it's all the more reason why we need to take it upon ourselves, where we can, to try and talk to people we disagree with, trusting the other person's motives and that they're a good person. When it comes to AI, I hope to see more of that. But to your question: something I've not been pleased to see over the past couple of years is what I think is almost an overinflation of AI's capabilities, these warnings that it might become a sort of sentient or powerful intelligence that takes over and is super dangerous. I don't like that for two reasons. One, I think it distracts us from some of the actual, real, tangible, pragmatic issues that we have to think about when it comes to AI today, because you're thinking, well, this is only going to be a problem when it gets to a far-off future that is, frankly, theoretical, unproven and very much disagreed about amongst the AI community. On the other hand, if you encourage people to believe that it will be all-powerful in the future, then it makes them think it must be pretty powerful right now, and you can end up outsourcing your judgment to AI programs, because you think, well, if everyone's saying it's so potentially powerful and dangerous, it must be able to at least handle this small problem that I'm dealing with.
Speaker 2: So I don't like the way that we've talked about it as a society in the past couple of years. Neither do I want to talk about it, as we discussed earlier, only as something that's dangerous, because while there are real considerations with AI, and harms that it can do and has done, I think that framing can also encourage people to step away and disengage. When people are frightened or disturbed, they tend to disengage, and I think that is going to leave us all the poorer.
Speaker 2: If people feel like this isn't something they want to bother getting involved in, that it just sounds too difficult, too frightening, too scary, that's a real loss, because science and technology always move us forward, and we do want the good side of this to come through. So I'd love to see us talking about it in a more measured, practical, thoughtful way, like we're doing today. It's great that you're doing this podcast, because I think it's the type of debate that warrants calm and rationality, and that can be hard to find sometimes these days.
Speaker 1: Yeah, absolutely. This has been a fascinating conversation so far, and I think this approach of intentionality, of thinking and having conversations about all of this, will hopefully help shape the future in a good way. We'll see. I hope so. Wrapping up with these last two questions: what books have had an impact on you?
Speaker 2: You know, I'm a big reader. I love books and I've read them for as long as I can remember, as soon as I could read. So I've had books that have influenced me my whole life. When I was younger I was really influenced by To Kill a Mockingbird, which we read at school, and I know that book has touched a lot of people. It definitely had a big impact on what I chose to go on and study; it made me really interested in studying history and the past and what we could learn from it, so that we don't repeat those mistakes.
Speaker 2: A historian who does that beautifully now, and whom I read many years later, because I read To Kill a Mockingbird when I was a teenager, and unfortunately that was a long time ago, Taylor, is the Harvard historian Jill Lepore. She wrote an incredible one-volume history of the US called These Truths, which is a remarkable use of history to bring to life some of the current issues you talked about today.
Speaker 2: Why are we in this position? She does a great job of showing how we got here, and that was really my inspiration for how I chose to tackle this book and how I chose to tackle AI: to say, we've been through this before as societies. We've dealt with hugely transformative technology before. Imagine if you showed an iPhone to somebody from 1850. We've navigated that before, and we can do it again, but we should learn from history to help guide that future. So books that are able to do that, to bring the past to life, contextualize it, situate it, and use it to help explain today, those really influence me.
Speaker 1: What advice do you have for teenagers?
Speaker 2: Well, look, I think AI is an area that really needs younger input. Right at the end of the book, in the conclusion, I tell a story about some teenagers banding together in the UK to protest against what they felt was an injustice relating to technology. During the pandemic in the UK, the government decided that students' final grades, the grades that decide which university you can get into, would be decided by algorithm. There wouldn't be exams; they would just take students' predicted grades, because we have a system of predicted grades here in the UK, and then use an algorithm to adjust for how good a school you went to. If your school was an underperforming school, you would get your grades marked down. Obviously this is deeply unfair, because if you're an incredibly talented student who just happens to go to a maybe more underprivileged and therefore less highly performing school, you might not get the grades that you deserve. People felt very strongly about this, understandably, and they protested.
Speaker 2: They went to Downing Street, the UK's equivalent of the White House, and they stood outside and they campaigned and they protested, and the government overturned that decision. I quote in the book the prime minister at the time, who said, sorry, this was a rogue algorithm, or something like this.
Speaker 2: It's a really incredible example of the power that you do have. I think sometimes, growing up in this society, it can feel like, well, do I have much power? But the advice I would give to teenagers is to know that they do have a huge amount of power, especially if you team up together, but even one person alone can make an enormous difference. So I'd encourage them to think about that, and to think about what they think of AI. Try and work out what you think first, before anybody else tells you. Read my book, but read other books too, and then use that voice in creating the type of future that you want to see.
Speaker 1: Well, Ms Harding, thank you so much for coming on and speaking to us, and for promoting the idea that AI needs all of us, across all the different segments of our societies. It's been a great conversation. I've really enjoyed it.
Speaker 2: Thank you so much. It really does, it does need everyone. I appreciate you taking the time today, Taylor, and I really enjoyed our conversation too. Thank you.