Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 

Season: 2, Episode: 7

Can AI Be Ethical?

This 'Projects Episode' discusses a few TAShub projects grouped around the theme 'Ethics'
 
Project: Consent Verification in Autonomous Systems  
Inah Omoronyia - Project Lead, Senior Lecturer in Privacy, University of Bristol

Project: Trust me (I'm an Autonomous Machine?)
Joseph Lindley - Project Lead, Research Fellow, Lancaster University

Project: ARGOS  
Enrico Gerding, Project Lead, Associate Professor, Director of the Centre for Machine Intelligence (CMI), University of Southampton

Industry Partner: Ian Forrester, Senior Firestarter, BBC


Podcast production by boardie.com

Podcast Host: Sean Riley

Producer: Louise Male

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at
www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.


Episode Transcript:

 

Sean:                  Welcome to Living With AI, where we discuss how artificial intelligence is changing everything from your daily commute and work life to how you holiday and what content you stream. I’m Sean Riley. This is the TAShub’s own podcast. The TAShub is the Trustworthy Autonomous Systems hub, an interdisciplinary network of people researching the idea of trust in autonomous systems. If you’re new to the podcast, this is season two so feel free to check out our back catalogue.

 

Links are in the show notes or if you search up TAShub, I'm sure you'll find us. We're recording this on the 25th May 2022 so all opinions correct at time of recording. Today is one of our project episodes. We're going to feature a few TAShub projects grouped around the theme 'Can AI be ethical?' We've got three researchers joining us and a representative from industry. We've got Inah, Joseph and Enrico. They're our researchers. Ian is joining us from industry. So I'm going to ask each of you just to introduce yourselves and the name of the project you've been working on.

 

Then we’ll  listen to a bit more detail about the projects before we chat about the general theme of ethics. I don’t know if we’ve got enough time in the world to chat about the general theme of ethics and AI but hey. Just starting top left just on my screen, Enrico, can you tell us your details and tell us a bit about the project you’ve been working on?

 

Enrico:               Sure. Yes, thank you. So I’m Enrico Gerding. I’m associate professor at the University of Southampton. I’m a principal investigator on a project called AI Assisted Resilience Governance Systems. That’s a whole mouthful of course so we abbreviate it as ARGOS.

 

Joseph:              Hi, I’m Joseph or Joe. I’m a research fellow at Lancaster University. I run a thing called Design Research Works which is all about promoting design led ways of understanding the world. My involvement with TAS was a project called Trust Me, I’m an Autonomous Machine. That is using some design led research techniques to try and understand our relationship with trust and machines which behave autonomously.

 

Inah:                  Hi everyone and thanks for having me. My name is Inah Omoronyia. I am a senior lecturer in privacy at the University of Bristol. The project that I’m involved in within the context of TAS is a project called Consent Verification in Autonomous Systems. Pretty long but we simply called it CVAS.

 

Ian:                     I'm Ian Forrester. I work for BBC R&D in Manchester. My job title is Senior Firestarter, which is an interesting title. What I mainly do is research around adaptive media and this thing we call the [unclear 00:02:43], which intersects nicely with what this whole podcast is about, artificial intelligence.

 

Sean:                  Fantastic. Thanks so much. Enrico, can you tell us a little bit about your project then?

 

Enrico:               Yes, sure. So as I said, it's called ARGOS, just for abbreviation, it's a bit easier. So what we're looking at is to see whether artificial intelligence can be used for governing policies, especially in situations where there is a lot of risk from certain events that occur. So our focus has been on, say, flood risk for example, and combining that with other risks that could happen at the same time. So a very good example recently is where you have, for example, both Covid and flooding happening at the same time.

 

So you have constrained resources with which you can implement policies. So what kind of policies are most effective, and then having a data-driven approach to these sorts of policies and to implementing the policies. So, and this is just pump priming so it's not a fully-fledged project, we've been looking a lot at the literature to see what kind of artificial intelligence techniques we can use for the different phases of policy and policy implementation. So that's just it in a very brief nutshell.

 

Sean:                  So, Joseph, can you tell us about Trust Me, I’m an Autonomous Machine?

 

Joseph:              Absolutely, yes. So this is another pump-priming project which I think had a time limit of around 12 months. It's just finished up actually. So it's been running for the last year or so. It was built from the premise that there's a big gap between what a lot of experts say are the important things to do with trust around autonomous machines and AI, and the understanding that much of the everyday public has. The project was really trying to understand if that gap exists and then try to start bridging it.

 

The way we went about this was quite eclectic really. It began with a fairly straightforward workshop. So we got 20 experts from various academic disciplines as well as various industrial partners together. I was going to say in a room but we were in a virtual room, on a Zoom call. It was last year so restricted by the pandemic. We got the experts together and tried to capture a snapshot of their views. Then the interesting part came when we started using some design-led techniques to try and figure out what all of their perspectives meant.

 

This was really dancing between the data that we gathered at the workshop and some visual concepts that we were trying to develop of how to represent those various, not always agreeing, points of view. Uniquely on this project, we were also taking influence from John Ruskin, the 19th century thinker, philosopher and polymath. So we were essentially trying to think what John Ruskin would make of these from his 19th century perspective; he had a lot of comment on the societal changes coming about from the Industrial Revolution.

 

What would Ruskin think of the 21st century changes we're going through with autonomous systems and AI? We'll maybe get on to it a little bit later but we've produced a series of designs and a prototype which we're using as an engagement and research tool to try and capture the public's perspectives on all of what the experts said in that workshop way back when last summer.

 

Sean:                  Fantastic. I'm imagining these wristbands WWR, like what would Ruskin do. I think that was the thing, wasn't it, a few years back, to get these wristbands made. Inah, can you tell us about the consent verification you've been working on?

 

Inah:                  Yes, sure. So the title of the project is Consent Verification in Autonomous Systems. It's really based around the motivation that when we talk about autonomous systems, some processing of data takes place somewhere. Sometimes the data being processed might be personal data. When it is personal data then there is another dimension to it. There is someone behind that data on whom the processing will have some impact.

 

So as a consequence, the question then is how do we build autonomous systems such that they recognise the impact they have on users and the privacy-specific implications of that. So within the context of this project, we looked at this much more broadly and also wanted to look at it through the lens of regulatory compliance. When we ask that question, then first off it is: is there a lawful basis for such data processing? There is a whole range of lawful bases out there. In this project we said we're going to focus on consent: what does it mean for an autonomous system to, one, understand consent.

 

That, in some sense, would involve requesting consent, it will involve managing that consent and being able to know the duration of such consent and what to do thereafter. To be [unclear 00:08:14] when we talk about consent in the building of software systems, there are some fundamental assumptions that we would normally make. The assumption that we understand when we are dealing with personal data, the assumption that we understand that there is a purpose for processing and when that purpose changes, we can go back and revalidate the lawful basis.

 

The assumption that if we say this is personal data at this point, it would remain personal data. In some sense it means the data in itself has not evolved. But when we are talking of an autonomous system, these assumptions don't hold anymore. The emerging behaviour of autonomous systems means that the purpose for processing would change over time. There is also the question of how do we know autonomous systems [unclear 00:09:03] understand what personal data is?

 

Sean:                  I think it’s really interesting you talking about emergent behaviour there and we’ll perhaps come back to that because I think that ties in to all the projects. But Ian, can we have a quick chat with you because I believe you were involved in the Trust Me project, weren’t you, in some way. Can you tell us about that?

 

Ian:                     Yes. I’m aware of the Trust Me project and I’ve worked with Joe quite a lot. So it is something that we are very interested in in lots of different guises I’ll say.

 

Sean:                  Fantastic. I mean obviously the TAShub is all about trust. I know a key-

 

Ian:                     Sorry, I was going to say that I think that's probably the core of it, the trust. Whenever we talk about AI and machine learning, it's about the trust. Do you actually trust the system?

 

Sean:                  Because one of the things that Inah's just mentioned there, and I think this will tie in, as I mentioned before, to all the projects, is this idea of emergent behaviour, because whenever you're talking about something that's autonomous, it may have been trained in certain situations. Then if it experiences situations different to the ones it's been trained in, there are going to be these properties that emerge where it's going to have to cope with those situations.

 

So a very simple example is, I don't know, an autonomous car driving on a different road to the one it was trained on. You would hope it's still going to drive in a manner which is let's say what we expect or would hope from it but emerging behaviour often has, and it's like a watchword in these podcasts, unintended consequences. Anybody want to talk about that? Enrico, what do you think about that?

 

Enrico:               Yes, of course, I mean these systems need to generalise to unseen situations. That’s very important. I think a lot of important research goes into designing what’s called robust mechanisms so at least you have some safety in place. For example if the system doesn’t know what to do, at least it knows that it doesn’t know so it can for example hand over control back to the user. So these sort of situations.

 

So it needs this meta knowledge, it needs to have some sort of self-awareness to understand its own limitations. I think that’s a really interesting direction and we haven’t even scratched the surface in terms of these sort of algorithms, this self-reasoning. It’s very interesting.

 

Sean:                  On a previous podcast we talked about the idea of ethics with various people. Somebody said you might be choosing different ethics depending on the territory that you buy the car in, for instance if it's an autonomous car, just using that as an example. Okay, the ethics here in this territory are different to the ethics in- I mean this is just such a massive, massive topic but it just makes me think, what would Ruskin do, Joseph?

 

Joseph:              Yes. Well I have to confess, when you mentioned that before, I had never heard of it. If that’s actually a thing and I’ve got this far through my project involving Ruskin without knowing about it, I’m terribly remiss so hands up on that.

 

Sean:                  No. When I say that, there are various what would certain person do, what would certain person do. It just struck me that I know that certain people had wristbands made. Anyway, carry on, as you were.

 

Joseph:              This is a meme I'm now going to create and pretend that I invented, so anyone that doesn't listen to this podcast will think I've come up with it. What this made me think of isn't really exactly what Ruskin would do but it relates to where we ended up on the project. There's a big difference between just autonomous behaviour and autonomously changing the behaviour of the system. That's much harder to deal with because, if the system is to behave in a predictable or trustworthy way, it has to fully understand the next context that its new behaviour is taking place in.

 

This is a big part of where we ended up on the project, having considered Ruskin's point of view and having considered our experts' points of view, and tried to design something that reflected all of these. We came up with this concept of trust being a distributed concern, and that's distributed in many ways: across the individual perspectives of those people, the stakeholders in any given system, but also distributed over time, in that your perception of how much you trust a system will probably change from one moment to the next.

 

I think that in a way links with something Inah said about consent, that if you consent to a particular use case on Monday, that may well change by Friday if the system’s updated. So I agree it’s a hugely complicated topic but from my side, championing the approach that we took, I think creative and design led and practice based approaches to try and understand this problem are a really useful part of the pie really. They’re not the whole thing but it’s really useful to talk to other experts, people with technical perspectives, maybe formal methods guys, social scientists, all these people and try and bring it together and build a prototype. That’s what we’ve done on our project.

 

Ian:                     We say these things are complicated. They are way, way more complicated than we think they are. So for example let's take, I don't drive. I do drive a motorbike which is a whole different kettle of fish but, especially with my public service hat on, there's a whole bunch of people that are not involved in the discussion around this.

 

So for example when you talk to automated car designers then they’re talking to the manufacturers, they’re talking to maybe the councils and stuff like that, the governments and they’re doing that but they’re not talking to the citizens. They’re not talking to the people who are walking on the street. This discussion needs to be much bigger. Obviously from an industry point of view, that is a very scary place because this can go on forever.

 

But I think in this case in particular, as we already know, especially when we talk about diversity. In car automation, we already know there's the MIT paper, or there's this paper, showing that darker skinned people, like myself, are more likely to get run over by an automated car. So these discussions need to go much wider than they currently are. That is a very difficult point for someone in the industry who just wants to create automated systems that can sell.

 

But I think that there is a big point about some of these automated systems, which I know some of these discussions that we've just had and some of the projects that have been discussed really hit on, and that is: where is the transparency and where are the discussions with the general public? Because right now it's being sold as, I mean I saw the documentary by Channel 4 about Tesla and how Tesla spliced together a video to make it look like the car was driving itself. Those things stick in people's minds. They go that's great, I could have an automated car.

 

We don’t realise that there’s a lot more going on and those discussions need to be had in an adult way, not someone showing a video and go look it’s all great, come and buy this car. That’s just literally the very edge of a much bigger iceberg.

 

Sean:                  Just to play devil’s advocate there, advertisers have been doing this for decades, if not centuries, making things look better than they actually are. I mean you only need to go to your local fast food joint and look at the menu board and then order one of those things to realise that it isn’t going to look like that thing. So I know there’s a wider point there but how do you regulate that if you’re promising things that, surely that’s just Trading Standards. If they’re saying it can do x, y and z and it can’t do x, y and z, there is a problem there anyway, isn’t there?

 

Ian:                     There is a problem, yes. I mean I agree but as we’ve already seen, regulation of different industries has kicked in I think a bit too late. But I think we know this, we can see it happening and we’re repeating the same mistakes again and again. I think the important thing about artificial intelligence is that moment of trust. Do I trust the system? When I go up to a hand dryer, I expect the hand dryer to work. When it doesn’t work, I do not trust the system, especially when someone else puts their hand in there and it works.

 

I think these are the things that are really important. That's why I'm really interested in the projects that come out of this hub, because they start to unpick this. To make this more public and more accessible to the public is one of the key things that we need to do. It helps industry in the end. I think right now the industry gets away with what we currently do, where it's kind of like, look at this flashy video, it's great, but there is a much bigger problem that we need to handle. I think when it comes to AI and some of these things, we're embedding it so deeply within our infrastructure that we really need to have those discussions right now with everyone.

 

Sean:                  Totally agree. I totally agree. Unfortunately, legislation is often way behind the curve, isn't it? You only need to take another highway example: they won't put a speed camera up until somebody has been killed in a collision or in some kind of incident. But just to pick up on something you mentioned earlier, Ian, about basically potential racial bias in, say, Tesla.

 

This often, and it's not the only thing, but it often comes down to the data set that's been used to train systems and AI systems. Is that a big thing that's waiting to get us all on the backside? Are we all going to get got by this? Is it really a case of rubbish in, rubbish out? Does anyone want to talk about that?

 

Ian:                     Garbage in and garbage out, that’s what we used to call it. I think that is one of the major things. I think part of it is about literacy. It sounds like how does this connect but we need the public to understand what this is about. So for example I work at [s/l Mozilla 00:20:25] on different things and they have a thing called Common Voice which is about training voice systems based on a diversity of voices rather than the people who can spare the time and have the time and maybe get enough money to actually donate their voice to the systems.

 

I think we need more of that but right now the public is a little bit scared to do that kind of thing because they’re wondering where it goes. I think that does go into some of the stuff about consent and understanding the stuff but right now it feels like we’re in this point where the garbage is going to overflow and it’s going to kill us all. That’s my view, not the BBC.

 

Sean:                  Thank you for that clarification. Inah, I think this is a good point to bring you in, mentioning consent there. What’s your take on this?

 

Inah:                  I think it’s right, it’s garbage in, garbage out but I also present a different perspective on this. At the end of it, autonomous systems are built by people and it goes back to the original topic we have for this podcast, can AI be ethical in itself? People would argue that look, that’s really not the question because AI is a tool, autonomous system is a tool, people build tools and tools can be used for good things and it could be used for bad things.

 

So from that perspective, it’s almost an answer of no. It’s impossible for AI in themselves to be ethical. But the other side of it is yes because if good people build good AI systems then they will be ethical but then if we build them in a way that uses the wrong data, that uses the wrong processes, that uses the wrong techniques and that uses everything that is wrong, of course you will expect that it will garbage out something that is wrong.

 

So the point here is there’s a big responsibility on people who build systems in general to be able to demonstrate accountability at different levels of that building process. Really here this is where I think it really becomes very, very interesting because there seems to be this very interesting divide between system building processes, and I’m saying this with my software engineering hat on and then at the end with my regulatory [unclear 00:23:08] hat on, people who build systems, software engineers, system engineers, they have a functional objective. They have a business objective to follow. That’s what motivates them.

 

Compliance, the ethical aspect, as much as it's a good thing to talk about, when they're at that table doing their job, they're listening to their boss, and their boss is telling them you need to implement this function very quickly so that we get to the market on time and sell beyond our competitors. That's what is driving them. So there are these interesting software engineering and system engineering processes on one side that do not necessarily align with the processes that we have in place to support building ethical systems and to support compliance. There are lots of privacy engineering techniques out there.

 

There are techniques out there around privacy impact assessments, around privacy reviews, around techniques for you to improve transparency and for checking that the data subject is able to exercise a whole range of rights, just to be able to achieve that objective of privacy-preserving and ethical systems, I would say. But these techniques do not fit into software engineering and system engineering processes. So you have these two parallel things with [unclear 00:24:37] objectives.

 

I would say at this point I don't think we've reached the point yet where these two things are talking to each other. Until we get to that point, I just feel that we will build very interesting functional autonomous systems, but it almost feels to me as well that we are building a dystopic future world for ourselves where we will end up with interesting systems, with interesting features, but totally untrusted in different dimensions.

 

Sean:                  Again, I’m here to say the nasty things, not the nasty things but the difficult things to hear but surely that’s just software engineering following capitalist principles, isn’t it, get the product out to market at the cheapest price possible and don’t worry too much about the ethics. I mean-

 

Inah:                  That's right but it's really an unfortunate situation. I think from a research perspective, that's almost what is driving the research we do. There are software engineering techniques that we use, techniques that will help get these products out very fast, and not techniques that would necessarily help us make sure that we do efficient privacy reviews.

 

Joseph:              I agree with your devil's advocate position that that's some product of the capitalist system we're operating in, but it doesn't have to be that way. I always like taking a contrasting view to other technologies. The classic one I've used over the last couple of years is the built environment. If we just imagine the built environment: it is quite well-regulated and we have lots of experience of what things are safe, what we trust, what we think is acceptable and so on. There are building regulations.

 

If you want to go from nothing to a new bridge, then you have the concept of the bridge, you know you've got the technology there, you employ an architect, a civil engineer, you go through the planning process, and we have none of that for this technology space. People refer to it as the wild west. We need to catch up on that. I think a lot of the research that we're all doing is figuring out what we need to do to catch up. It's not about constraining the capitalist machine, even though there's probably some other reasons to do that, it's about doing it in a way that we all find acceptable.

 

Enrico:               One important component is where are the incentives. So what is the incentive? It’s important to align the incentives by ascribing responsibility correctly so accountability, responsibility. So if you take the building example, so the architect might be responsible, they have to check things, making sure it doesn’t break down so there is a clear channel of responsibility. If the building does collapse, who has to pay for it basically. I think we also have to look at that with autonomous systems.

 

Who’s responsible because if you have a vehicle, it has components of different manufacturers, different software. If something goes wrong, if there’s a collision, can we trace back what went wrong and ascribe proper responsibilities to this. That’s a big research project. Once you have that responsibility, it’s not just getting it to market quickly because you know that there’s a potential cost later down the line.

 

Sean:                  Yes. I mean we have the concept of the MOT certainly in the UK where every three years or whatever your car has to be checked to make sure it’s road worthy. There’s no reason why that couldn’t apply to software as well I suppose. Inah?

 

Inah:                  Yes. Just to carry on with the aspect of responsibility and the complexity that it brings when we are talking about autonomous systems. For me there is a direct relationship between explainability, which is a concept that is fundamental as well when we are talking about ethics, and responsibility. For me, the way I look at it is this. In particular, real autonomous systems don't operate in isolation. They operate in a federated environment.

 

They have to depend on a whole range of other services for input, and then the output from that autonomous system would be an [unclear 00:29:01]. So it's a complex forest of interaction between systems. Now a question then comes in: if one single autonomous system in this big chain refuses to be transparent in its operation, refuses to meet the specific needs that are required to demonstrate accountability and responsibility, then from a general perspective it affects the whole system. So the point I'm making here is that the complexity that autonomous systems bring is very obvious and it's something I think we need to talk about and try to begin to think about real solutions to.

 

Ian:                     There's so many things I want to say about what's been said but I find the whole question about the capitalist way of we need to do this faster, better, not better, just faster and quicker, a very complicated one, especially with a public service hat on. But one of the things I find interesting is that when we think about artificial intelligence and machine learning in particular, we think about it as this is a thing that's being done to us. I'm really interested in a diversity of different artificial machines checking each other.

 

So I have my own machine that checks to see if the device that I'm about to use is compliant with my standards. With those kinds of things I think there's a lot more interesting stuff that can be done, and from a capitalist point of view too. I know a lot of people are very against transparency, which we've mentioned roughly, but I think that especially if I can check the status of another thing because it's transparent, and I also know the thing that I'm using is operating in the way that I want it to, there's a lot more opportunity for a much bigger market.

 

I think that notion of, I'm going to call them bots for now, but that notion of a personal bot which can check others is quite powerful and also satisfies the capitalist market going forward as well.

 

Sean:                  I think what you're alluding to a little bit there is the idea of having provenance in things, and having it be checkable. Just to jump in yet again to ask the difficult questions, but you can do that in the supermarket. You can pay for the food that's organic but you do pay more for it, and there will be people who don't want to, I suspect. Anyway, just throwing that out there. Joseph, you wanted to speak.

 

Joseph:              Yes, there’s a slight segue into the Trust Me project that I was talking about so I thought I’d mention it. The prototype that we ended up with essentially took participants that wanted to have a go on it through a journey in an autonomous taxi. It was a story based thing with various events happening which were meant to highlight some of the challenges that autonomous systems might meet.

 

After each one of these events we asked people to answer some questions, and all those questions were tied into an overarching framework which linked into the idea I mentioned previously of trust as a distributed concern. What you ended up with was a shape that was completely unique to you, and it was your trust fingerprint representing where you sit on a spectrum, in fact multiple spectrums, because these shapes were layered up.

 

You could see how you relate to the mean and to other individuals, but part of the thinking there was along the lines of what Ian was talking about, where we can imagine a future where I can put my unique trust fingerprint into a bot or into a car or into some system and it will come back and say whether the system is okay for you to use or not. I think these are imaginaries of worlds that, yes, have systems which might behave autonomously but aren't necessarily dystopian and horrible. There are things put in there to mediate between our desires and what the system could do. I think those kinds of imaginaries are really useful in hopefully arriving at a preferable future.

 

Ian:                     I mean I don't think they're that unimaginary, because if you look at browser add-ons, the amount of people who have add-ons, and they choose those add-ons and they make decisions based on those add-ons. So when I go to someone's browser and it's popping up a bunch of stuff, I'm like, why have you not got this already installed? I think that is becoming a thing. So the idea of something that also helps you with, I'm about to enter a taxi, when was the last time this taxi was checked? All those kinds of things will start to become a thing and it won't be an unimagined future.

 

Sean:                  I think that ties back to the conversation we were having about data quality as well, that you might be able to have some clues revealed to you about when this thing was last audited to make sure it wasn't based on discriminatory data, for example. Enrico, resilience plays a part in this as well, doesn't it?

 

Enrico:               Yes, definitely. I mean the project I'm involved in looks more at resilience through policy, having policies that make sure that society is resilient, so against flood, against Covid, against all kinds of disasters. I think autonomous systems fit in there as well. Of course yes, you have to build systems that are resilient. If we go back to the beginning, if you shake them up or put them in an unfamiliar environment, how do they recover from those sorts of situations? I think if we increasingly rely on autonomous systems, we need to build them not just quickly so that they seem to work, like in the Tesla example, but actually working under all kinds of conditions.

 

Sean:                  So we actually get the burger that we asked for when we go into the fast food joint.

 

Ian:                     I just want to say, I mean I know we talked a lot about autonomous vehicles but, going back to what was just said, artificial intelligence machines or algorithms are affecting everyday life. I think that's important. It isn't a thing in the future, this is happening right now. Whether it be the hairdryer that you're using or decisions at a much bigger level about the traffic light systems, they're happening all over the place. So I don't want someone to walk away and go oh yes, I'm not worried about that.

 

This is why the research that’s being done right now is so important because as Joe’s already said, it’s a bit of the wild west out there. I really would like the public to understand that this is something to be taken seriously right now rather than in the future. I know there’s lots of things to worry about but this is something that’s directly affecting your bank balance and the world around you right now. It’s really good to understand and to support more research into this area.

 

Sean:                  I mean it's a few years ago that the comedy sketch 'computer says no' came out, but there is a reason that caught on, because people were finding situations where automated systems were making decisions about their lives and, as you say, it's really important and it is happening. It's one of those things where I don't think we can even, it's like taking an ice pick to an iceberg, this subject of ethics and AI. I'm sure it's something we'll return to over and over. But we've run out of time for this episode so it just remains for me to say thank you to our guests. So thank you Enrico. Thank you Joseph.

 

Joseph:              Cheers.

 

Sean:                  Thank you, Inah.

 

Inah:                  Thank you very much.

 

Sean:                  Thank you, Ian.

 

Ian:                     Thank you.

 

Sean:                  Thanks also to you the listener. Make sure you’re subscribed to the podcast so you know every time we upload a new episode and we’ll see you on the next Living With AI. If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub.

 

The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited. Our theme music is Weekend in Tattoine by Unicorn Heads and it was presented by me, Sean Riley.

[00:38:38]