
Living With AI Podcast: Challenges of Living with Artificial Intelligence
Should we Worry about Swarm Robotics (Or simply rebrand as Flocking Robots?)
00:20 Alan Chamberlain
00:32 Richard Hyde
00:45 Sean Riley
01:05 AI Jobs (BBC)
10:29 "Ey up mi duck y'alright?" = "Hello friend how are you?" (BBC)
10:55 Code-Switching (BBC)
13:20 Sabine Hauert
14:15 A Moment of Convergence Termites Film (YouTube)
14:25 Kilobot (Harvard)
14:50 Black Mirror "Hated in the Nation" (Wikipedia)
26:10 Big Hero 6 (Wikipedia)
39:55 Michael Crichton (Wikipedia)
48:55 MIT Fibre Bots (thenewstack)
59:20 ExoBuilding
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
The UKRI Trustworthy Autonomous Systems (TAS) Hub Website
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, they also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 1, Episode: 11
Episode Transcript:
Sean: Hi and welcome to another episode of Living With AI, the podcast about artificial intelligence and how it fits into our day-to-day lives. Today we're featuring swarm robotics, you'll hear from Sabine Hauert, Reader in Swarm Engineering at the University of Bristol. Before that let's meet this week's panel chatting all things AI, this week it's Alan Chamberlain and Richard Hyde. Alan is the creative industries sector lead for the TAS Hub, a reminder that's the Trustworthy Autonomous Systems Hub which started this podcast. Welcome Alan.
Alan: Hi there.
Sean: And a debut performance from Richard. He's Professor of Law, Regulation and Governance in the School of Law at the University of Nottingham. In fact I’ve written, “Shool of Law” so that shows exactly my education.
Alan: Maybe we should get some-
Richard: That's fine yeah.
Sean: Welcome Richard we will try and go easy on you. Hosting all this it's me Sean Riley, I make the videos for the Computerphile YouTube channel and various other things and we're recording this on the 11th of February 2021. So yeah, what's been happening Richard, have you seen anything exciting recently?
Richard: Yeah there was a, a really interesting thing on the BBC website actually a couple of days ago which was talking about AI judging people in relation to their jobs. So, but not just judging their CVs and deciding whether jobs should be offered but taking interviews and, and looking and learning of what was, people were saying in the interviews and judging things like, will they be good in a team?
Sean: Goodness I mean this is, this is a whole new level of computer says no isn't it?
Richard: It's, I, I’ve got to say that I would not particularly like for it to have a look at my interviews that I’ve done to be getting jobs. And what is very bizarre about it from a kind of trust point of view is that I really wouldn't trust that, that AI to kind of make judgments about whether I'm going to be good at, at my job or, or not on either side. But so it's, so it's yeah, it's a very interesting old thing to have a look at and it certainly seems that, that lots of places are now using this, this software to do it.
The interesting thing brought up was that Amazon built an AI to do job- and employment-related stuff and then got rid of it because it realised that it was biased, because it was kind of learning from who they hired in the past, which was generally more men than women and things like that. So yeah, problematic I think.
Sean: Is it training data at fault again, yet again? Alan what have you, what do you think about this, have you ever had a computer test before a job or anything like that?
Alan: No. I, I do know what, I, I think that it's, it makes me feel uncomfortable and I think immediately with this sort of stuff like, like Richard said it’s, it’s, it's kind of, it feels to me as though it's really this, this is a, this is a social problem and somebody's trying to throw tech at it to try and make it better. Whereas we all know we've, we've got, we've got, we've got friends that are good friends, friends that we sort of trust to do certain things, people that we rely on to do a job.
We know that some people are great at doing podcasts Sean, some people are great at doing sort of research into law Richard. But if I, if, if I kind of developed an AI based on what I knew as a human- let's not forget like, like Richard was kind of alluding to, it's going to have all those kind of biases and things that I'm not really conscious of and when you employ somebody sometimes you kind of want somebody who you can get, get along with that who knows how to do the job as well so-
Sean: This is a really important point-
Alan: Yeah.
Sean: -because I think that's it, often a great deal of any interview I’ve been in from either side of the desk as it were, a great deal of it is working out if you’re going to be able to get along with the people you're talking to. And we've all had those interviews or seen those interviews where somebody has kind of learned how to play the interview game rather than be any good at either getting along with people or necessarily doing the job. I remember being involved in an interview- I won't say which side of the desk I was on, a panel interview where the, the candidate seemed extremely well qualified for the job and then it turned out they weren't that well qualified for the job. They were, what they were qualified for was taking interviews and I feel like this would just be another level of that, people will learn how to, to, to answer the right questions in the right way rather than being necessarily good for the job.
Richard: Personally I was never a fan of the computerised tests before, before job interviews. If someone can explain to me why my mental mathematics were really important to my jobs as a, as a solicitor I, I'd really like for them to, to tell me. But it, it, it is a it, in some ways we can all acknowledge that there are biases in recruitment processes however they're done. The problem is that it's difficult to see how you get rid of those biases by putting in place these sorts of technological solutions and in fact the danger is that you kind of embed the, the biases, biases further and, and that leads to the gaming that you were kind of kind of talking about potentially.
Sean: I, I do remember back in the late 90s after I left university I went for a job at a company- I don't think they exist anymore, but it was based down in Gloucestershire and they were a multimedia company. So I was going in you know, with my kind of like media head on and all this sort of stuff and the first thing we did was they put us in front of one of these computerised tests. And it clearly had things like- it wasn't particularly sophisticated I might add at this point, but it clearly had things like a kind of “Warning you are running out of time” embedded to come up at question number seven or something, because I’d been answering them all very quickly and then suddenly it was, “You're running out of time.” It's like, oh how do you deal with stress? It's you know, it was so transparent anyway who knows, who knows is this, is this the way things are going? People are using these systems aren't they?
Alan: Well according for the- sorry, according for the, to the article they’ve been running for 10 years some of them. So I just, I don't know, I, what I find fascinating about this is that you know we’re, we’re pretty creative as like a species humans aren't we? So immediately as soon as I saw this possibly you two as well, you think, we we'll start to use words like game the system because, because we're extremely good at lying and extremely good at being agile and flexible and, and AI kind of isn't.
So as soon as we realise what the process is and we can work out the, the algorithm that the so-called reasoning or the reason that it's doing something, then we can kind of sidestep that quite quickly. And can you imagine with, with tools like social media everybody would be sharing wouldn't they, you’re going for this job you know, it's kind of quite important, the pay scale is phenomenal and you've been looking at all these kind of cheat sheets that people are sharing.
Sean: Absolutely and actually that sort of thing happens in the, in the normal system. When I used to work at the BBC and a very structured system for human resources and recruitment and you could ring the HR department, ask for what exactly they were looking for as answers to certain questions and they would pretty much tell you. And there was one point where I know colleagues and former colleagues of mine who were perhaps on contracts and wanted to be extended or whatever and had to go to interviews. They would be sharing this kind of list of, right one they asked this, this is what they're looking for, this is how you answer it to gain the maximum amount of points.
So I suppose there's an argument to say that this sort of thing has always been there and I remember hearing that somebody I knew who worked in media had just a way of culling X amount of CV's for every you know a job application they were looking for just by saying, “Well there's a spelling mistake in that one, right it goes in the bin.” Whereas that might have been the best candidate for the job, but they had to have something. So as a sort of triage or kind of like initial thing I sort of see, maybe you could use this. It's when it, it's when it takes over that's the that's the possibly the problem there.
Richard: It, it was the, it was the bit where they started talking about the, the teamwork questions that the, it was taking the, the what the answers were to the teamwork questions and kind of scoring you on how much you said, “I” and how much you said “we”, how much you talked about your own contribution compared to how much you talked about the team contribution. And, and that goes back to what Alan said about, about gaming as being a, a way of doing things but also we all have different ways of speaking, we all have different ways of kind of expressing ourselves. And if we are seeking to determine whether somebody is going to be a good member of a team dependent on whether they say, “I” or “we” when talking about their past team participation that's seems like we’re, we’re picking only one type of person who speaks in only one type of way.
Sean: And yeah, and, and particularly as a lot of these systems are born let's say from the West Coast of America and trained on let's say transatlantic- no not transatlantic maybe but certainly kind of like, a certain type of accent. I mean best case for us might be it's kind of South of England worst case it's going to be as I say West Coast of America and then poor Geordie fella coming in for a job interview he's going to be absolutely stuffed isn't he?
Richard: Yeah.
Alan: Well I was just thinking that actually when Richard was saying because it's- I don't know, I remember sort of doing some research in linguistics and, and one of the examples that, that I was given by a, a linguist was- we were discussing I think it's called register. So that's the kind of I don't know I would say-
[00:10:16]
Sean: [unclear 00:10:16] is that?
Alan: Yeah the, yeah the, the voice that you use to- yeah, in a certain context. So, so you go in front of a judge or there's like some kind of international committee talking about laws, I don't walk in there and say, “Ey up mi duck y'alright?" I sort of, I sort of, you kind of go in there and you-
Sean: I’ll put subtitles in the show notes for anyone who didn’t get that.
Alan: Yeah, yeah but you might do that if you walked in, into a pub and, and it's kind of somebody you know from your village around Notts, you know the deal. But, but when you're going in front of, for a for a job interview your register is going to change, you suddenly become a little bit posher you might say.
Sean: Well there's a, there's a term for this it's called code switching, I listened to a really good radio documentary on the BBC about it. You switch modes depending on the audience I suppose, I mean I have to be, confess that you know, if somebody comes around to I don’t know build a wall in the back garden I’ll talk to them differently to if the policeman stops me and you know for speeding or something.
Alan: But I did like the- I think it was Rich, Richard that I saw in some notes that, that, that you, that we've written previous to the interview- we do, do that everybody, we do, we do, do research as researchers. But yeah I, I did start to think to myself yeah, if you were trying to get rid of bias in AI or make systems that were designed in a sort of more ethical way, would that start to mode switch? Would you get this sort of, “Ey up me duck” AI or would you get the “Yes Your Honour”?
Sean: I hope we do actually that’d be great wouldn’t it, or would it? I don’t know, I don’t know. Yeah the, apparently the chief executive of one of these companies said that, “The AI is more impartial than a human interviewer, there's a desire to have a fair process and AI can help evaluate all those candidates in a very consistent way.” It might be consistent but-
Richard: I think that it makes me trust that system less when there's not necessarily an acknowledgement that there might be something built into there and, and maybe a, a, a bias to look at it. If you think, if, if you've not been aware that there might be some problems with bias in AI I'm, I'm not sure that this is the podcast for you.
Alan: It brings up those horrible words again. I think last week it- we were looking at something, was it sentience where something's aware of something and sapient where they can make like, sort of appropriate judgments? That's what people can do, you, you might be really biased as a judge but you're, you're judging a case based on, on, on the kind of, on the, on the remit aren't you? On the bands of the case and what it all means and the law and, and what people have said and what they've done. It's not- your, your, your personal feelings are somehow drawn out of that. I think in in this system it's almost like you're putting your, your, your feeling upfront about- what an awkward technology.
Sean: Today we're talking about swarm robotics and our guest is Sabine Hauert. Sabine is Reader in Swarm Engineering at the University of Bristol and her specialism is large swarms at small scales, she's engineered nanoparticles to fight cancer and also swarms of flying robots. So welcome to the Living With AI podcast.
Sabine: Thanks Sean it's great to be here.
Sean: Well it feels like I'm reading the blurb on the back of a sci-fi novel. How does it feel when one of these projects comes together?
Sabine: It's really funny, it is a little bit science fiction but we try to make it as real as possible and increasingly we're trying to get swarms out of the lab and into the real world so it stops feeling like science fiction, actually like a technology that people can use. But you know when you start out doing these technologies it is, it is a bit of a dream you know, PhD students who come in my lab are like, “What do you want to work on?” and they're like, “I want to do swarms for environmental monitoring.” and there's, there's that little bit of yeah, sure let's do that and so we have a go at it. So I'd say it's a mix between science fiction and actually trying to have an impact.
Sean: Well I have to say I'm absolutely fascinated by this subject and I’ve, I’ve worked on a video before that centred on termites and there was a parallel there. I remember looking at some footage of something called Kilobot, that was 1000 robot swarm or something, was it at Harvard?
Sabine: It was at Harvard, it was 1024, that's why it's a Kilobot- like a kilobyte, it was 1024 of them. We actually have 1000 of them over at the Bristol Robotics Laboratory.
Sean: There's a, there is an unfortunately negative connotation with the word swarm though isn't it? How do we get to turn that around and make it a positive thing because you know, we're thinking, it elicits things like locusts and, and bees or wasps or whatever. So how do we switch that around?
Sabine: Yeah, a lot of science fiction also has you know, the swarms usually go terribly wrong. So if you look at Black Mirror it starts with a good idea where you have these swarms of robots for pollination and then if you've watched the episode there's a little bit of a challenge with those swarms at the end of it. So, so there can be a bit of a negative connotation but there are also a lot of natural systems that people are just amazed by.
So the ability of termites which you were mentioning before to create these beautiful mounds or the ability of ants to some of the amazing things that we do. And I think there's, there's a healthy and positive fascination by a lot of these creatures or indeed human society and our ability to self organise. And I think if we can both understand the really exciting things that we can do with swarm technology and then make it real. So that it's not just the science fiction but actually the things that have an impact in people’s lives I think, I think that's how we step away sometimes from this fear around, around this word of, of swarm intelligence.
So simple examples you know, if you had a bunch of robots that could help you clean your house or if you had robots that would be used for environmental monitoring or you had robots that were helping power some storage areas that you're trying to manage, that you just don't have the capacity to do. I think those helpful swarms are the way that we crack this.
Sean: There's a thought as to how you control these things right? Because I know we don't know how, say, termites are controlled, they just do what they do, but how does it work with a swarm? Do you, do you give each one, each member a tiny bit of intelligence or is there some kind of boss, or how does it work?
Sabine: The, the beauty with swarm intelligence is that we have simple agents typically and typically a large number of them, but that doesn't need to be the case and each one follows a set of simple rules just using its local environment okay? So if you think of a flock of birds and their ability to create these beautiful dances in the sky, while maybe each one of those birds is just looking at its neighbours and it doesn't want to bump into its neighbours, so it's going to scatter a little bit but likewise, it wants to stay close to its neighbours because otherwise it would just fly away on its own. And just those simple rules you know of attraction, repulsion, maybe they try to go at the same speed as their neighbours, gives rise to these beautiful dances in the sky.
And so this simplicity is really attractive and the local, the localness of it, the fact that it's just reacting to your neighbours, and why it's important is actually because of that you can scale to large numbers. So you know, in flocks you could have you know, huge numbers of these birds and also they're quite robust to individual failure, so if any one of those birds crashed to the ground the whole flock wouldn't crash to the ground because there's no single point of failure. Ultimately there's no leader telling everyone, every bird what to do in that flock. It's really the emergence of those simple rules and everyone interacting or working, working together so the, the beauty is in its simplicity and the complexity that emerges somehow.
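The local rules Sabine describes (repulsion so a bird doesn't bump into its neighbours, attraction so it stays close, and matching its neighbours' speed) are essentially Reynolds' classic "boids" flocking model. Here is a minimal Python sketch; the neighbourhood radius and rule weights are illustrative assumptions, not values from the episode:

```python
# A minimal sketch of the three local flocking rules described above
# (separation, cohesion, alignment), after Reynolds' "boids" model.
# Each boid is a tuple (x, y, vx, vy); every update uses only neighbours
# within a fixed radius, and there is no leader or global controller.

import math
import random

def step(boids, radius=5.0, sep_w=0.05, coh_w=0.01, ali_w=0.05):
    """Advance every boid one tick using only its local neighbours."""
    new = []
    for x, y, vx, vy in boids:
        neighbours = [b for b in boids
                      if (b[0], b[1]) != (x, y)
                      and math.hypot(b[0] - x, b[1] - y) < radius]
        if neighbours:
            n = len(neighbours)
            cx = sum(b[0] for b in neighbours) / n   # centre of neighbours
            cy = sum(b[1] for b in neighbours) / n
            avx = sum(b[2] for b in neighbours) / n  # average neighbour velocity
            avy = sum(b[3] for b in neighbours) / n
            vx += coh_w * (cx - x) + ali_w * (avx - vx)  # cohesion + alignment
            vy += coh_w * (cy - y) + ali_w * (avy - vy)
            for bx, by, _, _ in neighbours:              # separation: push apart
                vx += sep_w * (x - bx)
                vy += sep_w * (y - by)
        new.append((x + vx, y + vy, vx, vy))
    return new
```

Calling `step` repeatedly on a list of boids is all there is to it: the flock-level behaviour emerges from the three local terms, with no single point of failure.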
Sean: So we mentioned before the, the idea of 1000 robots or 1000 agents, what's the current state-of-the-art how, how big or how small does it go?
Sabine: I can tell you about the robots we design in my lab because we really try to span the different scales. So it, in the smaller sizes we're looking at nanoparticles for cancer treatment, so there we are dealing with 10 to the power of 13 particles so a huge, huge number. But those particles are quite simple in their individual design you know, they won't have computers on board. It's really just a matter of designing particles in a clever way so that when you put them in the biological system they do what you want them to do.
At a slightly bigger level we have microparticles or bacteria or mammalian cells and there we've built a device that allows us to control 1000s of these and sort of engineer their swarm behaviour so that we can create new materials or consider things like antimicrobial resistance. So slightly smaller number of agents and they're slightly bigger. And then we've got Kilobots and those Kilobots we have 1000 of them in the lab and they're about the size of a coin. So there it's thinking more on the robotic scale of how you can do when you have many, many things that work together so those robots are quite simple individually but the number gives them their power.
And at the other end of the spectrum we have a new swarm that's much more capable, they have better sensors, more computation, they'd be really useful for things like organising a warehouse. Those ones we’re building in the 20s to the 100s so fewer of them but much more capable in terms of their intelligence. So, so we don't have a recipe but I think there is always this trade off between the number and ability of the individual robots, and trying to find the sweet spot for the different applications is, is a research challenge. It’s trying to find what the right fit is.
Sean: Is there a manufacturing issue though as well? Because I mean obviously the more you, you're making the, the you know it, it has to be some kind of assembly line of these things right?
Sabine: Yeah so, so the reason it works with the tiny things is it's produced in parallel through chemistry so you mix things in a, in a vial and you end up with 10 to the power 13 or however many particles. So that works because it’s massively parallel, when we're looking at the more sophisticated robots that work in smaller numbers then yeah, we have to produce pretty state-of-the-art robots. In this case it's our, one of our postdocs Simon Jones who created the robot, has a whole machine workshop actually at his house now because of Covid to build up these robots it’s a lot of skills that goes into developing them.
[00:20:09]
But I think there's also something quite cool that we're thinking now, we have this new project on Stochastics Forums and we're thinking about how we can mass produce robots that may be biodegradable, could you know, could be single time use. They could be used for example if you needed to deploy them very quickly over an environment to sense what's happened and all they would do is you would you know, maybe there's a sheet of material that you fold up and then they jump seven times over, or eight times over this field that you want to monitor and change colour. And you look at them from above with a drone and those robots could be massively mass produced but again simplicity. They're much simpler, so it's always this trade off that we play between the abilities of the individual robots which is ultimately how quickly you could manufacture them and how many we need to do the job.
Sean: Yeah those, those trade offs have kind of come across in, in all sorts of areas don’t they? I mean that, that sounds like origami, crazy origami.
Sabine: It is.
Sean: I know in the sci-fi novels they are always talking about the idea of the bots sort of building themselves and therefore kind of replicating is, is that anything like reality or is that just science fiction?
Sabine: We haven't, we haven't done self-replicating robots. One thing that we do, do is we don't just build hardware, we need the algorithms to power these swarms and actually a lot of work goes into engineering the swarm rules because it's never obvious what rules give you the desired swarm behaviour. And so we spend a lot of time either you know, talking to biologists or learning from the biologists about how they know ants and birds and termites work, but also increasingly we use machine learning to automatically discover these rules.
So we're essentially using artificial evolution which- to go back to your question is a way of creating at least virtually lots of different individuals that reproduce in different ways to basically converge or test lots of solutions. And then generation after generation in the virtual world we come up with solutions that hopefully do the job and then we can test them in the real robots. Actually sometimes we can do that whole process onboard the robots because we have enough computational power so swarms can learn to swarm on the go.
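The artificial-evolution loop Sabine describes (propose candidate rule parameters, score each one in simulation, keep the best, mutate them into the next generation) can be sketched as a tiny genetic algorithm. Everything below is a stand-in: the fitness function is a made-up objective with an arbitrary sweet spot at (0.3, 0.1), not a real swarm simulator.

```python
# A toy sketch of evolving swarm-rule parameters. In real work the fitness
# call would run a full swarm simulation (or execute on the robots
# themselves); here it is a placeholder objective so the loop is runnable.

import random

def fitness(params):
    """Score a (cohesion, separation) parameter pair; higher is better.
    Placeholder: reward closeness to an invented optimum (0.3, 0.1)."""
    coh, sep = params
    return -((coh - 0.3) ** 2 + (sep - 0.1) ** 2)

def evolve(pop_size=20, generations=50, elite=5, sigma=0.05):
    """Elitist evolution: keep the best rule sets, refill with mutated copies."""
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # survivors carried over unchanged
        pop = parents + [
            tuple(max(0.0, g + random.gauss(0, sigma))
                  for g in random.choice(parents))  # mutated offspring
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=fitness)
```

Because the elites are carried over unchanged, the best solution never gets worse between generations; almost all of the real compute cost sits inside the fitness evaluation, which is why swarms that can evaluate candidate rules onboard, as mentioned above, are attractive.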
Sean: But otherwise there's some modelling going on as a like, simulation I suppose. Is there anything being used kind of commercial or practically at the moment or is it all research?
Sabine: Yes so I'm, I'm excited by this move to the real world now because I think we're at a stage where we have hardware, we can design all sorts of different robots in large numbers, we have software, we know how to explore the different swarm rules and so I think we're ready to get out of the lab.
So something that we've started doing is, is use proper use case studies of what people would want with swarms so rather than us just trying to guess in our lab. So we did one recently with firefighters, people who do bridge inspection and also people who work in, in shops and need to organise their storage room. And so we were speaking with them about what they would like to get out of the swarm because I think actually in some of these contexts we could develop the right swarm for them.
And it was quite encouraging because many, many times first of all because we made it very concrete about their current work life and what they would need as a tool it wasn't science fiction. It didn't even come up, it was more about yeah this would be really useful, I need some help in the storage area of my SME or my food bank to be able to organise this space, and we have these problems and we need this one to be able to do this for us. I think we're trying to come up with ways in which swarms could be quite, quite useful and actually one thing that we think swarms are useful for is, is in SMEs.
Sort of one, one consideration is that you know companies that typically have lots of robots, they're going to invest a lot in R&D and dedicated infrastructure. So if you think of Amazon warehouses for example, they build warehouses specifically for the robots and they have a lot of money to put into that research endeavour and to make the right robot for them.
A lot of SMEs, a lot of companies that simply are even thinking of automation but would actually need multiple robots to do the job, what they need is a solution that's usable out-of-the-box that requires zero set up, zero training, zero infrastructure. And we think swarms are quite good at that because they basically, the idea is are very reactive to their environment and they're quite simple. Which might mean that they're less efficient than an Amazon robot but they could also be you know, more ready to go and more adaptive in the environments that we set them out with.
So yeah, it was encouraging because they were keen if we got it right, and the getting it right is the thing that we're trying, trying to figure out now. The other projects that are more applied, we have a project with Windracers which is a company that makes a drone that has a high payload capability so they can carry 100 kilos for 1000 kilometres. And we're thinking with them how we could design a swarm for, for fire, fire mitigation. Because if you think of extinguishing big forest fires or you think of applications like aid delivery, those all require lots of robots to work together.
The last example I wanted to mention is a new project funded by Nestle and there we're making 100 robots, that’s another swarm I haven't mentioned. They're little, basically screens on wheels and these robots are to help crowds of humans interact so it's sort of a mix between a human swarm and a robots swarm. And we think they could help crowds reach decisions or they could help them brainstorm or they could you know, they could be little Post-It notes, they cluster in cool ways. So that's kind of a low hanging fruit where there’s no critical need for it but how exciting it would be as an enabler, a new tool set for creativity. I think there's loads of [s/l tasks 00:25:50] where actually swarms if we could get these robots into places.
Sean: The overarching idea of this podcast is to think about the trust angle, so all these sound really cool, cool things but even in kids movies there is a problem sometimes with the, the swarm going wrong. You know, Big Hero 6 has a swarm I forget what they’re called in it, Big Hero 6 an animated kids movie from a couple of years ago and it's all fantastic until the swarm gets into the wrong hands. I mean is that a likely scenario I mean if, if the intelligence is on the swarm itself?
Sabine: Yeah I think we need to be considerate about how we design our swarms so that we understand what we, what they do so that we have an expectation for what they're meant to do and, and confidence that they are doing the right thing. So we recently wrote a paper on safe swarms with a colleague [s/l Edmund Hunt on machine intelligence 00:26:37] sort of putting out a checklist for what we thought would be useful to make swarms safe and I think there's different angles we could go about it.
One is, is first realising that the challenge in swarms is that while we might understand these local rules quite well because they're so simple, the emergent behaviour of all those rules interacting in the, in the world in which they operate- which is also a complex world, means that sometimes the outcome of it might not be as predictable as we would like. It’s, or it's not as obvious as we would like and so what we need to do is build a framework around these swarm algorithms so that we understand you know, at a granular [s/l level 00:27:16] first of all it’s the application, some of these applications are not safety critical.
So the robots that deploy in an environment and interact with humans in a very friendly way and these are small robots that really nothing, nothing could go wrong I think those are low hanging fruits where actually there isn't a huge safety consideration in their deployment. And so, some of these applications I think even if, even if the rules didn't work all the robots were buggy you know it wouldn't be an issue in the grand scheme of things. So those are low hanging fruits where I think we could have a go at it.
The other is that we can design some of these swarms to be safe by design, because they can be made really simple. So I like this idea of little jumping robots that are biodegradable, because the worst that can happen is that they don't cover the area they're meant to cover and they biodegrade. So thinking about the individual modules, and how you design them so that their failure modes are safe, is another consideration.
And as we get into more and more complex robots, we need to use some of the verification tools that we use with robotics technology in general. Whether it's formal verification, whether it's our ability to test these systems really thoroughly so that we have confidence in the way they're going to operate, whether it's making sure there is cyber security on board if they're all networked and connected. So it's sort of a spread of approaches to making things safe, and actually swarms are quite safe in their robustness too.
So in terms of hacking them, we're wondering if there's a kind of inbuilt safety mechanism too, which we're currently exploring. But the key thing is we're just starting out to do this properly, and that's where this whole Trustworthy Autonomous Systems project kicks in: we're really trying to understand how we design these swarms to be trustworthy. What does that even mean, first of all? What makes a swarm trustworthy? And then what are the tools we can build up front so that the swarms we ultimately deploy are the right ones?
Sean: Sounds good. It sounds like modelling is going to be a key tool there as well, isn't it?
Sabine: Yeah, everything we do starts with simulation, because you can't wrap your head around what tens or hundreds of things do. So we do a lot of simulation, and increasingly we're building up a capability so that the physical world and the digital world really marry up. This is the notion of a digital twin: you quickly deploy things in the real world, get the data from the real world and use it to modify your digital world, then use that digital world to try lots of solutions, and when you're happy you push that back to the physical world.
So we're trying to build up confidence with simulation, but also make that simulation better and embedded in reality so that the matching happens. Because so often we do these things in simulation, we put them on the robot, and then, ah, it didn't work.
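The digital-twin loop Sabine describes can be sketched as a toy example. This is the editor's own illustration, not her actual pipeline: the one-parameter "drag" model, the function names, and the numbers are all invented. The shape of the loop is the point: deploy a test action, use real-world data to calibrate the twin, search for a good choice inside the calibrated twin, then push that choice back to the physical system.

```python
def real_robot(speed, _true_drag=0.3):
    """Stand-in for the physical world: distance covered in one step."""
    return speed - _true_drag * speed ** 2

def simulator(speed, drag):
    """The digital twin; its drag parameter starts out unknown."""
    return speed - drag * speed ** 2

# 1. Deploy a test action in the real world and record what happened.
test_speed = 1.0
observed = real_robot(test_speed)

# 2. Calibrate: pick the drag estimate whose prediction best matches reality.
drag = min((d / 100 for d in range(100)),
           key=lambda d: abs(simulator(test_speed, d) - observed))

# 3. Search in the calibrated twin for the speed covering the most ground.
best = max((s / 100 for s in range(1, 301)),
           key=lambda s: simulator(s, drag))

# 4. Push `best` back to the physical system; the analytic optimum of
#    s - 0.3*s^2 is s = 1/0.6, so the twin's answer should land close by.
```

In a real swarm the calibration step would fit many parameters from noisy sensor logs rather than one scalar, but the cycle of observe, calibrate, search, deploy is the same.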
[00:30:04]
Sean: The other thing is this scope, like you said, of everything from tiny chemical particles that are almost nanobots through to sophisticated robots. How do you define when something is a swarm robot? Because from the simple description of the jumping robots that biodegrade, I joke that it sounds like origami, but are they still robots? What's the defining principle?
Sabine: Yeah, I'm not too religious about the definition of a robot, because I like this idea that we can be quite criss-cross disciplinary in the area of swarming. For me the key thing is that they're able to sense their environment, act on their environment, and often change their motion as a result of their environment. And I think at all those scales we get those elements. On the larger scale it's obvious: we can programme our robots, they have sensors, and then they do something as a result of what their neighbouring robots or their environment do.
At the micro scale, or even with the stochastic jumpers, there are definitely ways in which they can sense their environment through their materials or through their chemical composition. The things they sense might be molecules, or might be receptors on the cell, rather than using a camera as a robot might, but they have the ability to sense their environment. And then they have the ability to act on their environment, which at that scale might be emitting a chemical, or heating up, or doing something else.
Then what we're trying to discover, or engineer, more at the micro and nano scale is the ability to control motion, because motion there is often very passive: you go with the flow because you're just so tiny. We're trying to understand if we can engineer a little bit more control into the way things move at those tiny scales. And we have some ways of doing that, both with nanoparticles, through self-assembly and disassembly, and with things like bacteria, which we can also have react to light and do interesting things.
Sean: You're sort of touching on this a little bit anyway, but what's coming 5 or 10 years down the track? Do you think rollouts to commercial applications will be the next step, or are you looking into something specific that you think is going to be the area?
Sabine: Yeah, I think there are different frontiers. For me one thing is getting them out of the lab and into the real world, so these swarms for people are something that's going to happen in the near term, at least for the low-hanging-fruit applications, the ones we can get out quite quickly because we already have the capabilities and the considerations around them aren't too huge.
The other area where I think we're going to have an impact is the micro, nano and biomedical fields, because these algorithms and this way of thinking about swarming make sense at those scales, given the sheer number and the simplicity of the agents. That's more of a scientific endeavour, trying to think about how we can coordinate lots of things that work together at tiny scales, but actually on the cancer front it makes a lot of sense now for some of these treatments.
And then on the larger robot side, there are kind of two different angles to it. One is that I think we'll be able to produce lots of very simple robots in the more classical hardware sense. If you think of those stochastic jumpers, or you think of Kilobots, I completely think it's possible that we might have little floating fluorescent bubble bots in harbours that could tell us if it's polluted. Or I could imagine designing ways of automatically planting seeds, because every robot is a little seed that goes and plants itself. So that's sheer scale: really minimal, smartly designed material robots.
And then there are the sophisticated robots that will work in larger numbers through a thing we're currently calling distributed situational awareness. Essentially, even though we keep saying swarms are simple agents, these swarms of agents are actually getting smarter and smarter at interpreting their world. They don't need a global leader, they don't need a central computer, but I think they could be really efficient in a lot of real-world applications. That last one might be trickier to wrap your head around, but in short it's this idea of an out-of-the-box swarm.
Sean: When we start thinking about a swarm of 1,000 robots, that does sound like a large amount, but actually lots of these manufacturing organisations in the Far East are producing millions of things, aren't they?
Sabine: Amazon is already using on the scale of 1,000 robots in their warehouses, and they have hundreds of thousands of them deployed around the world. But it's just a different way of doing it: they do it in a very centralised way. The way I would like to see swarms deployed in the real world is that everyone can buy a box of 10 robots for the back of their workshop, or their food bank, or their pop-up warehouse for Covid distribution supplies, and take it out.
And then they might be a lot of random walkers, they might not be as efficient, but you could tell that swarm with your Bluetooth phone, you know, "Bring me a box of masks", and it would go and fetch it, because it can just look around like we would do if we were looking for that box of masks, right? We'd be like, oh, where is it? They'd look around, find it and bring it to you, instead of having a central computer that tells them exactly where the box is and what the warehouse looks like. So that's what I think of in terms of out-of-the-box swarms.
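That fetch-by-looking-around idea can be sketched as a distributed random-walk search. Again this is the editor's own toy illustration (the grid size, box location and robot count are invented): no robot has a map or a central controller, and the swarm succeeds as soon as any one walker stumbles onto the box.

```python
import random

random.seed(1)
GRID = 20                       # 20 x 20 warehouse, unknown to the robots
BOX = (13, 7)                   # where the box of masks happens to be

def random_walk_search(n_robots=10, max_steps=20000):
    """Return the step at which some robot first reaches BOX, else None."""
    robots = [(0, 0)] * n_robots            # everyone starts at the doorway
    for step in range(1, max_steps + 1):
        moved = []
        for x, y in robots:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            # clamp to the warehouse walls
            nx = min(max(x + dx, 0), GRID - 1)
            ny = min(max(y + dy, 0), GRID - 1)
            if (nx, ny) == BOX:             # any single robot finding it is enough
                return step
            moved.append((nx, ny))
        robots = moved
    return None

steps_taken = random_walk_search()
# far less efficient than Amazon's centrally mapped warehouse,
# but it needs no map, no shared state and no single point of failure
```

This captures the trade-off Sabine describes: a random-walking swarm is slower per task than a centralised system, but you can tip the box of robots into any space and it just works.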
Sean: Yeah, it's the difference, isn't it, between having something that adapts to our world and adapting our world to the something.
Sabine: Exactly.
Sean: Of course the Living With AI podcast is part of the TASHub. What's happening with your node of the TASHub?
Sabine: Yeah, we're just really excited about the TAS nodes, first of all to see the huge community in the UK build up, and in our case we're looking at evolving functionality. I'm working on swarms, but others in the team are working on software bots and on aerial robots that learn, and looking at the legal and ethical aspects of that. I think it will be really fun for the team, and for me, to learn a lot about how we can make these systems more trustworthy.
Sean: Sabine it's been an absolute pleasure talking to you, thank you very much for all your input and knowledge and information and thanks for being on the Living With AI podcast.
Sabine: Thanks so much for having me.
Sean: Great to hear from Sabine there, and you might have noticed that I was rather excited during that. For me this is like side-stepping into a parallel sci-fi world; I get so excited about swarm robotics. What are your thoughts on this, guys? Alan, you've sent me a link about some really fascinating-looking swarm robotics art.
Alan: Yeah, well, immediately when you said swarm robotics I thought, I don't know, I don't like the word swarm, because we'll all agree things that swarm are not nice to humans. They've normally got little spikes, they're often bright colours, there are thousands of them, and you tend to stay away from them. So I did think, wow, swarm robotics, that's straight into Kilobots, isn't it, intelligent Kilobots, thousands of intelligent Kilobots-
Sean: It's got every single trigger word in there, hasn't it?
Alan: -that are kind of small. And, I don't know, I'm not an enthusiastic person, but I did love Sabine's presentation. I did immediately start, and this is going to bounce back towards Richard, to think about the whole legality of this. I'm fascinated, obviously from a creative perspective, by what these things might be used for; I've seen some of the big lighting shows done by hundreds of drones and stuff in China. But when they're getting down to a small level and they can change colour and change shape and-
Sean: Yeah.
Alan: I, I don't know maybe, maybe I’ve got one on the table at the moment but-
Sean: No, and I think the thing about it was this idea of emergent property. It's not one robot that we're giving a set of instructions or rules; it's many robots that we're giving certain low-level rules to, and then something as a whole emerges from that. That's the bit which feels really intangible to me. What do you think, Richard?
Richard: Yeah, I thought that was really interesting, and like Alan I think of the creative possibilities. We only have to look at those starling murmurations and see how beautiful they are, and if you imagine doing a similar sort of thing with these swarm robots, it could just be fantastic.
But I did think the idea that you wouldn't know what the bunch as a whole would come up with was actually really, really exciting. And one of the things that came out well in the interview was how they're perhaps not the sci-fi killer robots we imagine. They need a better PR officer, though, because "swarm robotics" is really not a great way to think about them: it sounds like they're going to kill us all, but they're not.
Sean: Yeah, I think the marketing is off, isn't it? Particularly if you think, okay, we're talking about a bunch of origami here and we're calling it a robotic swarm. I'm not downplaying that, because those sound like they tick a lot of boxes: reasonably cost-effective solutions to doing things that might be tricky to do any other way, which sounds absolutely fantastic. But then you hear the words swarm robotics and straight away you're into Michael Crichton territory.
Alan: Yeah, you are, but it did make me think. One thing that's fantastic about trust and autonomy is that lots of people you speak to about trust say, "What do you mean?", and this stuff really got me thinking, oddly enough in a very similar way to Richard. I suddenly started thinking, what is trust? And I thought, well, hold on: if I believe in something and I use it, it's got a reputation of some sort, and if that reputation isn't great it's not trustworthy, and I share this with other people. "Oh yeah, I used that so-and-so the other day, it's not great, is it?", "No, I'd go there if I were you, that's much better.", "Oh really?", "Yeah, yeah, they gave me a bit of a deal."
[00:40:46]
So all this stuff is being negotiated constantly amongst us. And I did think, this trust and reputation kind of sinks into memory and becomes a historical fact. So one of the things that might impact my trust of these systems is the way we're all discussing them now, and you never know, our interview might have had some sort of bias on that.
Sean: Correct absolutely I mean-
Alan: So-
Sean: Go on, go on?
Alan: No, no. So I was saying it's an interesting thing: in some respects these are technical solutions that are being made, and we're trying to understand how they're made, what they do, and how that relates to all kinds of biology and who knows what. But at the heart of this, it did make me reflect that this is a huge social problem when we look at it.
Sean: I think the interesting thing is just the word trust, right? It's such a wide-ranging thing; there are lots of levels to it. We've done this on the podcast before: we say, "I trust something because I have experience of it happening as I expected or as I hoped." So I wanted to drive somewhere in my car, I've been able to drive somewhere in my car as often as I like and it's not broken down, therefore I trust the car, right?
There's the flip side of that, which is the trust when you start getting computers involved, and third parties and agents: do we trust that they're not taking our details and doing something nefarious with them? That's a different type of trust. So what sort of trust are we worrying about when we hear the word swarm? I mean, we can't stop going on about what the word makes us feel like. What if it was called benign groups of friendly robots? Would we-
Richard: Or flocking, flocking robots.
Sean: Flocking.
Richard: -be fine, I think, or better, because I don't know-
Sean: Shoals.
Richard: -maybe flocks of birds have, to me, a nicer feeling, I think. Flocks of geese, maybe not, but who knows?
Alan: Yeah, shoals of piranha?
Richard: Yes, shoals is good, if we could have underwater swarm robots. But the trust about them getting the job done is there for me; it seems like they could do really good things. The thing I was thinking about all through the interview was forest fires, right? How much could these contribute, working together as a flock or swarm, to putting out big forest fires? It wouldn't be, you know, the big plane scooping water out of the sea and dropping it on these forest fires.
So I thought I'd trust them to do the work, and similarly the sensing work was fantastic: the ability to get these large groups working together to sense and map the sort of things you wanted. I think it's a phenomenally exciting thing to do, and similarly the creative side of things. My trust concern, I suppose, is more at the operational level: how do big swarms of fairly small robots work alongside humans in our environments, and how does that interact with us? Whether that be in terms of safety, privacy, or even things like sustainability. That's the level at which the trust has got to be built, because if you don't have that trust, I think acceptability, and therefore acceptance and adoption, is going to suffer and not necessarily be there.
Sean: The other thing that is obviously in your ballpark is the idea of regulation, right? When we talk about emergent properties, any complex system has emergent properties. If lots of people live in a village, some sort of social activity will probably happen that wasn't a direct consequence of somebody moving into a house in that village, but it is an emergent property of those people living in the same area. So I suppose my concern is, how do you regulate something when you have thousands of them, you've given each of them a small set of instructions, and then suddenly those thousands of things are doing something? How do you regulate that?
Richard: That's the really interesting question for me, because in the interview there's the discussion about Amazon, right? Amazon controlling their robots in their warehouses, working to make the environment work for the robots, and that's a contrast with the way the swarm will work, because the swarm will work in the environment.
But if the idea is that you make this safe, that you ensure your swarms are safe and safe by design, well, how exactly are we going to do that? Do you build each individual robot to be safe? That's fine, that's important: you don't want your tiny little robots blowing up and injuring people or whatever. But you also don't want the swarm's behaviour as a whole to become unsafe, and that's a lot more difficult to deal with, regulate and design for when we're talking about emergent properties.
So if you're creating an obligation, which there is on all designers of products, to make sure their products are safe, the difference with swarm robotics is that it may be perfectly possible to make your individual robot safe, but how do you regulate the whole swarm and its behaviour, and ensure that is safe, if you don't have a single controlling mind, just lots of little controlling minds?
And that's a big challenge for the people who are doing swarm robotics, and it's also a challenge for regulators to think about. Do regulators, the people responsible for product safety, understand the sort of things swarm robotics might be doing well enough to allow them to say, "This is safe" or "not safe" in particular circumstances?
Sean: And we're talking about physical things here, but of course this sort of stuff happens all the time with software agents as well, right?
Richard: Yeah.
Sean: And the same problem can happen if, I don't know, you put the wrong combination of apps on your phone and that causes a problem with the phone, right? So let's think about things in a lighter way. Before we do that, one last thing: I was going to bring in the termites again, which I mentioned in the interview. I did a film with some people working on termite research, and the idea is of course construction: using swarm robotics to think about building with those kinds of algorithmic methods, a bit like termites do.
But then, termites have a horrible reputation. Mention termites to somebody who lives in, say, North America, where lots of the buildings are made of wood, and they're immediately going to think you're talking about a problem rather than a solution. So yeah, it's very important to get the marketing right. Alan, let's move on to the artistic side of this, because I want to talk about these, are they called fibrebots?
Alan: Yeah, sorry, I was just reflecting a little on what Richard said before and thinking, yeah, things that have got their own autonomy but work as a group, you know; sometimes they do things together, central control, individual control, it's a bit like humans really, isn't it?
Sean: Exactly yeah, exactly.
Alan: Yeah, the fibrebot stuff. Before the interview I went away and had a look at the latest news articles, and sometimes the AI coverage can be a little bit negative: the Kilobots, swarms, job interviews that don't quite work. But I came across this thing, fibrebots, and I really liked it. It is swarm robotics: basically they're a cluster, if you can imagine this, of small robots, and they weave their own tubes. These things move about and they're lit, and they look really, really nice actually.
[00:50:02]
And I started thinking, in terms of the TASHub we've got our creative strand, the creative industries, and I was thinking about what we might use swarm robotics for there. If there are artists out there with an interest in this type of technology, how might they engage with it, and what might people such as Richard and I learn from the way they interact? Because it's not always the case that you come across this. I do HCI, did some sociology and design: I go out there and make something, people use it, they tell you about what they do in their day-to-day lives.
They tell you about the legal stuff; legal doesn't always mean ethical, of course, it just means that something's legal. So you get all these kinds of issues that come up that are fascinating, but sometimes when you get people that are a bit more creative designing systems, they bring things to the fore that you just can't imagine. So I saw this kind of stuff and I thought, wow, self-replicating, because they effectively make their own robotic arms and then-
Sean: So we'll put the link in the description of the podcast, but just to explain it, this is like fibreglass tubes, is it? Can you tell us?
Alan: Thank you, I think so. For me it almost looked like they were building, cocoons is the wrong word, but something you'd expect to see underwater, that kind of grows, I guess. I think Sabine talked about it a little bit, didn't she? Self-replicating is a different thing, but I-
Sean: There's some interesting time lapse, because I've watched a couple of the clips on the link you sent through, again, it should be in the show notes, and they seem to weave, as you say, these tubes that they are attached to. So they grow; I think it looks a bit like a giant sea anemone.
Alan: Yeah, yeah but they-
Richard: Or a coral.
Sean: Or a coral yeah precisely yeah.
Alan: Yeah, a coral. But I did think, looking at it in a legal way, given all the law and legislation around the environment at the moment, it did make me start to ask: what materials are being used? Even if it's biodegradable, that doesn't mean it's great to have thousands of swarm robots biodegrading in a certain place. How does that fit in with the environment? You have one small thing happen and it can completely screw up an ecosystem, as we've seen. So in terms of Richard's work, which I guess is closer to this kind of stuff, it must be a nightmare, Richard?
Richard: Oh yes, it is. Just to say, though, those fibrebots are incredibly beautiful; it looks amazing. My thought was, well, how does this work in the real world? What happens afterwards? I mean, look, I like fireworks, I think they're good things to see in the air, but terrible from an environmental perspective, just awful. And my thought was, well, if we can do these sorts of incredibly beautiful things with swarm robotics, perhaps we can get rid of our traditional firework displays and things like that.
But Alan's absolutely right. Even if these are biodegradable, how do they interact with the environment during the time that they're there? How is it going to make a difference afterwards? What is their legacy, and how do they look over a period of time? Is it going to be a permanent installation? I could see really nice things about these things rebuilding themselves all the time. Instead of statues that you sculpt and then leave there, these swarm robots creating new shapes would just be really exciting, I think.
But it's like, how do they fit with what's happening there? What would the local bird population think about this thing? Would they go and land on your local swarming robotic statue, and what would they think was happening? It's one of the really interesting things about swarm robotics from this perspective: they're coming into an environment and becoming part of that environment. It's not just static, it's working as well, and I think-
Alan: Exactly.
Richard: -thinking about how that works in, as I said, an ecosystem, and how we need to regulate it to make sure it doesn't upset that ecosystem, is kind of an interesting thing.
Sean: There's a sort of ending to the article you sent through which basically suggests these will be the bricks of the future. So we're exploring them as an art form, but actually we might just, I'm using air quotes here, "instruct them to create a building"?
Alan: We don't know, do we? It's a difficult thing to know. With all this sort of stuff, the trust thing is- AI and intelligence is an easy thing to look at in quite a negative way, because you can pick it to bits. But there's certainly a difference between- when I first saw that I thought, wow, it's like candy floss. Imagine you've got a few of those: here's my candy floss tube, it makes what it likes, I eat a bit, that's great, I use it about twice and put it away.
There's a difference between having something like that and something that's in a real-world environment, or is rebuilding reefs underwater, and, like Richard was saying, we just don't know how other things will interact with them. So I was interested in the trust again from a more social angle: what is it that makes me stay away from certain things? Can things be threatening on purpose, in a positive way?
In other words, do they form some sort of defence, and can that be used in the natural world to stop other things destroying something? Like we've seen with, I can't remember, is it crown-of-thorns starfish that are destroying the reefs? But it's very difficult, because like Sabine said, and I think you rightly asked in the interview, how much of this stuff is modelled, is it in the real world, and how do you model an environment like ours, which is so complex, particularly with-
Sean: That is definitely a problem, yeah, with any simulation: your simulation is only as good as the data you put in, and if it's not complicated enough it's not going to give us a proper outcome, is it?
Alan: And as well, if you're talking about law, how on earth do you model legality into a simulation?
Richard: With great difficulty is the general answer there. Some of it's easier than others, but the standards that aren't clear, the sort of judgmental standards, are just really complicated. One of the things about modelling I was interested in is how you use those models of how it's working to make people feel more trusting of these sorts of things: how you demonstrate the kind of modelled good things that the swarm robotics are doing-
Alan: That's a great idea.
Richard: -to kind of demonstrate how they can help and how that might engage trust.
Sean: In a much more simplistic way, it's a bit like architects showing you a rendered model of a building before it's built, to say, "Look, this will look nice in this environment. Here it is, see the lady walking down the road with her shopping in front." That's been rendered by-
Alan: Yeah or, “Look, look at your kitchen.”
Sean: Yeah, yeah absolutely, absolutely.
Richard: Yes.
Sean: That’s a really nice idea.
Richard: Also, on the bricks-of-the-future thing, I thought, you know, how much would I have loved, in my house where I've now been working for the past year, to be able to instruct my swarm-robotic bricks of the future to make me an office, so I'm not sat in the spare bedroom or things like that. Remodel your houses with your future bricks-
Sean: On the fly.
Richard: -that can kind of do that sort of thing.
Alan: Yeah, and I think living architecture, I mean, it's not a swarm as such, but you can imagine having thousands of buildings that have all got some intelligence architecturally built in. And in the Mixed Reality Lab we did have some research based around that; I think it was called ExoBuilding, and it moved in relation to the way you felt.
Sean: Yeah it was breathing related as I recall. Again, I'll put a link in the show notes for people who are interested in it. I'm not going to start asking how building regs would feel about this, and planning.
Alan: No.
Sean: The planning department but okay.
Alan: Can you imagine your mood-
Richard: That's my, that's my next avenue of research you pointed it out to me Sean. It's-
Alan: Well if, if you do have one we can all come round and sort of join you in your mood shed.
Richard: Yes.
Sean: Which we’ll have to expand for social distancing right?
Alan: Well it, it would automatically do it, we wouldn't have to do it.
Sean: Absolutely. Just as a point on the Fibre Bots, apparently they closed by saying, "We chose fibreglass as a basic structure material specifically; fibreglass can provide energy efficient, green, sustainable solutions for building enclosures. It has relatively low embodied energy due to its composition and can be shaped to carry loads in multiple directions." Interestingly it says "can be" or "can provide", yeah.
Alan: Well that's, that's a lot of the sort of argument with all this kind of stuff isn't it? It's that, I mean even when you- it's like this idea that you would design for safety or design for whatever. We, we just don't know what the bias of the future is going to be. We don't know what's culturally going to be different, what materials can and can't be used, what's in there, what's not, and you only have to look back- I presume there are, there are loads of materials that were designed as safe-
Sean: Plastic for one.
Alan: -that, that-
Richard: Yeah.
Alan: Well yeah and it's, it's easy to make, and it's made out of something that naturally occurs isn't it? But nobody thinks about sort of if you-
Sean: Well the, the whole part, a big part of that was making sure that it lasted, and obviously its inherent problem is that it lasts too long.
Richard: Yeah, and I mean also asbestos right, putting asbestos into buildings to, to make them more fire resistant which was, at the time it must have seemed like a good idea but it's in fact an absolutely terrible idea in hindsight.
Sean: Yeah and I trust it’s definitely fireproof yes.
Richard: Yeah, yes.
Alan: I guess once you sort of mix that up with like, with some form of intelligence and if you don't update that intelligence to make it aware of what it can or can't do?
Sean: I'm now imagining you know, this sort of self-healing, automatically adaptable building that I trust to build me a conference room when I need one or an office if I need it. But I'm also imagining the AI pushing it over my neighbour's boundary and then me ending up in front of-
Richard: To catch the sun or something.
Sean: To catch- exactly to catch the sun and then me ending up in front of a robotic judge.
Alan: Yeah, or it blocks out your light and, although it looks nice for a moment, it built it at night and the next day it's in front of your window.
Sean: Anyway I think, I think what that all comes down to is the fact that the one thing that we all agreed on for sure was that swarm is a terrible name for this kind of technology and that we should now call it flock, flocking robots or-
Alan: Oh flockity robots.
Sean: Yes, or shoaly robots. I would like to say Alan and Richard thank you very much for coming on to Living With AI.
Alan: No problem.
Sean: Thanks Richard.
Richard: Thank you.
Sean: And thanks Alan.
Alan: Thanks again.
Sean: And hopefully we'll see you again on another episode of Living With AI very soon.
If you want to get in touch with us here at the Living With AI Podcast you can visit the TAS website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts from and we hope to see you again soon.
[01:03:11]