
Living With AI Podcast: Challenges of Living with Artificial Intelligence

This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand their implications. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts in the field from industry and academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.

 

Season: 2, Episode: 8

Are You Talking to Your Autonomous Car? (Maybe you Should!) 

This 'Projects Episode' discusses a few TAS Hub projects grouped around the themes of autonomous cars and voice control.
 
Project: Chatty Car
Professor Gary Burnett, Project Lead Contact

Project: Understanding user trust after software malfunctions and cyber intrusions of digital displays: a use case of automated automotive systems
William Payre, Assistant Professor in Transport Design and Human Factors at the National Transport Design Centre at Coventry University (CU)

TAS Hub Verifiability Node
Ana Cavalcanti, Professor of Computer Science; Deputy PI, Verifiability Node

Podcast production by boardie.com

Podcast Host: Sean Riley

Producer: Louise Male

If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.

Episode Transcript:

 

Sean:                  Welcome to Living with AI, where we talk artificial intelligence.  It’s everywhere, but should we trust it?  I’m Sean Riley and this is the Trustworthy Autonomous Systems Hub podcast.  If you’re new to the podcast, this is season 2, so feel free to check out season 1, all the links are in the show notes.  We’re recording this on the 26th of May 2022.  

Today is a project episode, we’ve taken the theme, Are You Talking to Your Autonomous Car?  And we’ll feature a few TAS Hub projects related to it.  Now, I can’t let this pass without some mention of the hit 80s TV show, Knight Rider, where David Hasselhoff’s character, Michael Knight, and his talking car, KITT, thrilled us every week with crime-fighting capers.  I’m really hoping one of today’s projects brings us that tiny bit closer to me being able to mutter into my watch, KITT, I need you.  I can’t help wondering if Elon Musk watched the same shows as I did growing up.  

Anyway, we have three researchers who have joined us today.  We’ve got Gary Burnett, William Payre and Ana Cavalcanti, they’re our researchers.  So, what I’m going to do is ask each one of you to introduce yourself and the name of your project, and then we’ll hear in a bit more detail about the projects individually.  So, just because you’re on the left of my screen, Gary, I’m going for you first.  So, Gary, tell us: name, rank, serial number?    

 

Gary:                  Okay, thanks.  So, I’m Gary Burnett, I’m a professor of Transport Human Factors.  I’m also the head of the Human Factors Research Group in the Faculty of Engineering at the University of Nottingham.  And, yeah, our project that we’ve been working on for the last year or so, within the TAS Hub overall project, is called Chatty Car.  And it does what it says on the tin, it’s basically- This is about the use of digital assistants in vehicles, but particularly focused around the next generation of automated vehicles on our roads.

 

Ana:                   I’m Ana Cavalcanti, I’m a professor at the University of York.  I am the leader of the RoboStar group that works in the area of software engineering for robotics.  I am here representing the verifiability node of the TAS programme.  And the verifiability node aims at supporting every other node in the very central task of making sure that the expected and essential properties of autonomous systems can be established.  

 

William:            Hello everyone, I’m William Payre, I’m an assistant professor in Transportation Design and Human Factors at Coventry University.  And the name of the project I was involved in is Understanding user trust after software malfunctions and cyber intrusions of digital displays, in this case, automated automotive systems.  It’s a very lengthy title, but in a nutshell we investigated the effects of cyber attacks on drivers’ behaviour and attitude while using an automated vehicle.

 

Sean:                  Fantastic, thank you for that.  So, if I could just go to Gary for a bit more detail then, Chatty Car, you know, is this my Knight Rider dream coming true finally?

 

Gary:                  At some point, I suspect, yes.  So, car companies for a long time have sort of dabbled with speech recognition systems within cars.  And there’s been a lot of problems with that, with the ability to understand speech and the quality of the speech synthesis and stuff like that.  But this has got a lot better since the Alexa/Cortana days.  And the sort of rise of more conversational user interfaces in the home is now finding its way into the car.  You see quite a few of the car companies bringing in their own version of Alexa into the vehicle.  

 

                            And, so, you know, for a lot of people this sort of makes sense in terms of the distraction argument, because when you have a speech interface then you’re not taking your eyes off the road in order to interact with the other systems.  And, you know, many people are in a car on their own for long periods of time.  And to have that sort of equivalent of a passenger that they can speak to can be quite important in terms of just a level of engagement, a sort of human relationship, if you like.  

 

                            Yeah, so, going back to your original question, yes, they are already around to some extent and they will likely be around more and more, going into the future.   

 

Sean:                  Thank you, Gary.  So, Ana, could you just tell us a bit more.  So, it’s not specifically a project, the verifiability node, it’s effectively working with all the projects, or all the different nodes, is it?  Tell us about that?

 

Ana:                   Absolutely.  Yes, verifiability is a concern that is about ensuring that expected properties are present in a system.  And the other nodes, the [unclear 00:05:16] node, the trust node, the security node, they all can benefit from the possibility of verifying the properties that they are concerned with.  So, we work very much in collaboration with the other nodes.  

 

And the vision for the node is of a world in which the focus of the developers, the focus of the designers, moves away from code-centric to model-centric development, a world where we can use these models to add a lot of value to the systems, and to their trustworthiness as well.  So, in that world, systems like autonomous cars, and a whole load of other applications as well, are more trustworthy because we have verified, using a variety of techniques, the properties that are of particular concern.  

 

So, we want cheaper, we want safer, more secure and all that.  Our long-term vision is of a verified autonomy store, where developers will find systems and components that have been extensively, comprehensively verified before getting to the store.  But also, as the components evolve, that verification, that rigorous approach to verification, is maintained.  That’s our long-term vision, you can think of it as the app store for the autonomous systems designer.  

 

Sean:                  I’d be interested to hear from the other two contributors today, because they’re both human factors people, whether they think we should move towards a human-centric rather than a code-centric approach, but we’ll come to that I’m sure.  

 

                            William, tell us about your project then?

 

William:            Well, the project is about mostly cyber security, but indeed with a focus on human factors.  So, the idea of the project is that- In the past two years I’ve been scammed at least three times, so that was a huge source of inspiration for this project.  And when there was this opportunity from the TAS Hub, one question was, if someone tries to scam you, or if you actually are scammed, what happens when you are in an automated vehicle but you’re not driving?  So, there’s some kind of mix of feelings, being enraged and being ashamed, and panic as well.  So, what would you do if you see some kind of red somewhere popping up on your dashboard or in-vehicle screen?  You are driving, what do I do?  Is it distracting, do I trust the system?  Is it about the connectivity of the vehicle, or is it because the driving is automated?  What shall I do, shall I resume control, and if I resume control is it safe?  And am I going to make a mistake?  So, that was the purpose of our study.  

 

                            We actually conducted a driving simulator study where people experienced different types of failure.  One was silent, meaning that the driver is not notified of the failure, and that actually replicates the Tesla case, where the turn signal fails to activate during an overtaking manoeuvre.  So, it’s a silent failure, you don’t know it’s happening, but on your dashboard and on the HMI there is no turn signal.  So, it’s not very safe, but you’re not notified.

 

                            The other type of failure we investigated is called explicit, so you are notified, and that explicit failure was the ransomware.  So, you are driving in automated mode, so you don’t have to use the steering wheel, you don’t use the pedals, and at some point there is something popping up on the screen.  And that actually replicates- It was inspired by the WannaCry software, so the design is very cheap, it’s not polished at all, with some typos, and that’s how it’s presented.  But I believe that makes it even more realistic.  And when you are presented with this, while you are not driving and you are actually engaged in a task that is not about driving, what would you do?

 

                            And that was the aim of the study, and we saw that people distrusted the system, which is not very usual in a driving simulator study.  Usually, there’s a drop in trust, but not to the extent to which people would distrust the system.  And some of them were very sceptical about it.

 

[00:09:57]        

                            The other thing is that one participant crashed the vehicle after resuming control.  So, some of them resumed control and it was fine, one person crashed the vehicle.  Some people thought it was some spam, so, okay, whatever.  Some people thought, well, I’m in a driving simulator so it’s bogus, I know it’s fake.  And some others actually were distracted.  So, not only did you look at the HMI and the ransomware, you had some interaction, because you could tap some buttons.  

 

But also, if you resumed control because you felt that there was something wrong, some people kept on looking at the screen.  So, this is very distracting and actually very dangerous. 

 

And one thing that I should mention regarding the ransomware is that we asked participants at the beginning of the study to fill in information on this in-vehicle device, so they were greeted with personal information, such as hello John or hello Mary, and we asked them to enter their email address with a password.  So, when there was the ransomware, it said their personal information was encrypted and they had to pay a certain amount of bitcoin and so on.  

 

So, yeah, that’s what the study is about and we have quite interesting results as regards distraction and safety.

 

Sean:                  It sounds absolutely frightening, the idea of ransomware while you’re in a vehicle.  I mean, you know, best case it perhaps drops you off at the cash machine, you know, worst case who knows.  

 

                            I know we’ve talked before, Gary, about the transition from autonomous driving to manual control, we’ve talked about that on the podcast before- Yeah, I’m not surprised people kind of crashed the vehicle in that kind of frightening situation, are you?

 

Gary:                  Yeah, yeah, I was really interested by that, William, it’s a great example of the sort of studies that you can do in a driving simulator that there’s no way you could do on the road.  And, obviously, with simulators that are of very high fidelity you can get, you know, good data, even if there will be issues for some people about the extent to which this is treated as a real experience, as you mentioned.  But for many people the level of what they call, in this area, presence, this sort of belief that you actually are somewhere different to where you actually are, which you will get in a virtual reality, immersive driving simulator, can be very compelling.  And you will get examples of people panicking.  You will get examples of people having strong emotional responses to these sorts of safety critical situations.  

 

                            Yeah, before I talk about stuff relating to, you know, resumption of control, it did remind me of a study we did in our lab a couple of years ago, also on failure modes.  But our failure modes weren’t related to an automated vehicle, it was a fully manually driven vehicle, and the failure modes related to digital mirrors.  

 

So, when you have a side mirror which is now a camera-based representation, there are various failure modes for that side mirror.  It could go blank, it could get distorted with a sort of pause in the signal.  And, not surprisingly, the worst thing it could do is freeze.  You know, we were getting people to do lots of overtaking manoeuvres, and at one particular point, when we knew they were in the middle of an overtaking manoeuvre, it would go into a failure mode, after, you know, lots of drives where nothing happened. 

 

And you get some really interesting behaviours there, and, not surprisingly, as I said the freeze failure is the worst possible thing because it’s misleading information.  You think there’s nothing there and there is something there.  And it gets really interesting in terms of sort of behaviours that sort of occur in that sort of situation.  

 

So, yeah, William, that’s another really interesting type of failure mode study that you can do in a driving simulator that there’s no way you could do on the road.                      

 

Sean:                  Yeah, I’m very glad that’s done in a simulator, those sorts of things.  There’s been a lot of talk recently about kind of the move towards autonomous vehicles, I mean there’s stuff in the Transport Bill which we’ll come to shortly, I’m sure.  But I remember hearing about the idea of, oh, no, Britain is free to kind of push forwards with, you know, not less regulation, but certainly a freer chance for people to experience kind of autonomous vehicles on our roads, or to research and experiment with them.  What do we think about that?  Is that something where, you know, the idea of chatting to your car is going to help, because we’ve all had problems with voice assistants when they don’t do what you expect.  What do we think about, you know, the idea of more autonomous vehicles, are we ready for it?  Ana, what do you think, are we ready for that yet?

 

Ana:                   My expertise is not in the area of autonomous vehicles specifically.  Of course, that’s very much of interest to us in the [s/l node 00:15:29] as a whole, and I have the feeling that given our discussions that no, we’re not ready for it, and we’re nowhere near being ready for it.  And I would like to hear what Gary and William think, but I think there is also a concern about infrastructure.  When you think about having cars, Chatty Cars or cars with a high level of autonomy.  

 

In the city centre of York, for example, it’s extremely scary.  If you think about these cars on the highway, perhaps a smarter highway, providing all the data that the sensors will need and potentially identifying the situations of failure, as Gary has just explained, that would be a situation in which this becomes somewhat more realistic for our prospects today of being able to assert- To verify and assure what we really can expect from these systems.  And, of course, the human and the failure rate are very big components that need to be part of that verification and assurance.    

 

Sean:                  Gary, I know you know a little bit about this new Transport Bill and moving towards, is it level three vehicles, can you tell us about that?

 

Gary:                  Yeah, shall I explain a little bit about levels?  Essentially there are plenty of cars on the road at the moment that have level two functionality.  You know, the most obvious one being Tesla Autopilot, but Nissan ProPilot, you know, etc.  These essentially will do longitudinal control, so adaptive cruise control, and will do lane keeping to keep you in lane, but you should keep your hands on the steering wheel.  And if you do take your hands off the steering wheel it’s going to start beeping at you and turn the system off after a certain amount of time.  

 

So, in a level two functionality vehicle you’re not free to do other stuff.  You may start to get distracted much more easily, and there’s plenty of cases of that and accidents that have happened on the road, but you’re not- Yeah, the big change in the move to level three is the driver is taken out of the loop by design.  So, you are, therefore, free to do whatever is considered to be appropriate non-driving related tasks, and that’s a big debate.  

 

But there is a view that you will go from being a driver to not being a driver whilst the vehicle is in motion.  And that’s a fundamental shift in the concept of being a driver.  At the moment, from the point at which I open the door and get in the front seat, I am a driver until the point at which I get out of the car, with my car safely parked somewhere.  

 

So, now I’m going in and out of being a driver and not being a driver and that means we have new tasks of the transition of control as we go, hopefully seamlessly in and out of control.  

 

And I think, yeah, it will be very interesting to see how much of this gets legislated for in the Transport Bill.  You know, the latest press stuff was all about how you’ll be free to watch films while you’re driving, but you won’t be able to use your phone.  And this is really quite problematic- So, you can use the in-car system and watch a film if you wanted to on your little display inside the car, but- You know, most people’s phones are connected to their in-car systems.  And so how that all gets managed- So, I may not be able to hold a phone, but I’ll be able to access a lot of my phone functionality through the car system.  And how it moderates that, and what it allows and what it doesn’t allow, is going to be really problematic.  

 

[00:19:40]        

 

Sean:                  We did discuss this on one of the other podcasts and I think the idea was obviously- The sense behind that, or I suppose the [s/l incense 00:19:47] behind that, is that the car, obviously, if it needs you to be alerted to something- It has control of its internal systems, where it doesn’t necessarily have control over your phone.  And one of the contributors on a previous podcast actually suggested that rather than thinking of these levels of what the car is capable of doing, we should be turning this around, and as human factors people you’ll kind of get the idea of this, but of saying, what is my job while I’m in this car?  Am I the driver, am I a supervisor, am I, you know, am I a collaborator- You know, what’s my job, not what can the car do but what’s my job.

 

Gary:                  Well, the reason why this all comes up is because, ultimately, the car is a consumer product and it will be marketed in terms of its functionality.  And so this gets then- You know, this leads to phrases like autopilot and full self-driving and, you know, all this sort of bigging up of capabilities.  You’re right, you know, you do need to think very much about what is the task, and I think the key thing here is particularly during that transition of control period, where the car requires you to drive by a certain point, for instance, because it’s coming out of its operational design domain.  

 

So, it doesn’t work once you come off the dual carriageway.  It knows you’re going to come off the dual carriageway at a certain point, so it requires you to start driving within a certain period of time.  It gives you suitable notice, but you now are in this period of shared control where you need to discard your non-driving related tasks.  You need to start building up situation awareness in preparation for then physically controlling the vehicle.  

 

And these are new tasks, and these are things where a conversational user interface, as we put forward in Chatty Car, could be very, very useful to help you through that process of doing this properly.  Because if you leave people to do it naturally, which we’ve looked at in various studies over the years, then people will just carry on with their non-driving related tasks until the last possible minute.  You know, my phone is more important to me, so I’ll keep doing my phone, and then the system counts me down, three, two, one, oh, hands on steering wheel.  And so you just focus purely on the control aspects of driving and not on the tactical development of situation awareness, which is, you know, really fundamental.  So, if you do it the other way round and you just focus on control, then it’s like, well, where the hell am I, and what’s going on, where are all the cars?  That’s not the way that this should happen from a safety perspective.                   

 

Sean:                  I remember many years ago doing an advanced driving course, and one of the sort of minor things they tried to teach you was to do any of the complicated procedures, say, for instance, parking your car, turning it around and everything, at the end of the drive rather than at the beginning.  That sounds silly until you think of an example.  So, rather than driving to a parking space and then leaving yourself to reverse out when you’re starting off the next day, do all the complicated reversing manoeuvres at the end of your drive, when your mind is in the right zone and you’ve been driving and warmed up.  Then you can reverse into a space, and when you next drive and you’re cold, as it were, you can just do a simple drive forwards.  And it’s this change, as you say, from cold to hot, that is going to be absolutely crucial, isn’t it.  

 

And I think we’ve discussed this on the podcast before, my car has a bleeping noise if it thinks I’m about to crash, which often is a false positive, because perhaps in front of us there’s a car that’s in a filter lane to turn right and I’m carrying straight on.  And bleep, up comes this thing and it shocks the life out of you.  I think there are real issues around this.

 

                            William, what were the findings when people found, you know, that these messages were coming up in the middle of a drive, were people shocked and did they make erratic decisions? 

 

William:            Yes, well, as I mentioned earlier there was a crash, but it’s one out of 40 participants.  Yes, there were a lot of different reactions, the most exotic ones- Everybody noticed the ransomware but one participant, and this is called inattentional blindness.  The person was so into checking what was going on on the road- So, the person was engaged in a non-driving related task, basically a word search task where the objective was to find as many words as possible, while at the same time looking at the road on a regular basis to make sure everything was fine.  

 

                            And the ransomware popped up on the in-vehicle screen and was displayed there for three minutes.  And the person did not notice it.  So, that’s one of the, let’s say, funny reactions, because there’s no reaction whatsoever.  And that’s actually frightening in terms of design, in terms of safety, because if you don’t notice something that’s red in a dark environment, there’s something definitely wrong with this.  And mainly because people are kind of sceptical about the automation and how the vehicle behaves.              

 

                            There were some people very surprised about it, some video that I could possibly post on TikTok, but I won’t because of the privacy issues.  Some people were very shocked, oh, what’s going on.  Some people started smiling: I know that, I’ve seen that scam many times, it rings a bell, and I won’t do anything.  So, those people did not have any interaction beyond looking at it, saying, okay, well, my strategy is not to tap the buttons or do anything, I’ll just resume control, or not resume control, and life goes on.  

 

                            So, there were different reactions, a whole spectrum, from ignoring it, to not seeing it, to resuming control, to crashing the vehicle because it’s overwhelming.  

 

Sean:                  Yeah, it’s one of those things, isn’t it, because we see this with email, with phishing scams, with any kind of message that pops up on a computer, be it cookie warnings or whatever, people deal with them in different ways.  You know, there’s the classic accept, accept, accept, just press okay as many times as you can until it gets to the screen that I want to see.  Then there are people who will go into infinitesimal detail about accepting certain cookies but not accepting the others.  

 

                            And there are people, and I happen to be one of them, who just ignore the cookies message, and if it obliterates my screen on that particular website I go to a different website.  And, you’ll find this, I wonder if training is the way forwards for this, do people need some sort of training in what potentially might happen?  Because even though, in theory, for full automation you don’t need a driving licence, you are going to have to operate this thing in some way.  And, obviously, the overarching theme of this podcast is about talking to the car.  So, do we say, don’t show me any of these messages?   

 

Gary:                  Yeah, I do think training is going to be really important for these new types of vehicles, because there are new tasks, and you cannot expect people to have the right mental models for how these vehicles work.  I think it’s already problematic with the level two type functionality, and level three just ramps that up quite a lot.  And, you know, we’ve been talking for quite a while in our research about how you can help people to develop the appropriate mental models for the technology, so that their trust is calibrated, it’s at the right level in relation to what the technology can do.  And then how they can develop the right sort of behavioural routines that will allow them to use the technology properly.  

 

                            And so, it sort of links to Chatty Car.  Because Chatty Car is more specifically about the interface, the human/machine interface within the vehicle.  But the background behind that is more about a mnemonic that actually was called CHAT, where the CH stands for Check, the A stands for Assess and the T stands for Takeover.  And it’s this view that you- A bit like mirror, signal, manoeuvre, apparently kids don’t get taught mirror, signal, manoeuvre anymore, but I certainly was.  

 

                            And something that I remember very clearly as a mnemonic that helped me learn to drive.  So, CHAT basically is: I need to check myself, I need to check that I’m discarding my non-driving related tasks.  I need to check my mirrors.  I need to check my blind spots.  I need to assess my situation around me.  And then I put my hands on the steering wheel and my feet on the pedals.  I don’t just go straight for the pedals and the steering wheel. 

And I think this is just so important because, you know, as we said before, situation awareness is just a fundamental driving task.  You need to have strong situation awareness.  And you build this over time and you then have to maintain it.  And it will get lost when you become a non-driver.  And you have to have that process, and so, yeah, that’s something we’ve been pushing for quite a bit.  And, hopefully, you know, that can start to make a difference over, you know, what might happen within the Transport Bill.          

 

[00:29:55]        

 

Sean:                  Do you think the technology can help with this?  I mean, you know, we’ve talked about, you know, drivers will always, well, people.  Let’s just say people will always leave it to the last minute, no matter what the task is.  Not every person will, but there is a tendency to leave things until the last minute, so, as you said, you know, the machine’s counting me down so I’ll go three, two, one, oh, right, now what’s going on.  Perhaps it could sense that you haven’t been paying attention and therefore not relinquish control, I don’t know.

 

Gary:                  Yeah, so this is where driver monitoring systems come in, and the chances are that that will be part of the legislation as well, that there has to be a formal driver monitoring system.  How much that gets legislated for I don’t know, but if you have a driver-facing camera, then you can start getting into, you know, you are not- You know, you need to stop doing this.  And, obviously, if the in-car system is interacting here, it can turn that film off.  But how much you’re involved in that process, because if it just suddenly turns it off, then a lot of people aren’t going to be very happy with that.  But, yeah, how that all works, and the role of monitoring systems on board, is going to be really interesting.                         

 

Sean:                  I’m having flashbacks to kind of long haul flights where you’re watching a movie and suddenly it stops because the captain wants to make an announcement about turning on the seatbelt signs or something like this.  

 

                            Earlier, yeah, I remember, Ana, you saying about whether we move from a code-centric kind of idea to a model-centric idea of development, I’m guessing you’re talking there, aren’t you, of systems.  But with two human factors people on the call, I just thought, you know, should we be looking at it from the human’s point of view rather than the modelling, or even the code?

 

Ana:                   These two views are not in conflict with each other.  We definitely need to have a human-centric approach to the design.  And with that approach, with techniques like the techniques that William is talking about, and Gary as well, we will get to answer some of these questions, what is my proposed design.  But when you have that design, when you have made your best attempts at designing a system that will be, I’ll call it human friendly, then that design should not be moved to a code-centric development phase.  At that moment we should say, okay, let’s write the models that capture that design, and different aspects of the model as well.  

 

                            One thing that Gary has said, and I think William hinted at it as well, that is very appealing and very suited to the approach we are taking in the verifiability node, is the issue of training, and the issue of the timings in which you expect a response, the scenarios in which you expect some sort of behaviour.  And what we can contribute is to capture those expectations.  There is no possibility of developing a model of human behaviour, what will humans do, who knows.  What I will do, I don’t know.  

 

But you can capture models of expected human behaviours.  You can say, I want to train my user to do something, and then I can capture what is the behaviour of a trained user.  And then with that model, that human model, in the loop, you can say, under these expectations, then I’m sure that- I’m being a bit generalist here, my car will not crash.  And if you try and say, I want to verify that my car is not going to crash ever, we don’t need to do any verification, I can tell you it’s false.  There will be situations in which it will crash.  But for us, in the verifiable autonomy store, we want to say, these are the assumptions that I’m making, these are the operational requirements of my car, of my drone, whatever robot I’m using, whatever autonomous system I’m using, and with those operational requirements we now can have some evidence that this is fast enough.  And that’s a [unclear 00:34:17] process, because I’ll go back to the discussion and say, the human needs to react in three seconds, and then the specialist says, okay, given the cognitive capacity of an average human, potentially tired, potentially watching a film, that’s not possible.  Then I need to revisit and say, okay, if that is too strict, what can I do to my design to make it satisfy the properties that we are interested in.  

 

So, these things are not in conflict and I’m really fascinated by what Gary and William are saying here, because it matched our view very nicely.  

 

Gary:                  Yeah, that’s really good to hear.  You know, I’m always going to make a big, strong case for human factors, you know, that’s what I do.  And, yeah, it’s very interesting, you know, over many years, having worked with engineers and computer scientists and designers, etc., just how important it is to work together in these different fields and understand each other’s perspectives.  And, you know, that is largely the case in car companies as well, not just in academia, you have whole teams of people that are working together on these sorts of things.  

 

But, as I said before, you have to come back to the point that the car is a consumer product, and so they’re going to be, you know, terrible pun, driven by marketing and by consumer demand, and, you know, humans don’t necessarily know what’s best for them.  

 

Sean:                  And by costings, of course.      

 

Gary:                  Yeah, yeah, you know, the financial side of things, and product differentiation and, you know, all these sorts of things.  So, I suppose it’s why I’m particularly interested in the Transport Bill and how much gets legislated for here in the move to these vehicles.  

 

Going back to a sort of earlier point about the worries that we have of this type of technology coming into vehicles, I think it is really controversial, and from a human factors perspective you could argue that one of the worst things you can do is to put people in a situation where, when the technology has a problem, going back to our failure modes, we’re going to chuck control back to the human, who is in the worst possible situation to deal with it because they’ve been out of the loop.  And so, you know, the emergency situations are going to be really quite problematic, and the sorts of things that William talked about will all come to the fore, I’m sure.  

 

But there are other people who sort of make the case that, you know, the true benefits of autonomous vehicles won’t come until we get to level four and level five, and we sort of have to go through the pain of level three in order to get to level four, in the way that we’ve had level two around for quite a few years, which has provided, you know, the capabilities and the understanding to get to level three.  So, you know, it’s a difficult sort of area.

 

Sean:                  There’s also, there’s an addition there which is that if we do have to go, let’s open air quotes, through the pain of level three, you see cars out there on the roads that are 20/30/40 years old and classics that are 50/60 years old.  So, this level three pain could last decades couldn’t it, actually? 

 

Gary:                  Yeah, yeah, quite possibly.  And I was at a meeting yesterday where they were talking about whether or not young drivers should be predominantly trained in EVs, because those are the vehicles that they will be using in 10, you know, 20 years’ time.  But, you know, as was pointed out, there will be many manual cars around, and EVs are very expensive for people who have just passed their test, and so, you know, we will continue for some time, I suspect, still driving sticks, as they say.                       

 

Sean:                  Well, there is such a thing as an automatic-only licence, isn’t there?  

 

Gary:                  Very few people do it like that.            

 

Sean:                  Yeah, there are a few people who do it, yeah, people have that, but if you have an automatic-only licence, you’re not actually allowed to drive a manual car, which is quite interesting.  

 

Gary:                  So, there is a whole thing about whether or not these vehicles will be so different that you might need different certification, you know, different sort of licensing type arrangements.  The sorts of level three functionality vehicles that will be on our roads to start with are just not going to be that different.  They’re going to be very restricted use cases, on motorways, at low speeds.  But you know that that will expand out to, you know, higher speeds on the motorway, and then on to sort of stop/start traffic lights in cities.  And it will expand out.            

 

Sean:                  Which is the thing we want really isn’t it, that’s what we want, we want those annoying situations to be taken care of, don’t we.  We don’t want to be doing the stop start, stop start through bumper to bumper traffic, we would rather a machine did that for us.  Maybe we want to enjoy driving through the countryside and going round the curves, but, you know.

 

[00:39:56]

 

Gary:                  But I think it’s a really interesting issue that this becomes- Different cars will do different things and be called different things.  And this goes back to the mental model issue as well, that you may well struggle to understand really how your car works, what it can do.                             

 

Sean:                  And we haven’t even mentioned Johnny Cab, you know, we could get through a whole episode about talking to your car without going into the Total Recall Johnny Cab example.  

 

William:            One thing about this functionality for stop and go on highways or motorways, I will always remember this interview I had with a participant a few years ago.  That person said to me, if I don’t want to drive when it’s an annoying situation, I’ll just use public transportation.  If I want to work while I’m commuting, if I want to chat or play games with someone else, I’ll just go on the train or whatever.  So, I believe there’s still a strong argument about whether these vehicles are useful or not.  And I think it’s not the main solution to transportation issues.  

 

                            The other thing, when we mention training and legislation and this new Bill, is that it not only affects people in the car but people outside the car.  So, all types of road users, it could be pedestrians, it could be cyclists.  And to what extent these people who are not trained- You won’t train a pedestrian to walk, or a cyclist, well, you can a bit to some extent, but my point is there will probably be a phase of adaptation.  So, there are these new vehicles, how are they behaving, okay, now I know how they behave.  So, now that I know that it’s an automated car, and the response time of an automated car is way better than the human’s, I can actually cross the road or jaywalk.  

 

                            So, to some extent there will be behavioural adaptation, and in the history of transportation there always was behavioural adaptation.  You put in ABS, people will drive closer to the vehicle in front because they know they can brake harder.  You add speed limit control, well, you can actually go over the speed limit because there’s a margin between the legal limit and the radar thresholds.  So, there will be behavioural adaptation, and I’m a bit scared of that, I think it’s very interesting.  But it not only concerns people inside the vehicles, it’s also everybody around them.                          

 

Sean:                  Yeah, I see that, and just a minor point, I know I’m kind of a niche case, but I would use public transport a lot more if I wasn’t a videographer carrying six or eight massive heavy bags that I don’t quite need a van for.  Even though I’ve sometimes considered getting one of the electric scooters around Nottingham and trying to hang all the stuff off it, I think it would probably be a bad idea.  

 

Ana:                   Just on a point related to what Gary was saying before, when you were discussing legacy cars, I think we will also be facing legacy drivers.  I wanted to just say, I was chairing another public engagement event a while ago and, of course, when we’re talking about autonomy the audience wanted to talk about autonomous cars.  And there was this gentleman in the audience who was really upset, why are you doing this, why would you be interested in that at all?  I like driving, I want to drive my car.  And I think we will find people who want to have the newest gadget and make use of it, but we will also have people who want to keep control of their car and actually drive it.  And, yeah, there are, I think, quite a lot of legislative concerns to deal with in these situations as well.

 

Sean:                  I totally agree and I understand that, because as a reformed petrol head myself, you know, I really enjoy a good drive, yet, you know, I’ve ordered an electric car.  I mean, I’m fully on board with the environmental thing and I would prefer to catch the train if I’m going anywhere.  But I do enjoy driving; it doesn’t mean I want to drive everywhere.  I think personally it should become a bit of a hobby, you know, a bit like steam enthusiasts, where every third Sunday you get to take your steam traction engine out and tinker with it.  

 

Gary:                  You go on to a test track and sort of let off some steam.  I think, yeah, how this will pan out in 20/30 years’ time, you know, we’re all sort of Nostradamus, aren’t we, trying to predict where this will all go.  And I do think the genie is out of the bottle in relation to level three functionality, because it’s not just the UK where this is happening, it’s happening in many different countries around the world.  

 

And, going back to what William said about the interaction with other road users, I think when you start having vehicles in autonomous mode in city situations interacting with vulnerable road users, pedestrians, cyclists, e-scooters, and there are going to be more and more of those on the road, definitely, as well, that’s another part of the Transport Bill, the legislation to allow e-scooters, you know, around the whole country, you’re going to get these situations where people are not going to be sure whether the vehicle is in autonomous mode or not.  And whether or not I can make eye contact with a driver, or that driver is not a driver.  They may be sitting there- So, this is interesting when there’s still a steering wheel, because they may be sitting in the seat where there’s a steering wheel, but not actually engaged as a driver.  And so these sorts of communication issues are going to be important.  And to what extent the car should be telling you what its mode is and telling you what its intent is.  So, am I giving way, or you can cross.  You know, these things are going to be very interesting and will also have legislation stuff to do with them as well, because there’s a big difference between saying, I am giving way, and, you can cross.                           

 

Sean:                  I’ll leave it to you, the listener, to decide how much closer we are to Knight Rider becoming reality, meanwhile, in the real world, we’re running low on time so I’m just going to say thanks to each of our guests on this episode, so thanks Gary.

 

Gary:                  Thank you.                      

 

Sean:                  Thanks William. 

 

William:            Thank you.                      

 

Sean:                  And thanks Ana.

 

Ana:                   Thank you, thank you very much.

 

Sean:                  Thanks also to you, the listener, make sure you subscribe to the podcast so you know every time we’ve uploaded a new episode.

 

                            If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub.  The Living with AI Podcast is a production of the Trustworthy Autonomous Systems Hub, audio engineering was by Boardie Limited, our theme music is Weekend in Tatooine by Unicorn Heads and it was presented by me, Sean Riley.