
Living With AI Podcast: Challenges of Living with Artificial Intelligence
This podcast digs into key issues that arise when building, operating, and using machines and apps that are powered by artificial intelligence. We look at industry, homes and cities. AI is increasingly being used to help optimise our lives, making software and machines faster, more precise, and generally easier to use. However, these systems also raise concerns when they fail, misuse our data, or are too complex for users to understand. Set up by the UKRI Trustworthy Autonomous Systems Hub, this podcast brings in experts from Industry & Academia to discuss Robots in Space, Driverless Cars, Autonomous Ships, Drones, Covid-19 Track & Trace and much more.
Season: 1, Episode: 6
Exploring Trust in Online Environments
Featuring our very own panellist Paurav Shukla, we delve into Trust in Online Environments
Panellists Joel, Christine and Paurav join Sean Riley
00:20 Paurav Shukla
00:38 Joel Fischer
00:43 Christine Evers
00:48 Sean Riley
01:02 Facebook Oversight Board reveals first cases
12:30 VHS vs Betamax, the VCR 'War' (Wikipedia)
Podcast production by boardie.com
Podcast Host: Sean Riley
Producer: Louise Male
If you want to get in touch with us here at the Living with AI Podcast, you can visit the TAS Hub website at www.tas.ac.uk where you can also find out more about the Trustworthy Autonomous Systems Hub Living With AI Podcast.
Episode Transcript:
Sean: You're listening to Living With AI, the podcast where we look at Artificial Intelligence and how it's changing our lives. Does it really change anything about our general well-being? Today, we're looking at trust in online environments. Shortly, we'll hear from Paurav Shukla, no stranger to repeat listeners as Paurav is a regular panel member here, but today we're diving into what he does specifically. In the words of his own website, Paurav is a management thinker, an educator, researcher, consultant and an entrepreneur. Paurav also has two hats on today as he'll be part of the panel as well. So, without further ado, the rest of our panel consists of regulars Joel Fischer and Christine Evers.
Joel is an Associate Professor at the University of Nottingham specialising in Human Computer Interaction and Christine lectures in Computer Science at the University of Southampton looking primarily at machine listening. And trying to ensure equal time for all, it's me, video maker and tech lover, Sean Riley. We're recording this on the 3rd of December 2020, so as the quiz programmes say, all answers correct at time of recording. Well, as far as we know.
Well, one of the big stories I spotted today is Facebook's oversight board, an independent body that reviews Facebook moderation decisions, took on its first cases. Now, appointing these humans to an overseeing role, is this a failure for AI or is this sort of augmentation going to be more and more the norm? Paurav? Well, we're going to come to you anyway. Joel, what do you think about this kind of like humans getting involved in AI decisions?
Joel: I think it's going to be really important to have regulation, because otherwise you end up with only business interests governing the ways in which social media networks work. That's something many of you who have watched The Social Dilemma will be aware of.
Sean: Paurav, we're going to talk about trust in AI in our feature with you in a moment, but anything, any headlines on this?
Paurav: This is something that had to happen at some point in time, because one of the things we have to remember is that AI as a technology, especially within the social media space, is very new. This is like a baby that we are talking about. And Facebook as a social media platform, again, we have to remember, is all early 2000s. That's what we are talking about. So, we have very little learning attached to it. And the same is the case with AI, and so, when these two new babies come together, we don't know what it'll look like. And so, human oversight is a very important aspect associated with it. And I certainly welcome that.
Sean: Time now to pull Paurav gently to one side, mute the other panellists and segue into this week's feature, Trust in Online Environments. So, Paurav, let's grab a virtual pew. Paurav is Professor of Marketing at the Southampton Business School here in the UK, but has worked all around the world. But on the subject of living with AI, how does marketing research figure in the world of AI trustworthiness? Are you looking at how to sell AI products, Paurav?
Paurav: There are a number of ways in which we look at AI. Firstly, you have to remember that at some point in time, this AI is going to be in touch with individuals, human beings. And whenever there is a human handover or human touch involved, at that point in time, it becomes very critical that we trust that system. If we do not trust that system, it is not going to work for us. And so, while we are looking at a number of brands and a number of companies involved in this, it is not just about AI sales, but also about AI's interaction with us and how our psyche affects that interaction. That is something we look at in my lab in particular, with the number of researchers I work with.
Sean: Maybe it is a bit of a blanket statement, but trust can be difficult to achieve with something that is new. So, with a novel product or a novel service, why is it so difficult to achieve that trust?
Paurav: You are absolutely right, trust is very difficult to achieve. And there are a number of reasons for that. But let me start with a very simple example. Imagine that you are going into a pub and you are sitting there and you see someone, you know, or let's take The Matrix, the movie, right? You all remember The Matrix Part 1: Neo is taken into the Matrix environment for the first time by this person Morpheus, and then comes the other character called Mouse who has created this environment wherein a lot of people in black suits are walking towards Neo.
And then suddenly there is this lady in a red dress, blonde hair, walking towards Neo. And as you remember, Neo turns his head around to look at her again. And at that point in time, that lady becomes the agent. So, you know, what kind of trust mechanisms are developing there? You start thinking about it. This actually shows us our gullibility. Trust, in some sense, is associated with our gullibilities, our vulnerabilities; it actually produces and propagates them in different ways. And a major aspect of this is that we do not trust things very easily.
So, think of the man I call the greatest marketer or strategist we have ever seen, Gautam Buddha, you know, because I don't know of any other brand that has existed on its own for so many thousands of years and will continue to exist. I don't think any other brand would be able to achieve the same kind of average brand life, I would say, as Buddha. And Buddha said it very nicely: “Change is the only permanent constant.” And the problem with this change is that this constant makes us uneasy. That uneasiness is what makes us not trust things, because something new puts us out of what we call our comfort zone, where we like to be. And so trust always comes slowly, and new things always make us feel a little more sceptical.
Sean: I know we discussed this when we talked about virtual assistants. But the other thing that helps with trust is experience, right? Maybe that's where Buddha is a champion, because he's been a constant in people's lives. But the other thing, there's an old saying, isn't there, you know, “I'll believe it when I see it.” Right? And part of the problem here seems to be, you can't see the workings, you know, it's not like a vehicle where you can open, you know, the bonnet and have a look underneath and try and work out what's going on. There's something kind of opaque here, right?
Paurav: Absolutely. You know, this is one of the other things which affects trust: do I understand this phenomenon? So, for example, if rain is falling and I go out, I know I will get wet. But if I go out and rain is not falling, and I get wet, something's seriously wrong, you know, and I start having problems with it. So human beings, whenever there is something unknown, become sceptical about it. And this is something almost wired into our genetics, you know, the friend versus foe argument, as we say.
So it runs from evolutionary biology to evolutionary psychology, as we are seeing. And in that regard, that scepticism is particularly rife in online environments, and there is a reason for that: we still don't understand them. So when I click, for example, that PayPal button for payment, what happens behind it? I'm not sure about it. And so I have what I call a type of mistrust, not necessarily a distrust, because there are a number of different kinds of trust. Again, we have to understand there is trust, but the dark side of trust has three sides, funnily enough.
So there is untrust, there is mistrust, and there is distrust. And when you think about it, you know, different kinds of things emerge in different environments. In an online environment, all three of them could emerge. And so it is quite interesting to see that when we are thinking about trust, we are only thinking about the positive side of it, but there is a very interesting darker side to it also. And that comes out very clearly in the online environment.
Sean: I think it's interesting, because in terms of online environments like online banking, I feel like people have begun to trust these things. PayPal I use occasionally, it's on my phone, it requires a fingerprint and bang, somebody's been paid and a parcel will arrive on my doorstep a few days later, all being well. And I trust that; the system does tend to work. But where are the edges of that? Where's the boundary? I mean, where is it that you're looking at mostly?
Paurav: So some of our work, particularly, looks at how value gets derived. That is what we ask. The most fundamental question I ask is quite simple: “Why do we value what we value?” And, you know, just before starting the podcast, we were talking about microphones, and the way you put it, Sean, was that your old microphone is as good as any new microphone, because you value it; there is a value attached to it.
Now, as a non-podcaster, as a newbie in this world, you know, I would go to Amazon, and I would look at it and I would say, “Oh, right, what is that?” Anyway, I would type into Google the ugliest kind of query I could type, and that is, you know, “What are the best microphones?” That is quite the wrong query to type. But this is what I type; I find it, then I see some other people's reviews, and I believe whatever the crowd is saying. So in reality, when you think about our own trusting mechanisms, this is what we ask: what do we value? Why do we value what we value?
And, for example, I'll give you another one: on my desk right now, I have a little thing on which my son has written “Dad”. It's a little folder, you know, it stays on my desk, and it makes me feel warm. Now, is that of any value to you or anyone else? No. So you start seeing these kinds of value associations with different things. And one of the other domains we particularly work on, where trust really comes in and value really gets exaggerated, is luxury goods. Let me ask a very simple question of our audience, whoever is listening. Imagine that tomorrow you are going for a job interview and you have two different dark suits that you can wear. One is from Primark. And the other happens to be a Gucci. Which one would you wear?
Sean: In my case, whichever one's clean, I think. But yeah, no, it's a fair question. It really is.
Paurav: And so you can easily see that, you know, what we value are funny things. It's a brand which makes me feel more empowered. How does that happen? It is because I trust that brand more, I get more value from that brand. And so in those kinds of circumstances, trust becomes such a powerful mechanism that it drives any sort of behaviour. And what we are interested in, finally, when we think about engineering, when we think about science, is that whenever we are creating products or services, the end point is that there is a user, there is a consumer, and that consumer has to buy the product.
So however good a product I make, even if, you know, my Sony Betamax was better than VHS, it doesn't matter. If the consumers don't value it, it's not going to be bought.
Sean: Yeah, this is the argument about the market driving things though, right?
Paurav: Absolutely, absolutely.
Sean: But you know, I was inwardly laughing when you were talking about your purchasing of microphones, because I've had the same thing several times recently with printers, with webcams, with various bits of equipment. And you are following what the crowds say and looking at high rating reviews and you're also looking for a brand you've heard of because you don't know if the XYZ brand from the Far Eastern manufacturing country is going to do exactly what you expect it to do. Because, you know, we are also wary of things like fake reviews and fake news and all this sort of stuff. So there is, there's a whole new world to sort of navigate now, isn't there? And, you know, is that the sort of thing that you need to be kind of, how do you quantify a new world like this?
Paurav: Absolutely. Oh, this is the holy grail right now in social psychology, especially in online environments: how do you identify ways in which you can actually build greater trust between entities? I was recently involved in a UK government initiative called an Area of Research Interest, a working group under the SAGE committee, wherein we were actually looking into trust in public institutions, because that is also what is happening online.
If you remember, post-2008 a survey was done, and it was found that people trusted retail supermarkets more than banks. And so you saw many of the supermarkets come up with their own banks, which is also a very interesting phenomenon. And what you start seeing is, like what you said earlier, that with a particular country's products I make an association: I mistrust that country, I don't distrust it, but I mistrust it, and I feel that the country would not be able to provide me the kind of quality I expect to have.
And so what do I do? If I'm Apple, how do I navigate that space, right? I don't write “made in” on the back of my phone or my devices; instead I say “designed in California” and “assembled in” a particular country.
Sean: As you say, there are these connotations of a certain manufacturing country, even though they're the best in the world at certain bits of manufacturing. I mean, there are things that can be made, you know, in say, for instance, China that cannot be made in most of the rest of the world, at least certainly not at the price that they can make it for. I think it's important now that we perhaps look at what the differences between untrust, mistrust and distrust are, because even at a high level, it would be worth thinking about what those different areas are. Is that something you can do for us?
Paurav: Yeah, certainly. So when you think about mistrust, mistrust is almost like a misplaced trust: I used to trust an entity. I'll give you a recent example. There is a particular auction website, a very well-known auction website. For the last 10 years, I've always had the product come to me. But this time around, I had ordered something and it did not arrive, and the money was taken initially. And now I have that mistrust. So I have got a misplaced trust towards this entity, wherein it is not a betrayal, but it is like, I'm feeling that something's wrong here. Now I need to be a little more careful.
But when I think about Amazon, I don't have that somehow. In a way, I think this is the genius of Amazon in overcoming that trust problem people had, through customer service. What they did was break that mistrust barrier by providing such exceptional customer service that people are now ready to pay the premium. Many times when I look at my online buying and I try to compare, say, eBay versus Amazon prices, even if eBay prices are probably about 5 to 7% lower, I'm still going to Amazon. Now, that 5 to 7% on millions of products every day makes Amazon a very rich company. So in some sense, this is a way in which the barrier of mistrust can be broken.
Then there is this phenomenon called untrust, and untrust is when I feel little trust towards the trustee, as in I have little confidence that this person, or this company, will deliver.
Sean: Is this the brand you've never heard of, or the website you don't know, you've not got any experience of?
Paurav: Exactly. And so, for example, you go on Amazon and you see, you know, two little pairs of wireless headphones, and one has got 5,000 reviews, the other has got only one review, and the one with only one review is, you know, 20% lower in price. Now, the thing you are thinking about is, where do I go? Do I have confidence? So that is untrust. And how do you get over untrust? AI has become quite a powerful tool there, wherein Amazon actually removes this untrust by using certain signals, you know, editor's choice, Amazon suggests, Amazon recommends, number one selling product, number one reviewed product, and this, that and everything. So there is a range of ways in which untrust, again, can be broken down or overcome. But yeah, certainly that is also happening.
And then comes the idea of distrust. And this is almost animosity: I almost trust that this person or this entity will do harm rather than, you know, actually help. And in that regard, for example, in TAS, one of our grand challenges is around that; we want to reduce that distrust, we want to build public well-being through our aims. And I think that is a very powerful motive, again, which drives me in my work also: how can we make sure that there is greater trust between entities?
Sean: So it's funny, isn't it? Because as we sort of talked about with the suits example, some of these brands that we trust are purely being trusted because they've spent millions of advertising dollars. And we may not even have experience of them; I don't have experience of a Gucci suit. However, I would think that would be the one to wear, because, you know, people are going to think, well, that's a known, respected brand, perhaps this person has style, all that sort of stuff, which is completely untrue, I'll point out right now. Perhaps it's easier to think of the branding of physical products, but services are more difficult to quantify.
[00:20:08]
Now, Amazon, you mentioned earlier, have ways of getting around mistrust and distrust by almost putting their authority on things. Is that perhaps how AI is going to sort of bridge these trust issues, by having a trusted name attached? Perhaps a supermarket will come out with an AI that we can then trust?
Paurav: Possibly. Or, for example, let's take the example of automobiles, right? Last time around, in one of the podcasts, we talked about Honda coming up with a level three automobile. When you think about such a phenomenon, it is right there in front of people's faces, and trust becomes a very important issue. So, for instance, if I think about a new automobile company versus Honda, you know, that brand carries weight, and I feel some sort of internal trust towards Honda. And then I have this idea that, you know, it's a Japanese company, automobiles, electronics, they know their stuff, let's go ahead.
And so that kind of brand association, that country association, these are the types of antecedents that we may use to actually overcome those kinds of issues around trust itself, and AI is going to play a major role in it. The reason being, for example, you know, how are we going to anthropomorphise that entity? Because that brand is going to talk to me in the automobile scenario, right? So one of the teams I am involved in, again with the University of Nottingham and Joel's colleagues, is working on a project called Chatty Car. And the idea is, how do we make sure that we anthropomorphise the brand in such a way that the brand fits with the person? As in, whatever I have thought about a Honda car and the way the car talks to me are the same. And these are the ways in which AI will have to overcome those big challenges.
But challenges will also remain with regard to, for example, the human aspects: my personality, you know, my risk-taking ability, my innovation orientation. So another project we are working on around the AI aspect is about how human factors come into play, in terms of personality, psychology, socio-demographics, situational influences, the urgency of the situation. So, for example, you know, you may not trust an autonomous car.
But imagine that you suddenly have a physical emergency, you have to go to the hospital, you can't drive your own car, you have to book an Uber, and, say it's five years down the line, Uber gives you two choices. It says, you know, you have a car available which would arrive in two minutes' time, but it is an autonomous car, there is no driver involved; a car with a driver would arrive in the next 12 minutes. What would you do? Would you wait those extra 10 minutes?
Sean: Yeah, I think you're going to be weighing up risks, right? How serious is this injury that I need to get to hospital for, versus what's the likelihood of my probably unfounded worries about autonomous vehicles coming true. And those are the sort of tipping points, aren't they? Those are the things that make people start to do something they wouldn't have done before. I remember the first time I booked something online to do with travel, right?
It was about 1999. And I'd seen that a ferry company was offering a very good deal on a ferry. And even up until the point I drove up to that ferry, I was wondering how am I going to be getting on this ferry, right? But after that, you know, oh, I've booked one thing online. It went well. I now book things online. And for the last 20 years, it's not been an issue. There are these tipping points and these, you know, it might be a financial thing, or it might be a, as you say, a medical emergency.
But yeah, I suppose identifying those tipping points. It's interesting you mentioned the brand and the car, because a few years ago, didn't Google come up with the Google car? And it was supposed to be all happy and friendly faced. Whatever happened to that?
Paurav: I know. Those are the kinds of things that happen with many companies, right? They come up with these projects which are ill-fitting. I still remember I was working with a company, a chemicals company. It was a very well-known chemical company in Asia, one of the leaders, and they decided to go into the consumer market. Their reasoning was, we are already producing the same chemicals that go into shampoos, and if you actually look at shampoos, soaps and, you know, toothpaste, there is hardly any difference chemically when you think about it. Right?
So the company thought, look, we are producing that already; we just put a brand on top of it and we sell the toothpaste. And somehow the CEO had the grand idea that we should use the same company's name, the chemical company's name, on the toothpaste. So the branding was fantastic. The only problem was that after three months the brand tanked; nobody bought it. Anyway, that's when we were brought in, you know, consulting around that company, and we started talking to customers. And they said, “You know, when I put that toothpaste in my mouth, I get that chemically soapy feeling.” And that was it.
So, it was the brand which was making them feel that way. And the same would happen elsewhere. Imagine, I don't know, one of the top greasy restaurants, as we know them, you know, very well known, with the yellow symbol, developing a car, and imagine how you would feel sitting in that car?
Sean: Yeah, I think that would be a challenge. I know, a few years ago, I was working in Russia and I talked to some people there who said they wouldn't eat Pringles crisps because they were made by the same company that made whatever shampoo. So those things are powerful, and they are tangible. And it takes a Tesla type disruptor, I suppose, to try and change things in any branding kind of exercise?
Paurav: Absolutely, absolutely. And what is more important to us is that these elements have to come together to overcome the security, the privacy, and all those particular aspects that we have been talking about, which would then reduce the mistrust or the distrust we have. And so organisations will have to think about this. Our research particularly shows, for example, that visual appearance matters. How you present the visual appearance of a product, or of the website, matters a lot. So, for example, if you use a dark black background, issues will happen; people will think of your website as a little dark website or something, you know, so colour combinations and all that matter.
Beyond the visual appearance, there is also order fulfilment, like what you said in the case of the ferry ride. If the order is fulfilled, you start feeling greater trust; it becomes a virtuous cycle. But that cycle can be broken. I gave you that earlier example of a failed delivery with a particular auction website. Over the last 15 years this is probably the first time it has happened, and yet I now have mistrust towards that entity, which is a much larger entity and has hardly any control over that individual failure. But that can happen. So in a way, the relationship with the consumer can turn very vicious very quickly, and we have to be very careful with these technology companies.
You know, when that Tesla accident happened in the USA, suddenly everyone was very afraid of it. And these are the kinds of things that are going to happen with new technologies.
Sean: One step forwards and 10 steps back, isn't it? You know?
Paurav: Very much so.
Sean: I think the other thing that, you know, not that we're into kind of marketing podcasts here, but you know, things like headache tablets, where it's the same headache tablet for a fraction of the cost, because it's not got the brand on it. And people will still pick up the brand because it's a thing they know that they've used before, etc, etc.
Paurav: Yes, I certainly agree. But in the technology and AI space particularly, the issues that are going to arise around trust are not just about the brand; it's the privacy of it, it's the security of it. And they are going to be far more potent, in terms of how companies will have to communicate this from day one or build it into their design. That is, again, one of the things the TAS project tries to do: building trust by design, and not just as an afterthought, is so very important.
Because, you know, brilliant engineers will come up with a brilliant idea, but if customers actually feel that their privacy or security is affected by it, then they are not going to go forward with it. And that's why we are ready to give all that information to Google which, at times, we are not ready to give even to our partners. Because we trust it.
[00:29:53]
Sean: Yeah. When we're talking about autonomous vehicles and things, though, it's not just the vehicles, is it? It's everything that goes around them. I mean, there was a recent story, and I forget exactly where I read it, but somebody was supposedly blowing the whistle on what servers were actually at Tesla headquarters, and therefore you shouldn't trust the cars. And again, that could just have been mis- or disinformation from one of Tesla's rivals. But it starts to make you wonder, you know, you've got a car which is able to update its own software. How can we learn to trust that that's okay?
Paurav: That's exactly the right point you're making, Sean. With regard to trusting, a number of factors have to come together; the organisation and the overall external environment have to work together for people to have trust in something. Those could be human factors, as we talked about earlier, the psychology and personality of the consumer, but also the infrastructure around it. If my organisation or the wider policy environment is not able to provide the infrastructure... That is the big debate going on right now: one major concern people have about automobiles, especially these electric automobiles, is, is there a charging point nearby? So you can imagine that infrastructure is a massive issue with regard to adoption.
And then there comes the technology issue. Can I trust this brand to actually do what it promises? Because we are seeing so many accidents all the time. And, you know, take airlines: airlines are phenomenal autonomous systems. When you think about it, we have known for years now that much of the time the pilots are just sitting there because the plane is flying on autopilot.
So in reality, it's this power of technology, but we know that there is a human behind it somewhere, you know, who would be able to take control. So technology matters. And the final factor is environmental risk. These are, again, very powerful risks in terms of how society behaves around it. If society accepts it, I'll do it; if society doesn't accept it, I'll not do it. So we come back to the same review functions we were talking about, you know, when we talked about the mics and everything. All those things have to work in tandem, in sync, for a new technology to be trusted by consumers.
Sean: Paurav, many thanks for being our very own in-house feature today, time to open up the discussion to the rest of the panel. And it would be rude to exclude Paurav from this. So we'll welcome back Christine and Joel and Paurav can be part of the panel as well. So Joel, what are your thoughts on this kind of trust in the online environments then?
Joel: I thought it was really interesting to listen to Paurav there. What struck me when Paurav was talking about trust is, as someone who's really interested in interaction, and, you know, human computer interaction in particular, how we can perhaps think of trust as an interactional achievement, as something that is sort of an outcome of interactions, not just one interaction but a series of them, you know, the experience of interacting with something.
And I think you touched on some of those things: what we do when we assess the quality of reviews for a product that we want to purchase. We do that in particular ways, by interacting with that information and using our experience and judgement to establish whether a review is trustworthy or not. And it's not just one piece of information we draw on to do that. Yes, we look at the ratings, but we also look at a range of things overall, and we look at different sources. And what I think it actually is, is a competency, something you can learn, right?
It's something that you sort of learn from the experience of using these kinds of systems and interacting with this kind of information; you actually learn how to establish whether something is trustworthy or not. And I think that's the interesting thing for the TAS Hub and what we want to achieve: we want to enable people to have these kinds of competencies with regard to AI technology. And how do you even do that? There are some really open challenges with regard to how you might, you know, teach people, educate people, to assess the sort of outputs, if you want, of an AI algorithm, for example.
Sean: That's a really good point. I mean, the idea of being trained to understand whether the review systems, you know, sort of stack up, and using experience in combination with that. To do that with AI, though, it's obviously such a broad topic that I think it's quite difficult to quantify. Christine, what are your thoughts on what Joel's just been saying about the training idea?
Christine: Joel raises a very interesting point there in terms of training people to learn competencies for actually evaluating this. It's quite interesting to see how different people perceive reviews that are posted online. Probably a lot of the people I know evaluate online reviews bearing in mind not only what the product is, but also the language being used by the reviewer and how much objectivity, and also competency, the review reflects. Whereas perhaps other people might read reviews more on a subjective level and an emotional level.
I think ultimately, in our research and through the training that we received, we were probably trained to think that the more objectively a text is written, the more trustworthy it might be. But that's possibly not the same for someone who has a different incentive for reading that review.
Sean: My academic career stopped after a degree 25 years ago or so. So maybe I'm looking at things in a different way to trained academics who've gone on to PhD level and further. Paurav, what's your view on this?
Paurav: One of the things I would like to mention here is that the point about training is so important. But at the same time, we have to remember what we trust more. And what we find in our lab is that when we see affective information, that is, emotionally charged information, we tend to trust it more than what we call objective information. So it's a funny one, you know, how our brain operates. In one of the studies we did recently, what we wanted to know was how you reduce shopping cart abandonment. It's a multi-trillion-dollar problem: about 85 to 90% of all online shopping carts are abandoned.
Imagine going to your local superstore, filling up your basket, and then going to the door and just leaving it there and walking away, and about 80 out of 100 people doing that. How would that feel? You know, we don't do that in real life.
Sean: But that's a physical problem for a shop, right? Because they've then got to pick everything up. Obviously, in a digital world, it's trivial to clean that up. So what's driving this is people wanting to sell more stuff, right?
Paurav: Yeah, but at the same time, what I would like to point out here is that if we could convert just 1% of that into real shopping, that would be about £4.6 billion worth of additional consumption. Now, I don't promote consumption. What I want to point out is that it's a major challenge, because we have now become habituated to that phenomenon. And it does have its own energy-related implications. Just this morning there was a discussion going on; a report has come out that even just watching high-definition video has its own carbon footprint and a significant sustainability impact. So in that sense, leaving that type of shopping cart, because it is there and the company has to maintain it for a while, and so on, is also eating into energy. So it has a sustainability impact.
So we tried to find ways and means to do that. And what we found was that when we are attracting people into the website, as in when people are coming in, we should provide them with what we call promotion-focused messages, that is, messages that are positively charged, more affective in nature, more emotional in nature. But when people are about to leave the website, or as they get to the checkout stage, we start saying: this is the time left, or the offer will be gone within a moment, and so on and so forth.
And if you look at Amazon, it uses that very well with lightning deals. You know, when those lightning deals come about, five hours left, 10 hours left, people flock in and you start seeing that little line which says 20% of people have bought this, or 90% of people have bought it, only 10 left, and so on and so forth. So these are what we call prevention-framed messages, that is, something is becoming scarce, you're going to lose out. And all that actually works.
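The figures Paurav quotes suggest a simple back-of-envelope calculation: if recovering 1% of abandoned carts is worth roughly £4.6 billion, the total value sitting in abandoned carts can be inferred from it. Here is a minimal sketch of that arithmetic, assuming, purely for illustration, that the quoted 85 to 90% abandonment rate and the £4.6 billion figure refer to the same market:

```python
# Back-of-envelope sketch of the abandoned-cart figures quoted above.
# Assumptions (for illustration only): the quoted 85-90% abandonment rate and
# the "1% conversion is worth ~£4.6bn" figure refer to the same market.

value_per_percentage_point = 4.6e9   # £4.6 billion per 1% of abandoned carts converted
abandonment_rate = 0.875             # midpoint of the quoted 85-90%

# Total value implied to be sitting in abandoned carts (100 percentage points)
implied_abandoned_value = value_per_percentage_point * 100        # ~£460 billion

# Value of all attempted online purchases implied by the abandonment rate
implied_attempted_value = implied_abandoned_value / abandonment_rate

print(f"Implied value of abandoned carts:     £{implied_abandoned_value:,.0f}")
print(f"Implied value of attempted purchases: £{implied_attempted_value:,.0f}")
```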
[00:40:03]
Sean: And I think sites like booking.com use this for accommodation as well, don't they? Christine, you wanted to say something about this?
Christine: I've actually got a question for Paurav. Coming back to the shopping trolleys, I've noticed that an increasing number of online companies actually start chasing you, as a customer, and sending you lovely little emails saying, “Oh, you've abandoned something, you've left something behind, why don't you pick it up?” What is your opinion about that, also with regard to trust? And how do people perceive that in general?
Paurav: Again, there are two sides to it. You're absolutely right. Companies are doing it now because they've realised that, you know, there is an opportunity cost here, and possibly somebody would come back, because otherwise once it's abandoned, it's abandoned. So, using the AI behind it, they are realising that, one, this is an opportunity to create one more point of contact with the customer. Second, they are also realising that some customers will feel some sort of guilt that they've left something, so they may buy.
Remember, these are people who have already left their trolleys. So in some sense, out of those 100 people who have left their trolley, even if one person comes back, that is one more person and that is great for the company, because that adds to the revenues. So I think companies are doing it, one, to create that point of contact and create some trust. And also, you know, if you look at some of those emails, they are so personable, they are so nicely set up that they appeal to your affect, they charge your emotions. That is how they are creating connections with you. So that's how they are building trust: you feel that this company remembers me, and the power of reciprocity comes in.
Sean: I think the other thing to remember about those abandoned shopping carts is that a good share of them are probably people like me who started to buy something on the iPad and then didn't get to finish it because the kids interrupted. So I started to do it on the computer, didn't get to finish it because an email came in and I got distracted, and then I went off and finished buying it on the phone. So a lot of the abandoned shopping carts lying around are distraction related and multiple-device related. Anyway, I do appreciate that's not a trust-related issue.
Paurav: I was thinking, could we possibly each share a trustworthy or untrustworthy experience we've had, without naming the brand? That would actually connect with more people, because they would feel, oh, I've had something of that sort.
Sean: Well, I can kick that one off quite easily, because the moment you mentioned a popular online auction site it reminded me that I did once buy a car stereo online from an auction site. And it was faulty, but in such a minor way that it took a long while to discover it was faulty. And by that point, I'd missed the window for returning it or anything like that. It made me feel, from that point on, that I was happy to use an auction site to buy certain things, but not electronics, because it made me feel like, no, I couldn't trust it for electronics. They're too complicated. It's not like, I don't know, say a shovel, where you can see if it's working and return it if not. So that's my trust-related experience of online auction sites. Has anyone else got one of those?
Joel: So I still haven't been reimbursed for a flight which was cancelled during the pandemic, right at the beginning of the pandemic. And it was with a major airline, a known and probably reputable brand. And yeah, I still haven't had a refund for that. So I certainly feel less trusting of that brand now, and I would be reluctant to fly with them again if I had, you know, a viable alternative.
Christine: I actually had the opposite. I was quite pleasantly surprised, because I had paid the deposit towards a holiday that was booked through an airline. And I was really panicking that we would lose the deposit because of the travel restrictions and everything. And they were very, very helpful, actually, which probably has achieved exactly the opposite of Joel's experience. I would actually use them even more than I already do and trust them more, because they understand where I come from as a customer and what the difficulties are at the moment, and that realistically they can't hold you to that deposit when, well, technically they did offer the service, but with the restrictions it wasn't feasible to take the flight.
Sean: And knowing the size of some of these corporations, that's probably exactly the same company, isn't it, for both of you?
Christine: Possibly.
Sean: I'd like to say thank you guys for your experiences. Thank you, Paurav, for being our feature topic this week. And so it just remains to say thank you, Christine.
Christine: Thank you.
Sean: Thank you, Joel.
Joel: Thanks for having us, Sean.
Sean: And thanks, Paurav.
Paurav: Thank you very much.
Sean: And hopefully we'll see you on another Living With AI podcast very soon. If you want to get in touch with us here at the Living With AI podcast, you can visit the TAS website at www.tas.ac.uk, where you can also find out more about the Trustworthy Autonomous Systems Hub. The Living With AI podcast is a production of the Trustworthy Autonomous Systems Hub. Audio engineering was by Boardie Limited and it was presented by me, Sean Riley. Subscribe to us wherever you get your podcasts from and we hope to see you again soon.
[00:45:48]