Social Footprint

Artificial Intelligence

We Are Group Season 1 Episode 4

This episode discusses the ever-increasing sophistication of Artificial Intelligence (AI), the role it is playing in cyber-crime, the effect this has on people, and how organisations can support their customers in the face of AI-fuelled cyber-crime. 

Professor Karen Elliott is Chair (Full Professor) of Practice in Finance/FinTech at the University of Birmingham Business School and Co-Director of the FinTech MSc Programme. She was named in Innovate Finance's Women in FinTech Powerlist (Standout 35, Policy and Governance) in 2019, 2020 and 2021. In addition, Karen is an Advisory Board member at Incuto, championing access to fair and affordable finance, and an Impact Council member for We Are Group, addressing the digital divide. Karen co-leads the Agency, UKFin Network+ and INFINITY projects (£12m, EPSRC/UKRI/Research England) to optimise trustworthy, responsible and ethical AI. A member of the Prime Minister's Champions Group for Dementia, Corporate Digital Responsibility, Radix and the IEEE Global Initiative Planet Positive, and a Digital Poverty Alliance Ambassador, Karen seeks a balance between academia and practice towards an equitable digital society.


Share your thoughts with us.

LinkedIn: @WeAreGroup.

X (formerly Twitter): @we_aregroup


Want to work with We Are Group? Contact us here or email: info@wearegroup.com


To find out more about We Are Group's services visit wearegroup.com


Social Footprint is brought to you by We Are Group

Hello and welcome to today's episode of Social Footprint. Today we're discussing Artificial Intelligence (AI) and we're joined by Professor Karen Elliott. So hi, Karen, it's great to have you on today.

Hi. Great to be here. Thanks for inviting me to talk about AI and various other subjects around that.

We're thrilled to have you on. So just to start with, would you like to introduce yourself and tell us a bit about your academia and accolades, maybe?

Yeah, so Professor Karen Elliott at the University of Birmingham Business School. I'm a Chair in Finance and FinTech, and I run the MSc programme in FinTech. But outside of that, I'm really passionate about the impact of technology on society, particularly for those who don't have a voice in the area. And those accolades have come with working with the Financial Conduct Authority (FCA), including a forthcoming TechSprint on financial inclusion. I'm working with government bodies, and I'm also a director of the UKFin Network+, which gives out money to buy out academics' time to work with industry to develop tech solutions.

Wow, so yeah, definitely. You've got your foot in financial inclusion as well as digital inclusion there, which is really interesting. So no, that's great. I guess with the episode being predominantly about artificial intelligence, which is a very hot topic in the news as well as in entertainment and in work life, could you perhaps give a definition of artificial intelligence and maybe outline its evolution from its beginnings up until now?

Yeah. Well, I had an interesting conversation with someone who was in a department that wasn't technically focused, and they quipped to me, "Oh, that deep learning, that new thing." And I had to say to them, actually, deep learning is not new. It's been around since Alan Turing, and since artificial intelligence came into being. So what is the difference between deep learning and artificial intelligence? Some people may know, so you don't have to listen. But if you don't, deep learning is the fundamental core of artificial intelligence. You could look at machine learning and AI as subsets of deep learning. So you've heard the term black box, and that comes from the deep learning area because it mimics the brain. You have neural networks that we can run and ask to solve problems, from A to B. You run it through the network and you can weight what it's looking at, but it comes out with a parameter and you judge whether it's accurate within a parameter range. You see, then what you do is take the output from that and start to train the machine learning on a data set around a particular topic. So if you want to identify pictures of a horse, for instance, you can teach the machine learning to look at what identifies a horse (it has a mane, it has a tail, a certain type of style) to see if you can get it to pick up the pixels and program it. Now, the black box issue is starting to be unpacked, but basically we don't fully understand how that neural network works. It's like the brain in psychology: we don't fully understand how the synapses in the brain make these connections and go from A to B, and likewise in deep learning.
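To make that training idea concrete, here is a minimal sketch (not something discussed in the episode) of the supervised learning Karen describes: show a small neural network labelled examples, let it adjust its internal weights, then judge its accuracy on examples it has never seen. The dataset is synthetic and the feature framing in the comments (mane, tail and so on) is purely illustrative.

```python
# A minimal, illustrative sketch of supervised learning with a small neural network.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Pretend each row summarises an image as numeric features (e.g. "has a mane",
# "has a tail", coat texture) and the label is horse / not horse.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: training adjusts the weights between its layers.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Judge the output against examples the model has never seen.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The "black box" point is that even for a small network like this, the learned weights do not come with a human-readable explanation of why a particular example was labelled "horse".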
So this is why data ethicists like myself, and people who are expressing caution around AI, are concerned: if we don't understand it at the core, deep learning, then when we extrapolate and do machine learning, which is then aggregated into AI, which then has the capability to self-learn to an extent, how do we know that the learning is occurring as the engineers intended? And if it's based on something we don't fully understand, how can we ensure that society is safe, i.e. that it doesn't create more exclusion rather than inclusion? These are some of the debates that we've had around ChatGPT and generative AI, that is, bias being introduced. Is it inclusive? How is it going to change the world? Is it going to learn and just decide that humans are no longer viable? So you have these dystopian versus utopian debates going on. But in all honesty, AI is in some ways no different to other technology, in that there's always scaremongering over what it can actually do. At the moment, it can do what it is programmed to do. I mean, if you've played with ChatGPT, you know the prompts sometimes come out with really strange answers, or not quite what you were thinking, and this is what people talk about in terms of hallucinating. I liken it to giving a child a piece of Play-Doh or plasticine and saying, create me a man. And it has a head and arms and a body, but it doesn't really look like a human being. That's to an extent what ChatGPT was doing. Like, ta-da, here you go: you put in a prompt, this is what I've got, and you go, hmm, it's not quite right. And this is why students are keen to use it, because it's very good at doing summaries, it's very good at coding. But other things are a little bit more complex. You can't get that nuanced difference yet, and I would say yet, because innovation is always continuing and it's always improving what we're doing with deep learning, machine learning and AI. And so that's why my main collaborator is in computer science, so we can look at that socio-technical space to understand where we still need the human in the loop to make sure that the AI is not going to go out of control, to quote Sam Altman and other people. So this is what AI is in essence, and what can it do? Its best uses we've seen are in medicine: it's really good at detecting cancers much quicker than a doctor can. You can use it to build virtual reality studios and medical rooms to train doctors, to work on, you know, virtual reality patients, so they're not harming anybody in the process. And, of course, we've seen it in flight simulations, and now in the metaverse. There are good uses of tech. But I think that with AI, we just have to be careful that we don't switch off in terms of ease of use and saving us time without considering the consequences, or unintended consequences, of its use.

No, that's all really, really fascinating. Yeah. I mean, the complexity of AI... I didn't realise the black box of deep learning was a separate segment to the whole machine learning and AI that we're probably more familiar with, perhaps as laymen to the subject. But no, all really fascinating and, like you said, all very complex. Loads of uncertainty and the unknown with it as well, with where it might go and, you know, the human element of control as well.
But picking up on the tech for good, using AI for good, we do know that AI is also being harnessed in cybercrime, to make cyber attacks more effective and more far-reaching. So how is AI being applied to the realm of cybercrime and the pursuit of cybercriminals? What's the impact there?

Well, again, we've had a lot of what you call ethical hackers. Some of them have been hackers in the past, on the other side, in the criminal fraternity, and then transferred into what are called ethical hackers or pen testers. So they use the technology, like you say, to reverse engineer and pinpoint what you call backdoors in the code, which allow the hacker to get in. And you will have heard the term ransomware as well, where people are held to ransom because someone has hijacked their system; because they're in the code, they can put a stop on it and shut everything down until you pay them the money, and then they unlock the code and everything works normally. So this is what I'm saying: there are infinite good uses of technology, even in the cars that we drive now. You've got warnings about being too close to a car, or moving out of your lane, which saves lives, progressing towards the fully automated, human-free car. But we've still got problems there, and that highlights the restrictions of AI: it can't fully identify objects, and then it has to make the trolley-problem choice between A or B. As we've seen in some testing on fully autonomous vehicles, they end up crashing because the choice they've made isn't the one a human would make; a human would be able to recognise that that choice was not quite the right one. But having said that, a good use might be: are we able to detect fraud more easily? We're able to reverse engineer and build safeguarding in the cybersecurity world, to look at how we're doing the propagation, to make sure that it's as safe as we possibly can be. But again, one of my key things when people hear me speak is, as I say, it starts and ends with humans. Because when you've got somebody training the machine learning, we're all biased to a certain extent, depending on our background, our education, where we've come from, influencing how we think and perceive. And one of the issues we do have is that the engineers are still predominantly male and still predominantly white, which causes a lot of exclusion in what the AI can achieve, because it doesn't know what it doesn't know. And if you're training it from your own perspective and not having that diversity of thought, then we're perhaps missing opportunities to do good. But we're also then excluding when we don't mean to exclude. So there's also an element of training the engineers to be aware of what we're doing when we're designing, right the way through. And then good uses, like you say: we're actually being more efficient in what we're doing in finance, making decisions. But again, sometimes when it's financial, we have to have a human in the loop to make sure that we're not discriminating against people who could get access to credit. But if you look at the really good uses, say, I was speaking to an expert in South East Asia, in Indonesia, and there they have to do inclusivity by design, because the mass market is people who don't have so much money.
And if they want to be in the market for, say, FinTech and financial inclusion and digital inclusion, they have to target their products at that lowest bottom line. So it's slightly different from what we see here. But I'm not going to sit here and say all tech is bad and we need to do this. It's not; it's just reflecting humans, because we fundamentally design it. We have flaws. We have a dark side to our psyche. And we have to be aware of that when we're building and speaking to users. I mean, UX, user experience: this is what UX designers do. They work with machine learning experts to ask, okay, what does the user's experience look like, and how do we marry them together to hopefully get the best outcome? And you see companies continually improving their offer by liaising with users, having groups to find out what the experience is like, what can we improve. I mean, whenever you go on anything online: give us your feedback, use a five-star rating, what can we do to improve our services? So this is where technology is saving time. You can do a lot of things online, you can transact online, you can do your energy, pretty much everything, online. But as you said, on the back of that, we need to make sure it's private and secure. However, there is a human caveat, because we tend not to be too bothered about privacy and security until it goes wrong. So I'll give you an example: GDPR, okay, protecting our privacy. How many times have you or I wanted to get onto a website, seen the list of terms and conditions, and just gone "yes" without really thinking too much about who now has permission to access a profile of me? And this can lead to criminal activity, which has happened to me: I came back from holiday and someone had tried to open several accounts in my name, to the tune of tens of thousands of pounds. And this was because there was a security breach at the university I was working at before. So my details got out there, which could then be used with the technology to impersonate me. Luckily, there are also safeguards in place, so the financial institutions have to go, okay Karen, this has happened, but we've kept you safe, we didn't approve those, which is a good thing, and that's when it becomes really important. But if you think about our day-to-day use of downloading apps, opening them, agreeing to the terms and conditions and rejecting or accepting the tracking on your data and the permission to resell your data, we don't engage with that too much because we're all busy. We've all got ease of use, easy navigation, we've all got 10,000 things going on at once. So this is where, with starting and ending with humans, it's a trade-off all the way along.

Well, no, that's all really interesting. There's a lot to unpack there, lots of interesting things you said. I mean, your own experience, obviously, of somebody pretending to be you, that deception. And from my knowledge, I know we now have deepfakes, video deepfakes, like you said, voice cloning, authorised push payment (APP) fraud, the whole social engineering of forcing somebody to do something because the cybercrime is so realistic. Now, with the use of AI, how is that impacting people at that grassroots level, and how is it impacting maybe society as well?
I think the research is showing that, and I know you mentioned earlier about literacy, about digital and financial literacy, in the digital literacy sphere it's about being aware of what is authentic and what is potentially a fake. There is a gap there, and we hear that many vulnerable parts of society, and citizens, become targeted because they have this gap in the knowledge needed to recognise that something is a fake. I mean, the emails now... years ago you used to get the fake emails from your bank, and you could tell it was a fake, that it was phishing, because the email address usually had a foreign suffix, .FZ or .ZH or somewhere you didn't recognise; it definitely wasn't the UK. But now, through technological innovation, they are so good it is hard for people to tell. They think they're genuine, they get phished, and then it's too late: they've got your bank account details. And we hear time and time again of particularly the elderly, who are not so digitally savvy, losing all of their savings, losing the rights to their house, losing everything, just by responding to, say, a postal advert saying you must comply and give me your financial details. So, like you say, there is this problem around the digital divide, not least because we need to make people aware that these scams prey on the most vulnerable in society, and they do seem to be attacked more than people who are digitally aware.

Okay. So I guess, yeah, the education piece is a huge part of it. So is that something, maybe, talking about what organisations or businesses might be able to do to help alleviate the pain that their customer base may face? Is education a big part of what they can do in terms of helping the customer base and preventing them falling victim to cyber attacks? How might that be a solution they can adopt?

Yeah, I mean, it's not as straightforward as it sounds, but speaking about the financial sector, because that's where I spend most of my time: at the moment the FCA, the Financial Conduct Authority, have got the Consumer Duty pushing through, which is aimed at tackling this very problem by putting the responsibility onto the financial providers to ensure that the consumer is protected, that they understand what's coming from them, and that they keep checking and using friction in the system to make sure they're identifying vulnerability. Now, you might just expect that to be socio-economically related. But in the current economic climate, where mortgages have sometimes gone up two- or three-fold, people who are on good salaries are now becoming financially vulnerable. And if you become vulnerable financially, you're more open to looking at options to relieve that financial burden, which could, in theory, open you up to more scamming attacks, because you may be desperate to sort out your finances and make ends meet. So it's shifting now, to not just assume that vulnerability means you come from a particular postcode or a socio-economically deprived area, and I don't like those labels anyway. Now it's shifting to: we have to look across the board, hence this duty has come in. Now, whether it actually gets down to those levels is another matter, because we still have millions of people in this country who are underserved by banks, and therefore the banks can't reach them.

Right.
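As an aside on that phishing point, below is a minimal sketch (illustrative only, not from the episode) of the old-style check Karen describes: comparing a sender's domain against the handful of domains you would actually expect your bank to use. The domains and addresses are made up, and, as she notes, modern AI-assisted phishing is precisely what makes simple rules like this unreliable on their own.

```python
# A minimal, illustrative sender-domain check; EXPECTED_DOMAINS is hypothetical.
EXPECTED_DOMAINS = {"mybank.co.uk", "mybank.com"}

def looks_suspicious(sender: str) -> bool:
    """Return True if the sender's domain is not one we recognise."""
    domain = sender.rsplit("@", 1)[-1].lower().strip()
    return domain not in EXPECTED_DOMAINS

print(looks_suspicious("alerts@mybank.co.uk"))      # False: recognised domain
print(looks_suspicious("alerts@mybank-secure.zh"))  # True: unfamiliar suffix
```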
So I think that, like I said to you, the education part is one part of this puzzle, because we need to look at what we can do as educators, how we filter that down, and then also engage with other companies. Because I think it's a broader issue than any one company can address, however much they do, and they are doing a lot. But we've also got gaps around how we prevent fraud and keep people safe and secure online. It is an ongoing challenge.

Okay, no, that's really interesting. So I guess, it being such a broad challenge to tackle, to my knowledge there's no bill that's been passed to monitor and limit the use of AI, amongst people who want to use it for good as well as amongst people who want to use it for bad. So looking maybe more at a political level and at governmental policy, is there something perhaps on the horizon which might benefit people using AI for good, but also limit the potential for risk from people using AI for bad?

Yeah. I think if we look at the recent AI Summit that Rishi Sunak, the Prime Minister, held, and Elon Musk was there, there were lots of different perspectives in the room about where AI was going. And I think that's leading to a forthcoming bill around AI strategy. We've already had some from the EU, and America is also following suit. So I think there is this recognition, but it's that argument between regulation and innovation: where do the two meet? And I think what we've seen from emerging markets is a little bit more willingness not to overregulate, to allow things to run and see where the innovation goes, and then, you know, the jury's out as to whether there have been unintended consequences. But so far, so good. Now, in the UK we have to balance that. You know, you have to know your customer before you lend, or if you're a digital company, you have to check to make sure you are being responsible in what you're doing around your company. But yes, there is legislation and there are acts coming through. There's also guidance from the likes of the British Standards Institution, ISO standards, etc. There's also the IEEE, which is looking at the use of AI globally, and that's a conglomerate of academics, practitioners and consultants. I'm part of that, looking at the effect on the planet and people. So there are lots of people like me getting involved in that, doing their little bit to say, well, what should it look like? But definitely in the UK, after the AI Summit, I mean, Rishi did state that we're going to start looking at marking your AI homework. So what does that mean? It means trying to be as transparent as possible about how you're going from deep learning and machine learning to AI and what you're using it for, to make it more trustworthy. Because if we as customers don't understand this process, and aren't technologically inclined, then the more transparency there is in describing how you get from there to there, the more it actually starts to build trust in the technology, but also in how your data and your online identity are being used and promoted, and how AI is taking that and then presenting you with good offers, or looking at your profile and seeing what would be best suited, making recommendations. That's what it does. Or in ChatGPT, it's helping you to find the right prompts, servicing you, etc. So that's the way it's moving.
But yes, there is always this continual balance of regulation not stifling innovation, but making sure that the innovation doesn't go off in a direction where we have to look back and go, if only we'd known, we wouldn't have done that. So I think this is where the debate is at the moment. We know there's innovation. We know there are possibilities. AI can do so many fundamental things to release us to do more efficient, more interesting work, and it can deal with the hard crunching of big data, coming up with efficient ways to do that and reducing protocols and processing time. But we also need to make sure that it's doing that in the right way, so that it doesn't cause harm further down the line.

Yeah, no, it sounds really promising that, like you said, there are regulatory barriers for cybercriminals, which is obviously a real positive for the general population who, again, with the digital divide, might lack the education to protect themselves. So it's really promising in that element. And obviously the innovation sounds really exciting too within the realm of AI. So lots to look forward to there in terms of, yeah, regulation and innovation, like you said. I mean, with your knowledge of emerging technologies as well, are there any solutions or strategies that businesses and organisations are perhaps using to protect their customer base, as well as their own software and hardware, from AI?

Yeah, I think the cost to serve is coming down, because as innovation grows, it becomes cheaper. So for instance, if you're going into the metaverse now, buying a visor is expensive, okay? It's about £3,000 to buy an Apple visor to enter. But we've seen this with technological innovation in mobile phones: although they are still reasonably expensive, you can get different levels of mobile phone, get access to that market and be able to function. So this is the trend we see: when you start with an innovation it's expensive; as more people copy it and use AI, the cost to serve comes down. So for companies, what they need to do is ask the right questions. Is this the right tool for the job? You know, that's a basic question for a physical tool; now we move it to a digital tool. Is this the right tool for the job? Does the machine learning do what I want it to do while still protecting my customer? I think they're the fundamental questions to ask, because then you can look at mitigating the risks. You know, if the machine learning does this, does it cause a raft of risks here? And this is what you have to look at. It's always a cost-benefit analysis around risks when you're in business, because that will impact on your profits. If you get it wrong, then you will be hit by compensation claims, fines, etc. from the regulators, depending which sector you're in. But they're all regulated around the use of data in digital technologies, not least by GDPR as well. You know, if you're storing people's data, you can only store the data that you need; otherwise you shouldn't store it without permission. You shouldn't be able to identify individual people without their consent and permission. And you have to keep checking if you're going to reuse that data. So I think companies need to ask fundamental questions about what their operations are and which digital tools are right for the job they want that digital tool to do.
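As a small aside on that GDPR point, here is a minimal sketch (illustrative only, with hypothetical field names and salt) of what "store only the data you need" and avoiding direct identifiers can look like in code. Real pseudonymisation needs proper key management, and reuse of the data still needs a lawful basis and consent where required.

```python
# A minimal, illustrative data-minimisation sketch; field names and SALT are hypothetical.
import hashlib

NEEDED_FIELDS = {"customer_ref", "product", "amount"}
SALT = b"example-salt"  # hypothetical; store and rotate securely in practice

def minimise(record: dict) -> dict:
    """Drop unneeded fields and pseudonymise the customer reference."""
    kept = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    if "customer_ref" in kept:
        kept["customer_ref"] = hashlib.sha256(SALT + kept["customer_ref"].encode()).hexdigest()
    return kept

raw = {"customer_ref": "C123", "name": "Karen", "postcode": "B15 2TT",
       "product": "loan", "amount": 5000}
print(minimise(raw))  # name and postcode are never stored
```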
And if they don't know the answers, then they can work with universities, or they can work with consultants who have expertise, but obviously that comes with a cost. Universities are good at that, setting up research centres that can then put research out there in the open domain, using open-source code, etc., that companies can use. And again, that caveat leads on to ChatGPT: if you're using it for coding and programming, you've got to be careful where it comes from. Ask the question, know where it comes from. If I bring this into my business, do I know the authenticity and the route of it? Can I authenticate where it came from? Because, you know, if it's reliant on one person in their room with a server, and that falls over, your whole technology stack could fall over and crash, which you don't want. So I think, back to humans, we need to ask the right questions to get the right technology in place. Do we need machine learning, or AI, or is it a hybrid, or something else? So I think it's very much defined and tailored to the use of the organisation, and depending which sector you're in, there will be regulations around the use of it.

No, that's great. I like that. It's looking at the tool, or perhaps the strategy, and upon analysis seeing how not only it will benefit the company, with profits and whatnot, but how it will benefit the end user of that tool or of that service. So no, that's all really fascinating. But yeah, I think that's a great place to wrap up as well. It's been great to have you on, Karen, I've learned so much about the world of AI, and hopefully our listeners will have learned a lot as well to adopt into their business strategies maybe. But yeah, thank you.

So yeah, I think the final thing I would say is it comes back to: it starts and ends with people. So users now have a voice to ask what they want from the companies as well.

Brilliant. Yeah, a great piece of advice to end on there. But yeah, thanks for your time today. It's been great to have you on Social Footprint.