D&I Digest

Is AI Racist?

Teagan Robinson-Bell and Henry Fairnington Season 1 Episode 7

In this episode of D&I Digest, we're looking at Artificial Intelligence and discussing some of the ideas that it's racist. (Note: this episode contains references to suicide)

You can read the two articles we discuss here:
'AI can be racist, sexist, and creepy. What should we do about it?'
'Baltimore teacher accused of using AI to create fake, racist recording of principal.'

Professor Arshin Adib-Moghaddam is our Spotlight this episode for all the great work he's done around diversifying AI.

If you have a question for us, you can submit it through this form and might even hear us answer in a future episode!

Music used is:
Who Do You Think I Think You Are? by Mini Vandals 


T: Welcome back to another episode of D&I Digest!

H: I’m Henry, I use he/they pronouns.

T: And I’m Teagan, I use she/her pronouns. We make up the Diversity and Inclusion team at Anchor, which is an organisation that specialises in housing and care for over 55s. So far in the podcast, we’ve been really focused on people and how we can behave in ways that are a bit more inclusive. This month, we’re going to broach a new topic for us, and I don’t think we’ve covered it at all so far?

H: I don’t think so.

T: And we’re going to take a look at AI. Artificial Intelligence. So, big opening question: how do you feel about AI? Do you like to use it? Thoughts?

H: So I’m going to out myself as an absolute Luddite here, because I really don’t like it.

T: I know you don’t!

H: This is going to be a fun one!

T: Yeah.

H: Yeah, I get really suspicious of it. It unnerves me. And I think mostly because I’ve seen it in like the arts?

T: Sure.

H: Being used. So- obviously there are so many ways that you can use it – some really helpful ways. One of the biggest, kind of, takes I’ve seen on it is where it’s being used to create art – as a replacement for like voice actors, to write books, and I hate that. So yeah, that feeling has just kind of spread to all of it very generally. I’m not a huge fan. Someone has to make quite a good case for me to warrant- you know?

T: Sure, and I think that’s totally justified. I can come at this from a completely different angle and say I love it.

H: Oh this’ll be fun!

T: I do! I really do love it, and I think actually it’s probably getting a bit of a bad rap at the moment. And I would say that AI’s not the problem, people are.

H: Yes, I think that is a very fair take.

T: I think people will always find a way to abuse things, systems, usually for capital gain, and I think AI definitely has its place in being used appropriately. And usually, a lot of the time, to work slightly more efficiently than we have done before. I completely understand people's nervousness around AI. My husband works in the creative industries, he's a designer, and he has, for the longest time, also been quite keen to see how AI can be used, what it can be used for, but he's absolutely under no illusion about how this could impact people's careers within the creative industry in particular.

H: Yeah.

T: I think – one of the things about AI at the moment is it’s very much in its infancy. And a lot of it isn’t great, to be honest.

H: Well it’s still learning, isn’t it?

T: Totally, and I think that's probably – well, I know for sure – that's something we'll get onto later on in the discussion, but it is in its infancy, and it's only as good as the stuff that people are actually putting into it. Obviously it can work a lot faster, and it can understand and manipulate information in ways that not every human will be able to, but I – I've got a lot to say about this so I'm quite interested in the discussion that we're going to have around it. But yeah, my initial thoughts and feelings on it are that I like it, I do find it useful, I think in the right hands it can be a very useful tool, and in the wrong hands it can be an absolute disaster.

H: Yeah.

T: If there's anything that people take away from my opinion of AI, it's that people are the problem, not AI.

H: That is interesting – oh this is going to be a really interesting conversation! Come at this from two completely different angles!

T: Well that’s good! I love a good in depth conversation about stuff. Especially when we’re not in agreement on it, I always think that’s really good.

H: I think it's going to be one of those today actually, because with lots of things we potentially agree with the crux of it but come at it from different angles. I think today's going to be a different one – I think we're going to be in actual disagreement.

T: I agree. I agree.

MUSIC

T: Okay so let's just dive straight in with article one, then. So our first article today is from CNN, and it's called "AI can be racist, sexist, and creepy. What should we do about it?" Great title, what clickbait! So bear in mind that this was written in March this year, so it's really recent, and at the rate that AI is changing, it probably means that some of this is out of date already, we don't know, and obviously we'll try and take that into account. So the article comprises excerpts from an interview with Reid Blackman, who wrote 'Ethical Machines'. He starts off by explaining that AI is essentially software that learns by example, and that we all help it learn as we interact with it, which is what we were saying at the beginning, wasn't it? Things like your camera apps learning particular faces when you put them into folders - love that feature on my phone, I must admit - filters on Instagram and TikTok, even those picture checks on websites to make sure that you're human: that's all in aid of helping AI learn things, like what various hazards and traffic signs look like for self-driving cars.

H: That one freaked me out, I’ll be totally honest. I feel like I’m just going to out myself as a conspiracy theorist in this episode, I’m like “I hate it, there are things everywhere!”

T: Blackman also talks about various creepy, or just generally unsettling, stories about AI. So, for instance, there was the story about Sydney, Microsoft's Bing chatbot, which essentially tried to get a New York Times writer to leave his wife. Wild.

H: Bold claims.

T: Yeah. He essentially says that there are instances where things are creepy but relatively harmless (unless you decide to take life advice from ChatGPT), but there are more instances that could be quite dangerous. He mentioned one time where a chatbot responded to the question "Should I kill myself?" with essentially "Yes, you should."

H: Yeah.

T: Oh God. So there’s definitely the takeaway that maybe not using chatbots for life advice is a good idea.

H: It’s a good takeaway, that. 

T: Obviously- But also the fact that OpenAI developers have explicitly said that it makes things up and will give out misinformation, so there are definitely some harmful applications of this.

H: Yeah. Which goes into what you were saying with like the tool itself is good and useful and helpful, so long as the people on the other end of it can use it responsibly.

T: Yeah, absolutely. The area where this probably gets a bit more explicitly into diversity and inclusion, then, which is what our whole podcast is about, is this area of bias, which we actually covered in our previous episode when we were talking about unconscious bias versus conscious inclusion, et cetera. One thing that Blackman mentions is that AI doesn't have a concept of truth – it relies on statistical probabilities learned from whatever has been put into the software. So whilst AI isn't intrinsically racist or sexist or et cetera, it learns to reinforce these ideas because they are present in what's on the internet already, and in people's preconceived ideas of what they're actually putting into the software. So an example that's given is when Amazon created AI resume-reading software to save time and money. The AI was given loads of examples of successful resumes from the last 10 years – I know where this is going - and the AI learned that, contrary to the intentions of the developers, Amazon had mostly not been hiring women, and started filtering women out accordingly. Hm.

H: Backfired!

T: Okay! Loads of tests were run to mitigate the bias, but eventually the project was ditched because it couldn't be sufficiently de-biased. So, I mean, in this case Amazon decided to ditch it because they knew it was going to be problematic in the long run – good! But there are other examples where things haven't been as thoroughly investigated as Amazon's was, and again, that can sort of perpetuate this idea of AI being racist, sexist, et cetera. So, thoughts on that, then?
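(To make that failure mode concrete, here's a minimal Python sketch with entirely invented data. A "model" that only learns hiring frequencies from past decisions reproduces whatever bias those decisions contained – the real Amazon system was far more sophisticated, but reportedly penalised terms like "women's" through essentially this mechanism.)

```python
from collections import Counter, defaultdict

# Toy historical hiring records (hypothetical): (resume signal, hired?)
history = [
    ("mens_chess_club", True), ("mens_chess_club", True),
    ("womens_chess_club", False), ("womens_chess_club", False),
    ("python", True), ("python", False),
]

outcomes = defaultdict(Counter)
for signal, hired in history:
    outcomes[signal][hired] += 1

def score(signal: str) -> float:
    """Naive P(hired | signal), learned purely from past decisions."""
    c = outcomes[signal]
    total = c[True] + c[False]
    return c[True] / total if total else 0.5

print(score("mens_chess_club"))    # 1.0
print(score("womens_chess_club"))  # 0.0 - bias in, bias out
```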

H: So, I’ve got a big – I feel like this is going to be a common theme here – but I’ve got a bit of beef where AI is used for things like resume-reading, or job adverts even, I would go that far. Purely because I’m very much of the opinion that if someone can’t be bothered to write it or read it, why should I be bothered to involve myself? So for example if someone’s not going to be bothered to read my resume, why have I put so much thought and effort into writing one?

T: Sure.

H: Because I’ve written it for a person, and for that to kind of get judged immediately by a computer or by a software? That doesn’t really match up with my approach to things like this, to be honest.

T: Okay.

H: I’m kind of a bit more of the opinion that if I’ve been bothered to write it, I expect someone to be bothered to read it.

T: Sure.

H: I can see how this would be really useful in terms of saving time and money, especially for like, you know, Amazon: they go through millions of resumes. But yeah, I personally struggle to see how it can be used helpfully when you’re going to meet these people anyway.

T: Okay, I’m going to use an analogy and hopefully it makes some semblance of sense. I think the rise in AI at the moment is probably just an evolution of where we were always going to be with the internet when that first became a thing. I guarantee there were conversations that were happening, very similar to this right now, where someone said, “I’m not going to sit on Google to find my answer when it’s written in a book.”

H: Yeah, yeah, that’s fair. 

T: If someone’s taken the time to write this book, I’m going to go out of my way and find the information that I need in the book. Completely justifiable, I get it, especially if that’s how things have always been, that’s how you’ve always gained your information. Did it stick?

H: So, yeah, I get what you’re saying, because we’ve got like the feature to search a book essentially, which is what the internet is, right?

T: Yeah, absolutely, yeah.

H: But someone’s still kind of consciously put thought into that – into the information that’s going in there. And I know – I know that this is going to probably come out to the same sort of a situation, like you say, it’s just kind of an evolution of it... Yeah. I’m still a bit nervous about it, I’ll be honest.

T: Do you think that humans are more capable of – this is a completely rhetorical question, of course they are. My question was going to be, do you think humans are more capable of understanding the nuance with something like a CV, for example, whereas a computer or AI might miss it, and that means that there’s not a truly equal approach to CV sifting, or the recruitment process, however that might look for people?

H: Yeah, I think there's – like you say, I think it's a rhetorical question – but yeah, I think... The bit for me, I think, is the accountability that's lacking. Because you can go into an interview and kind of be aware of the person that you're talking to, and on your resume, obviously, you sell yourself well. And again, maybe AI is just really well trained to notice this, but with a human reading it I think you kind of get a – you know who you're writing for. So I think it's more the fact that I wouldn't know that my resume would be read by AI and sifted through. If that was explicit, I think I would have less of a problem with it.

T: Oh, okay.

H: Because I would be able to manage that situation a bit more. I think the problem that I have currently, and I do think it’s part of the evolution of it, is that at the moment, things feel quite insidious and quite hidden and we’re using AI behind the scenes, and it’s not always obvious and clear and transparent in that.

T: Yes, I think there’s definitely issues around transparency.

H: Yeah, and that’s where my nerves kind of come in. So for example actually, the captcha things to identify yourself as human? It was only actually reading this that I realised that that’s training AI.

T: Ah, okay.

H: And that makes me quite unnerved, I'll be honest? Because like we're training it to recognise like stop signs, and stairs, and various hazards: people on roads, whatever. And the fact that those really basic, really automated checks are being used to train self-driving cars? That doesn't seem like a good way of crowdsourcing safety for such a dangerous feature? So yeah. I think it's the transparency that I've got an issue with, to be honest.
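(As a rough illustration of the point Henry's making – a hypothetical sketch, not any real CAPTCHA provider's pipeline – every solved challenge can quietly double as a labelled training example:)

```python
from dataclasses import dataclass

@dataclass
class LabelledTile:
    image_id: str  # the grid tile the user was shown
    target: str    # what they were asked to find, e.g. "stop_sign"
    clicked: bool  # their answer: does the tile contain it?

dataset: list[LabelledTile] = []

def record_answer(image_id: str, target: str, clicked: bool) -> None:
    """Each human CAPTCHA answer becomes vision-model training data."""
    dataset.append(LabelledTile(image_id, target, clicked))

record_answer("tile_041", "stop_sign", True)
record_answer("tile_042", "stop_sign", False)
print(len(dataset), "labelled examples collected, essentially for free")
```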

T: Okay, a couple of things that I wanted to pick up there. I feel like we're in for a long one, if you're listening, guys! I am just going to be a little bit out there in what I'm going to say now: do we think, though, in the long term, when AI is out of its infancy – when people are much more transparent about how they use it, have a better understanding of how to use it, and are using it with honest intentions – do we think that we'll see the end of late-stage capitalism?

H: No.

T: I do.

H: Ooh! Interesting, okay.

T: I do, I think that this could be – and the operative word there is “could” –

H: In theory.

T: In theory, this could be a way for humans to get back to what they do best, which is not working constantly, at the grind, doing all this stuff that feels really high effort, and we'd probably be given some space back to enjoy life – which, I think anybody who wants to have a conversation about late-stage capitalism will tell you, is absolutely not where we are at the moment. Our leisure and our enjoyment of life is very much limited by the world around us and how we've developed: this individual approach, and capitalism, et cetera. So I think, potentially, like I say, there is potential here for this to be used in a way where we actually regain a bit of what it is to be human, because at the moment we're doing a lot of the jobs of the robots, so to speak. Maybe it would be good to hand those back to them. Where this probably is not the best use of it, and where it won't work in the long term in terms of handing the jobs back to the robots, so to speak, is around creative industries. Because, I mean, have you seen some of the stuff that AI comes out with when it's-

H: Oh yeah. 

T: When it's related to creative industries? Some of it's absolute garbage, and you can see from a mile off that it's been created by AI – there are ways that you can start to spot if something's been written by it.

H: Like the fingers in AI art and things.

T: Yeah, yeah. And you can start to see that so I mean, ideally, I wouldn’t say that that is the best use of AI. Like it’s got its place, I guess, but I would say that it’s better to be using AI for things where we can start to think about giving a bit of joy back to humans.

H: Yes. See, I really agree with all of that – I feel like I’m absolutely hitting the cynicism mark here... My wariness, I think is the appropriate word, comes with who’s deciding what jobs are worthy of giving back to the robots. Like because actually if it’s being decided by the current people kind of in positions of power, that is the arts, and that is creation, and artists, and writers, and those kinds of jobs. So yes, I agree in theory. I think in reality it’ll really depend on who’s marketing it, and who’s buying it.

T: For sure. Blackman goes on to talk about ways that AI should be regulated, and I think this is particularly interesting. So he talks about the way that Microsoft have done loads of work around AI ethical risks and have involved senior leaders, but still rolled out their Bing chatbot too quickly, in ways that contradict their own principles. That was basically because they wanted market share and the benefits of getting ahead, and it shows us that "businesses can't self-regulate" when money is on the line.

H: Yeah, this is where my concern comes in.

T: Yes, totally. So therefore we really need robust protections at a governmental level, because the people using these AI tools aren't necessarily going to use them responsibly – which is pretty much what I said at the very beginning, isn't it? It's about the way that people use this software, not the software itself. So yeah, he mentions that AI is used to approve or deny mortgages or loan applications, it can be used to look at job applications, to decide how adverts are served to people, in self-driving cars – there are so many risky ways that AI can be used, and I think it's only sensible to make sure that some of that is regulated.

H: Yeah, definitely.

T: Whereas it's just not at the moment. So this last one sounds very neutral, but he mentions Facebook serving ads for houses to buy to White people and houses to rent to Black people, which is discriminatory. That's interesting.

H: Yeah, right? And yeah, this is the kind of – it’s so insidiously leaning in certain places.

T: Yes.

H: And yeah, well if that’s what it’s been taught to do? Yeah.

T: Yeah, yeah, it’s creepy. And I see this a lot with TikTok, for example. So you have something called a ‘For You Page’ on TikTok, if you’re not familiar, and depending on the type of person it thinks you are, it will show you certain videos. And there was a woman on there talking about her boyfriend and how their For You pages are so different.

H: Oh interesting.

T: And she's under the impression that he's a very well-rounded, lovely individual, in all honesty, and she said, "I've never seen my boyfriend go out of his way to look at content from, like, Andrew Tate," for example. And if you don't know who Andrew Tate is: he's been responsible for a massive shift in how misogyny presents itself socially, within schools, within general everyday life, and he's been a sort of figurehead for some really quite hateful, harmful rhetoric around women. But yeah, her boyfriend was getting Andrew Tate content, and she was completely baffled by this, while the type of stuff that she's getting is like prepping you to have babies and stuff like that. It's so pointed, and this is just who it thinks you are. And it will learn certain things – like the type of content I get is all around food, because I like cooking, so I go through mine and it's just recipes galore and that kind of stuff. But there is the occasional video that I get that I'm like, what on earth is this?
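(A toy sketch of the feedback loop Teagan describes, with invented data: a feed that only reinforces what you've already watched narrows to whoever it has decided you are.)

```python
from collections import Counter

# Hypothetical catalogue of videos by topic
catalogue = {
    "cooking":   ["pasta recipe", "bread tips"],
    "parenting": ["baby prep checklist"],
}

interests: Counter = Counter()

def watch(topic: str) -> None:
    interests[topic] += 1  # every view reinforces the inferred profile

def for_you_page(n: int = 3) -> list[str]:
    # Serve only from the single most-watched topic - the system has no
    # idea whether this is who the viewer actually is.
    top_topic = interests.most_common(1)[0][0]
    return (catalogue[top_topic] * n)[:n]

watch("cooking"); watch("cooking"); watch("parenting")
print(for_you_page())  # recipes galore
```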

H: Yeah, how have I ended up with this!?

T: How have I ended up here? This is not for me, you’ve got it wrong! But it’s almost like what’s it insidiously trying to imply when we’re going through these things? And I mean this is a classic example, so if you’re White, you’re buying a house, but if you’re Black, you’re renting a house.

H: Yeah.

T: Hmm. Very problematic. When it comes to situations like healthcare, where doctors and nurses are being recommended to pay more attention to White patients than Black patients – I mean that’s borderline criminal, isn’t it?

H: Yeah, yeah, 100%.

T: It’s really not great. So there’s already these incidences where a lot of this is happening, so what do we think about how we can then regulate this a little bit better, and the discrimination element that’s coming into it here?

H: Yeah. I think obviously there is a huge chunk of this in terms of regulation which, I will be honest, mostly goes completely over my head, but I really, really think there should be some in place. And again, for me personally, I would really like to see clarity on how it's being used and why it's being used. The bit that makes it unnerving or not, for me, is the intention behind it. Because, as you very rightly said, the tool itself cannot think this. It's got no concept of truth, it's got no concept of the nuances at play within society, it is just learning. So actually a huge amount of this comes from the intention behind it: if that intention is solely to save money, I'm suspicious of that.

T: Yeah, yes.

H: Like, no. I would much rather - and admittedly I'm in a position that can, you know, afford to do this - but I would much rather pay an extra 50p to ensure that no humans were being hurt in the making of my chocolate bar, you know? So yeah, I think that regulation is really important. I would like to know as well kind of where the intention's come from, because I think that will help at least to understand where the discrimination is coming from? And kind of add that level of critical thinking into the reception of it. So, for example, with the TikTok feeds, or like any For You page really, if you kind of understand why it's giving you what you're getting, you can go, "Oh, that's appearing because I find true crime interesting and it's picked up on those words and linked that to me," so you can think critically about it, understand where it's come from, and it's not an out-of-the-blue "We think you're interested in racism!"

T: Yeah. Really odd.

H: Because it also kind of toes the line for me between like there must be a decision from somewhere that wants me to see that? And whether or not that’s a conscious mind or not, I don’t like it.

T: No. Yeah, I would tend to agree with you there. There's got to be something around just massive transparency about how it's being used, what it's being used for, and, to echo your thoughts, if it's just for cash, maybe not? Maybe leave it where it is? Yeah, there's a lot of ground to cover. Like, I'm not very clued up on laws in other countries, but what I would say about UK law when it comes to internet usage, data protection, social media, et cetera, is that we are so behind the curve in terms of laws and protections for people when they are abused online, when they are a victim of cyber-crime, all this kind of stuff. There is not the law and infrastructure in place to support the citizens of this country around it. So all I'm saying is: don't hold your breath for us getting this regulation around AI done any time soon. Because we're not ahead of the curve with this at all.

H: And I think that’s kind of why I’m a bit reluctant to engage with it at all, is because I’ve yet to really see even the bare minimum of  what I would expect in terms of AI being used responsibly. And that kind of, if it’s not being nipped in the bud, it doesn’t fill me with faith for how it’s going to be used later when it’s more sophisticated, but we haven’t trained it to be more sophisticated? And also, we haven’t been trained to use this.

T: No.

H: And I think this is kind of where I get a bit nervous with like businesses using it, is because it’s all very well kind of using the tools, but- so for instance, with writing a job advert, I think it’s a really helpful tool in that situation, but if we hand it over completely, that’s a really dangerous thing? So, I mean, I admittedly don’t trust AI to do my spellcheck so I am coming at this from like a complete hate it sort of perspective, but it would be the kind of same as like letting auto-predict write an essay. And yeah, I think it’s useful as a tool to help us.

T: Yes. 

H: I don’t think it should be the opposite where it does the bulk of the work and we fix it.

T: I understand, yeah. I’m with you, I think it can be used to make you more efficient – to make you more efficient – not just do the job in its entirety for you and that’s that.

H: Yeah, with no checks, and just let it do its thing!

T: Mm, wouldn’t do that! Would not advise.

MUSIC

T: So our second article today is called “Baltimore teacher accused of using AI to create fake, racist recording of principal.” Wow. We are coming in with the big articles today! This case is essentially about a recording which surfaced in January this year where a school principal can be heard complaining about students and faculty members, and in particular making discriminatory comments about Black students and Jewish people in the community. Woah, okay.

H: Yeah.

T: In April, Baltimore County police revealed that they had acquired conclusive evidence – after forensic analysis, a second opinion, and further investigation – that the recording wasn't authentic. It was created by another faculty member – woah this is just –

H: I know, it gets really juicy, this one.

T: This is juicy! It was created by another faculty member in retaliation against the principal, who had essentially been investigating and talking to this person about performance at work and other issues – the article gives some more details.

H: Yeah, if you want the full goss, like read the article, because it’s great.

T: That is some piping hot tea, that, isn’t it?

H: Truly.

T: Wow, okay!

H: I initially started this like, you know, article about AI, this’ll be interesting! Got into it – I was so invested after like the first paragraph I was like “Ooh, tell me more!”

T: This is wild, really, isn’t it? So this person has gone out of their way to manufacture a sound recording. 

H: Yeah.

T: How have they done that?

H: So the person who was essentially arrested for framing the principal – they'd apparently used the school's network to search for OpenAI tools, which, you know, foolish anyway, using your school's network to do this! And he'd allegedly used 'large language models' that practice 'deep learning', which basically means pulling in data from various internet sources, taking text inputted by you, the user, and using that to produce conversational results. So it's essentially that kind of deepfakes thing?
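(For a feel of what that means at its very simplest, here's a toy bigram sketch – orders of magnitude simpler than a real large language model, but the same underlying idea: it emits statistically likely next words, with no concept of whether they're true.)

```python
import random
from collections import defaultdict

# Train on example text: record which word tends to follow which.
corpus = ("the principal praised the students . "
          "the principal thanked the staff .").split()

nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)

def generate(word: str, n: int = 8) -> str:
    out = [word]
    for _ in range(n):
        choices = nxt.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate("the"))  # fluent-sounding, but pure pattern-matching
```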

T: Yep.

H: But yeah, they’d done it on a school network which- rookie error, but.

T: Yeah, that is a rookie mistake! That is... That's extreme, I think.

H: Yeah, I don’t think this is what AI is mostly being used for, I wouldn’t say that.

T: No, I would agree.

H: I’ll give it that one!

T: Yeah, but it does just go to show how dangerous it can be in the wrong hands.

H: 100%. And as well, the information that it was given to create that situation is also going forward to help AI learn.

T: Yeah.

H: So that is absolutely not real! Like that is being created in retaliation for the purpose of vengeance!

T: And I just wonder what has actually, like you say, what’s been put into this piece of software now where this person’s gone, “Right okay, I need you to make a sound recording of my principal—” Throws in some recordings of the principal, “—and can you make him say X, Y, and Z?” And the X, Y, and Z that he’s put in is around Black people and Jewish people.

H: Yeah.

T: Is that not terrifying?

H: Yes!

T: And to then have this hanging over the entire school as well.

H: And the wider community as well, like there's so much in this. And I think that's the bit that kind of unnerves me: firstly, the ease – the accessibility, perhaps – that it was created with? All of this is open source. But also the difficulty that they had in revealing that it was fake? Like you were saying, it went through further investigation, a second opinion, forensic analysis, and it was only then that it was revealed as inauthentic. So if it's that easy to create and that hard to work out that it's fake, that- yeah, it feels dystopian.

T: We clearly need experts in this industry who are able to decipher what’s real and what’s not. That’s clearly going to be a job that’s in high demand moving forward. Because I don’t think AI’s going anywhere. I think it’s here to stay.

H: That’s it, we just need to get better at using it and detecting it.

T: For sure, yeah. So it’s like we’ve said, around that regulation it’s around having the people who are experts to decipher what’s real and what’s not, it’s around having people who are putting the right information into this type of software. But yeah, this, this story is wild. And like I say, it just goes to show how extreme AI can be in the wrong hands and how it can be used in the wrong hands, should I say.

H: Yeah.

MUSIC

H: So our spotlight today is a person called Arshin Adib-Moghaddam, and he is a Professor in Global Thought and Comparative Philosophies at SOAS University of London and Fellow of Hughes Hall, University of Cambridge. He’s a really – I feel like such a nerd here – a really cool professor! So he published a book called ‘Is Artificial Intelligence Racist?’ in 2023, but he’s also kind of currently directing the SOAS Centre for AI Futures. So what you were saying about kind of we need experts in this field, probably fairly swiftly.

T: Yeah, quickly.

H: Yeah, he’s in that space. So his other research interests are kind of topics to do with like global peace, political, social psychology, and the politics of power and resistance. Honestly, he just sounds like a really cool guy.

T: Sounds like he’s got a cool job as well.

H: Right? Yeah. But actually, I really want to go and read the book, because I feel like, as opposed to the incredibly click-baity articles that we've picked today, I really want to know more about this. Because I think, like, personally I am quite uninformed about a lot of AI stuff, so I'm scared of it. I would like to understand a little bit more. Admittedly I will probably still be scared of it, but at least I'd like to know where that comes from, you know?

T: Yes, exactly, I think it’s always good to have a wider understanding of how this is all going to work, like we’ve just said.

H: Yeah, and the practicality of it.

T: Yeah. It’s not going anywhere, so the best thing that we can do is understand it. And you can still have your opinions, obviously, whether you’re going to use it or not, whether you think it’s trash, you know, whether you think it’s the most harmful thing that’s ever been created, it doesn’t matter. That’s not the point. The point is that you’ll be informed.

H: Yes, exactly.

T: And I think that’s largely what a lot of people are missing these days; it’s being able to make informed decisions on things. So yeah, that’s really great, and I’ll definitely be doing the same and keeping an eye out for any work that he does in the future.

MUSIC 

T: So, questions that have come in for this episode, then. The first one is: if the developers of AI tech are diverse, would that remove the biases in AI? What do you think?

H: I'm imagining yes? And actually I think Blackman kind of mentions this in the first article, in that he says, with that regulation coming in, it's going to be essential that we have diverse voices doing that regulation. I guess having one voice training the AI is going to make that one voice louder; if you've got loads of voices contributing, then you're going to get better coverage.

T: Yeah. I think it’s a bit of a no-brainer. Like we’ve seen in healthcare probably, I would say, over the last 50 years, the more diverse that healthcare has become, and the people who are actually doing the work in healthcare, the better healthcare outcomes have become for people generally. So when you have more Black doctors in healthcare, you have better outcomes for Black people. When you have people who are from the LGBTQ community in healthcare, you have better outcomes for people who are in the LGBTQ community.

H: Wild, right?

T: You know? Like that’s literally how it works! That’s why you need diversity in companies, because it does allow for better business outcomes all round, no matter what type of organisation you are. So, I think it directly applies to this as well. If you are someone who has got a lived experience that’s different from the typical person that’s been inputting information into AI, you’re going to train it to do, to think slightly differently – I always think it’s hilarious that you can use the word ‘think’ with AI, because it’s not really thinking at all, is it? But you can put stuff into the software that’s going to be inherently less racist, less sexist, less homophobic, you’d hope. But it’s going to take a concerted effort to find the right people, to be putting the information that’s actually useful, not creepy, not dangerous, into AI and then we can hopefully get to a point, probably a long time from now, where it gives the best outcomes, not just the typical outcome, or an outcome that’s probably not that favourable to a lot of minority communities.

H: Yeah. I do find it really interesting as well though how if you’ve got a more diverse group of people creating it, developing it, you’re going to create essentially one entity that is really diverse.

T: Oh yeah.

H: And that feels really strange because you’re going to get like billions of voices coming from the same thing, which will make it, I imagine, really inconsistent? 

T: I bet it would, yeah, because there’s obviously going to be clashes in belief systems, in the way that people do things, in cultural differences, all of this stuff, so it, I suppose, it’s not going to be a truly fully intersectional being, is it? That’s never going to happen, but I guess the point is, like I say, to probably have the-

H: Evens it out a little bit.

T: Yeah, and like just better outcomes. The wording there is better outcomes, you know, it’s not-

H: Or at least more thorough outcomes, so even if it is presenting the potentially racist, misogynistic views, at least that might be tempered with a “This is this view, another view is...” I don’t know. But, so yeah I guess like better outcomes would be more well-rounded, clearer?

T: I think there's something to be said as well around the current demographics of people that are working in tech, and the current demographics of people who are working in creative industries, et cetera. We know that actually the world of tech is still very much White and male. And I think that there are steps and strides that need to be made in tech as a whole before we start to tackle the problems with AI, if I'm being completely honest.

H: Yes, yeah, I think I agree.

T: I think there’s got to be some movement there, pretty sharpish, which you would hope would have that trickle-down effect to how we can better improve AI software as well.

H: Yeah, definitely.

T: So the second question that we’ve had today is around: how can AI bring communities together? What do you think? Like I’m thinking about it in a context of maybe like translations?

H: Yeah, that would make sense. I'll be honest – maybe it's my scepticism showing here – I can't think of many other ways? Other than maybe points of discussion. Because I – I mean, admittedly I've been in spaces where we've looked at – or like me and my friends perhaps have looked at – a piece of AI-generated art, and kind of – this sounds so nerdy – used it as a bit of a conversation starter, and been like, "Oh, it's interesting that it's thought this where we asked that, and, given the prompt that it was given, interesting that it came up with that." So I guess that's a point of bringing communities together, using it as a bit of a conversation tool? But yeah, I guess translation would be the obvious one. Actually, the one I will hold my hands up and admit that I do use is Google Translate.

T: Yeah yeah, sure.

H: Admittedly I am still very sceptical of that, because I know that it’s really bad with idioms, and some of the translations I’ve seen on there are... wrong.

T: I, yeah, absolutely – I'm being catapulted back to my A-levels now, when I was doing French and being like, "Yeah, this is not right!" And then your teacher being like, "Please. Do not put this through Google Translate. It is wrong." Because again, it's only as good as the information that's being put into it by somebody else, obviously. But yeah, I think it can help with language barriers within communities, and, possibly – could it be used for things like – so you know we said a couple of episodes ago, sometimes it takes a bit of reflection to know what you want to say in a kind of discussion where you've got different beliefs?

H: Yeah?

T: Could it be used in instances like that, so if you’re not potentially a person that thinks on the spot, about how you want to try and get someone around to your way of thinking, or trying to make them see it from your personal experience and your point of view, is there a way that you could use AI to kind of-

H: Practise that? Yeah.

T: Yeah, yeah, kind of like dump your thoughts into it, almost, and help – the AI can then like kind of sift through and help you organise them a little bit better, I guess?

H: Ah, that’s an interesting one, yeah, because if, again, if you’re using it as a tool to help like it’s not like you’re kind of going, I don’t know, “Chuck it in there, you do the work for me,” like you’re using it – working with it.

T: Mhmm.

H: Yeah, because actually things like – is it Google Docs? Has a diversity reader?

T: Does it?

H: Yeah, it was Google Docs or it might be Microsoft? Where, so for example if you’re typing on – I think it’s Microsoft – typing on a Word document, and, for example, you forget to capitalise the B in ‘Black’.

T: Yep.

H: It’ll kind of add on to that and say kind of “for inclusivity reasons, this is usually capitalised.”

T: That’s great!

H: So yeah, I guess that could be really helpful, because if – so for example, when I’m writing, that might not be something that I’m picking up immediately, doing naturally, but the more that it corrects me, the more it is on my radar. So yeah.
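(Something in the spirit of that feature can be sketched in a few lines – a deliberately naive, hypothetical version; real tools like Microsoft Editor use far more context than a pattern match.)

```python
import re

def flag_lowercase_black(text: str) -> list[str]:
    """Flag 'black' before people-words, where 'Black' is usually meant."""
    pattern = r"\bblack\b(?=\s+(?:people|students|communities|doctors))"
    return [
        f"Position {m.start()}: for inclusivity, 'Black' is usually capitalised here."
        for m in re.finditer(pattern, text)
    ]

print(flag_lowercase_black("Support for black students and black communities."))
```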

T: That’s really useful, actually.

H: That’s that one. 

T: Well done, Microsoft.

H: Yeah, well done Microsoft!

T: Yeah, I agree, I think that if you’re going to use it with good intentions, you’re not going to use it to just do the work for you, you’re working with it – I quite liked it when you said that: you’re working with the tool, not the tool just doing everything for you – then you’re probably going to get a fairly decent outcome, I would say.

H: I’m just thinking as well, things like voice recognition software.

T: Yes!

H: It's appeared on like Teams and things, and in our work calls. But actually it's really helpful – or could be, when it gets better at recognising accents – really helpful for like D/deaf communities, hard of hearing communities. It gives you that kind of subtitles, essentially – live subtitles – which is quite useful.

T: Absolutely.

H: And that’s AI, I guess?

T: Yeah! It’s in the same field of it, isn’t it? And I think that’s a really great point as well, that we’ve probably not touched on quite as much, is- 

H: Good for accessibility features.

T: Great for accessibility. Yeah, absolutely. And trying to give a bit of equity back to people who historically have struggled with communicating when they don’t have the right tools available. So yeah, I think that’s definitely very useful. 

MUSIC 

T: Okay, well, good conversation! We thought it was going to be, especially when we were coming at it from really different points of view – I always think that makes a really good conversation. Hope you have enjoyed it if you’re listening at home. Thanks for joining us for this episode of D&I Digest. Remember that you can follow us on our website and social media, and we hope you’ll come back and listen again next month. So it’s bye from me.

H: And it’s bye from me.

Both: Bye!