
We Get Real AF
Ep 08: Tania Duarte, Co-Founder of We & AI: The Ethics of AI
Artificial Intelligence (AI) tracks us at airports, assesses job candidates & more. How are ethics being applied to this fast-evolving tech? We speak with Tania Duarte, founder of the U.K.'s We & AI, educating & empowering people to have a voice on AI.
Find Tania Duarte Online:
LinkedIn https://www.linkedin.com/in/tanduarte/
Twitter: https://twitter.com/TanDuarte
Find We and AI Online:
Twitter: https://twitter.com/weandai
LinkedIn: https://www.linkedin.com/company/we-and-ai/
Instagram: https://www.instagram.com/we_and_ai/
Website: https://www.weandai.org
Referenced:
UN Global Pulse Project: https://www.unglobalpulse.org/project/using-radio-broadcasts-to-augment-early-detection-of-health-risks/
IEEE: https://standards.ieee.org/news/2017/ieee_p7004.html
Montreal AI Ethics Institute: https://montrealethics.ai/
We Get Real AF Podcast Credits:
Producers & Hosts: Vanessa Alava & Sue Robinson
Vanessa Alava
LinkedIn: https://www.linkedin.com/in/vanessahalava/
Instagram: https://www.instagram.com/vanessahalava/
Twitter: https://twitter.com/vanessahalava
Sue Robinson
LinkedIn: https://www.linkedin.com/in/sue-robinson-29025623/
Instagram: https://www.instagram.com/memyselfandfinds/
Twitter: https://twitter.com/sociallysue_
Audio Producer/Editor: Sam Mclean
Instagram: https://www.instagram.com/mcleansounds/
Website: www.inphase.biz
Audio Music Track Title: Beatles Unite
Artist: Rachel K. Collier
YouTube Channel: https://www.youtube.com/channel/UCiHnYgtOn8u9YovYplMeXcw
Instagram: https://www.instagram.com/rachelkcollier/
Website: https://www.rachelkcollier.com
Intro Voice-Over Artist: Veronica Horta
LinkedIn: https://www.linkedin.com/in/veronicahorta/
Cover Artwork Photo Credit: https://unsplash.com/@alicemoore
We Get Real AF Podcast Online
Instagram: https://www.instagram.com/wegetrealaf/
Twitter: https://twitter.com/wegetrealaf
Facebook: https://www.facebook.com/wegetrealaf/
LinkedIn: https://www.linkedin.com/company/wegetrealaf
Website: https://wegetrealaf.com
Support the show (https://wegetrealaf.com/how-you-can-help)
Sue Robinson
Artificial Intelligence, or AI, is a term you hear a lot about these days. It powers Siri and Alexa, helps doctors detect diseases, and even helps people find love in the case of dating apps. But in the process of doing all these good things, it's also gathering very personal information about you, sharing that information, and making decisions and assumptions about you. This has huge implications for personal privacy, social justice, individual freedom, and even national security. Yet there's no uniform global set of ethical standards around this transformative technology. Facebook was called out a few years ago for using AI to monitor people's posts for suicidal content and evaluate their mental state. In China, classroom cameras and AI monitors watch children's attention to gauge their interest and aptitude for certain subjects. So should we be worried? We're diving into the importance of ethics in AI, and how we can all be better informed, with today's boss babe joining us all the way from London, England: Tania Duarte, with the public awareness and education group We and AI. Welcome, Tania.
Vanessa Alava
Welcome.
Tania Duarte
Hello. Thank you very much. That was a great intro. Delighted to be here.
Sue
Thank you. We're so delighted to have you. And before we get into some of these difficult questions, we would love for you to share a little bit about your background and how you came to We and AI.
Tania
Thank you. So I've been in industry one way or another for 30 years now, but a lot of that has been in marketing and business consultancy, and I've always been really interested in innovation throughout that. For the last four years I ended up working, often working, with some tech startups for a business magazine and business events company. We were looking at disruptive tech and disruptive business models, basically emerging technologies. And we started off four years ago on the journey of educating innovation leaders about AI and how they should implement AI in their business for better efficiency, etc. And during that journey, I started to uncover more and more things which were making me uncomfortable. I think after however many years working in business and also in marketing, I was starting to feel: was I on the right side of history in incentivizing people to rush ahead with technology, sometimes almost for technology's sake? And I just got to the stage in my life where I felt incredibly impassioned to do something about it. The tipping point for me was learning about some of the gender case studies where AI has really amplified bias against women. Of course, since then I've learned that that's just the tip of the iceberg. But after however many years on this planet feeling and hoping that we were going in the right direction in terms of equality for women, suddenly realizing that it can all be rolled back in an instant, really, really quickly, and that nobody seemed to know about it, was a massive wake-up call for me. So yeah, I quit my job, gave six months' notice, and started thinking about setting up a not-for-profit to see how I could use those marketing skills and my background in communications to help spread awareness about some of the issues around AI.
Sue
Well, first of all, kudos to you for having the courage to see something that concerned you and actually make a major career shift to do something about it and to be an activist. And then, if you would, just sort of define for our listeners what AI is on kind of a high level, because I think we all have sort of an idea, you know, that it's our smart devices or whatever. But could you just provide us with sort of a definition of what it is we're talking about here when we're referencing AI?
Tania
That's such a good question. And the more expert one gets in AI, I think the harder one finds this question. I wouldn't put myself in the expert category, but certainly if you ever ask someone who's creating it, they'll spend about an hour telling you what it's not. So to try and make it as simple as possible, we define it as not just artificial intelligence, but also anything that uses big data in a way that is too complex for people to do themselves. So it's kind of data-driven technologies: artificial intelligence, basically machines and programs working in a way that is so complex, with so many inputs, that it's beyond the power of a human to do it. That's a very layman's-terms way of describing it. But I think that's important, because you can start getting into definitions of, you know, neural networks and deep learning, and some people get very hung up on what's real AI and what's just automation, what's just algorithms. To some extent, I think that doesn't matter too much. It matters if you're using it for a marketing campaign and you're trying to hoodwink people over what you're selling them. But in terms of us understanding what AI is, I think it's more important to think about the fact that it's about big data, and about models that are using it in a mathematical, statistical way, and what implication that has. I think it's much more important to think about the implications.
Vanessa
Can you share with us some examples of how people currently experience AI, and the potential ethical vulnerabilities they may be exposed to?
Tania
So before I do that, I also want to say that I don't in any way want to come across as a kind of technophobe, or someone who is anti-tech, because I think for a lot of people, in order to understand AI more, which is definitely the aim of We and AI, you actually need to understand the good things as well. Because otherwise you just think, why are we doing this, right? People get scared, and there's this whole thread going on about computers, you know, robots are going to steal our jobs, and the technological singularity. I think if people are coming from a position of fear, then they're not engaging, and what we really want is for people to engage with being part of saying what kind of AI future we want. What's good? So I would preface this by saying that there are lots of vulnerabilities, as you allude to, but it's in the context of developing technologies that have huge potential to really deliver a lot of transformational and positive change. However, yes, on the way there, it relies on a lot of data and a lot of modeling. And the problem is that the data is often not as representative as it should be, and the models are not always made with as much thought about the data as they should be. And sometimes the whole project is flawed from its very conception, because as humans, we're flawed, and we have data that is biased, and we are biased. So all of these things together have led to some case studies. Some of them you mentioned at the beginning of your intro, but quite topical ones now, I think, are when airports were scanning people, basically taking your temperature to check if you had COVID or not. So you had to hold a handheld thermometer, and images of people holding these handheld thermometers were being flagged by machine learning.
So by computer vision, if it was a white hand holding a thermometer, it was flagged as a thermometer or a kind of digital device. If it was a black hand holding the thermometer, or dark skinned hand, it was being flagged as a gun.
Sue/Vanessa
WOW
Tania
And so this is the kind of thing that I mean: the machine basically learns to interpret images, or interpret data, based on the other sets of data that it has, and it draws correlations. It teaches itself to make connections. In the same way, there was a program that was flagging dogs, identifying dogs with computer vision.
But some of the images were actually wolves on snowy backgrounds, which meant that some dogs, if they had snow behind them, were being identified as wolves, not dogs. So what the machine is doing is learning from the context and drawing its own conclusions: snow means wolf, as opposed to thinking about the features of a dog and how that differs from a wolf. It's this kind of noise, and a lack of ability on our part to correct it, in some cases, until it's too late. So in these instances where it's obvious, we can look at that and say, that's not a gun. But if it's being used in systems where there isn't anyone overseeing that, or anyone flagging that, then it's making decisions. And this is what happens in systems more hidden than computer vision. So for example, in recruitment, there have been lots of case studies of recruitment platforms learning which people get which jobs and trying to optimize candidates for employers based on previous successful candidates, using historic data, which is always a big problem. Historically, women often aren't shown high-paid jobs; they don't have them, so why would you show them? But then, of course, that's where the amplification comes in, because then, guess what, you don't get women applying for them. In those situations it's very hard to see: if you didn't get shown that job ad, you don't know you didn't get shown that job. Right? So it's invisible. It's really difficult to know that you're being discriminated against, and that's why part of our mission is to make the invisible visible. And there are two issues with facial recognition in this context. In the context of race, one of the reasons people complain about it is that it has a very, very high false positive rate for Black people, for people who are darker skinned; the computer cannot get good matches very well.
I think up to 98% false positives, or something like that. But there's also a wider issue with facial recognition, which is how it's being implemented in the first place. So some of the concern is over the accuracy, and some of the concern is over what kind of neighborhoods it's being used to police, and how police are using it as part of a bigger systemic justice system.
Vanessa
I actually have a quick follow-up question. Well, maybe it doesn't have a quick answer. But I'd love to know, and I'm sure our audience would too: obviously, you have these computer systems that are searching internet research, and that informs how the AI works. How do humans come into this? How does the human component come into it? When you mentioned the dark-skinned hands and the thermometer that automatically gets recognized as a gun, is that the research that the computer is pulling from the internet? Or is that actually a human putting that into the computer's quote-unquote brain?
Tania
Basically, yes, there are humans involved, certainly with supervised learning, where they basically grade the AI. And it's a big job. There are whole farms of people, often earning very little money, looking at really dull images, sometimes just saying yes or no. And I think the human element in training AI is not widely recognized; you do need armies of people. And there are also, again, slightly contentious ways in which all of us are training AI, sometimes without knowing.
Vanessa
Yes. We talked about CAPTCHA a lot, which Sue brought to my attention. She was like, yeah, we're teaching the computers, and I was confused, like, what are you talking about? And she's like, yeah, you know, the pictures where it's like, okay, out of this picture, basically broken up into squares, pick which pieces are a traffic light and which pieces aren't. And I'm like, oh my gosh, we're training computers. You're right.
Tania
Yeah, I thought I was helping the world be really safe and secure; actually, I'm training bots. I find those tests really hard, and the reason they're quite hard is because we're trying to show a robot, or machine, in what different circumstances a traffic light could not look like a traffic light but still be a traffic light, and that needs humans to do it, because it's really tricky. I mean, how many times have you seen the ones where they're kind of road signs, and you have to look at it twice? So yeah, that is one way we're training machines. Another way is with voice recognition. There have been a few scandals over this, where we've been told that Alexa is not listening in to everything we say, only when there's the "wake" word, but then there are stories that, in actual fact, in order for the researchers to know that Alexa is understanding things correctly, they have to listen. And so, whether it's around the wake word or not, if they're not checking that Alexa is understanding us, they can't make Alexa better. So on the one hand, me personally, I definitely want Alexa to get better, because of the number of times it comes up with absolute junk in response to what I just said. And I think, okay, the machines are not taking over anytime soon; I just asked for a certain song, and instead I got a recipe or something. I kind of want it to get better. But on the other hand, it's very important that we're aware of exactly when humans are involved. So as much as it's important for us to know when machines are involved, it's also important for us to know when humans are listening in as well.
Sue
Well, it's interesting that you're talking about bots, because I literally just listened to another podcast this morning where they were talking about artificial intelligence and how bots are being trained to be smarter, to listen for voice inflection, to be able to serve people better in a customer service sort of role. But over time, as their voices become more normal and human-sounding, and as human avatars start to look more realistic and are associated with a bot, at what point is it unethical to not really know whether you're talking to a real person? How do you come up with a uniform system for AI ethics?
Tania
Oh, that's the million-dollar question, isn't it? Or trillion-dollar, probably. There are 160 different frameworks that have been produced by some amazing bodies, whether it's the IEEE standards or, you know, international bodies. There are so many, including ones done by the big tech companies, who are working really hard in this area. Quite often it's very easy to blame big tech for everything and call for greater regulation, but actually, I think quite often some of the people most involved in technology are those who want the greatest amount of clarity on what they're allowed to do, because they are aware of the power that they have and need some help with it, basically. There are some steps forward internationally, because the other thing is, ultimately the best way we're going to ground this is internationally, which is very difficult. You gave a case study at the beginning of abuses in China and how things differ across the world; it's a very hard thing to get international agreement on. But there are some moves forward: there's recently been a Global Partnership on AI announced, where the EU, Britain, Canada, the US, various countries, are coming together to try and get some standards. But there are also everyone's own geographical regulations and rules as well, and I know people are lobbying. But I think a lot of the time not enough people know enough to actually inform the policymakers, and often the policymakers don't know enough. So that's why we really want to try and bring the voice of the public in. And very importantly, when we say the public, we also mean business people, as people, as citizens working in organizations. So whether you work in a tech company, or you work in a business that uses technology, you also are a parent, you also use the healthcare system.
You live in certain neighborhoods, and these issues affect us all. Not only do we have a right to get involved in discussions about tech, which is really difficult, because you always feel like it's going to go over your head or that you need to be some kind of super smart geek, but actually, no, technology is all about human input and human outcomes. So not only do you have that right to get involved, you have a responsibility, because it really is going to be our future, AI particularly. And everyone has that responsibility to try and put pressure, whether it's internal pressure in the business you work for, or whether it's asking questions of businesses as a consumer, or whether it's getting involved in politics, or helping out with groups representing underrepresented people who may have their data rights eroded. I kind of feel everyone's got the responsibility to at least know how it affects them.
Vanessa
Mm hmm. I have a question. When we initially spoke to you about being a guest on our podcast, you gave a shout out to Canada for the strides they're making in AI ethics. Can you highlight some of those things that they're doing to bring awareness to AI ethics and what we can learn from them and their progression?
Tania
Well, I think Canada as a nation has always taken AI quite seriously, and I think out of that there's a really busy ecosystem developing AI in general. And the Montreal AI Ethics Institute has really been at the forefront of looking at the ethics side of that. But there are also lots of people looking at different approaches, design thinking, for example: how you actually design services and products to think about ethics at the beginning, rather than go "oops" at the end.
Sue
I was thinking, yeah, it seems like this needs to start to some degree at the education level. The people who are writing the algorithms, who are being trained as software engineers, have to have not just programming understanding, but ethics training as well, right? To at least know that they should be questioning their assumptions. And is that happening?
Tania
I think it's not happening nearly as much as it should be. And we've seen the power of employee activism and employee engagement and involvement in some of the changes that are happening in some tech companies at the moment. I mean, Facebook employees are obviously being very, very active at the moment. So it doesn't have to start at the top. But what would be great is if, at the top, there was an ethics committee and there were ethics policies. Lots of companies have these; they'll have their ethical principles. What is often missing is the gap between those ethical principles, which look great and sound great, and the infrastructure or processes to explain to the people who are not just developing, but researching, planning, designing, and implementing, what those principles mean in practice. And it takes support, because what you're doing is slowing down the process: if you can deliver something quickly that hasn't had to be thought about by lots more people, that hasn't had to be de-biased with different data sets and remodeled, then the commercial pressure is always going to be to ship quickly. And that is one of the problems we have in tech at the moment: it happens so fast. While you're moving that fast, it's very difficult to find the space and the mandate to be thinking about ethical principles. And so, in order to get big change happening within companies, we need to show that there is either a compliance risk, where companies are going to lose money through fines, or a reputational risk. A lot of these case studies we've talked about break international conventions on human rights or anti-discrimination rules. But there's very little recourse, certainly in the UK, when that happens, partly because people don't know it's happening.
But partly because it's just not very easy to do anything about it. So while compliance is not that much of a risk, we need to encourage people to make it more of a risk by pushing for the legislation you talked about earlier, legislation more appropriate to the ways technology is used now. But we also need to be acting as consumers, in terms of showing companies what's going to get them business and what's not. So I think we've got to look to ourselves. Yes, we've been conditioned to consume goods in certain ways. From a tech point of view, you know, we don't like to read terms and conditions, and we don't like to have to click too many buttons when we're using a product. We like it to be seamless. We like it to be personalized. We like it to work really efficiently. But I think we also have to think about what the implications of that are, and how we behave, and take some responsibility.
Sue
Yeah, you know, this animal has so many different tentacles to it. And I think you make a really good point, because when you talk about legislation being a solution to this, I don't see how that's possible. The legislative process is like a cement shoe, and the technology is evolving at a pace that's like a running shoe, right? You're trying to put the cement shoe on one foot and the running shoe on the other, and it's just not going to happen. So in my mind, it has to be either liability driven, making companies really understand the liabilities, because they are charged primarily with answering to their shareholders, or market driven, to your point, Tania: consumers have to step up to the plate and just take the time to say, wait a minute, why do I have to complete the CAPTCHA in order to use your website, or whatever. You know, it's hard to know where to begin as a consumer. And so that would be my next question to you: what are some practical steps that the everyday person can take to get involved in this issue, both to educate themselves and also to have a voice?
Tania
Sure. This is obviously very close to my heart; it's what we're hoping to do, to give people more routes to get involved. And I think the first step really is just learning a little bit more about AI. What people have been learning through the recent questioning of systemic racism is a really good example of stepping from that place of thinking, well, I'm not racist, that's good enough, to thinking, actually, I need to do a bit more. And if we apply that across all elements of our lives: not only are we not doing any harm, but what good can we do? How can we think about how people other than ourselves are being affected? I think that first step is a bit of a mind change. And for me, I'm a classic example, and maybe this is why it hit me so hard. You know, I've always loved technology. I love the personalization. If an ad follows me around the internet and it's something relevant to me, something really cool that I might actually really like, I'm going to want that ad much more than one for something completely boring that holds no interest for me. So I went through a phase of, "Hey, have as much data as you want; make it work for me." But what I wasn't seeing at that time was: well, that's fine for me, because I know how this works. I know why ads follow me around the internet; I know that cookies on my computer are basically working out my journey and what my preferences are, etc. So just that bit of knowledge means I can resist a little bit more, because I know that I'm the subject of a campaign. And I think that's a big step forward.
Secondly, I'm privileged enough that if I do go and buy whatever it is, I'm unlikely to get myself into real dire financial straits, whereas if, A, you're having to be more careful with your money, and, B, you're not aware it's happening, it's very easy to be preyed upon. So that was a big one: I realized that some people are getting into debt, or thinking it's serendipity that they should buy this special dress, because it's like a sign that it's following them around, because they're not seeing how they're being manipulated. So I think the first step is to understand how you might be manipulated, and then make other people aware of that. And some people, as a result, may choose to turn off their cookies. You can go to your settings, and it means that the websites you go to can't see what other sites you're visiting, what your history is, what kind of person you are. Some people may well choose to do that. As I said, me personally, I won't, but I think most people don't know that's an option. So, small things like that.
Another thing, when we use social media, is being critical, just questioning a little bit some of the things that we see. Deepfakes, for example. That is when we create images and sounds of people that look like it's them, and it's not them, and you cannot tell what's a real person and what's not. There have been horrible cases of people using deepfakes aggressively, you know, putting people's faces into porn videos where you cannot tell it's not that person. Now, these things only work if people are viewing them. They only work if we're sharing them, if we're giving them breathing space. So I think it's also about being quite critical and conscious, and before you press that share button, thinking about how ethical things are. It's very easy to see a story and share it. And the other thing is the whole issue of fake news and curated news, and ending up in an echo chamber where your news is only coming from one source: maybe questioning that and stepping out, asking, have I got a balanced point of view here? And then there are the gimmicks we often see on social media. Little stories like, you know, AI that can tell if you're gay or not, which is hugely contentious; lots of people would argue it's not actually correct, that it doesn't work, but it's the kind of thing where people go, "Hey, this is a cool new thing, let's share it," and we understand the motivation there. So it's just having that questioning mind. And then for people who work in businesses, or really for everybody, it's about asking questions, questioning things in businesses. Asking the small questions, such as: have we got ethics principles?
If you have, then questions such as: well, what does that mean when I'm planning this? The small questions are good, but at some point also maybe the big questions, like: should we even be building this? That takes a lot of confidence, but it's about building up to it. And it takes a bit of education, in the same way that we're seeing people now educating themselves on Black history who previously had not thought it applied to them. Again, I say this is a great thing, and we also need to be doing it for lots of different interest groups. You know, how does this product that I'm using work for people with disabilities? How does this product I'm creating work for people who have accessibility issues? Let's ask the question.
Vanessa
Tania, you bring up several great points, the common thread being that we all need to hold ourselves accountable to a higher standard, and not just ourselves, but our employers, our family members, our friends. If we want to see change, we need to be part of that change. And sometimes actually asking those questions is hard. It's hard being the person in the room being the squeaky wheel. I'm usually that person, very squeaky; I've always been the question asker, and I've always been, you know, standing up for the little guy or saying, hey, this doesn't feel right. And sometimes people look over at you like you have a third eyeball. Or maybe they're looking at you like, I was thinking the same thing, but they just don't have the voice to say it. So it's hard being that person, but it's so needed. And Sue and I talk about this a lot: I have faith in the next generation. This young generation has such a beautiful voice; they stand up to things, and I hope that we continue to see that.
Sue
And I also think there's a certain element of, honestly, laziness, right? Or maybe the better word is overwhelm. People feel overwhelmed by their technology. The reason we don't know how to turn off cookies on our computers and our phones is because we're so busy, or we feel so busy, and those things aren't easy and intuitive to do. And so we don't take the time to stop and figure it out. And our inaction is then our complicity in creating this ecosystem where a few who understand the AI are designing it in a way, whether intentionally or not, probably a combination of both, that produces algorithms that are biased and systems that ultimately will bite us all. So we can't afford to be complacent, you know, lazy, or whatever the word is. We can't afford that.
Tania
Yeah, I agree. And on the bias issue, another really important thing we can't afford to do is not question diversity within the industries we're in, in technology. I haven't said much about it, but it is one of the most important things. That's why a lot of bias creeps in: if we're all sitting around a table with the same experiences, it's very difficult to think about how things might affect other people. So the more diverse the people you can get in a room, the better your critical thinking becomes, and the easier it becomes to ask those questions collectively. So I think that's definitely a challenge for everybody: to ask, "Am I doing enough to encourage diversity in the workplace, and in tech particularly?", especially when we come to AI.
Sue
That is a great, great point. And we talk about diversity and inclusion a lot on this show. You're right, that's a good, practical step businesses can take: really looking for diverse teams to come in when they're designing their products, because that's when they'll get those viewpoints they may not think of on their own. So to me, that's a great takeaway for any business owners or HR departments who are listening right now.
Vanessa
Well, even if we're talking about market share, sadly, if that's the only thing they're looking at: if you have a diverse group of people working on a product or a service, you're going to attract a wider group of consumers. So in my head that's just pragmatic and makes sense.
Tania
I think you're absolutely right. And actually, one of the challenges we have in talking about AI bias particularly is that we get very hung up on the fairness issue, for obvious reasons; it's very close to our hearts. But actually, we should be talking more about the question, "Do you want to make a robust product that does the job it's meant to do?" Because when you start getting errors and bias, they're not necessarily just gender bias or racial bias. What it's saying is: you are treating one group of people incorrectly, you're incorrectly weighting something, this is not a robust model, and therefore your product is not as good as it should be. So it becomes, or should become, a business issue.
Vanessa
I feel like we could continue talking about this for hours and hours. It's one of those things with tentacles, as Sue was saying earlier. We really could!
Sue
Yeah, this is a fantastic conversation. But to be respectful of your time, and because there's no way we're going to solve all of this today, we'll go ahead and jump into our lightning round, Tania. These are questions and thoughts we ask all of our guests to share so we can get to know them on a more personal level. I'll start us out by asking you to finish this sentence: Women are…
Tania
Diverse. I find it difficult sometimes when women get grouped in any way. So I think it's really important to say that we all have the ability to contribute and behave and achieve so many diverse things that I found it quite difficult to pin down to one word, other than to say: diverse. You can't limit us, I think.
Vanessa/Sue
That's beautifully said /Unlimited
Vanessa
What are 3 pieces of advice you'd give your younger self?
Tania
So this one hurts a little bit. I'm probably thinking of my much younger self, in my early 20s particularly, when I worked very, very hard in a quite macho, male-dominated, and rather old-school environment. I think "don't seek validation from others" would come into it. I was doing a lot of things because I was really seeking external validation, and what I've learned over the years is that that gets you nowhere. It made me a bit of money in my early 20s, to be fair, but from a personal development point of view it really wasn't good, I think.
Secondly, related to that, is: don't judge others. Because when I got sucked into that, and I hate to say this, it does hurt, I remember, for example, seeing women who were pregnant or who had kids leaving work early, and we had this horribly toxic work culture, and in my early 20s I might be like, "That's outrageous, they're always out. Yeah, morning sickness again." And that was because I was in a culture that encouraged judgmentalism. And now I see it: "so-and-so's not working hard enough," and you don't think about what's going on in their lives. You mentioned the younger generation, and I think that's what's so beautiful now, that this doesn't happen so much. I hope lots of people aren't having to learn that now; it's just part of who they are. But part of that judgmentalism, and the seeking validation, was also because I was being very hard on myself, so I was kind of expecting life to be hard, you know.
Be kind to yourself is my third lesson, because I think part of it was about actually just allowing failure, or, you know, humanness: not having to be a machine or always right, just relaxing a little bit and being kind about your own achievements, which helps you to be kinder to other people as well.
Sue
100%, and I think those toxic mentalities, being judgmental and looking for outside validation, always end up pitting women against women. Have you noticed that?
Vanessa
Good point, Sue, good point. You need a good group of mentors and you need your tribe, but you should have confidence in yourself. Because even though you're getting those weird looks sometimes, again, the third eyeball, you could be the only person in the room saying it out loud. So yeah, absolutely. I love that.
Sue
I do too. Okay, what is your current favorite application of tech for good?
Tania
Well, I love this question, because just this morning we had a group of five students from the University of London, who've done a little research project with us on AI-for-good case studies, come and introduce me to a brand new one. So this really is a very current new favorite, and I'd love to give them a big shout-out for the work they've done: they were completely new to AI, and they're helping us understand how students perceive AI. One they picked out as part of the project was Pulse Lab Kampala, a UN inter-agency initiative in Uganda. In Uganda, at least half of households have radio, local radio, as their primary source of information. What this project did was use speech recognition to listen in: I think 25,000 Ugandans phone in to their local radio stations a day and chat on air, which is amazing. The speech recognition listens to all the conversations that are had, and they're public conversations, people know they're on the radio, and from that it translates and picks up key themes being talked about, often social issues: poverty, climate, weather. It helps them develop aid programs and inform government. So it's quite an amazing way of listening to the population and helping them accordingly. All those phone-ins aren't for nothing; they're not just venting, like I often feel they are in the UK. It's a kind of big data analysis, reviewed by people, obviously, and they feed it into the UN Sustainable Development Goals to help come up with projects. One of the students updated me on it today and said it was used in the COVID response, to help find hotspots and find where aid, education, and information were needed. So that's my current tech-for-good favorite.
Sue
That is fantastic. And again, speaking to the ingenuity of the younger generation and using tech for good, I just love that.
Vanessa
What issue do you most hope technology will help resolve in the future?
Tania
I guess it's got to be climate change. I nearly gave a kind of Miss World "world peace" answer, because, you know, doesn't everyone want that? But I think realistically it's more achievable, and more should be being done to use the technology we have on what is a massive issue.
Sue
What inspires you, Tania?
Tania
Well, I was thinking I'd change my primary answer a little bit while we've been having this conversation. My go-to answer is people who volunteer, and that isn't just within We and AI, which is a volunteer organization; every day I'm overwhelmed by the effort and passion and time and expertise people put in for nothing. But it didn't just come from that. My son had a brain injury when he was very young and had all sorts of medical issues, and the number of volunteers who helped raise money for treatment, or just to give him things that, to be fair, we didn't even think we needed, people were so desperate to help and volunteer. That was one of the biggest changes in my life: seeing how much a fellow human who doesn't know you is prepared to do for you. It made me really, really hopeful for the future of humanity in a way that I hadn't been before.
So volunteers are one thing, but while we've been talking, it's your third eye thing, actually: I was really thinking about people who speak out. Quite topical at the moment are the people who are speaking out and getting a lot of flak for it. There are women in the machine learning community who are speaking out about racial bias, particularly in machine learning, and as much as we've been taking it as fact in this conversation, you have the deniers, like the climate change deniers, who refuse to accept the issues. Timnit Gebru is speaking out, and there are lots on Twitter who are really speaking out and getting quite a lot of the abuse and naysaying that you would expect on social media. So I'm also saying: what inspires me is people such as yourselves who question, and that squeaky wheel. So, squeaky wheels.
Sue
Yes, the squeaky wheels are great. All right, what do you wish to learn more about?
Tania
So I still want to learn a lot more about artificial intelligence. I've learned a bit, but I'm not a technologist, and as you've probably seen from my answers, I've got a lot more to learn, so that's a long-term thing. I get distracted from that because I'm one of those people who just loves to learn new things every day. I'm ridiculous. I love TikToks, so at the moment I want to learn the shuffle, I want to learn the shuffle arms, you know? Every day I learn something completely useless. But yeah, I should say AI is the big one.
Sue
TikTok is pretty fun, I have to admit! You know, when my girls are home for the week, they teach me a dance, and I feel very trendy. Oh my goodness. Okay, describe the future in one word.
Tania
So here's an optimistic one word: humane. And that's a bit of a play on words for me, because one of our missions is to safeguard humans and humanity in the age of machines. So it's kind of "human" and "humane" too; I know, you just add an E.
Vanessa
All right, Tania, last one. Fill in the blank: blank, like a girl.
Tania
Code like a girl. I'm obviously very keen to get more women into AI and machine learning. I can't code myself, but there are a lot of amazing coders out there.
Sue
Great, Tania, this has been wonderful today. You've given us so much to think about and really called out the need for all of us to step up to the plate. That's important, and we appreciate you being the squeaky wheel and helping us spread the word today about AI and the need for ethics. If people want to get involved with We and AI, learn more about the subject, or get in touch with you, on what social channels can they find you and We and AI?
Tania
So our website is weandai.org, just "we and ai" dot org, and it's the same for Twitter: @weandai. We're still at an early stage, so we really want to be talking to everybody, to help build our programs and to understand more about how to communicate with people from different backgrounds. We're UK-focused at the moment; however, AI is a global issue, so we have people in the organization who advise us from all over the world.
Sue
Well, we welcome listeners from across the world. This is a global conversation.
Vanessa
Yeah, thank you once again, Tania. You've been lovely, and thank you for your presence and such an interesting and needed conversation. We hope to keep in touch, and we'll invite you back on the show so we can talk more about AI ethics, because, as we said, we could go on and on.
Tania
Oh, well, thank you. It's been an absolute pleasure. I'm very, very pleased to have gotten the chance, and I look forward to it. Lovely. Thank you.