
The Soap Box Podcast
The politics and marketing podcast for business owners with a social conscience.
Talk about sticky issues, learn how to weave your values into your marketing, and hear from real-life business owners working it all out in real time.
How AI could make our systems fairer, with Tamara Polajnar
Let’s be honest, for a lot of people in my world, AI feels like a bit of a villain right now. It’s been decimating creative industries, disrupting business models, and flooding the internet with content that’s at best soulless, and at worst, dangerous. I’m recording this just as the whole Grok-goes-anti-woke mess is playing out, so feelings are certainly mixed.
I’ve got plenty of thoughts about AI, and I’m happy to talk your ear off about them (do send me a DM). But today’s guest offers a different perspective. One that’s a little closer to the utopian vision we used to have: that tech might actually make our lives easier, and free us up to do more fun, creative, human things.
Dr. Tamara Polajnar is a computer scientist with over 25 years of experience in Natural Language Processing. She’s the co-founder and CEO of herEthical AI, a deep-tech company developing ethical, explainable AI for the justice sector. She’s led AI research across academia, fintech, and public-sector innovation – including at the University of Cambridge – and is now focused on building AI tools that don’t just automate processes, but challenge systemic bias and drive real cultural change.
Her team is developing tools to help institutions like police forces and family courts identify victim-blaming language, track bias, and hold their systems to account, with a clear emphasis on fairness, transparency, and survivor-centred innovation.
In this episode, we talk about how bias shows up in official documents, what it really takes to build ethical technology, and how systems grounded in empathy and real-world constraints might be part of the way forward.
If you’re suspicious of AI – good, me too. But this conversation shows what it can look like when it’s built with care and used to create meaningful accountability.
So let’s get into it. Listen to Tamara get on her soapbox.
Tamara's Links:
herEthical.AI Website
Follow herEthical.AI on Instagram
Connect with Tamara on LinkedIn
Looking for more?
Join The Soap Box Community - Peta's membership for businesses with a social conscience is now FREE! Come and join us to survive the current torrid political context!
Follow Peta on Instagram
Find Peta on LinkedIn
Hire Peta to work on your copywriting and brand messaging
Let's be honest, for a lot of people in my world, AI feels like a bit of a villain right now. It's been decimating creative industries, disrupting whole business models, flooding the internet with content that's at best soulless and at worst dangerous. I'm recording this just as we've had the furore about Grok going anti-woke. I have mixed feelings about AI, and I'm quite happy to talk your ear off individually about them, so send me a DM. But today's guest offers a very different perspective, one that's maybe closer to the slightly utopian view that some of us had: that technology can make our lives easier and free us up to do things that are more fun and creative. Tamara is a computer scientist with over 25 years of experience, and she's one of the co-founders of herEthical AI, a company that's using AI to interrogate power, not reinforce it. They're building tools that help police forces and family courts spot victim-blaming language, track bias, and hold their systems to account. Together we chat about how bias shows up in official documents, what it actually takes to build ethical tech, and how systems built on empathy, transparency, and real-world constraints might just be part of what gets us out of this mess. If you are suspicious of AI, great, me too. But this conversation shows what it can look like when it's built and used with care. So let's get into it. Here is Tamara getting on her soapbox. Tamara, it is really lovely to have you on the soapbox. Thanks for joining me.
Tamara:Hi. Hi, Peta.
Peta:How are you doing?
Tamara:Um, good. Yeah, it's, uh, so sunny right now, so hopefully that lasts for a little bit longer.
Peta:Yeah, we are loving the sun at the moment. It's very cool. So Tamara and I met through the Female Founders Rise community, which is full of amazing, incredible women who are founding and building businesses. But for people who don't know you, Tamara, can you give them a little bit of an intro into who you are and what you do and how you got there?
Tamara:Yeah, so I'm a computer scientist by background, and I've been doing that for 25 years. I'm so glad we don't have video right now.
Peta:It was a
Tamara:So, 25 years, and that was with my undergrad. But then, for my specialization, I went into natural language processing, which is now what most people think of as AI, although AI is so much more, like robotics and all sorts of cool stuff that people do. So everybody's work has been reduced to basically what I used to do, which is kind of nice. Large language models is where it's at right now, but there was a lot of work put in to get to that space, and that's also kind of disappeared. So now I say I work in AI, as opposed to machine learning or natural language processing, which are the kind of code words for a proper background in the field. I worked in all sorts of different places: a few universities, as a research assistant and then a research associate. Then I worked at a startup called Mrs Wordsmith, where we built educational resources for kids to do with words and linguistics and improving their vocabulary. Then I worked at the Royal Society of Chemistry, and at a fintech startup as chief science officer. When I finished working there, I started on an academic project again, which was about evaluating algorithms in policing and writing a guide to doing ethical machine learning in policing. We made a mistake there in the advertising by not calling it an AI guide, because I just didn't believe in the wording, obviously. Then, on the back of that, two of the people who worked closely with me on that project, Ruth and Hazel, and I decided to start a company, to keep working and helping people in this space, mainly police forces. They're really, really struggling with getting up to speed with technology.
There's lots of reasons for this, but we were interested in working in the gender-based violence space, because that was where Ruth and Hazel came from originally, and a lot of the projects we were working on with the police were in that area as well. Then, as we were wrapping up that project and starting to think about our company, we met Tony, who was retiring from Devon and Cornwall Police, and we decided to bring him in. He always talks about it as some sort of hijacking, because he was going, oh, I'm retiring. And we're like, are you sure you need to retire? And he's like, yeah, yeah, no, I'm just gonna go on holidays and stuff. And we're like, no, we'd like you to work with us. And he's like, yeah, I'd like to help in any way possible. We're like, okay, great. You're a co-founder, just like
Peta:No more
Tamara:me. So that's how we hired him. Then we started last year in April, with this goal of helping police forces get up to speed with AI, training people, providing really customized services and stuff. And then, with the election and all sorts of stuff, the public sector's just become a complete nightmare. But in the meantime, we're forging on and working on our own interests and products. We started working with a nonprofit called Right to Equality, who are trying to improve transparency in family courts. With them, we built an application that detects victim blaming in communications between people and police, or their transcripts in court, or the court judgments, or whatever, basically anywhere in the justice system. So we have that out and people are using it. That's where we are right now. That's my 25 years in a flash.
Peta:Yeah, it's like a whistle-stop tour of a ridiculous amount of experience and skill. So we chatted briefly about this a couple of weeks ago. Time has no meaning, I dunno when it was, a while ago. And it's absolutely fascinating, because it feels like there's a lot of discourse around where maybe police forces get things wrong, or concerns about where AI isn't as ethical or unbiased as it could be. There's a lot of chat about it, but not very many people that I've seen working towards solving those problems, or correcting for those particular issues. So it was really nice to talk to you about how you are helping those organizations use these tools and systems in a way that improves outcomes for actual people.
Tamara:I think there is a lot of talk about AI and ethics, and it makes it all sound so much more complicated than it is. It is literally about spending time on the right parts of development and evaluation. It's about being mindful. It's about thinking about things from other people's point of view, which I guess is often quite difficult.
Peta:Yeah.
Tamara:You can't mitigate for everything, but you can do your best to look at how what you're building is affecting different subsections of people who might be impacted by it. There is a lot of academic work in trying to, for example, detect misogyny online, or deepfakes, and all the other kinds of misuses of AI. But there hasn't been much work in applying this to the criminal justice system, or in helping people navigate the justice system in a way that's more fair. But hopefully there will be, because there's so much to be done. So I think there's a lot of space for more tools like this to come out.
Peta:Yeah, definitely. Why is this the thing that gets you excited? Why is this your soapbox?
Tamara:I guess, even though I did a bunch of other stuff, when it came to thinking about what my projects should be, I got pretty excited about the use of language and how it can influence people. Before I left academia, I was thinking about whether I would write some project proposals for myself, or go and get different kinds of experience in industry. I was looking at how, for example, different newspapers might express the same things with different kinds of bias in them, especially when it came to gender bias. So I was looking at it in media, and potentially things like tweets, but I hadn't thought about it within the justice system until I started working with police and saw how difficult it can be for people. Luckily, I haven't had that much interaction with police, or family courts, or courts in general, myself. But basically everything that somebody tells you about their experience is shocking and appalling. That's not how it should work: they should not be biased, they should listen to people, they should give you a fair chance. For example, if you're a victim of crime and you're supposed to testify in your own trial, you're not allowed to follow your own trial. You don't know what is said about you. You get wheeled out just for your own testimony, and then you get wheeled out of the court again, and you can't follow it even after you have testified. So a lot of people don't know what's been said about them in court. It's pretty wild. It's not like it is on TV. A lot of it is just a general sense of powerlessness and confusion, because it's so complicated.
And big things are on the line: your children, your livelihood. A lot of things are on the line, so people are quite panicked, but then they're judged for being panicked. A lot of this stuff comes out in text, and a lot of it's procedural. So we're hoping to look later on at other expressions of bias that aren't just expressions in language the way victim blaming is. But yeah, it's just the injustice of it, but also the ability to see what can be done, to see the problems really clearly and think of solutions, and all that's left is just the power to build them, you know? So, yeah.
Peta:That's very cool. So as you've been looking into this and working with different organizations, what sort of examples have you seen of how things are not necessarily being used ethically, or where there is bias in the system?
Tamara:I mean, there are quite a lot of newspaper articles about how things are not used ethically. Luckily, the two police forces that we worked with were both interested in doing things ethically and trying to mitigate the issues ahead of time. So that hasn't been a problem in my experience. Where it arises is just lack of experience: not knowing what's out there, or what algorithms there are, or how to evaluate them properly, basically lack of knowledge. And it really does come down to public sector funding for them, you know, it always does. And also the inability to follow through. It's difficult for things that might be relatively simple to be built internally, by the talented people within the public sector, to be maintained for a long time, because of the way the funding works for internal projects. And often this stuff is bought in from companies, who may not put in the effort to do the evaluation. Or the way the model was designed, because it was designed externally, doesn't take all the internal considerations into account. And then there is no follow-through in evaluation and maintenance and all those things. So that's where the ethical gaps start to show: in the execution of the idea, rather than the intentions at the start.
Peta:So you've got these tools, but if you don't know how to use them effectively, or maintain them properly, then they end up creating outcomes that are unintentional.
Tamara:Yeah. And if you don't do the evaluation, for example if you're working in a very specific area and you take a generic tool and don't evaluate it on your data or in your processes, you don't see how it's affecting your workforce, or how it's affecting the users that the AI decisions or AI processing impact in your area. For example, police forces might have different demographics based on their area. They might be more rural than metropolitan, or the other way around, and that affects what the data is like. So depending on what the model is doing, if it's not tested on this basically virtually new use case, then you don't know what it's doing for your people.
Peta:Yeah, some things aren't relevant, some outcomes aren't the same, some biases and assumptions are different depending on where you live in the country.
Tamara:Definitely.
Peta:Yeah. Okay. So, controversially, is there an argument that they shouldn't be using these systems at all?
Tamara:I think it depends on what they're trying to do with it. I've read arguments that no, you shouldn't be doing this. But everybody else in the world is going to be using AI, and if they are not proactive about learning about it, they're going to end up not understanding the crimes that are committed with AI, for example, and that's a problem. But also, because they're completely underfunded and understaffed, not using the full potential of AI for improving really boring day-to-day tasks will make them fall even further behind. So I think productivity is definitely a thing they should be using it for. But that also has to be done in the right way, because as we know, models can hallucinate. If you're hallucinating police reports, without being able to reference where the information is in the original report, with linking, or with quite a lot of interpretation put on top of the summaries, then you have an issue. So it's really important, even if it's just simple summarization, to test it and make sure it's working correctly, because people are really sensitive to connotations and interpretations. If those are not managed in a correct way, even just a summary could give the user the wrong impression of what was originally said, or what was implied by, say, the officer who took down the notes initially. But I think there's a lot of scope to do really, really cool and useful stuff. What happens often is that people jump to things they understand. So operations: they want to optimize staffing, which might be okay, but then they also want to optimize risk management, which needs to be done in a really, really careful way.
And it requires not only evaluation of the algorithm, but like evaluation of operations as well.
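The hallucination risk Tamara raises with summarization can be probed mechanically. As a rough illustration only (not herEthical AI's actual method, and the threshold is an arbitrary assumption), one crude check is to flag any summary sentence that has low word overlap with every sentence of the source report, so a human reviews it before it's trusted:

```python
# Flag summary sentences with little lexical overlap against the source
# report -- a crude proxy for "this may be hallucinated, check it by hand".
import re

def sentences(text):
    # Naive sentence split on ., !, ? followed by whitespace.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(sentence):
    return set(re.findall(r"[a-z']+", sentence.lower()))

def unsupported(summary, source, threshold=0.5):
    """Return summary sentences whose best overlap with any source
    sentence falls below `threshold` (fraction of words supported)."""
    source_sets = [words(s) for s in sentences(source)]
    flagged = []
    for s in sentences(summary):
        w = words(s)
        support = max((len(w & src) / len(w) for src in source_sets if w),
                      default=0.0)
        if support < threshold:
            flagged.append(s)
    return flagged

report = "The caller reported a broken window at 22:10. No suspects were seen."
summary = "A broken window was reported at 22:10. The suspect fled in a red car."
print(unsupported(summary, report))  # ['The suspect fled in a red car.']
```

A real system would use semantic entailment rather than word overlap, but even this toy version shows the principle Tamara describes: every summary claim should be traceable back to the source text.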
Peta:Mm-hmm. It's really interesting that you say people tend to jump to things that they understand easily. So in a similar way, I'd be like, oh great, I can get an AI note-taker on a call, and I might jump at that, but I'm not necessarily going to decide to use it to, I dunno, craft a brand strategy, because that feels a bit big and complicated. What attitudes have you seen in these organizations towards using AI for this kind of stuff? Are they nervous, or does it depend on different things, like age or how tech-savvy they are?
Tamara:Oh, definitely. Yeah, there's a lot of reluctance to adopt these things. There's also nervousness about adopting things that cost a lot upfront. Even though you can save money in the future, they don't have any to spare right now, so they don't want to put it in right now. As far as understanding the complexities of AI goes, there's almost just a lack of basic knowledge of what it can do for you. You don't even need to know how it works necessarily, but you can understand what the pitfalls are, what the strengths and weaknesses are, and what it can do for your organization. And that's missing from the top. Bottom up, on the implementation side, the IT teams are not ready to manage the extra work, so there needs to be investment in specialist IT teams for innovation, and then at least a couple of people within the organization who understand AI and how to build it themselves. Even if they're going to outsource all of their AI, they need somebody who's going to evaluate that for them. Those are upfront costs that a lot of people are not willing to put in, but I think it's extremely necessary for this to go forward. And the way policing, and I guess the NHS as well, is structured is very separated. In policing there are 43 different police forces. They struggle with sharing information and they don't buy things together. There is going to be a drive to make a national centre for AI and so on, but all these things take a really long time to implement and do right. So there need to be other solutions as well, organic solutions from within police forces, that can integrate into that national centre when it comes about. Yeah.
Peta:Yes, I can imagine having 43 different systems is almost as bad as them not talking to each other at the moment. You just end up with them individually buying 43 different tools rather than working together.
Tamara:Or, or, yeah, basically designing 43 versions
Peta:Yeah.
Tamara:of the same tool as well.
Peta:Yes. Just a slight waste of time. So there are operational and structural considerations too, about information sharing, but also these are completely new jobs, new positions you'd have to staff, that you wouldn't necessarily think about if you were running a police force. It's a different kind of mindset.
Tamara:It's a different mindset in every organization. There's also the fact that some of these roles out in the wider market are really well remunerated, so you need to find people who are excited enough about the work to do it for next to nothing, like me, but
Peta:Yeah.
Tamara:just like, I'm so excited about this, I'll talk about it all day for free. But yeah, there are people in the forces who can be upskilled for these sorts of things. The trouble is, again, the initiatives sort of bloom and then die. What happens to people in police forces, and I guess in a lot of other large organizations, is they might go on a course, they might get really excited about something, they might get a year's worth of training. But then there is no follow-through funding to keep them in that new position they've trained for, and then they go. So you have this natural talent drain, where all the people who are most proactive and most interested just end up leaving, which is unfortunate. And that's something they really need to foster, because people are driven to work in the public sector with a natural passion, and you don't want to drain that away through bureaucracy. Yeah. So.
Peta:Yeah, that lack of long-term planning, long-term thinking, long-term funding is a big problem. Interestingly, that's what my email to my list was about today, about youth services. So, when we talked before, we spent a little bit of time discussing how this can recognize victim blaming, which was really interesting. Can you tell us a little bit about how that works?
Tamara:Yeah. So victim blaming is exactly what it sounds like. If somebody's a victim of crime, the words that are used to describe their role in that crime often shift the blame for them being a victim back onto themselves. You know, like, why is she still with him? Or, why did he go out at night, it's not a safe neighbourhood? And stuff like that. It disregards the fact that somebody injured somebody else, and it's entirely the perpetrator's fault. And this gets more complex in cases where it's hard to evidence things: you don't have CCTV of somebody's living room, and you don't know what's happening there, for example with domestic abuse. If it's more subtle it's worse, like coercive control, where one person can feel completely trapped and the other person feels entitled, so there's a cognitive disconnect: they don't actually understand that they're causing damage to somebody else, or, you know, they're actually actively malicious. So there is this difficulty in evidencing these sorts of crimes. And then a lot of services have a lot of staff turnover. Tony, our co-founder, explains that he was a police officer for 31 years and then he retired. But because of changes to the way pensions are paid out to police officers, for example, and because there were such massive layoffs in 2010, that experience was lost. Now police officers stay in for about five years, and a lot of them are really young. Any time you go to a policing conference, every second presentation talks about how they should know more about trauma-informed interview practices, they should know more about this, they should know more about that. And they just don't, because as a whole they're like 18-year-old kids. So.
Peta:Yeah, that's fine.
Tamara:So you get these young people who may have a different social view on, for example, what coercion is when it comes to sexual assault, because, as we know, lots of things are happening that are changing the way young people see the social aspects of our lives. They might go in and be expected to take down notes about a rape, or a coercive-control relationship, that they don't understand. But equally, older judges may not understand either, because domestic abuse is a crime, but it's not a crime that is often seen or policed properly. Then you end up in family court, and you have to make your criminal case in a civil court, in an adversarial system, where it's you against your abuser. And you have to make your case without sounding panicked, without sounding scared that you're going to lose your children or your livelihood, to a judge who has fundamentally seen maybe a thousand of these cases and is just really, really bored of the whole thing, you know? So it's really hard. There's a lack of empathy and a lack of understanding, and these are very complex issues. When people try to explain what's happened to them, it comes out as, and then they did this thing to me, and that made me feel like this, and then this thing. It feels like a lot of little issues, whereas actually it's a whole course of conduct, and you need to be able to describe it as that. So often you get this lack of understanding, which comes through as victim blaming. And this happens, for example, in no-further-action reports. If a police officer cannot
progress a case, they might write a no-further-action letter, an explanation of why they couldn't progress it. And it'll be like, well, they waited too long to report, they didn't preserve the evidence, and so on. And it's like, okay, there are some things that need to be said, and some things that need to be said differently. But victim-blaming language in itself is a clue to other people that something is not worth pursuing. So it's really important to eradicate it from the first point of contact all the way through the system, so that people can get fair access to justice.
Peta:Yeah. And so do the systems you're working on help identify that, and help teams notice?
Tamara:So at the moment we're having people test the system out, and we've tested it ourselves against court judgments, which is the data we have free access to. We've made some synthetic data to test out other situations, but that's not as strong as having real data, real throughput, so we have also had real users put their own data through. What our system does is, given some text, plain text or a PDF (actual text, not scanned images), it goes through basically sentence by sentence and identifies, given the context of the whole document, whether there are victim-blaming statements. We have a taxonomy of different types of statements it can identify, and it can identify severity: whether it was obvious, moderate, or subtle. It can identify who the speaker was, because in judgments and appeals, for example, you have lots of different people presenting evidence, so you need to be able to say whether it was the judge, or the perpetrator themselves giving evidence and blaming the victim, and so on. There are these different layers of speakers, and we need to identify that as well. Then the model gives reasoning and explains why it thinks something is victim blaming. As the model gets more uncertain, the reasoning breaks down, so you can read it and think, hmm, actually I don't understand what it's saying here. The reasoning is there to explain why something was flagged, but also to give you a feeling of whether the model is reaching. This usually only happens in the subtle cases; the obvious ones are quite obvious, but some of the subtle stuff is a bit trickier to interpret. So it still highlights it, just in case it's relevant, but the reasoning might indicate that it's not quite solid.
So that's the important thing about explainability and transparency: allowing people to see what's evidenced, in situ. It will annotate the PDF with highlights and notes so that you can see what's happening in context, and then, if you want to litigate, you can take this to a lawyer and say, this is my body of evidence, this is what I have been through, the AI has highlighted these things. The AI does not know if any of this is legally relevant, but can you go through and see if there's a case here, for example? It does give people a feeling of validation, whether it's legally relevant or not, because it says, yeah, you've been through crap and bad things were said about you. So there's that. Then from an organizational point of view: looking over a document of, say, ten pages might take you over 40 minutes if you're going through it in earnest, looking for clues. With tons and tons of documents in your organization, you could sample them, or you could go through every single one of them. You could see whether there are patterns in victim blaming. Could you potentially address this through training? Lots of organizations have training against this sort of thing, but there's no follow-through, no tracking of before and after. Did our training make a difference? Is it addressing the exact types of cognitive issues people are having? Are they blaming the victim because of their behaviour? Are there procedural aspects that are coming out as victim blaming? What is going on, can we identify it, and can we improve it?
Um, it could potentially pick people out for extra training as well, and that wouldn't be a bad thing, right? It might flag up some of the issues that police forces have had with behavior that only becomes apparent when it hits the newspapers, so it would be proactive, getting ahead of that stuff. You know, we don't really want to shame people. We just want the system to get better, basically. Yeah.
Peta: Yeah, that's really interesting, that it could be used to evidence the effectiveness of different training or different kinds of programs.
Tamara: Yeah. I mean, the training is only as good as how much people take it on board, right? So if somebody is good at taking tests but they don't actually care, they'll just memorize for the...
Peta: Yeah. You can do all the...
Tamara: ...do that quiz. Yeah.
Peta: You can tick the right boxes, but you still... yeah.
Tamara: You've not embodied it, yeah. Or you just don't understand; you're like, yeah, whatever, I don't understand, I don't empathize with this issue. And then you need a deeper sort of training, which explains to you what the issue is, why this is a problem for people, and how it has knock-on effects and repercussions, or whatever. Right.
Peta: Yeah, it'd be an interesting way of helping people come face to face with their own assumptions and their really deep-seated unconscious prejudices, I guess. Because one of the other things that we talked about was how you can make sure that the model picks up blaming of male victims as well as female ones. So I think, yeah, the more that we can be aware of those kinds of biases and prejudices, the better, especially in those kinds of contexts.
Tamara: Yeah, and you should basically slice and dice your model's results in all sorts of intersectional ways, just to make sure that it's working. We had the example of the DWP models being extremely biased against disabled people, and that's your whole data set, so you've got to be careful. And that's the other thing: sometimes it's really hard to get the data for people who do not want to engage with the system. In family courts, for example, it's very hard to find same-sex couple data, just volume-wise, in the free data, because the cases that are published are cherry-picked for their contribution to the law. Unless something seminal happened in that case, it's not going to be published and anonymized. So we don't know; I don't even know what the throughput is, or the gender balance, or the relationship types that go through. It's hard to get all the different data to do it right. But you can simulate some of that as well.
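[Editor's note: Tamara's "slice and dice" check can be pictured as grouping a model's outputs by some attribute and comparing flag rates across groups. The toy sketch below is purely illustrative; the data, field names, and groups are all invented, and a rate gap is a prompt to investigate, not proof of bias.]

```python
# Illustrative only: compare how often a model flags documents,
# broken down by an invented subgroup attribute.
from collections import defaultdict

def flag_rates_by_group(results, group_key):
    """Fraction of items flagged as victim-blaming, per subgroup."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for r in results:
        g = r[group_key]
        counts[g][1] += 1
        if r["flagged"]:
            counts[g][0] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

# Toy data standing in for real model outputs over a document set:
results = [
    {"victim_gender": "female", "flagged": True},
    {"victim_gender": "female", "flagged": True},
    {"victim_gender": "female", "flagged": False},
    {"victim_gender": "male", "flagged": False},
    {"victim_gender": "male", "flagged": True},
]
rates = flag_rates_by_group(results, "victim_gender")
# A large gap between subgroups here would be a signal to dig deeper,
# exactly the kind of intersectional audit described in the episode.
```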
Peta: I guess that's one of the things that I think about every now and again with AI more generally: it's only as good as the data that gets put in, and we don't have a huge amount of control over the kind of data that does get put in. In a similar way, you're relying on the courts to publish a decent range of cases, but actually they're really only interested in the interesting bits of law, which is fair enough, because that's their job, that's what they do. But yeah, it's an interesting hampering of how to build these systems into more ethical, more representative things.
Tamara: I mean, addressing some of those issues is why ChatGPT or Claude, the chatbots, have the guardrails that they have. Those are put in post-training. So even if the training data was biased in a certain way, then through interactions with human annotators and labelers, or pre-canned answer sets, they try to refocus the behavior of the model afterwards to be more inclusive, more... peppy. I think ChatGPT is just going overboard now. It's so mega-positive about everything. Like...
Peta:like
Tamara: But I mean, that's what we like, right? Those are our biases: we like really confident, peppy people, even if there's nothing to back it up. And while it's an amazing tool, it does get on my brain sometimes. I'm just like, I can't do this, this is not the moment to be peppy.
Peta: No, no, no. It told me I was a genius the other day, and I was like, thanks, but actually that kind of feedback's not helpful. It doesn't help me go, how can I push this idea and change it in a different way? It just makes me feel nice for about five minutes.
Tamara: I get into it as well sometimes, because you mirror, right? You get into this mirroring thing. I'm coding and I'm using Claude, because it's quite good for helping with coding, and I'm just working with it, and I'm like, oh no, we got something wrong together, this is the error I get. And it goes back, oh!
Peta: It's like a pair of over-excited puppies, just kind of, yeah, yapping...
Tamara: Oh, I found somebody who just loves debugging as much as me! Like, thanks, Claude.
Peta: I mean, that makes the whole circumstance a little bit happier; you have a nicer afternoon. Okay, so what's the next step now for herEthical AI?
Tamara: Yeah, so we have a plan of growing our linguistic armory. With victim blaming, what we had is a linguistic pattern of abuse, a type that flips sentiment onto somebody else within a single extractable sentence or paragraph, a few sentences. Now it's about looking at things that are larger patterns. For example, can misogyny be detected as the amount of victim blaming, or the amount of blame, aimed at females versus males? Or procedural aspects in law: are they choosing to give somebody more airtime? Are they asking them different types of questions? When they delve into the psychology of a woman in a relationship, do they also delve into the psychology of the man in the relationship? That comes up a lot, for example. So there's that aspect, looking outside a single sentence. Then there's the aspect of looking at crimes like coercive control or romance fraud, which are a series of patterns, intentions, and speech acts. There is love bombing, there's manipulation, there's grooming, then there is trauma bonding, where you test out the boundaries of doing crappy stuff to people and testing their commitment to the relationship, and so on and so on. There's a pattern. Can we identify these patterns? Can we identify these crimes? That's a much bigger piece of work, but we're really interested in it. And can we then deliver something that will help people identify these patterns, evidence them, and make it easier for them to report? So that's where we're going next.
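[Editor's note: the "series of patterns" idea, love bombing, then manipulation, then trauma bonding, can be pictured as looking for an ordered sequence of stages across a conversation. The toy sketch below checks only for an ordered subsequence of labels; it is a cartoon of the idea, not herEthical AI's method, and the stage labels are borrowed loosely from the episode.]

```python
# Illustrative only: does a sequence of per-message stage labels
# contain a given escalation pattern, in order (not necessarily
# contiguously)? Labels here are hypothetical.
def contains_stage_sequence(labels, pattern):
    """True if `pattern` appears as an ordered subsequence of `labels`."""
    it = iter(labels)
    # Each `stage in it` consumes the iterator up to the first match,
    # so matches are forced to occur in order.
    return all(stage in it for stage in pattern)

messages = ["neutral", "love_bombing", "neutral", "manipulation",
            "boundary_testing", "trauma_bonding"]
pattern = ["love_bombing", "manipulation", "trauma_bonding"]
found = contains_stage_sequence(messages, pattern)  # True for this toy data
```

A real detector would of course need to label the messages first and reason about intent and context, which is the "much bigger piece of work" Tamara describes.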
Peta:Just a little thing then. Just, yeah.
Tamara: Just a little thing, and you heard it here first, so, exclusive.
Peta: That sounds fascinating. Cool. Um, no, that's great. If people want to find out what you're up to, if they want to come and get involved, or, yeah, follow you, where should they go?
Tamara: So we have our website, herEthical AI, um, and I'm sure there'll be links along with...
Peta: Well, there'll be links in the...
Tamara: Yeah, yeah. But also, we're active on LinkedIn, less active on some of the other platforms. I think we have Facebook and Instagram, and I have Bluesky, but I haven't been on it very often. I have certain thoughts about what Twitter has become, so I think we're trying to stay off that. But it shouldn't be hard: you can Google my name, I think I'm pretty much the only one with my name, and you'll find a link to chat.
Peta: Cool. No, that's great. I will put all of that info in the show notes, and, yeah, people can come and find out what you're getting up to and follow where you go from here. That was absolutely fascinating. Sometimes the discourse around AI can be quite depressing, but I love seeing positive possibilities in these new technologies, and the fact that they can be used to help victims of crime and to help make police forces more receptive, yeah, it's made me feel a lot better about the whole thing.
Tamara:Oh, that's wonderful. Thanks for chatting today.
Peta:Not a problem.