Justice, Disrupted
Shami Chakrabarti and Dr Susie Alegre in conversation – In the age of AI, can justice be smarter?
National Event 2025 Special
At ‘In the Age of AI, can justice be smarter?’, expert speakers Claire Feasey, Harmeet Sandhu, Dr Susie Alegre and Shami Chakrabarti gave an overview of AI's potential for improving efficiency and driving change, while keeping a clear-eyed view of the rights and well-being of individuals. The talks were followed by an audience Q&A.
In this episode:
Leading British human rights lawyer and campaigner Shami Chakrabarti is in conversation with Dr Susie Alegre, a barrister specialising in international human rights law and technology. They discuss the human impact of the rise of AI.
Shami Chakrabarti is the author of On Liberty, Of Women, and Human Rights: The Case for the Defence.
Dr Susie Alegre is the author of Freedom to Think and Human Rights, Robot Wrongs: Being Human in the Age of AI.
If this conference were an office – if we were all in the office together – I would say that you've heard this morning from our inspirational sales team for AI. You've heard from the IT crowd. And the bad news is that now you're meeting with Legal.
And I warn you that this road we've been tempted to walk down is heading for Legal. So, from a human rights perspective – to put it slightly more positively – what are the sorts of rights and freedoms that are engaged in this space? Remember, AI is touching every part of our lives: we heard about Netflix, your insurability, education. But in the criminal justice space the issues are particularly acute. You're more likely to end up in Legal because of the rights and freedoms engaged in the criminal justice space, as opposed to dating on an app or being sold the same romcoms on Netflix because that's what you've watched in the past. So this is very serious. The right to life – that is the duty of the state to protect people's lives, whether potential or actual victims of crime, or people in custody or under supervision. Crucial. The right not to be subjected to inhuman and degrading treatment or torture. The right not to be enslaved or forced to work. The right to liberty. Crucially, the right to a fair trial in matters that are important to your life and your other rights and freedoms. The right to respect for your private and family life – that's already been alluded to, but you can imagine the range of issues, and this is not just data protection; data protection doesn't cut it, it's not enough. We're talking holistically about somebody's private and family life versus total surveillance. We're talking about freedom of conscience and freedom of expression, not least in the protest space. And we're talking, crucially I think, about the right not to be discriminated against in the context of other rights and freedoms. Of course, anybody who has watched Coded Bias on Netflix, or read anything about this technology, will tell you about built-in bias, because these algorithms, this technology, were originally designed by humans living in real societies, with all the inherent and historic biases.

This also has to be set in the context of a democracy that's party to the European Convention on Human Rights – that's where the rights I listed come from. Any interferences with people's rights, including lawful interferences – because, for example, they're engaged in the system, they've been convicted of a crime – need to be in accordance with law. And when I say law, I mean law that is debated in parliament chambers, that the public have engaged with, and that can be seen as clear law. And it is my view – I'm not here to speak for north of the border, but in England and Wales, where I operate, and where Susie and I have about 60 years of legal practice between us (she looks better on it than I do) – I am convinced that some of the embryonic uses of this technology are unlawful. I'm very clear about that; I've said it in the media and I've put it in my book. And we can talk about live facial recognition technology in particular. Whether you think it's a good or bad thing, whether you think it's discriminatory, whether you think it's reliable, whether you think it's total surveillance: there is no legislation for it. There is no legislation for it in England and Wales.
And yet it has been rolled out by the police off their own bat, because they were taken to lunch a few times by tech companies and thought, "This would be good fun, wouldn't it, guys?" They choose who to engage with. They hand over the data. They get the chequebooks out – public money, public data, sensitive data. They design the watch lists. They decide whether the technology is reliable or not. No statute whatsoever, in a justice system and a country that is supposed to be governed by law. So that's just to shift the tone a tiny bit. Susie?

Yeah. I think it's really important to go back to that question of what human rights are and what is at play. Back when the Human Rights Act came in – when you were at Liberty and I was at JUSTICE – public authorities around the country were getting training on human rights and what they actually mean in practice, rather than reading about them in the media, where you'll get answers which don't really reflect what the law says. One of the really important points when you're thinking about human rights and the Human Rights Act – which does apply to everything you're doing in this space – is to think about whether what you're doing is justified, whether it's necessary, and whether it's proportionate. Those are really practical questions. They're legal questions, but actually they're practical questions. I remember I was invited to an online consultation about track and trace, back in the day when we were all sitting at home. I was the only lawyer invited – pro bono, come and talk about ethics. And I said: well, does it work? Because if it doesn't work, if it doesn't prevent the spread, then there's no justification and it's never going to be compliant with human rights law. It's not privacy versus health; it's does it work. That, I think, is a really crucial question, and one of the big challenges when thinking about AI, or technology more broadly, in the justice system. I don't know the Scottish justice system – maybe it's all bells and whistles up here compared to England and Wales – but I do have experience of the English justice system, and it's quite telling when the PowerPoint clicker doesn't work. Well, that's how the English justice system functions. We do need transcripts sometimes, but mostly we need Microsoft Word to work, or the electronic bundle to stop spinning so that you can read it, or, when it won't stop, to be able to print it out so you can at least read it as a backup on paper. So I think that question of what is working, what do you need, where are you putting the money, and making sure that the really basic things work – have you got Wi-Fi in the building? If you haven't, then none of this is going to be worth putting money into. Even more basic than the Wi-Fi: is the roof leaking? Have you got to empty the courtroom because there's RAAC in the roof? Well, AI that could look at the court estate and identify which court is going to have to be closed down to be fixed – that would be fantastic, and maybe that's where you want to put the priority for using AI in the justice system.
A really practical question. And again, going to this issue of speedy justice – I no longer have skin in the game as a criminal barrister. I was a criminal barrister in the late 90s; I stopped in 2000 and went to be a temp secretary, which quadrupled my income from being a barrister, despite the fact that I was doing crown court trials, extradition, all of that. And it's much worse now. A lot of the issues you'll find with the speed of justice come down to the fact that nobody's paying the lawyers. So they're not going to do the work, they're not going to turn up, and justice will not happen. So I think you really need to think again about these really basic structural questions of where you're putting the money. Do you need a big new cash injection into technology, or do you need to pay legal aid to make sure people are represented? Because here's the other thing when you look at savings on legal aid: you'll find people turning up with pointless cases that, had they had access to a lawyer, might have been resolved 10 years ago – and you would have saved all of that court time, all of that problem. So don't look only at tech solutions. Like I said, there may be tech solutions to discrete points, but make sure you're dealing with the real problems underlying justice: a chatbot is not going to solve the backlog in the court system in any way, shape or form. So think about AI possibly in a different way – think about AI as a way to identify what those problems are. As I say, use AI in the infrastructure of the court system to identify where the problems are. You've got to close down this court because you've got to fix the roof; where are you going to send the judges so that you can carry on with minimal disruption? Those are really serious problems, certainly south of the border. So use AI for those questions. Because one of the real challenges – and we heard about it a bit earlier, about having to monitor and check accountability – is that policing AI is extra work. Having to manage what's gone wrong is extra work. And you only need to look at the Post Office scandal to see how badly things can go wrong. The other thing, going back to paying the lawyers: I don't know if it has affected people north of the border, but the legal aid hack has meant that lawyers are not being paid and that massive amounts of sensitive data have been stolen. So cyber security is a really big problem too. The more you are focused on collecting the data, in many ways, the more vulnerable that data is. And I think the next thing it's useful to talk about is generative AI, and how useful that is in the justice system. I don't know if you have thoughts on generative AI.
I think I would go back to the concept of code. Code is an interesting word, isn't it? I don't think it's a coincidence that code means computer code but also legal code. With traditional legal code there is an element of transparency and accountability, because a bill is presented to parliament – whether the Scottish Parliament or the Westminster Parliament – and we can have a proper row about it; there are amendments, there are votes, there's sometimes public consultation. There's an element of transparency about the code and about where we settle on the law. What should the new protest laws look like? What should stop and search powers look like? What should a new crime look like? What are the sentencing maxima? And you can read it. The idea is that any citizen, or at least a citizen with some basic legal advice, can know what the code is. Now, with computer code, with algorithms that sit in a black box owned by a private company that the local police have had lunch with and made a deal with, that code is making decisions about people too. I use the example again of live facial recognition technology, which is being deployed by the Met police, in Wales, and now, I think, by a couple of other forces in England and Wales. The code is not published. The code – that is, the decision-making tree deciding what happens to people, who gets stopped and searched, who is on the watch list – all of that is hidden from public view. That's a democratic deficit as well as a legal deficit, and that's why I say that by definition it does not comply with the Convention on Human Rights, in particular the right to respect for your private and family life. And in a criminal justice context, really important decisions are going to be made about which kid gets pulled off the street because, in real time, their face, via the camera on top of the police van, was matched to a list that the police drew up themselves, with no parliamentary or judicial supervision.

And I think the algorithmic decision-making issue is really problematic, and I've had personal experience of it in the last year. I try not to be paranoid, but I had sat there for a year writing a book about human rights, AI and algorithmic decision-making, and suddenly I found I was flagged by my council, as someone with a single person discount, for an investigation into council tax fraud. And this was after I had spent six months in personal discussions with the council, in which they had given me a single person discount. Two months later, I got flagged. Two months after that, over 1,000 pounds was taken straight out of my bank account, just before Christmas, for backdated council tax which I did not owe. I obviously went completely ballistic. As a friend of mine said, the algorithm did not see you at the other end of that flagging exercise. I'm sure there were lots of people in the council hiding under tables every time I phoned, but it took about six months to sort it out. And it was just because I have a slightly non-standard family situation. I saw it, and I felt absolutely chilled. Last week the UK government announced how many millions they have saved through combating single person discount fraud.
And I thought, well, I wonder if my thousand pounds is in that figure, and how many other people could not fight back. Because, like I said, it took me six months of arguing, turning up at the council, calling the council. And my real concern was that I had been flagged as a fraud. That is really, really concerning to me – more than the 1,000 pounds.

And Susie, you mentioned this concept of generative AI. So this is AI that's teaching itself now, so that it can look at the internet, it can look at the data you've given it, and it will teach itself.

Yeah. But it goes to that problem. Claire's example of the speeding tickets, or whatever, is kind of the same as the single person fraud. Yes, it saved them loads of time identifying me as a risk; it cost them huge amounts of time and, I have no doubt, huge amounts of stress, plus the threat of legal challenges, et cetera. So it is a cautionary tale: be careful when you think about things being straightforward and simple, because the single person discount is an absolute classic – it looks really straightforward. And Claire gave a German example of how they've gone further in Germany than we so far have in this country, effectively making sentencing decisions, as they do in the US. In the US they're making sentencing decisions that are partly informed by AI, which I think is absolutely chilling. The idea that this is some kind of administrative process I find absolutely chilling. Not to understand that a sentencing decision involves mitigation, involves personal circumstances, involves a whole range of complex human rationale, is absolutely terrifying. In England and Wales, prison categorisation is already being informed by AI – whether you go to this category or that, and so what level of prison regime you get – and potentially also things like parole. I was talking to somebody in probation who explained that you get this risk assessment, but the problem is that you put in all the data and somebody's flagged as high risk, so they don't get to go home. Yet potentially the reason they're high risk is that they don't have a fixed address to go to. Well, that's something you could have sorted out, something that would have radically reduced the risk. If you don't know that that is the reason why they're high risk, you can't address the problem. And so that again comes down to this question of explainability. It's not just a concept; it has to be real. And then you also have to think about the time of a person looking at that and doing the risk assessment in the first place: is that quicker and more efficient than having the AI do it and then having to have it checked and unpicked? That, again, is a really big practical question.
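A brief editorial sketch may help pin down what that explainability point means in practice. The following is a hypothetical, transparent, rule-based risk flag – the factor names, weights and threshold are invented for illustration, and no real probation or council tool is being described:

```python
# Hypothetical, transparent risk flag: invented factors and weights,
# for illustration only - not any real probation or council system.
RISK_FACTORS = {
    "no_fixed_address": 3,
    "prior_breach": 2,
    "unemployed": 1,
}
HIGH_RISK_THRESHOLD = 3

def assess(person: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (high_risk, reasons). Surfacing the reasons is the
    'explainability': it says *why* the flag fired, so a fixable cause
    (e.g. housing) can be addressed rather than release simply refused."""
    reasons = [f for f in RISK_FACTORS if person.get(f)]
    score = sum(RISK_FACTORS[f] for f in reasons)
    return score >= HIGH_RISK_THRESHOLD, reasons

high, why = assess({"no_fixed_address": True, "prior_breach": False})
print(high, why)  # True ['no_fixed_address'] - actionable, not opaque
```

With an opaque model, only the final label comes back; here, the reasons list shows the flag fired because of housing – exactly the fixable cause described above.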
And I think there's a real danger of baking into this system – this non-transparent system – every injustice that already exists in your country, in your town. So say we're using AI not for a judicial-type decision, but just to decide where police resources should be deployed. That seems relatively harmless, doesn't it? Except that we're always sending the cops to the poorer parts of town. Now, crime goes on everywhere, but in the City of London, where the arms deals and the insider trading and just as many sex offences as anywhere else happen, that's not where the cops are going to be sent by AI. They're going to be sent to the places where all the data is – the stops and the searches and the arrests – historically, with a class bias and a racial bias. And particularly with generative AI, which is teaching itself on the basis of the data that is out there, decades of criminal justice data will be data about poor people and black people.

The other thing with generative AI – and Claire referred to this arms race among lawyers – is that, certainly in England and Wales, it's already happening, and I'm sure it is here too: it's going to become, very quickly, a big crisis in the justice system. Just before the summer there was a divisional court decision in England and Wales called Ayinde, and I'd recommend having a look at it. It was under what's called the Hamid jurisdiction, where lawyers are referred to the divisional court for misleading the court. There were two separate cases – in one the lawyers were solicitors, in the other a barrister – where made-up cases were presented in pleadings. What's more, the barrister in particular doubled down when the judge said these are made-up cases. She said: no, no, no, I've photocopied them, I've got a file – I don't know where the file is now. And he said: well, you can't have photocopied them, because they're not real. So that's also a lesson: don't double down. If anybody asks you, fess up very fast and apologise profusely. Both the solicitors and the barrister have been referred to their professional standards bodies. There are almost weekly cases now of lawyers using generative AI like this. People tend to focus on the idea of hallucinated, non-existent cases, but I saw another recent case, in a family court, again in England and Wales, where a father acting as a litigant in person had used generative AI to draft his witness statement. Well, if you use generative AI to draft a witness statement and it's not really what happened, that's perjury. It doesn't matter whether it sounds legally correct; it's not legally correct if it's nonsense. So there's this idea that using AI will solve the law for you, solve access to justice – I've been asked about it a lot, with people saying, well, people can't afford a lawyer, so surely it's better. Frankly, if you go and ask the person in the street – and you do that outside a court – there's a greater chance they'll know what they're talking about than if you ask a generic generative AI model.

And even this thing about summaries. Again, that sounded pretty harmless, didn't it? We just do summaries. But a summary is an evaluation of what is important. What is important? I can remember sitting in a curry house when ChatGPT was first born, with a young friend and colleague who was very enthusiastic about it. What should we ask it? What should we ask it? Ask it whether Margaret Thatcher was an environmentalist.
"Was Margaret Thatcher an environmentalist?" "Margaret Thatcher was the prime minister of the United Kingdom from 1979. She is not normally regarded as an environmentalist." I said: what about her 1989 speech to the UN General Assembly warning of climate change? And then came the petulant teenager response from ChatGPT: most people probably wouldn't think of Margaret Thatcher as an environmentalist. But what is the thoughtful answer to my question? What is the best summary of a case, or of the facts, or of the interview with the client? These are evaluative exercises. They don't lend themselves very well, I think, to automation.

It hadn't stolen your book yet – that's the thing. If you ask it now, it'll know. There's no doubt your book is in there; it's been gobbled up by AI. And I think that's a really good point, this question about summaries and note-taking. It's really tempting to see those things as just administrative tasks, but they're actually thought processes. It's about you thinking about what matters, what's important – and that might be legal points, it might be factual points – and different people might take different things out of it. Certainly from the perspective of lawyers using generative AI to draft skeleton arguments and pleadings: what you're being paid for is your ability to think and strategise – to be human, if you like. If you're relying on generative AI to do that, you're just not doing your job. It's that straightforward. One of my latest articles was called 'AI makes you stupid' – if you want to go and look it up, you can tell what the answer is; it's not a question, just a statement. There is increasing research showing that reliance on generative AI affects your thinking process not only while you're using it, but further down the line. One study ran three cohorts: one lot were asked to write essays using generative AI, one lot using Google or other search engines, and one lot using just their own brains. Unsurprisingly, the brains of those working unaided lit up while they were writing the essays – while they were being monitored, it was clear they had a lot of neural activity going on – and not so much the other two, particularly not the generative AI cohort. They did three sessions like that. What they also found with the generative AI cohort was that they couldn't remember what they'd written – which of course they couldn't, because they hadn't written it. They couldn't tell you what it was about; they couldn't quote from it, because, effectively, they didn't write it. The cohort who did their own work could. But the really disturbing thing is that when they switched them around, the people who had been using generative AI performed much less well on their own than they had to start with.

Well, can you remember phone numbers anymore? No, I can't. And pilots have been deskilled by so much autopilot. And so it goes on. And GPS as well: there's evidence that reliance on GPS has actually reduced our brain capacity for spatial awareness and finding our way around.
So I think you do need to think ahead: if you're saving here, what might the cost be further down the line? And I should confess, I'm not Scottish – although I did study in Edinburgh – but I am from the Isle of Man, so I do have a cultural disposition to rain on everybody's parade. Maybe as a lawyer that's a good thing, because you need to be thinking about the worst that could happen. Looking down the road is really important, particularly in the justice sector: what could possibly go wrong, and is that worse than what might happen if I don't do this? It's about weighing up those questions really seriously and practically. You wanted to talk a little bit as well about live facial recognition, which is obviously a very live issue.

I sort of introduced it earlier. You know what I'm talking about when I say live facial recognition? We typically have a camera on a police vehicle, and we go and park that vehicle on Oxford Street. Why did they choose Oxford Street? Because Rolex watches were being robbed on Oxford Street, and that is very distressing for tourists and rich people. But they've picked various other parts of the country to deploy this experimental technology too. You park your police van in the street. The camera is circling, picking up the images of innocent people walking down the street. But the police, with the tech company they're engaged with, have constructed a watch list of faces that may be matched with the strangers walking by. And if there's a match, the police will use their stop and search powers. This is suspicionless stop and search: they've triggered those powers in this particular area for stop and search without suspicion. If there's a match, the person gets stopped and searched. Now, I went to a secret, invitation-only Met police briefing on this pilot a couple of years ago, and what they revealed was that they were a law unto themselves. They decided who to contract with, on a local basis. They decided who was on the watch list. And by the way, the watch list included missing persons and victims of crime as well as potential suspects. Some people disappear for a reason. Some people leave abusive relationships, or organised crime, and go off and start a new life. But you're on the watch list as a missing person, or as a past victim of crime; you're walking down the street; you get stopped and searched by the police. Some people are now on police databases because they were on protests – and we're having a very live debate, certainly in England and Wales, maybe in Scotland too, about the right to protest. Cameras are ubiquitous, and even without cameras, so many of the population's facial images are now online that you can pretty much achieve total surveillance of the population, if that's what you're trying to do. The thing about live facial recognition is that we're not looking for a suspect because we've picked up CCTV footage of the person who robbed the bank, or whatever it is. We are scooping up innocent people living their lives and constantly matching them against these watch lists, which can be accurate or inaccurate, biased or less biased.
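To make concrete what "matching" means here, a minimal sketch of threshold-based face matching against a watch list, assuming a common embedding-similarity approach. The deployed vendor systems are proprietary and unpublished, so this illustrates the general technique, not any actual police system:

```python
import numpy as np

# Hypothetical operating point, chosen by the operator, not by any statute.
SIMILARITY_THRESHOLD = 0.6

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face: np.ndarray,
                            watchlist: dict[str, np.ndarray]) -> str | None:
    """Return the best-matching watch-list entry if it clears the
    threshold, otherwise None (the passer-by walks on unflagged)."""
    best_name, best_score = None, -1.0
    for name, ref in watchlist.items():
        score = cosine_similarity(face, ref)
        if score > best_score:
            best_name, best_score = name, score
    # This one number encodes the whole trade-off: lower it and more
    # innocent passers-by are stopped (false positives); raise it and
    # more listed faces walk past (false negatives). Nothing in the
    # maths says where it should sit - that is a policy choice.
    return best_name if best_score >= SIMILARITY_THRESHOLD else None
```

The threshold, the contents of the watch list and the quality of the reference images are all choices made by whoever operates the system – which is the point of the "no statute" argument: none of those choices is currently visible to, or set by, parliament or the person being stopped.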
And, crucially for me, there's a total democratic deficit, because the public never had a proper debate about this and there's no statute governing it. We've had the Police and Criminal Evidence Act in England and Wales since 1984, which tightly regulated police powers and put as many as possible into statute, rather than just leaving them in the common law. And that was an innovation under a Thatcher government – I say again, in 1984. What are the powers of arrest? How should you be treated in the police station? What offences are arrestable? Which are less serious and not arrestable? And yet all of this is going on without legislation, without law, because the police got taken to lunch by the companies and decided to use public money to enter into contracts. And they think it's going to save police time: we don't have enough officers, we've got morale problems, we've had discipline problems, there's a new scandal every week on Panorama, we can't recruit enough police officers, so we'll use the technology and that will be a fix. Well, I really question that. So there are ongoing legal challenges about whether this regime even complies with Article 8, the right to respect for private and family life, which allows interferences only in accordance with law. There are discrimination issues, and there's a democratic trust issue, because nobody asked the public, let alone their representatives in parliament.

And I think that goes back again to the Post Office scandal. It's really important to remember that it's not the technology itself that violates people's human rights; it's about making sure that when you use technology as a public authority, you use it in a way that complies with human rights. And we really have, I think, in the UK more broadly – Scotland has maybe been slightly less affected than England and Wales, though I don't know where it stands today – rather forgotten the nuts and bolts of the Human Rights Act and how to think about it. We've heard about data protection impact assessments. I would urge you, if you're thinking about adopting technology for any reason in the justice system: do a human rights impact assessment. Look at the Human Rights Act, look at all the rights, and consider how they might be implicated. And it may well be that the answer is you can't do it. Be prepared to just say: can't do it, do something else. There will be areas in the justice system where technology will absolutely help and improve things, but they won't necessarily be the ones you're hearing about on a daily basis. And going back to that question of human rights, and the importance of human rights and the rule of law for our societies, and this question of protest being under attack: if we end up with laws that restrict our right to protest more and more, and you combine that with live facial recognition, there's a chance of being picked up maybe even because you happen to be in the area. You weren't even going on a protest, but you'd already done three, and this was the fourth time you were in the wrong place at the wrong time, and you get picked up because you're a persistent protester. These are things that are becoming increasingly of concern, not just in the UK, but globally. We really need to go back and think about human rights and the rule of law, and defend them.
And in both my books I looked back at technology in the past, human rights in the past, and where human rights laws came from. I've got one quote, from Albert Speer, one of Hitler's ministers, who at Nuremberg had a sort of epiphany and gave a final speech about technology and human rights in that context, given the way that technology had been engaged ingeniously – including with private companies in tow – to support the Third Reich. When was Nuremberg? The late 1940s. He said: "Today the danger of being terrorised by technocracy threatens every country in the world. In modern dictatorship this appears to me inevitable. Therefore, the more technical the world becomes, the more necessary is the promotion of individual freedom and the individual's awareness of himself as a counterbalance." And I think it's so important to remember the lessons from history, and remember where human rights came from and why they matter for all of us – not just for defendants or victims in the criminal justice system, but for all of us. So again: focus on human rights, on human rights law, and on the very practical way it applies in specific cases when you're thinking about whether or not to adopt technology – technology which may have pluses and minuses for human rights in different use cases – and keep that at the heart of your thinking and decision-making.

And Susie's book is called Human Rights, Robot Wrongs. It's published by Atlantic. Mine is called Human Rights: The Case for the Defence. It's published by Penguin. And that's all we're selling. Thanks a lot.