Justice, Disrupted
Panel discussion and audience Q&A – In the Age of AI, can justice be smarter?
National Event 2025 Special
At ‘In the Age of AI, can justice be smarter?’ expert speakers Claire Feasey, Harmeet Sandhu, Dr Susie Alegre and Shami Chakrabarti gave an overview of the potential of AI for improving efficiency and driving change, whilst keeping a clear-eyed view of the rights and well-being of individuals. The talks were followed by an audience Q&A.
In this episode:
A full panel discussion between Claire Feasey, Harmeet Sandhu, Dr Susie Alegre and Shami Chakrabarti. The session is chaired by Mairi Clare Rodgers and questions are taken from the audience.
We’re going to start off with... You've all been using some tech at the break, I think, to vote on our first question: how does AI affect the most vulnerable people? Thank you, Freya. So shall we start off with... well, we’ve just heard from Susie and Shami actually. Claire, do you want to kick off?

The most vulnerable people are probably the victims, and we also have offenders who are very vulnerable, and they all have families. When we look at how technology can support those people, I think we need to make sure that whatever we do with technology, and Susie and Shami have been very clear about some of the risks with technology, these models that are used have absolute explainability and transparency, and the code should be seen. You should be able to see the code. If you can't see the code, you should not be using it, right? You should not be using something when you do not understand how it's built. But in terms of looking after those who are vulnerable, you're also going to have people who are digitally excluded. Digital exclusion is a real thing, and we need to make sure that there are backup solutions for those who are digitally excluded. I have an 85-year-old mother. We have the NHS app in England; IBM support the NHS and we work with the NHS on it. She cannot access her blood tests, and I've shown her I can't tell you how many times. So actually she needs someone to explain it to her. She needs a doctor’s surgery telephone number so she can speak to someone, and we need to make sure that there are backup solutions like that for those who are digitally excluded as well.

A great example from Claire, actually, and I think it's not just the 85-year-old mother. There are so many people who, whether they can access an app or not, need so much more. Victims need a human to hold their hand through one of the worst experiences of their lives. That's rape victims; that's pretty much anyone who's been a victim of crime. And yes, I do agree that it ought to be possible to follow the progress of your case and have information about appointments, the things you would wish to know as a victim about the progress of a case. For too long in our criminal justice system victims have been a sort of collateral part of the system rather than at the heart of it. And actually it's often been the European Convention on Human Rights that has vindicated victims' rights, because in the traditional common law system we've got an adversarial system: prosecutors and defendants. It's the ECHR that said, "Oh, actually, cross-examining a rape victim in person is inhuman and degrading treatment." That used to happen at the Old Bailey on a regular basis before the ECHR. But just as it's good to have information at your fingertips, if you can look at your blood tests or whatever, you still want the doctor. You still want the victim support at court, and maybe the victim who comes to court (and, I'm afraid, victims will sometimes have to come to court, because that's a fair trial) should have a separate waiting room from the waiting room where the defendants are. And that's Susie's point about basic old-fashioned infrastructure: that separated waiting room can't be solved purely by an app. And I think the other thing we were talking about is people's jobs being made redundant because of technology. Now that's happened!
You know, remember the miners; we've had de-industrialisation and now we've got a new industrial revolution. There are good jobs for people in the caring economy. There are good jobs for people holding your 85-year-old mother's hand through the NHS, or indeed a victim's hand through the criminal justice system. And we should be investing in those things. We can't ‘tech’ those away. We can use tech where it's appropriate to support information sharing, but we can't take the humanity out of the system, because that is not justice.

I think as well, on the vulnerability question, and, I mean, yes, victims are obviously vulnerable, but it's really important in the justice system just to remember that everyone is innocent. Everyone is presumed innocent until proven guilty. And, I mean, the example I gave about the council tax: that was absolutely terrifying and gruelling for six months of my life. And I'm someone who has the capacity to fight about it. If I didn't, I don't know where I'd be. And if you can even imagine the postmasters, if you imagine that you're suddenly accused of something because you've been flagged, that is absolutely gutting for anybody. Those lives and marriages and businesses destroyed, those four or five suicides. And Fujitsu and the Post Office said: ‘This is perfect technology’. ‘This is going to speed things up’. ‘This is going to be efficient’. That's why I say we should have ‘the Postmasters’ tattooed, for whenever anybody is trying to sell us a tech solution to a complex justice problem in particular.

Well, and that shift as well in the justice system to not having a presumption that the computer's right. Yeah. You know, that's extremely important, because otherwise you're flipping the presumption of innocence, and I think that’s a really big problem... Do you remember this? Traditionally, under the Police and Criminal Evidence Act, whenever the prosecution wanted to rely on evidence that came from a computer system, they had to at least make out and demonstrate that the system was working properly. And at some point, in the late 80s and early 90s, it was decided that this was too onerous on the prosecution, to have to even just produce a certificate or have someone demonstrate it; no doubt it slowed down the criminal justice system. And so, in the Youth Justice and Criminal Evidence Act 1999, a very innocuous provision (I think it was section 60 of that Act) simply repealed the relevant provision in PACE. That little innocuous repeal that people barely noticed when the legislation passed went on to cause great hardship to the postmasters, because they're going, "But this can't be true. I know this doesn't add up. I know there's a problem." You've all seen the TV show, right? You've all seen Mr Bates vs the Post Office. They're sitting there through the night looking at their records. They know that there's something wrong with the system, but there's a presumption in law that the computer system is working. Now, that presumption is still on the statute book, and I say it needs to be reversed.

Yeah. And that's, you know, being accused of crimes that didn't even exist. It's not being wrongly accused, or being the wrong person. It's literally made-up crimes. And so I think you really do have to keep that front and centre. What is the purpose of live facial recognition technology? What is it? Is it preventing crime?
Okay, well then what kind of crime? How serious a problem is that? Is it justified? Is it necessary? And crucially, is it proportionate? Proportionality is nearly always the key to every human rights law question. And live facial recognition on every street at all times is probably not going to pass that test, is the reality. Regardless, just because we can do something doesn't mean we should do it, right? Just because we can do it and it's cool technology. Look, I've got an iPhone. I live in the cloud like everybody else. All my devices talk to each other. It's cool, convenient technology, but that doesn't mean it's always right, or the right thing to use in a particular context. And it's not just in justice - it's across the piece. I mean, I think in the future we've got to say that this stuff has to be properly regulated. As with pharmaceuticals and weapons, some of this stuff needs to be regulated. Some of this stuff needs to be banned. Some of this stuff needs to be banned for some uses and not other uses. And in some cases there needs to be sovereign capacity, i.e. the state should own the technology and the data, and it should not necessarily be in private hands. But this is a debate that we really need to have, because the technology and the business and the political appetites move at pace, and the ethical, public and legal debate is way behind where it needs to be.

I agree with a lot of what's been said. I live in Oxford and they're just introducing a congestion charge in Oxford. There's a huge population who do not know what to do, and they have not done any human rights impact assessment, for sure, and that will exclude a lot of people - elderly people. We have a multicultural society now, and I think that AI, or technology, will exclude some part of it. On the other hand, it does help: in our courts we have a lot of multilingual, multicultural people who come in through the court system, witnesses, and it really helps them if they have a translated form in their own language. From the accessibility perspective that really helps, and that's what I'm saying: use the technology for a clear purpose, but also have a human in the loop at all times. That goes without saying. When we were testing these technologies, we could see hallucinations. We could see bias built in everywhere. But that was the reason that we stopped, and we tested, and we put a lot of guardrails in, and we do have the human rights impact assessment...

Just to ask you about that, just to understand in a bit more detail. It's one thing to use translation technology to help you translate your court forms into umpteen different languages, but what about a criminal trial, let's make it a serious criminal trial, say a rape trial, where the defendant or the complainant, one or both of these key witnesses, needs interpretation? Do we not think we might want a live human interpreter for that trial, or are we going to rely on some Alexa-type device that we've created for the purposes of rape trials?

No, again, as I was mentioning about the translation journey and the proof of concept we're doing, we are at a very early stage here, at ideation, understanding the courtroom setting and everything. I think that all these solutions should be used as a backup, or alongside something else.
Absolutely. The emotions of a witness have to be taken into account, and a machine can't really translate how something has been said, but in the courtroom the judge can see those emotions even if they're in a different language...

I think, again, it depends very much on the circumstances, and I can see how translation, and even live interpreting, might be a good use, particularly in cases where we've got limited human capacity for that. In some cases you might just not be able to get an interpreter, and there are realistic problems potentially with human interpreters as well; it's not all rosy necessarily. So I think it is very much a detailed, use case by use case, circumstance by circumstance question.

That’s... absolutely... I wanted to give an example. One of our Sheriffs Principal, Sheriff Principal Anwar, was on the bench with an interpreter translating. The witness was speaking in Urdu, and the Sheriff Principal actually speaks Urdu well, and she had to stop the interpreter to say: whatever you just said is wrong. So, you know, we will have the same things with the human element as well. I want to say this again - a human has to be in the loop. We are not advanced yet.

No, look, I get that, and everybody talks about the human in the loop. Everyone talks about guardrails, but we need to drill down into what that means. Absolutely. Where in the loop? Which human? Which cases? What are the limits? What is proportionate? And as I just said, we can't just be making this up as we go along. There have to be rules. I mean, this should be governed by court rules. Absolutely. When is it appropriate, and when is it not appropriate, to use a robot interpreter because of the lack of availability of a human interpreter? And yes, human interpreters can make errors, but they face consequences as well. They can lose their jobs. There are consequences that come from humans getting it wrong; less so with the AI. I've heard people, enthusiasts, say, well, human judges are fallible, so why shouldn't we have AI judges?

No, see, this is the point. When we talk about the judges and everything, I just think that a human in the loop... I cannot see a time coming when judgment will be done by AI. I just don't believe in it; there are too many elements, too much involved. But what we can do, again, is take away the burden of administration, and that's the key point: everything we're trying to do is about taking off that burden, that noise.

And so then we have a debate and a discussion and an examination and a detailed evaluation about what is administration. Yes, what is just administration and what is the administration of justice, and where those lines are, and that is not completely straightforward either. But we need to have that kind of conversation.

I think legislation stops us quite a lot. So, I hope I was clear in my speech that it's all well and good when we see this shiny AI which can solve everything, but we have to be careful with this – people's lives depend on it. And that's what I was trying to portray: there are models where, even if you see the code, it doesn't work like that; it's not that simple...
People's lives, really, people's life and liberty, depend on this, and this is the reason, this is the thing that we're doing at the courts: to look for the purpose, to see which uses are lowest risk and highest benefit. We're not really using a lot of generative AI, but even with that we are trying to see what can go wrong, and that's the main thing; there's a lot involved.

Fiona Rutherford put it really clearly recently. She was at a public committee, and I think any exploration into where AI can be used needs to be clearly understood in terms of the impact on the users, the outcomes that it's expected to deliver, and how it's built, and to take very small use cases. She was talking about actually starting at the ground level with the repetitive-task-type stuff, rather than anything that affects someone's liberty. And that ground level has got to be some of the paper-pushing work that could be sped up. Now, I use AI all the time and it produces summaries and reports, but I read them all. I read them and then I think, "Oh, I forgot that bit, but it's forgotten this bit". And then I use it to produce a summary of my meeting notes or whatever. It needs checking and balancing right now, but today AI is as bad as it's ever going to be, because it's improving exponentially. And I think by looking at some of the repetitive tasks that do not impact decisions on people's liberty, so the administration of justice, let's look at some of the paperwork and the scheduling and predictive maintenance of buildings and all of those things, and how we can make sure that victims are not coming into contact with defendants, and so on. All of that can be made smarter.

The summaries are the thinking, though. I mean, certainly for me, the idea that someone's going to give me a transcript that I'm then going to have to read is a complete nightmare, as opposed to the notes that I'm taking as I go through something, noting what I think is important. That's much quicker than having to go through a transcript, or somebody else's summary, or a technical summary. So the summary is the justice, and it depends what it is, and on the public's understanding of it. I mean, I'll give you an example: how many times have you read about a case in the newspaper and thought, that's not quite right, that's not quite what the decision said? Let's take something really sexy and controversial, like For Women Scotland in the Supreme Court. Now, a summary of that is really, really crucial, and you will get a different summary depending on which newspaper you're reading. So, is the summary ‘a woman is a biological woman’ - full stop? Is that a fair and accurate summary of For Women Scotland? Or is it ‘for the purposes of protection under the Equality Act, a woman is a biological woman’? Is that a fair summary? I could give you three, four, five versions without even going to AI, each with a slightly different emphasis or spin on that judgment. And it's really important as far as I'm concerned; that's such a controversial and important decision from our Supreme Court. I'm picking that, you know, to entertain you. But it's equally important when it's a summary of a particular family's case, one that determines who gets custody, or who gets the house, or who gets access to their children, or who gets deported from the country. You know, the summary is the justice. Yeah.
So we just have to be a little bit careful about what is just paperwork, what is just administration, and what is justice itself, the thing that inspires confidence or scepticism in the rule of law itself.

I think, as well, for everybody working in the justice system, there's this question of accountability, and, you know, you mentioned it earlier: there are consequences for a human. My number one prediction is that when we start seeing cases relating to technology, they're likely to be judicial reviews about technology being used by public authorities. That will be the front line. It affects people's lives directly. But also, it's easier to bring a judicial review than to try to sue a company.

Well, people will start arguing that they didn't get a fair trial, if there's a suggestion that too much of the actual judicial part of the process was not conducted by a human. People will argue that they didn't get a fair trial, and if there's opacity around the decision-making, how can they challenge it? How can I challenge something that I cannot see? If AI is making part of the assessment of me, whether it's for sentencing or parole or prisoner categorisation, anything that's going to affect my quality of life, how long I stay inside, what conditions, what licence conditions, I'm going to start challenging that, because I don't think I've had a fair shout. If the judge relied on the AI, and the AI was making assessments based on data that I cannot see, it may or may not be accurate, but if I can't see it I can't challenge it. And of course this is already happening in insurance and in employment: people not getting job interviews on the basis of an AI tool that was used by the employer. That’s true. Yes. The employer saved lots of time, but of course the consequences play out in court later.

I'm sure we must have some questions from the room... so, ah, yes, Keith.

I suppose I should fess up first: the only reason I can hear you is because of AI. My hearing aid has got an AI chip in it which allows me to participate in this, so I'm not anti-AI, I'm not anti-technology. I suppose my question relates to: is there a threat in AI that we conflate efficiency and effectiveness? I absolutely agree with the debate about what is justice, but I think it's bigger than that, because, for example, nobody wants to stand in the way of making the court system more efficient, but the danger in that is beyond the court. If you speed up the justice system, faster, faster, faster, it reaches the decision point, in Scotland, with the sheriff or a judge. What happens beyond that? At the end of court, you're either in prison or you're in the community. So if you're in the community, think about the resources that are available to the community. If you make justice faster and faster and faster, you need to have something at the other end that supports people in their journey beyond that immediate point of justice. If we keep processing people faster and faster and we put them back into the community, how do we keep pace with that? And that's something that was mentioned earlier on, I think, when Susie and Shami were talking about where you put the cash.

Just to go back a little bit, I would answer that differently.
There are so many different scenarios that you couldn't program something, in an AI way, to answer the question; that's the human component, and that's why you would always need it, because there are so many variables, and this is just mathematics. So you will always need that. And I think the key thing here is that we have to look seriously, as has been said by everyone, at the use case, and that's about testing everything extremely well, as we have done in Scotland. And on one of the points mentioned earlier, I think it's important to recognise that there is nothing sexy about providing brilliant networks across the court system, but that's what the Scottish Government invested in, making sure that the network and the Wi-Fi capability were evenly spread across everyone.

Now, my background is in banking, and believe it or not, 30 years ago, fully 30 years ago, the banks tried to get rid of cheque books. They still haven't quite managed it. But look how far we've come in so many ways in the way that we can now manage our money. We didn't say, "Let's stop. Let's stop what we're doing and give everyone the opportunity to move at the slowest moving part." We looked at where the fastest moving parts were that were safe, where we could easily move into those spaces. So that's why I think the debate today has been so good: it's given us both sides of that. And I am not at all in favour of any sales pitches that look to do something that looks absolutely fantastic and doesn't work at all. That's what we need to be guarded against. We need to make sure that everything we put into this has got those safeguards. And what are they? Well, we've got lots of those safeguards with people in the justice system just now. Usually when you go to a courtroom, it's the judge, the sheriff, who decides how we're going to take these things forward, because we have got lots of capabilities. But it's back to what I said earlier with the variables: there is never a one-size-fits-all in the justice system. And that's why we need humans, to make sure we take the correct path forward. And I think some of the things we've done, although they seem so incredibly slow, have actually been quite quick, and we don't get second chances with a lot of this. So we need to be more cautious, make sure the testing's there, and pick the right use cases to make a difference. And there are many, many of those in the justice system without actually impacting a lot of the things that have been spoken of. So it's how we use this, and how cautiously we use it, going forward. We can't ignore this.

I'll make one more point, because it was mentioned earlier and I think it's related. When you look at cyber security just now, I invest infinitely more resources than I did previously. The criminals use it just now. They have no set of rules, nothing. And if we can't fight that fire with the same fire, we will lose. We must open our eyes. The risk is now far greater not to use it where it needs to be used, because we'll expose ourselves to so many other things, because we've chosen the digital route and we can't unpick that. It's like uninventing electricity; we'd stay in the caves. We need to move on with some of this, but cautiously, really cautiously.

I think that's really, really interesting, and the banking example is obviously a very good one. So many of us now do digital banking. What is a cheque book? You know, what is cash? Yes.
Have you used cash since COVID? I haven't, you know. But at the same time, there's been an exclusion that's come with that. There are banks now who will not open a bank account for a poor person. And so with the digital exclusion comes exclusion from the economy. But I think in Scotland, because of the size, because of devolution, because of a devolved criminal justice system and health system, there is actually an opportunity to be a leader in the regulation of this technology, and there should be. You've got a smallish population, you've got a devolved situation, with your own parliament, your own criminal justice system. Scotland could innovate to be a world leader in regulation and legislation, and you start with the public debate, and then you have political leadership and debate about what the rules should be. So we were talking about the court system, and we can use the tools, as we've heard, in administration but not in justice itself; but all of that should be in law. The actual rules about what we do and what we don't do should not be ad hoc. They should be in rules, and the rules should be under a statute that could be passed in the Scottish Parliament this year or next year, setting out what those guardrails are, because otherwise we're all just making things up as we go along.

Shami, I absolutely agree with this, but the pace at which these tools are coming out... But you don't have to take them. No, absolutely. You can just say no! Absolutely. At the pace this is coming on, the policies that I was talking about earlier are not catching up. But your representatives in Holyrood should be asked to engage with this and to legislate for the framework. And that's what we're doing now. Yes, absolutely. But it also needs curation, right? Because the technology is moving at such a pace that policy needs curation so that it keeps pace as well, because you don't want a policy that's suddenly outdated. You need a framework under which that can be agreed.

We do it for pharmaceuticals, don't we? With pharmaceuticals, you have a regime where there's an approval regime, in your medicine standards, in your NHS procurement, whatever it is. People are innovating, trialling new drugs the whole time, but before they can be unleashed on the population as a whole, they have to be approved. You could go down that regulatory model.

I think, though, going back to your original point about how things have to be done in accordance with the law: well, if you haven't got a legal basis for it, you can't do it. And actually, we do have the Human Rights Act. It applies here as it does south of the border. It's been very much forgotten. But that's where people are going to get sued: because you failed to ask the really basic question of ‘is it in accordance with the law’? And on your point about the hearing aid, I actually did have an acknowledgement in my book about the use of Microsoft Dictate, because I had a frozen shoulder. Half the book I wrote by hand and then read into Microsoft Dictate, because typing was hurting my shoulder. I think it's really important not to treat AI as one thing. It depends on the technology, it depends on the use case, it depends on the context. And that, I think, is the really important question: it's not about being for or against technology. It's about identifying what the technology is, what it does, and whether it's cost-effective.
I think all of those questions get lost in a very high-level debate, and in the argument that says, oh, it's moving so fast, we can't legislate. Well, we actually have the legislation; we just need to apply it. And I think one of the big challenges is access to justice. People are not able to effectively enforce the laws that we have because of access to justice, not because the laws are not there. But eventually people will be bringing those challenges, and then the answers will filter down. So there may be needs in discrete areas for new regulation and new laws, but actually, if you go back and apply the broader legal frameworks, you'll often find the answers there.

Let's take another question; we've got another question just here.

Thank you for this. My name is Yanosh. I'm from the University of Glasgow, I'm a criminologist, and thank you very much for this talk. I think the way the speakers have been put together in this room is highly interesting, the way there's an IBM representative, given what we're talking about here. IBM has a long history, and what I think is very important to highlight here is that we mentioned the postmasters, we mentioned the Third Reich, and electronic tabulators at the beginning of the last century were used, for efficiency, to improve the census of populations. The problem basically was that it took ten years to calculate how many people were in the country, and you could automate that through tabulators, and that was fantastic technology that was invested in, and it is still a leading computer company today. However, what it did was collect information on citizens, who voluntarily handed out information that became more and more precise, and which then became a fundamental tool for what happened in Germany, and Germany has been mentioned a few times today. And this is not simply Germany: the same happened in South Africa during apartheid, where the same exact technologies were used to exclude, segregate and target individuals. What I'm saying here is that what AI adds to all this is how much of this information can become automated, and the computational power with which the dots can be put together. It can be done for fantastic, legally protected purposes, human rights, whatever. But keeping all of that in mind, we need to remember that we have a history of how things have gone terribly wrong, not simply because the computer was not working or the system was flawed, like the Fujitsu system; no, these were machines that worked fantastically, but they were abused. So my question is: are we keeping in mind how things could go terribly wrong here, particularly given the political situation we're in? We're quite privileged here in Europe. There are other countries that have been using these technologies for military purposes, for warfare, systematically, on certain populations. What are the safeguards we're taking into consideration here?

It's a very interesting situation you describe: you bring together the background of a technology and then what it's used for. So, in essence, are we saying that we should never have invented the motor car because someone might be able to use it in a hit-and-run?
How you use the technology, and the technology itself, need to be split apart, because a BMW might be a fantastic machine for various people for various uses; the fact that it's used to drive into a bank is not BMW's doing, right? So you need to divorce the technology from the use. Here in criminal justice, when we're looking at this, we need to actually put the two together. We need to look at the technology and how it's used, and at the use cases, to make sure that it is following the rule of law, that it is transparent, that it is explainable, and that we pick off the more administrative tasks first: scheduling, predictive maintenance, making sure that court cases can be heard, that the ticks are all in the boxes for the evidence, and that the translator is going to turn up. All of those things that are more administrative, let's pick off those, because those small ripples have a big effect on clearing the backlogs. Okay? And clearing the backlogs means swifter justice. Then you think, where's the knock-on effect into prison? Well, 25% of the people in prison at the moment are on remand, so perhaps you'd be addressing some of that. But then how do you look after the offenders in the community, and how can technology perhaps support that, because there are not going to be enough resources within the budgets to do that, and how could you do that in a humane way? I think you've come up with a very interesting quandary, because you cannot blame the BMW for the hit-and-run. However, within justice, we need to look at the technology and make sure that it is developed within the safeguarding guardrails of explainability, transparency, privacy, robustness and trust. It is tested and tested and tested again, and goes through audits and checks for lack of bias, and so on and so forth. These are things that have to be done slowly, carefully, with accountability, because the BMW and the crime need to be put together when you're looking at these use cases.

Just to... I agree with that as far as it goes: of course you wouldn't ban the car because cars have been used in hit-and-runs. I suppose environmentalists might turn back the clock on the motor vehicle for other reasons. But what about the nuclear bomb? What about Einstein's remorse? The greatest mistake of my life, said Einstein not long before his death in the US, was writing that letter, that letter with other scientists to Roosevelt urging the acceleration of the nuclear programme. If you were one of those Manhattan Project scientists, and you could turn the clock back, I think a lot of people would, because a nuclear bomb is always a nuclear bomb. It's never like a carving knife, which can feed a family or kill a family. So what do we do? We ban carrying carving knives in public places, but obviously everyone's got a carving knife in their home. And that's more like your car analogy, I'm guessing. But there are some technologies that probably should be banned, some that should not be in anybody's hands, or should be in very restricted hands for very restricted uses. But I think in the end this is a democratic question. It cannot be left to the scientists, the technologists, the corporations or individuals alone. We have to have a societal and ultimately democratic decision-making process about this, and that will require law.

I think as well, on the accountability point, I would say this because I'm a lawyer, but it's much more complicated.
So it depends, always, and I think one of the real challenges with technology is where the accountability lies. If you're in procurement, you want to be thinking about your contracts and precisely where the accountability lies, because that's going to be the big legal area coming down the track: the ownership of the technology, in a context where the AI is being built on the basis of data that is coming from the public sector, but also the responsibility for when it goes wrong. So that, I think, is going to be a really big issue. And the Post Office. Exactly.

But I think on the question of what you ban, one of the examples that I've used, which I think is really interesting, is subliminal advertising. In my first book I looked at the right to freedom of thought, which is an absolute right. Subliminal advertising is banned in Europe. It was banned, and nobody knows whether it even works. You don't need to know that it works to know that it's an incredibly dangerous, undemocratic technology that should not be allowed. So in Europe it's not, and that's it. And I think what we've seen in the information space, including now with generative AI and very personalised engagement with AI, is a sort of boiling frog: we've actually got a situation of subliminal advertising, but we got there very slowly, so nobody really realised that that's what it was. But I think now that's starting to change. So in answer, sort of vaguely, to your question: absolutely, it's complicated. It's difficult, and that's why, for me, I want to focus on the human rights law and the rule of law and the democratic systems rather than on the technology. The technology is just the icing on the cake. It's the tools, but actually what we really need to do is to protect those systems, and if the technology in specific cases undermines them, then ban the technology. But that's going to depend... Or it might enhance them in other ways. But you start with what your society is about and what you're trying to achieve. That's where you begin. Whereas, in a sense, even the structure of today has been the other way round, hasn't it: we started with ‘this is cool tech’ and then at the end of the day we have to come and rain on everyone's parade.

The point that I wanted to make again was the lawful basis. We've talked about it many times, and I repeated it in my speech as well: there has to be a lawful basis, especially for using not only AI but any technology in the criminal justice sector. That's the point.

Thinking in terms of how we might legislate for something like this: Shami, you had mentioned how we have statutory law, which clearly sets out the legal code, whereas computer code is not codified anywhere; it's kind of hidden. With the discussion on explainability and transparency, what might a potential solution look like? For instance, the facial recognition technology that's being used by the Met, or the EBC transcription technology that's been used by SCTS. Would it be possible to legislate in such a way that public bodies using a specific tool for a specific purpose must publish the code that they're using, say in a public GitHub repository?
So, on that question, I so agree with you. I can imagine a future where, if some of this technology is going to be used in certain sensitive areas, the computer code could literally be a schedule to the primary legislation. Now, I might not be able to read it, but then some people can't read statutes; the point is that somebody else could read it. And so there would be an element of accountability and transparency and scrutiny. And there will be future legislators who will be able to read code, and if they can't themselves, they will have advisers who can. But I think you make a very important point: if some of this stuff is to be used at all, the computer code needs to be an addendum to the legal code. And even if it's gibberish to me, it doesn't matter, because it will be perfectly readable by someone else, and that is public accountability and transparency. Now, there are companies who won't like that, because they'll say commercial confidentiality, and the answer is ‘tough’, because if you want to be procured by the state, paid out of taxpayers' money, as Fujitsu was for that outrageous scandal (and by the way, they still haven't been banned from public contracts in the United Kingdom, which I think is totally bizarre, let alone shameful), if you want to contract with us in this sensitive space, this is the price you pay for the innovation. You get to do this innovative work. You get to partner with us. You get public money. You get the data. But in exchange, these are the rules of the game, and you don't get to hug that code close or lock it in a black box. There will be public accountability.

I completely agree, and I think that explainability and transparency need to be absolutely at the forefront. Any company that cannot give that explainability and transparency, and keeps it in a black box, as Shami says, shouldn't be invited to do work with us. And when we look at the postmaster scandal and the Post Office scandal: Fujitsu were handed ‘go fix it for us’ and they went and did it. Now, I don't believe that that's how AI should work, and I think when we're looking at technology we need to look at collaborating and working with the public sector so that they are in charge. These are tools, right? The use cases, and the things that affect your users the most, for example victim support and personalised journeys, or offender management in the community and mindfulness training, all of those things should be developed with you and your users in mind. Not bringing in a big tech firm and the big tech firm takes over. This is done in partnership, with real co-creation. The transparency and the ethical basis should be there from the get-go. And if a tech company can't give you that, then they shouldn't be working with you.

Would doing such a thing, say putting the code out there so that it's open source, create any significant security risk in terms of national security, especially if it's being used by police forces and the courts service? I can see nodding.
But further to that, would the large tech companies who are often brought in to do these contracts, like IBM, Palantir, Microsoft, potentially Google, be likely to withdraw from public sector contracts because of that concern about their confidentiality and/or code ownership, potentially then setting the public sector back with technology?

No, I don't think so. Yes, everything's developed in the open; however, when you're looking at areas that have got very secure use cases, you actually build a private system. And what we didn't get into today was sustainability. Karyn started the day, didn't she, talking about one ChatGPT query being one bottle of water. Now, if we're going to go with these massive models while talking about specific things, we need a small model. We don't need a model that can change a carburettor in a car or bake a Victoria sponge cake if we're doing one thing. You can have these private models where you can have the transparency in the code but keep the model from connecting back out to the internet. All of our in-house systems that we use ourselves do not connect with the outside world, and that's why we look after critical national infrastructure, and some of the big fraud detection for the banks and so on: because it does not go outside. So I don't think there's a problem with it at all. How you design it and how you work with the authority is the important bit, but the transparency between you needs to be there.

Where do we invest in the system? Because that's what the question was about: the system. And I think there is a real opportunity in Scotland for us to look at things differently, to look at much better collaboration, to look at the justice system, and any of the investigation and examination that we do, as a whole, rather than looking at each independent agency, because that's the part where, for me, AI could really help, with data that we can see all the way through. If you pull this lever, it might look good in this part of the system, but the impact is somewhere else in the system. We need to look at everything through that systems-thinking lens, which I think is really important.

Can I just make one other point with regard to the entire debate today, which I have found fascinating and really good and informative? It's a question I've asked a number of audiences in the past. I'm going to build on the cars: why were brakes invented? Anyone can answer. Why do you think brakes in cars were invented? So you could slow down? The reality is that brakes in cars were invented so you could go much faster, and safely stop and slow down when you needed to. That's why it's so important that we have all of these checks and balances: it might feel as if we're going slow, but I still think, as I've said earlier, we're going quickly, and we need to make sure that we've got the brakes in place, those checks and balances, to allow us to embrace the technology and go much faster. And that's why I think the whole debate today has been fascinating: it's given us those perspectives, those checks and balances. And that's what we need in Scotland. And we do have an opportunity, because Scotland has given the world so much in its innovation and its creation.
Lots of fantastic things for the good that are used everywhere across the globe today. This is another opportunity. And I agree with what everyone said: we're of a size where we could do something very special.

And that's a great place to bring this to a close. I've got so many questions as well. I'd like to have asked about Palantir and predictive technology; does anybody remember the film Minority Report? There is something around how we're predicting, and I mentioned at the beginning about who's at risk of violence... I mean, I think these are really big questions. And Ian mentioned what's happening at local authority level, and we already know that some local authorities are sort of running ahead while others have been slower to take it on. So there are some really fascinating questions. I hope you enjoyed it. I mean, I really enjoyed it. I feel like I know so much more, but I've got so many other questions. I need to read more. I need to be more engaged. And we've mentioned a lot about victims and whatever else; we also know that we've got people who are accused of crime, who are in the system, who are often victimised as well. We like to think of them as separate groups, and the Venn diagram is huge. I often think about that human element in the system when you've got someone in front of you. In fact, there is a sheriff here who, when I was talking about virtual courts (and I know this isn't about AI), said to me it's much easier to remand someone who's on a two-dimensional screen than a weeping person who's in the dock. And there is something around that, around seeing the person, that I think you can never forget. I trained as a psychologist, so I would say that I quite like the human endeavour. I like the fact that we're meeting each other today and not on a screen; I've really missed it. Human beings are born connected, and I think we should never forget that. Thank you so much for coming. Take your time, go and speak to someone, try and grab Susie and Shami and everybody else here before they leave, and have a great afternoon. But thanks very much for coming.