The Security Circle

EP 162: Guardrails, Justice, and Accountability in a Digital Age with Dean Armstrong KC — King's Counsel (Barrister), Experienced Litigator, Chambers & Partners No.1 Silk for Cryptoassets 2025

Yoyo Hamblen Season 1 Episode 162

Podcast Summary

In this episode of The Security Circle Podcast, Yolanda “Yoyo” Hamblen is joined by Dean Armstrong KC, one of the UK’s leading barristers at the intersection of law, technology, and accountability.

Dean brings rare clarity to some of the most complex and consequential issues facing society today — from AI regulation and agentic systems, to crypto and blockchain litigation, to safeguarding, abuse, and institutional responsibility.

The conversation begins with the rapidly evolving landscape of artificial intelligence, exploring the stark contrast between the European Union’s risk-based AI Act and the UK’s principle-based, lighter-touch regulatory approach. Dean explains why flexibility without certainty creates legal risk for organisations, and why AI is now too pervasive for governments to avoid meaningful intervention.

A central theme of the episode is human responsibility. Dean examines the legal and ethical challenges of assigning accountability when AI systems act autonomously, warning that innovation must never replace human judgement. He discusses the growing likelihood of litigation as AI becomes more capable, more opaque, and more embedded in decision-making across industries.

The discussion then turns to cryptocurrency and blockchain, drawing on Dean’s experience in major international crypto litigation, including work connected to the FTX insolvency. He challenges common misconceptions, explaining why blockchain is often an evidential asset rather than a liability, and why crypto and digital assets play a critical role globally — particularly in regions without stable banking systems. The episode also explores stablecoins, tokenisation, and why the UK risks falling behind in the digital asset economy.

Beyond technology, the conversation carries profound moral weight. Dean speaks candidly about his work representing victims in historic abuse cases and his growing focus on safeguarding in sport, emphasising that safe spaces are not optional — whether in workplaces, public institutions, or youth environments. He highlights how abuse thrives where oversight is weak and accountability is avoided, and why law has a duty to bring uncomfortable truths into the light.

Throughout the episode, Dean returns to a consistent message:

progress without guardrails is not progress at all.

This is a powerful, thoughtful conversation about guardrails, justice, and accountability in a digital age — and why the law must evolve without losing sight of the people it exists to protect.

About Dean

Dean is a top-ranked, award-winning King's Counsel (barrister) in the United Kingdom. He has represented clients in numerous high-profile cases and is a trusted advisor on a wide range of legal issues. He has deep expertise in criminal and civil corporate responsibility law, cyber law, artificial intelligence, data law including GDPR, blockchain, cryptocurrencies and NFTs.

His accolades include:

Chambers & Partners No.1 Silk for Cryptoassets 2024, 2025 (current)
Legal 500 Leading Silk for Crypto and Blockchain Assets 2025 Edition (current)
Frequent speaker at international conferences and in the media

Key publications include:

Cyber Security Law and Practice 
LexisNexis: first ed. 2017; second ed. 2019
with Dan Hyde and Sam Thomas 

Cyber Litigation: The Legal Principles 
Bloomsbury Professional: 2021
with Fergus McCombie and Ceri Davis

https://www.linkedin.com/in/dean-armstrong-kc-b5b2769b/

Security Circle ⭕️ is an IFPOD production for IFPO, the International Foundation for Protection Officers

If you enjoy The Security Circle podcast, please like, share and comment, or even better, leave us a fab review. We can be found on all podcast platforms, so be sure to subscribe. The Security Circle drops every Thursday. We love Thursdays. Hi, I'm Yolanda, and welcome to the Security Circle Podcast, produced in association with IFPO, the International Foundation for Protection Officers. This podcast is all about connection, bringing you closer to the greatest minds, boldest thinkers, trailblazers, and change makers across the security industry. Whether you are here to grow your network, spark new ideas, or simply feel more connected to the world of protection and risk, you are in the right place, wherever you are listening from. Thank you for being a part of the Security Circle journey.

Yoyo:

Okay. Finally, it's taken a while, hasn't it, to get together and do this. I'd like to introduce Dean Armstrong KC of Vanguard Chambers, an expert in crypto, blockchain, data and AI. In fact, I put in brackets "the data lawyer". Dean Armstrong, welcome to the Security Circle Podcast. How are you doing?

Dean:

Very well. Thank you. Very nice to be here. No,

Yoyo:

It's nice. I'm beginning a trend of people who are rocking up on TV occasionally, and I'm like, ah, he's on the list. You've got a lot going on, haven't you? But let's talk about AI legislation. This is one of the big subjects that's very topical right now. We know that we finally got some AI legislation in the UK. Good. I know people who have contributed to the writing of it, and that's something, but it's got pitfalls. I'd love for you to explain, certainly from your perspective, where we can see issues coming up in the future because of those pitfalls.

Dean:

Yes. The first thing to mention is that it's a stretch to say that it's AI legislation. There's a fundamental difference in the world at the moment, most particularly between the European approach to AI regulation and, as we sit here today, the current UK approach. The two are exemplified really in this way: the European Union has approved the EU AI Act, which is slightly weird since it's a Europe-wide matter but it's called the AI Act. That provides, and we can go into more detail later, pretty much a full regulatory process and profile of how things are regulated in relation to AI. It instils into the culture of regulation a number of different levels of risk: some things are prohibited completely, some are deemed high risk, some medium risk and some low risk. Depending upon the risk assessment, and it's a risk-averse piece of regulation, the obligations on the party, the deployer, the maker vary accordingly. The UK has taken pretty much a contrary approach. The UK approach is to build any AI regulatory aspects upon existing legal frameworks. The white paper, which is really the key document, was published in July '24, and the same pattern was then very much, I won't say confirmed, but adopted by the new government in '25. The UK is going down a much less heavily regulated road, a much more principle-based approach, to try and hit what it sees as the big problems with AI, but also not over-regulate it, because it sees its value to the economy. So it's going to be very interesting which works out better.

Is it a Betamax-versus-VHS type scenario, whereby one group has gone down the principle-based white paper route of let's adopt some practices and see what the individual regulators feel, and alternatively the European approach, which is very specific, very strategic, very ordered? So what we have is, I won't say a battle, but an interesting contrast, because the two are very, very different. It's interesting in a number of ways because when the UK was part of the European Union, we were very much in the vanguard of drafting the GDPR. And it's important to stress that the basis of the EU Act is very much a GDPR-type regulation. They're clearly not the same, but because all matters of AI are effectively fuelled by data, the AI Act is based upon similar lines in terms of the types of regulation, classification, documentation, process and practice that parties have to abide by. We left just after the implementation of the GDPR, but the UK has, as I say, since veered away from the approach the European Union is taking. And that's significant because there is at least certainty for parties in Europe, and I'll come on later to the extent to which UK plc may be affected by the EU Act. There is very much the certainty of process within the EU, whereas with the UK white paper approach, if I can use that shorthand, there is less certainty; some would argue better flexibility, and that may or may not be true. There is the overriding concern, if I'm going to be candid about it, that the principle-based approach leaves gaps and is left vague. I think vague is a fair expression.

Certainly in comparison to the EU approach. Was it left vague deliberately? The government would probably say so, but there's at least the sneaking suspicion that it hasn't really got around to thinking about it enough. And that aspect, I think, will cause potential problems for UK plc, because they won't really know what they have to do. Being, I hope, not flippant but accurate, the only real way is to come to people like me for an opinion, because there isn't the certainty of "the regulation says this". What the UK approach really has at its heart are three fundamental objectives: the use of AI to drive growth and prosperity, the increase in public trust in AI, and the strengthening of the UK's position as a global leader. Rather than appointing a regulator, the white paper proposes, and I keep coming back to the white paper when I'm talking about the UK approach, that existing regulators should extend their responsibilities by incorporating AI, and it sets out some aspects which complement that approach. I'm happy to go into more detail of what these look like if it helps, but there's a sandbox initiative, there's horizon scanning, there's promotion of interoperability with international regulatory frameworks, and there's voluntary guidance, and "voluntary" is obviously always interesting in terms of where it ends us up. And there's the establishment of technical standards in collaboration with the AI Standards Hub and the AI Council.

The quote in a nutshell is the proposal in the UK of "a proportionate, layered approach to applying available AI standards, involving regulators identifying technical standards and encouraging their adoption" in support of the integration of those principles. So it's very interesting. The European Union is, relatively speaking, very heavy on regulation, not unusual for European legislation; the UK, outside the EU, is very different, with a structured, principle-based approach.

Yoyo:

But we're humans and we don't really like being told what to do. The innovators hearing you now, the ones who want to generate the new excitement around AI and agentic AI, they're gonna hear what you just said and think, oh, this is ridiculous. You know, stopped here, stopped here, blocked here, blocked here. There's a big clash, isn't there, between the two? How can businesses who want to innovate, who are encouraged to innovate, do this in a way that's safe and have the protection later down the line that actually we did do the best that we could based on what we knew at the time? And I'd like you to do a little bit of horizon scanning yourself. I wanna ask you the question: where are we gonna be in 10 years' time? If we were to look at vaping as a metaphor for AI: everybody knows vaping's bad for your lungs, they've regulated it now, they're using enforcement now, but people will still do it. So where are we gonna be? Is business gonna have bad lungs, basically, in 10 years?

Dean:

Where do I think we'll end up? I think that probably we'll end up in the UK with some form of synthesis of some regulation. My personal view, and it is only a personal view, is that AI is going to be too big and too all-embracing for there not to be some form of government ability to intervene. As to where that might be: I've done some work on the government's proposals on AI and music, for example, and copyright, the use of previous works, how that affects and impacts on music composers, how their work can be used and adapted, and whether that should be sanctioned. My interpretation of what I'm seeing is that the government is very much directed at: AI is being used in music, let's have a look at how that works; AI is being used in the law, let's look at how that works. So the answer to your question about the state of the lungs is that I think we're probably going to have much more AI-specific sectors. That very much appears to be the case. Just to give you some background, the white paper envisages three layers. It talks about regulators creating and promoting adoption of sector-agnostic standards for risk management. In another layer, it talks about regulators providing additional standards addressing specific issues associated with transparency and bias, and bias is one of the big issues it needs to deal with. And also regulators providing specific technical standards. So what I glean from this is that it's going to be more bespoke, or at least the desire is to be more bespoke. One of the criticisms of the EU's approach, though I personally don't share it in the same way, is that the EU is a bit of a one-size-fits-all, or an attempt at one size fits all.

So what we have there is that it doesn't really matter which industry you are in: if your system is, for example, high risk, then you've got to do specific things, and the obligations are similar whichever industry you are in. The car industry has the same profiles, the same obligations in terms of processes, as, say, the tech industry does, or the building industry, or whatever it might be. I think the government, well, certainly they profess to, and when I say the government, it's only fair that I make it absolutely clear that this is not party political, because the white paper was under the Tories and the new government has then effectively, I won't say adopted it, although they'd probably like me to use the word adopted, but said let's follow the same principles. What they are seeking to do, or at least professing to do, is to say this is much more sector-based. And one of the interesting things about the white paper is that it doesn't define AI in a regulatory sense. It doesn't offer a strict definition of the term, unlike the EU, which gives lots of definitions of the aspects it's talking about. What the white paper does is define AI only in relation to two characteristics, which it says will generate the need for a regulatory response. The first of those characteristics is adaptivity: the training on, and operation of, patterns in data which aren't easily discernible to humans. Your point about perhaps the scary aspect of AI.

And, through that training, the ability to perform new forms of inference not directly envisaged by human programmers. That's a really key point. The second, related aspect of this definition-but-not-definition is autonomy: the ability of the AI system in question to make it difficult to assign responsibility for the AI's outputs. So that's really where the UK government is looking. That's where it sees the mischief, if you like, and it's there that it feels, or appears to feel, the need to intervene. And it's interesting, in a sense, where we started: you spoke about legislation. The fact is there isn't legislation; there is a white paper. We are potentially a long way from legislation. As you will be well aware, white papers are some way from the finished article; there's lots of discussion to go. So in a nutshell, the really interesting thing as far as I'm concerned as a practitioner is that it's a really stark difference. Take a really good example: a car company or a bank which has a European presence but also a UK presence, and there are lots of those. There's the potential, well, more than the potential by the look of it, to have to be heavily regulated if they're deploying a system in an EU member state, but to have the lighter-touch need to comply, not with the white paper, we're not there in legislation yet, but at least with the principles that the government has set down. That's pretty difficult. What do I advise? I advise that you start from the blueprint of the EU, you see where you don't have to be quite so heavily processed in the UK, and you probably end up, I won't say with a hybrid, but with a series of processes and programmes which actually look after your obligations in the EU but also allow you to comply at least with the spirit of the UK principles. But that's really difficult. Really difficult.

Yoyo:

There have been some cases, haven't there, where there's been AI disobedience. And when I was at Bletchley Park a couple of weeks ago for, you know, the local equivalent of an AI summit for tech leaders, I went in still very conservative around how we adopt AI, with guardrails. So this is my position personally: guardrails, guardrails, guardrails, you know, let's just do it safely. But I did come out of it, after a week of attending a few sessions, able to appreciate the innovator's perspective, because there are very differing standpoints and objectives and goals. And the common thread that most responsible people are talking about is having the guardrails, doing it safely, and having those human touch points along the way. But we've already seen, you know, it's like taking the child to the playground. You could watch the child 364 days a year; the child always goes to the swings first. The one day you don't watch the child, because you just assume it's going to do what it normally does, that predictability aspect, is the day it does something different. We've got to consider that, like a zero-day for us in terms of AI. We can't take our eye off the ball, can we? Because it's disobedient.

Dean:

It is. I'm not sure I quite share the view that the robots are taking over the world, though. I don't share that.

Yoyo:

Haven't you seen The Terminator, Dean?

Dean:

Well, I know, yes, exactly. But I'm still confident that that is still in the realms of science fiction. I'm not suggesting that there aren't capabilities we all need to be concerned about, but I do think it would be an enormous shame if the incredibly big opportunities that AI creates are somehow limited because of a fear factor. Because for me, the impact on daily life, the impact on the economy, the impact upon our quality of life that AI can bring is so huge that we mustn't be scared of it. Of course we need our political masters to make sure that it's, controlled is not the right word, but that we can see where the weaknesses and the concerns are and seek to mitigate any issues with those. But the white paper, in fairness to it, does set out five value-focused principles. Those, in a nutshell, are safety and security, which obviously means that regulators, wherever they may be, need to ensure that systems are technically secure and functionally reliable; that any AI system has appropriate transparency and explainability, and that's quite an interesting word, explainability, which effectively means telling us what we are actually dealing with here; and the others are contestability and redress, fairness, and accountability. It could be levelled at the government white paper that these are all a bit abstract, but it comes back to this almost philosophical debate: are we going down what appears to be the UK government's approach, a principle-based, light-touch look at the processing, or is it much more rigid and formulaic: you're in a high-risk category, therefore that means this; you're in a medium-risk category, therefore that means that. So from a legal point of view it's phenomenally interesting.

Yoyo:

I think from a legal point of view we're gonna see a lot more experts in this field. I can see insurance leaning into providing solutions for AI mishaps, maybe, I don't know. It feels very much greenfield at the moment, doesn't it? But I dunno about you, I'm thinking we're both thinking the same thing: that this is gonna be a very litigious space going forward, and internationally, my goodness. You know, business is global now. What sort of problems or legal gaps do you see going forward in terms of how we deal with this internationally?

Dean:

Well, there are some gaps, I think. We've already highlighted one, and it's not a gap so much as a potential forum for disagreement: the difference in how the EU and the UK, for example, regulate. Something which is, let's say, potentially compliant if the UK follows through on these principles may not, in fact often would not, be compliant in the EU. So I think we've got a conflict-of-laws issue there, which is pretty significant. The AI space is already awash with litigation. I've mentioned music before; there's various litigation there. There's art. There's also litigation against, or involving, the large language models, and I can't remember which one, well, perhaps I can, but I won't say, one of the large language models which basically scrapes data from across the internet, so that when you ask it a question, the answer you end up with may be wholly different from the reality. There was one example where a group of students asked a chatbot about their teacher. It just happened, very sadly, that the teacher had the same name as a convicted paedophile, and it therefore spewed out a lot of detail about someone who clearly wasn't the person they were asking about, which caused huge consternation, particularly for the individual, but across the board. So we are going to be seeing lots of litigation. One of the areas where I think we are principally going to see a lot of it is where the AI takes us, and something that you alluded to earlier: the lack of human involvement. Because what we mustn't kid ourselves about is that AI, and AI is a generic term, so when I'm using it, it means a multitude of things.

But the AI that we have at the moment will obviously develop. It will become more and more sophisticated, it will have greater ability to do more and more tasks, and it will have adaptability. Where we are is that as AI becomes more trainable, as AI enjoys a greater ability to think for itself, that's where I think the litigation is going to be really rife. Because one of the things we have in life, particularly these days, and in my advanced years I've noticed it much more than used to be the case, is this culture of there's always someone to blame. And that's a really interesting concept around AI.

Yoyo:

Legal's thrived on it, hasn't it?

Dean:

Yeah. Well, you know, I mean, it's very, very rare these days that there is just an accident. Most of the time, even if it is just an accident, most of the

Yoyo:

time it's stupidity.

Dean:

Yeah, but people are always looking for someone to blame. Lawyers are very quick, and I'm speaking against my own here, lawyers are very quick to jump onto situations which actually were simply accidents. But there is a responsibility. I think one of the really interesting areas is going to be, and again, if there's lots of background noise you'll have to stop me, because the lady's come to pick up the dog now, but one of the areas is going to be: if you have an AI system, who has the responsibility? There is actually an example of where you've got a lot more help if you are in Europe than if you are in the UK, because they are classifying the system. The AI system is being classified as prohibited, high risk or medium risk, and so the ability to find the root of potential responsibility is greater than if you have a principle-based white paper which defines the AI system in the way that I've just described. So I think it's legally very interesting, and I think lawyers are going to be very much occupied by those kinds of concepts. There are also challenges within the legal profession itself. The professional bodies, such as the BSB, obviously for barristers, and in particular the Law Society, which covers solicitors, a much bigger body, have already put in protocols about when AI can be used and in what circumstances. It shouldn't be used indiscriminately; there is a place for it. So there are the challenges of the legal concepts, but there are also the challenges to the lawyers themselves and the ability of a lawyer to use AI processes. It comes back though, as with everything, to: are humans going to be involved in the operability of these systems, and to what extent, but probably more importantly, to what extent can they control the output? Mm-hmm.

If someone had to ask me, in a nutshell, what the challenge is going to be, I would say that's what it is. Because there's absolutely nothing wrong with, as I do in my work, if you get a very large, complicated case, feeding the details in and saying, can you give me a timeline? That's an enormous help to me, because timelines are everything in legal cases; it's where the detail is, and that's key. That in and of itself, in my opinion, is of huge value, a value add. What is less value-add, and much more potentially pernicious, is basically saying: I'm really busy, please draft these pleadings for me, or please write this letter, or please draft this email or this response. Because the moment you are giving yourself away to the AI system, as a lawyer or, candidly, as any professional or any individual, that's when you are losing control and you are allowing the system to be the thinker as opposed to you. And that, in fairness to both the UK government in its approach and the EU as well, is, I think, giving them the benefit of the doubt, at the centre of their thinking. The EU's thinking is, as it always is, based upon the impact on fundamental human rights and freedoms, so processes are designed to regulate anything which might impact those. The UK is coming at it, as I say, from a very different angle, but by virtue of its concern over fairness and robustness and transparency and the like, it's also coming to look at that mischief of: are we giving over control to AI systems? And that is interesting.

And that, again, harks back to something you asked earlier about 10 years' time. It will be the ability of the regulators, in whichever way they perform their function, to allow AI systems to develop so that they're a thing for good, but not to develop so that they overtake, and more importantly take over, our thinking.

Yoyo:

Anyone who's interested in looking at mischievous, funny examples of how AI has behaved: there's a really good example of the cleaning robot. I dunno if you've heard about this, Dean. It basically hid the dirt. A simulated cleaning robot was tasked with removing mess, and it learned to just hide the virtual dirt under a rug rather than clean it. From its perspective: job done, reward achieved. It was that classic out-of-sight, out-of-mind hack. And whilst these are funny, on larger scales we've got to start thinking, you know, that this is a sophisticated tool that looks for solutions. It doesn't mean to say that it's gonna find a solution in a way that's right, ethical, sensible, life-preserving, et cetera, et cetera. My next question to you is gonna be about the pace of agentic AI. I think it's worth asking you this question, because we are now seeing the rise of agentic AI systems that can act, plan, and take decisions on our behalf. You know, from a legal standpoint, how far are we from being able to assign responsibility when an AI agent acts autonomously?

Dean:

Well, I think, I think that is, that almost harks back to something I was talking about earlier. Um, so the rise of agen AI is, is, is certainly of huge significance to take an example for the legal profession. So, so. The, the AI Act would seek to address those issues because it would classify the particular system as one of the, say, prohibited, high risk, medium risk, low risk. So it would seek to deal with the challenges in that way. Um. The UK adopting its white paper principles, wouldn't, wouldn't do the same. So we've, we've already seen, and I mentioned, uh, the legal profession earlier, but the Law commission has actually suggested granting legal personality to some AI systems. So a law, a, an AI law firm, if you like, which is. So, and, and I think it's interesting before anyone thinks, well, that's too much of a grant, that's too much of an abdicating of re abdication of responsibility. Mm-hmm. I think what we must remember is, uh, something which I've certainly referred to, and I think it's important that we are faithful to. You can't ignore something as significant as the AI systems. So it, the Law Commission's attempt to try and embrace agen ai, I think is to be welcomed because at least it's seeking to engage with the process. It's not sticking its head in the sand and saying, this isn't. Anything to do with us and we, we are just not interested in where it's taking it. So what it's seeking to do by, well, in my view, by recognizing agent AI and AI personalities, uh, at legal personalities in AI systems, it is actually understand where it can take you. Um, for example, in. The extent to which it's safe to assign liability in the absence of human intent. That's the key. The extent to which you can assign liability in the absence of human intent. That's what the law commission is grappling with, but I would commend that to a lot of other industries because as I said before, there's an awful lot of tasks. Back to my timeline, example. 
If you are asking AI to create a timeline from a very complicated set of documents, you're asking it to do a task, but you're not assigning liability to it. There's potential bias in what it regards as important in the timeline, but because it's an aide-mémoire, a work in progress, you can be checking it. So the AI systems, and the attempt by the Law Commission to assign legal personality in those circumstances, are definitely, for me, to be welcomed, because there are a lot of areas where you actually stop short of assigning liability. Where you are assigning liability, or have the potential to do so, then clearly that's of concern. The judges are also issuing judicial guidance about when AI is appropriate, when it can be used and when it can't. So I welcome the fact that there isn't this head-in-the-sand, let's-ignore-AI attitude in the legal profession, which is the one I know best. And there are an awful lot of things in industry and commerce that AI can do extremely efficiently without threatening safety, fairness, or employment. It's important that we embrace that, and very important that we don't set our face against agentic AI, because I think there's a huge place for it provided, and I come back to my point, we are not assigning liability in the absence of human intent.

Yoyo:

Perfect. And look, I'm going to take this opportunity to hit you up for a little bit of crypto. You have also worked on some of the most notable cases in crypto litigation and blockchain. A lot of people don't really know about blockchain, and some have very, very little knowledge. You acted for an overseas client, didn't you, in the FTX insolvency?

Dean:

I acted for the regulator, actually, The Bahamas regulator. Yeah.

Yoyo:

Fantastic. And I think the work you've done shouldn't be underestimated, along with what you're learning in terms of biometric and technology regulatory matters. So I'd love for you to give us a high-level overview of the trends you're seeing in cryptocurrency litigation right now. What do people need to know?

Dean:

Well, I think those are two different questions. The trends I'm seeing in cryptocurrency generally are different from the cryptocurrency litigation aspects, and the reason I make that distinction is that litigation tends to be historical. What we've had in the crypto world historically, and that's why a lot of the litigation I'm currently involved in concerns it, has been the idea that cryptocurrency is for the bad guys: it's a fraudster's charter, it's something which, to quote the phrase, a lot of people won't touch. I think that was certainly accurate in terms of what the litigation has been about, and to a degree continues to be about. What is much more interesting and much more positive, as far as I'm concerned, is that as well as the litigation I do a lot of advisory work on cryptocurrency, and in particular on digital assets and tokenisation. What I'm seeing more and more, and I'll use this phrase, which I use a lot, is democratisation: tokenisation brings more people into areas they would otherwise have been outsiders to, and I'm a big, big fan of cryptocurrency in that respect, among others. We're also seeing, particularly since the sea change in the United States towards stablecoins in particular, and again this is probably an area where I feel the UK government is missing a pretty large trick, suggestions that the stablecoin market in the next few years will be in the trillions; 3.3 trillion, I think, was the number, while the stablecoin market in the UK is about 580,000. So we are well adrift. So in cryptocurrency litigation I'm still seeing, and doing, the work of recovering money for people the bad guys have taken it from.
But I'm also doing a lot of advisory work on the taxonomy of tokens and on how digital assets can aid and democratise access to significant funds. I'm advising platforms on how, through tokenisation, they can access investors, whether professional or otherwise. So I'm seeing a real freeing up of the financial and fintech markets because of the digital assets revolution that's taken place. There's been an awful lot of challenge to cryptocurrency, and, if I may say respectfully, an awful lot of it based on vested interests: it suits a lot of people to say it's a fraudster's charter. I'm not going to sit here and say there aren't bad guys in cryptocurrency, but I always make this point, and it's part of my work on the blockchain: the blockchain in and of itself mustn't be confused with cryptocurrency. Cryptocurrency is put on the blockchain, but the value and the use of the blockchain is much, much greater; in art and music, for example, to show the derivation of a work and when it was created, and the like. So when you are dealing with cryptocurrency, because the blockchain is immutable, you actually have, to a degree and at a level, some form of evidential audit trail, which we lawyers love. That does not exist if you're passing over tens of thousands of pounds or dollars of cash in a Sainsbury's car park in a plastic bag. So I think it would be a really big mistake for people, perhaps similarly to AI, to set their face against cryptocurrency and just dismiss it as a fraudster's charter. There are issues with it.
And there's another example, which is interesting but probably familiar by now: the European Union is bringing in its MiCA regulations, which again is a very stringent regime of regulation of crypto and digital assets. We in the UK are much more case by case. We have introduced a third category of personal property, the digital asset: so there are real-world assets, there are things in action, and now there's this third thing, which tends to be much more crypto-focused. But we aren't going to the same level of regulation as MiCA in Europe. So again, a similar pattern, and which approach will prevail will be of huge interest. The long answer to your question, though, is that cryptocurrency litigation will change character. It will become much more financially based, in terms of disputes probably among shareholders and investors in crypto ventures, or ventures funded by crypto. We'll still see some of the bad guys stealing wallets, but in time that will be less prevalent, probably because crypto will become much more widespread. What we also need to do, though, is focus on the real positives that crypto and digital assets are bringing us: the inherent usefulness of stablecoins, which I'm a huge, huge fan of for their ability to democratise and be flexible in financial arrangements, and the ability of people to recognise that crypto in and of itself is not a bad thing. And we mustn't forget that we are privileged people in the West here: we enjoy a hard currency and we enjoy banking stability.
For other parts of the world, crypto is the way you can actually hold a financial interest which is not threatened by a soft currency or banking irregularities. So we mustn't forget that crypto has a worldwide role to play, and not dismiss it just because we don't see it as fitting into our ecosystem.
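[Editor's note: the "evidential audit trail" Dean describes comes from each block committing to a hash of the block before it. A minimal, hypothetical sketch in Python, using only standard-library hashing; the record contents and function names are illustrative, not any real blockchain's API:]

```python
import hashlib
import json

def block_hash(block):
    # Deterministically hash a block's contents (sorted keys for stable output).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block records the hash of the previous one, forming the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    # Recompute every link; editing any past block breaks all later links.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "A pays B 5 units")
append_block(ledger, "B pays C 2 units")
assert verify(ledger)          # untouched chain checks out

ledger[0]["data"] = "A pays B 500 units"   # retroactive tampering
assert not verify(ledger)      # tampering is immediately detectable
```

This is why, as Dean notes, the ledger serves as evidence: unlike cash in a plastic bag, any attempt to rewrite history is detectable by anyone holding a later block's hash.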

Yoyo:

You've led me beautifully into China, actually, because we know that China is not broadly embracing decentralised cryptocurrencies like Bitcoin and Ethereum, which are very well known. It's actually pushing its own central bank digital currency and experimenting with paying employees, especially public sector employees, in it. And let's look here, because the audience is broadly a security-minded, future-looking audience: is China going for dominance here, in terms of shifting the balance of where world currency sits in the future?

Dean:

Well, I'm actually not going to be critical of China here. I think China is doing what every other big nation will do as well. I will be astonished if there's not a digital dollar in short order, astonished if there's not a digital euro in short order, and astonished if there's not a digital pound in short order. Does it help a regime which has among its objectives, and it might not agree with this, unseating the dollar from the centre of world currencies, which is not something China is hugely in love with? Does its own state-created digital currency help with that? Absolutely. Do I think it's more important than that? Absolutely, because what we have to remember is that digital currencies aren't going anywhere. The old order will want them to go somewhere, but that won't happen. Every single developed country or group of countries will need some form of digitisation of its currency, because otherwise it will be so far behind the times. The ability to transact with speed, the ability to cut out a huge amount of churn, is all very attractive to entrepreneurs and the like, and I for one welcome that. So in answer to your question: is China doing things for its own motives? Yes, and I don't blame them for that. Do I think it's doing something which others won't do, or are not about to do? Definitely not. There are going to be digital currencies across the world.

Yoyo:

Now, as we wrap up, I can't let you go without noting that you are representing the alleged victims in potential civil claims against Harrods in relation to historic sexual abuse claims tied to Mohamed Al Fayed. That's a hugely, hugely important case, and you must have your own views about being involved with such a public case. But that's not all you're involved with, is it? Your compass north is headed towards sport, right?

Dean:

Yes. So from the historical perspective of being involved in the abuse perpetrated, we say, by Mohamed Al Fayed and facilitated by Harrods, it's led me and some colleagues on to look at how young people whose trust is placed in sports coaches have that trust betrayed, how they are abused. When you drop your son or daughter off at the local swimming club or the local gymnastics club, or whatever it might be, you expect them to be doing something which is enjoyable and fulfilling to them. But sadly, all too often, due, we say, to a lack of regulation of the individuals, who are licensed through regulated and governed clubs, abuse takes place and has taken place. In fact, there are some horror stories among the people we represent. So, abuse in sport. I'm also representing the family of a young female footballer who took her own life, and that is a matter for determination, so I'm not saying the abuse has been proved in any way, shape or form. But abuse in sport generally is something we are looking at very closely, because the safe spaces that women are entitled to when they go to work at Harrods are as important as the safe spaces that a child or a young adult is entitled to when they go to do their gymnastics training or their archery training or their swimming training, or whatever it might be. Those safe spaces are something which we as a society need to extend, as opposed to turning a blind eye. My concern is that it's all too difficult and all too embarrassing, and we'd rather sweep it under the carpet, and I think that's what's happened. Our job is to remove it from under the carpet, bring it to the public eye, and hold accountable the people who have abused their position. Abuse in sport is a very long-term project, because sadly there's a lot of it.
And it will only begin to get better if and when people bring it to the public imagination. Now, sports bodies will say, and I'm not saying they're wrong or right, that they've done a huge amount in terms of developing safeguarding. I think that's fair, but the question will be: is it enough, and will it remain enough as we continue through? Because if you have a child who is either talented or just wants to go and enjoy a swim, or do gymnastics or whatever, they are entitled to be in a safe space. So this process that we are engaged in, and very much engaged in at the moment, with a number of clients already and growing by the day, is of huge importance. It's an aspect of something which transcends sport generally. Sport needs to look, and has to look, at itself. Sport gives us so much, and we are rightly proud of how much access to sport we provide, but it also has responsibilities, which must be observed.

Yoyo:

Dean Armstrong, I'm lost for words, and that's very, very rare. I can only thank you for giving us your time, and I'm certainly speaking on behalf of everybody listening: thank you so much for your service and all the good work that you're doing. Thank you for joining us on the Security Circle podcast.

Dean:

Thank you very much.