A Class Act - Navigating the AI Landscape in 2026

S2E7: Navigating AI Ethics in the Real World: Red Lines & Risk Tiers in Practice

Michael Young MBN Solutions Data & Leadership Season 2 Episode 7


AI is moving fast. Most organisations are trying to keep up.

But the real question isn’t “should we use it?” anymore, it’s whether you can deploy it without creating risks you can’t explain later.

In this episode, Michael Young is joined by Jeff Watkins (former CTO, fractional Chief AI Officer / advisor) to get specific about what “responsible AI” looks like in practice.

Michael asks Jeff:

  • What does responsible AI look like day-to-day, as a way teams make decisions rather than a set of statements?
  • Where do you draw red lines (what you won’t do with AI) and who has the authority to stop a deployment?
  • How do you triage Gen AI use cases using a simple risk-tier approach (green/amber/red) so governance supports delivery?
  • What does evidence actually look like and how do you prove controls are working in real operations?
  • When AI causes harm, who owns the decision and what does accountability mean in practice?

Guest information

Jeff Watkins is a CTO and AI strategist with 25+ years of experience leading digital and AI transformation. Former CTO at CreateFuture, he led AI Enablement and now advises boards and delivery teams on secure, responsible GenAI, where AI, security, governance, and culture collide.

Connect with Jeff Watkins on LinkedIn: 

https://www.linkedin.com/in/jeff-watkins-cto/

About Your Host: 

Michael Young is the Managing Director of MBN Solutions and host of A Class Act: Conversations in Data & Leadership. With nearly two decades of experience helping organisations build high-performing data, analytics, and AI teams, he brings frontline insight into what makes exceptional talent and leadership.

Please note: The views expressed by the guest in this episode are their own and do not reflect those of their current or former employers.

Subscribe & Follow: Enjoying the series? Don’t miss an episode. Follow A Class Act wherever you get your podcasts and leave us a review to help more listeners discover these conversations in data and leadership.

Connect with MBN Solutions:

If you’re hiring for data or AI leadership roles, scaling AI capability, or rethinking how talent supports your AI strategy, the team at MBN partners with organisations to build data and AI teams that actually deliver.

MBN Solutions on LinkedIn – Explore industry updates, groundbreaking projects, and the latest in data and analytics talent. Join the conversation and discover how MBN Solutions is shaping the future of data leadership.

Visit MBN Solutions website at www.mbnsolutions.com

Well, welcome back to A Class Act season two, Navigating the AI Landscape. If you're a leader right now, you're probably feeling the tension. AI is moving faster than any technology we've ever seen before. Every week there's a new model, a new agent, a new promise, and the pressure is to act. Almost every organisation now says the same thing: we want to use AI responsibly, we care about ethics, we want to do it the right way. But when you scratch beneath the surface, many leaders struggle to answer a simple question: what does responsible AI actually mean when real commercial decisions are on the line? That's what today's episode is all about. I'm joined by Jeff Watkins, fractional Chief AI Officer and CTO, AI advisor, keynote speaker, and one of the UK's most thoughtful voices in AI ethics, security and adoption. Jeff has spent more than 25 years building and leading technology at scale, across financial services, healthcare, government and the private sector. By the end of this episode, you'll understand why AI principles matter more than ever, how to move from ethical intent to real-world execution, and how leaders can set guiding principles that enable innovation rather than slow it down. If you're watching on YouTube, make sure to subscribe. Jeff, really proud to have you here. Welcome to A Class Act.

Thank you very much. I should hire you as my PR man after that wonderful introduction.

Not at all. Obviously we've just started talking now and again and had a few conversations over LinkedIn and stuff, so it's been good to get to know you, and I'm really excited about this one.

Me too.

So, Jeff, for listeners who may not know your background, could you give us a quick snapshot of your journey and explain how you came to focus so heavily on AI ethics, governance and responsible adoption?

Sure. I'm a lifetime technologist. I started coding when I was six years old on something called the Commodore VIC-20, and I don't know how many of your listenership will be old enough to remember such a machine. It had four kilobytes of RAM; it's pretty primitive. And I got the bug. I'm one of those very few people in life who are fortunate enough to know from the very beginning what they wanted to do with their lives and actually get to do it. I'm sure plenty of people wanted to be firefighters, astronauts, surgeons, and never got to do it, whereas I decided from a very young age that I wanted to work with technology and managed to do it, which is great. I went through the traditional route of going to university, back when it was still affordable to do so, doing computer science, and then moved into tech consultancy for a company called BJSS in 1999. BJSS was only 23 people then, grew to be several thousand people, and recently sold to CGI. I moved between different consultancies, becoming Chief Technology Officer, then Chief Product and Technology Officer, then Chief Technology Officer at different places. But the biggest thing, and the answer to the second part of the question of why I've come to focus so heavily on things like ethics and governance and responsible adoption, is that I've been through the dot-com bubble bursting and the boom, and I've seen the shift into the cloud. Along the way, well, I'm a technologist, so I cared about the technology, and that's really all I cared about for the first decade-plus of my career.
And then I realised how much of an effect technology has on people's lives as it gets adopted more, and that can't be ignored anymore. Back when my first projects were writing banking interchange systems that shipped money between banks, they didn't really affect end users that much. But now the high street has completely changed, bank branches have all but closed down, and how we interact and how we build relationships has completely, fundamentally changed. I remember maybe 10 or 15 years ago, if you said you met somebody online, you were a bit weird; that was a weird thing to do. Now something like 90% of people meet online. So technology, the web and mobile apps have fundamentally changed our lives, and AI, or at least this wave of AI, because AI has been going for a very long time, since the 1950s, could do the same. The reason this is unlike something like blockchain, which everybody was banging on about for a few years, is that this is a cultural zeitgeist. People now have AI on their phone, on their TV, on their fridge, on their kettle. It's just a big cultural change. Looking at how the web has changed our interactions, I think AI has even more power to change how we live, so we need to do it in a way that's human first and not profits first. It needs to support and enhance the human condition, for the many, not for the few. That's why we need to think about how we ethically implement it. When you think about the threats of AI harms, most people jump to the existential one, the Terminator story. I do a lot of work on AI safety, and we've been doing courses with BlueDot Impact, which are great if you ever get a chance to do one. But that's not really the problem here, in my opinion, at least not the closest problem to us. The closest problem is people misusing AI, either accidentally or on purpose, and especially larger technology organisations. That's why I'm so invested in this: things could go really badly if we don't get it right, and we need voices in this space now.

We certainly do, a hundred percent. And even from my point of view, working in the data talent market and AI, just over the past month or two I'm starting to hear more about AI ethics, guardrails, safety, governance. It's starting to come into all conversations, so it's definitely an interesting time. But you have lived through multiple technology waves. What would you say it is about this moment in AI that makes principles and ethics feel less like a nice-to-have and more like a leadership imperative?

So, as I said, I think it's the biggest cultural zeitgeist since the web. But importantly, I left a little bit of narrative out there: if you look at the industrial revolutions, they're getting closer together. The argument now is that we're in another industrial revolution, and the previous one only started in the mid-nineties, whereas earlier ones lasted a hundred-plus years. So this is compressing time, and it's compressing adoption. And because everybody will at some point be forced to adopt AI in some way, shape or form, or face extinction, it's important that they do it well.
And that they do it in ways that support humans, as in people actually want to use their services, and in line with ever-moving regulation like the EU AI Act. Thanks to the Brussels effect, that will start to spread. Now, the EU AI Act is actually quite different to GDPR in many ways. GDPR is a lot of management and governance, because it's about data handling, governance and management, whereas the AI Act requires a lot more thinking at the technical layer. I was discussing this with somebody recently and they clued me into this, and I thought, all right, that actually makes sense: it's not just GDPR v2 for AI, it's a bit more complex than that. So if you don't have a good handle on this and you can't adopt it responsibly and ethically, it's probably worse than not adopting it at all, and you might be consigned to the scrap heap of organisational history. I think this is an existential threat to organisations, not because of the Terminator problems, but because the people who don't do it well will just be forced out of business, in my opinion at least.

Yeah, and I totally agree. Almost every organisation says it wants to use AI responsibly. In your experience, why do AI principles so often remain slideware rather than shaping real decisions?

It's like anything when you have values or a vision. There are kind of two types of organisations. There are the ones who just go, oh, we need values and a vision, because organisations have values and visions. And there are those who live it and never break character, like method actors who never break character all the way through filming. Genuine values-driven and vision-driven organisations never break character, and they tend to be all or nothing: they're either very successful, or unfortunately their values and vision just don't really matter to the world and they die out. They very much live or die by them. Principles are the same kind of thing. They can be seen as words, and words are very cheap; you can put words out there, but trade-offs in organisations are expensive. So if principles aren't properly adopted, they have no teeth. They're written as aspirations, not constraints. You say you'd like it to be fair and equitable, but you don't say in any way how you're going to do that, how you're going to make those decisions a reality. If nothing is actually a constraint, then nothing changes; you can't expect it to change. And quite often there's nobody owning AI and being accountable for it, not only for what the AI does, but also for how it's implemented. If it's just some principles on a page, it's nobody's job, and people will go, well, that doesn't really apply to me, I'll just do what I want. Quite often incentives then fight against the principles. People are measured against certain things, Goodhart's law, and if those incentives outweigh the benefits of the principles, the principles will just get left in the dust. It's the same with other non-revenue-generating things like cybersecurity.
If the incentive structure is wrong, that's how financial crashes happen: people will act to meet those numbers, and then bad things happen. So I think the difference is this: principles fail when they're treated as values alone, and they work when they're treated as requirements. Each principle needs to be turned into a rule, a test, an owner and a gate, rather than just being an aspirational value.
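To make that "rule, a test, an owner and a gate" formulation concrete, here's a minimal Python sketch. The principle, the PII check, the owner address and the gate are all hypothetical illustrations of the shape of the idea, not a framework discussed in the episode.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Principle:
    name: str                     # the aspiration, e.g. "Privacy"
    rule: str                     # the concrete, non-negotiable rule behind it
    owner: str                    # a named, accountable person
    test: Callable[[dict], bool]  # an observable, falsifiable check on a release

# Hypothetical test: does this release touch client PII anywhere?
def no_client_pii(release: dict) -> bool:
    return not release.get("uses_client_pii", False)

PRINCIPLES = [
    Principle(
        name="Privacy",
        rule="We never use our clients' personally identifiable information.",
        owner="head-of-data@example.com",  # invented owner for the sketch
        test=no_client_pii,
    ),
]

def deployment_gate(release: dict) -> bool:
    """The gate: a release ships only if every principle's test passes."""
    failures = [p for p in PRINCIPLES if not p.test(release)]
    for p in failures:
        print(f"BLOCKED ({p.name}, owner {p.owner}): {p.rule}")
    return not failures

# This release would be blocked, with no argument to be had.
deployment_gate({"name": "churn-model-v2", "uses_client_pii": True})
```

The point is the shape: the aspiration carries a falsifiable test and a named owner, so the gate decision is mechanical rather than a debate.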
Yeah, and I love that. Totally agree. I've worked in businesses and had my own business, and you know how cheap it can be when you have these vision days and values days. If you're not really living them, practising them and moving them forward, it can be a waste of time. And it's a different level now. So when leaders come to you asking for help defining AI principles, what's usually driving the conversation? Is it regulation, risk, reputation?

This is what we found when we first started this work, when I was at CreateFuture (I'm now doing my own independent AI consultancy, which we'll probably shout out at some point). We did our first piece of work with a very well-known name in early 2023, and it was very successful, and we thought everybody would want this. But it turns out that organisations in the UK, especially small to medium-sized ones, and even large organisations in early 2023, were much further left in this journey. They had no idea; they had very low AI literacy. People were worried about risk, regulation, reputation, all those kinds of things, but there was also a big fear of missing out. A lot of it was fear-driven, because they didn't understand, and they wanted legitimacy. They wanted to feel like, can we actually adopt AI in a way that feels authentic, that people will adopt and trust, that won't put us at risk on the reputation and regulation side? They didn't want the blowback. And a lot of them lacked an understanding of AI, very low levels of literacy, which you need before you can start having real, serious conversations about AI principles, AI policies and adoption. The main thing is that if they don't understand it, it's like saying, let's play a particular board game, but you've never seen the board and you don't know the rules. You could just say, sure, but I don't know what I'm doing. Once you know the rules of engagement, you may not be the best strategist, you may not be the ultimate expert, but at least you know the rules, you know the game, you know the board. And that's really important, because organisations don't just want ethics theatre. They want to know: how do we capture the upside of AI, because all the boards want this, without losing control of the important things that matter to them, the trust, the reputation and so on.

It's very interesting. You sometimes wonder how many organisations actually have someone owning this in-house as their full-time job. I've spoken to quite a lot of people lately who've touched on it, but maybe done it as part of another ten jobs. I think it's getting to a stage now where it's going to get much more real. From a running-my-recruitment-business point of view, I was actually thinking about hiring somebody who specialises in this subject alone, because I think it's going to get much bigger.

Sure, yeah, a hundred percent.

And you've talked about helping organisations move from AI hype to secure, production-grade delivery. How do strong guiding principles actually enable better outcomes rather than slowing teams down?

So, one of the things that slows teams down: nowadays, thanks partly to AI tooling, but also to cloud-native platforms, SaaS platforms and low-code platforms, building software is actually not that hard, the code itself. The hard bit is building the right thing and building it right. What good AI principles do is remove uncertainty, because uncertainty breeds doubt, breeds wrong decisions, means regret work. Having nailed-down ethics and principles that lead into policy means they end up being decision accelerators. They remove ambiguity; they let teams move forward with a lot more confidence. Instead of having debates about this or that, if the detail of the principles says we never use our clients' personally identifiable information in this way, then there's no argument to be had. It's been agreed at exec and board level; we just get on, we don't do that. That reduces rework, and it stops people getting stuck at the last mile, you know, when the security, compliance and governance people look at your system before go-live, because you haven't engaged them soon enough, and they say, I'm not sure we can put this live. It stops you falling at that hurdle. Clear constraints help people move faster, and they help people be more creative, because working within a universe of no constraints, you can do anything and it becomes bewildering. They also help standardise the architecture underneath it all, not just the enterprise architecture but the technical architecture. So principles do not slow delivery; it's quite the opposite. It feels paradoxical, because you'd think they would, but it's uncertainty that slows delivery, and principles replace uncertainty with reusable patterns of practice.

Yeah, totally. And when you help an organisation define its AI principles, what are the core non-negotiables that should always be present?

Yeah. So I did some primary research on this across over 50 different organisations internationally, public and private sector: which principles does everybody really care about? There were five, and this is in order: transparency, accountability, fairness, privacy and security. I'll go over those in order. It feels like maybe security should be higher, but I think transparency is the primary key, because people need to know when AI is being used, for what purposes, what its limits are, and how secure all the other pieces around it are. It's like when you're adopting agile: transparency is such a primary key to it. It's really, really important, and it was by far the most important principle.
Then there's accountability, because there must be a clear owner for every AI system and decision pathway, and that's also really important to people, because it leads into, well, what happens with fairness, which includes things like harm reduction. How do we make sure we're not disadvantaging people? And if you've got fairness in place, how do you put redress in place if something goes wrong? Then privacy and security are a bit more self-explanatory, although privacy has two different meanings, and I was discussing this with somebody about a year or so ago. In cybersecurity it means keeping people's stuff confidential, the CIA triad. But in many other senses it's the right to your own personal privacy, which covers things like the right to data portability, and that's actually quite hard with AI. If your data has gone into a model, you can't just expunge it the way you would from a database; picking learned things out of a model is actually quite difficult. A few others were really important as well: safety and reliability, making it fit for purpose, and contestability, meaning appeal, override and recourse. These are the ones I think you should start with, and you can combine some of them. I'd usually say to organisations, don't try to have a list of 20 principles, because when everything's important, nothing is. Try to keep it to five, but maybe put some together: privacy and security, say, or fairness and contestability. You need to do a little bit of wordsmithing here so that everybody can pick it up and go, right, I understand what we're moving towards, and these principles are our top ones. They're not necessarily all of our rules and guardrails, but they're the ones we should absolutely not compromise on in any way.

Yeah, absolutely. Thanks for that. And how do leaders balance speed versus safety, innovation versus control, commercial pressure versus ethical responsibility, without ending up with principles that are either meaningless or paralysing?

So, I think you need to turn everything into a risk-based operating model, so it's not just a set of aspirational words. You can use bounded speed: it's not move fast versus be safe, it's defining where you can move quickly. The EU AI Act outlines something similar to this. Think of it almost like a traffic-light model. There are low-risk internal use cases, or even some low-risk external use cases: say a recommendation system used internally, internal drafting, summarisation of non-sensitive content. Those are green: lightweight controls, and you can just move fast with them. You've got very simple controls in place.
Then there's the amber stuff: something a little more customer-facing, like customer-facing chatbots, where the use case has more risk involved, or decision support systems. There you need to think harder, because you can't just go fast and break things with decision support: if you're turning people down for, I don't know, loans or other financial things, that could be really damaging. Then you've got the red stuff. The EU AI Act has one of these layers, the things you must never do. But there's also stuff in the red zone you might do, where you'd want to strongly consider not doing it, or at least putting loads of controls around it: using AI for hiring, medical decisioning, anything with safeguarding. If you're building a chatbot for students under the age of 18, there's a strong element of safeguarding, so you need to really think about how that's implemented, and there has to be lots of formal sign-off. And the question is, do you do this with AI at all, frankly? Maybe in the future; you might just be a little ahead of your time. The other thing is to start pre-approving patterns, not projects. If you're going from first principles every single time, of course it's going to slow you down. But once you've got the patterns, the primitives, the building blocks, then if use case B is very similar to use case A, there's no extra risk, and use case A has already passed and is effectively in production, use case B should be simple; you shouldn't have to go through the whole rigmarole again. It's like go-live: if you're putting minor changes live, you wouldn't necessarily go through the whole go-live process again; you should be able to do continuous delivery. Same kind of thing with AI patterns: think about how you reuse these things rather than starting from first principles. But also align everybody's incentives around this, making sure all the areas of the delivery life cycle are singing from the same hymn sheet, and have very, very fast escalation paths. Really, you have to be quite strict on outcomes and harms, but flexible on methods. Principles should create the guardrails that let teams deliver much faster, not become a maze they have to escape like rats.
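As a rough sketch of the traffic-light triage and the "patterns, not projects" reuse Jeff describes, here's a minimal Python illustration. The risk signals, flags and thresholds are invented for the example; a real tiering model would come from your own risk appetite and the EU AI Act's categories.

```python
from enum import Enum

class Tier(Enum):
    GREEN = "lightweight controls, move fast"
    AMBER = "human oversight, testing and monitoring before shipping"
    RED = "formal sign-off, or don't do it at all"

# Invented risk signals for the sketch.
RED_FLAGS = {"hiring", "medical_decisioning", "safeguarding_minors"}
AMBER_FLAGS = {"customer_facing", "decision_support", "personal_data"}

def triage(use_case: str, flags: set[str], approved: dict[str, Tier]) -> Tier:
    # Pre-approved patterns, not projects: a use case matching a pattern
    # that has already passed inherits its tier with no fresh rigmarole.
    if use_case in approved:
        return approved[use_case]
    if flags & RED_FLAGS:
        return Tier.RED
    if flags & AMBER_FLAGS:
        return Tier.AMBER
    return Tier.GREEN

approved = {"internal-summarisation": Tier.GREEN}  # passed once, reused
print(triage("loan-decision-support", {"decision_support"}, approved))  # Tier.AMBER
print(triage("internal-summarisation", set(), approved))               # Tier.GREEN
```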
Yeah, fascinating. This will be really good for a lot of the people listening to the show. Are there any AI principles you don't trust when you see them written down, because they sound good but are vague, performative or impossible to operationalise?

The devil's in the detail. There are a few principle-shaped sentences that read really nicely and do almost nothing. "We will use AI ethically." Whose ethics? Measured how? First of all, ethics is never a checklist. Having done AI ethics certifications and courses, one of the first things they teach is that you can't just have a checklist of AI ethics. Ethics are systems; they're complex, and they're also quite different between organisations, which is why some people wouldn't choose to work in certain industries. There are no universal, objective ethics really, which is frustrating. "We will be transparent." Okay, transparent to whom? To the board? To regulators? To users? "We will avoid bias and ensure fairness." Being entirely bias-free is actually impossible and impractical; you need to figure out which fairness and bias criteria you're going to try to minimise, and how to prove it. There are a few of these, but if you can't turn a principle into some kind of observable, falsifiable test, some kind of go/no-go, then it's probably just a slogan that looks really nice on the website. So I'm not against the words of any of these; I'm against them being unqualified. A principle has to have some kind of cost you're willing to pay, and it needs to be clear and explicit. Don't make it something really vague you can hide behind. Even if you do have a top-level, one-word principle of transparency, underneath it there needs to be some kind of roll-down where you actually qualify it.

Totally. And one of the hardest challenges is translating principles into behaviour for engineers, product teams and data scientists. What does good implementation actually look like at a team level?

I think that's an interesting one, because one thing I really take a lot of heart from is that when I first joined the industry, people in product engineering (we didn't even call it product engineering then) largely didn't understand ethics or care about it, because for the most part they weren't touching users that much. A good chunk of people nowadays are much more ethically tuned in than they were. So at the team level, first of all, make sure everyone understands the risk tiers, this traffic-light kind of system, so people know: we're working on something high risk here, so we need to really think about it, versus: this is the lowest-possible-risk internal use case, we don't need to worry too much, but we still have some guardrails. And build real guardrails, not just guidance: guardrails you can review against and say, we're not going to ship this. Also try to build against secure patterns and good patterns. If you're building lots of RAG systems, make sure you've got a pattern catalogue and reusable components that mean you don't have to think about it afresh every time. People should think about the consequences of the AI they're building, but really they want to think about the business value they're delivering, so make that simple by having primitives packaged up that they can reuse. Make sure there's ownership and oversight, and make sure people are involved. We talk about shift-left security, shift-left this, shift-left that: quality, ethics and security should all be shifted left. Make sure there's real human oversight, and then also think about how you monitor it all in production, because it's not a one-and-done thing in a delivery. Systems are living, breathing things, especially AI-based systems. When responsible AI is done well, you've put it into primitives and genuine guardrails and tests, and it should be as invisible as possible, because it's baked into your workflows, your platforms, your templates, your libraries, and teams don't have to remember all the principles; they're part of the system.

And could you share an example, anonymised if needed, where a clear ethical stance, a guiding principle, genuinely changed a technical or product decision?

Yeah, this one's about automation. We were having a good conversation about how much we wanted to automate certain tasks away and how much we wanted the human to stay in the loop. Given the sensitivity of what they were doing, there was a hard decision on this one, a very clear statement: we will not be replacing any people with AI, and all important decisioning will be done with a human in the loop. That shapes the solution: it means there's no agentic system automatically turning down a particular case. It's a hard line that in the future may be somewhat limiting, some might say. But then you think: this was a very busy team with a very busy workload, and the other bits of AI automation reduce 80% of that work, or a good chunk of it, which means they can make a better choice when it comes to the decisioning process; they don't have to rush it. Better systems, tools and documentation allow you to make better decisions, because some of the drudge work is stuff humans are actually not as good at, they're more inaccurate, and it sucks the air out of the room, when what they really need to do is make a quite sensitive decision based on evidence. So it's better for the people and better for the customer.
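A minimal sketch of the human-in-the-loop red line from Jeff's automation example: the AI does the drudge work and drafts a recommendation, but the recorded decision always carries a named human. All names and fields here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    suggestion: str      # what the model proposes, never a final decision
    evidence: list[str]  # the summarised drudge work the AI has done

def ai_prepare(case_id: str) -> Recommendation:
    # Stand-in for the 80% of work the AI absorbs: gathering and summarising.
    return Recommendation(case_id, "decline", ["document summary", "history check"])

def record_decision(rec: Recommendation, human_decision: str, reviewer: str) -> dict:
    # The red line: no agentic system auto-declines a case.
    return {
        "case": rec.case_id,
        "ai_suggestion": rec.suggestion,
        "decision": human_decision,  # the human's call, which may differ
        "decided_by": reviewer,      # named accountability, not "the model"
    }

rec = ai_prepare("case-1042")
print(record_decision(rec, "approve", "j.smith"))  # the human overrides the AI's suggestion
```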
Yeah, I totally agree. A great example, thanks. And you've led large-scale AI literacy and upskilling programmes. How important is AI literacy in making ethical principles stick, especially for non-technical leaders and boards?

I can't stress enough how incredibly important it is, because, going back to the board-game example, how do you govern, or set any kind of strategy in, a game you have no understanding of? It means some organisations just get the tech team to start using AI because they don't understand the risks; they want the benefits of AI without the groundwork, which is great up until an incident happens. And there are also lots of organisations that are far too fearful. I think in the UK we're quite a conservative bunch, and I don't mean that politically; I mean we're quite reserved. A lot of organisations, especially SMEs, are too fearful of it because they don't understand how to constrain the risks of using AI. They're so afraid of it that they're willing to forego the benefits of modern AI, and that could eventually destroy the business. In reality, a lack of AI literacy at board and exec level will usually lead to AI getting stuck in pilot hell and not delivering the business benefits expected. As with a lot of other things, the modelling has to come from the top down. Don't just leave AI to the tech people, because AI is a cultural phenomenon: it's not just the mechanisms of AI, it's the systems around it and the methods of use that are the cultural zeitgeist. The board and the exec absolutely need to understand this, get a hand on the tiller and start modelling good behaviour.

Fascinating times.

It is.

And sitting at the intersection of AI, security and governance, where do organisations most often underestimate risk when deploying generative and agentic AI?

Oh, this is a big question. I was recently doing a talk at a large security group, about AI security, specifically LLM security, and I was worried I'd be roundly whipped out of the room for telling them everything they already knew. I asked the room, about 200 people, how many had a good handle on LLM security, and not a single hand went up. So the security concerns are not well understood even by security professionals in this field, frankly; wider organisations, even less so. Then we start talking about agentic AI. Once AI breaks out of the chat window, the threats multiply. Agents have access to file systems, APIs, databases, payment services, all kinds of things, which is quite worrying. And the attacks against these systems are novel and sometimes not very intuitive. Sometimes it's not even attacks, it's just stuff going wrong, because soon it won't just be agents, it'll be flocks of agents, collaborative groups of agents, effectively organisations of agents, which will act in emergent ways, not even necessarily because of an attack. So people need to understand how to threat-model this quite urgently, I'd say, before they start playing with it too hard and do some very big damage to their organisation, or potentially ruin the business. One thing I do with people is help them understand how to threat-model these kinds of systems, or actually do the threat modelling with them, because it's very different. Traditional security still exists and still matters in these cases, but there's a whole new world of security concerns you need to think about how to model.

You get worried just listening to you talk like that, and I'm not even running a large-scale enterprise. You could spend hours on this topic alone.

There's a lot to it, but it would make far too long an episode.

Maybe another one, then. I'm looking forward to sharing this. When you're talking, you realise how important it is, because accountability is a huge issue. When an AI system causes harm or unintended consequences, who actually owns that decision in practice?

Oh, this is a philosophical question as much as it is a technical one. Under our current laws and understanding of what AI is and isn't, it can't be the AI.
Let's be really clear. There's a really good book called Robot Rules by Jacob Turner, and it talks about how there isn't a separate set of laws for horses. When a horse tramples somebody, the horse isn't legally responsible; a horse is an entity, but it's not a legally responsible entity accountable for its actions. So the real risk is not having an accountability structure inside your organisation, because the organisation will then be found responsible, that will find its way up to the CEO or somewhere like that, and it'll come as a surprise when something bad happens and the fingers start pointing. The best thing is to have accountable owners for AI. You can't just make the developers of the systems accountable, because while some of these AI risks might be security risks, some of them are the organisation taking an unacceptable risk in how it's leveraging AI, and you can't necessarily hold the developers responsible for that; they don't necessarily have the background in that particular field to understand what they've done. Sometimes it could be the third parties: imagine you're using OpenAI's APIs and those APIs do something really bad because of OpenAI's oversight of what they're doing. That could be on the third party. Or it could even be the end user, if they're misusing the AI; OpenAI a while ago found various unfriendly nation states using their services for bad purposes and blocked them. But as the implementer, mostly the accountability is going to be yours, frankly. So you should have named people, and you should be doing due diligence on suppliers to make sure you're not using dodgy ones. Where's the data going? What's the data being used for? Have you put enough protections in place to prevent misuse of your AI services? So really, who owns that decision? You get to choose, and it has to be somebody, probably more than one person, and those people should be speaking to legal, to user groups, to the security groups, et cetera. If the system can actually affect people, it needs a named person who can press the stop button. If you don't have that facility, you have a potentially big problem waiting for you. But the thing is, it shouldn't be hard to get there; that's one of the easier things to solve, to be honest with you.

Look, with regulation evolving rapidly, how should leaders think about designing AI principles that are future-proof, not just compliant today?

Yeah, I think people should be thinking about this now. Start with something reasonably robust. Principles are not laws. I'd start by looking at the EU AI Act to get a good understanding of it, and also take inspiration from organisations known to be using AI ethically. But principles are outcomes you want; they're not references to laws, because laws change. The EU AI Act will change; outcomes don't. People being able to contest decisions survives any micro-changes in legislation. And you should anchor all of these to risk, not technologies.
If anything, in all of this conversation, technology is the last thing you talk about. Imagine it's an alien technology: it doesn't matter whether it's on AWS or on Microsoft, it's all about management of risk. Design these things for auditability by default. Make sure you can actually test them and get the results of those tests. If you ask, where are our principles at, do you feel like they're working for us, can you prove we're sticking to them, and you can't answer any of those questions, you've got a big problem. And try to make them as evergreen as possible. Don't make them too specific about individual things that are likely to change. It's like core values: you shouldn't be changing your core values every week, and if you are, you don't have robust, evergreen core values. Compliance at any given time is just a snapshot in time, and the world moves. Future-proofing is a kind of capability: the ability to evidence your intent, test for harm and adapt safely as the rules and the technology both change underneath you.

I can imagine. Over the years you've sat in many boardrooms as well, Jeff. When you're in a boardroom, what signals tell you a leadership team genuinely understands AI ethics versus simply repeating the right language?

I think there are a few things. First, that named accountability exists and they know about it, and they understand the accountability challenges. They're asking for direct evidence; they understand enough to ask for it. They can name the yeses and the noes, the goes and no-goes, and say, no, we definitely don't do that. If they can answer those things emphatically, they probably have a reasonable grasp. And they talk in concrete scenarios, not abstractions: here's exactly what could go wrong in our business context, what we'd detect, and how we'd shut it down. When people talk in concrete terms, it probably means they've got a good handle on it. Then there's the performative stuff: "we're fully committed to" something, something. It's funny, it's like when people say "to be honest" and then lie to your face. When people say "we're fully committed to", sometimes you think, are you committed to this at all? Do you even care? Using policy as a crutch, just saying "well, our policy says such and such" without any further rational thought behind it, is usually a good sign that they're not really taking it seriously; it's quite performative. Ethics maturity is really visible when leaders can describe the trade-offs and when they've worked close to the lines, because it's at the lines that ethics gets hard. If you're close to a line, that's where it gets really interesting: where you choose fairness over speed or revenue, and how you actually prove that worked.

And if a board only asked one or two questions about AI principles, what would you say those questions should be?
If they only asked two, there are a couple that make their entire system of thinking reveal itself. The first is: where are our red lines, and who has the authority to stop a deployment if we hit one? If they can't answer that specifically, it's a bit performative. The second: people always quote W. Edwards Deming as saying "In God we trust; all others must bring data", and Deming never actually said that, which is amusing given it's about evidence and data. But it's the same here: in God we trust; all others, when it comes to ethics, must bring evidence. How do we evidence compliance with our principles in real operations? If they can't answer that, it's theatre. And if we could squeeze in a third: tell me the most likely way AI could harm our customers, our staff or our business in our context. If they have no idea, they haven't thought about it enough.
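On "all others must bring evidence": one way to make compliance evidenceable in real operations, sketched here with invented names, is an append-only audit record written every time a gate or red-line check runs, so that "can you prove we're sticking to our principles?" has a concrete answer.

```python
import json
import time

AUDIT_LOG = "ai_gate_audit.jsonl"  # hypothetical append-only evidence trail

def record_gate_run(use_case: str, tier: str, passed: bool,
                    failed_rules: list[str], owner: str) -> None:
    # Each run leaves a timestamped, attributable record: what was checked,
    # what failed, and the named person accountable for the call.
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "use_case": use_case,
        "tier": tier,
        "passed": passed,
        "failed_rules": failed_rules,
        "accountable_owner": owner,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_gate_run("support-chatbot", "amber", False,
                ["no unreviewed responses to minors"], "a.n.owner@example.com")
```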
Really good. We're coming close to the end here. So, for a leader listening who knows they should define AI principles but hasn't started yet, what is the first practical step they should take in the next 30 days?

Well, I'd get them to contact me and hire my services.

Yeah, of course, of course. I think that would be a good start.

But what would I suggest? Do a focused AI reality check: some workshops that end in effectively a one-page plan. First of all, you need somebody who's accountable, some kind of accountable sponsor. You also need to know what the heck is going on already, because wasn't it over 85% of organisations have people using generative AI for day-to-day work, while only 40% have actual licences? The great AI gap is massive. You need to understand what is actually happening in your organisation; I cannot stress that enough. It's like "bring out your dead", it's an amnesty: if you've been doing it, you're not going to get shot, within reason, you just need to tell us. There'll be fragmentation, and there'll be cost savings all over the place. Then pick some use cases you need to think about, and ideate on them: what are the trade-offs and the risks, what data is involved, who's affected, how much is it going to cost? Then start building out your principles, and, like values, battle-test them: which of these are we actually interested in? Take your top five principle statements in really plain English (or whatever your native language is), no jargon, with the non-negotiable rules behind them, the accountable people who own them, and your risk-tiering model, because you'll know in your own organisation what's low and what's high risk. Don't start with some fancy manifesto and words; start with your real use cases, the red lines and the owners, and publish your v0.1 paper. That sets you in motion.

Ah, great advice. Great advice. And finally, if you could leave leaders with one mindset shift about AI ethics, something that would genuinely change how they approach AI decisions, what would it be?

Stop treating AI ethics as "we do this to be nice", something that maybe makes us look good, like a B Corp badge, and start treating it as critical-system safety. AI ethics isn't some hand-wavy statement of intent; it's risk engineering for the human impact on your users, your staff and the world, frankly. When people start thinking of it as safety-critical, and absolutely critical for the business, rather than nice things we put on our webpage, they start thinking about it more in security terms: what could go wrong, and how will we know when it goes wrong? They start funding the actual testing, the monitoring, the incident response; they have really strict red lines. If it matters enough to you to use AI and automate something, it should matter enough to make it safe for its particular risk tier. Even the green tier needs to be secure. People always say, we don't need a Rolls-Royce solution. Fine: you can build a car without a walnut dash and all the trimmings, but you can't build a car nowadays without seat belts; it's not legal. So cut your cloth to your risk level, but even the green-light scenarios still need basic levels of security, safety and ethics behind them.

I've genuinely learned a lot here myself. It's a subject I don't know loads about, so I've done more listening today, and it's been really, really interesting for me. I know the people who listen to the podcast will get a lot from this as well. Hey, Jeff, just before we go, can you let the people listening or watching know where to find you and how they can get in touch?

Sure. You can find me on LinkedIn; I'm Jeff Watkins. For those listening, you won't know what I look like, but hopefully you'll see the thumbnails. I'm just getting started now, so by the time this podcast goes out I'll probably have fully launched a small business called North Star Intelligence. That's an AI consultancy, and it's where I'll be spending most of my time: fractional pieces, and it doesn't necessarily need to be huge statements of work. It'll be working with a number of different industries on AI strategy, AI ethics, AI policy, governance, and also the technical elements; I've just finished my AI master's degree, which was really fun. You can also find me on the Compromising Positions podcast, which I run with my partner, Lianne Potter. It's on all your normal podcast platforms; search for "Compromising Positions podcast", not just "compromising positions", especially not on a work laptop, unless you want the security team asking you some awkward questions. Reach out to me, listen to the podcast, follow my business when it launches on LinkedIn, and have a conversation. I look forward to it.

Ah, brilliant, Jeff. I look forward to continuing these conversations with you and to seeing where you take your business, hopefully hugely successful.
Okay, well, thanks very much. And yeah, thank you, and I'll speak to you soon.

Cheers, Jeff. Thanks. Bye-bye.