
Digitally Curious
Digitally Curious is a show all about the near-term future, with actionable advice from a range of global experts. Order the book that showcases these episodes at https://curious.click/order
Your host is leading Futurist and AI Expert Andrew Grill, a dynamic and visionary tech leader with over three decades of experience steering technology companies towards innovative success.
Known for his captivating global keynotes, Andrew offers practical and actionable advice, making him a trusted advisor at the board level for companies such as Vodafone, Adobe, DHL, Nike, Nestle, Bupa, Wella, Mars, Sanofi, Dell Technologies, and the NHS.
His new book, “Digitally Curious” (Wiley), delves into how technology intertwines with society’s fabric and provides actionable advice for any audience across a broad range of topics.
A former Global Managing Partner at IBM, five-time TEDx speaker, and someone who has performed more than 550 times on the world stage, he is no stranger to providing strategic advice to senior leaders across multiple industries.
Andrew’s unique blend of an engineering background, digital advocacy, and thought leadership positions him as a pivotal figure in shaping the future of technology.
Find out more about Andrew at actionablefuturist.com
Digitally Curious
S7 Episode 3: AI Guardrails: Navigating the Ethical Future of Technology
What happens when we prioritise innovation over ethics in AI development? For the 100th episode of the Digitally Curious Podcast, Kerry Sheehan, a machine learning specialist with a fascinating journey from journalism to AI policy, explores this critical question as she shares powerful insights on responsible AI implementation.
Kerry takes us on a compelling exploration of AI guardrails, comparing them to bowling alley bumpers that prevent technologies from causing harm. Her work with the British Standards Institute has helped establish frameworks rooted in fairness, transparency, and human oversight – creating what she calls "shared language for responsible development" without stifling innovation.
The conversation reveals profound insights about diversity in AI development teams. "If the teams building AI systems don't represent those that the end results will serve, it's not ethical," Kerry asserts. She compares bias to bad seasoning that ruins an otherwise excellent recipe, highlighting how diverse perspectives throughout the development lifecycle are essential for creating fair, beneficial systems.
Kerry's expertise shines as she discusses emerging ethical challenges in AI, from foundation models to synthetic data and agentic systems. She advocates for guardrails that function as supportive scaffolding rather than restrictive handcuffs – principle-driven frameworks with room for context that allow developers to be agile while maintaining ethical boundaries.
What makes this episode particularly valuable are the actionable takeaways: audit your existing AI systems for fairness, develop clear governance frameworks you could confidently explain to others, add ethical reviews to project boards, and include people with diverse lived experiences in your design meetings. These practical steps can help organisations build AI systems that truly work for everyone, not just the privileged few.
This is an important conversation about making AI work for humanity rather than against it. Kerry's perspective will transform how you think about responsible technology implementation in your organisation.
More information
Kerry on LinkedIn
Thanks for listening to Digitally Curious. You can buy the book that showcases these episodes at curious.click/order
Your Host is Actionable Futurist® Andrew Grill
For more on Andrew - what he speaks about and recent talks, please visit ActionableFuturist.com
Andrew's Social Channels
Andrew on LinkedIn
@AndrewGrill on Twitter
@Andrew.Grill on Instagram
Keynote speeches here
Order Digitally Curious
Speaker 1:Welcome to Digitally Curious, a podcast to help you navigate the future of AI and beyond. Your host is world-renowned futurist and author of Digitally Curious, Andrew Grill.

Speaker 2:Today on the show, I'm joined by Kerry Sheehan, an award-winning AI policy and strategy expert with extensive experience in responsible AI implementation, governance and ethics. As a qualified machine learning developer and former strategic advisor to the Alan Turing Institute, she brings a unique perspective to AI ethics and guardrails. Her work spans the UK government, the BSI AI Standards Group and various sectors, including health tech and business. Welcome, Kerry.

Speaker 3:Brilliant, thank you. Good to see you, Andrew, and great to be here.
Speaker 2:Now, your career path is fascinating: it's evolved from journalism and PR to becoming a recognised AI policy expert and machine learning specialist. Perhaps you could share the journey that led you from journalism to AI, ethics and guardrails.
Speaker 3:Yeah, I guess on the face of it it does sound a bit of an odd one. How do you go from the creative side to more the data side? The two often don't collide, although everyone has to have the data, the maths skills, the analytical skills now. So journalism and PR, where storytelling and public trust are everything. But I was always curious about what happens behind the scenes. And even just very quickly going back, it seems too many years now, they were looking at automating certain parts of the news. How do you automate email inboxes? Because they're just getting swamped day in, day out. And that has led many news outlets today to build fully-fledged data platforms and AI-led news. You see AI-led news reports now.
Speaker 3:So that's what sparked my interest in talking to some of the techies and some of the journalists more on the tech side of things, going back those years. It's more about, you know, who controls the narrative, who gets left behind, what's really going on behind the scenes in that world. And that led me to the data side and eventually machine learning. The turning point came when I realised these tools weren't just technical; they were actually shaping lives, whether that's the narratives we hear in the media, right down to decisions that are made about you, me, everyone else now, day in, day out. And that's when I kind of thought, oh, what is this all about? You know, ethics and governance. And it led me from there to go through some really steep learning curves, particularly getting picked up by a lady called Dame Wendy Hall, who I've got all the time in the world for. She picked me up and basically said: put your money where your mouth is. You need to go on a tough learning curve if you really want to be credible. And that's what I did.
Speaker 2:So for our non-technical listeners, what are AI guardrails and why are they important?
Speaker 3:Well, we often hear about AI guardrails, so I was thinking about how you best position this for a general lay person who isn't as much into the technical detail as some of us out there. I think, if you think of AI guardrails, they're like the bumpers in the bowling alley. You know, I still do use them sometimes, which can be quite fun. It's not glamorous in that sense, but they do stop things from going completely off track. Technically, they're limits and safeguards built into AI systems to keep them from causing harm, discriminating, or making decisions we can't understand or reverse. But I quite like that bowling alley analogy.
Speaker 2:The other one would be the bumper cars: when you're in a dodgem car, they help you not bump into other people. So I think in this brave new world of AI, regulations and standards are going to be so important. I'm an engineer, you're probably aware of that, so when we can pick our phone up when we arrive in France and it just works, that's because of standards, basically having standards that work: the GSM standards. The same is going to have to apply to AI, so that it works around the world, but also so it works for us, not against us. So, on your role as an advisor to the British Standards Institute's AI Standards Group, what principles have guided the development of global AI standards, and how do these standards serve as ethical guardrails for AI development?
Speaker 3:Yeah. First of all, I'd like to say a big well done to standards-making bodies like the BSI in the UK, which is really starting to lead the way on some of the AI standards development within the UK and across the world. It's not easy by any means to reach consensus across the world. People may or may not be aware that some of these standards don't stand up in a court of law, but they can be used in a court of law, and it really is about fostering good practice, and hopefully one day it will become best practice.
Speaker 3:So standards are really rooted in areas of AI such as fairness, accountability, transparency and, particularly, human oversight. There's a really good BSI standard, the AI management standard 42001; that's etched in my brain now, and that's because it really is a good checklist for making sure you are doing the right thing and are on the right side. Ultimately, standards provide a shared language for responsible development, which is what we're all trying to get to, not to slow down innovation, because that's not what anyone wants to do, but to try and give it some structure. As we said, what are the guardrails that need to be in place? It gives architects both creativity and building codes, I think, in a sense. So I am a keen advocate of them.
Speaker 2:Now, you spent some time at the Alan Turing Institute. Let's just step back. Alan Turing famously, back in 1950, wrote a paper whose opening question was: can machines think? Could you maybe give us a bit more colour about what the Alan Turing Institute is about and what your work there entailed, working on ethical considerations?
Speaker 3:It's the UK's national data and AI institute, obviously grounded in the principles of Alan Turing, who is widely believed to be the forefather of AI. It's a research and academic institute as well, and it's leading the way in, again, making sure governance and ethics are the priorities, but also going into some really interesting research areas that will help drive the UK and economies globally forwards, while making sure, again, we are staying on the right side. I was approached to be part of a strategic advisory group when the Alan Turing Institute was developing its latest five-year strategy a couple of years ago. I was very privileged to be able to do that with some very esteemed people, to provide insights and challenge ideas as to what the Alan Turing Institute should be focusing on going forwards. It's a very pivotal time for many of these organisations out there.
Speaker 3:As we know, the pace of AI has increased exponentially over the past couple of years, and it looks like it will continue to. So how do we really enable the ATI to be forward-thinking, support innovation and be a central hub for ethics and governance, whilst not losing pace with what's going on out there? And again, one of the hardest challenges that you'll always face in these positions is balancing bold innovation, because that's where we all want to go, with long-term strategy; there can be a bit of a dichotomy between the short-term 'let's get going' innovation and whether this is really in the longer-term interests of people. So, in a nutshell, I helped shape parts of the strategy around explainability and inclusion, working with others and supporting them to develop some of the challenge areas, which are still ongoing today.
Speaker 2:So just to pick up on one thing you mentioned there: you talked about explainability. Now, I learned about this term a couple of years ago in talking to some of my AI experts on the podcast. Again, for a non-technical audience, what is explainability and why is it so important?
Speaker 3:So it's really important that the end users of an AI system can understand enough of how that system has made decisions about them. So, for example, you apply for a mortgage or a banking service and you get turned down, and there's no real explanation for it. You think, oh, I was all right on one, two, three, maybe not four. And then you call them up, or you email them or go on live chat, and I've actually done this as an experiment: can you explain to me how you've made this decision? And they say, no, it's just our criteria, it's commercially sensitive, we won't tell you.
Speaker 3:So that is one end. People need to understand that, because how can we put in place effective challenge mechanisms and redress mechanisms for consumers, customers, people, citizens if we don't understand it? And the other side is obviously making sure that the tech developers are fully putting into those AI life cycles all the different touch points: what exactly is this algorithm doing? Now, that can get quite complicated for certain audiences when you're talking about national security and things, but at a general level, we need to understand how these decisions are being made, what algorithms are being used, whether they are the right ones, and whether the data was the right data. Was any bias or discrimination ultimately caused? And do you even know you're interacting with AI? That's going to be a big thing going forwards.
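To make Kerry's explainability point concrete, here is a minimal, purely illustrative sketch (not anything discussed on the show) of a loan-style decision where each feature's contribution to the outcome is surfaced back to the applicant. The feature names, data and linear model are hypothetical, and real systems would use far more rigorous attribution methods.

```python
# Hypothetical sketch: a linear loan-approval model whose per-feature
# contributions are reported back to the applicant as a crude explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_at_address", "missed_payments"]

# Toy, made-up training data: rows are applicants, columns match `features`.
X = np.array([[55, 0.2, 6, 0],
              [22, 0.6, 1, 3],
              [40, 0.3, 4, 1],
              [18, 0.7, 0, 4],
              [60, 0.1, 10, 0],
              [25, 0.5, 2, 2]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray):
    """Return the decision plus each feature's signed contribution to it."""
    contributions = model.coef_[0] * applicant      # crude linear attribution
    decision = int(model.predict(applicant.reshape(1, -1))[0])
    reasons = sorted(zip(features, contributions), key=lambda pair: pair[1])
    return decision, reasons

decision, reasons = explain(np.array([20, 0.65, 1, 3]))
print("approved" if decision else "declined")
for name, weight in reasons:
    print(f"  {name}: {weight:+.2f}")
```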
Speaker 2:So there's been a lot of work in the EU around the EU AI Act and I talk about this a lot because it's probably one of the first bits of regulation that's actually found its way from the decision room out into the real world. How would you characterise the current UK approach to AI regulation and how effective is this principles-based approach?
Speaker 3:So the UK's approach to AI regulation is based on the five AI principles. They are the principles of the previous government, and that has now moved forward into the current AI Opportunities Action Plan, which is really drilling down into those areas and stating how they will support innovation and economic growth, and how they will support uptake of AI across many, many sectors, if not the whole economy. The principles-based approach, and it's not my approach, but I have a good understanding of it, is really to support innovation. So when you're talking about regulation of AI in the UK, many regulators have been and are working up their approach to regulation, whether that's via calls for input, consultations or putting out guidance to industry, again to foster that good practice going forwards. It's always based on outcomes. So those principles say to companies, sectors and professions: we want you to use AI; use it responsibly and sensibly, use it for whatever your interests may be, be inclusive, but ultimately make sure you have fair, transparent, explainable, appropriate outcomes for people. So it's not too restrictive; it really is an enabler of innovation.
Speaker 2:So I love that you've mentioned the word actionable a few times, because it's in my brand; I call myself the Actionable Futurist, as everyone knows. So what frameworks are most effective when actually implementing ethical AI in organisations? How do you go about deploying ethical AI in an organisation that's been running for 10 or 15 years?
Speaker 3:If you think about ethical AI, some of it is an extension of basic good practice: governance, risk governance, good business decisions, and looking at the possible impacts or unintended consequences, whether that's on the business, the people who work for you and with you, the communities that you serve, or society at large, but with the added extension of having that real, fundamental AI understanding and the outcomes.
Speaker 3:I mean, there are lots of frameworks popping up out there, and lots of guidance pieces popping up out there, so I'd always recommend having a look at them, whether that's the OECD or, again, the British Standards Institute, which has its 42001 management standard that acts as a framework for things that are considered good practice to have in place.
Speaker 3:But one which I also really like is the ALTAI framework, the Assessment List for Trustworthy AI, plus any impact assessments that are tied to real user outcomes. So again, it's about how we move from theory to practice, to be practicable, to actually help organisations and business move forwards. But if I'm thinking about what is the most effective thing here beyond frameworks: diverse teams asking uncomfortable questions early, for example. That's where real responsibility starts, and that can also help you to shape an agile framework. You can have the founding principles of your framework in place, but it depends on what systems, tools and programs you're developing or even buying, because it's all about your understanding as well as your internal development. Whichever path you choose, or you may choose both, that's where real responsibility starts.
Speaker 2:Let's just touch on diversity for a moment, because what I understood from the research a few years ago was that diversity is so important when you're standing up AI teams, because in the past there was famously a Google or an HP team that did an image recognition system that wasn't able to recognise people of colour, because the people developing it weren't people of colour. So how important is diversity of thought, diversity of skills and thinking, diversity of all types when you actually run an AI project, so that the people developing it have a very wide range of diversity and you're not getting that bias in the model?
Speaker 3:This is the vital question. This is the ultimate number one on the checklist for me. I have been heard to say a few times: if the teams building the AI systems or coming up with the AI ideas aren't diverse and, ultimately, if they don't represent those that the end results are going to serve, it's not ethical; we should be saying no and going back to the drawing board. I mean, that's hypothetical. How do we do that realistically?
Speaker 3:So it's about bias mitigation approaches, and I like to think of bias as bad seasoning. So I've gone from the bowling alley to the kitchen: we're cooking the dinner and we've got some bad seasoning. You can ruin a great recipe with just a little, and that's what I tend to do quite often. But you can mitigate things.
Speaker 3:You can look at things like the data and rigorous testing, but across demographics; that's really important, because it can't always be one-size-fits-all, particularly for some of these big systems that we're now seeing in place which are making decisions on diverse populations. You mentioned the example there of discrimination against people of a certain colour. We're not always saying this has been done deliberately. It could just be an actual oversight, an indication that there weren't enough oversight points or think points and guardrails in place. But by having actual humans with lived experience in various different areas review the results, I think that can only be better, whether that's at ideation or implementation, but all the way across the life cycle. So really, that is no different to building diverse, high-performing teams currently, and diversity of thought comes in all different forms: it's demographics, it's various skill sets, critical thinking. It's not always the typical, obvious things either.
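As a concrete companion to the testing-across-demographics point, here is a minimal, hypothetical sketch of the simplest form that check can take: compare outcome rates across groups and flag large gaps for human review. The data, group labels and the 80% threshold are illustrative only, not anything referenced in the conversation.

```python
# Hypothetical sketch: compare an AI system's outcome rates across demographic
# groups and flag any group that falls well behind for human review.
import pandas as pd

# Made-up results: one row per person, with their group and whether the
# system approved them (1) or not (0).
results = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1,   1],
})

rates = results.groupby("group")["approved"].mean()
print(rates)

# "Four-fifths rule" style check: flag groups whose approval rate is below
# 80% of the best-performing group's rate. The threshold is illustrative.
reference = rates.max()
flagged = rates[rates < 0.8 * reference]
if not flagged.empty:
    print("Review needed for groups:", list(flagged.index))
```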
Speaker 2:I read, actually, that before ChatGPT 3.5 was deployed, they got 40 people from Upwork, 40 being a small number they could manage, and I read the white paper about that. You might give us some colour on that. How important was it for them to have those 40 people throw all sorts of ethical challenges at it so they could see what they were going to get? On day one I think it was still a bit rough around the edges, but when people realise that they actually did that, is that a really good example of trying an AI system on real, diverse subjects before you set it to work?
Speaker 3:Absolutely. I wouldn't let anything leave the building, so to speak, without fully testing it, and testing it with a variety of end users across the diversity mix, whether that's demographics, age, cultural norms, different communities and populations, or different languages, for example, but also to ensure digital inclusion for those who may have barriers to access; otherwise you're discriminating against them before you've even started. So there's lots to think about there.
Speaker 2:So you've done a lot of work with UK government departments for service development and innovation. What have been the most challenging ethical dilemmas you've encountered when implementing these systems and how did you address them?
Speaker 3:As with working with any government across the world, there is a lot to balance, and at the heart of it is always inclusion, user-led design, user-led contributions, et cetera. Again, it goes back to that point of who these systems are going to serve, and trying to make them as fair and equitable as possible. One project that I did work on that I can talk about, which was very interesting, was supporting farmers: how to provide smarter, more efficient services to farmers, to get the subsidies out of the door and enable them to get the food to market to keep the nation fed. They do an absolutely phenomenal job. Some of that was education, because we can't do things on paper anymore, it's just not going to work, but it was also ensuring that, as you move more towards automated decision making, you are reassuring people that fairness isn't just a checkbox. So you have to look at those kinds of areas. Some of that is about communication, engagement and inclusion, to ensure that you are bringing people along with you.
Speaker 3:Again, explainability and appropriate transparency, so they fully understand what you do, and that's embedded into every stage, whether that is the data collection ('this is how we're going to do it now'), model training or human oversight. And for me, what is the best mechanism? I think it's regular independent audits and transparency to users, and not just what the AI does, but why, and what's in it for the end user: these are the outcomes we're getting, we think they're fair and right, and these are the benefits to you, whether that's smarter, more efficient access to services, faster decisions, money in your pocket, whatever that may be.
Speaker 2:With a lot of clients I'm working with, the edict from on high has been 'we're doing AI', which is very broad, and so of course people want to get to market quickly; innovation and speed to market are important. But how can organisations balance responsible AI use with competitive advantage in these rapidly evolving markets?
Speaker 3:If we think about responsible AI, it shouldn't be a burden for any company, business or entity out there. Ultimately, it's about your brand's reputation and strategy; that's what you always have to think of. I do believe reputations will be won and lost on AI going forwards, whether or not the end result is right, and that could, again, just be that appropriate transparency piece. But the companies that are doing this right are really attracting the talent.
Speaker 3:We often hear that there's a bit of a skills shortage out there, and now we're all going through a mass upskilling, which is absolutely fantastic. They're winning contracts and building trust. And I do foresee that some may be taking ethical shortcuts: you might save money today, but it's going to cost you, perhaps the market, tomorrow. So there is a balancing act there. Yes, you can come up with the ideation, prototype, minimum viable products, and quickly get them out there and tested, but you still need those appropriate guardrails in there. And again, when you've got profiles and profits involved, you can kind of see why some of these things may happen.
Speaker 3:But actually, I think it is for people, whether you're working in tech DevOps or in a supporting business function, to stand up and say: right, is this the right thing? Where are these checkpoints? Where's our pilot? Where's our testing? Okay, this is now good to go. You still need those guardrails and those test points as you move along. It can't just be: let's chuck an AI decision-making system out there and see what happens in the wild, particularly not en masse, because your reputation will be shot down.
Speaker 2:So are there any companies or areas or industries that you think are getting it right, that people should look to as best practice? Maybe we can call out a couple of really great examples.
Speaker 3:For me, I'm always mindful of saying best practice, because often best practice is considered to be gold-plating beyond what the law requires, and we're just trying to foster good practice. In the future we may move to best practice, because obviously this is a work in progress for many of us out there. I quite like the nuclear and aviation industries, the real safety-conscious industries. I think high-profile brands, high-street brands and customer-service-facing entities could have a lot to learn from those areas, and that's mostly because they work on a functional safety model, which, again, you can look up online.
Speaker 3:With aviation, for example, most of us get on a plane now and don't even think about: oh, is this going to get me there? What's going to happen? It may be a floating thought in some people's minds. You sit down and think: oh, am I going to get my cup of tea? Is the Wi-Fi going to work? When am I going to get off? That type of thing. And that's because they've stuck to the functional safety principles and just kept driving down the risk as much as possible, to what is believed to be the lowest acceptable level, and they still work on that today. So I think there's a lot to learn there from a governance point of view, particularly because they're dealing with really high-consequence areas, where things could be catastrophic to life.
Speaker 2:So we're seeing a lot of public models, and now I'm seeing with clients that people want to go inside the enterprise and build their own enterprise GPT. What are the differing considerations they should have around guardrails when it's inside the enterprise and inside the firewall?
Speaker 3:There does seem to be a trend of building in-house LLMs; everyone seems to want an LLM. I'd say, well, what for? Again, if it's making you smarter and more efficient, fine, and of course you can build the walls around it so you keep your data, and you can do that with some of the big tech companies or you can do it yourself. I think you just need to consider what you actually want it to be used for, whether it's to scan through documents to provide people with summaries quicker, or whether, again, it's to help that ideation to move to the next critical thinking, decision-making stage.
Speaker 3:Building some of these systems can still be quite expensive, so, again, you've got to work out the return on investment on some of these. But again, there is a saying out there: just because we can doesn't mean we should. So it's about really understanding what you need them for. Would an LLM be part of your future business operating model going forwards? Would it dock into your current and future infrastructure? For example, are you going to start to automate your work processes and your workflows and move that forwards, or are you really just thinking pie in the sky, piecemeal: everyone else has got an LLM, so I think we should have one?
Speaker 2:So, looking ahead, what emerging ethical challenges do you anticipate in AI development, and what guardrails should organisations be establishing now to prepare for these challenges?
Speaker 3:It's always about trying to be on the front foot with your AI, emerging technologies, quantum, for example, and the whole innovation piece, whilst doing it responsibly. So some of the things that I'm looking at quite closely are foundation models, to see where they go; synthetic data, for example, there's a lot of discussion out there on that; and also agentic AI. I think agentic AI has got good promise. And again quantum, because quantum and AI will be closely linked going forwards, and there are ethical considerations for quantum AI, whether that's quantum-fuelled AI or the two working closely together. So, whichever way we go, we will need guardrails for autonomy, accountability and governance. Some of this is technology agnostic; it has to be, going forwards, and that would also place people, whether it's in the boardroom or in DevOps, in a good place going forwards, especially as AI begins to act as a decision maker rather than the tool, and that's when it will get very interesting, I think. And on guardrails, I think for me it's about guardrails without rigidity, because obviously we have to be agile now with AI. If we're asking people across the AI life cycle, at various points in the systems, to be agile, you might need to switch the system off there.
Speaker 3:Pause: that's not quite right. Or somebody's asked for their data to come out, whether it's anonymised or not; what are you going to do there? So they shouldn't be handcuffs, is what I'm saying; they should be more like scaffolding. That's how I think of guardrails, and for me the best ones are principle-driven ones with room for context. This is going to be really important going forward: what is the context? AI can make decisions all day long if we get this right, but what is the context here before we actually go ahead with that system or with that end decision, whether that's internal to a business or external to end users? How you handle explainability in a health app, for example, may differ from a recommendation system. And ultimately, standards as part of this give direction, but they're not a dictatorship. So it's agile scaffolding, but not handcuffs. We need to be not rigid, but have those pause points, shut-off points and restart points as necessary.
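And to make the 'scaffolding, not handcuffs' idea concrete, here is a minimal, hypothetical sketch of a guardrail wrapper in which every automated decision passes through configurable checkpoints that can approve it, route it to human review, or pause the whole system. The checks, thresholds and names are purely illustrative, not a description of any real framework discussed here.

```python
# Hypothetical sketch: a guardrail wrapper with review and shut-off points.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float
    explanation: str

class Guardrail:
    def __init__(self, checks: List[Callable[[Decision], str]]):
        self.checks = checks   # each check returns "ok", "review" or "pause"
        self.paused = False

    def apply(self, decision: Decision) -> str:
        if self.paused:
            return "system paused: no automated decisions released"
        for check in self.checks:
            result = check(decision)
            if result == "pause":
                self.paused = True   # shut-off point: stop the system
                return "paused for investigation"
            if result == "review":
                return "routed to human review"
        return f"released: {decision.outcome}"

# Illustrative checks: low confidence goes to a human; a missing explanation
# pauses the system entirely until someone investigates.
low_confidence = lambda d: "review" if d.confidence < 0.7 else "ok"
missing_explanation = lambda d: "pause" if not d.explanation else "ok"

rail = Guardrail([missing_explanation, low_confidence])
print(rail.apply(Decision("applicant-42", "declined", 0.62, "debt ratio above limit")))
```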
Speaker 2:Just picking up on the agentic theme: agentic AI, or AI agents. This is where AI will be somewhat autonomous, so, as you said, it's making decisions based on various inputs. What I've always thought is that it's going to be about trust. I would trust an agentic system to book me a meeting; I'm not sure I would trust it yet to access my bank account and pay for something. So where will trust come into the world of agentic AI, do you think?
Speaker 3:I think that goes back to the fact that you have to build trust to start with. Again, there's a lot of discussion about how you treat people fairly with AI. There are current laws in place in many countries: people, consumers, citizens have to be treated fairly. What does that actually mean? That is to be determined by whoever's developing the systems and the ultimate decision makers, and it also goes back, I think, to brand reputation, organisational tech development and trust. You need the right guardrails in place. You need to explain what you're doing with these systems for people, ultimately, to trust them, and they need to do what they say they're going to do after you've explained it to people.
Speaker 3:Because, ultimately, human beings have a fear of the unknown. There is a psychological element to this, and AI, for many, can be the fear of the unknown. We know AI, for example, has got a bit of a marketing problem out there: the robots are coming, you see robot hands over typewriters, glowing brains, for example. It's got nothing to do with that, and that's from some quite high-profile commentators and some organisations. So we just have to be measured with this, and a lot of it will come down to explainability. And then, into the future, AI will just become the tool; it won't be the story anymore. That's where I hope this ultimately goes.
Speaker 2:So regulation is a hot topic; every government wants to be seen to be regulating things properly. How can companies best prepare for what will probably be fairly heavy regulation, depending on the country and the culture?
Speaker 3:Yeah, this is always an interesting question, and it's particularly important for those that may have to operate across various different regulatory jurisdictions. You earlier referenced the EU AI Act, for example, across the EU member states. The UK is no longer a member of the EU, although we have continued with the General Data Protection Regulation, the data side of things, which is a big wraparound for AI. I think, ultimately, wherever you operate, it's about understanding whether you are in a regulated sector, of which there are many; I think in the UK alone there are 90 to 100 different regulated sectors, some well-known key economic regulators and some quite niche areas. It depends where you are operating. Keep in touch with what the regulators are doing, understand any calls for input or consultations that they're putting out, read them quite thoroughly and respond to them. If you don't have your say, that's up to you, but that is a mechanism, and they are quite willing to have conversations with people; it's about understanding what the approaches are. I mean, the EU AI Act is no secret; it's on the EU Commission website. We all knew it was coming. It's a phased approach, and 2026 is a big date, when some of the bigger things will start to be implemented. The EU is putting on regular webinars now for businesses that operate in those countries, through those countries or to those countries; you may not be based in a member state, but you might take data from them or operate through them. See if you can attend some of those, have a look at what guidance and information is being put out, and then start to map it out and put a plan in place.
Speaker 3:Ok, I operate in the UK. I need to ensure these are the principles I adhere to. Most regulators in the UK are doing it against current legislative frameworks. So you've already got your licensing agreements and codes that you need to adhere to. There's various laws in place that you would still need to adhere to, and you know you need to foster that good practice. So it's just about mapping it out in that sense, and I would think that you know anyone that is in a regulated sector, whether it's in one jurisdiction or a couple or many, wherever it is in the world, a couple or many, wherever it is in the world. If you haven't been thinking about this already, what's been going on? But it's never too late.
Speaker 2:Some great practical tips there. We're almost out of time, but we're at my favourite part of the show, the quickfire round, where we learn more about our guest. I'm going to fire some questions at you. iPhone or Android?

Speaker 3:Android. I like systems I can open up and poke around in.

Speaker 2:Window or aisle?

Speaker 3:Aisle. I like the freedom to get up and go, metaphorically and literally.
Speaker 2:Your biggest hope for this year and next?
Speaker 3:That we stop asking if AI needs ethics and start asking how to build it in as default.
Speaker 2:I wish that AI could do all of my...

Speaker 3:If it could triage my inbox for me in an effective way, I think I'd give it a hug.

Speaker 2:The app you use most on your phone?

Speaker 3:Probably an app called Notion. It's becoming a bit like my second brain at the moment.

Speaker 2:The best piece of advice you've ever received?

Speaker 3:Don't just build things people can use. Build things they should use.

Speaker 2:What are you reading at the moment?

Speaker 3:I've started to read a good book that was recommended to me, by a lady called Wendy Liu, who I'm sure a lot of your listeners will have heard of. It's called Abolish Silicon Valley. It's quite provocative, but it could be quite timely.

Speaker 2:Who should I invite next onto the podcast?

Speaker 3:I think maybe someone from a community organisation working on tech justice. I think they might be able to give you a completely different lens on AI.

Speaker 2:How do you want to be remembered?

Speaker 3:As someone who helped AI work for everyone, and not just the powerful.
Speaker 2:So we're all about actionable things here. What three actionable things should our audience do today to prepare for the threats and opportunities from AI, ethics and guardrails?

Speaker 3:If you're using AI already, audit the AI systems that you use and think about how they treat people. What's your governance framework? Could you confidently disclose it to somebody or communicate it in a very easy, basic way? Add an ethical review to your project board or your working groups, whatever your structure is within your business or organisation. And include someone new with lived experience in your next design meeting. We can't stand still, and I think there are always people out there who can add something new.

Speaker 2:Great actionable advice. Kerry, a fascinating discussion; I've learned so much today. How can we find out more about you and your work?
Speaker 3:Well, LinkedIn would be the best place. I do go back to people, and I'm always happy to have conversations. I'm not saying I know all the answers by any stretch of the imagination, but together we can always find someone that can answer any questions. I promise I will get back to everyone; I'm a bit behind, but I will respond.
Speaker 2:Kerry, thank you so much for your time today.

Speaker 3:You're welcome. Good fun, thanks.
Speaker 1:Thank you for listening to Digitally Curious. You can find all of our previous shows at digitallycurious.ai. Andrew's new book, Digitally Curious, is available at digitallycurious.ai. You can find out more about Andrew and how he helps corporates become more digitally curious with keynote speeches and C-suite workshops at digitallycurious.ai. Until next time, we invite you to stay digitally curious.