#DigitalFrontiers
#DigitalFrontiers brings together influential voices to share their expert insights and practical wisdom on the intersection of law, AI and emerging technology.
Hosted by Partner and technology law expert Richard Nicholas, each episode features down-to-earth conversations with leaders from the legal sector and beyond, exploring the human side of AI innovation and digital transformation.
The Role of Ethics in AI Projects
Meet Sidra And Her Mission
Richard Nicholas: Hi, this is Richard Nicholas, and today I'm joined by Sidra Hassan, an AI ethicist working with various businesses. So I wonder, Sidra, if you could describe the sort of work that you do for businesses.
What AI Governance Looks Like In Practice
Sidra Hassan: Absolutely. Hi, Richard, it's really nice to be here. The kind of work I do for companies is a really good question, because my background is consultancy, and as anyone who has worked with consultants or in a consultancy knows, you wear many hats depending on the time of day, the time of week, the time of month. But the sort of work I tend to do is helping clients adopt AI, and adopt it with a strong governance foundation. That governance, for me, then extends into ethics. What that looks like is conducting workshops, understanding pain points, and finding the use cases that deliver the most return on investment for our clients, deliver value for their customers, and also set them up to be strategic players in the AI space and successful in the long term. My passion definitely bleeds into my work with AI ethics and governance: I love taking clients on that journey too, especially if they come from a non-regulated sector, showing how governance and ethics at the start of AI adoption can really set them up for success. And that message is landing more easily now, as we start to see some of the harms AI can cause to users, especially vulnerable groups like children. But a lot of what I do is the education piece, the discovery piece, the workshops, and implementation for our clients.
Richard Nicholas: Yes, and I've certainly heard you talk about ethics in AI before, and I've seen a lot of the work that you do. How did you get into this whole field, and what led you to it?
Sidra Hassan: As many people would say, it's been a bit of a squiggly career, but it probably goes back to when I was at university. As a high schooler I was very inspired by my modern studies teachers and by learning about institutional racism, society and sociology, and that led me to study international politics at university, to understand how some of these things work at a global scale. That interested me quite a lot. I then went on to do my master's in public policy, and the takeaway in a nutshell was: policymakers have little knowledge, little time and many issues, so how do they make it work? That has actually been the most relevant takeaway throughout my career. When I graduated I really wanted to go into international diplomacy, or international organisations like the UN or Brussels. I was quickly humbled, because I don't speak another European language and I don't have a dad who could bankroll many internships in Brussels. So I started to look at what else was out there, and really I fell into tech. Looking at graduate schemes and what aligned most with my transferable skills, tech was the best fit, and I joined a graduate scheme with a consultancy. During my first week there we were paired up with a mentor who was, at the time, quite forward-thinking in that he was looking at digital ethics. The task our team was given was: how do you map digital ethics onto a product canvas? And honestly, that was it. That was the moment the stars aligned, and I thought, wow, the things I care about and have lived experience of, such as facing bias, discrimination and exclusion, I can actually bring into tech, because those are problems within tech as well. From then on I knew that's where I wanted to be, but because it's not a linear journey, I really had to figure out how to plot the stepping stones to get where I wanted to go. It's quite funny: old colleagues, even friends, will come to me and say, 'It's really cool what you're doing, how did you get into it?' What they don't realise is that this has been six or seven years in the making. It's been a long process where they've not seen behind the scenes, only maybe the last two years, where things have aligned and what I do has become much more straightforward, relevant and explainable. So I thought, let me learn the tricks of the trade first and get my breadth and depth of expertise in consultancy. Naturally, I followed business analysis, user research and product management, all while talking to my colleagues about digital ethics: why is it important, why does it matter? At the time there were quite a lot of fringe cases where these kinds of ethics had been violated, and I would bring them to lunch-and-learns with my colleagues and say: this is why digital ethics matters, this is what it is, this is what we can do. A lot of it was an education piece.
Then in my last job an internal AI team was created, which I joined, and that's where everything took off a little more. That's where I was focusing on the governance side of things, the ethics side of things and AI implementation in a much more structured manner, rather than trying to bring things together in the abstract.
Richard Nicholas: I love that. So, moving from international politics and how you might change the world from a policymaking perspective to actually finding a way of doing that, but in technology and AI.
From Global Policy To AI Governance
Sidra Hassan: Which I feel has only become more relevant now, actually. It feels very full circle, doesn't it? Now what I'm looking at is AI policy and governance, and how we enhance multilateral cooperation between different states and different international organisations, through work I'm doing with a nonprofit ethical AI alliance. And it's so interesting. We're putting together an AI harms map that we've created for the United Nations General Assembly, happening in New York later this month. All of the things I learned and wanted to do are fusing and coming together, and it's a really nice serendipitous moment.
Mapping AI Harms For Accountability
Richard Nicholas: Yes. And what is the AI harms map, then? What does it do for businesses?
Sidra Hassan: Well, firstly, through the nonprofit we're not targeting businesses; we're looking at how you bring accountability and oversight into AI governance and big tech, where we're probably seeing a big lack of it right now. How do we help intergovernmental organisations like the United Nations or the EU, or other NGOs, to actually hold big tech and other players to account? Because so far, and I speak from my own personal opinion here, AI governance does seem to be lacking some of that accountability mechanism, from what I've seen. It's not strong enough. And we see that because there are still harms: AI is exacerbating social issues around mental health among young children, or children across the age range, actually, and among vulnerable groups who are quite susceptible to AI psychosis and things like that.
Fear As Fuel And The Parachute Metaphor
Richard Nicholas: Certainly a worthy fight. You can't escape the news right now of various things that have gone wrong in AI, or the sorts of things being pushed on children in particular. Even in my own practice, looking at AI products, there are things I've seen already, where people have come to me for advice, where you think: I wouldn't want this available to the general public. There are things that people should not have the ability to do or use on the wider public. So they're coming to me because they're trying to do the right thing, as I'm sure they are when they come to you; they're trying to make sure they comply with the law and get things right.
Sidra Hassan: People ask me what gets me out of bed in the morning. Clients have asked me this, and various colleagues, and they'll say, 'Is it the communication?', because I quite enjoy communicating, I'd say it's one of my strengths, or 'What motivates you?' And my answer is: genuinely, fear. Fear at what is happening in the AI space, at the way governments are treating it as a race, and at the ways product teams are adopting AI without actually knowing how far-reaching the consequences are. It's not the same as building a website, where governance, risk and compliance are baked in, where the field and discipline are more established and we even have accessibility rules. It's not quite like that. It has a much more sinister aspect that we are not seeing, because of how people are using it and because the technology is so new. So I always laugh, and I think I throw people off, but it's fear: people are building the plane and nobody's building the parachute. And I need to wake up and build the parachute, because people have needed it, they need it right now, and they're going to need it in the future, and there aren't many people building it in a way that counteracts some of the harms the plane crashing could cause.
Richard Nicholas: That's interesting. You're talking about building the parachute while the plane is being constructed, putting those sorts of guardrails in place, I suppose. But how do you actually do that in practice? What does that mean for the people you work with?
Two-Pronged Client Strategy For AI
Sidra Hassan: The way I approach it for my clients is probably a two-pronged strategy. The first is to talk about what they know, which is usually governance. Sometimes, actually, I'm talking to a client where governance is not really part of the process, because we're just talking about AI implementation generally; again, this goes back to my first point about consultants wearing many hats. Sometimes it's purely 'how do we plug and play?' It's about meeting them where they're at and what their use case is, and then determining what value they want out of it. Is it more revenue? A lot of the time, customer loyalty of some sort is a given: they're building it for their customers, and they want that customer satisfaction. So it's finding out what their motivations are, and then bringing the conversation round to education. Meeting them where they're at helps you establish trust; it definitely does for me. We're saying: absolutely, we're here to help you with what you're looking to deploy or achieve. And then: but have you thought about having strong foundations? Have you thought about how the EU AI Act might apply to what we're trying to do? Have you thought about the customer data you're ingesting and how you're going to use it? Have you thought about anonymised data? How is your data quality? That's how I tend to approach it. My title is AI Ethicist, and I cling dearly onto that, because that's what I love to do and talk about, and hopefully I bring customers and clients on the journey. But it is very much a journey. It starts from where they are, then moves to governance, or maybe even data quality to begin with, then governance, and then, once we've really grasped the meat of what's required in terms of governance, it brings in that education piece about ethics. Why should we be going slightly further, and this can vary from organisation to organisation, than what the current GRC discipline covers? Because we're seeing unprecedented technology, unprecedented use of that technology, and as a result unprecedented consequences of using it. I don't go in saying, 'OK, I am the ethicist, let's talk about morals and values and how we're going to bring them into our AI machines; what do you think is good and what do you think is bad?' Absolutely not. It's much more a journey of understanding what the organisation's needs are. It also really helps that we've got industry-recognised frameworks, like the NIST framework, and even the EU AI Act, which carry values such as transparency, safety, accountability and trust. So after you've spoken about governance, you can start to bring in elements of those recognised frameworks, which help you build the ethics conversation in. Because ultimately, and I always say this, when someone sets out to build something, whether they're a solopreneur, a startup or a fully fledged multinational organisation, no one sets out to harm people.
Frameworks, Values, And Real-World Incidents
Sidra Hassan: People set out for one of two goals, and usually both: to make money and to solve a problem. Those are usually the two motivators for starting or running a business. At no point does anybody think, 'this will harm people, it doesn't matter, let's cut corners.' Those are almost accidental, bureaucratic, difficult political decisions that end up happening, and as a consequence people are harmed. But what if we could take some of that control back? What if we could take some of those learnings and apply them at the start? We've seen other new technology eras, with cloud and with the internet, so how can we take the learnings from there and almost do a post-mortem? I tend to do this with clients, and in any talks I give: here are the facts, and here are the examples where AI has gone wrong. Here is where facial recognition has shown racial discrimination and a black man has been unfairly incarcerated. Here is where ChatGPT has been shown to be sexist, and it's led to a female employee not being progressed over her male colleague. All of these things, and sadly I'm sure there's a plethora more examples out there now. How can we look at those and work backwards, so that we're implementing the things that would prevent them: in our governance conversation, in our data analysis conversation, and in the value we're trying to bring to our customers and how we might increase revenue by improving that loyalty? I've spoken quite a lot, but it's not one-size-fits-all. It's very much a conversation and a journey.
Winning Buy-In From Legal And Leadership
Richard Nicholas: I see that, and bringing back control, controlling the process and the outcomes, I suspect, is very much the focus for many of the general counsel and in-house lawyers who listen to this podcast. So I imagine you might get some pushback when you're introducing ethics into a business, with people wondering: how is this going to slow a project down? What does this really mean for progressing the adoption of AI? For in-house lawyers in particular, and those looking to get a grip on the AI being introduced into a business, what would you suggest in terms of lessons you've learned about making sure companies adopt ethics within their AI practices?
Sidra Hassan: One thing I have to acknowledge is that it's probably slightly easier for me coming at it from a consultancy background rather than as a lawyer. I imagine people might get their backs up when a lawyer comes in to talk to them about ethics; that might land differently from Sidra, the AI ethics consultant, coming in to talk about AI ethics. So I want to acknowledge that I probably have some privilege there, and it might be a little bit easier for me. But if I were to give advice or tips, I'd frame it this way: it is hard. And I'm preaching to the choir with the audience listening to this, but governance and regulation are seen as red tape to innovation, especially right now, when we're in this ever-growing AI innovation hype cycle being fuelled in San Francisco, and we're seeing the fallout from that here and in Europe as well. I feel like I'm cheating, because the tips I'm going to give will probably get the reaction, 'yes, of course, that's an obvious one, I'm already doing that.' But it's easier for regulated industries, because you start off with regulation: you want to be compliant, you don't want a fine, so it's a much easier in. The other way in is maybe the audit angle. Even for companies that aren't highly regulated, incorporating this sort of governance, and then ethics, into the mix helps them have a transparent audit process. And the other one, and we're seeing more and more such companies, although I don't know if this is being rolled back as a result of American policies, is those that are socially conscious: B Corp companies, for example, or those who feel strongly about their social and organisational values and want to do the right thing. Speaking to them about AI ethics is again an in. It fits with their social values, with what they're trying to do for their customers and, generally, for the ecosystem they're within, and that helps you have those conversations and say, OK, have you thought about X, Y and Z? What I find really useful, especially with hesitant clients, is using examples related to their industry or field and saying: actually, there's been this misstep, or this is what a company has done and it's landed them in hot water or with a fine, and here's how you can avoid that and actually be industry-leading in how you choose to adopt and roll this out. So it's very context-dependent, which will probably be a theme throughout most of the answers I give. But it's about understanding what the customer or client's motivations are; there will always be a road back to the ethical side of it, or at the very least the governance side. That's probably what I would say.
Richard Nicholas: That's really useful. So regulated areas are an easy start; legal issues and fines, an easy start. But if, as a lawyer, you're struggling to convince people that you should have some role in ethics, then look at the sorts of things that can go wrong. I've looked in the past at, I think there's a website called incident.ai, which covers the sorts of incidents various businesses have had with AI in different fields, some more comedic than others. So that might be a decent starting point for someone trying to convince a business that ethics should play a role.
Sidra Hassan: Absolutely. There's an MIT incident tracker and an MIT risk tracker as well; I've been looking into them recently, at the taxonomy and things like that. Those are also good, publicly available sources that try to crowdsource instances, and they might help the conversation. One other thing that might be useful is simply to self-reflect: why are you speaking to that client about AI ethics? Why do you want them to care? Why should they care? If you can answer that question, it helps you figure out how to approach the conversation, because usually you're trying to prevent your client from doing X, Y or Z. That can help bring the conversation to where the starting point needs to be.
A Bleak Forecast And The Road Ahead
Richard Nicholas: Excellent, good thinking. In terms of the future of AI and ethics, then: I think you said at the start that what motivated you a great deal was fear of the race towards AGI, of ever faster and more powerful AI models being created and implemented very quickly. What do you see for the future of ethics and AI?
Sidra Hassan: My personal prediction, I feel, is quite a bleak one. I think in the next three to five years we're going to see a growing number of AI-related incidents, and we're going to see a lot of fallout. The last two weeks alone have brought quite a few things. And unfortunately, minority and vulnerable groups are probably going to be disproportionately affected by AI-induced harms, so I think these harms will continue until they're too large to ignore. I was reading a governance paper recently that drew a comparison with cars: when cars were introduced, the number of fatalities shot up, and kept climbing for a good period, maybe twenty years, though don't quote me on the number, before legislation actually happened, along with the social adaptation that helped manage and curb those fatalities, after which they dropped to quite a low number. I think something similar might happen with AI: we'll see the number of harms go up quite a lot before we see regulation, and also social adaptation of how we use AI in society, before those harms start to come down. So I think the next three to five years are going to be really interesting for lawyers, for sure, and for those keeping an eye on this space.
Reputation, Differentiation, And Doing Good
Richard Nicholas: OK, so possibly a backlash based on the number of harms we see out there, possibly something that might trigger that sort of movement. But in the meantime, for businesses who want to keep their reputation and actually show that they are a force for social good in the world, there's a role for ethics right now, just as they're implementing AI.
Sidra Hassan: Yes, to prepare for that future, there definitely is. We're already seeing how reputation can harm businesses quite a lot, especially in this AI space where there's a new AI startup every day. So I think those that will stand out and be the differentiators in this, let's say, post-harm world will be the ones who made those conscious decisions earlier on: the ones who thought about the ethics, the social demographic of those using their product, and the regulations before others.
Connect With Sidra And Final Thoughts
Richard Nicholas: OK, excellent. That sounds a good point to end on, really. But among those who have heard what you've had to say here, and about what you do for businesses and other sectors to make sure they behave ethically with AI, I'm sure there will be people you'd like to connect with. It would be great to hear the sorts of people you could do with connecting with, and how they can reach you.
Sidra Hassan: I'll start backwards, because it's easier to say where to reach me. On LinkedIn, just Sidra Hassan: connect with me there, send me a message, or send me an email. I'm usually approachable and easy to contact. As for the sorts of people I'm looking to connect with, there's not a specific group or person; really, it's anyone who finds this conversation interesting, wants to know more about it, or is already doing a lot in this space, and those who also believe we should be building the parachute and want to build it. I'd be more than happy to connect with them. Equally, people who completely disagree with my point of view, I'd love to speak with them too, because I'll only get better by breaking out of my own echo chamber rather than staying in it. So anyone, really, who finds this topic interesting and has a view on it one way or another: connect with me on LinkedIn, I'm always happy to have a chat. It's a really interesting space, and as I mentioned, a lot is going to happen in the next few years.
Richard Nicholas: Excellent. So a fantastic time to watch this space, and to connect with you if you'd like to join the conversation. Thank you very much, Sidra; that's really useful, and it's great to hear what you're doing in the space and how you're approaching ethics. For those who want to get in touch, please do contact Sidra directly. Meanwhile, thank you very much again, and Sidra Hassan, hopefully we'll speak to you soon. Thank you very much.
Sidra Hassan: Thank you.