The Responsible Business Podcast

Ethical Machines: How to Design AI for Not Bad

Rehumanize Institute Season 2 Episode 2

In this episode, Reid Blackman, Ph.D., author of Ethical Machines, shares how to design AI for not bad by following a no-nonsense approach.


00:00.86
Kris Østergaard, Rehumanize Institute
So we're here on The Responsible Business Podcast, and today I'm speaking with Reid Blackman. Reid is the author of a new book called Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI, and that is exactly what we're going to talk about today: how to build ethical AI and what that even means. Reid is an AI ethics advisor. He is the CEO of Virtue, a consultancy that helps companies mitigate ethical risk. He holds a Ph.D. and is a former professor of philosophy at Colgate University and the University of North Carolina. His work has been published in Harvard Business Review, TechCrunch, VentureBeat, The Wall Street Journal, et cetera, and he also volunteers his time as the chief ethics officer at the nonprofit Government Blockchain Association, amongst many other things. But first of all, Reid, thank you so much for joining me here on the podcast. Before we dive into the topic of ethical AI and what that even means, it would be wonderful to hear a little bit about your journey. What got you from A to B to C to...

00:58.99
Reid Blackman
Yes, yeah, it's my pleasure. Thanks for having me.

01:14.55
Kris Østergaard, Rehumanize Institute
...working on mitigating ethical risk.

01:16.60
Reid Blackman
So, yeah, I didn't think I would be doing this, frankly. I wound up in a place I didn't anticipate. When I first got to university, I became obsessed with philosophy in general and ethics in particular. At the time, I was particularly fascinated by really abstract questions about the foundations of ethics: is it real, is it subjective, is it BS? Are there actually objective standards that cut across cultures? So I became obsessed with questions like that, and questions about, say, the nature of free will also fascinated me. They still do. So: undergrad, graduate school, professor, always working on these things, specializing in ethics. And when I was in grad school, I founded a fireworks wholesaling company. The reason that's relevant is that it explains how I became a mentor to startups when I was a professor, which explains how I started thinking, oh, I want to do something new and exciting, like these students who are doing these new, exciting startups. And so I had this idea for an ethics consultancy, but I didn't think there was a market for it at the time, and in fact I think I was right. But...

02:31.49
Reid Blackman
At some point, around the time of Cambridge Analytica, non-coincidentally, I started hearing the alarm bells around the societal impacts of AI and other technologies, and I thought, okay, I see what's going on here: the ethical risks of these technologies are also reputational risks, and ultimately will be regulatory and legal risks.

So these companies had better get their ethical houses in order, but they're not going to know how to do that, and I do. So, to make a long story short, I left academia and started my consultancy, and then the book really came as part and parcel of doing that work: doing the research, doing the writing, speaking to people. And the book is really me, almost in my role as professor, trying to explain to a non-tech audience what's up with all the ethical risks associated with AI: what is AI, and why does it give rise to these ethical risks?

03:24.62
Kris Østergaard, Rehumanize Institute
Yeah. And what I really love about your book is that it's very no-nonsense. I think it cuts to the chase and tries to help the reader understand what they need to understand, and also how to go about it; the last two chapters, primarily, are hands-on for executives and developers: what do you need to think about, what do you need to do? Maybe we can start our conversation here, a little broader to begin with, because you have some pretty interesting discussions in the book about how to even understand the topic of ethics: what is it, and what is it not? And one thing, for many of us who are maybe not that familiar with discussing ethics in a business context and a technology context, is that when we hear "ethics" we think of Aristotle and Kant and Plato, people who lived hundreds and thousands of years ago, and wonder: what's the relevance to us, and how do you even connect that to being a business person going about their daily work? You have interesting opinions about that. So maybe we can start there: what about all those philosophers?

04:44.32
Reid Blackman
Yeah, they're wonderful. I love reading their stuff, I love teaching them, writing about them. It's an absolutely awful idea to bring them into the discussion about how to mitigate ethical risks in a business context. They've got very robust theories that people will not agree on, and that's just not necessary. One of the things those theorists are trying to do is give these big theories that explain why something is good or bad, and that's all well and good, but we don't have time to engage in that in a business context. If, for instance, you think that, say, discriminating against people of color is wrong on broadly Aristotelian grounds, and I think it's wrong on broadly Kantian grounds, we don't need to come to an agreement about who's right, Aristotle or Kant, in order for us to say: okay, discriminating against people of color is bad, we're on board with that, now let's talk about bias risk mitigation strategies and tactics. We just get right to it. We don't need the background theory that explains why these things are good or bad. So not bringing those things into the discussion in a business context is important; bringing them in just leads people completely astray. It's not necessary, fascinating as those discussions are. On the weekend, by all means, talk about Aristotle and Kant. But if you're a data science team, or you're a senior executive who needs to approve or deny deployment of an AI, you know...

06:17.42
Kris Østergaard, Rehumanize Institute
Yeah, and so I guess what you're saying, if I understand you correctly, is that while their theories provide some sort of frameworks as to how to think about acting in an ethical manner, those frameworks are perhaps...

06:17.65
Reid Blackman
That was at the time.

06:34.73
Kris Østergaard, Rehumanize Institute
...not as specific to a business context, or maybe, I don't know, too big and too abstract?

06:44.70
Reid Blackman
Yeah, they're just engaged in a project that's not immediately relevant to what most people are doing in business right now. I mean, like I said, you might think... so here's one version.

07:00.11
Reid Blackman
The right thing to do, this is roughly Kant, let's just say, is that which is in conformity with the correct ethical principles, something along those lines, independently of what the consequences are. Okay, that's Kant's view; more specifically, that's a deontological view of ethics, on which the right thing to do is that which is in compliance or in accordance with certain principles. Then there's the utilitarian, or consequentialist, view. There's lots to say here, but roughly: the right thing to do is that which brings about the best consequences.

So it's not about compliance with principles; it's about bringing about the best consequences. Which is the right view? There have been literally thousands of years of discussion about this kind of thing. It's not important. All we need to know, in the context of AI in business in particular, is: okay, there are certain cases in which we need to have explainable outputs, or there are certain cases in which we need to be transparent about the kinds of decisions we're making about how we design the AI, or there are certain decisions we need to make in order to mitigate the probability of discriminating against, say, women or people of color at scale.

You don't need a general theory about rightness in order to make those kinds of decisions. It's about spotting the bad stuff, and those big theorists are not primarily in the business of spotting the bad stuff; they're in the business of explaining why the bad stuff is bad, which is a great, noble, fascinating enterprise. It's just not the kind of thing that businesses need to concern themselves with.

08:43.30
Kris Østergaard, Rehumanize Institute
No, but what they do need to concern themselves with is what kind of values they want to live up to, right? And you discuss that also, and we'll get back to it. But maybe even before that: another common claim, you write about it in your book and I would agree, an often-heard line when it comes to ethics, is that ethics is subjective, right? So anybody's opinion is equally as good as someone else's opinion. So how can you use ethics as a tool for anything if it's completely subjective?

09:25.45
Reid Blackman
Okay, that's a big question and I can't answer all of it right now; there's an entire chapter dedicated to it in my book. A couple of things to say. Number one, I don't really think most people think ethics is subjective, or at least I think they probably hold contrary beliefs.

So, for instance, some people say ethics is subjective, but then they think genocide is definitely bad, rape is definitely bad and wrong, I don't care where you are, it's bad. So I think people typically hold these contradictory beliefs.

You can't both think that genocide or rape is bad across all cultures, that it's objective, and also that ethics is subjective; those are contradictory. But people somehow manage to hold those two contradictory beliefs. So one thing to point out is just that people do, I think in many cases, believe that certain ethical claims are objective, like the claims that genocide and rape are bad. Okay, that's one thing to say.

The second thing to say is that you still need to articulate what your standards are, and that's really what, say, writing an ethics statement is about, or, as I talk about in the book, creating ethical case law. You need something like ethical standards or ethical risk standards. Those might be identical to your regulatory and legal risk standards, or, ideally, they go above and beyond what mere compliance with the law requires. What your standards ought to be is going to vary, and that's okay; it doesn't mean that ethics is subjective. To take an analogy: there are certain positions on the political left that are ethically unacceptable, and there are certain positions on the political right that are ethically unacceptable, and then there's all the in-between, the more or less reasonable political views where there's reasonable disagreement to be had. I take it that different companies can hold different ethical standards; there can be reasonable disagreement. Patagonia, for instance, has particular ethical standards. I don't expect those to be the ethical standards of all companies, and there are some companies that don't have Patagonia's ethical standards but still occupy a space within reason about what the ethical standards are. I take it that companies can articulate those standards. The last thing I'll say, and you can tell me whether I'm answering your question or not, is that I don't think it's important to articulate the ethical ideals of the company, at least not first and foremost.

I think it's first and foremost important that the company be able to articulate its ethical nightmares. What are the things that your company could intentionally or unintentionally do that would absolutely be an ethical nightmare? Specify what those things are, and then put guardrails in place to stop you from making those nightmares a reality.

12:13.20
Kris Østergaard, Rehumanize Institute
Yeah, interesting. And I guess that's also related to another distinction you draw, which is AI for good versus AI for not bad. So talk a little bit about that.

12:29.28
Reid Blackman
Yeah, sure. So there are lots of people in the AI ethics space, and I think it gets confusing, because you've got people at corporations talking about AI ethics, and you have activists and nonprofits and students and academics, and it's all a bit much; there's a lot of noise there. I think the most important distinction to make, at least for businesses, is between what I call AI for good and AI for not bad. Usually AI for good and AI ethics are equated: people think, oh, we're doing AI ethics, so we're for AI for good. And I say, hold on a second. AI for good is about creating quote-unquote positive social impact with the powerful tool that is AI, and that's a great thing. Most businesses, though, don't have the resources to do that, or they do, but it's not core operations; it's CSR or something along those lines.

So what I think businesses need to focus on is AI for not bad, which is essentially: look, we're going to pursue the business objectives we're going to pursue; how do we do it in a way that's not ethically, reputationally, regulatorily, and legally problematic, messed up, wrong? So it's AI ethical risk mitigation. It doesn't ask, how do we maximize positive social impact with the powerful tool that is AI? It asks: how do we make sure that, as we pursue our ends or goals, we don't choose means to those ends that realize a bunch of ethical risks? That's AI for not bad.

14:00.76
Kris Østergaard, Rehumanize Institute
Yeah. And then you list, I guess, three core risk issues with AI, which are biased AI, black-box algorithms, and violations of privacy. Maybe we can touch briefly upon these and then talk about the how-tos of moving forward.

14:22.36
Reid Blackman
Yeah, so there are lots of ethical risks that you could realize using AI. I focus on those three that you mentioned because they're likely to be met given how AI, or ML, works, and I'm using AI and ML, machine learning, interchangeably.

It's the nature of the beast that is machine learning. It's the way machine learning works: it's recognizing phenomenally complex patterns in vast troves of data, and these patterns can be too complex for humans to understand, so it's making quote-unquote decisions, or creating outputs, based on a mathematical calculation we can't wrap our heads around. And so you've got the black box problem.

It's recognizing phenomenally complex patterns in data, and some of those patterns can be discriminatory in nature, so you get the biased or discriminatory model problem in AI.
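To make the bias point concrete, here is a minimal sketch in Python, not from the conversation itself, of one common check, demographic parity: compare a model's favorable-outcome rate across groups. The data and the "A"/"B" groups are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical loan decisions (True = approved) with a built-in skew by group,
# standing in for a discriminatory pattern a model might have learned.
group = rng.choice(["A", "B"], size=1_000)
approved = np.where(group == "A",
                    rng.random(1_000) < 0.60,   # group A approved ~60% of the time
                    rng.random(1_000) < 0.40)   # group B approved ~40% of the time

# Demographic parity check: compare approval rates across groups.
rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {rate_a - rate_b:.2f}")
```

A large gap is a signal to investigate rather than proof of wrongdoing; which fairness metric is appropriate for a given use case is itself an ethical judgment, not a purely technical one.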

It's identifying those patterns after training on tremendous amounts of data, and that data is often about people. All else equal, the more data it has, the more examples it has to learn from, the more accurate it gets. But that just means organizations are highly incentivized to gather as much data, often about people, as they can, so they can better train their AI, and a lot of times the acquisition and storage of that data itself constitutes a violation of privacy. There's a lot more to say here, but for the sake of brevity: you can see that by virtue of how ML works, you get the black box problem, you get the bias problem, you get the privacy problem. And there are more ways in which those problems get exacerbated by how ML works, but that's the core of it.

Aside from those big three, there are also all the use-case-specific ethical risks. So you choose to make self-driving cars powered by AI; now there's this new ethical risk of killing and maiming people with your AI. And so on.

16:26.24
Kris Østergaard, Rehumanize Institute
Yeah. A couple of things I'm curious to dive a little bit deeper into. On the notion of these black-box algorithms: explainability is a word that gets talked about a lot when it comes to AI, right? And you discuss different types of explainability. I guess one question that pops to mind, really trying to nail it down, is: regardless of what kind of organization you are a part of, when does explainability matter, and when does it not? Because clearly it cannot be equally important regardless of what it is that you do.

17:07.49
Reid Blackman
Yeah, so maybe it'll help to make sure that the audience gets what we're talking about when we talk about explainability, because that term gets used in different ways. Look, when I think about what AI is, I just think: it's software that learns by example. I think that's the core of what AI is, at least at a conceptual level. It's software that learns by example.

The examples are data; it learns from lots of data. So you've got your training data, and you've got the pattern that it recognizes. Those are your inputs; then you've got your outputs, and you've got everything between the inputs and the outputs. So you give it a bunch of examples of what your dog Pepe looks like, it learns the Pepe pattern in all those pictures, and now, when you give it a new photo of a dog, it says yes, that's Pepe, or no, that's not Pepe. In between the inputs and the outputs, it's recognizing some phenomenally complex mathematical pattern, and it can be too complex for us to grasp. That's the black box problem. Explainable AI, of course, is when you can explain what's going on between the inputs and the outputs.
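To ground the "software that learns by example" framing, here is a minimal runnable sketch using scikit-learn, an illustration rather than anything from the book: labeled examples in, a learned pattern, a yes/no answer out. The two numeric "photo features" are made up; real image models learn from pixels.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training examples: each "photo" reduced to two invented
# numeric features, labeled 1 ("is Pepe") or 0 ("is not Pepe").
X_train = [[0.90, 0.80], [0.85, 0.95], [0.20, 0.10], [0.15, 0.30]]
y_train = [1, 1, 0, 0]

# The software "learns by example": it fits a pattern linking inputs to labels.
model = LogisticRegression().fit(X_train, y_train)

# A new photo, reduced to the same two features; the learned pattern decides.
print(model.predict([[0.88, 0.75]]))  # expected output: [1], "yes, that's Pepe"
```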

It might be utterly insignificant whether your AI is explainable, so long as it just works. In the Pepe example, as long as it gets it right, it says this is my dog and it is in fact my dog, and when it says this is not my dog, it's in fact not my dog: great, you don't care, low stakes, no worries.

But if it's making a prediction about whether the patient will develop diabetes in the next two years, or whether this person will be a good candidate for the job, or a good student at this university, or a good person to show an ad for renting a house versus buying a house, well, now the stakes are a lot higher, and then we're going to care: can we explain why we gave the person this decision, this output? There's another feature of this which I think is phenomenally important but often overlooked. When we're talking about explainability, we're often talking about what gets called global explainability: what are the rules of the game that transform inputs to outputs?

And that issue is really important, because if you don't know the rules of the game, you can't assess whether the rules of the game are fair or just or reasonable or good. When you're, say, telling people, we're going to tell you what your credit score is, or whether you get a mortgage, or whether you get a job interview, based on the rules of the game of the model, but we can't explain what the rules of the game of the model are, and therefore we can't assess whether those rules are fair, that looks like a big problem.
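As a concrete contrast with the black box, here is a minimal sketch, with invented feature names and data, of a globally explainable model: for a linear classifier, the learned coefficients are the "rules of the game," so they can be read off and audited for fairness.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical credit applicants: three invented features and a decision label.
feature_names = ["income", "debt_ratio", "years_employed"]
X = rng.random((200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy decision rule for the model to recover

model = LogisticRegression().fit(X, y)

# Each coefficient states how an input pushes the decision; a deep network
# offers no comparably readable rule set, which is the black box problem.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```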

19:51.55
Kris Østergaard, Rehumanize Institute
Yeah. And I guess the notion of fairness, for instance, relates to another discussion you're having: you talk about harming versus wronging people, and you have a whole discussion of whether something is a value or not a value, which maybe is related to this also. These are thick words, in the sense that they can mean a lot of different things, and I guess there's an importance in being very precise in what kind of language and terminology we use to describe what we're trying to achieve and how we are behaving. Can you talk a little bit about that?

20:39.48
Reid Blackman
Yeah, things get complicated here. So yes, I'm a stickler for using words precisely. That obviously comes at least in part from my training as a philosopher, and also from my general intolerance for BS; I think a lot of BS occurs because we allow people to be slippery with their words.

So I think it's important to be really precise, and it's also because the kinds of questions we ask, with the words that we use, guide the kinds of answers we offer. For instance, one of the things that might feel like a pet peeve of mine, maybe it's something a bit more than that, a full-fledged objection, is when people talk solely about what the harms of AI are.

But harm is a very particular notion; it has to do with feeling a certain way. I think the notion of being wronged is what's more important. Sometimes harming someone is wronging them, and sometimes wronging them entails harming them, but these can come apart. You can harm someone without wronging them: to take one simple example, in an act of self-defense you harm someone without wronging them, so it's ethically permissible.

You can also wrong someone without harming them. What are some examples here? Let's say I call the newspaper and tell them that you've done some really disgusting things at a playground, or something along those lines. I've wronged you, even if you haven't found out yet. Maybe they even do a further investigation and never bother you, because they conclude it was nonsense. But I've wronged you, even though I haven't harmed you.

In general, in cases in which I've betrayed someone and they don't find out, I've still wronged them, even if they don't experience harm. I'm sure there are other examples that just aren't springing to mind.

22:35.25
Kris Østergaard, Rehumanize Institute
Yeah. And I guess what it comes down to, and something that has been on my mind for a while: I feel that when it comes to the notion of ethics in a business context, or talking about responsible business and so on, in many ways it's early days. We've done some research into this, and even in some of the biggest, most successful companies in the world it's early days. They are experimenting a lot, they're trying to learn, and they don't have all the answers yet; they don't necessarily fully know what it constitutes to be a responsible organization, and they're trying to figure that out. And I feel that part of this is related to developing a language for ethics: how do we talk about this stuff, how do we understand this stuff, in order to translate it into our actual behavior? Which I feel is related to some of the discussions that you are having here as well.

So figuring out what your ethical standards are, or the values you want to adhere to, in order to feel that you are conducting ethical behavior in regard to leveraging AI, leveraging technology, or doing your business, is a big one, right? So how should the executive think about this?

24:22.21
Reid Blackman
So there's a way in which I actually don't think this stuff is that difficult. I mean, I've been studying this stuff for a long time, and the truth of the matter is that in many ways we know what to do.

The issue, the hardest issue, is not figuring out what we do to be responsible. And I don't really think it's early days. I mean, many corporations have been around for many, many years. They've had plenty of time to think, in general, about how to be a responsible business and a responsible steward of people, planet, et cetera.

The real problem is a lack of political will at an organization to actually put this stuff into practice, for senior leaders to say: yeah, we're going to do this, we're going to be serious about, for instance, doing our AI ethical risk due diligence.

If they really want to do it, you can get a robust AI ethical risk program up and running in maybe about a year, depending. I'm thinking mostly about multinationals; obviously, if it's a startup, much faster, and a fifty-person, hundred-person, thousand-person company, a lot faster. But if you're a multinational: you do an ethics statement, you do a gap and feasibility analysis, you create a framework and a plan for rolling out that framework. You can get that done in less than a year. But companies don't. They're waiting and seeing: when is the EU AI Act going to get passed, probably next week? Is it going to be enforced, and how severely? So I don't think the difficulty is, hey, how do we even think about ethics, how do we do this? I think it's lacking the political will to do it. Once they want to do it: number one, create some standards, like with an AI ethics statement. Number two, and this is the step that I think most organizations miss, we could talk about this... well, let me say some things about implementation and how to implement. But did I answer your question?

A lot of companies will rush to implementation. They'll create something like an AI ethics statement so they have some high-level principles. I think usually those statements are not done very well, because they're not action-guiding; they're not really standard-setting. They're just vague commitments to big words like fairness and explainability and transparency. But even putting that to the side, they then rush to implementation, and they run into all these obstacles, because they're trying to roll out these principles, but the principles are too high-level to be implemented. Data scientists don't know what they're talking about. It hasn't been harmonized with, say, existing enterprise risk management. The compliance people aren't aware of this stuff; the people in HR don't know anything about it. So the most important thing to do after you've set your standards is a gap and feasibility analysis. Where does our organization stand relative to these standards right now? What does our current workflow look like? What do our current policies look like? What does the current product lifecycle look like? What do the current cyber policies look like relative to this stuff? What level of awareness do our people have? If you don't know where the organization stands, you're going to have a real difficult time figuring out what needs to be put in place, at what point, with what cadence, who needs to be educated, what kind of education they need, and what onboarding and training need to look like.

So after you do that standard-setting, which is really important, you've got to understand where your organization is relative to those standards, and then, out of that, build a framework for: okay, what are the most important gaps to fill, and how big of a lift is it to fill those gaps?

28:26.12
Kris Østergaard, Rehumanize Institute
And a framework like that, what does it look like? Who should be developing it, and who should oversee it and ensure that the organization lives up to it?

28:37.33
Reid Blackman
A cross-functional team needs to be involved, in some kind of working group. It's usually thought, I think it's standardly thought, that, okay, this is a techie problem, let the technologists figure it out. But it is not solely a tech problem. So, like I said earlier, your HR person, as an example, needs to be involved: one, because there are going to be new policies and procedures that need to be complied with by your people, and HR may be charged with tracking compliance or educating, and so on and so forth.

So a cross-functional team is important: not just, say, your chief analytics officer or chief data officer, your CIO or CTO, but people from risk, cyber, compliance, HR, general counsel, and so on. You need that cross-functional team because ultimately the AI ethical risk program isn't, or ought not to be, a siloed thing done by junior-level data scientists. That's a disaster. If it's really going to be robust ethical risk mitigation, and of course compliance with the EU AI Act, it needs to be woven throughout the enterprise.

And it's not going to get woven throughout the enterprise unless the heads of the various business units and other units understand what's going on here: what are we trying to implement, and how do we harmonize it across the enterprise? It's not easy. Again, it's not that we don't know what to do; you've then got to get the people involved to say, yeah, and we're going to do it.

30:07.90
Kris Østergaard, Rehumanize Institute
Yeah. And so, in your experience, getting it woven into the awareness and the actions of the enterprise: what are other things to be aware of here? Where do you see that people are successful and not successful at doing this? What are the actions to take?

30:28.77
Reid Blackman
Again, I think it's not a matter of success or lack of success, if you have appropriate resources dedicated to putting it in place. It's a matter of efficiency or lack of efficiency.

The biggest problem is lack of efficiency in the rollout, because it remains siloed, because a lot of organizations, most, don't understand that we're talking about organizational change here. We're not talking about the chief data officer telling the people who work under him or her, hey, look out for these things, and then deploying some technical tools for bias identification and mitigation. I think that's what a lot of people think it is: oh, okay, it's something that data science teams need to do. So they start putting things in place, and then they run into existing enterprise risk management saying, that's not how we do things; that doesn't coalesce with, doesn't cohere with, doesn't harmonize with what we do; your procedures need to integrate with existing enterprise risk procedures. If you approach it in this siloed manner, with a siloed methodology, inevitably you're going to run into roadblocks, because, for instance, the cyber team says: you're trying to do X in order to live up to your ethical standards, but that requires you to do certain things with data that we haven't built the proper security around. Because you didn't invite cyber in from the start, your AI ethical risk program isn't properly enabled, because the cyber team hasn't done what it needs to do to enable you to do what you want to do from an ethical and regulatory risk perspective.

And it's like that every time it happens: you didn't tell risk something, you didn't tell HR something, you didn't tell cyber something. Every time that happens it's another slamming on the brakes, recalibrating, internal political maneuvering. Could that eventually lead to everyone throwing up their hands and leaving it by the wayside? Sure. But more often than not, especially in today's regulatory environment, it's not about quitting; it's just about wasting lots of time and money, when this could have been done way more efficiently had the gap and feasibility analysis been done in the right way early on.

32:54.00
Kris Østergaard, Rehumanize Institute
And so there could be plenty of people in an organization, as you're also pointing to, thinking: why does this have anything to do with me? If they even have the awareness of it. And perhaps a sense that what the AI ethics people are talking about here is a barrier to me doing my job, or that it's holding innovation back, all of these kinds of objections. What do you see as ways to not get to that point, and to, I don't know, sort of brand the ethics people and the ethics job here in the best possible way, as a support for achieving your business results?

33:48.67
Reid Blackman
Okay, so there are a lot of things to say here. Number one, different people are going to have different motivational levers, and so some people will be attracted to the way that integrating ethical standards into your products makes your brand look better, makes your brand actually better. Some will find it personally fulfilling. For some, it'll speak to their ego: look how ethical we are. So there are just going to be different motivations for different people. That's one thing to say.

Another thing to say is this: I'm not one of these people...

There are a lot of people in the AI ethics space who will say things like, ethics is always better for innovation, and, ethics does not hamper innovation, and I think these people have sort of, I don't know if this translates interculturally, but they drank their own Kool-Aid. They've just gone too far. It's an empirical claim, a claim for scientific investigation, if you like: whether, and the extent to which, integrating ethical standards decreases or increases innovation, or decreases or increases efficiency. So there's a way in which I want to say:

look, it's an empirical question, I haven't done the empirical investigation, I don't officially know. That's one thing I'll say. That said, we know that in some cases it's going to hamper innovation; in fact, that's kind of the point. Say we were going to create these Boston Dynamics-type robots with guns on them. It would have been a great innovation, but we decide not to do it because it flouts our ethical standards. That's a case in which innovation has been hampered, but probably a good hampering, because hampering ethically disastrous innovation is a good thing.

So in some cases it will hamper innovation, and in some cases it will positively enable it: because we thought about the ethical standards, we came up with a solution we wouldn't have had otherwise, and it's actually better than all the available alternatives. Okay, great, of course there are going to be cases like that. There are also cases in which, if we hadn't considered all the ethical stuff, we would have been able to move faster; of course there are going to be cases like that. And there will also be cases like this: in the past we had lots of internal friction and misalignment about whether we should do something, or how to do it, because there wasn't settled agreement on what our ethical standards are, and that slowed things down; now that we have internal alignment on our ethical standards and how we do things, we're able to move a lot faster. That's certainly going to be the case sometimes. So I think there are going to be loads of examples where it hampers innovation and where it increases it, and loads of examples where it slows people down and where it speeds things up. I think that's just the way it's going to go. The question then, given the various contributions, both positive and negative, to speed and efficiency, is: where do we want to place our bets?

Do you want to place your bet that, oh, it'll be fine, let's just ignore all the ethical risks, let's carry on with what we think is going to be overall the fastest, most efficient thing, jettison ethics, or at least not incorporate it in the first place, and just move fast and break things? And we'll just bet that we don't break things so badly that we destroy our customer and client trust, that we destroy our brand. You can place that bet. I wouldn't advise it myself, either from an ethical or a business perspective, but one could do it.

37:28.51
Kris Østergaard, Rehumanize Institute
Yeah, sure, and we've seen that plenty, right? And from my perspective, we are increasingly moving into a space where that is becoming harder. The pressure from whichever stakeholder is relevant to you is growing, and the complexity of being seen as ethical and responsible is also growing. So I guess, in one way, you could say: if you're able to build a successful business while doing it in an ethical manner, then that in itself is an innovation journey, if you like, if you can align those two things.

38:14.20
Reid Blackman
Yeah, and I would emphasize, especially for those people who hear "do it in an ethical way" as, oh man, we have to reach these lofty ethical goals: no, build your business in a way that's not ethically risky. That's a lower bar.

It's still a high enough bar, though, such that the business is protected and people are protected, and that's the number one thing, right? So from an ethical perspective, ethical rule number one: do no harm, do no wrong. And from a business perspective: do no harm to your brand, or to the customers and clients that you serve. So when we say build it in an ethical way, we don't mean, and I don't think we should mean, let's all hold hands and sing hallelujah and build a brand that benefits people and planet and everyone is so great. No, no, no: build your business in a way that's not ethically risky.

39:06.18
Kris Østergaard, Rehumanize Institute
Yeah, and I guess that's to the point of AI for good versus AI for not bad that you were making in the beginning. But what's your experience, and you write about this in the book also, with...

39:10.37
Reid Blackman
Ah, exactly yep.

39:22.51
Kris Østergaard, Rehumanize Institute
...KPIs, and how to use KPIs in the best possible way in relation to ethical AI? Because we normally have plenty of KPIs in our daily work, right? Do we need new KPIs, other types of KPIs? What's their role, if they have one?

39:41.90
Reid Blackman
Yeah, I mean, you do need KPIs for this kind of thing; you need KPIs for lots of things. For instance, to what extent are people aware of the AI ethical risk program? You've got to be able to track that. I mean, you can write a policy in a Word document and send it to everyone in the company, but if nobody reads it, it doesn't matter. Some companies will say they rolled out their ethics program, but what they mean is they sent an email; no one knows about it, no one read the email. So you've got to track: to what extent do people even know that there's an ethical risk program? To what extent do people understand the policies? To what extent have they been implemented, and across which departments, by whom?

To what extent are people complying with the new procedures? To what extent does compliance with the new procedures actually mitigate the risks we were trying to mitigate? You've got to track these things.

So you do need metrics, you do need KPIs, for these kinds of things. You might use various automated tools for this kind of thing: how good are our automated tools? How often do our automated tools falsely flag problems? How often do they truly flag problems?
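As one concrete example of the kind of KPI being described, here is a minimal sketch, with made-up audit data, of tracking an automated tool's false-flag rate against human review.

```python
# Hypothetical audit log: 1 = the tool flagged an output as a problem,
# and 1 = human reviewers confirmed it was a real problem.
tool_flagged = [1, 1, 0, 1, 0, 0, 1, 0]
confirmed    = [1, 0, 0, 1, 0, 0, 0, 0]

true_flags  = sum(t == 1 and c == 1 for t, c in zip(tool_flagged, confirmed))
false_flags = sum(t == 1 and c == 0 for t, c in zip(tool_flagged, confirmed))

# Share of everything the tool raised that turned out to be noise.
print(f"false-flag rate: {false_flags / (true_flags + false_flags):.0%}")  # -> 50%
```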

40:54.77
Kris Østergaard, Rehumanize Institute
You also write specifically about AI ethics committees. I would love for you to dive a little bit into their relevance, how to put them together properly, and how they work and support the organization.

40:54.83
Reid Blackman
You can track these things.

So, talk of AI ethics risk committees highlights a particular part of an overall governance structure where ethical risk committees, or something akin to them, are absolutely crucial. That's because, usually, before things get to the level of an AI ethical risk committee, they're going to be looked at by data scientists, and data scientists of course have a certain kind of experience and a certain kind of training, which means they also lack certain kinds of experience and training that, say, people from risk have, and people from compliance, cyber, general counsel, ethicists, et cetera.

So there are some AI products where, at certain points in their lifecycle, questions around their ethical and regulatory risks need to be elevated to the appropriate risk committee, risk board, I don't care what you call it, that can cross-functionally vet the product for ethical risks and advise on risk mitigation strategies and tactics. I think those things are crucial. I think that in some cases product teams should be required to consult them, and similarly that they should be required to comply with, if you like, the dictates of that ethical risk committee; it should have some degree of authority over product teams, at least in high-risk situations, with high-risk products: products that, say, limit or restrict access to some of the basic goods of life, like healthcare, jobs, shelter, and so on.
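To illustrate the kind of escalation criteria being described here, this is a minimal sketch, an invented gate rather than a rule prescribed in the book, that routes use cases touching basic goods of life to the ethics committee.

```python
# Hypothetical high-risk domains: use cases that gate access to basic goods.
HIGH_RISK_DOMAINS = {"healthcare", "lending", "hiring", "housing", "education"}

def requires_ethics_review(domain: str, affects_individuals: bool) -> bool:
    """Escalate to the ethical risk committee when the criteria are met."""
    return affects_individuals and domain in HIGH_RISK_DOMAINS

print(requires_ethics_review("lending", True))     # True: consult the board
print(requires_ethics_review("logistics", False))  # False: e.g., a screw-delivery model
```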

42:56.29
Kris Østergaard, Rehumanize Institute
Yeah, so there's the whole consideration of how much power they should actually hold in an organization. Are they simply advisory, where you can listen to them and do what they say or not, or do they actually have the power to stop business decisions from being made?

43:11.94
Reid Blackman
Yeah, if you really take ethical and regulatory risk seriously, then you make it the case that that ethical risk committee has to be consulted when certain criteria are met for the product. Does it have to be everything? If you're just making a...

43:13.73
Kris Østergaard, Rehumanize Institute
Are there rules of thumb here, in your opinion?

43:30.71
Reid Blackman
...machine learning model that predicts when the screws are going to arrive at the toy factory, okay, let's not get nuts here; we don't need to go to the ethics board for this sort of thing. But if you're doing something in healthcare, or in financial services, where you're giving people loans or mortgages, or determining credit limits or whether they get credit, or something along those lines, in those high-risk situations it should be the case that they're required to consult an ethics board, and that when the ethics board says you should do X, the "should" is not "it would be nice if" but "you are hereby required to do X."

There's then a question about whether the ethical risk board can be overruled. Say the product owner goes to the ethics board and the ethics board says no: can the product owner appeal to a more senior executive, and can that senior executive overrule the decision? That's dangerous,

at least from an ethical perspective, because that senior executive might have dollar signs flashing in front of their eyes, might have a promotion in the offing or a bonus in the offing, and if this thing gets deployed, they're going to get that bonus or promotion. So they're incentivized to just do the quick thing that will get them what they need to advance in their careers, which, sadly but obviously, happens. So making it the case that the committee can't be overruled is a really powerful way of greatly mitigating the probability that you're going to realize those ethical risks.

But of course, from a business perspective, it could be dangerous. I mean, let's be honest: ethical risk boards can be overly sensitive, and they can say no to things that were fine enough, where the financial upside makes it the case that it's worth the risk. Let's say people's lives are not quite on the line here; we're not talking about diagnosing diabetes, but something a little bit, I don't know, important, but not literally life-altering, like maybe whether the person sees an ad campaign for houses to buy versus houses to rent. I use this example because Facebook got fined for showing Black people houses to rent and white people houses to buy, right? So it's high stakes, but it's not as high stakes as "you've got diabetes."

So maybe in that kind of case there are certain financial reasons for saying: yeah, we're not going to go with the ethics board here; we're going to overrule them. I don't recommend it, but, let's face it, from a business perspective it's not a crazy thing to think.

45:56.39
Kris Østergaard, Rehumanize Institute
Yeah. Who should be on an ethics committee or ethics board?

46:04.33
Reid Blackman
Well, it's the sorts of folks that I mentioned earlier. I do think having an ethicist involved is important. Ethicists, I think, are not important because they're sort of ethical oracles or priests who have greater insight into the right and the good and the true; I think they're really good at navigating ethical deliberations and helping other people navigate those deliberations, so they are very helpful at facilitating those kinds of conversations. I think that lawyers are important, because there are legal issues and regulatory issues, and people from risk, compliance, and cyber. It's also important to have a data scientist involved, because if you're talking about risk mitigation strategies and tactics, you need to make sure that it's technologically feasible to engage in those tactics. So if the ethics board says, just create some synthetic data to mitigate those biases, a data scientist needs to be there to step up and say, actually, that's way too hard for us given our resources and time. That's important to know.

47:10.28
Kris Østergaard, Rehumanize Institute
Then there's the notion of setting up an internal committee or board, but there's also the notion of involving external people, or even having external boards looking into ethical risks and behavior here. What's your thinking around that?

47:29.19
Reid Blackman
Sorry, you're asking whether the board should be internal or external?

47:33.75
Kris Østergaard, Rehumanize Institute
Yeah. I mean, you can set up an external one. It makes sense to have these internal boards, right? They're dealing with sensitive issues for the organization, so you wouldn't just invite anybody in. But there might be something to be said for involving certain people, maybe in certain situations.

And then there's the whole notion of setting up an entirely external board as a supervisory body of some sort; what Facebook did comes to mind, right? But it could arise in other situations as well.

48:03.32
Reid Blackman
Yeah, I think that's exactly right. I mean, having an external advisory board is fantastic. I have not seen it much; yes, there's the Facebook example, but those are very exceptional cases, right? They handle, what, single digits, maybe a dozen cases a year, something like that, against who knows how many decisions that Facebook, or Meta, makes. So having that external board can be really powerful. That said, in many cases the external board is going to have to work with some kind of internal board or something along those lines, because a lot of times we're talking about an ethical risk appetite, and different companies are going to have different ethical risk appetites, just as they have legal risk appetites and business risk appetites, and these things need to be properly balanced. So having an external advisory board can be phenomenally powerful at spotting those ethical risks and advising, but they will almost never be the final say. They'll usually just be advisory.

I recommend them, but I think it would be better to have an internal board with some external members than to have a fully externalized board, if you're going to get actual integration and compliance.

49:28.17
Kris Østergaard, Rehumanize Institute
Yeah, all right, excellent. So, as a sort of summing up of our conversation here: for companies out there who are doubling down, or considering doubling down, on the topics we've spoken about, saying, AI is becoming a bigger theme in our organization and we've got to be sure we handle things right, what are your top one, two, three recommendations to ensure that they go about this in the best possible way?

50:05.51
Reid Blackman
In an ideal world, when I work with clients, they start off with a seminar attended by the kind of cross-functional group I've been talking about. Lots of people talk about AI ethics, and a lot of them, frankly, don't know what they're talking about; also, everyone's not always on the same page. People hear "AI ethics," and a lot of people hear "AI for good," and if some people are thinking AI for good and other people are thinking AI for not bad, you've already got misalignment. So you need to get people on board with what it is we're talking about: understanding, for instance, what the black box problem is and what its sources are, what the biased or discriminatory model problem is and what its sources are, what the potential areas of privacy violations are, and what the point about use-case-specific ethical risks is.

I think doing a, for lack of a better word, robust crash course on AI ethics is a great way to start, because that cross-functional team needs to be on the same page. They need to be educated if they're going to, one, start creating those ethical standards with, for instance, an AI ethical risk statement, and, two, engage in a gap and feasibility analysis, because they're going to have to look at: hey, in my own department, what do I think we're missing, and where do we stand relative to these standards? So starting with that educational bit, I think, is really important, and from there you move on to a statement and a gap analysis.

51:32.35
Kris Østergaard, Rehumanize Institute
Wonderful. Now, Reid, obviously people should buy and read your book, Ethical Machines. But if they want to learn more about you or engage with you, where should they find you?

51:45.99
Reid Blackman
I post pretty frequently on LinkedIn. If they're looking for a crash course on AI ethics, I have one, not a robust one, a six-, seven-minute crash course on AI ethics, at reidblackman.com. I also have, I don't know, maybe ten HBR articles on there as well; there's a section on my website that links to all the articles. So I think reidblackman.com has lots of resources, and I'll be adding more resources soon. So I think that's the place to go.

52:11.53
Kris Østergaard, Rehumanize Institute
Wonderful. Well, it's been a really interesting conversation. Thank you so much for joining us on the podcast.

52:17.82
Reid Blackman
That was my pleasure. Thanks for having me and thanks for the great questions.
