We Love Ugly Data! The Deep Analysis Podcast

A Legal Perspective on the Ethics of AI

Michael Simon, Attorney | Season 2, Episode 4

In this podcast, we have an extended conversation with XPAN Law Group and Seventh Samurai attorney Michael Simon. It's a wide-ranging and highly informative discussion that looks at some of the legal precedents around Artificial Intelligence and the challenges of applying legal standards to it.

Note: The interview was recorded remotely - unfortunately the sound quality is not what we would like, but the discussion is worth it :-)




Alan Pelz-Sharpe  0:03  

Well, hello, welcome to another of our lockdown podcasts, this time with attorney-at-large Michael Simon of XPAN Law Group and Seventh Samurai. It's a really good discussion with a true expert exploring the ethics of AI. And that's a topic that's top of mind for some, but probably not for most. I think you're gonna learn a lot from Michael today; I certainly did. So let's jump into the conversation.

 

Alan Pelz-Sharpe 0:32

Michael, introduce yourself, and people will realize you've got a very different background to me on this topic.


Michael Simon  0:38  

All right. Thank you, Alan. And it's great to be here, I guess, wherever here is in the virtual world. My name is Michael Simon. I am an attorney with the XPAN Law Group, that's X-P-A-N, and they're out of Philly. I'm actually out of the Boston area, starting up the Boston office. We're focused on cybersecurity, privacy, and the like. At the same time I wear another hat, where I am continuing my legal technology consulting role with a company called Seventh Samurai that I founded about seven years ago. We focus on legal tech work, but also expert work for law firms in e-discovery, particularly with database systems, and we advise some clients just on how to market and sell to attorneys. I did a lot of events back when we, you know, did events, and I miss those. And so my LinkedIn profile describes me... I've named myself the nerd turned lawyer turned nerd.


Alan Pelz-Sharpe 1:52

Well, up until recently, you and I didn't know each other, but it turns out we have a mutual friend, Andrew Perry, who's a bit of a legend in our industry, and also the irascible and rather wonderful Mr. Horrigan, who worked for me at 451 and of whom I am very, very fond. David is my real connection, actually, and my real education, if I'm being honest, on legal technology. So what I thought we would pick up today are the conversations you and I have been having. My colleague Kashyap and I published a book on AI last year, and honestly, the focus of it was: let's simplify this so that anybody who's interested can at least get their hands dirty, can figure out what to do without advanced math, because AI is a business problem, a business solution. You don't have to be scared of it. But to our surprise, and this is my sort of long-winded intro here, it was one chapter in particular, called "The Dark Side of AI," where we basically explained, yeah, this can really go wrong, that stood out. We raise questions such as "What is bias?" and "What is ethical AI?" Should you even be using it, even if it's good and fast and efficient? I mean, is it actually good news? That was really how we connected, I think, Michael. And for my starting point, I would say I have no idea what ethical AI is. It sounds good, sounds like something you should be doing, a wonderful idea. What it actually means, I'm completely clueless about. To sort of trigger this off, let me give you an example I've used in workshops. We can talk about racism, we can talk about ageism, we can talk about all the isms we've heard of. So now I'm running an HR system with AI, and I've figured it all out and it's ethical. But it turns out that whoever was in charge of hiring previously, and of course this is the data we've been using for our AI system, really didn't like ginger-haired people. That's gonna be a tough one to pick up.


Michael Simon  4:14 

Yeah, I'm not sure how that gets identified on a resume. It would certainly be a different kind of thing. In fact, one of the original problems in this area, one of the original issues, was a famous incident back in the 80s. I remember it was in England, it was one of the medical colleges. I should know the name by heart, but I don't, but I do remember that the Lancet called it no less than a complete disaster for the English medical system. Take that with whatever grain we want now… Someone, the admissions officer for a medical school, was concerned about too many of those foreigners–I think you could probably do the accent better than me, I won't try–who didn't speak English well enough becoming doctors. So they set up a system, a rudimentary system, to screen out anyone with a non-English-sounding last name. And no one quite realized they had done that, no one other than that one person, and so for about six or seven years, maybe eight, it screened people out, and the diversity level of this particular medical school went to zero. Previous to that, it had actually been known as one of the more open-minded medical schools in England, and it was just a terrible scandal. So there are ways in which unethical AI can be deliberate. But in the end, I do want to comment on something you said earlier about what ethical AI is–I'm not sure we know what it is. Is it, you know, 2001: A Space Odyssey ending with HAL saying to Dave, "Dave, I'm afraid we'll have to have a reasonable discussion about that."


Alan Pelz-Sharpe 6:05

Surely, Michael, what is ethical to one person is not necessarily ethical to another. I mean, in my life I've been very fortunate, I've met so many people through the years, and some of the people I've loved the most, I can't say they've been the most ethical, in retrospect.


Michael Simon  6:25  

Yeah, I don't know if I want to go there on that one. I'll just say I think we need to at least set certain ground points. You know, the groups that the law now protects, those are hard-won victories over the years, and it's not a joke or an ironic statement to say that I am old enough to remember when some of those groups were not protected. Back in my original lawyering days, I did a good deal of work with the then-new Americans with Disabilities Act, and that was a hard-fought battle to get there. So I think we can at least say that if the law is going to designate a group and we're going to protect it, let's make sure we continue to do so. That's not to say that other groups that aren't protected by the law don't deserve protection as well. But I think we have to at least start somewhere with that. The other aspect is transparency: we need to be clear on what this stuff does. Because if we're not, then we can't even know what's fair.


Alan Pelz-Sharpe 7:38

That's where I'm concerned. I mean, can we even know? Is it even possible to know? If we have a massive amount of data and we're using deep learning techniques, who knows?


Michael Simon  7:51  

And I understand. I've talked to the data scientists, I know enough to be dangerous, or at least enough to be more depressed by the fact that there are certain types of machine learning where there's a big question mark in the middle of the process, because that's the way the system's built. That old, famous cartoon with the scientist at a blackboard full of equations, where in the middle it says "Then a miracle occurs," equals, you know, x. That's how some of these systems work. But I think we just have to observe the results at that point.


Alan Pelz-Sharpe 8:27

Well, okay. But from a legal standpoint, we've got systems that we have no idea what they're actually doing. We don't know if they're biased or not. How on earth do you legislate for that?


Michael Simon 8:40  

You know, I wish I had a really good answer to that. There's an unfortunate answer and there's an even more unfortunate answer. The unfortunate answer is that the law tends to lag behind our understanding and our ability to adapt; the law lags behind technology. The much more unfortunate answer is that technology, and the folks who use it, can at times take advantage of that. There was a class I had the honor of teaching back at my alma mater, back in the 2000s, on internet law, and at that point one of the big legal texts on that concept of internet law was Lawrence Lessig's Code. Professor Lessig is quite famous for talking about what he calls the law of Silicon Valley, where the technology determines the law. It's not just, if you're driving down the street and you see that the speed limit is 45 miles an hour, do you choose to exceed it or not? If we build your car, or even take you out of the equation of driving your car, so that it cannot exceed the speed limit, now that is the law of Silicon Valley: the facts supplant the law. I think we've seen enough bad examples of what happens with the law of Silicon Valley to have some nostalgia for when it was just the law of, you know, lawyers. And unfortunately, we're just not there yet where the courts have a good handle on it. Over and over again, I am... I can't say shocked anymore, but just kind of saddened at what I see when the courts face the need to deal with AI seriously and carefully, and they don't. And if I can go on for one more minute, I know I'm getting very professorial here: with some great folks at the big firm of Hogan Lovells, I co-authored a long article for the Yale Journal of Law and Technology at the end of 2018 on a case called Lola versus Skadden. That case involved David Lola, a contract reviewer for e-discovery, a tough and not very rewarding job, who wanted overtime pay. But they wouldn't pay him overtime because his employer said, no, you're a lawyer, and lawyers don't get overtime pay; it's right there in the FLSA. There were a bunch of similar cases brought, and the district courts got rid of them all, including this one. For some reason, the Second Circuit decided to review that question even though no one had brought it up, not the district court, not the parties. One of the judges just decided at oral argument, and we have the transcripts, we've heard them: can't machines do this? Isn't this something that this technology can do? And if that's the case, then how can it be practicing law? And that was the basis of their opinion, just that kind of knee-jerk... and yeah, it's a good insight. But it needs to be something more thought out, because now, as we discuss in that law journal article, we start to set a precedent. Just like no one's really sure what ethical AI is, no one can quite describe what the practice of law is. But if we say that as soon as machines can do something, it's not lawyering, that's gonna go nowhere good.


Alan Pelz-Sharpe 12:24  

So to rewind very quickly here: you talked about the law of Silicon Valley. You know, that's sort of my world, and I would really challenge anybody to take me on on this one, I think that's alive and well. I don't think anybody's backing down on it. I think some of the big tech companies have learned the power of PR now, so that they can occasionally say, "Ah, I can't believe that happened. Oh, no!" They've done all of those things. But at the end of the day, and I'm going to sound very unethical myself here, if you are a $50 billion company and you're charging ahead, and you know there's a law coming into effect, potentially, maybe in two or three years' time, and it might fine you a million dollars, why would you care?


Michael Simon  13:19  

Today on Deep Analysis, Alan discovers regulatory capture. You know, I suppose now, to make this a fun and exciting podcast, I should accept your challenge. Except, no, I'm going to decline it and just do the boring thing and agree wholeheartedly with you. If your business model is invading people's privacy, and we both know it... I'm sure you have said it over and over again, to the point where people are bored with you saying it, and if you'd like, I can get members of my family, who are of course all home now, to come and confirm that they are sick of me saying the phrase, "If you've signed up for something on the internet where it's all free and they're giving you stuff, you're not the customer, you're the product." And if you're making those folks the product, that's not a model you can run without trampling over a whole bunch of, well, what should be privacy rights. So it's just the cost of doing business. And those companies that are making a lot of money at it, we're not seeing that being stopped. And I don't know if we will.


Alan Pelz-Sharpe  14:44  

No, I'm not sure we will either. I mean, you said the law can be slow to keep up, or even in any way, shape, or form keep pace with technology. But you could argue that there's enough data collected at this point that you don't even need any more data. The horse has already bolted.


Michael Simon  15:05  

You know, that's a hard sell with any marketing folks. Or if you do try it, do so in a socially distant manner, so they can't hurt you.


Alan Pelz-Sharpe  15:11  

But I guess... I mean, the thing is, where maybe we can have some light at the end of the tunnel is this. When we talk about these big issues, by default we're going to talk about Amazon, we're going to talk about Facebook, we're going to talk about Google. It's obvious, right? But the thing is, AI is now coming into the workplace. It's very affordable, it's pretty easy to deploy, very easy to deploy, actually. Even a small business can utilize it. That trend has started, and it's going to accelerate dramatically over the coming years. AI is just there. I wouldn't say it's in all software, that's obviously a wild exaggeration, but it's becoming a common component. So that brings it sort of full circle. These people are not capturing petabytes of data to do deep analytics work to identify patterns in customers; they're using it for more mundane things. Here's the thing: as that comes in, in my mind, and I'd be fascinated to know what you think of this, there's sort of a spectrum here. At one end of the spectrum, you've got AI to do stuff, just stuff, right: reading a document. This is a letter A versus a letter B; this is a contract rather than an invoice. It does that kind of thing really fast, really well, and, you know, who cares, whatever. At the other end of the spectrum, I'm making decisions about your life: I'm making decisions as to whether you get a mortgage, I'm making decisions as to whether you get parole. I'm wondering if, as we move forward, we should maybe be clearer that AI isn't a thing. It's some technology that's used in different contexts. Be clear what those contexts are, put a spotlight on some of them, and say: these, we need to be really careful about. Yeah.


Michael Simon  17:09  

And I think that's a good point. I will, however, point out that the line between AI that reads a document, that looks at letters or looks at words and makes, you know, baseline decisions on those, and AI that makes decisions that will impact people's lives, that's a spectrum. At some point, you need to read those documents and use them to make those decisions. You know, it's funny, I sent you a decision last week that has me upset enough that I'll have to go write another journal article on it. Although it says it's not precedential–I wish they had made it precedential–it's called Rogers versus Christie, out of the Third Circuit from earlier this month. It involves just the kind of thing you've described: whether someone should be in jail, setting bail. If you look at who's in jails, in some states 75% of the people sitting in jails are there because they don't have money for bail. They haven't been convicted of a thing, at least not for what they're in for. They're just sitting there because they don't have the hundred bucks or thousand bucks or whatever bucks to get themselves out. And you have states that have taken a great deal of action to try to lessen that. Now, some of those systems... it would be a podcast in and of itself to talk about the COMPAS system, the one that's the most popular in the United States, a complicated algorithm that the folks at ProPublica in 2016 pointed out has effects that are racist. I hate to use such strong terms; I know us lawyers don't do that. So, can you have a simpler system? Can you have a better system? And now here's this Rogers versus Christie case, which involves someone making a claim based upon a product created by, and let me make sure I get the name right, the Laura and John Arnold Foundation, something called the Public Safety Assessment. It tries to get non-violent offenders out of waiting in jail, particularly when, at least in most expert determinations, they're unlikely to commit a crime, and particularly a violent crime. And by the way, the algorithm we're talking about, you can see it on the foundation's page. It literally fits on one screen. It's not a computer program. It's just a series of calculations, simple math, addition and multiplication, stuff that all of us lawyers, even me, can do. And yet here's the crazy thing about this case: the plaintiff is not suing because they're claiming there was some bias in it that kept them or someone they loved in jail. They're actually suing because they claim the system didn't work on the other side. That it let someone out of jail, pending trial, pending the whole process, who then turned out to be violent. It's a terrible thing. This June Rogers... someone was let out in New Jersey, released with certain restrictions, not bail, but certain restrictions to check in with police, do X, Y, and Z. And three days later, he shot June Rogers' son 22 times and killed him. It's a terrible thing, I get it, the system failed there. But now we're having someone attack these algorithms from the other side, saying, look, it's defective, it didn't properly protect the public. And there's a long, brutally angry... I have never read a complaint with such anger in it. I understand that, obviously, this poor woman is angry.
The attorneys channel a lot of anger in the complaint, and the complaint was dismissed. It was dismissed by the district court because they found that this particular algorithm, this very simple form of AI, was not a "product" under New Jersey law, and that even if it was a product, it wasn't distributed commercially just by being used by judges to help them make determinations. And the Third Circuit, in an opinion that's not precedential, we can't cite it for legal effect, but I think people will read it anyway because we're all grasping at whatever straws we can get to figure out what to do with these things, agreed with the district court. They said AI is not a product, and in some ways I can see where that approach stands. But what upsets me is: okay, if AI is not a product, then what is it? How do we legally evaluate it?
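Note: for readers curious what "a series of calculations" that fits on one screen can look like, here is a minimal sketch of a simple additive point-scoring tool of the kind Michael describes. The factor names, weights, and cutoffs below are hypothetical illustrations, not the Arnold Foundation's actual Public Safety Assessment formula.

```python
# A minimal sketch of an additive risk-scoring tool in the spirit of
# instruments like the Public Safety Assessment. All factors, point
# values, and thresholds here are hypothetical, for illustration only.

def risk_score(age_under_23: bool,
               prior_violent_convictions: int,
               prior_failures_to_appear: int,
               pending_charge_at_arrest: bool) -> int:
    """Sum weighted points across a handful of risk factors."""
    score = 0
    if age_under_23:
        score += 2
    score += min(prior_violent_convictions, 3)     # contribution capped at 3
    score += 2 * min(prior_failures_to_appear, 2)  # capped at 2 occurrences
    if pending_charge_at_arrest:
        score += 1
    return score

def recommendation(score: int) -> str:
    """Map the raw score onto a coarse pretrial recommendation."""
    if score <= 2:
        return "release, minimal conditions"
    if score <= 5:
        return "release with monitoring (e.g., police check-ins)"
    return "refer for detention hearing"

if __name__ == "__main__":
    s = risk_score(age_under_23=False, prior_violent_convictions=1,
                   prior_failures_to_appear=0, pending_charge_at_arrest=True)
    print(s, "->", recommendation(s))  # 2 -> release, minimal conditions
```

The point is that nothing here is a black box: the entire decision logic fits on one screen and uses only addition and multiplication, which is what makes the legal question of how to evaluate it so pointed.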


Alan Pelz-Sharpe  22:11  

A question has to be asked. You know, I have a personal interest here, in that I've volunteered in prison systems here and in the UK for many years. The question has to be asked: should you even be using AI for that kind of thing?


Michael Simon  22:25  

And that's a very good question, because the opposite of that came up back in 2016, in a case called Loomis versus Wisconsin. A prisoner who claimed that he was wrongfully incarcerated, or incarcerated for too long, under an AI sentencing guideline attacked the efficacy and the accuracy and the ethics, I guess we could say as shorthand, of the particular program used. And the Wisconsin court... I just look at what they could have done. They could have forced the creator of that system to come into court, and any of the courts could force this, to come in and show how that system works, to provide some transparency, to explain it. And instead, the court declined. They said, we're just going to trust it. And we're also going to say that these trial court judges are not really using it to make a determination, they're using it for advice, and they're taking all these other things into account. And we're going to list out a couple of pages of other things they need to do. And frankly, we're going to pretend that our criminal justice system has time to do all this.


Alan Pelz-Sharpe 23:42

But it's that bizarre human belief that computers don't make mistakes. I mean, I was in a supermarket about a year ago, and I don't remember what I bought, but let's just say it was some milk and butter, a couple of items. And I'm stood there, and the person at the checkout says, that's $52, please. I'm like, what?



Michael Simon 24:06  

That’s some fancy butter.


Alan Pelz-Sharpe  24:08  

No, it's not, I said, it's just two things. And I swear, this person, who I'm sure is a lovely person, is just pointing at their screen, basically saying, the computer says it's 52, so it's 52. There's no discussion here. A supervisor then heard this and came over and pushed a few buttons and fixed everything. But it's that thing: yeah, I don't really understand it, but it's really clever, therefore I've got to trust it. That's a fundamental problem with tech in general, I would say, not just AI. So, moving on and maybe rounding this out: have we anything positive to say? I mean, is there anything here…


Michael Simon 24:51

Oh I’m sorry, you want positive news in the midst of a pandemic?


Alan Pelz-Sharpe  24:56  

No I don't! Well, we all want some. But, I don't know, I'm not an attorney, as you know, and I don't claim to be. What I find fascinating is that during this terrible pandemic situation, and maybe I'm being silly here, I think people are starting to appreciate what they have. I think they're starting to realize that maybe they didn't need so much. I think they're starting to question some things.


Michael Simon  25:21  

These are times of, you know, I'm gonna sound so pretentious here, these are times of great portent. Things are going to have to change. Lots of folks are going to come out of this very badly, and I hate that; I think we all know the economy is a mess, a lot of things are a mess. But it will drive change, and a questioning of some of our assumptions, particularly when those assumptions are not well founded. For example, you're not the only one to say that we trust machines too much. Experts have been saying it for years. It's easy to go, oh, that's just a grocery clerk, what do they know? But we all do it. We all trust machines far too much, at every level, from the most learned amongst us on down. Hopefully, we will see people start questioning assumptions better: the assumption that machines are infallible, the assumption that we can't or shouldn't try to impose our ethics on these systems and should simply accept, hey, the machine's done it, so that's the law of Silicon Valley. No, no, we need to ask ourselves. In the end, in the article that we wrote for the Yale Journal of Law and Technology, we talked about the things that make all of us valuable, and it's about being able to apply human judgment. There are things we do really well. And yeah, you're right, those machines are great at telling you whether it's letter A or B, and they have been for years. Now they've gone to the next step, where they can tell you what that word is. And then the next step: they can tell you what that sentence potentially is, and what the document surrounding that sentence, all those paragraphs, potentially says. And they can look at how much money you have in the bank, and how many other credit cards you have, and how many car loans you have, and whether you're on your own, and decide, hey, you should get a loan or not. But at some point, we need to start applying our own human judgment and wisdom, impartiality and accountability and sense of fairness. Because when we take it to the next step, hey, now let's go mine your Facebook contacts, see who you're friends with, and let's check their credit ratings, because we want to see who you're running around with, what circles you move in… really? Let's see what kind of political opinions you have, maybe we'd better judge those before we give you a loan. Really? Companies are doing it right now. None of this is stuff I'm making up; these are product pitches. There are startups out there right now doing that. Doesn't mean we have to agree with it.


Alan Pelz-Sharpe  28:18  

Well, on that cheery note, Michael…


Michael Simon  28:21  

Lesson for today, do not ask the lawyer to be positive. 


Alan Pelz-Sharpe  28:25  

It’s a really important discussion to be having.


Alan Pelz-Sharpe  28:28  

Once again, thank you so much for joining us today. I hope you enjoyed the conversation with Michael as much as I did. If it sparked any thoughts, and wherever you are on your AI journey, whether you've yet to start it or you're way down the line, let us know. As an advisory firm, research is core to everything we do, and connecting and learning about your experiences is absolutely crucial to us. So please do reach out. If you want to learn more about AI, you can always check out our book on Amazon, Practical Artificial Intelligence: An Enterprise Playbook, or our website www.deepanalysis.net. Until next time, bye for now.


Transcribed by https://otter.ai

