PrivacyLabs Compliance Technology Podcast

The Future of Privacy Law in Artificial Intelligence with Stephen Wu, Esq.

April 28, 2021

Transcript
Paul Starrett:

Hello, and welcome to the first PrivacyLabs podcast. I have the pleasure of having Steve Wu with me here today. First, just a little bit about PrivacyLabs; remember, that's one word: PrivacyLabs. We specialize in compliance technology. I will say a bit more about us at the end of this podcast, and I will give Steve that option as well. But we're here today to talk primarily about artificial intelligence and privacy. A quick note: I've known Steve for about 20 years. He's one of my very few go-to people for this type of thing. He's definitely at the pinnacle of his practice, and someone I trust implicitly. So Steve, why don't you tell us about yourself, rather than relying on me to summarize what I couldn't possibly do justice?

Steve Wu:

Thank you so much, Paul, for having me today. I'm a shareholder at Silicon Valley Law Group, and I've been with the firm since 2014. I'm a practicing lawyer, and my practice focuses on information technology in five areas. One is transactions: helping to strike deals, especially for vendors looking to sell a product or service. The second piece is compliance: different information technology compliance requirements, and making sure my clients comply with the letter of the law. The third piece is liability: avoiding liability, and also working together with the litigation lawyers in my firm to defend our clients' interests in court suits or alternative dispute resolution. The fourth piece is incident response. Today, incident response means responding to data breaches, but in the future, and we'll talk about some of this today, there will be other types of incidents, accidents involving autonomous vehicles, for example. And the fifth piece is governance: working on policies and procedures to have internal controls over information technology, especially in artificial intelligence, robotics, and automated driving. So those are the five pieces. I started my career doing intellectual property law and litigation, spent five years as the second in-house lawyer at VeriSign, Inc., which issued digital certificates for secure electronic commerce, then started my own practice and ran that firm for 10 years, and since 2014 I've been at Silicon Valley Law Group. I've been integrally involved in the American Bar Association's Science and Technology Law Section, starting as co-chair of one of its committees, the Information Security Committee, then becoming a member of the section's governing council, later an officer, and chair of the section from 2010 to 2011. I helped start a lot of committees within the section: artificial intelligence and robotics, big data, Internet of Things, cloud computing, homeland security, virtual worlds and video game law. And since 2019, I have been chair of the ABA-wide Artificial Intelligence and Robotics National Institute. Along the way, I've written or co-written seven books on information technology security, and I've been working on a lot of other publications, including book chapters on artificial intelligence and robotics.

Paul Starrett:

Great, well, that's certainly impressive. I just want to note again, for people listening, that Steve and I met at the Information Security Committee back in 1999, I think it was. And...

Steve Wu:

Yeah, one of those books we wrote together. And RSA puts on the world's biggest information security conference. Right, right. We've been collaborators ever since, and this is just another outgrowth of that. I can't believe it's been like two decades.

Paul Starrett:

Exactly. The way we met was, I was an engineer at RSA Security, and Steve was a lawyer, and I passed the bar during my tenure at RSA. We've worked together since; one of those books you mentioned was something we co-wrote on digital signatures. I should also mention that you graduated cum laude from Harvard Law, so on paper, you're about as pristine as they come.

Steve Wu:

Thank you for your kind introduction.

Paul Starrett:

Yes. So let's get right to it. We are focusing on compliance technology, and we are going to focus primarily on artificial intelligence. But first, I want to get a high-level sense of how you approach a new client when you onboard them, assuming it's a fresh client. What are the steps you go through to get them to a place of compliance?

Steve Wu:

Yeah, the first step is that I want to understand the business of the client thoroughly. What does the client do? How does it do it? What kind of data does it need? What's its data lifecycle? When I say data lifecycle, I'm talking about: what kind of data does it collect? How does it collect it? How does it use it? Does it need to share it with anybody or disclose it to anybody? How does it store that data? Is it cloud-based, or, if it's a software solution, is it on premises? What kinds of security controls does it have in place? And how does it decommission the data, that is, get rid of it, archive it, or destroy it? So I'm looking at the data lifecycle. I'm also very interested in the key risks associated with the use of the technology: what are the areas that could cause liability for the company? And I'm also concerned about the surprises that customers might encounter. In other words, are there things about this product or service that would be counterintuitive for the customer? If there are, I want to make sure customers are on notice of those aspects of the product or service. Sometimes those are obligations the customer has, for example, to keep something secure, that they may not be aware of. My clients don't have the ability to take care of everything on behalf of a customer, so there might be some area of responsibility that the customer has, and the customer needs to be informed of those responsibilities in order to use the technology in a compliant and secure fashion, respecting the privacy of the end users. So: getting to understand the business really well, understanding the data lifecycle, and then from there trying to look at where the client stands in its compliance efforts. Is it at the very beginning? There are some clients I have that have never done a compliance process before; engineers started the product or service, they're just going to market now, and they haven't even thought about these things. It ranges from that all the way to pretty sophisticated players who know they need to take care of information security, privacy, and information technology compliance, and have thought about it all along the way. The best client scenario I can think of is one in which the client works together with somebody, either with me or with another lawyer, from the very beginning of the design process. That means looking at the compliance requirements and the potential liabilities from the very beginning, while the marketing requirements document is being formulated, while they're thinking about exactly what they want to build. That's the time to talk about the features of the product or service: what do their customers want? And that might trigger some discussions about compliance problems or liability problems from the very beginning. Hopefully they continue that iterative process of looking at compliance and liability all the way through to the point where the product becomes generally available to their customers. So those are the initial steps.
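
One concrete way to operationalize the data lifecycle questions Steve lists is a per-category inventory record that an intake team fills out and revisits. The sketch below is a hypothetical illustration of that idea, not a tool either firm describes; all field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DataLifecycleRecord:
    """One row of a hypothetical data inventory built during compliance intake."""
    category: str                 # e.g., "customer email addresses"
    collected_how: str            # collection channel: web form, API, import
    purposes: list[str]           # documented uses of the data
    shared_with: list[str]        # third parties or processors receiving it
    storage: str                  # "cloud" or "on-premises", plus location
    security_controls: list[str]  # e.g., encryption at rest, access controls
    retention_days: int           # how long it is kept before decommissioning
    decommission_method: str      # "delete", "archive", or "anonymize"

record = DataLifecycleRecord(
    category="customer email addresses",
    collected_how="signup web form",
    purposes=["account login", "service notifications"],
    shared_with=["email delivery vendor"],
    storage="cloud (EU region)",
    security_controls=["TLS in transit", "AES-256 at rest"],
    retention_days=730,
    decommission_method="delete",
)
print(f"{record.category}: retained {record.retention_days} days, then {record.decommission_method}")
```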

Paul Starrett:

Yeah. Just for the listeners' edification, Steve and I have worked on client matters together. I'm a lawyer too; I was in-house counsel and Steve was my go-to outside counsel, so we've done this before. Though I'm a lawyer, I'd say I'm more of a technologist now, and we have this sort of synergy between us. But would it be fair to say, Steve, that it's a risk-based thing? That's where your advice really comes in handy, because you have to make that assessment for them: what are the risks in, you know, litigation, or a regulator sniffing around? There's no magic button. You really have to understand what's going on and iterate through that as you prepare them. Would you share your...

Steve Wu:

Risk management is a large aspect of what I do. I try to think about: what are the laws that apply? How do they apply to this particular client? What's the likelihood of enforcement? What kinds of penalties and liabilities are associated with violations of those laws? Those all go into a risk management process for handling the legal risks associated with a client's product or service.

Paul Starrett:

Would it be fair to say, and again this is my own personal feeling, that data breaches are probably high up on that list because of statutory damages? Just for people who don't know what that is: statutory damages are where a plaintiff can elect to take a set per-transaction or per-record value, rather than having to prove those damages up, which can be very difficult. So given that, and the class action possibility, data breaches would be higher up, perhaps. Is that a fair statement?
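
To make the per-record arithmetic concrete: under the CCPA, for example, statutory damages for a data breach run from $100 to $750 per consumer per incident, so even a modest breach scales into large numbers. A minimal sketch of that calculation, with a hypothetical breach size:

```python
# Hypothetical illustration of per-record statutory damages exposure.
# The CCPA allows $100-$750 per consumer per incident for data breaches.
records_exposed = 100_000
low_per_record, high_per_record = 100, 750  # USD, statutory range

print(f"Low end:  ${records_exposed * low_per_record:,}")   # Low end:  $10,000,000
print(f"High end: ${records_exposed * high_per_record:,}")  # High end: $75,000,000
```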

Steve Wu:

Yes, data breaches are a high-risk area, I think, for a number of reasons. One is that regulators look at data breaches as the ultimate bad outcome, and when legislators or the management of the regulatory agencies look at what actions the agencies take to really protect the public, they're going to look at whether those agencies went after high-profile events, and data breaches are the ultimate high-profile events at the moment. So that's one reason. The second reason is that there are a lot of costs associated with data breaches. We talked about potential damages paid in settlements or judgments to injured or affected parties, but we're also talking about the investigation costs, the defense costs, the remediation costs. Start to add it all up, and it gets to be a large number. So these are high-cost types of events.

Paul Starrett:

Then you have the consequential things, like reputation damage. Yes. Great. Well, unless there's something else you wanted to say, I think that encapsulates the way we would look at things from a fairly conventional standpoint. Moving now to the artificial intelligence, machine learning, and automation side: how are you seeing that now, and where do you think it's going to go? And just to give the audience a sense, we're going to discuss the new European Union proposal for rules on artificial intelligence. Where do you see this now, and how do you think these new rules we're seeing come out will affect things?

Steve Wu:

Yeah, I think we're heading on a roadmap towards something like the European Union's General Data Protection Regulation. The European Commission came out with a white paper in 2020 discussing a potential regulatory approach to artificial intelligence, and this proposed Artificial Intelligence Act, which came out on April 21, 2021, seems like the next step. The European Commission put out an introductory section discussing the need for the law, what it addresses, and its scope, then a summary, then, much like GDPR, a long list of recitals, meaning background information to explain what it is, then the proposed law itself, and then some impact discussion at the end of the document. The whole thing is 108 pages, so I haven't finished reading every single word of it; I'm just going through the process of digesting it now. But I see this as something akin to GDPR. It's not exactly like GDPR, of course, but it has some commonalities, and I see it leading to something that's going to be like GDPR. In the United States, we're going to have to live with it, because the law has extraterritorial effect, like the General Data Protection Regulation. And companies do business globally; they don't want one set of products and services intended for the European market and another set for other parts of the world. I mean, you could take that approach, but I think that's more difficult than trying to meet the highest common denominator, achieving compliance everywhere in the world by meeting the highest standard out there. So I see this as having a global impact down the road. But of course, as we discussed before, this is not here today; this is a proposed law. And it's going to be a regulation, meaning it will have EU-wide effect. It's not going to be a directive, which requires member states to adopt local legislation that might be inconsistent from member state to member state; it will be a regulation, directly applicable to all the member states, for uniformity purposes.

Paul Starrett:

Yes, like the next GDPR.

Steve Wu:

I mean, you could say the shorthand is that it's the next GDPR. It's not quite that, because of course it's a different subject matter, but I think the impact it's going to have on the artificial intelligence field will be similar to what we see with GDPR and privacy.

Paul Starrett:

Yeah, it seems as though it's a bit broader in what it touches on, and it defines things with greater specificity, if my reading was correct. Again, this is your wheelhouse, so I would certainly defer to your thoughts there. Some things that stand out to me as ongoing issues they feel are important are things like explainability, the ability to understand what machine learning and automation are doing. One is conceptual soundness, a term I'm borrowing from banking: does the model do what you think it should do, and does that create a risk to the company? But more so for bias and fairness, and for the legal impact on a person, what I think they call automated decision-making. Did you see something like that in the new law?

Steve Wu:

I haven't gotten to that part of it yet. But as I was reading through it, one thing that came out was something that Jacob Turner, author of a book on artificial intelligence law, talks about: laws of identification. Laws of identification refer to telling people that interact with an artificial intelligence system that they are dealing with an AI system. That's the transparency angle I've focused on in my brief read so far. The example I give is Google Duplex. This was from 2018, but are you familiar with what I'm talking about? Yes? So it's a natural language processing system that could make appointments for you and talk with the person at the other end. The danger is that the person on the other end, say the restaurant worker, or the worker in a barber shop or a beauty salon, doesn't know they're dealing with an AI system trying to make an appointment or a dinner reservation on behalf of somebody else. Laws of identification would require the AI system to explain to the person on the other end that it is an AI system, to make that disclosure. Google promised that they would be transparent about Duplex, but I just thought it was interesting that this has now moved into more of a requirement under the proposed law.
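
As a minimal sketch of what an identification requirement could look like in a system like that, consider forcing every outbound interaction to begin with a disclosure. This is a hypothetical illustration; the wording and function names are assumptions, not anything specified in the proposed Act.

```python
# Hypothetical sketch: an appointment-booking bot that always identifies
# itself as an AI system before saying anything else.
AI_DISCLOSURE = (
    "Hi, this is an automated assistant calling on behalf of a customer. "
    "I am not a human."
)

def start_call() -> list[str]:
    """Begin a call transcript with the mandatory AI disclosure."""
    return [AI_DISCLOSURE]  # identification comes first, unconditionally

transcript = start_call()
transcript.append("Could I book a haircut for Friday at 3 pm?")
print("\n".join(transcript))
```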

Paul Starrett:

Right. And I think a lot of this turns on natural language processing, actually, because of the fuzziness that comes with, you know, how do you interpret human language? There are some extra issues there. One of the other major areas is the high-risk category of artificial intelligence. That's where the proposal seems to focus most of its requirements.

Steve Wu:

Well, I think a fundamental part of this AI Act, carried over from the 2020 white paper, is dividing artificial intelligence applications into various categories. One would be prohibited applications, like manipulation or deception. Then there are the high-risk types of applications, where we're talking about applications that would affect health, safety, or the fundamental rights of individuals. And then there are low- or minimal-risk types of applications. The Act sets out a framework for distinguishing between those types of AI applications, provides examples of them, and then talks about what is required for each. For high-risk applications, you need to have some kind of before-the-fact, pre-market assessment and testing of the application before it goes on the market, and then there would be potential post-sale activities, like recalls, if it turns out that something is dangerous or violative of fundamental rights. Contrast that with the United States: when we put out products here, often they are low-risk enough, I'm talking about non-medical devices, that there is arguably robust post-sale regulation, where if something turns out to be defective in hindsight, you could be required to recall your product for the public safety, and then fix it. But that doesn't prevent the initial harms that triggered the recall. There could be some kind of harm to the public before people figure it out, and for the initial victims of that harm, it's too late.
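
The tiered structure Steve describes maps naturally onto a tier-to-obligations lookup. The sketch below is a simplified, hypothetical rendering of that framework; the tier names follow the proposal, but the obligation lists are paraphrases from this discussion, not the Act's text.

```python
# Simplified, hypothetical mapping of the proposed EU AI Act's risk tiers
# to example obligations, paraphrased from the discussion above.
RISK_TIERS: dict[str, dict[str, list[str]]] = {
    "prohibited": {
        "examples": ["manipulative or deceptive systems"],
        "obligations": ["may not be placed on the EU market at all"],
    },
    "high": {
        "examples": ["systems affecting health, safety, or fundamental rights"],
        "obligations": [
            "pre-market assessment and testing",
            "post-market monitoring, including recalls if dangers emerge",
        ],
    },
    "low_or_minimal": {
        "examples": ["most other applications"],
        "obligations": ["light-touch or voluntary measures"],
    },
}

def obligations_for(tier: str) -> list[str]:
    """Return the example obligations recorded for a risk tier."""
    return RISK_TIERS[tier]["obligations"]

print(obligations_for("high"))
```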

Paul Starrett:

You can't unpunch someone.

Steve Wu:

Right. But for some high-risk medical devices, Class III medical devices, you need to have approval from the FDA, basically, before you can sell them; they have to be shown safe and effective. And that's the difference in approach between Europe, as a general matter, and the United States: in Europe, the scope of pre-market approval is much broader than it is in the United States.

Paul Starrett:

Right, got it. Would it be fair to say then, so we can maybe close out this particular topic, that the proposed regulation really gives broader coverage of the definitions and the buckets of different types of AI, and guidance on how the European Union will approach those? I know I've been hoping for that, for more clarity, and I think that's kind of the spirit of this.

Steve Wu:

Yeah, I think so. But the one thing that strikes me is that there are probably going to be gray areas between high risk and low risk; we won't know the exact boundaries. And there's a lot of commonality in the approach you see in the Artificial Intelligence Act, which is that there's a lot of malleable language in there. I am just hoping that down the road, when something like this is actually applicable law, when it goes into effect, there's going to be a lot of regulatory leeway, I'll say: supervisory authorities noticing behavior they don't like, but that may not be obviously prohibited from just reading the law, will try to work with companies in the field to promote compliance through informal means, rather than enforcement actions and fines at first. Because I think it's going to take a lot to flesh out what is allowed and what is not allowed in detail. It's like GDPR: there's a lot of general language in there, so making precise judgments about a particular client's application, one that does this but not that, trying to fit the general language of GDPR onto those particular activities, is sometimes very challenging. I think the same thing is going to happen with the Artificial Intelligence Act.

Paul Starrett:

I would agree, because there are a lot of adjectives in there; you ask one question and it begets two more, you give one answer and it begets two questions. From what I've read, and again, I've not pored over the entirety of it, but I have read some good articles and summaries, there are a lot of new definitions to grapple with. And I would agree that they're not going to be draconian and heavy-handed at the outset, as we cut our teeth on this as a world.

Steve Wu:

It's also the case that it's going to take a couple of years before this proposed Artificial Intelligence Act becomes an actual regulation. And if and when it does, there will probably be a couple of years of implementation period before it goes into effect. Then, after it goes into effect, there's probably going to be a period of time when we'll have some leeway in terms of regulatory enforcement, just because it's so new, people recognize it takes time to adjust, and the language is somewhat malleable. So as a result, I think this is a long-term process.

Paul Starrett:

It is. But I do think it has some immediate effect, in that we know this is where things are going; it seems imminent, it's not if but when. That can help people gear up for it, maybe to re-emphasize what's already been done regarding GDPR and other similar privacy regulations, but also to say, hey, this is happening, and we can use it to inspire us to move in the direction regulators are expecting.

Steve Wu:

Well, if you talk about my client base, or other practitioners' client bases, I think there are going to be two general approaches. One will be a proactive approach that looks at this and says: this is not law, but this is where the law is heading. So while we understand the ultimate final language may be different from what we see in this document, putting into play some of the principles in here will probably save us money over the long run, because we can start down the process of making our systems better, safer, more ethical, and more transparent. All of those general things will probably be in the final law anyway, and it would reduce legal liability in the meantime. The other approach, which I find most clients actually take, is a wait-and-see approach, which can sometimes be a reason for inaction; it shades into an inertia approach of, well, we don't know what the final law is, so we'll just wait to see. Then once the final law comes out, they say, well, we've got two years to comply, so we'll worry about it next year. And then pretty soon there's a sudden realization that it's coming in a few months, or even worse, they say, oh, I guess the law just went into effect, I'd better worry about it now. They backload all of their compliance processes, which in my mind is worse, because it requires a lot more energy, time, and effort, all in a concentrated period, to get into compliance. All of that stress could have been avoided if they'd taken it at a more measured pace all along the way. But I realize there are some companies that are going to take a wait-and-see approach.

Paul Starrett:

Understood. From the technical side, which is where my firm specializes, it's easier to bake in digital controls early rather than have to go back, reverse engineer everything, rip it all up, and start over. One example is the software development lifecycle gaffes we saw with SolarWinds: people who had been careful with the security of their software development lifecycle did well, I think, rather than having to go back and rip up their entire development process. Okay, so, that's good. The last thing I was hoping we could touch on, one thing I think is a truism and I wanted your thoughts on, is the number of different skill sets that have to come together at the same time in order to be holistically sound here. I know that's a perfect-world type of thing; you do your best, and I don't think any regulator really expects you to be perfect, doing the best you can is usually decent. But that's an area where my firm is really set up to help: bringing together, for example, the technologists in one place, and then having the conversation with the lawyer, as you and I have done in the past. Any thoughts on where you see that going and the challenges there?

Steve Wu:

I think where you're heading is: this is a team effort. This is not just one silo of professionals handling the entire waterfront of problems that might crop up, compliance challenges, for example. You need the management of the client company, you need technology advisors, you need legal advisors, and you need them to work together in a coordinated fashion. In the past, there have been times when lawyers think they know everything and start dictating what goes on; they let the legal compliance tail wag the business dog, things like that. On the other hand, there are some technologists who try to put technology into a company, and you find out later there are disconnects between the technology and what the law requires. So I think it takes a team working together on an ongoing basis to have a true compliance process.

Paul Starrett:

Right. I think we can probably call it a day on this topic, but I would say it comes down to speaking the same language and keeping it simple, so everyone understands. There's an area of explainability in artificial intelligence and machine learning that I've been speaking loudly about lately for that very reason. If everyone knows what's going on, then they know their component part is being accommodated in the greater whole, and when it's simple, it's hard for something unknown to go wrong. It also ends up supporting compliance, because all these laws require transparency and explainability.

Steve Wu:

Automation can make the compliance process simpler, too. If you can leverage technology to do more with fewer people, that's going to make your compliance process more efficient.

Paul Starrett:

Yes. Those are the two main reasons to automate, whether it includes machine learning or not: to reduce costs, and to reduce, well, not remove, but reduce, human error. But you have to be careful, because in practice automating a mistake makes it permanent and repeated, so it takes a careful process. But you're absolutely right: automation creates a challenge, but it's also a solution in its own right. All right. Well, listen, I think we found a good place to end. You gave a good sense of what your firm does and what you do. If there's anything you want to add, I can lead you to that here if you'd like.

Steve Wu:

Yes, just to say that if anybody needs assistance in this area, artificial intelligence law, or compliance with data protection requirements in information technology, please contact me. I can be reached through my law firm, Silicon Valley Law Group; my profile and contact information are on our website, www.svlg.com.

Paul Starrett:

Great, great. I know you also do other kinds of law, but you're at the pinnacle again...

Steve Wu:

...of AI law and privacy. And I have an AI- and robotics-specific website, airoboticslaw.com, so my contact information is on there as well.

Paul Starrett:

Great. Well, if you've got a site like that, you've got to be at the top of the game. A little bit about PrivacyLabs before we close, just to help pay the bills. We do the compliance technology side. I'm an attorney and also a technologist and data scientist, more of the latter recently, and I work with people like Steve to make sure we hit the right sweet spot for your needs, following a risk-based analysis. We really have four basic service areas: unify, using tools to bring everything into one place; automation, which includes machine learning in particular; cybersecurity and information security standards; and finally, auditing. We're focusing a lot on that last one now, especially in financial crime and anti-money laundering. So with that said, I will bid you adieu. Thank you, everyone, for listening, and thank you so much, Steve, for your time. We will likely have a follow-on podcast with Steve, and I encourage everyone to watch for it.

Steve Wu:

Thank you all for having me today.

Paul Starrett:

All right, thanks.