PrivacyLabs Compliance Technology Podcast

AI Governance and Privacy Law, US and EU Perspectives with Yoann Le Bihan

August 17, 2023 Paul Starrett

We explore AI governance and privacy law from the perspective of EU and US privacy laws. We touch on "bossware" (use of surveillance software by employers) and where we are with regard to this subject area!

Paul Starrett: Okay. Hello, and welcome to another podcast for PrivacyLabs. We are staying with our AI governance theme here with our latest podcast. And I am delighted to have Yoann Le Bihan, correct me when I get finished here with your name. I apologize just to call you Yoann. Yoann is a privacy lawyer who’s licensed in the state of California, happens to practice from Europe and I’ll let you expand on that in a minute. And our goal here today is to discuss three basic ideas here. 

First, what challenges Yoann is seeing in the privacy area with regard to machine learning and artificial intelligence. Then we’re going to take a quick look at what’s called Bossware, which is an area of employer surveillance. And there are questions about how AI may be used in that process, and how that may become a challenge in the legal profession. And finally, we’re just going to discuss, basically, where is AI governance? Where are Yoann and his colleagues seeing it? Does anyone know what it means? Are they fully immersed in it? We don’t know that, and that’s why we’re going to talk to Yoann about it. With all that said, Yoann, please tell us about yourself and your firm and anything else we should know at the outset of this podcast. 

Yoann Le Bihan: Wow. Thank you, Paul. Thank you for having me first and yeah, definitely the pronunciation of my name was perfect. Yoann Le Bihan, that’s perfect. 

Paul Starrett: Okay. 

Yoann Le Bihan: So, I am a lawyer who happened to be an engineer, or I’m not sure if I’m an engineer who happens to be a lawyer. It’s a bit of a mix. I started my career when I was 18. I started my first company and I wanted to be in IT. I was really passionate about technology in general. And after a few years working for small and medium businesses, I switched to full-time consulting, and I was a consultant mostly for industries like telecommunications and the financial sector. And somewhere along the way I wanted to do something else on top of that, and I just started studying law out of the blue. And I realized, after a couple of years studying law while still being a consultant in IT, that there was a need for people who could speak to, on the one hand, the engineers and, on the other hand, the lawyers, because they hardly talk to each other. And if they do, they hardly understand each other. 

And so I passed the bar in California in 2019 and then I started my firm. And now I specialize in technology law. So, some people say that tech is more of an industry than a practice area, which is probably true to some extent. But I do privacy as a significant part of my job and other aspects of all the challenges that tech companies may have in their day to day business. That’s what I do. And I help small businesses when they’re very early stage startups to launch their business and follow a compliance path when it’s cheaper to do it, usually at the beginning. But then I also help larger companies through partnerships with other firms. Sometimes I come as kind of an expert, a subject matter expert where I bring some understanding of the technical things for lawyers, and I try to translate legalese for engineers, and they usually like having someone who is able to understand their own challenges. So, that’s what I do. So, privacy, yes, is a very significant part of it. 

I should mention as well that I’m licensed in California and the District of Columbia, but I’m also based in Luxembourg. So, I have very strong connections with Europe. I work with firms in Europe as well, and a lot of my work involves both privacy frameworks from the US, of course, and from Europe, with the GDPR and things like that.

Paul Starrett: Got it. Thank you. Yes, and I think what I’d like to touch on first, after hearing what you said, is that it seems as though there are privacy issues in California and the other states where AI governance may or may not be as much of an issue. But generally speaking, I think it would be good to focus on your European connections and your practice there. So, understanding machine learning, and that there are other podcasts where we talk about how data is really the lifeblood of machine learning and artificial intelligence. There are impediments, though, to getting data, not only getting it at all, but getting it in time to make it worthwhile. What would you say, legally, from the standpoint of privacy law, European or otherwise, impedes that process, and are there any solutions you might think about?

Yoann Le Bihan: It’s a very tricky question, first, and it’s a very interesting one as well. But this is typically what lawyers love, the tricky question that is the most interesting one. But technically, there is a huge cultural gap between the way privacy, in general, works in the US and in Europe. And I think with AI, we have a typical example of where this cultural gap is very pervasive in the day to day of businesses active in the field. So, when you have an AI company, I think it’s not completely by chance that companies active in AI are mostly based in the US. 

The first reason is, probably, that you would hardly be able to develop, in Europe, a company that has the ability to process such a vast amount of data and collect all the data that is required to properly train an AI model. So, I know that most likely some of my European friends will hate me for saying that, but when you need a business, when you need a framework that allows a high-tech business to develop disruptive technology that is data intensive, you can hardly do it from Europe. You really need the business pragmatism that we see in the US. 

So, typically here, I think that a company trying to develop a new AI model in Europe would face a lot of issues, and even the US companies doing it are facing a lot of issues with their European clients. We have seen, for instance, OpenAI with ChatGPT being banned in Italy a few months ago. And I think it’s a typical issue that we have. But I mean, I see both sides. From a business perspective, I think it’s a pity that we don’t have the framework in Europe to do it as efficiently as in the US. But on the other hand, and we’ll probably talk about it a little bit more when we get to the Bossware aspects, people individually are also better protected in the European model. 

Paul Starrett: Got it. That’s very interesting. And something, to be honest, I may have known subconsciously: the idea that AI companies are even more eager for data than your standard use of data, and that data is governed and limited in use because it contains private data. I don’t know if we need to get into the definitions of private data in various jurisdictions; I know that changes. But that is a very interesting thing, and then there is the difference in the sentiment around privacy rights, where in Europe it’s considered a basic human right and in the US it’s something other than that. So, with that said, what are the primary barriers to, one, moving data and, two, using it, as you see it in the States and in Europe? And maybe some legal approaches to getting around that limitation, if that makes sense, or if you need me to clarify that. 

Yoann Le Bihan: No, sure. I think in terms of moving the data, it’s where it’s probably the most visible at the moment. You probably know that personal data as defined under the GDPR in Europe could hardly be transferred while remaining compliant with the GDPR until this summer. And it had been impossible to transfer data from the EU to the US following the second challenge of the EU-US data transfer scheme that was in place. So, there was Safe Harbor and then there was the Privacy Shield, and each time it was invalidated by the European Court of Justice, following a challenge brought, in both cases, by Max Schrems, who is a famous Austrian privacy lawyer. And in both cases, the incompatibility, at the root really, of the privacy values of Europe and the US was the motivation for invalidating the framework. So, basically, companies could no longer easily transfer data from the EU to the US. 

Now, we have a new framework called the DPF, the EU-US Data Privacy Framework. And we are just waiting to see if the DPF stands the next challenge. So, this is really a European issue, but it’s something that has an impact on the US as well, because in the US we rarely have these problems of data transfers. Basically, US companies are very free to transfer data however they want. I mean, not for any purpose, but basically you have a lot of leeway. But the fact that you have these limitations in the EU, and that there is a significant market in the EU, and that most online businesses are very active in the European market, means that you have to comply as a business with the European framework as well. And that is a headache sometimes.

Paul Starrett: Yes, I think you made a good point there, Yoann. I think it might be good to point out that there are, in my mind, really two separate though highly interrelated and overlapping issues going on: data protection and data privacy. Data protection statutes can sometimes restrict the movement of data beyond the border of a sovereign government or a country. And then there’s data privacy, which says what you’re allowed to use, for how long, for what purpose, and those types of things. So, I think that might be a good thing to point out. So, for example, it’s the GDPR, the General Data Protection Regulation, not privacy regulation. And protection, of course, includes cybersecurity, keeping data secure, making sure it has integrity, and whatever else the GDPR says. But I think those are two separate legal issues. What are your thoughts on that? 

Yoann Le Bihan: Well, I’m not really… I think the difference is there but I’m not really sure that assigning the GDPR to the data protection aspect of it is totally accurate. Because technically, when you see how it is implemented on both sides, there is not much difference in what Europeans call data protection and what Americans call privacy laws. Data protection is sometimes maybe a little bit more narrow in the mindset of people making the laws, but very often they really just use it as a synonym for basically privacy. Data protection and the GDPR in Europe is really about protecting privacy. And privacy laws typically like the CCPA or CPRA are doing more or less the same thing, but with a different spirit. But the difference in the spirit of law is basically the difference in the way lawmakers want to approach the issue of privacy and privacy laws on both sides. So, I don’t think there is really this… 

The main thing, however, is that privacy in Europe is also about transferring data, where in the US it’s not very much about transfer of data. We have the sale of personal information that is within the scope of the CCPA. And we say sale, but it’s basically the same; it’s transfer of data. But we tend to put fewer barriers on transfer of data in the US than in Europe, and this is essentially where I see the difference, actually. Then you have the laws that are about privacy, purely the fundamental concepts of privacy. And these ones are… Well, in California we have privacy in the Constitution, which is something that is quite unusual in the US. In Europe it is more pervasive everywhere, so you have privacy as a fundamental right. So, this is a European core value and this is probably where the main difference is: privacy is something that is essential in Europe and not always essential in the US from a legal point of view. And California is a bit of, not a niche, but an exception to the rule. 

Paul Starrett: Yes. Let me just explain where I was going with that. And I know that before this we discussed this and we’re going to flesh it out. Let’s just use a very simple example. Let’s say that a company wants to build a model that tells them the buying habits of their international client base. And they need data from Europe and from Brazil and from the United States. So, there’s the idea that they can’t move the data into a central location, which is typically going to be a US-based company, at least for our purposes. The inability to move the data really hinders their ability to build that model and make use of it, because automation and machine learning are there to advance the goals of the company, the commercial value. So, the inability to share data, if you will, across borders is a big issue. 

Now, let’s say that you could have some research done in those jurisdictions to build a model and then transfer the model later; that is one way of doing it. But this, I hear, has been a real problem. Then you also have the issue of the typical controls around how long you can use the data, how long you should store it, using it for the purpose you held out, and so forth. But I think that’s where I was going with that. And I think one of the things I would offer as a solution there is synthetic data; that is something you can do. But do you see any solutions that could help a machine learning based effort in an enterprise, or a company that does this for a living, to get around movement of data? Are there mechanisms there that you can share? 

Yoann Le Bihan: Yeah, I totally get it now. Yeah, it’s actually mostly… Well, it’s mostly a problem of getting around… Usually it’s getting around the definition of personal data, or personal information, depending on which side you’re considering it from. If you can rely, as you mentioned, on synthetic data, and you can rely on data that is not personal data but falls within the definition of anonymous data under the GDPR, then you can get rid entirely of all the requirements of the GDPR. And that’s the perfect… I mean, for developing models, if you can go to that level where you have genuinely anonymous data, then you’re safe. But it’s very demanding; it’s a very high bar to get to that level of anonymous data. And this is something that is often a shock for companies that are processing personal data, and this is probably something that is exceptionally big for the AI industry. This is where you’re probably more aware than I am of all the intricacies of this AI aspect of things. 

But basically, if you can cross different sources of information and re-identify data that you originally thought was anonymous, then the very first set of data is no longer anonymous, because it’s capable of being reverted, re-associated with a data subject, a natural person. And this is where AI, which is exceptionally powerful and has this ability to cross information and collect, gather and process a huge amount of data in such a short amount of time, making connections that would not have been possible before, gives us more challenges than any other business model before. Because AI has this capacity to process large amounts of data in a very smart way, and to potentially re-identify or gather information that is personal and reuse it in different settings where it has an impact on people. 
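
To make that re-identification risk concrete, here is a minimal sketch in Python (toy data and hypothetical column names, not any real dataset) of how two datasets that each look anonymous can be linked on shared quasi-identifiers:

    # Re-identification sketch: two datasets that each look harmless can be
    # joined on shared quasi-identifiers (ZIP code, birth year, gender).
    import pandas as pd

    # "Anonymous" dataset: no names, only quasi-identifiers.
    health = pd.DataFrame({
        "zip_code": ["94107", "94107", "10001"],
        "birth_year": [1985, 1990, 1985],
        "gender": ["F", "M", "F"],
        "diagnosis": ["diabetes", "asthma", "hypertension"],
    })

    # A public or purchased dataset that does contain names.
    voters = pd.DataFrame({
        "name": ["Alice Doe", "Bob Roe", "Carol Poe"],
        "zip_code": ["94107", "94107", "10001"],
        "birth_year": [1985, 1990, 1985],
        "gender": ["F", "M", "F"],
    })

    # Joining on the quasi-identifiers re-associates the "anonymous" records
    # with named individuals, so the first dataset was never truly anonymous
    # in the GDPR sense.
    reidentified = health.merge(voters, on=["zip_code", "birth_year", "gender"])
    print(reidentified[["name", "diagnosis"]])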

Paul Starrett: Yes, and you do bring up a very good point: that the data can be modified, that is to say, to remove the private information, even to avoid what is called an inference attack, where you’re able to take the existing model and, for lack of a better way of putting it, by trial and error, come up with the actual personal information of an individual based on other information, by putting it together and inferring that it must be this person or these several people. But I think you bring up a great point. It still leaves a little bit of a problem in place. And so there’s an area called privacy-enhancing technologies. We don’t have time to go into it, but this includes homomorphic encryption, anonymization, pseudonymization, redaction, and synthetic data, one of my favorites personally. So, if you could do that, the challenge becomes: the more that you remove private data, the worse the model accuracy, the model performance, is going to be. 

So, in order to make that decision, the data all has to be in one place at one time before you apply those privacy technologies. But I don’t think we’re here to solve that problem; we’re just offering some ideas. Here a technology solution comes to the rescue of a legal challenge. But I’m sure there is a set of other legal exceptions for data that is either partially anonymized or where the PII is somehow removed or lessened a bit. There are still legal directions that one can take to move data across borders. There are specific ways in which that can be done, exceptions for data. I don’t know. So, [inaudible] want to touch on it; it sounds like a fairly deep rabbit hole, but any quick thoughts there? 
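
As a quick illustration of one of the privacy-enhancing technologies Paul lists, here is a minimal sketch in Python (hypothetical field names and a placeholder key) of pseudonymizing a direct identifier with a keyed hash before data is pooled for model training; note that pseudonymized data is still personal data under the GDPR, so this reduces risk rather than anonymizing:

    # Pseudonymization sketch: replace a direct identifier with a stable keyed
    # pseudonym so records can still be linked for training without exposing
    # the raw identifier. The key below is a placeholder.
    import hmac
    import hashlib

    SECRET_KEY = b"store-and-rotate-this-in-a-vault"

    def pseudonymize(value: str) -> str:
        """Return a stable keyed pseudonym for a direct identifier."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"email": "alice@example.com", "country": "FR", "purchases": 12}
    training_record = {**record, "email": pseudonymize(record["email"])}
    print(training_record)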

Yoann Le Bihan: You mean about categories of data that could be transferred from one place to another without falling within the scope of the requirements of different data protection laws right? 

Paul Starrett: Yes. In other words, are there cases, exceptions, and it’s a quick thought. This is probably something that someone like you should be brought in to take care of, to get into the specifics of the laws, what the purpose is, and all that. But there must be some rough way in which a company might be able to get around the restrictions on moving data across borders, even if the data contains all or most of its private information, if at all. 

Yoann Le Bihan: Yeah. Well, when you transfer data across borders, the issue is usually, again, coming from jurisdictions where you have a framework like the GDPR, or one usually inspired by the EU GDPR. And there are ways. The way it has been done, historically, over the last few years, since the Schrems II case, the one that invalidated the Privacy Shield, is that the option that was relied on was the SCCs, the Standard Contractual Clauses, which are basically a set of contractual clauses issued by the European Commission, saying if you sign those contractual clauses and this document and you comply with it, then you can say that you are more or less adequate. I mean, it’s not more or less, but it’s a substitute, an alternative to the adequacy decision that was missing. So, the CJEU said you could rely on the SCCs. 

The thing is, the SCCs also require safeguards, what they call organizational and technical measures, that must be in place to make sure that the data is processed in a way that is compliant with the GDPR. So, what happens here, typically, when you have certain types of data, and many companies have done this, most companies transferring data from the EU to the US typically did this, is that you may not have the right level of safeguards in place, and very often you don’t. What we have seen very often is that companies prefer to take the risk and keep the business running rather than actually comply, the risk envisioned by the CJEU being that intelligence services in the US may have access to the information. And obviously safeguarding against access by intelligence services is very difficult; it’s a very high bar. So, in most cases I’ve seen over the last few years, the transfers had the appearance of compliance through the SCCs being signed and the safeguards documented. But if you get into the details, then you know that there is a risk of being challenged by a data protection authority in a jurisdiction. 

So, this is essentially where I believe the European approach to data protection is not really working, because it’s based on the assumption that companies can stop transferring data, or, I mean, that to transfer data they should comply with things that they are not able to comply with. And in the end, either companies will not comply with the law, which is what I’ve seen over the last couple of years, or they will stop working with the market, which is an issue. So, it means European companies not being able to work with US companies properly, or US companies, and we have seen many US newspaper websites not being accessible from the EU, saying, okay, we are not working with the EU because it’s an issue in terms of privacy. 

Paul Starrett: Got it. So, there is yet again a risk analysis here of the probability of getting caught, and if you do get caught, what the downside is, which is an interesting dynamic, or facet. So, there are ways of doing this. And I think that our listeners can review that on their own. But suffice it to say that there are mechanisms in place that would allow you to transfer private data, irrespective of a… Well, I suppose there’s a certain point, because there’s a thing called a privacy budget in AI, where you say, well, the more that I have to accommodate private data, the less performance my models have, and the less valuable it becomes. But if the risk of the loss of private data is low, there’s sort of a balance there. 
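
To put a rough number on that balance, here is a minimal sketch in Python (toy values, not any particular differential privacy library) of the privacy-budget tradeoff: the smaller the budget, the more noise is added to a released statistic, and the less useful the result becomes:

    # Privacy-budget sketch: release an average with Laplace noise calibrated
    # to a budget epsilon. Smaller epsilon means stronger privacy and a
    # noisier, less accurate released value.
    import numpy as np

    rng = np.random.default_rng(0)
    true_average_spend = 87.40  # the statistic we would like to release

    def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
        """Add Laplace noise scaled to sensitivity / epsilon."""
        return true_value + rng.laplace(0.0, sensitivity / epsilon)

    for epsilon in (0.1, 1.0, 10.0):
        noisy = laplace_release(true_average_spend, sensitivity=1.0, epsilon=epsilon)
        print(f"epsilon={epsilon:>4}: released value {noisy:.2f}")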

Think of a scale, where the more you move one side up, the more you move the other one down. So, it becomes a matter of looking at the specifics of what’s going on, the data, the purpose and so forth, to find that balance. And that is always a bespoke, contextually based decision, which someone like you could help with, actually. That’s one reason that they could bring you in. But great. So, on the contractual versions, I just want to go back to a few acronyms. The Schrems II, that’s S-H-R-E-M-S-2, I think it was.

Yoann Le Bihan: Yeah, it’s S-C-H-R-E-M-S II. Yeah, it’s from the name of the lawyer who introduced the challenges, Schrems I and Schrems II, the second challenge against the EU-US transfer schemes that were in place. So, Max Schrems is probably one of the most famous lawyers in privacy in Europe for having managed to invalidate two European schemes that had been approved at European Commission level. And he managed to have them invalidated by the CJEU. So, the question that everyone in the privacy field has in mind at the moment is, is there going to be a Schrems III coming with the EU-US DPF now? 

Paul Starrett: Got it. Got it. Good enough. The last was CJU. That was a term I heard and I just wanted to make sure that our audience knows what that is. Unless I misheard something. 

Yoann Le Bihan: Sorry, sorry. 

Paul Starrett: I thought I heard you say CJU. 

Yoann Le Bihan: CJEU, yeah. Sorry, the Court of Justice of the European Union. So, the CJEU is also known as the ECJ. I mean, the Court of Justice of the European Union is how we should say it, but many people still say ECJ, for European Court of Justice. But yeah. 

Paul Starrett: Fine. Yeah, and I’m sure it’s good for our listeners to maybe use that term for now, because that’s the nomenclature. Okay, great. I think that we’ve kind of touched on that, given the time that we have for this podcast. So, let’s look at Bossware briefly, and I’m guessing that this might be more appropriate under California law as far as how we approach it in the podcast. Bossware, for those who don’t know, is a new term that is being used around AI, and what it is, basically, is artificial intelligence and machine learning used to help with surveillance of the behaviors of an enterprise’s or company’s employees, which they have a right to do. 

It’s their enterprise, it’s their IT infrastructure, the computers the employees are using belong to the company, and typically the laws in the United States are fairly flexible there. But the AI governance and legal issues are still present. And I know one is bias. So basically, what are you seeing, Yoann? What are your thoughts on this area, the requirements of a company to be able to look for threatening behavior or things that are a problem, which they certainly have a right to do, but which clashes with some of the AI-based issues of inherent bias and so on? Do you have any thoughts that you’d like to share? 

Yoann Le Bihan: Well, it’s always the same issue we have with delegating impactful, really important decision making, or a kind of decision-making process, to a machine. And here AI, in this context, has the ability to process a lot of information and apply some algorithm that will analyze the way the person behaves on the computer. But the risk we see is that there is, we know, a lot of bias in the way AI algorithms are designed, and flaws that come from there, and the risk of having false positives or issues with the way the system will flag, or may flag, some of the behaviors of employees. 

So, we have seen issues in the past like AI mistakenly identifying people. We have seen that a lot with law enforcement arresting the wrong person, with a very high level of assurance that the person was the one identified because the AI said, oh, that’s the person. We have seen cases in bar exam administration where we know that online proctoring systems that were AI-powered were not treating Black people and white people in the same way. And we have seen a lot of issues where the AI was biased. And I think that typically with Bossware we probably have the same high risk, if it is not properly implemented, of course, of an issue of bias. So, is there a risk of, I don’t know, the way that I use my computer being flagged as that of a low performer because maybe I prefer to work with my keyboard rather than my mouse, or the other way around? So, yeah, these are small details that, put together, may have an impact on the way it works for the employees. 

Now, in terms of how Bossware works from a legal point of view in the US, it’s, as you mentioned, essentially the underlying idea that the equipment, most of the time, is the employer’s equipment. And so the employer is more or less free to use any kind of software that will help protect the environment and assess the performance of employees during working hours. And there are limitations, but they are quite low in terms of what the employer can do, compared to what you can see in other jurisdictions. So, if we compare the EU GDPR, EU data protection, and the privacy philosophy in general, as before, to the US, it is indeed far more open for employers to get into the privacy sphere of employees in the US than it typically is in Europe. 
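
One concrete way to surface the bias risk Yoann describes is to audit how often a monitoring model flags employees in different groups; here is a minimal sketch in Python (toy data and hypothetical group labels) of that kind of check:

    # Bias-audit sketch for a "bossware"-style flagging model: compare the
    # flag rate and the false positive rate across employee groups.
    import pandas as pd

    audit = pd.DataFrame({
        "group":        ["A", "A", "A", "B", "B", "B"],
        "flagged":      [1,   0,   0,   1,   1,   0],   # model output
        "actual_issue": [1,   0,   0,   1,   0,   0],   # ground truth
    })

    for group, rows in audit.groupby("group"):
        flag_rate = rows["flagged"].mean()
        negatives = rows[rows["actual_issue"] == 0]
        false_positive_rate = negatives["flagged"].mean()
        print(f"group {group}: flag rate {flag_rate:.2f}, "
              f"false positive rate {false_positive_rate:.2f}")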

Paul Starrett: Interesting. And it’s good to hear that perspective from someone like you who is immersed in this currently in a significant way. I don’t know if I ever appreciated that aspect of the European sentiment towards this. I also think that, again, you can run afoul of the strong laws against discrimination and bias that exist in the United States. Whether that’s for gender or ethnicity or what have you, I think that’s always there and is going to be. However free the employers are, the artificial intelligence might cause them to run afoul of those laws that are there and could be violated. And I think that they would want to make sure that their model development takes this into account. There are some very high risks there; you know, the way that the US, and California in particular, looks at this is very employee-centric and protective. Good. So, I think we’re kind of running up against time here. We could go on for a long time. 

But I think the last topic we wanted to touch on is AI governance in general: where is it? And just to define the term AI governance, it’s the broad idea that, while we want AI to provide value to us as people, as businesses, and as governments, what have you, you don’t want it to be biased against the wrong people. You don’t want people to use AI to attack your systems. You do not want your AI models to do things that you don’t know they’re doing. And so, Yoann, from the US and/or the European Union perspective, where are we in the legal and technical world on that front? Do people know what it is, or is it something that is very much in the minds of our profession? Where are you seeing its current status? 

Yoann Le Bihan: I think that everyone in privacy is, and has been for the last few months, talking about the impact of AI on privacy. But it’s very important here to have the right skills, and AI is a different world from the one we have been evolving in over the last decades in the privacy profession, and this is where we need the experts. And I think I could actually turn the question back to you, Paul, because it’s precisely experts in AI governance that we will need. We know that we have been struggling, and are increasingly struggling, to find the right privacy professionals on the market at the moment. But we now have this additional issue of finding the right AI professionals. This is probably going to be the next issue: finding the right AI experts able to understand and guide companies when it comes to AI governance. 

I think that everyone, or most people, essentially woke up last year about AI, and the fact that AI is a fantastically powerful tool but has to be used in the right way. And for that, you need to have the right guidelines and the right policies in place. And we have seen the privacy impact of AI in very trivial ways. Like, I have a friend who had a fake bio available on OpenAI’s ChatGPT: when you typed his name, asking who that person is, you got a completely wrong bio that even linked to a fake URL of a newspaper. I think it was the… No, I will not mention the newspaper because I’m not even sure which one it was, but it was one of the most famous newspapers in the US. And we have recently seen the case of a lawyer who cited cases that had been completely invented by ChatGPT, because he did not check the accuracy of the information gathered from ChatGPT. 

So, these are typical examples of where you need to have the right governance in place to know what you can do with AI and how you can get that power playing in your favor and not against you. And this is where AI governance is a very important thing. And everyone, I think, is waking up now and realizing that it’s an essential thing to deal with in the next few years, probably. Also, at the level of the lawmakers, we get this idea that we have privacy frameworks in place that can be applied to AI. But more and more we hear about creating laws that are more specific to AI. A typical example is the AI Act in Europe, which is on track to be the first all-encompassing AI law. 

Paul Starrett: Yeah. I’m sorry, were you finished? I didn’t want to interrupt you. 

Yoann Le Bihan: Yeah. No, no, please go ahead. Go ahead. 

Paul Starrett: Yeah. No, I think, yeah, there is a lack of actual regulation at this point, at the recording of this podcast in August of 2023. We do have the NIST AI guidelines and so forth. And we don’t have to get into that, but I think you’re right that what you just mentioned is going to be the first law. And that’s going to really make people sit up. Let me just put you on the spot: from one to 10, 10 being we’re all on board, we’re all 100% prepared and knowledgeable, zero being we’re just being blindsided by this. Where would you put the number, just in general?

Yoann Le Bihan: You mean in general for the whole privacy industry or in general in the world? 

Paul Starrett: Both. 

Yoann Le Bihan: Well, I think the world in general would be very close to zero. Except for a few very specific industries: either very high-tech companies that were aware of what was happening in their own labs, or maybe some military labs that were probably working on the threat that it may represent in the future. I think, except for those very specific cases, most people were not aware of what was happening. I know a couple of privacy lawyers who were a little bit aware that something was going to happen, even though they are not AI experts themselves, but these are really the exceptions, and I suspect that they may have connections with the labs I mentioned before. 

But then, for the privacy profession, I think, overall, people were probably a little bit more aware that this was coming, because we had some previews of what was going to happen with AI, and the examples I mentioned before, like what happened with facial recognition systems applied in law enforcement. These cases that were a little bit more publicized were warnings for the privacy profession that something would be relevant to follow in the coming years. But I think the speed at which it became a real thing was really faster than we expected. So, I would say maybe for the overall population, probably zero to one, and for the privacy profession, well, let’s say four. 

Paul Starrett: Okay, got it. That’s a good way to kind of quantify that. I would say we did see a touch of this with GDPR Article 22, correct me if I’m wrong, where they talk about automated decision making, or automation of human decision making, and so forth. We don’t have to get into that, but we did see specks of this. But I did want to close out with a conclusion I think we can come to, or that’s important, which is that, as you do what you do, understanding IT and technology and the law, the technical capability sits inside the same head as the person who’s a lawyer. 

And that’s where I think you and I really kind of hit it off: my background is data science. I’ve been in this for 10 years, mostly in natural language processing, which is sort of the LLM bucket. And I have a Master’s degree in data science. Not to toot my own horn, but I think the way to address this in a responsible way is to find somebody, whoever, whether it’s me or you, who can bridge that gap, because that’s really where the solution lies. So, I’m guessing you agree with that, but maybe you can flesh it out and then we can sort of round things out here. 

Yoann Le Bihan: No, sure. I totally agree. I mean, it’s an area where it’s essential to get the skills from experts to address the issues, and ideally to address them before they become real issues. This is a typical lawyer thing, I know. But it’s better to make sure that we are able to leverage the power of these new technologies like AI by making sure that we do it in a compliant way. Because it’s not only about protecting the business from the risks that come from the law, but also making sure that we can use the power of these new tools, because these new tools will be the differentiators on the market tomorrow. They are probably already now, but they are increasingly important for the future. And that means making sure that you have the right people, like, typically, you for AI governance, people who will make sure that you follow the right track and that you do it in the proper way from a legal point of view and, more broadly, from a governance and corporate governance point of view as well. 

Paul Starrett: Yes, and I’m going to leave you and our audience with a quick example. It’s kind of like putting a six-year-old behind the wheel of a race car. I think it’s completely analogous: the driver has to drive fast in order to win, but you want someone who knows how to navigate the other cars and be able to finish the race. And so that’s, I think, a great way of putting it. So, great, Yoann. I think we’ve probably come up against time here, but I always ask every person that I interview: is there anything we haven’t covered that you would like to tell our audience briefly, that you think would be good for our audience to know?

Yoann Le Bihan: Yes, because there is one thing I realized afterwards that I forgot to mention about AI and data at the beginning of our discussion. There is a very interesting project in the EU, and I think this one is very business friendly and pragmatic and could also be considered by the US at some point. They are working on a piece of legislation that would make it mandatory for companies to allow data subjects to get their data, and to be able to somehow gather their data and sell it to providers that would use it, typically AI companies that would like to train their models and would be ready to pay for data. So, I know that there are a couple of projects, but one of them is about using non-personal data, and the other one, I think, is about gathering personal data for a fee. But there are projects around that at the moment, and I think this is typically what business friendly laws should look like in the future: saying to consumers, okay, instead of being exploited in a way that you cannot manage, let’s offer businesses a fantastic opportunity to use consumers’ data in a way that consumers somehow get something out of it. 

Paul Starrett: Interesting. Great. That’s a great… Go ahead, I’m sorry.

Yoann Le Bihan: Yeah. And I think in this area, but this is maybe separate, I can get you the details of these projects so that you can maybe post them somewhere on the podcast page later. 

Paul Starrett: Yes, we can absolutely do that. What I’ll do is before I post this, I will let you provide those to me when it’s convenient and then we’ll include that at the base of the transcript. 

Yoann Le Bihan: Okay. Awesome. Awesome. Yeah.

Paul Starrett: Okay, great. Well, thank you for that thought and thank you so much for your time today. I’m sure that our audience will find a lot of it invaluable. I did. And your…

Yoann Le Bihan: Well, thank you. Thank you very much for having me and for the time spent on this podcast recording with me. 

Paul Starrett: Of course. Just last, how could people get in touch with you if they would like to? 

Yoann Le Bihan: Well, they can actually go to my website. It’s Yoann, Y-O-A-N-N, dot law, L-A-W. So, yoann.law. Or they can just send me an email at contact@yoann, Y-O-A-N-N, dot law, L-A-W. 

Paul Starrett: Perfect. Okay. Well, thank you again. And we will very likely have you on for another podcast to get into some of these other subjects. 

Yoann Le Bihan: Thank you very much, Paul. That will be a pleasure. 

Note: Yoann indicated he would pass along information to include at the end of this transcript. I am copying content from an email from him here as it provides the best explanation: 

“When I was referring to "projects" at the end of the podcast, the word was not exactly right: the Digital Markets Act (Regulation (EU) 2022/1925) has already been voted on, it became effective in May this year, but companies subject to it ("gatekeepers," essentially the GAMAF) have until 2024 to comply. There is still some uncertainty as to how the relevant provision (art. 6(10)) will be interpreted, but some lawyers believe that it might be a game-changer and a great opportunity for consumers and smaller-scale businesses with disruptive models:

"The gatekeeper shall provide business users and third parties authorised by a business user, at their request, free of charge, with effective, high-quality, continuous and real-time access to, and use of, aggregated and non-aggregated data, including personal data, that is provided for or generated in the context of the use of the relevant core platform services or services provided together with, or in support of, the relevant core platform services by those business users and the end users engaging with the products or services provided by those business users. With regard to personal data, the gatekeeper shall provide for such access to, and use of, personal data only where the data are directly connected with the use effectuated by the end users in respect of the products or services offered by the relevant business user through the relevant core platform service, and when the end users opt in to such sharing by giving their consent."

The expectation is that this new provision would allow new business models, whereby a small startup launching a disruptive product could offer consumers to "sell" them the data (aggregated or not, personal or not) they have with big players (GAMAF), maybe even in the form of a data stream. Which, for instance, could be used to... train AI!”