AHLA's Speaking of Health Law

Artificial Intelligence Impact on Health Equity

October 03, 2023 AHLA Podcasts

Artificial intelligence (AI) poses unique opportunities and challenges in the quest for health equity. Roma Sharma, Counsel, Crowell & Moring, and Tienne Anderson, Managing Counsel, St. Jude Children’s Research Hospital, discuss the impact of AI on health equity, some of the nascent AI governing approaches, and health equity considerations that should be considered as both AI technology and regulations are developed. Roma and Tienne spoke about this topic at AHLA’s 2023 Annual Meeting in San Francisco, CA. 

To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.

Speaker 2:

This episode of AHLA's Speaking of Health Law is brought to you by AHLA members and donors like you. For more information, visit americanhealthlaw.org.

Speaker 3:

In today's episode, we'll be talking about the impact of artificial intelligence, or AI, on health equity. In our discussion, we'll cover some of the nascent AI governing approaches and the health equity considerations that should be taken into account as both AI technology and regulations are developed. As many of you know, AI has the potential to drastically impact health inequities in a multitude of ways, presenting a unique opportunity to alter the trajectory of humankind. It sounds hyperbolic, but in the case of AI, I don't actually think it is. Here to discuss this issue further with us is Tienne Anderson, Managing Counsel at St. Jude Children's Research Hospital. And myself, I'm Roma Sharma, Counsel in Crowell & Moring's Washington, DC office and a director in Crowell Health Solutions, a strategic consulting firm affiliated with Crowell & Moring.

Speaker 4:

Roma, thank you so much. It is wonderful to be here with you today. We presented on this topic recently at AHLA's Annual Meeting. It was a wonderful opportunity for us to not only present the research that we've done in this area, but take questions from the audience and really get a broader perspective on this issue. So just to recap very briefly before we dive into our discussion today: in our AHLA presentation, we briefly discussed several questions. First, what is AI? What is health equity? That includes the social determinants of health. Then, what are considered some of the challenges to advancing health equity and positively impacting the social determinants of health? We then pivoted to a very broad sweep of the legal landscape: what are some of the domestic and international efforts to regulate AI and create standards that address the relationship between AI and health equity in healthcare? And finally, some of the recommendations we have for the public and private sectors, which they should consider as they seek to mitigate potential bias that may occur when using AI systems. Again, we want to use AI in a way that advances health equity instead of harming it. So this discussion today will focus on diving into some of the issues and policy topics we covered in our AHLA presentation in a bit more detail. We'll explore some of the questions that are still burning for us as we continue to think about this topic into the future. So first, let's start off with a basic introduction to some of the terms we'll be using throughout this discussion.

Speaker 3:

Sounds great. Let's dive into it. To kick things off, I'll start with the definition of artificial intelligence, a term that's used left and right. We hear it in the media; we almost can't get away from it. But it is an interesting concept and, in some ways, nebulous. There are a lot of different definitions. For purposes of this podcast, we'll use a definition that's more or less common, which is the capability of a machine to imitate intelligent human behavior, used to perform complex tasks in a way that is similar to how humans solve problems. In other words, computer algorithms that can think and problem-solve like humans. That's generally how we talk about artificial intelligence and what it is. And then there are a couple more relevant definitions, other terms that folks have probably been hearing around artificial intelligence. Machine learning, or ML, is a subset of AI, and it's the field of study that gives computers the ability to learn without explicitly being programmed. These are algorithms that, quote unquote, get smarter with more data, refining the output and improving accuracy over time. Think of things like Netflix recommendations, predictive text, and ad recommendations in your social media feeds. They essentially take in data, your data, monitor your behaviors in some ways, and produce outputs that are smarter, better, and more accurate over time. So they're learning. And then generative AI, which has been the big-ticket item in this past year and really has brought AI to the attention of all of society and the world, with the creation of ChatGPT in particular, is essentially computer algorithms that create written, visual, and auditory content given prompts or existing data. Many listeners will have used ChatGPT or some of its other versions and iterations that exist out there. It's become really quite popular.
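To make the "gets smarter with more data" idea concrete, here is a minimal, hypothetical sketch in Python (not from the discussion, and not any real recommendation system): a model that learns a single parameter from noisy examples, with its estimate typically getting closer to the truth as the training set grows.

```python
import random

random.seed(0)

# The true relationship the model must discover: y = 3x + noise.
def sample(n):
    xs = [random.uniform(0.0, 10.0) for _ in range(n)]
    ys = [3.0 * x + random.gauss(0.0, 1.0) for x in xs]
    return xs, ys

def fit_slope(xs, ys):
    # Least-squares slope through the origin: the one "learned" parameter.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

for n in [5, 50, 500]:
    xs, ys = sample(n)
    err = abs(fit_slope(xs, ys) - 3.0)
    print(n, round(err, 4))  # error tends to shrink as n grows

```

The same dynamic, at vastly larger scale, is what drives the examples above: more observed behavior means a better-tuned model.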

Speaker 4:

And to continue with one of the phrases that we'll be using a lot, health equity: we've defined this essentially as the absence of unfair and avoidable or remediable differences in health among population groups, differences that we can address, that we understand, that we can impact. One of the primary approaches to achieving health equity is to address what are termed the social determinants of health. These include, but are not limited to, maternal health, early childhood development, housing, income, social protection, education, employment, job security, working life conditions, and food security. So all of those various things that feed into your experience as a living, breathing being moving through this world, all of those things impact what we'll be discussing as health equity. And so we know this, right? We know that all of these things impact our health. We also know that healthcare is one of the most expensive things we do in our country, and that our healthcare costs continue to rise. At the same time, disparities and inequities in healthcare continue to rise. There was a recent Urban Institute conference earlier this year, and one of their senior vice presidents, Kimberlyn Leary, really encapsulated what I think are some of the basic drivers of these continued inequities: a lack of political will, a lack of accountability, and the absence of the voices of those who've been most affected. And so these, to me, seem instructive as we consider AI in this context. So before we pivot, Roma, anything else you want to add to the definitions or the background here before we start with our next section?

Speaker 3:

No, I think you covered that really nicely, Tienne. I think we can jump right into what's taking place around the world.

Speaker 4:

Right. So we look at the headlines: it's a race, a race around the world to produce AI technologies, to benefit from AI, to profit from AI. And at the same time, there has been a rush for governments to regulate AI. So we are going to now turn to a brief update on some of the regulatory schemes we've covered in more detail elsewhere, and our thoughts on some of these regulatory frameworks and other approaches from around the world. Just to orient us, one of the standard-bearers is the World Health Organization. In 2021, the WHO issued its guidance, Ethics and Governance of Artificial Intelligence for Health, which laid out several foundational principles. It was created by leading experts in various fields, including ethics, digital technology, law, and human rights, along with real-world experts from ministries of health around the world. The guidance includes six principles for the use of AI in health. The first is protection of the autonomy of human decision making in healthcare systems and medical decisions. And that's where I'd love to just stop and chat for a moment to get started today, because upon hearing it, that principle rings true, right? It seems to be something we all agree we should do. But of course, when we dig a little bit deeper, it can be in direct conflict with what we hope the ultimate benefit of AI might be. Even in providing this principle, the WHO guidance acknowledges that adoption of AI can lead to situations in which decision making could be, or is in fact, transferred to machines.

Speaker 3:

A scary possibility in some ways, right? <laugh>

Speaker 4:

Yeah. Taken by itself, it does sound scary, unless there is a deeper understanding of what AI is that goes beyond, beyond oneself even, right? I think some of the research that we hope AI will be able to assist us with will really benefit future generations. And AI will need to learn, and humans are fallible, so it makes sense that AI will be as well in some ways. Are we okay with that? Is that a trade-off that, yes, we understand and are willing to make to reap the potential benefits?

Speaker 3:

Right. I think part of the conversation is: how accurate is AI, is this technology? Can we let the technology make decisions? Is it going to produce equal or better decisions than humans? I think a lot of the concern is no, that we're not at the place where it can do that just yet. Maybe in some cases, maybe not in others. And it's just so new that we don't even really know; we haven't seen the mistakes that it can make. The potential for that is tremendous, right? And so is the potential for all the good that it can do. So it's really a wild card in some ways.

Speaker 4:

And even the converse could be true. What if the AI is in fact accurate, or is proven to be accurate, and the human overrides it with an erroneous interpretation? <laugh>

Speaker 3:

Also possible. Human error is something we deal with now all the time.

Speaker 4:

Right. And I think it really will have to be settled into a certain space on a case-by-case basis. But this all, to me, runs into one of the other principles, which is, you know, that we will talk a lot about transparency, transparency, transparency. Okay. So, moving on just a little bit: one of the other global partnerships of interest is the Global Partnership on Artificial Intelligence, again, a worldwide partnership of various countries. What I find interesting is the extent to which political systems and other attributes will continue to shape and influence this discussion around the use and development of AI. So for example, the Global Partnership on Artificial Intelligence has an application process, right? Countries are asked to comment on their adherence to the principles for responsible development of trustworthy AI in the OECD recommendations. They're asked to talk about how much they're investing in research, how much they're investing in testing, and really making sure that the benefits of AI are for everyone. So again, to me, it'll be interesting to see how this shapes up with countries of differing levels of adherence to these various principles.

Speaker 3:

Right.

Speaker 4:

Yeah, please.

Speaker 3:

Absolutely. And the idea of the partnerships, I love that, because a lot of what's been going on around the world has been very country-specific, focused on the individuals within those countries and on the governance of the country itself. So remembering to also collaborate and partner globally, and for countries and different bodies internationally to get aligned, I think is very important and something that would be great to see more of. It's interesting that we do have some of that, which I think is positive. But to turn to, oh, go ahead, Tienne.

Speaker 4:

No, no, no. I was just going to say, right, and again, to what extent might countries be more persuaded to collaborate because of AI, right? Because of potentially accessing tremendous benefit that they wouldn't otherwise have.

Speaker 3:

Right. And at some point, there's a benefit to having consistency across countries. There is that interest, so that technology companies, and even consumers, can have consistent expectations of what laws and regulations do and don't apply. It's in such a nascent stage right now, as we've been discussing, that we don't yet have that. But for this to function at a global level, it seems like that type of collaboration is going to be necessary.

Speaker 4:

Absolutely, and well said. All right. So, turning from the world stage for a moment to our own backyard: Roma, please tell us what's going on in the U.S.

Speaker 3:

Sure, I'd be glad to. The U.S. is an interesting place, as always. And like you said, it's our backyard; there's been a lot going on, especially recently. It's been very interesting to watch over the last couple of years. What we have right now in the United States is essentially a patchwork of existing laws that apply more generally to technology, including AI: for example, consumer protection laws, intellectual property laws, and privacy and security laws. So there's a network of different laws in different areas and disciplines that apply to AI, but not specifically to AI, because we haven't passed comprehensive AI laws and regulations, at least at the federal level. Even something short of that, we haven't quite gotten there yet. And the laws that we do have on the books weren't designed in consideration of AI, so the application of the law can be confusing in many circumstances. Without specific AI laws and regulations, essentially how to create the AI, how to use it, how to assess harms and damages, all of that is somewhat murky, which itself creates confusion. It's already being played out in court and will be determined through case law in the absence of comprehensive legislation that directly addresses all of this. And that could take years. So recognizing that this is an issue, and I think also seeing other countries move more quickly on AI regulations, the U.S. Congress is trying to move on this, it seems, rather quickly, or at least picking up the pace. They've held a number of hearings recently on AI, some of them closed-door, and have invited industry to come and provide input on whether AI needs to be regulated, and if so, how.
There seems to be general consensus among the private-sector technology companies that there really is tremendous risk around AI and that it should be regulated. I think the question now is going to be how to do it. It's one thing for parties to agree that, yes, AI has the potential to create artificial information or fake data and confuse folks, leading to, let's say, voter confusion, or in the healthcare space, confusion about your own health, safety, and wellness, about what is proper treatment, proper diagnosis, all of that. I think there is consensus on all of that being quite dangerous and a risk. What isn't clear is whether there's consensus on how to regulate those tricky issues, and I think that's going to take some time to really hammer out. The White House and various agencies have issued guidance and blueprint documents highlighting key considerations that are more or less in sync with the WHO guidance, those principles that you just talked about, Tienne. So we are seeing the federal government paying a lot of attention to this, including the White House, which has issued some executive orders related to this. As I said, federal agencies have issued guidance as well, and we have the White House blueprint that was issued last year. So the different governmental bodies at the federal level are thinking hard about this. I think they're aware of the associated risks and are trying to gather information and come up with the right way or ways to regulate this area. We haven't quite gotten there just yet. In terms of what we might see, this is speculative, but it does seem like there's actually bipartisan support for regulating AI among our politicians. And I think in large part that's because of all of the competition that surrounds this, the fact that other countries have already regulated this space, at least somewhat.
China, for example, which we'll talk a little more about later, and the EU are moving fast in the regulation space. And of course, there are consumer protections that need to be in place, and our politicians want to protect consumers and their constituents. But there's a big economic angle to this too: other countries that move faster than us in the regulatory space are going to end up impacting technology companies and shaping the technology. Various countries, including the U.S., would like to be leaders in that, I assume, and to have an impact on how the regulations end up shaping the entire industry, rather than having other countries lead in that manner. For that reason, I think we are seeing bipartisan support and interest in regulating the space. And like I was saying, it'll just be a matter of how that actually plays out. If there can be agreement on what those regulations and laws look like, then we may see something pass in Congress. Who knows how soon; it could even be by the end of this year, or it might take a little longer. But I would think that we're on the path to pass something, at least where we stand currently. Tienne, I know that you've been following a lot of the news in this area. What do you think of the idea of self-regulation here, where essentially corporations might impose their own restrictions and limitations on themselves, if we don't get federal legislation anytime soon?

Speaker 4:

Yeah. So first, let me start by saying any regulation is better than no regulation. So we'll start with the positive. <laugh> But I am a bit of a skeptic. Again, going back to the lack of political will, accountability, and transparency, and to having marginalized voices as part of the conversation: we don't seem to be ticking many of those boxes. So while I understand the need to move swiftly, I hope that it does not override the need to educate people and to make sure that there's more transparency and discussion around what's being proposed.

Speaker 3:

I completely agree with that. And I think that corporate responsibility has a role here, as it does in other sectors as well. But at the end of the day, there's also answering to shareholders, and those interests are not always aligned with the public interest, though they can be. In my view, corporate responsibility is one piece of the governance that will be required. <laugh> I don't know that it alone is going to get us where we need to go, to make sure that we are lifting everyone in the process and not creating bigger inequities than we already have, or just perpetuating them, like you were saying, Tienne.

Speaker 4:

Yeah. I'm chuckling because the little note I had written down is: market forces are at odds with self-regulation. <laugh> I mean, that's just the reality we're in right now. And so to not fully acknowledge that, and to not address accountability and transparency and making sure that all stakeholders are brought along, including those who are often marginalized, I think that's where we'll be buying tomorrow's trouble today, when I don't think we need to.

Speaker 3:

Mm-hmm.

Speaker 4:

Okay, so moving on. <laugh> We do want to touch base on some other international approaches. So Roma, will you take us to China, as you promised you would earlier?

Speaker 3:

Sure, yes. So China has been moving much more quickly, not specifically in healthcare, but really in the AI regulatory space generally. On the healthcare note, I'll say that most of the discussion on regulating AI internationally, even within the United States, hasn't been healthcare-focused specifically. There's a lot of focus on national security, that's huge, and on IP, privacy, and security, understandably; those are huge issues. Healthcare is important and being talked about as well, because, as we've been discussing, there is so much potential and also risk associated with how AI is going to be used in healthcare, with medical devices and so forth. But healthcare-specific regulation is not something we're at the point of yet, though I think we're going to get there for essentially all of the countries leading this charge right now. With China, they've taken an approach that has been more case-by-case specific. They're essentially seeing what comes up in AI, what kind of technology is being used and developed presently, and what issues those new technologies present, like generative AI, for example. Then, when they identify issues, they pass a law or a handful of laws related to them, rather than taking an approach of comprehensive regulation where you really try to capture everything, the entire world of AI and all its potentialities, at once, which is what the EU is doing, and it seems like the U.S. might be going in that direction as well. China has moved quickly because of that. They've been able to be a bit more nimble in that way, they've passed some consumer protections, and they're one of the first major countries to move quickly on this.

Speaker 4:

And so, Roma, what do you think the long game is? What advantage do you think they're trying to pick up here?

Speaker 3:

Well, I think this ties back to what I was mentioning before about this global race to continue to be a superpower, to be thought of as leading in this area, and to shape the industry and the technology for the globe, essentially. Because with many of these laws, when we're talking about laws being passed in China, the EU, the U.S., there's a high likelihood, depending on how the law itself is written, that any companies producing AI used in that country or those countries are going to have to comply with those laws. So it will shape the technology, and I think China understands that. I think they are also concerned about consumer protections, just like pretty much all other countries are. But they're technological leaders in many ways themselves, and they want to move fast, and they are. From a geopolitical standpoint, it's very interesting, and <laugh> it creates a whole added layer to all of this. And I don't think it's all downside for the U.S. or for other countries who might consider themselves competitors with China. I don't think it's all downside that China has moved quickly, because in some ways it provides a test case to see how some of those laws, consumer protections, and so forth actually play out. That data, that information, can be used to craft some of the more comprehensive regulatory schemes that the EU and the U.S., for example, are trying to come up with now. So that's a bit on China, which I think is very interesting. We've talked about some of the global superpowers, but to understand health equity implications, we need to consider developing countries and how they're approaching and handling AI. Tienne, can you tell us a bit about the efforts in Africa?

Speaker 4:

The benefits for Africa as a whole? Right. I think folks were already pointing out a lack of technology and infrastructure and saying Africa stands to benefit a lot, but not that much; that was from a PricewaterhouseCoopers report in 2017. To be honest, I think the outlook would be much different were they to look at it again today, and we'll see where it goes based on how events in the rest of the world develop. But in 2021, an alliance of 36 different African countries came together. There's an organization called Smart Africa that is focused on this issue, and it issued a blueprint, Artificial Intelligence for Africa, which has five different pillars. There are a couple of interesting things about it, but to me one of the best is that the first pillar they talk about is human capital and the importance of educational development. That, to me, cannot be overstated. And that's not just Africa, right? It's all of us who expect to take advantage of AI: is it something that is going to be widely available and understood, or not? That is, to me, the human question. Are we going to freely give over our data to the greater good, or not? There's also their networking, cooperation, collaboration, and pursuit of joint partnerships across the private and public sectors. Most recently, Smart Africa has joined with the Datasphere Initiative to work on an Africa Forum on Sandboxes for Data, and the goal there is to bring together a Pan-African community to enable innovative cross-border data governance solutions.

And so I think, when we're looking at potential: if we're thinking about huge data sets, yes, we have many that already exist here, but we also understand now, better than ever, what kind of bias and inequity is baked into that data. To me, opportunities to collect data fresh going forward, within large groups, are in and of themselves their own world of possibilities. I think nowhere, perhaps, are the possibilities greater than in Africa. So again, their focus on human capital, on wanting to instill equality and international best practices from the start, and on data are, I think, exciting developments to watch over time. However, continuing on our global trek: arguably, the front-runner internationally is the European Union's proposed AI Act, which is expected to pass later this year. Roma, please tell us what we can expect.

Speaker 3:

Sure, yeah. And thanks for covering what is going on in Africa and some of these African countries. I think that's really interesting, and it's an interesting contrast when we're talking about the U.S., China, and the EU. This is a global issue, and to the point you were making about access in Africa and building the infrastructure: there's so much potential good use for this technology in African countries and in developing countries, but how do we make sure that folks there can use the data, can use the outputs of the AI, and that the AI is even being designed for them in the first place? That's a combination of government regulation, government investment, and then the economic incentives, with the private sector working with governments and seeking out economic opportunities where it sees them; it's about aligning all those incentives and making that happen. So, jumping into the EU: we would be remiss if we didn't cover this, since they really are front-runners here and quite heavily featured in the news these days. I'll cover this briefly. The EU is very close to passing comprehensive AI legislation called the Artificial Intelligence Act. Like Tienne said, it's expected to pass at the end of this year, plus or minus a few months; we'll see how that plays out. Essentially, this really is comprehensive legislation, and like the GDPR, we generally expect that it's going to have global impact and set the tone for AI regulation around the world, given how large the EU is, how many countries are included, and the fact that technology companies are going to want to market to folks in the EU too. That's a big consumer market, and for them to do that, based on how the legislation is written, those companies are going to have to comply with the EU's AI Act.

At a high level, the law essentially creates risk-based categories for different types of AI technology. There are four risk categories. Unacceptable risk is simply prohibited: if your technology rises to the level of unacceptable risk, it's not allowed to be marketed in the EU. High-risk AI technologies are permitted but subject to strict obligations, and that category includes medical devices. The third category is limited risk, and that type of AI has specific transparency obligations. Then there's minimal risk, which is permitted with very few restrictions and is generally considered low risk. Essentially, the AI Act focuses on rules around data quality, transparency, human oversight, and accountability. They have put some really practical considerations into this; it's quite thoughtfully done, if you ask me. It's also very wide in its scope and reach. Like I said, if you are a technology company producing technology that's used by users in the EU, this is going to apply to you, so it has extraterritorial scope. And it comes with some teeth: the penalties for violating certain requirements can rise to the level of 30 million euros or 6% of a company's total worldwide turnover for the preceding financial year. So there's your incentive for compliance, if you needed one. <laugh> This is a potentially very impactful piece of legislation that folks should be watching, and likely are watching if you're already in this space. But we'll see how it plays out. Tienne, what do you think about this race to regulate AI among these front-runners, the EU, the U.S., and China? Any thoughts on how this might play out, and whether this is a good or bad thing?
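As a back-of-the-envelope illustration of the penalty ceiling just described, here is a hypothetical Python sketch. Reading the two figures as "whichever is greater" is an assumption about the proposal, not a statement of the final law:

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Penalty ceiling described above: EUR 30 million or 6% of total
    worldwide turnover for the preceding financial year, read here
    (as an assumption) as whichever amount is greater."""
    return max(30_000_000.0, 0.06 * worldwide_turnover_eur)

# For a company with EUR 1 billion in annual turnover, the 6% figure
# (EUR 60 million) exceeds the EUR 30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

The point of the arithmetic is simply that for any large multinational, the turnover-based figure dominates, which is why the provision has teeth.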

Speaker 4:

Well, Roma, you already know what I'm going to say. I'm going back to my Urban Institute framework. <laugh> How do these frameworks address political will, or the lack thereof, and accountability and transparency? And how are we factoring in the voices of those who've been traditionally marginalized in all of these processes? <affirmative> Those are, to me, very good touch points as we move through these discussions and conversations. And when I think about it, perhaps no place is currently more relevant than data, because all this AI we're talking about, all this machine learning, is powered by data. We know that you can't train and test the AI on the same set of data. To train your AI, you need one set of data; to test it, another set; and to continue to enhance it, you'll need more data going forward. Data, data, data. <laugh> Whose data is it? What kind of data are we collecting? Do we have the whole picture? There are so many questions. And as everyone likes to point out, the quality of the output is limited by the quality of the input. <laugh>
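The point about never training and testing on the same data is standard machine learning practice: hold out a portion of the records that the model never sees during training. A minimal sketch of that holdout split, using only synthetic data and a hypothetical helper:

```python
import random

def train_test_split(records, test_fraction=0.2, seed=42):
    """Shuffle the records deterministically, then hold out a test set
    that the model never sees during training. Hypothetical helper for
    illustration; real pipelines often use library implementations."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# 100 synthetic patient-record IDs split 80/20: train on one set, test on the other.
data = list(range(100))
train, test = train_test_split(data)
print(len(train), len(test))        # 80 20
assert not set(train) & set(test)   # no record appears in both sets
```

Evaluating on records the model has never seen is what makes the test score an honest estimate; reusing training data would let the model "memorize" its way to an inflated score.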

Speaker 3:

Absolutely. That's the "garbage in, garbage out" idea from computer programming. I think that's just another catchy way to say what you just said, Tienne.

Speaker 4:

Yeah. <laugh> And when we think about data, it's not only that we need the data, but also the infrastructure to support its collection, storage, and management, and folks with the education to manage the data and do all of those things. So data, to me, is just one of the points, and I think privacy is another, where you can look through that lens of how we strengthen political will. Am I willing to give up my data today? Who knows? If I'm a rational, logical person, which I like to think I am, the answer to that question is: it depends.

Speaker 3:

Yeah, absolutely.

Speaker 4:

There are issues of informed consent. How will my data be used in perpetuity? I might be interested in those questions going forward. Do I have an ability to direct who my data ultimately benefits, and might that influence me? And do I understand how AI works, how de-identification works, et cetera? We need a more sophisticated populace, from policymakers down to our legislatures, lobbyists, government researchers, students, and everyday folks. We all need a better understanding of all of these issues.

Speaker 3:

And then there's this question of your data and what to do with it. Do you want to give your data for X, Y, Z reasons? That's consent and privacy, and potentially security too. But there's an underlying question: do you even get to decide, or is that choice made for you when you use a very popular or common product? How much choice do you really have over whether your data is taken and used, and how it's used? And how much choice should you be given? I think these are all really important questions to be thinking about.

Speaker 4:

And again, these all slide right into accountability and transparency. Part of our ability to agree on what the potential risks of AI might be is due in large part to our understanding of it, and not only the risks and how it works, but also the potential benefits. Are we thinking about those benefits in a way that goes beyond the bottom line, the profit line? Are there other ways to think about it that might appeal to people?

Speaker 3:

Accountability is a big one, and an important one. What do you think about how we get accountability in a space like this?

Speaker 4:

I go back to transparency, because you can't even begin to have accountability without it. I think accountability is a measure that will develop over time, and it will mean very different things in different contexts. <affirmative> But to me, the question is whether there's a bottom-line commitment to enhancing transparency and enhancing accountability, or whether we're going to be satisfied with check-the-box mechanisms.

Speaker 3:

And there's the concept you covered when you were talking about Africa: sandboxes, essentially the idea that a program is open for testing, use, and exploration by other parties. It's often talked about in the context of government being able to use and test a program that includes AI before it's rolled out to the public. I think that's one way to achieve the transparency you're talking about, and to make sure that other important stakeholders, oversight agencies for example, are able to do whatever they need to do with the program, and to do the testing it needs, before it's rolled out.

Speaker 4:

Absolutely. I completely agree that we have to think about development in different ways. With AI, we will have to be creative. We will have to question some of our assumptions. And that will be necessary to really get it to work. For it to work for us, we have to work together; data isn't one person, it's all of us. That's right. So, sliding into our third measure, if you will: how well are we elevating marginalized voices? Are we going to pile inequity on inequity, or are we going to affirmatively and proactively deal with the inequities of our system? Those are two separate choices. And I think there's enough generalized understanding about the direction we need to go in that, hopefully, and maybe I'm being optimistic, it's morning, <laugh> we should be able to collaborate in more ways than one. Again, I think Smart Africa and some of those collaborations, and the GPAI, are great collaborations. I'll be very interested to see how those develop and continue in the future, and what kind of advancements those various collaborative networks are able to make.

Speaker 3:

Me too. I'm very interested to see how that plays out. And when you talk about elevating marginalized voices in the context of health equity, it's so important, because even now, taking AI out of the picture, we're not operating from a place where everyone in society has equal access to healthcare, or equal access to the job security that could allow people to purchase the healthcare treatments they might need, take care of their health, or eat healthy foods, all the social determinants of health you were talking about earlier. These issues are very interrelated. We have to make sure that marginalized voices are at the table in the design of AI, in figuring out what it's going to be used for and who it's going to benefit, and, like you're saying, Tienne, in the data going into the AI in its testing and training phases. Is it being designed, tested, and prepared only for wealthy individuals, only for white individuals? Who's being marketed these products, and who are they targeted to? Or are we targeting everyone? Are we making sure to include different populations and different communities? Are we thinking globally, and within a country, about how to make sure we're not exacerbating existing inequities with this technology? Are companies thinking about it? Are regulators thinking about it? It seems like both need to be, and consumers too; they all need to be at the table. Having these congressional hearings is a good step toward trying to understand and gather information, but part of that information gathering needs to come from all different communities, and that includes the marginalized voices you're talking about, Tienne.

Speaker 4:

Completely agree. And we could continue, we could go on <laugh> for

Speaker 3:

A long time. <laugh> We should.

Speaker 4:

Yeah. Unfortunately, we're coming to the end of our time. But we wanted to provide a few guideposts, things we hope we'd see in any regulatory scheme. First, that all stakeholders must work together, and that no stakeholder perspective is excluded. That's a hard one, a wide one. I get it; it's not easy to achieve, and someone will always disagree. But starting from that perspective is very different from having a closed-door meeting with one group of stakeholders in particular. The next few points we have are really related to data. <laugh> De-identified personal health information must be readily available for analysis by the public and private sectors. That data must be representative, with particular attention to efforts made to gather data on those who've been traditionally marginalized. And the data must be transparently managed and protected, again, building that public trust and that political will, with the fruits of the data's analysis available to all. These are hard principles to put into practice, but would the outcome be worth it? I think so.
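The first data guidepost, making de-identified personal health information available, can be illustrated with a toy sketch: strip the direct identifiers from a record before sharing it. This is only an illustration with hypothetical field names; real de-identification under HIPAA (Safe Harbor or Expert Determination) is far more involved than dropping a few columns:

```python
# Hypothetical set of direct-identifier field names for this sketch.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed.
    Toy example only; does not address quasi-identifiers or re-identification risk."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

row = {"name": "Jane Doe", "ssn": "000-00-0000", "age_band": "40-49", "diagnosis": "J45"}
print(deidentify(row))  # {'age_band': '40-49', 'diagnosis': 'J45'}
```

Even this toy shows the tension Tienne describes: the fields you keep (age band, diagnosis) are exactly what makes the data useful for analysis, while combinations of such quasi-identifiers are also what creates residual re-identification risk.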

Speaker 3:

They're great principles to strive for, and I agree with you, Tienne, that they're not easily obtainable, but they have to be considered. They hit on all of the important topics we've covered. If we're going to try to achieve health equity as a collective, which we are, and we know it's a priority of the White House and governments, and we've seen the inequities play out in the pandemic, there's been a lot of writing on this, then it's important, from a policy perspective and from a human perspective, for us to try to bridge those gaps. And if we're going to do that, then I think the principles you laid out are going to be key.

Speaker 4:

And it's time for us to wrap up. So, Roma, again, my last word is data. <laugh> I don't think we can overemphasize the importance of data. I think we'll need to see real change in different directions before we can truly even begin to comprehend the real benefits of AI. On a macro level, data will have to be more widely and freely available and shared. And at the same time, at the other end of the spectrum, I think each of us is going to have to reexamine our personal relationship to our data, especially our health data, and learn more about why we might want to share it more intentionally.

Speaker 3:

I could not agree more. My final takeaways are also data themes, for the same reasons: this is all built upon data, and it's absolutely key. The two takeaways for me are really from a practical perspective. The first is the point we covered about representative data and the concept of garbage in, garbage out, with the inverse also being true: with high-quality, representative data in, you're increasing your likelihood of high-quality, representative results out. If what we're trying to do here is make sure that all of society can access this very important technology in the future, then we need to make sure that not just the algorithm's design, but the data being input into it, is representative of all communities. Otherwise, we might fall into a situation where a program is tested and trained on data from only one population, let's say a wealthier, more homogenous community, and then rolled out to everyone, where it could harm communities that don't share the characteristics of those the data was originally drawn from. So when we're talking about representative data, it's incredibly important. The second point is that when we have and use this technology, there needs to be some type of continuous monitoring of the outputs it produces.
It's one thing to talk about the upfront design and testing, all very important, and to have that oversight. But what happens when the technology is rolled out and being used? Who's watching the output to make sure there aren't biases playing out through the outputs of the AI, and that there isn't discrimination or other harm to individuals, whether on the whole or, as is likely the case and as we've seen from past history, harm that falls more heavily on marginalized communities than on non-marginalized communities? What kind of monitoring is taking place to make sure that's not occurring, and if it is occurring, to correct it very quickly? I think that has to happen at the corporate level and at the government level; there has to be collaboration and partnership there, but also thoughtfulness about how you even identify those issues. Do we have the right epidemiological researchers and analysts looking at the data we need? That's an investment in and of itself, but a very important one, in my opinion.
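One simple way to make the continuous-monitoring idea concrete is a periodic disparity check: compare the model's favorable-outcome rate across demographic groups and flag for human review when the gap exceeds a threshold. This is only a sketch under assumed inputs (the group labels, threshold, and batch data are all hypothetical); real monitoring programs use established fairness metrics and statistical testing:

```python
from collections import defaultdict

def outcome_rates_by_group(predictions):
    """predictions: list of (group_label, favorable_outcome: bool) pairs.
    Returns each group's favorable-outcome rate."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in predictions:
        totals[group] += 1
        favorable[group] += outcome
    return {g: favorable[g] / totals[g] for g in totals}

def disparity_alert(predictions, threshold=0.2):
    """Flag when the gap between the highest and lowest group rates
    exceeds the (hypothetical) threshold, prompting human review."""
    rates = outcome_rates_by_group(predictions)
    return max(rates.values()) - min(rates.values()) > threshold

# Synthetic batch of model outputs: group A favored 80% of the time, group B only 40%.
batch = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 4 + [("B", False)] * 6
print(disparity_alert(batch))  # True: a 0.4 gap warrants investigation
```

A check like this would run on production outputs on a schedule, which is exactly the "who's watching the output" role described above; the hard, non-technical work is deciding which groups to compare, what threshold is acceptable, and who acts on the alert.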

Speaker 4:

Which goes back to the human element. <laugh> Roma, thank you so much for your time today, and thank you for your partnership on this. We both want to say thank you very much to AHLA for their support. We've really enjoyed sharing this topic with you today, and we look forward to seeing where we go in the future.

Speaker 3:

Wonderful. Thank you.

Speaker 2:

Thank you for listening. If you enjoyed this episode, be sure to subscribe to AHLA's Speaking of Health Law wherever you get your podcasts. To learn more about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org.