AHLA's Speaking of Health Law
The American Health Law Association (AHLA) is the largest nonprofit, nonpartisan educational organization devoted to legal issues in the health care field. AHLA's Speaking of Health Law podcasts offer thoughtful analysis and insightful commentary on the legal and policy issues affecting the American health care system.
Health Care Corporate Governance: Board Oversight of AI, Part One—What Is the Board’s Role?
In this special two-part series, Rob Gerberry, Senior Vice President and Chief Legal Officer, Summa Health, speaks with Michael Peregrine, Partner, McDermott Will & Schulte, about the health care corporate governance oversight of artificial intelligence (AI). In Part One, they discuss the board’s core role regarding AI, the specific details of that role, and the board’s connection to AI deployment decisions.
Watch this episode: https://www.youtube.com/watch?v=kKLPJAv0vGQ
Essential Legal Updates, Now in Audio
AHLA's popular Health Law Daily email newsletter is now a daily podcast, exclusively for AHLA Premium members. Get all your health law news from the major media outlets on this podcast! To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.
Stay At the Forefront of Health Legal Education
Learn more about AHLA and the educational resources available to the health law community at https://www.americanhealthlaw.org/.
This episode of AHLA Speaking of Health Law is brought to you by AHLA members and donors like you. For more information, visit americanhealthlaw.org.
SPEAKER_01:Hello, everyone. This is Rob Gerberry. I'm the Chief Legal Officer of Summa Health and the president-elect designate of the American Health Law Association. I'd like to welcome you to the latest in our continuing series of podcasts on corporate governance issues affecting healthcare organizations. Today's topic focuses on a critically important governance responsibility: the oversight of artificial intelligence as deployed by the company. And as most of our listeners know well, there can be few issues as pressing to the healthcare industry as the use of artificial intelligence to support operations, administration, and patient care services. AI development offers the potential for blazing new trails in how healthcare is managed and delivered. It also creates certain risks with its deployment that need to be monitored closely. The rapidity of change and innovation in this field places great focus on the possible role of the board of directors in connection with its oversight and decision-making responsibilities. And it's a role that has not previously been well defined, but rather has been evolving. For those and similar reasons, we're treating the topic in a special two-part episode of our podcast series to provide our listeners with hopefully useful information on the board's core role, the specific details of that role, how AI can be used in support of board duties, and the board's connection to AI deployment decisions that may have negative implications for the company's workforce. And in that regard, we'll flag for you certain existing resources that may be helpful as you work with your management and board leaders in creating a balanced approach and an effective pathway forward for board involvement. And as always, we're lucky to be joined by our AHLA colleague and fellow, Michael Peregrine, who's also a fellow of the American College of Governance Counsel.
So, Michael, before we dive into this new and important topic, I have a basic question that's been sort of bugging me. We're doing this podcast in late November of 2025, but AI has been around now as an issue for at least three years. Are we a little late to the party in coming up with this podcast series?
SPEAKER_02:I don't think so, Rob, not at all, for a couple of reasons, which kind of go to the complexity of AI. First of all, the question of the board's proper governance role is really far from being settled. And that's an important point for our listeners to take into consideration. The important National Association of Corporate Directors report on this issue didn't come out until, what, October of 2024. Second, there really aren't any statutory guidelines, at least at the federal level, or case law on the subject. Third, many boards really haven't yet resolved their own approach to governance of AI at the board level. Fourth, there continue to be real questions about the board's capacity to address AI issues. Just check out the newspaper reporting, including the Wall Street Journal, on the day we're recording this. Fifth, the trust and reputation issues that have long been associated with AI are never ending. And sixth, human capital issues are now rising to the top with a bullet as it relates to AI deployment. Is that helpful?
SPEAKER_01:So with that list, maybe we should be doing a three-part series. But seriously, what is it about AI that makes it such a challenging board concern?
SPEAKER_02:Well, first of all, I have to disclose I am not a tech specialist. So, you know, my knowledge about the pure efficiency and effective application of AI is going to be limited. But I think looking at it from a governance perspective, it gets simpler. And I think the answer to your question is the sheer complexity of the technology, which may come easily to some board members and certainly to members of the executive leadership team, isn't going to come to all board members. There's just going to be a wide diversity of AI proficiency within an organization. And in my experience, that's just bound to create tension, particularly between the board and management. That's where the problem is. And it's also going to be between the board, the tech managers within the organization, and executives who live constantly in this world, with innovators and the venture capital people and the like, constantly presenting new research and development opportunities and platforms. It can get messy when there is a gap between the knowledge level of the board and the knowledge level of managers who are dealing with this and the opportunities on a day-to-day basis. We've got to close that gap.
SPEAKER_01:So, Michael, with that potential imbalance, doesn't that potentially argue against strong board involvement?
SPEAKER_02:Well, we can't let that be the prevailing thought. The simple, inescapable fact is that in a corporate entity model, whether it's a for-profit or not-for-profit organization, the board of directors can and, I think, must play an important role in the healthcare company's approach to machine learning technologies like AI, and in the mitigation of the associated risk. Period, exclamation point. We just can't let that narrative proceed, even though it's a logical one. And I think the urgency, Rob, is that the pace of AI evolution is so great that the board, if it delays at all in developing a corporate governance structure, can fall behind in proficiency. Then it's always catching up, and it's not in a position to truly evaluate the tough AI call. When that happens, management loses confidence, and things spiral down.
SPEAKER_01:So maybe, Michael, as we take a step back and just think about building that foundational model for a board, what do you think it looks like as it relates to the oversight of AI?
SPEAKER_02:Well, I think it reflects the traditional oversight, decision making, and information flow themes that are consistent with the core fiduciary duties that we always talk about. But I think the basic model should be broad enough in coverage to include things like core strategy development, research, acquisition, investments and partnerships on innovation, deployment, compliance and risk, trust and reputation, and human capital, and more on that later. Again, I think the board's governance structure should have a finger on the pulse of all of those matters. And I think it also should be designed to supplement the AI supervision already in place at the operational level, and it should help communicate to corporate constituents how AI is being used responsibly in the company's operations and the provision of healthcare-related services. So oversight, communication, decision making, a lot of the same traditional themes, really zeroed in on how AI operates in the organization.
SPEAKER_01:Well, with that answer, you make it all seem simple enough. So what is the barrier then to getting such a structure put in place, since it seems really consistent with how a lot of boards are already structured?
SPEAKER_02:Well, I'm going to get in trouble here, but, you know, part of the problem lies in a couple of factors. One is the absence of any all-encompassing AI regulatory framework at the federal level. Not making a political statement here, but, you know, we went from one administration that was very focused on having specific regulations at the federal level and a comprehensive approach to AI regulation, to another which is focused on eliminating barriers to initiative and development of AI. Second of all, we've got the release of the NACD Blue Ribbon Commission report that I mentioned before. That was a real game changer. Before that time, we never had any real accepted best practices for AI oversight by the board. So again, we've got the alpha, the absence of any kind of real federal scheme or framework for AI regulation, and the omega, that we finally got some guidance at a best practice level on how boards should approach AI oversight. Both of those combine, I think, to really force specific evaluation of what the board's doing, and it gives them a pathway forward. But I think all of this has been compounded by, and again, I'll get in trouble, but there's a significant lack of internal and external appreciation for the contributions that the board can make to address AI risks and strategies. And you see a lot of that in today's Wall Street Journal story, today being, what is it, November 12th, when we're taping this, where you have a lot of experienced directors saying, I'm having a hard time understanding this stuff. That's a problem.
SPEAKER_01:So, with that lack of understanding, how do we move forward? How do we get everyone to appreciate the need for AI and then the oversight of it?
SPEAKER_02:Well, I think you've got to drill down with the people on the operational side to build an awareness of why the board has a real stake in this game. That's the biggest challenge, because I think in some organizations there's a fair amount of resistance from corporate leaders to a strong board role. If we can't overcome that, if we can't make the sale, so to speak, on why the board should be involved, we've got huge problems. You know, I see this in my experience. We see the resistance coming from tech leaders, researchers, scientists, developers, the innovation people within the organization, all these folks who believe, and maybe with some amount of justification, that any kind of corporate monitoring system will needlessly frustrate innovation and the competitive advantage, and therefore, in our business, be a detriment to the delivery of health care in the most efficient way. It's a theme that also shows up, as I said, in the approach to federal regulation of AI, i.e., the more regulation, the more that critical progress and innovation will be stifled. That is a problem. So my point is, we have got to be able to make the sale that corporate governance has a critical role to play, that it's a meaningful role, and that the board is up to the challenge. But as I said, some of the criticism is fair, especially with respect to board proficiency. As I said, the Journal story today, where you have some prominent and seasoned board members express their own frustration with the complexity of AI, that's the essence of the problem. And we go back to the NACD report, and we'll talk about that more later. It starts off, you know, what's the pathway to effective board governance? Proficiency, proficiency, proficiency. But the system just doesn't work otherwise. The checks and balances we need to run an organization will not work without some board role with respect to AI. It's just that simple.
SPEAKER_01:So it sounds like the board chair, the chief legal officer, and others may have to play a role in bringing along some of the naysayers and stressing the importance to the rest of the board.
SPEAKER_02:Well, yes, and I think the chief legal officer should get battle pay for engaging in that role. But I think there's a couple of approaches that you take to sell this idea of board oversight to the naysayers. I think first is the argument that there absolutely must be a system of checks and balances with respect to AI deployment in order to make sure that stakeholder interests are served, that compliance is effected, and that the critical issues of trust and reputation are addressed. And that can't come from the operational side; it must come from folks who have a fiduciary obligation to the organization. You know, without that kind of oversight, the company, the healthcare provider, is flying blind into a thunderstorm of risk. Second, the argument you make is that in the absence of any meaningful federal regulation on AI use, at least at this point, it falls on the board and its Caremark obligations to make sure that there's a system of compliance and risk oversight that protects the organization through appropriate policies, procedures, and practices. Try saying that three times fast sometime, Mr. Gerberry, right? And then there's the liability exposure to the board should it fail to exercise some level of oversight. If it were to just basically say, we wash our hands, this is too complex, and we're walking away, that would be catastrophic. There's no D&O coverage in the world that's going to cover you when you just throw up your hands and say this is too complex, we'll let others deal with it.
SPEAKER_01:Not to mention it seems like this responsibility is consistent with best practice.
SPEAKER_02:Well, yeah, that's absolutely true. And listeners to our podcast, Rob, will remember I'm not wild about throwing out the term best practice lightly, because, as we know, there's no real practice or process to confirm its existence. I'm not issuing a formal legal opinion that something is the best practice. But I do kind of evaluate this from the perspective of who's preparing the analysis, where is it coming from, who's adopting it? And that's why, again, the NACD report from last year, which I think is called Technology Leadership in the Boardroom, is so significant. It came out in October, and it's the byproduct of a really notably diverse blue ribbon commission consisting of leaders from tech, finance, management, military, consulting, higher ed, insurance, and the law. So you've got the people in place who are saying, here are the aspirational goals. This isn't a consulting firm survey, as valuable as those are. It's the input of industry leaders, and they're providing recommendations as well as a toolkit for moving forward. So does that constitute best practice? It's the best we have. It is absolutely the best we have, and I think it's a very credible board resource. Is that dodging the question?
SPEAKER_01:I don't think so, because by my read, that NACD report does make it clear, as an affirmative statement, that it supports the board's role in this oversight function.
SPEAKER_02:It really does. And I don't get any piece of the action for membership in NACD, let's get that straight. I wish there were other statements out there from some of the other thought leadership organizations, but there aren't. It's what we have to deal with. But the report's overarching conclusion is this: in the current environment, effective corporate governance has a significant impact on whether and how new technologies will drive value creation and will be or won't be accepted by organizations, economies, and societies. Period, end of quote. That is the message that has to be delivered internally. There is a tremendous value proposition to effective corporate governance oversight.
SPEAKER_01:For our listeners who haven't dug through all the pages of this lengthy report, can you give them a CliffsNotes version?
SPEAKER_02:Yeah, it's long and it's detailed, but it offers, at least to me, and of course I may have a different definition of marvelous, a marvelous pathway forward. In a nutshell, the report focuses on three imperatives. This is the pathway, this is the direction forward, this is the suggestion about how a board can go ahead and implement effective governance. The first imperative, or the first step, is strengthening oversight. So they say, let's ensure trustworthy technology by aligning it with the organization's purpose and values. That's not touchy-feely stuff. What does your statement of corporate values mean, and how does your AI use align with it? We'll come back to that, maybe in our second episode. It means upgrading board structures with expertise on technology governance. It means clearly defining the board's role in data oversight. And I think this is particularly important, Rob: it defines decision-making authorities for technology. When do you have to go upstream to the board and management levels? So the lines of authority are clear. The second is what they call deepening insight. I call it, you know, proficiency is everything. It says, all right, first we want to establish and maintain the tech proficiency that's critical among the board. And that could be a long slog, but it has to happen. You evaluate it periodically. There was an interesting news story the other day about a corporate executive, the CEO of a major, major company, which basically says, we expect you, the employees, to come up to speed and train yourself and update yourself so you will be useful and AI-knowledgeable and help our company going forward. You know, if you don't, all bets are off. So it's appropriate that we turn that lens on the board as well.
Through your evaluations, have you been learning and training and teaching yourself, picking up on the education for AI? And then have appropriate and clear metrics for the oversight of technology by the board. You're giving the board a framework, a dashboard, whatever, on how to measure it. That's the insight factor. The third factor is kind of a threefold one. The board and management agree that tech generally is a core element of the long-term strategy of the organization, and why. Then you enable exploratory board and management technology discussions. People talk together, they get together, they share ideas and thoughts. Then, and this is the kind of interesting one that I like, you design board calendars and agendas. You kind of hardwire it into the system to make sure there's going to be appropriate focus on forward-looking discussions. We build in, we come back, we remind: this is going to be on the agenda for the next five, ten meetings or whatever. I think that's the three-pronged approach that NACD takes. It's not the only approach, but again, as we said before, in terms of thought leadership, it's all we've got. But it's substantive.
SPEAKER_01:Well, Michael, that's probably as much AI as our human listeners can handle for one podcast session. So let's go ahead and hit the pause button. We'll return again for part two of this series, where we'll press more on how the governance role is structured and what it looks like, and then we'll also look at emerging issues in this space. So, Michael, thank you very much for the kickoff to this podcast series.
SPEAKER_02:Uh, look forward to that. What a cliffhanger, huh?
SPEAKER_01:That's right. Thank you.
SPEAKER_00:If you enjoyed this episode, be sure to subscribe to AHLA Speaking of Health Law wherever you get your podcasts. For more information about AHLA and the educational resources available to the health law community, visit americanhealthlaw.org, and stay updated on breaking healthcare industry news from the major media outlets with AHLA's Health Law Daily Podcast, exclusively for AHLA Premium members. To subscribe and add this private podcast feed to your podcast app, go to americanhealthlaw.org/dailypodcast.