The Clinical Realist

What Physicians Actually Need to Know About AI Contracts Before They Sign Anything

Season 1 Episode 9

A physician executive signed a clinical validation form. Eleven months later, she was the one accountable in peer review after an AI-assisted recommendation contributed to a delayed diagnosis. She had not read the indemnification clause. Almost no physician does.

In this episode, Dr. Sarah Matt breaks down the four contract clauses that matter most to physicians in AI procurement, the five questions every physician executive needs answered in writing before their name goes on any clinical validation document, and where physician executives have actual leverage in AI contract negotiation and when that leverage disappears.

What you will take away:
- The four contract clauses that define your liability in an AI deployment
- Why your name can appear in a vendor contract you have never seen
- Five questions to get answered in writing before you sign any clinical validation document
- Where physician executives have genuine leverage in AI contract negotiation, and when that leverage expires

Website: https://drsarahmatt.com | Book a conversation: https://calendly.com/sarahmattmd | LinkedIn: https://www.linkedin.com/in/sarahmattmd/



Resources & Links:

📖 Get the Book: "The Borderless Healthcare Revolution" is available now on Amazon and major retailers.

💼 Work with Dr. Matt:
Looking for a keynote speaker or strategic advisor?
Visit: drsarahmatt.com

🔗 Connect on Social:
LinkedIn: https://www.linkedin.com/in/sarahmattmd/
YouTube: www.youtube.com/@DrSarahMatt

📧 Subscribe to The Briefing: drsarahmatt.com/newsletter-signup
 

Disclaimer:
The views expressed on this podcast are those of Dr. Sarah Matt and her guests. They do not necessarily reflect the official policy or position of any affiliated institutions. This content is for informational and educational purposes only and does not constitute medical advice or a professional consulting relationship.

SPEAKER_00

A physician executive at a mid-sized health system called me after their AI-assisted diagnostic tool produced a recommendation that contributed to a delayed diagnosis. The vendor's response was very textbook. The tool performed within its validated parameters. The physician was the one accountable in the peer review. She had signed the clinical validation form 11 months earlier. She had not read the indemnification clause, because almost no physician actually does. That conversation is why I'm recording this episode.

Welcome back. I'm Dr. Sarah Matt, and this is the show for physician executives and health system leaders navigating the real operational and strategic implications of AI in clinical settings. Today's episode is about AI contracts. Specifically, what's in them that physicians need to understand before their name goes on the clinical validation line. This is not a legal lecture. I'm not a lawyer, thank goodness, and nothing here is legal advice. What I am is someone who has sat in rooms where the aftermath of those contracts landed squarely on physicians who had no idea what they had agreed to. So if you're a CMO, a CMIO, a physician executive, a CNIO, a CHIO, whoever you are, if you're a clinician executive who has been asked to sign off on an AI deployment in the past 18 months, this episode is for you.

So let me describe a procurement workflow that will probably sound very familiar. IT and legal negotiate the contract, finance signs off on the spend, and the clinical executive is brought in to validate clinical appropriateness and sign an internal attestation. None of that internal process is the actual vendor contract, but the vendor contract almost always includes language that assigns post-deployment clinical oversight responsibility to a named physician, physician role, or other clinician. And that person becomes the accountable party when performance questions arise.

So why don't physicians read the vendor contracts? Well, there are three big reasons. First, they're long. Second, they're written in technology and commercial law language that takes actual legal training to parse. And third, the physician is typically given 72 hours to validate a tool that took the vendor six months to build a business case for in the first place. There's no structural space in that process for the physician to actually engage with the contract appropriately. So the assumption is that legal has handled the legal parts, and legal has, for the business terms. But what legal has not always handled is the clinical accountability structure, because legal is not in a position to assess clinical risk.

So I want to give you a concrete example of how this plays out. Now, I'm using the term physician executive, but again, we all know that other clinicians may be in this spot, whether you're nurses, physical therapists, pharmacists, you name it. So remember that as I go through this talk. A physician executive was asked to sign off on an ambient documentation tool. She had 48 hours' notice, a demo that showed impressive workflow and efficiency numbers, and an internal attestation form from her CMO's office. She signed it. Eighteen months later, when a documentation gap in a complex patient case was traced to the AI's transcription of a physician's verbal assessment, she was the named clinical oversight officer in the vendor contract. The vendor's position was that the tool had transcribed accurately.
The clinical judgment about what to transcribe and verify remained the physician's actual responsibility. She hadn't known that she was named in the vendor contract at all, much less as the clinical oversight officer. And that's not a vendor betrayal. That's what the contract said. She just had not read the contract.

So the accountability mismatch that this creates is not subtle. You can be named in a contract you never saw for a deployment you had limited authority over. And that combination, accountability without visibility, responsibility without genuine authority, that's the structural problem this episode is designed to help you prevent. So you don't need to read every page of an AI vendor contract. You absolutely do need to know what to look for in four specific areas.

The first is indemnification and liability allocation. Who's responsible when the tool produces a bad output and that output actually influences a clinical decision? Almost every AI vendor contract contains language that says the vendor is absolutely not liable for clinical decisions made using the tool's output, because clinical judgment remains the physician's responsibility. And that is standard and, in isolation, somewhat reasonable. The physician is the one with the license and the clinical training. What's not standard, and what most physicians never see coming, is the contract language that names the physician validator personally as a clinical accountability layer rather than the organization. So when a peer review or regulatory inquiry traces a patient outcome to an AI tool, there's a significant legal difference between the accountability being held by your institution and the accountability being held by the physician whose name appears on the validation form. Know which one you're signing.

The second clause is validation scope and limitation. The vendor validated their tool in a specific population, on a specific data set, under very specific conditions. And that validation scope is in the contract. When your organization deploys a tool outside that scope, so a different patient population, a different EHR integration, a different workflow context, the vendor's validated performance data no longer applies. And the physician who signed the clinical validation form did so against the vendor's stated validation scope. So if deployment has drifted beyond that scope since the form was signed, and of course it's going to, the validation form is now essentially meaningless. And the physician is still the one accountable.

The third clause covers post-deployment performance monitoring requirements. Most AI vendor contracts include a requirement for ongoing clinical performance monitoring. The question is, who owns that monitoring obligation? If the contract names the CMO's office, that's an organizational commitment. If it names a specific role and that role turns over, the monitoring obligation does not automatically transfer to whoever fills the role next. In practice, many health systems sign contracts with explicit monitoring obligations and then stand up whatever monitoring structure they have bandwidth and budget for, which is often not the structure the contract specified. And the absence of required monitoring can become a liability amplifier when you need the documentation trail to demonstrate due diligence.
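To make that last point concrete, here is a minimal sketch, in Python, of the kind of mechanical check an organization could run between what the contract's monitoring clause requires and what is actually being done. Everything in it is a hypothetical illustration: the field names, the cadences, and the roles are assumptions for this sketch, not language from any real vendor contract.

```python
# Minimal sketch: compare the monitoring terms a contract requires against
# what the organization actually runs. All names and values below are
# hypothetical illustrations, not language from a real vendor contract.

from dataclasses import dataclass

@dataclass
class MonitoringTerms:
    cadence_days: int   # how often performance reviews must occur
    recipient: str      # the named role the reviews must reach

def monitoring_gaps(required: MonitoringTerms, actual: MonitoringTerms) -> list[str]:
    """Return human-readable gaps between contract terms and actual practice."""
    gaps = []
    if actual.cadence_days > required.cadence_days:
        gaps.append(
            f"Cadence gap: contract requires reviews every {required.cadence_days} "
            f"days; actual practice is every {actual.cadence_days} days."
        )
    if actual.recipient != required.recipient:
        gaps.append(
            f"Routing gap: contract names '{required.recipient}'; "
            f"reports actually go to '{actual.recipient}'."
        )
    return gaps

# Hypothetical example: monthly reviews to the CMO required, quarterly
# committee reports actually delivered.
required = MonitoringTerms(cadence_days=30, recipient="CMO")
actual = MonitoringTerms(cadence_days=90, recipient="Quality Committee")
for gap in monitoring_gaps(required, actual):
    print(gap)
```

The point is not the code. It is that monitoring clauses are usually specific enough to be checked mechanically, and the comparison is far cheaper to run before an incident than after one.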
The fourth is the black box access clause, or more accurately, the lack of one. When something goes wrong with an AI output, can your organization actually access the model's reasoning? Some contracts explicitly limit explainability. Others are simply silent on it, which creates ambiguity at the worst possible moment. So if your contract does not explicitly give you the right to request an explanation of how the model produced a given output, you may not have that right in a peer review or regulatory context. And that's a negotiating point that almost no health system thinks to raise before signing, because it only becomes urgent after something has already gone terribly wrong.

The clause I see causing the most consistent confusion in health system conversations is the monitoring clause, specifically the gap between what the contract requires and what the organization has actually built. I've been in more than one conversation where the CMO was learning in real time that their vendor contract specified monthly clinical performance reviews routed to a named executive, and the organization had been submitting quarterly reports to a committee that was not the named executive. So the contract had been out of compliance in letter for two years while being compliant in spirit. That's the kind of gap that looks very different before an incident than it does after one.

So you don't need to become a technology lawyer. Heaven forbid, we don't need more technology lawyers. You need five specific questions answered in writing before your name is on any clinical validation document.

Question one: what is the validated scope of this tool? Specific means the specific population, the specific data set, and the specific workflow context the vendor used in their validation studies. In writing. If the vendor can't answer this question with precision, that's not a detail gap. That's a signal about the rigor of their validation process. You're being asked to validate clinical appropriateness, and you can't do that without knowing what the clinical appropriateness claim is actually based on.

Question two: what happens to our organization's clinical accountability if the tool is deployed outside its validated scope? Ask your legal team, not the vendor. The vendor's answer will reflect optimism about their product's generalizability. Your legal team's answer will reflect the contract terms, and those are often significantly different. So you want the legal answer before the deployment is planned, not after the expansion is approved.

Question three: who in this organization is named in the vendor contract as the clinical accountability owner, and do they know? This is often a CMO role placeholder that the current CMO has never seen in writing. It's not uncommon for a physician executive to be contractually named as the clinical accountability owner for a deployment they joined 18 months after the contract was signed, because the role was named rather than the individual. And that physician has no context for the original validation conditions, no institutional memory of the procurement conversation, and full contractual accountability. So the answer to this question should be a specific named person who has seen the contract language that names them.

Question four: what are the post-deployment monitoring requirements in the contract, and has the organization already built the infrastructure to meet them? The second half of that question is the one that almost never gets asked.
Organizations sign contracts with monitoring requirements and then build whatever monitoring structure they have bandwidth for. And if the answer to the second half of that question is that you'll figure it out, well, that's a flag, especially because the monitoring requirement is usually what produces the documentation trail that either demonstrates due diligence or establishes the absence of it.

And question five: if this tool contributes to a patient harm event, what does our vendor contract say about our right to access the model's reasoning for peer review? Ask your legal team to point you to the specific clause before you sign, not after. This question is uncomfortable to ask because it sounds like you're anticipating failure. You're not. You're doing the same thing you would do when you review a consent form or verify a medication before administration. You're doing the work that the situation actually requires.

So let me give you a specific example of what happens when these questions are not asked. A health system deployed a clinical decision support tool for sepsis screening in their ER. The pilot results were strong. Expansion was approved at the board level based on the pilot summary, and the vendor contract named the CMIO as the clinical accountability officer. The CMIO in that role had been there for eight months; the contract had been signed two years earlier, under the previous CMIO. The post-deployment monitoring requirement in the contract specified weekly performance reviews routed to the CMIO directly. The actual monitoring structure that had been built was a monthly committee report that did not go to the CMIO at all. So when a sepsis screening failure was traced to a model drift event, the review revealed that the drift had been visible in the performance data for 11 weeks. The weekly monitoring the contract required would probably have caught it at week two. The current CMIO was named in the contract, and the monitoring gap was documented. The combination of those two facts defined what the subsequent months looked like for that physician and that institution. None of the five questions above had been asked when the contract was signed.
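To show why the cadence in that story mattered so much, here is a minimal sketch of a weekly performance review against a validated baseline. The metric, the threshold, and the numbers are all hypothetical assumptions for illustration; a real monitoring program would define them with the vendor and with clinical governance.

```python
# Minimal sketch of a weekly drift check against a validated baseline.
# The metric, threshold, and numbers are hypothetical illustrations.

BASELINE_SENSITIVITY = 0.90  # performance the tool was validated and accepted at
ALERT_THRESHOLD = 0.05       # how far below baseline triggers escalation

def review_week(week: int, sensitivity: float) -> None:
    """Flag any week where observed sensitivity drifts below the agreed floor."""
    if BASELINE_SENSITIVITY - sensitivity > ALERT_THRESHOLD:
        print(f"Week {week}: sensitivity {sensitivity:.2f}, escalate to named owner")
    else:
        print(f"Week {week}: sensitivity {sensitivity:.2f}, within tolerance")

# Illustrative drift pattern: under a weekly cadence this escalates at week 2.
for week, value in enumerate([0.90, 0.84, 0.82, 0.79], start=1):
    review_week(week, value)
```

Under a weekly cadence, that escalation fires in week two. Routed monthly to a committee that is not the named owner, the same numbers can sit unread while the drift compounds.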
So here's something that's true and rarely stated clearly. Physician executives have significant leverage in AI contract negotiations. Most don't use it. Not because they lack the standing to do so, but because they have been brought into the process way too late to deploy it effectively. AI vendors need physician executive sign-off to close health system deals. That's genuine leverage, not symbolic influence. Health systems that can't demonstrate physician leadership in AI governance are increasingly unable to get board approval for AI investment, and vendors know this. So the physician executive's imprimatur is part of the business case. The question is when you deploy that leverage and when you don't.

Post-term sheet, you have almost no leverage whatsoever. The business case has been built, the executive relationships have been established, and the vendor has already invested significant time and resources in reaching the point of a term sheet. So asking to renegotiate indemnification language at that stage will essentially be treated as a procurement delay, not a legitimate clinical concern. You'll be managed rather than heard.

Now, pre-RFP, you have real leverage. If the physician exec is involved in defining the vendor evaluation criteria before the RFP goes out, the clinical performance requirements, the accountability structure, the explainability standards, those requirements become part of the vendor's proposal and the eventual contract negotiation. So you're not asking for changes to a contract that's already structured. You're defining the terms the vendor is agreeing to when they choose to respond.

So the practical implication is that a specific organizational conversation needs to occur. If you're not being brought into AI procurement before the RFP is issued, that's the conversation to have with your CEO and COO. Not "I want to be involved earlier." That framing asks for a favor. The framing that works is: here is the clinical accountability structure that will govern this deployment. Here is what happens to this organization's liability posture if that structure is not defined before the vendor relationship is established. Here is what I need to be in a position to protect both the organization and our patients. That's not asking to be included. That's defining the conditions under which your accountability can actually be meaningful.

I have gone into a procurement cycle early enough to shape the RFP once, and I have gone in late enough to only review a finished contract twice. And I will tell you the difference. Early involvement looks like this: I was brought in before vendors were solicited. We spent two hours defining what clinical accountability would mean for this type of deployment, what explainability requirements we would include, and what monitoring structure the contract would need to specify. And those requirements went right into the RFP. Three of the four vendors who responded had not faced those requirements before. Two adjusted their contract terms to meet our specifications before the first demo, and one declined to respond. The fourth had robust documentation for every requirement we listed. And that fourth vendor got the deal, partly because their documentation made the legal and clinical review significantly faster. That's what early involvement looks like. It's not slower, it's faster, because the accountability structure is defined before the first demo, not negotiated after the contract is drafted.

So nothing I've said to you today requires you to become a technology procurement expert. What it requires is a specific posture shift, from "I will validate what has been selected" to "I will define the clinical accountability requirements before selection begins." The contracts are written by people who understand tech and commercial law. They're not written by people who understand the daily reality of clinical operations. They don't understand workflow pressure or what happens to a physician's career when a peer review goes the wrong way. That's your expertise, and the only way it gets into the contract is if you are in the room when the contract is being shaped. That conversation about who gets to be in the room and when, that's what episode eight was about. So if you haven't listened to that one, go back and start there.

I'm Dr. Sarah Matt. New episodes drop every Wednesday. And if this was useful, share it with a physician exec who's heading into the AI procurement cycle, or forward it to the CMO who just got handed a vendor contract to validate. They're gonna thank you.