Ignition by RocketTools

What AI Prior Authorization Actually Looks Like — And Why It Will Demand More From Providers, Not Less

Dan McCoy, MD Season 1 Episode 5

Everyone's pitching AI as the solution to prior authorization. And they're right — the technology is about to solve it. Ambient scribes capturing every detail. Clinical decision support guiding every order. Automated systems submitting perfectly optimized requests. Approval rates heading toward the high 90s.

But here's what nobody's talking about: what happens to healthcare costs when a system designed around 15-20% of requests getting denied suddenly starts approving almost everything?

In this episode, I break down the three distinct layers of AI in prior authorization — and why most people are lumping them together when they have very different implications. I dig into a UCSF study showing physicians using AI scribes saw a 5.8% RVU increase with no rise in claim denials. I explain why over 80% of appealed denials get overturned, but only 12% are even appealed — revealing that prior auth was never really about clinical evaluation.

And I make the case that once AI solves the coding problem, the question shifts from "did you code this correctly?" to "should you have ordered this at all?"

The end game isn't faster paperwork. It's AI evaluating medical judgment. The bar is going up, not down.

Full source list and research citations available on Substack.

SPEAKER_00

Not a week goes by where someone doesn't tell me how AI is going to fix prior authorization. Whether it's a vendor, a consultant, or a carrier rep, everyone's got the same pitch: AI scribes, automated submission, instant approvals, problem solved, right? And look, they're not wrong about the technology. AI really is about to solve prior authorization. Not make it a little faster, but actually solve it. Ambient scribes are capturing every detail, clinical decision support is guiding every order, automated systems are submitting perfectly optimized requests, and approval rates are heading toward the high 90s. But what nobody in that conversation is telling you is what comes next. What happens to healthcare costs when a system that was designed around 15 to 20% of requests getting denied suddenly starts approving almost everything? Here's what we're going to dig into today. First, there are actually three distinct layers of AI in prior authorization that most people lump together. Understanding the difference matters because they have very different implications. Second, AI-driven coding optimization is not upcoding, but it is raising costs. And I've got data from a UCSF study published in JAMA last month that quantifies exactly how much. Third, and this is the part that most people miss: once we hit 100% coding efficiency, that's not the end of the story. The question shifts from "did you code this correctly?" to "should you have ordered this at all?" And AI is going to start evaluating physician medical judgment, not just paperwork accuracy. Here's why this matters right now. I spoke at a carrier meeting in Texas over a year ago, and when I started talking about this coming wave of AI-generated prior authorizations, I got blank stares. These carriers, the people whose job it is to evaluate whether a procedure should be approved, were completely unprepared for what was about to hit them.
And now Health Affairs, one of the most respected healthcare policy journals, published a paper in January 2026 literally calling this an AI arms race between providers and payers. Providers are using AI to get everything approved, and payers are using AI to deny faster. Patients are now using AI-powered tools to appeal denials. It's AI versus AI versus AI. And somewhere in the middle of all this, employers are seeing their healthcare costs hit multi-year highs. Mercer says we're approaching $18,500 per employee. Willis Towers Watson projects a nearly 10% trend increase for 2026. And nobody is connecting the dots between AI efficiency on the provider side and the cost explosion on the employer side. So let's dig in and see what we can find out. Most people hear "AI and prior authorization" and think about one thing: automating the submission. Fill out the form faster, send it electronically, get a quicker response. But that's only the third layer, and the first two layers are arguably more important and more consequential. The first layer is ambient clinical intelligence: AI systems that sit in the exam room and listen to the entire patient encounter. Companies like Nuance (now Microsoft) with Dragon Copilot, Abridge, Ambience Healthcare, Suki, DeepScribe. These systems passively record the conversation between the doctor and the patient and generate the clinical documentation automatically. Here are the numbers. The Permanente Medical Group, one of the largest health systems in the country, documented over 2.5 million patient encounters using ambient AI scribes in their first year alone. That saved over 15,000 hours of documentation time across 7,000 physicians. Here's the thing that jumped out at me: the ambient scribe doesn't just save time, it captures far more clinical detail than a human ever could. A doctor writing notes at the end of a 15-minute visit is going to miss things. They're going to underdocument.
The AI captures the full conversation: symptoms mentioned in passing, questions asked, clinical context that a rushed physician would never write down. And more complete documentation means higher coding complexity. A UCSF study published in JAMA Network Open in January 2026 found that physicians using AI scribes saw a 5.8% increase in RVUs. That's the unit that drives reimbursement. That translates to an extra $3,044 per physician per year. And here's the critical finding: there was no increase in claim denials. The documentation generally supported the higher codes. This isn't fraud, and I've had people try to tell me that it is, but it really isn't. This is capturing complexity that was always there but was never documented. The second layer is clinical decision support at the point of ordering. Let me give you a concrete example. A patient comes in with back pain. Without AI, the doctor might refer them directly to a spine surgeon. The prior authorization gets submitted and the insurance company pushes back: has the patient tried physical therapy first? Denied. We've probably all seen that. With AI-powered clinical decision support, the system flags this at the moment the doctor is making the decision. It says, based on evidence-based guidelines, the recommended pathway is six weeks of physical therapy before surgical referral. Now the doctor follows the guideline pathway. When they do eventually order the MRI or the surgical consult, the prior authorization submission already includes the documentation that conservative treatment was tried first. The approval rate goes way up. Bale Clinic redesigned their prior authorization workflows around this principle. They published the results through Epic Share in September 2025: denial rates dropped by 70%, and staff productivity improved by 39%. Think about that. But here's the nuance.
Many clinicians weren't actually following established medical guidelines before AI prompted them. So the clinical decision support didn't just make the paperwork better. It actually changed clinical behavior at the point of care. And that's a fundamentally different conversation than just automating forms. The third layer is what most people think about: the actual automation of prior authorization submission. This is where companies like Cohere Health, Rhyme, Humata Health, and now Optum are playing. These systems take the documentation generated by the ambient scribe, enhanced by the clinical decision support pathways, and automatically package and submit the prior authorization request. The results are striking. Cohere Health reports auto-approval rates of 50 to 90% depending on specialty. Optum launched a new platform in February of 2026, just a month ago, targeting somewhere around a 96% first-pass approval rate. And at least one company, Jory AI, claims they've driven prior authorization denial rates down to 0.21%. Let that sink in. And here's what makes this really interesting. In August 2025, Abridge partnered with Highmark Health to do something nobody had done before: use the ambient scribe data to trigger real-time prior authorization at the point of conversation. Think about that. The insurance approval happens while the doctor is still talking to the patient. And that was deployed to 14 hospitals. That's not a pilot, that's live. So when you put all three layers together, you get a system that, one, captures the vast majority of clinical detail from the encounter; two, guides the physician to follow evidence-based pathways; and three, automatically submits a perfectly optimized prior authorization request. And that's why approval rates are heading toward 100%. Not because the system is gaming anything, but because it's doing what the system was always supposed to do.
Document accurately, follow guidelines, and submit a complete request. The problem is that this system was never designed for 100% efficiency. I'm going to be honest with you. When I first started looking at this, I thought the upcoding angle would be the story: AI scribes inflating codes, gaming the system, driving up costs through fraud. But that's not what's happening. What's happening is more interesting, and in some ways more dangerous, because it's completely legitimate. Let's go back to that UCSF study: a 5.8% RVU increase per physician, $3,044 per year, no increase in denials. That tells us the coding increases are defensible. The payers aren't pushing back because the documentation supports the higher codes. This is genuine optimization, capturing the full complexity of what was happening in the exam room. So let's do the math. There are roughly a million physicians in America. If half of them adopt AI scribes over the next three to four years, and with the ambient scribe market estimated at around $600 million and more than doubling year over year, that's not unrealistic. That's an additional $1.5 billion per year in higher reimbursement, just from better documentation of what was already happening. And that's before you add the prior authorization effect. If historical denial rates of 15 to 25% drop toward zero, that's 15 to 25% more procedures being approved. Each one of those is real spending, by the way. We're talking imaging, surgery, specialist referrals that were all previously being blocked or abandoned. I want to be really clear about the distinction here. This is not upcoding. Upcoding is billing for a higher level of service than was actually provided. That's fraud under the False Claims Act. And the DOJ just had one of their biggest healthcare fraud takedowns in 2025: over 300 defendants, more than $10 billion in intended losses. So they're watching what's going on. What AI scribes are doing is different.
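The "$1.5 billion" figure a moment ago is just multiplication, and it's worth making the arithmetic explicit. A minimal sketch, using only the round numbers quoted in the episode (not market data):

```python
# Back-of-envelope check of the "$1.5 billion per year" claim.
# All inputs are the approximate figures cited in the episode.
physicians_total = 1_000_000      # rough count of US physicians
adoption_share = 0.5              # half adopt AI scribes over 3-4 years
extra_per_physician = 3_044       # UCSF/JAMA estimate: added annual reimbursement ($)

added_annual_spend = physicians_total * adoption_share * extra_per_physician
print(f"${added_annual_spend / 1e9:.2f} billion per year")  # ≈ $1.52 billion
```

Note this counts only the documentation effect; the prior-authorization effect (15 to 25% more approved procedures) would sit on top of it.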
They're accurately documenting what was always happening but being undercaptured. The visit was complex; the doctor just didn't have time to document it fully. Now the AI does. But, and this is the part most people miss, the economic effect is the same. Whether the codes go up because of fraud or because of legitimate optimization, the employer writing the check sees the same thing: higher cost. So let me put this into context. Employer healthcare costs are at a multi-year high. Mercer's 2025 national survey shows costs at over $17,000 per employee, up 6%. I'm actually hearing way more than that from some employers, 10%, 15%, so I wouldn't get hooked on that 6%. I actually think it's pretty bad. Aon's projecting 9.5%. WTW says 9.6%. The Business Group on Health says 9%. Whatever the case, these numbers are huge, numbers I've never seen in my entire career. Everyone attributes this to GLP-1 drugs, Ozempic, Wegovy, Mounjaro, and to specialty drugs and high-cost claimants. Those are real factors, don't get me wrong. But here's the analytical gap. Not a single major consultant, not Mercer, not Aon, not WTW, is explicitly breaking out AI-driven coding optimization as a discrete cost driver in their public trend reports. It may be buried inside medical inflation or provider rate increases, but it's not being measured separately. And I think that's a blind spot. The npj Digital Medicine policy brief from December 2025 says it plainly, and this is a direct quote: the business case increasingly centers on revenue capture through more intensive coding. When the product being sold to your doctor's office is explicitly positioned as a revenue optimization tool, and the cost of that revenue optimization flows through to your employer health plan, someone should probably be tracking that. So here's the question I keep coming back to. Is this a permanent escalation or a one-time step function?
I think it's a step function, and here's why. Right now, coding efficiency across the healthcare system is probably somewhere around 70 to 80%. Doctors are leaving money on the table because they don't have time to fully document complex visits. AI scribes will raise that to 95 to 100%. That's a one-time 20 to 30% revenue increase. Once you're at 100% capture, there's nowhere else to go. The rate of increase should slow down once saturation is reached. But the baseline, the per-visit cost of healthcare, is permanently higher. You've made a step increase. Think of it like a staircase. We're climbing a step right now, a big step. Once we reach the top of that step, the floor is higher than it was before. We're not going back down, but we're also not going to keep climbing at this rate. At least I don't think so. JAMA Health Forum acknowledged this in January 2026. They suggested that if ambient scribe adoption increases spending through coding intensity, regulators should consider applying automatic downward adjustments. That's the research community saying this is a recalibration and we might need to reset the math. But here's why the equilibrium might be harder to reach than it sounds. Payers aren't sitting still. Cigna has already started automatically downcoding level 4 and level 5 E/M claims unless the documentation clearly supports the higher complexity. So now you've got provider AI coding up against payer AI downcoding, and HFMA published a piece in January 2026 describing what they call the battle of the bots. Those are their words, not mine. It's escalating technology expenditure on both sides of the same transaction, the opposite of reducing administrative cost. And it gets worse. It's not even a two-sided arms race anymore. Patients are now in the fight. A nonprofit called Counterforce Health builds AI tools that help patients generate appeal letters when their claims get denied.
Multiple AI chatbots recommend it as the number one tool for fighting insurance denials. And I've actually seen patients do this myself. So we've got AI optimizing submissions, AI reviewing submissions, and AI appealing denials. Three layers of AI arguing with each other about the same procedure, while the patient is still sitting in the exam room wondering if they can get their MRI. So is that the end of the story? AI optimizes coding, costs go up, everyone adjusts, we find a new normal. I don't think so. And here's where I want to push into territory that most people aren't exploring yet, because the conversation is about to shift from coding accuracy to medical judgment. See, up until now, prior authorization has been fundamentally a paperwork exercise. Did you fill out the form correctly? Did you include the right clinical documentation? Did you follow the administrative process? AI has essentially solved that problem. If ambient scribes capture everything, clinical decision support guides the pathway, and agentic automation submits the perfect application, the paperwork is always going to be right. So the question for payers shifts. It has to. Because if the mid-90s percent of prior authorizations are getting approved on first pass, and that's where the industry is heading, then the review process isn't really evaluating anything meaningful. It's just confirming the form was filled out correctly. And here's the statistic that makes this even more damning. In multiple analyses, including recent KFF data, over 80% of prior authorization denials that are appealed get overturned. But only about 12% of denials are even appealed, meaning the vast majority of denials were probably wrong in the first place, and the people who didn't appeal just gave up.
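Those two appeal statistics combine into a number worth making explicit. A rough sketch, assuming the cited ~12% appeal rate and ~80% overturn rate, and flagging the one big extrapolation involved:

```python
# What the appeal statistics imply about the denial pool as a whole.
# Inputs are the approximate figures cited from KFF-style analyses.
appeal_rate = 0.12      # share of all denials that are appealed at all
overturn_rate = 0.80    # share of appealed denials that get overturned

# Share of ALL denials that actually get overturned today:
overturned_today = appeal_rate * overturn_rate          # 0.096, ~9.6%

# If unappealed denials would overturn at a similar rate (a strong,
# unverified assumption), the share of denials that were wrong but
# never challenged is roughly:
wrong_never_appealed = (1 - appeal_rate) * overturn_rate  # ~70%

print(f"overturned today: {overturned_today:.1%}")
print(f"implied wrong but never appealed: {wrong_never_appealed:.1%}")
```

The gap between those two numbers is the episode's point: under 10% of denials get corrected, while the extrapolation suggests several times that many were wrong.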
So the current system, where physicians spend 13 hours per week on prior authorization, is mostly confirming things that should have been approved and incorrectly denying things that get overturned on appeal. That is not a system that's evaluating clinical judgment. That's a toll booth. Now here's where it gets interesting, and I want to be transparent: this is forward-looking. This is where I think we're heading based on what the research is suggesting. Xsolis has developed something called the Care Level Score. It's a gradient AI score, not a binary approve-or-deny, that assesses medical necessity by analyzing the full patient record in real time. It looks at severity of illness and intensity of service. It's being validated by independent peer-reviewed studies, and it's in production today. This is fundamentally different from checking whether a form was filled out correctly. This is AI making a judgment about whether the clinical decision was appropriate for this specific patient. And here's the Altarum Institute finding that really stopped me in my tracks: very limited research exists on whether approved prior authorizations actually lead to optimal patient outcomes. Think about that. We study the harm of denials extensively: 91% of physicians say prior authorization negatively impacts outcomes, and one in three has seen it lead to a severe or serious adverse event. But nobody's asking the inverse question. When a procedure gets approved, was it actually the best choice for that patient? And that's the gap that AI is going to fill. I think we're heading toward a world where AI doesn't just check whether you coded the procedure correctly. It evaluates whether your clinical decision-making was sound. Did the physician consider conservative treatments first? Did the ordering pattern match evidence-based guidelines for this specific patient's profile, their comorbidities, their prior treatments, their demographics?
And more than that, predictive models are being built right now that can answer the question: will this treatment actually work for this specific patient, based on population outcomes data? That's the shift from "is this coded correctly?" to "is this the optimal treatment?" And if AI can evaluate individual clinical decisions, it can also build profiles across a physician's entire practice. CMS is already doing a crude version of this. The WISeR model, which launched January 1st, 2026 in six states, including right here in Texas, exempts physicians from prior authorization requirements if they achieve a 90% approval rate. That's implicit provider profiling. The system is building a track record and using it to determine future requirements. Scale that up with AI, and you can imagine a world where payers or regulators are analyzing a physician's ordering patterns across thousands of patients, identifying outliers, flagging practice patterns that diverge from evidence-based care. This raises real ethical questions. If physicians know they're being profiled, does it lead to better medicine or to defensive medicine? Do they order more tests to avoid being flagged as underutilizers, or do they avoid innovative treatments because they fall outside established pathways? I don't have definitive answers here, but I think this is where the conversation needs to go. Because AI solving the coding problem is a relatively straightforward technical achievement. AI evaluating medical judgment is a fundamentally different proposition. And providers, payers, employers, and regulators are just not ready for it. So let me pull all this together. We covered three things. First, there are three distinct layers of AI in prior authorization. Ambient listening captures far more clinical detail than any human could, clinical decision support guides physicians to follow evidence-based pathways, and agentic automation submits the perfect application.
Together, they're driving approval rates toward 100%. Second, this is legitimate coding optimization, not upcoding, but the economic effect is the same. Employers are seeing costs at multi-year highs, and nobody is explicitly breaking out AI-driven coding intensity as a cost driver. The UCSF JAMA study found a 5.8% RVU increase with no uptick in denials. The system wasn't built for 100% efficiency; that's really what I'm trying to say, and we're about to find out what that costs. Third, the endgame isn't faster paperwork. It's AI evaluating physician medical judgment. We're moving from "did you fill out the form correctly?" to "was this the right clinical decision for this patient?" And that shift will demand more from providers, not less. For the employers and benefit consultants out there, the opportunity is to get ahead of this. Start asking your TPA how they're handling AI-optimized prior authorization submissions. Start tracking whether AI-driven coding intensity is showing up in your claims data, because it's coming if it isn't there already. For providers, the message is different. AI isn't going to make prior authorization disappear. It's going to make prior authorization irrelevant and replace it with something much harder: clinical appropriateness evaluation, practice pattern analysis, and outcome-based accountability. The bar is going up, not down. If this was useful, subscribe for more healthcare AI strategy. I'm going to keep digging into this, particularly the employer cost angle, because I think there's a story there that nobody's telling yet. And if you want the deeper dive, including all the research citations and the full source list, I've got a complete write-up on my Substack. Links in the description.