Data Science x Public Health
This podcast introduces data science and public health, then explores the intersection of the two fields in greater detail.
This AI Makes Life-or-Death Decisions… But No One Knows Why
AI models in healthcare are making critical decisions every day…
Who gets flagged as high-risk.
Where resources are sent.
Who gets care first.
But there’s a problem:
Many of these models can’t explain their decisions.
In this episode, we break down Explainable AI (XAI) and why black-box models are a serious risk in public health. You’ll learn how tools like SHAP and LIME reveal what’s really happening inside AI — and why transparency is becoming non-negotiable.
👉 Enjoyed the episode? Follow the show to get new episodes automatically.
If you found the content helpful, consider leaving a rating or review — it helps support the podcast.
For business and sponsorship inquiries, email us at:
📧 contact@bjanalytics.com
Youtube: https://www.youtube.com/@BJANALYTICS
Instagram: https://www.instagram.com/bjanalyticsconsulting/
Twitter/X: https://x.com/BJANALYTICS
SPEAKER_01: So imagine a hospital rolls out this brand new AI model, right? And it perfectly predicts which patients are at the highest risk for readmission.
SPEAKER_00: And it's probably saving them a ton of money.
SPEAKER_01: Exactly. But then a clinician points to a specific patient and asks, well, why was this person flagged? And literally nobody can answer. Not the software vendor, not the developer, not even the algorithm itself.
SPEAKER_00: Which is terrifying. I mean, especially in public health, where a single decision affects entire populations, a diagnostic blind spot at that scale is genuinely dangerous.
SPEAKER_01: Yeah, it really is. And welcome to our deep dive into excerpts from "The Black Box: Explainable AI in Public Health." Our mission today is uncovering why AI transparency isn't just some technical glitch, but a literal life-or-death requirement in healthcare.
SPEAKER_00: Because, I mean, you really can't just deploy a system like that and hope for the best.
SPEAKER_01: No, you can't. I keep picturing, like, a surgeon standing over you with a scalpel saying, "Just trust me. I'm not going to explain what I'm doing."
SPEAKER_00: Yeah, you would jump right off the table.
SPEAKER_01: I would run out of the room. It's a completely unacceptable premise in medicine. Yet we are handing over massive public health decisions to these totally opaque algorithms. Picture an epidemiologist at, I don't know, 7 a.m. trying to decide where to allocate limited hospital beds or distribute vaccines based on a black box.
SPEAKER_00: And they need to trust the system. That is really where explainable AI, or XAI, comes in. It essentially forces the algorithm to answer a few core questions.
SPEAKER_01: Like what kind of questions?
SPEAKER_00: Well, things like: what specific features drove this prediction? What is the confidence level? And would the prediction change if the input were slightly different?
SPEAKER_01: Because without those answers, you can't ensure accountability at all.
SPEAKER_00: Exactly. If an AI risk score systematically underestimates the danger for, say, a specific demographic, you just can't diagnose that failure without seeing the map.
SPEAKER_01: Which is exactly why the FDA, the EU AI Act, and HHS are turning transparency into a strict regulatory expectation now.
SPEAKER_00: The regulators are definitely catching up to the risks here.
SPEAKER_01: But I do have to push back a little bit, because I know there are inherently interpretable models, like decision trees, which are completely transparent by design. You can follow the logic, but they sacrifice raw predictive power for that clarity. So I guess my question is: is sacrificing accuracy really a worthy trade-off when human lives are on the line? Like, don't we want the smartest possible model?
SPEAKER_00: Well, think of it this way: an incredibly accurate model that clinicians refuse to use because they don't trust it saves exactly zero lives.
SPEAKER_01: Okay. Yeah, that is a very fair point.
SPEAKER_00: But you are highlighting a real tension. When data scientists do use those highly complex, opaque models, they have to rely on what we call post hoc explanation methods.
SPEAKER_01: Post hoc, so like after the fact?
SPEAKER_00: These are tools applied after the complex model is already trained, just to try and figure out what it's actually doing.
SPEAKER_01: Wait, how do you even figure out what a black box is doing from the outside?
SPEAKER_00: Mostly by playing with the inputs. Tools like SHAP and LIME systematically tweak or hide pieces of patient data to see how the AI's prediction changes.
SPEAKER_01: Oh, I see. So if a tool hides a patient's age and suddenly their readmission risk plummets, it knows age is a major factor.
SPEAKER_00: Spot on. And SHAP uses this testing to assign exact percentages. It can tell a doctor that a high risk score is, say, 40% based on age, 30% on a comorbidity, and 15% on their zip code.
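To make that attribution idea concrete, here is a minimal from-scratch sketch of exact Shapley values, the averaging that underlies SHAP. The three-feature additive risk model and its weights are invented for illustration (chosen to mirror the 40/30/15 split above); the real shap library runs this kind of computation against an actual trained model.

```python
from itertools import combinations
from math import factorial

# Hypothetical additive risk model: scores a patient from whichever
# features are "present" (unhidden). Weights are invented for illustration.
WEIGHTS = {"age": 0.40, "comorbidity": 0.30, "zip_code": 0.15}
FEATURES = list(WEIGHTS)

def risk_model(present):
    return sum(WEIGHTS[f] for f in present)

def shapley_value(feature):
    """Average the feature's marginal contribution over all subsets."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    value = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            value += weight * (risk_model(set(subset) | {feature})
                               - risk_model(subset))
    return value

for f in FEATURES:
    print(f, round(shapley_value(f), 2))  # age 0.4, comorbidity 0.3, zip_code 0.15
```

For an additive toy model like this, each feature's Shapley value recovers its weight exactly; real SHAP implementations approximate the averaging, because enumerating every feature subset grows exponentially with the number of features.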
SPEAKER_01: Wow, that's really specific. And what about LIME?
SPEAKER_00: So LIME works a bit differently. It perturbs the data to build a simplified local approximation of the complex model for just that one specific patient. Though, I mean, it can be a bit less stable.
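That local-surrogate trick can be sketched in a few lines of numpy. Everything here is an assumption for illustration: the `black_box` sigmoid stands in for an opaque readmission model, and the two features (age, comorbidity count) and their scales are made up. It is not the lime library's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x):
    # Stand-in for an opaque readmission model (invented for illustration).
    age, comorbidities = x
    return 1.0 / (1.0 + np.exp(-(0.05 * age + 0.8 * comorbidities - 5.0)))

patient = np.array([70.0, 2.0])   # the one patient we want explained
scales = np.array([5.0, 1.0])     # rough feature scales for perturbation

# LIME idea: sample perturbations near the patient, query the black box,
# then fit a distance-weighted linear surrogate that is faithful only locally.
samples = patient + rng.normal(scale=scales, size=(500, 2))
preds = np.array([black_box(s) for s in samples])
dists = np.linalg.norm((samples - patient) / scales, axis=1)
weights = np.exp(-dists ** 2)     # nearby samples count for more

X = np.column_stack([samples - patient, np.ones(len(samples))])
sw = np.sqrt(weights)[:, None]
coef, *_ = np.linalg.lstsq(X * sw, preds * sw[:, 0], rcond=None)
print("local slope, age:", coef[0], " comorbidities:", coef[1])
```

The surrogate's slopes are the explanation: per unit, comorbidities move this patient's risk far more than age does. Refitting with a different random seed can shift the numbers noticeably, which is the instability mentioned above.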
SPEAKER_01: It's almost like reverse engineering the thought process. And for visual diagnostics, like a chest X-ray, the sources mention something called saliency maps.
SPEAKER_00: Yes, those are fascinating.
SPEAKER_01: The tool literally highlights the exact cloudy patch on the lungs that triggered a pneumonia flag.
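One simple way to build such a map is occlusion: mask each region of the image and see how much the model's score drops. The sketch below is a toy stand-in, not a real pipeline: an 8x8 array plays the "scan", and a trivial sum-based scorer plays the trained classifier; both are invented for illustration.

```python
import numpy as np

# Toy "chest film": an 8x8 array with one bright 2x2 opacity.
image = np.zeros((8, 8))
image[2:4, 4:6] = 1.0

def model_score(img):
    # Stand-in for a trained classifier's finding score (illustration only).
    return img.sum()

# Occlusion saliency: zero out each 2x2 window and record the score drop.
base = model_score(image)
saliency = np.zeros_like(image)
for i in range(0, 8, 2):
    for j in range(0, 8, 2):
        occluded = image.copy()
        occluded[i:i + 2, j:j + 2] = 0.0
        saliency[i:i + 2, j:j + 2] = base - model_score(occluded)

# The hottest region of the saliency map is the patch driving the score.
hot = np.unravel_index(saliency.argmax(), saliency.shape)
print("most salient window starts at row/col:", hot)  # (2, 4)
```

Overlaying `saliency` on the original image is exactly the "highlighted cloudy patch" effect; gradient-based saliency methods get the same kind of map more cheaply on real networks.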
SPEAKER_00: And those visual tools are incredibly useful for audits, but there is a pretty harsh reality to all of these post hoc methods.
SPEAKER_01: Ooh, what's the catch?
SPEAKER_00: They are only approximations. They are basically educated guesses about what the model appears to be doing, not a true window into its actual internal logic.
SPEAKER_01: Wait, really? So we're just guessing.
SPEAKER_00: Pretty much. There's this fundamental tension where the most accurate deep learning models are inherently the hardest to actually explain.
SPEAKER_01: Wow. Okay, so if these tools are just approximations, then XAI really can't just be an afterthought. It has to be built into the design process from day one.
SPEAKER_00: Absolutely. Because if you can't explain a model's prediction, you simply cannot hold it accountable. Knowledge is most valuable when it's actually understood.
SPEAKER_01: And right now it sounds like the industry is relying heavily on tools that merely estimate understanding.
SPEAKER_00: Which is a pretty precarious place to be.
SPEAKER_01: Yeah, definitely. So think back to that hospital room from the beginning, where nobody could explain the AI's decision.
SPEAKER_00: The black box.
SPEAKER_01: Exactly. If our best tools for auditing these systems are ultimately just educated guesses at a deeply complex logic, what happens when the explanation tool itself hallucinates a totally false reason for that patient's readmission risk? Who do you trust then?