Rumor vs Truth
Your trusted source for facts... where we dissect the evidence behind risky rumors and reveal clinical truths.
This podcast series from TRC Healthcare, the team behind Pharmacist’s Letter, Pharmacy Technician’s Letter, and Prescriber Insights products, is designed to help pharmacists, pharmacy technicians, prescribers, and even patients navigate some of the claims they might see about medication therapy.
Find the video version of this show on YouTube: https://www.youtube.com/@trc.healthcare
TRC Healthcare offers CE credit for this podcast for subscribers at our platinum level or higher. Log in to your Pharmacist’s Letter, Pharmacy Technician’s Letter, or Prescriber Insights account and look for the title of this podcast in the list of available CE courses.
Rumor vs Truth
Artificial Intelligence
AI is fast.
AI is fluent.
AI is confident.
But is it clinically trustworthy?
🤖 AI can generate answers in seconds.
But good healthcare decisions require more than speed.
As AI tools become more accessible across healthcare, the line between assistance and authority is starting to blur.
Don and Steve put artificial intelligence under the microscope—cutting through hype to examine how AI is actually performing in healthcare today.
They tackle the questions healthcare professionals are already facing:
🧠 Which clinical tasks can AI reliably support—and which still require human judgment?
⚠️ Is it actually helping clinicians — or quietly creating new risks?
📋 When AI gives a medical answer, how often is it accurate… and how often is it confidently wrong?
🎓 How should AI be used (or limited) when training the next generation of clinicians?
Then they put the evidence behind these claims to the test:
- AI will fully replace healthcare professionals
- Using AI improves patient safety
- AI-generated medical answers are more accurate than humans
- Healthcare students should use AI
🏷️ Our listeners can get 10% off a new or upgraded subscription with code rvt1026 at checkout.
******
TRC Healthcare Editor Hosts:
- Stephen Small, PharmD, BCPS, BCPPS, BCCCP, CNSC
- Don Weinberger, PharmD, PMSP
Guests:
- Vickie Danaher, PharmD
- John Turtle, PharmD, MBA
******
CE Information:
None of the speakers have anything to disclose.
TRC Healthcare offers CE credit for this podcast for subscribers at our platinum level or higher. Log in to your Pharmacist’s Letter, Pharmacy Technician’s Letter, or Prescriber Insights account and look for the title of this podcast in the list of available CE courses.
******
Clinical resources:
- FAQ: Artificial Intelligence in Pharmacy Practice
- Checklist: Responding to Med Errors
- Toolbox: Resources for Discussing Medical Misinformation
- Article: Weave AI Into Your Plan for Precepting Learners
- Toolbox: Preceptor’s Guide
******
Email us: rumorvstruth@trchealthcare.com
The content of this podcast is not intended to be a substitute for professional medical advice, diagnosis, or treatment.
Find the show on YouTube by searching for ‘TRC Healthcare’.
Learn more about our product offerings at trchealthcare.com.
This transcript is automatically generated.
00:00:05 Narrator
Welcome to Rumor vs. Truth, your trusted source for facts, where we dissect the evidence behind risky rumors and reveal clinical truths.
00:00:13 Narrator
Today, we'll see if claims around AI deserve a hard reset.
00:00:21 Steve Small
Now, Don, I've got a serious question for you here, and you can be honest.
00:00:24 Steve Small
Would you prefer to have an AI-generated co-host for the show instead of me?
00:00:30 Don Weinberger
It probably would be more efficient, but at the same time, it would flag my puns as low clinical value.
00:00:37 Don Weinberger
So that is a hard no.
00:00:40 Don Weinberger
It's you forever, Steve.
00:00:41 Steve Small
Thanks, Don.
00:00:41 Steve Small
And we never promised our humor is evidence-based, so—
00:00:45 Steve Small
And bad jokes might actually mean job security here.
00:00:48 Don Weinberger
Yep, exactly.
00:00:49 Don Weinberger
And rest assured, we are your fully human, non-AI generated hosts with the bad puns, everything.
00:00:55 Don Weinberger
With that, I'm Don the pharmacist.
00:00:57 Steve Small
And I'm Steve the pharmacist.
00:00:59 Don Weinberger
In this episode, we're looking at artificial intelligence and big claims around using large language models in healthcare to see what's real, what's exaggerated, and what actually matters in clinical practice.
00:01:10 Steve Small
Right, and before we plug into these claims, it's important to remind folks that this podcast offers continuing education credit for pharmacists, pharmacy technicians, prescribers, and nurses.
00:01:20 Don Weinberger
Just log into your Pharmacist’s Letter, Pharmacy Technician’s Letter, or Prescriber Insights account.
00:01:24 Don Weinberger
Look for the title of this podcast in the list of available CE courses.
00:01:28 Steve Small
And for the purposes of disclosure for today's podcast, none of the speakers have anything to disclose.
00:01:34 Don Weinberger
So, you know, Steve, glad we're doing this topic.
00:01:37 Don Weinberger
You know, artificial intelligence has met a lot of praise, and with that, lots of critique.
00:01:43 Don Weinberger
And it's been that way pretty much ever since ChatGPT came out several years ago.
00:01:47 Don Weinberger
And even one of my family members calls AI automated incompetence.
00:01:52 Don Weinberger
So—
00:01:54 Don Weinberger
Is my aunt really being fair with that?
00:01:57 Steve Small
Yeah, sounds kind of harsh, but while we debate AI's role in society, we know AI is already showing up in healthcare.
00:02:04 Steve Small
Things like clinical decision support, documentation, patient education, even drug info.
00:02:09 Steve Small
Also, AI isn't just chatbots here.
00:02:12 Steve Small
A lot of real-world examples include pattern recognition tools you might see in things like imaging or ECG screening, things like that.
00:02:21 Don Weinberger
And that's led to some growing assumptions about safety, accuracy, bias, and that famous question we all have: is AI coming for our jobs?
00:02:30 Steve Small
Yeah, those are some hair-raising concerns there.
00:02:33 Steve Small
And speaking of which, we'll also answer a listener's question about minoxidil for hair growth from our last episode.
00:02:39 Don Weinberger
Hair puns from that episode.
00:02:41 Don Weinberger
That was a great time, wasn't it?
00:02:43 Don Weinberger
All right, so let's go ahead and get to our first claim here.
00:02:46 Don Weinberger
And that actually goes back to our banter from the beginning.
00:02:49 Don Weinberger
And the claim is—
00:02:50 Don Weinberger
AI will fully replace healthcare professionals.
00:02:53 Steve Small
Yeah, this one sparks anxiety fast, especially in pharmacy, because when people see AI writing notes, checking interactions, or answering drug questions, it feels like replacement is right around the corner.
00:03:06 Don Weinberger
Right.
00:03:08 Don Weinberger
But across all the healthcare literature I could find, and especially in pharmacy, AI consistently shows up as a support tool and not a substitute.
00:03:16 Don Weinberger
Even the American Society of Health-System Pharmacists, ASHP,—
00:03:20 Don Weinberger
—statement on AI from 2025 says a common feature of current and future use cases is that they, AI, are designed to augment clinical pharmacy services, not replace the pharmacy workforce.
00:03:33 Steve Small
Yeah, and keep in mind, folks, while we're using a pharmacy example here because that's our wheelhouse, right, these same concerns show up for prescribers and nurses too.
00:03:41 Steve Small
And even though this sounds like a relief, are there good study examples backing up ASHP's claim here, Don?
00:03:48 Don Weinberger
Yeah, so—
00:03:49 Don Weinberger
If we look at pharmacy, for example, we do have studies comparing AI tools, such as large language models, to pharmacists.
00:03:57 Don Weinberger
And they show that AI does have strengths, but also weaknesses, like humans, right?
00:04:02 Don Weinberger
At least for right now.
00:04:03 Don Weinberger
And that pattern—strong in some tasks, weaker in others—is something we see across healthcare roles, not just pharmacists.
00:04:10 Don Weinberger
So to the studies.
00:04:12 Don Weinberger
One study I found compared and graded ChatGPT and clinical pharmacist answers to real-life medical cases—
00:04:18 Don Weinberger
—and standard test questions on a scale of 1 to 10, with 10 being correct.
00:04:22 Steve Small
Oh, pharmacist versus AI here, quite the showdown.
00:04:26 Steve Small
That should be good.
00:04:27 Don Weinberger
Right.
00:04:28 Don Weinberger
Well, ChatGPT performed equally well to the pharmacists regarding drug counseling, but it was weaker with prescription review, patient drug education, and recognizing and assessing adverse drug reactions.
00:04:41 Don Weinberger
And for perspective, ChatGPT scored in the fours to sixes on these categories while pharmacists were nine or above.
00:04:49 Don Weinberger
AI tended to miss issues with complex prescriptions or gave verbose answers to patient questions, which I think does make sense.
00:04:57 Steve Small
Yeah, and well done for those pharmacists based on those scores.
00:04:59 Steve Small
That's pretty good.
00:05:01 Don Weinberger
Yep.
00:05:02 Don Weinberger
Scoring one for the humans, right?
00:05:03 Don Weinberger
But keep in mind, the study was done in 2023, and the grading was subjectively done by five pharmacists.
00:05:10 Don Weinberger
And the study didn't mention any blinding.
00:05:13 Steve Small
Yeah, those are good caveats there.
00:05:15 Steve Small
And 2023, that can feel like a lifetime ago when it comes to this stuff.
00:05:20 Steve Small
And it can be hard to be up to speed with these studies when technology is evolving just so darn fast, right?
00:05:26 Don Weinberger
It seems like just exponential growth, right?
00:05:28 Don Weinberger
But on the flip side, evidence does seem to show it can help with other tasks and workflows.
00:05:33 Don Weinberger
So another study from 2023 looked at 30 pharmacists verifying 200 mock medication orders, half with AI's help and half without.
00:05:42 Don Weinberger
And some of the mock orders intentionally had errors they needed to spot.
00:05:46 Don Weinberger
So pharmacists made the correct decision 91% of the time without AI, but about 93 to 94% with AI.
00:05:55 Don Weinberger
Rates were especially good with uncertainty-aware AI, or AI that shares its confidence in its answer.
00:06:02 Don Weinberger
AI without this feature actually led to longer verification times compared to not using AI at all.
00:06:08 Don Weinberger
So it could slow things down.
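The "uncertainty-aware" pattern described above, where the AI reports its own confidence so that shaky suggestions can be deferred to the pharmacist instead of slowing them down, can be sketched roughly as follows. This is a hypothetical illustration only; the class names, threshold, and routing rules are assumptions, not details from the study.

```python
from dataclasses import dataclass

@dataclass
class AISuggestion:
    flag_order: bool   # does the model think the order contains an error?
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def triage(suggestion: AISuggestion, threshold: float = 0.8) -> str:
    """Route an AI suggestion based on its stated confidence (hypothetical policy)."""
    if suggestion.confidence >= threshold:
        # High confidence: surface the flag (or the all-clear) to the pharmacist.
        return "flag for review" if suggestion.flag_order else "auto-assist: looks OK"
    # Low confidence: don't slow verification down with an unreliable answer.
    return "defer to pharmacist"

print(triage(AISuggestion(flag_order=True, confidence=0.95)))   # flag for review
print(triage(AISuggestion(flag_order=False, confidence=0.40)))  # defer to pharmacist
```

The design point mirrors the study's finding: without a confidence signal, every suggestion demands the same scrutiny, which is exactly how verification times can get longer rather than shorter.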
00:06:10 Steve Small
So what I'm getting from that is it can help with certain tasks, especially more structured or rule-based ones.
00:06:16 Don Weinberger
Yep, exactly.
00:06:17 Don Weinberger
And for some pharmacy technicians in particular, that can raise real questions about how AI fits into verification, workflow, even automation.
00:06:27 Don Weinberger
Again, pointing to why thoughtful implementation actually matters.
00:06:30 Steve Small
I'm also so glad you mentioned the confidence factor here earlier.
00:06:33 Steve Small
AI can tell us what we want to hear and be confidently wrong about it, which could make you second guess things and, like you said, slow things down.
00:06:40 Steve Small
So it's a really good point.
00:06:41 Don Weinberger
Yeah.
00:06:42 Don Weinberger
Sounds just like my wife.
00:06:43 Don Weinberger
So—and this shows how we need to think about how AI is implemented.
00:06:49 Don Weinberger
If AI is implemented poorly without testing, AI can slow us down and make us less efficient.
00:06:54 Don Weinberger
Make sense?
00:06:56 Don Weinberger
Yeah.
00:06:57 Don Weinberger
All right, so let's go ahead and go back to that claim, which is AI will fully replace healthcare professionals.
00:07:03 Don Weinberger
And the verdict is—
00:07:09 Don Weinberger
This is a rumor.
00:07:11 Steve Small
Yeah, I would agree with ASHP here that current evidence shows AI will change how we do tasks, but not necessarily replace us.
00:07:18 Steve Small
And that applies broadly across healthcare teams, not just pharmacy staff here.
00:07:22 Steve Small
Okay.
00:07:23 Don Weinberger
Now, something fun, Steve.
00:07:25 Don Weinberger
I did pose this claim to Claude, which is our own AI software.
00:07:30 Don Weinberger
To see what it thinks the verdict should actually be.
00:07:32 Steve Small
Oh dear, I'm worried, Don.
00:07:34 Steve Small
I think it'll say true, but that's because I'm a total pessimist here.
00:07:39 Don Weinberger
Yeah, I'm pretty aware of that after working with you this long.
00:07:42 Don Weinberger
But actually, Claude called this rumor with conditions.
00:07:46 Don Weinberger
So pretty close.
00:07:49 Don Weinberger
Why?
00:07:49 Don Weinberger
Now, why did it say conditions?
00:07:50 Don Weinberger
So here's a piece of its explanation.
00:07:52 Don Weinberger
The claim—AI will fully replace healthcare professionals—is mostly a rumor, but it has a kernel of conditional truth.
00:08:01 Don Weinberger
So AI will replace or heavily shrink some tasks and some roles in some settings, especially the parts that are repetitive, rules-based, documentation-heavy, or primarily pattern recognition with clear guardrails.
00:08:15 Steve Small
Okay.
00:08:16 Steve Small
I could see its point, but I'd argue a lot of our roles aren't rules-based.
00:08:21 Steve Small
Healthcare is not black and white.
00:08:24 Steve Small
So we can't be fully replaced, is my thinking.
00:08:26 Steve Small
That said, AI won't replace healthcare professionals, but maybe those who use AI might replace those who don't.
00:08:35 Don Weinberger
Right.
00:08:37 Don Weinberger
Later on, Claude even said regulation and safety engineering push toward AI plus clinician, not AI instead of clinician.
00:08:47 Steve Small
Yeah, that is what makes me stand by our rumor verdict.
00:08:51 Steve Small
We need to keep the healthcare experts in the loop.
00:08:54 Steve Small
Even AI here doesn't think it should replace us, which might be the most reassuring thing we've heard all episode so far.
00:09:01 Don Weinberger
So as an example, we do use AI in our Rumor vs Truth editorial process, mostly as a review and refinement tool.
00:09:09 Don Weinberger
It can help us streamline ideas or even point out how bad our puns are.
00:09:13 Don Weinberger
And it's plenty busy with that.
00:09:14 Don Weinberger
But it doesn't create the content or make clinical decisions.
00:09:18 Don Weinberger
That usually requires the human expert, the human touch.
00:09:21 Steve Small
Yeah, and that is similar to how we use AI clinically, or how we should.
00:09:24 Steve Small
It helps us move faster on certain tasks, but it doesn't replace judgment.
00:09:29 Steve Small
We still rely on experts to decide what's accurate, relevant, and safe.
00:09:33 Steve Small
And when I think of clinical care too, the human experience is so nuanced and complicated, it's really hard to see how AI can fully replace things like empathy and compassion or navigate complex ethical issues we see every day—at least not yet.
00:09:49 Don Weinberger
Yeah, so I like that heartfelt point that you just gave, Steve.
00:09:53 Don Weinberger
But it's worth looking into ways AI can help you boost your efficiency in your daily tasks.
00:09:58 Don Weinberger
So we'll pitch our Artificial Intelligence in Pharmacy Practice FAQ on our website for a handy list of potential uses and considerations for AI in your work.
00:10:09 Don Weinberger
And one big theme we're seeing—and you can see it reflected in our AI resource—is that AI's first real foothold may be workflow: documentation, summarizing notes, helping teams spend less time clicking boxes—
00:10:24 Don Weinberger
—not replacing clinical judgment though.
00:10:25 Steve Small
Yeah, love that resource.
00:10:27 Steve Small
But whenever you introduce new tools into workflow, especially ones that touch patient care, the next question is always safety, right?
00:10:33 Steve Small
And it leads us to our next claim here, that using AI improves patient safety.
00:10:41 Don Weinberger
Right, so in fact—
00:10:43 Don Weinberger
Misuse of chatbots in healthcare is considered the top health technology hazard of 2026 by ECRI, which is the home of the Institute for Safe Medication Practices, or ISMP.
00:10:54 Steve Small
Yeah, so to help us untangle this one, I actually reached out to a colleague of mine, Dr. John Turtle, PharmD, MBA, who is an informatics pharmacist and the director of health system operations at VytlOne.
00:11:06 Steve Small
Let's see what he had to say on this.
00:11:11 Steve Small
Thanks for joining us today, John.
00:11:12 Steve Small
I'm so glad you could help us dive into this.
00:11:15 Steve Small
And I'd love to get your thoughts.
00:11:16 Steve Small
Do we know if AI improves any aspects of patient safety?
00:11:22 John Turtle
Hey, Steve, how you doing?
00:11:24 John Turtle
Good to talk to you.
00:11:27 John Turtle
You know, AI—the thing is, AI can detect safety risks earlier.
00:11:34 John Turtle
The first principle of, you know, all healthcare ethics is to do no harm.
00:11:39 John Turtle
But in and of itself, AI is not the risk.
00:11:42 John Turtle
It's how we use it, right?
00:11:44 John Turtle
So, as a clinician, it can help you identify what risks there may be in front of you, but it is ultimately your discretion.
00:11:55 John Turtle
And us as pharmacists, we serve as a critical redundancy in every single environment we practice in.
00:12:01 John Turtle
And so providers, nurses—
00:12:05 John Turtle
—you name it, are relying on our ability to verify whatever information is there and give a recommendation based on what's out in front of us.
00:12:12 John Turtle
So, you know, bolstering that use of AI can help patient safety, but ultimately it's not going to ever replace it.
00:12:25 Steve Small
Yeah, great point.
00:12:26 Steve Small
What can we do on our end to make that detection better?
00:12:30 Steve Small
I assume AI can't just do that automatically.
00:12:33 Steve Small
So what do we need to do on our end?
00:12:35 John Turtle
Well, so a lot of it is just making— you know, everything within clinical software applications is a workflow-based data input, right?
00:12:46 John Turtle
So making sure that your workflows are standardized, they reflect your policies and procedures, and that they are consistent.
00:12:55 John Turtle
So I always like to use the example of a vanco trough, right?
00:12:59 John Turtle
So you're dosing vanco, and this goes back to when I started as an intern—the quality in, quality out, right?
00:13:07 John Turtle
So if the information that you are using to perform your clinical assessment is bad, then your clinical assessment will be bad.
00:13:15 John Turtle
So the same applies for AI in that—
00:13:18 John Turtle
—whatever information is going into the system needs to be consistent and current in order for the AI to provide information back to you.
00:13:27 Steve Small
Great points there.
00:13:28 Steve Small
Yeah, I like that vanco example.
00:13:30 Steve Small
And are there any ways that AI can make safety worse that we know about, that we should be watching for?
00:13:36 John Turtle
Absolutely.
00:13:37 John Turtle
I mean, there's ways that if you're using an outdated data set—
00:13:42 John Turtle
—or any sort of outdated information—
00:13:45 John Turtle
—the AI is going to learn it.
00:13:47 John Turtle
There's three general modalities with AI.
00:13:50 John Turtle
There's neural networks, natural language processing, and predictive analytics.
00:13:54 John Turtle
All three of those—if you're using outdated information—steer clear.
00:14:02 Steve Small
Yeah, good points, good points.
00:14:04 Steve Small
And then what about alert fatigue too?
00:14:05 Steve Small
I've heard about concerns with that.
00:14:07 Steve Small
Does AI improve that?
00:14:09 Steve Small
Does it make it worse there too?
00:14:10 Steve Small
Or what's your take?
00:14:12 John Turtle
I mean, I think that I'm—Stephen—I'm bullish on AI.
00:14:16 John Turtle
I mean, I think that the more as pharmacists we can lean into it, the stronger that we can become, the more that we will be able to accomplish within our clinical work.
00:14:27 John Turtle
But I will say as far as clinical decision support—what I usually call alerts—right?
00:14:33 John Turtle
That's nothing new.
00:14:34 John Turtle
We've seen that.
00:14:35 John Turtle
I mean, I've seen clinical alerts since 2007 when I started in pharmacy.
00:14:41 John Turtle
And so—and you could think even back to before then.
00:14:44 John Turtle
I mean, Stephen, AI—we had spell check, right?
00:14:48 John Turtle
I mean, there was a computer telling you how to spell a word that it thought you were trying to spell a long time ago.
00:14:54 John Turtle
So as far as alert fatigue, there are always going to be things that pop up and tell the clinician, hey, watch out for this.
00:15:04 John Turtle
Now, I think that AI can—
00:15:07 John Turtle
—be more sensitive to the situation based on contextual workflows and contextual situations.
00:15:15 John Turtle
But at the end of the day, if the clinician is not paying attention to the alert, it is always a clinical discretion scenario.
00:15:24 Steve Small
Great points there.
00:15:25 Steve Small
Well, I think we got a lot of great material here.
00:15:28 Steve Small
I'm going to take this back to Don and see what he thinks.
00:15:30 Steve Small
But thank you, John.
00:15:31 Steve Small
This is awesome.
00:15:32 John Turtle
One more thing.
00:15:34 John Turtle
So the way I think of AI—I'm a big Star Wars nerd, right?
00:15:39 John Turtle
AI is like—AI is like C‑3PO, right?
00:15:43 John Turtle
He's this—you know what?
00:15:45 John Turtle
C‑3PO is in a lot of those movies and he's always helping the good side over the bad.
00:15:51 John Turtle
But at the end of the day, you do not want C‑3PO flying the Millennium Falcon.
00:15:55 Steve Small
That's a good point.
00:15:58 Steve Small
I like that analogy there.
00:16:01 Steve Small
We'll leave it at that.
00:16:02 Steve Small
And I'll bring that back to Don.
00:16:04 Steve Small
I'm sure he loves the Star Wars reference too.
00:16:05 Steve Small
So we'll see.
00:16:06 Steve Small
Thank you, John.
00:16:08 John Turtle
Have a good day.
00:16:13 Don Weinberger
Wow, that really was a great interview—and always appreciate a good Star Wars reference.
00:16:19 Don Weinberger
So I knew you would for that.
00:16:21 Don Weinberger
Yeah.
00:16:22 Don Weinberger
So we really struggle to find, you know, robust randomized data that show that AI improves or harms patient outcomes.
00:16:31 Don Weinberger
But we have growing observational evidence that show it can go either way, right?
00:16:34 Don Weinberger
It's important to consider that AI may catch a dosing error at scale but miss the one detail that a human would catch in seconds.
00:16:43 Steve Small
Right, and we talked about some of AI's weaknesses earlier that could lead to safety issues.
00:16:47 Steve Small
But on the other hand, I also found some systematic reviews that showed AI can be helpful in other ways with safety, such as detecting possible errors after they've occurred—
00:16:57 Steve Small
—or processing large amounts of error report data to maybe find patterns, provide solutions, things like that.
00:17:03 Steve Small
So it really depends on the situation here and how it's used, kind of like what Dr. Turtle was talking about.
00:17:09 Don Weinberger
Another safety use case I thought was interesting is how AI may be used in automation to help limit pharmacist exposure to hazardous meds, which definitely would be a positive for safety.
00:17:20 Steve Small
Yeah, that's a unique use case and the list could go on and on.
00:17:23 Steve Small
So when it comes to this claim—
00:17:25 Steve Small
—that using AI improves patient safety—the verdict is evidence is mixed.
00:17:33 Steve Small
And what's important to realize with any technology, right, is that it depends on how you use it.
00:17:38 Steve Small
Whenever I have talked to students and residents about healthcare technology, I tell them to treat it like a power tool.
00:17:45 Steve Small
You can use an electric saw, for example, to efficiently build something amazing, or you can accidentally cut off all your fingers.
00:17:52 Steve Small
It depends on how you use it.
00:17:54 Don Weinberger
Right.
00:17:55 Don Weinberger
Unless you were building something out of fingers, though—then it'd be working, right?
00:17:57 Don Weinberger
But that is definitely a memorable and grisly way to think about it, Steve.
00:18:02 Steve Small
Didn't mean to jump scare you there, Don.
00:18:04 Steve Small
But it's the same with AI.
00:18:06 Steve Small
It can help us be more efficient, but it can also hurt people if we use it improperly or without safeguards.
00:18:13 Don Weinberger
Right.
00:18:13 Don Weinberger
I agree.
00:18:13 Don Weinberger
And staff should always check their employer's policies on approved software and proper use before using an AI program—
00:18:20 Don Weinberger
—in their practice.
00:18:21 Steve Small
Not all programs are alike.
00:18:23 Steve Small
And prior testing is really important to make sure AI products are up to the task.
00:18:27 Steve Small
And the practical start, perhaps, is using AI to start with low‑risk—
00:18:32 Steve Small
—structured tasks first, like we talked about earlier, so you can build your trust in it.
00:18:36 Steve Small
And then if it does well, only then do you move into the high‑stakes clinical decisions.
00:18:42 Don Weinberger
And continue keeping patient safety top of mind with all technology.
00:18:45 Don Weinberger
So you can use our Responding to Med Errors checklist if an event occurs to make sure you gather all the facts, address the error, and reduce the risk in the future.
00:18:54 Steve Small
Yeah, and take a look at the show notes or description.
00:18:56 Steve Small
We've linked directly to that resource in Pharmacist’s Letter, Pharmacy Technician’s Letter, and Prescriber Insights, as well as our Artificial Intelligence in Pharmacy Practice FAQ we mentioned earlier.
00:19:07 Don Weinberger
Right.
00:19:07 Don Weinberger
If you're a new subscriber, don't miss out on these resources.
00:19:09 Don Weinberger
Sign up today to stay ahead with trusted insights and tools.
00:19:15 Don Weinberger
Now, I'm sure we know at least one person who has asked AI a medical question.
00:19:20 Don Weinberger
Perhaps we've done it ourselves.
00:19:23 Don Weinberger
So that leads to the next important claim, which is AI‑generated medical answers are more accurate than humans.
00:19:30 Don Weinberger
Now, you can look at this in two parts: questions patients ask AI versus questions healthcare workers pose to AI.
00:19:38 Steve Small
I agree.
00:19:39 Steve Small
Two different beasts that we're talking about.
00:19:41 Steve Small
And I'm sure many of us have seen how AI programs will add a disclaimer, for example, saying this is for informational purposes only—
00:19:49 Steve Small
—for medical advice or diagnosis, consult a professional.
00:19:53 Steve Small
So knowing that, are there any good study examples looking at this with patients?
00:19:57 Don Weinberger
Yes.
00:19:58 Don Weinberger
So keep in mind the studies out there on this are hard to interpret overall.
00:20:02 Don Weinberger
They use different AI programs, methods, and can involve very specific settings.
00:20:08 Don Weinberger
For example, like patient questions on a specific surgery procedure.
00:20:12 Steve Small
Oh yeah, that is indeed specific.
00:20:14 Steve Small
A really good point.
00:20:16 Don Weinberger
Right.
00:20:16 Don Weinberger
So—but one example—
00:20:19 Don Weinberger
—I saw was a randomized UK study in early 2026 looking at patients using AI versus a traditional internet search.
00:20:27 Don Weinberger
The search would be the control here to identify health condition and course of action.
00:20:32 Don Weinberger
The researchers gave around 1,300 participants various medical scenarios vetted by physicians, which they carried out with AI versus control, compared to what the physicians would have done.
00:20:45 Steve Small
Yeah, that's a pretty interesting design.
00:20:47 Steve Small
What did they find out from that?
00:20:48 Don Weinberger
When tested by researchers themselves, AI correctly identified conditions in 95% of cases and proper courses of action in 50%.
00:20:57 Don Weinberger
On the other hand, this dropped to 35% for conditions and 44% for courses of action when the patients actually used AI, which was about the same as a traditional internet search.
00:21:10 Don Weinberger
So a lot of it depended on what the patient told the program and the risk of leaving out key information.
00:21:16 Steve Small
So in a perfect setting with researchers, it does relatively well, but with patients, maybe not so much.
00:21:20 Steve Small
That can make sense.
00:21:22 Steve Small
So it has potential, but it likely depends on the user.
00:21:26 Don Weinberger
Yeah, exactly.
00:21:27 Don Weinberger
And for healthcare professional questions, one large real‑world study looked at 300 actual questions that were previously answered by a drug information service.
00:21:37 Don Weinberger
When those same questions were given to ChatGPT and Google’s Gemini, only 19% of AI responses were fully accurate—
00:21:45 Don Weinberger
—and supported by reliable references when graded by a senior pharmacy intern and a group of drug information pharmacists.
00:21:52 Steve Small
Yeah, that begs the question, Don—how many of these answers were just flat‑out wrong then?
00:21:57 Don Weinberger
Well, the number might surprise you.
00:22:00 Don Weinberger
About 15% of ChatGPT's answers were completely inaccurate, having different conclusions than the drug information pharmacist and using references that didn't really support the answer.
00:22:14 Don Weinberger
And one example the authors gave was a drug information question asking: do Orencia vial, auto‑injector, and syringe formulations contain sodium phosphate?
00:22:25 Don Weinberger
The pharmacist did confirm that it has it, but AI incorrectly said the auto‑injector formulation did not have sodium phosphate.
00:22:33 Steve Small
Oh, that is quite off.
00:22:35 Don Weinberger
Yes, yeah.
00:22:36 Don Weinberger
And most of the rest were partially correct or incomplete, which was seen in other studies too.
00:22:44 Don Weinberger
I do have a personal example from when we were creating our next CE presentation, and part of it has to do with insulin.
00:22:52 Don Weinberger
And I asked AI to help me design a plan for a patient who's switching from a higher concentration of insulin to a lower concentration of insulin.
00:23:01 Don Weinberger
So it gave a good answer as far as how to convert it, and it gave good references like the American Diabetes Association and other primary references that I was able to chase to the real answer.
00:23:13 Don Weinberger
Now, where it kind of messed up a little is it recommended an insulin product that doesn't exist anymore.
00:23:22 Don Weinberger
So we kind of—yeah.
00:23:24 Don Weinberger
So if you were to run to the prescriber or recommend it to the patient, you may not look too informed if you recommend something that doesn't exist.
00:23:31 Don Weinberger
So that's just my personal example I experienced.
00:23:34 Steve Small
Yeah, that's a good one.
00:23:35 Don Weinberger
Yeah.
00:23:35 Don Weinberger
All right, so let's go ahead and circle back to that claim, which is AI‑generated medical answers are more accurate than humans.
00:23:41 Don Weinberger
And the verdict is—
00:23:48 Don Weinberger
Rumor with conditions.
00:23:50 Steve Small
Yeah, so AI is very good at retrieving and summarizing information, but it's weaker at weighing competing risks, recognizing when information is missing, et cetera.
00:23:59 Steve Small
So that was a good example, Don.
00:24:01 Steve Small
Maybe you didn't know that a certain product was off the market.
00:24:04 Steve Small
So what was AI's verdict to this one?
00:24:07 Steve Small
Did you plug this in?
00:24:08 Don Weinberger
I—you know—I did.
00:24:10 Don Weinberger
So when plugging this question into Claude again, it said evidence is mixed.
00:24:14 Don Weinberger
So it's—
00:24:16 Don Weinberger
—still up in the air.
00:24:17 Don Weinberger
Quoting it here, it said AI‑generated medical answers are sometimes more accurate than some humans in some settings, but not reliably more accurate than humans overall, and often not better than experts.
00:24:31 Don Weinberger
So a little humble pie there.
00:24:33 Steve Small
Yeah, based on that explanation, I think we're safe sticking to our own verdict on this one.
00:24:37 Don Weinberger
Right.
00:24:37 Don Weinberger
So when I told it that “evidence is mixed” isn't an option, Claude leaned toward rumor with conditions instead.
00:24:45 Don Weinberger
So we're roughly on the same page.
00:24:47 Steve Small
Yeah, so it kind of goes back to those prompts and what you tell it.
00:24:50 Steve Small
They can really affect the outcome there.
00:24:53 Don Weinberger
And we know AI programs may not just hallucinate and give wrong answers—
00:24:58 Don Weinberger
—they can be confidently wrong too.
00:25:00 Don Weinberger
So it's a good idea to prioritize using AI programs that tell the user the program's confidence in an answer, or simply ask AI, how confident are you in that answer?
00:25:11 Steve Small
Kind of putting AI on the spot there.
00:25:12 Steve Small
I like it.
00:25:13 Steve Small
And it's key to keep in mind that AI can be prone to bias depending on the sources it uses.
00:25:20 Steve Small
Yale School of Medicine had an article in 2024 on this entitled Bias In, Bias Out, which is a good way to sum it up.
00:25:28 Steve Small
So if it's using flawed data or misinformation to generate an answer, the concern is it can lead users down the wrong path.
00:25:35 Steve Small
And we have to keep that in mind.
00:25:37 Don Weinberger
And I'm going to pitch another resource of ours because we have quite a few on this.
00:25:41 Don Weinberger
So don't let AI send you or your patients down the rabbit hole, really.
00:25:44 Don Weinberger
You can see our resource called Resources for Discussing Medical Misinformation, a chart to help navigate any questionable information that may come from AI or other sources.
00:25:54 Steve Small
Yeah, that's a great resource, Don.
00:25:56 Steve Small
But with these benefits and risks, where does AI fall when it comes to educating the next generation of healthcare professionals?
00:26:04 Steve Small
Our next claim, in fact, is healthcare students should use AI.
00:26:08 Steve Small
And to get a grasp on whether or not AI belongs in healthcare education, I posed this question to our fellow editor, Vickie Danaher, PharmD.
00:26:16 Steve Small
She's written a lot of our great content on AI, including an article in Pharmacist’s Letter on this very topic.
00:26:22 Steve Small
So let's see what she had to say.
00:26:27 Steve Small
Thank you for joining us, Vickie.
00:26:28 Steve Small
And I'm curious—what is your take on this claim that healthcare students should use AI?
00:26:33 Steve Small
What benefits do you see, and are there risks?
00:26:36 Vickie Danaher
Yeah, definitely.
00:26:37 Vickie Danaher
Thanks for having me, Steve.
00:26:38 Vickie Danaher
So AI is already becoming a part of healthcare, as you guys have talked about already.
00:26:44 Vickie Danaher
So ignoring it, whether by not talking about it with students or not using it in learning experiences, isn't really going to prepare them for the real world.
00:26:53 Vickie Danaher
We know that patients are using AI—they're looking to it for questions on drug information and health information, and they're coming to us to ask about it.
00:27:03 Vickie Danaher
So we want to make sure that students are prepared for the real world that they're going to be living in and working in.
00:27:11 Steve Small
Yeah, the cat's already out of the bag.
00:27:13 Vickie Danaher
Definitely.
00:27:14 Steve Small
So what are good examples you suggest listeners should think about if they want to use or integrate AI into teaching?
00:27:22 Vickie Danaher
So as a pharmacist, one of the main examples that comes to mind for me is drug information questions.
00:27:29 Vickie Danaher
And so patients are coming to us with questions, or clinicians have questions.
00:27:33 Vickie Danaher
And there's really kind of two ways that we can work with our students to approach those questions using AI.
00:27:40 Vickie Danaher
So one way might be for students to prepare a response to a drug information question in the traditional way—
00:27:46 Vickie Danaher
—consulting primary literature, consulting databases, writing up the response on their own.
00:27:52 Vickie Danaher
And then using AI to generate its own response and comparing those two versions.
00:27:58 Vickie Danaher
So was there something that AI missed, or was there something the student missed, or thought about differently?
00:28:04 Vickie Danaher
And that can really be an area to improve their clinical thinking or clinical judgment.
00:28:11 Vickie Danaher
Another option would be to have the student use AI to start off the response to the drug information question—
00:28:17 Vickie Danaher
—using it as a starting point and critiquing what it comes up with up front, then revising and building upon it to create a solid answer.
00:28:29 Vickie Danaher
I think those are two different ways, but the outcome is the same.
00:28:34 Vickie Danaher
You're still looking at the response or information AI is producing.
00:28:40 Vickie Danaher
You're evaluating it, critiquing it, to ensure the answer is solid.
00:28:45 Steve Small
Lots of options—and both lead to great learning.
00:28:47 Steve Small
So what would you say or recommend to listeners who may still be hesitant to use AI with learners?
00:28:54 Vickie Danaher
Yeah, I totally understand the hesitation, and there are many issues surrounding AI.
00:29:01 Vickie Danaher
But I would say that the more you use it and experiment with it, the more you build that understanding and awareness of its capabilities and its limitations.
00:29:12 Vickie Danaher
You as well as students will learn what needs to be verified.
00:29:17 Vickie Danaher
And you know that you can't trust it completely, but you can use it as a tool to support the work you're doing.
00:29:24 Steve Small
Excellent, excellent thoughts.
00:29:26 Steve Small
And I'll be looking forward to sharing these with Don and hopefully calming some of his fears around AI.
00:29:30 Steve Small
We'll see.
00:29:31 Steve Small
But thank you for joining us today.
00:29:32 Vickie Danaher
Sounds good.
00:29:36 Don Weinberger
Wow—another great interview, Steve.
00:29:38 Don Weinberger
But I'm glad to actually have Vickie on our podcast.
00:29:42 Don Weinberger
It's great to see her.
00:29:44 Don Weinberger
But I have to circle back to what you tacked on at the end there.
00:29:47 Don Weinberger
And who said I was afraid of AI?
00:29:49 Don Weinberger
You know—speak for yourself.
00:29:53 Steve Small
Oops, I may have spoken too soon.
00:29:54 Steve Small
Don, I put words in your mouth.
00:29:56 Don Weinberger
Right.
00:29:56 Steve Small
But okay.
00:29:56 Steve Small
We can say here and now—we can document for the record—it is a rumor that Don is afraid of AI.
00:30:04 Don Weinberger
Right, and this is recorded, so it's officially on the record, right?
00:30:07 Don Weinberger
And you may have noticed we brought in two expert voices for our podcast this time.
00:30:12 Don Weinberger
It was actually new for us.
00:30:13 Steve Small
Yeah, and that's intentional.
00:30:15 Steve Small
With AI, the best outcomes happen when you keep multiple experts in the loop.
00:30:19 Steve Small
So we figured we should practice what we preach.
00:30:22 Don Weinberger
And honestly, that perspective really showed here.
00:30:25 Don Weinberger
I really liked her suggestions for integrating AI into education.
00:30:28 Don Weinberger
Her ideas also teach us how to balance AI's benefits and risks in everyday use.
00:30:32 Steve Small
Exactly.
00:30:33 Steve Small
So when it comes to this claim that healthcare students should use AI, the verdict is true with conditions.
00:30:44 Don Weinberger
And I agree—AI is not going anywhere.
00:30:47 Don Weinberger
So we shouldn't turn a blind eye to it.
00:30:49 Don Weinberger
And evidence suggests students are using it.
00:30:53 Steve Small
Right, and we're in a prime position to guide learners on proper use—and on knowing how and when to dig deeper when AI provides suggestions.
00:31:02 Steve Small
We don't want learners approaching AI with fear—maybe just a healthy amount of vigilance instead.
00:31:07 Don Weinberger
We don't want learners outsourcing their thinking.
00:31:10 Don Weinberger
We want them sharpening it, right?
00:31:11 Don Weinberger
So if you want more structure and examples, you can read Vickie’s excellent article in the January 2026 issue of Pharmacist’s Letter, Pharmacy Technician’s Letter, and Prescriber Insights to help with ideas for incorporating AI when teaching learners.
00:31:25 Steve Small
Yeah, and with that said, you don't need AI to tell you what to do if you're enjoying the show.
00:31:29 Steve Small
We've got the human‑verified answer right here.
00:31:33 Don Weinberger
Yep, exactly.
00:31:33 Don Weinberger
And are you a subscriber?
00:31:34 Don Weinberger
Don't forget to claim CE credit for this episode.
00:31:37 Steve Small
And not a subscriber yet or thinking about upgrading?
00:31:40 Steve Small
Access more trusted clinical insights and save 10% with our exclusive listener promo code RVT1026 at checkout.
00:31:50 Don Weinberger
Check out details and links in the show notes below.
00:31:52 Don Weinberger
Don't miss out.
00:31:53 Steve Small
And with that, the bottom line truth here today is that AI can be fast and fluent and confident, but it can still be clinically unsafe without proper oversight.
00:32:04 Don Weinberger
And AI is a powerful copilot.
00:32:07 Don Weinberger
Just don't put it on autopilot.
00:32:09 Steve Small
Exactly.
00:32:09 Steve Small
And as healthcare professionals, we have the knowledge and know‑how to assess AI suggestions to make sure its outputs are applied appropriately and safely to patient care.
00:32:19 Don Weinberger
Okay, so now it's time to stop promoting AI and instead see what our listeners have prompted us with from the Rumor vs Truth mailbag.
00:32:26 Don Weinberger
We have an audience question from our last episode about hair loss that came in through our “Send Us a Text” link in the podcast show notes.
00:32:35 Don Weinberger
And what they're asking is: is minoxidil effective for growing a better beard?
00:32:40 Steve Small
Oh, interesting question.
00:32:41 Steve Small
You and I don't have trouble with this, Don.
00:32:43 Steve Small
Our beards are crazy, but this may not be a totally hair‑brained idea.
00:32:48 Steve Small
I can see where this is coming from.
00:32:50 Steve Small
Now, first, to be clear, topical minoxidil is only FDA‑approved for regrowing hair on the top of the scalp.
00:33:03 Steve Small
It even has a warning saying do not apply to other parts of the body.
00:33:10 Steve Small
So keep in mind here, folks, this hair growth idea is definitely an unapproved, off‑label use when it comes to the beard.
00:33:10 Steve Small
But that said, a randomized, placebo‑controlled study from 2016 looked at using 0.5 mL of 3% topical minoxidil lotion twice daily to the beard area in 46 men who wanted fuller facial hair.
00:33:25 Steve Small
And several physicians then graded how well their beards looked in photos at the end of the trial.
00:33:31 Don Weinberger
So 3% minoxidil lotion—okay, interesting.
00:33:34 Don Weinberger
How well did it do?
00:33:36 Steve Small
Yeah, the minoxidil patients did have better subjective beard scores compared to the placebo group, although the study didn't really say by how much.
00:33:43 Steve Small
It was kind of unclear.
00:33:45 Steve Small
And the minoxidil group also rated their own beards more highly by the 16‑week mark.
00:33:50 Steve Small
And they reported side effects that were mild.
00:33:53 Steve Small
But keep in mind, we don't carry minoxidil 3% lotion like you were hinting at, Don.
00:33:58 Steve Small
We don't carry that in the U.S.
00:33:59 Steve Small
And this study was small.
00:34:01 Steve Small
So it's kind of difficult to apply these results here.
00:34:04 Don Weinberger
Yeah, so thank you for specifying that.
00:34:06 Don Weinberger
So I wouldn't say you're just splitting hairs there, right?
00:34:11 Don Weinberger
Those are good things to point out.
00:34:12 Steve Small
Yeah, and you might actually get a kick out of this interesting 2024 case report involving identical twin males.
00:34:19 Steve Small
One twin used 5% topical minoxidil once daily on the beard for 16 months, while the other didn't.
00:34:26 Steve Small
So kind of an interesting control group.
00:34:29 Steve Small
The minoxidil twin did show greater subjective beard density and hair growth at that 16‑month mark—
00:34:34 Steve Small
—while reporting only mild local effects like dryness and some increased body hair.
00:34:41 Steve Small
And they also had some initial shedding, which you can expect with minoxidil.
00:34:45 Steve Small
But looking at the photos from this case report, even though there was subjective improvement and maybe some improved hair density, it wasn't a lumberjack‑level beard by any means.
00:34:54 Steve Small
In fact, I'd say you and I have fuller beards just on this podcast.
00:34:59 Don Weinberger
Well, yeah.
00:34:59 Don Weinberger
Well, I use mine to hide my double chin.
00:35:01 Don Weinberger
So—but—
00:35:03 Don Weinberger
So is minoxidil the answer to get a big, bushy beard?
00:35:07 Steve Small
I would say that it can improve beard hair density for some people.
00:35:12 Steve Small
And we do have limited controlled trial data and case‑level data to support that.
00:35:17 Steve Small
But it's still off‑label.
00:35:18 Steve Small
And we should look at long‑term outcomes and side effects before using this routinely.
00:35:23 Steve Small
I would say if somebody is considering this, it's a good opportunity for healthcare professionals to set realistic expectations.
00:35:31 Steve Small
You can review the risks and help patients make an informed decision.
00:35:34 Don Weinberger
Okay, and thank you for that.
00:35:36 Don Weinberger
And this is just the kind of audience question we love.
00:35:39 Don Weinberger
It's practical, unexpected, and a little hairy to figure out.
00:35:44 Don Weinberger
Now, if you've got an AI‑related question from this episode—or a technology rumor you want us to fact‑check—send it our way.
00:35:51 Steve Small
Yeah, and we also use your suggestions to plan our episodes.
00:35:53 Steve Small
So email us at rumorvstruth@trchealthcare.com or send us a text right from the podcast show notes.
00:36:00 Don Weinberger
And before you go, claim CE credit and access evidence‑based resources from Pharmacist’s Letter, Pharmacy Technician’s Letter, or Prescriber Insights.
00:36:09 Steve Small
And if you're not yet a subscriber or want to upgrade, save 10% with our exclusive listener code RVT1026 at checkout.
00:36:17 Steve Small
There's an easy link in the show notes.
00:36:19 Don Weinberger
And already a subscriber?
00:36:21 Don Weinberger
Tap the “Claim Credit” link in the show notes or search your CE organizer for this episode.
00:36:28 Steve Small
And join us next time, where we'll separate smart prescribing from stubborn myths around antimicrobial stewardship.
00:36:34 Don Weinberger
You could say that topic is hard to resist—and that's kind of the problem, right?
00:36:40 Steve Small
Exactly.
00:36:40 Don Weinberger
So thanks for joining us on Rumor vs Truth, your trusted source for facts, where we dissect the evidence behind risky rumors and reveal clinical truths.
00:36:48 Don Weinberger
See you next time.
Don Weinberger, PharmD, PMSP
Co-host
Stephen Small, PharmD, BCPS, BCPPS, BCCCP, CNSC
Co-host
Matt Uhrich
Producer
John Turtle, PharmD, MBA
Guest
Vickie Danaher, PharmD
Guest
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Medication Talk
TRC Healthcare