Code & Cure

#15 - When Algorithms Know Your End-Of-Life Wishes Better Than Loved Ones

Vasanth Sarathy & Laura Hagopian

What if the person who knows you best isn’t the best person to speak for you when it matters most?

We explore a study that tested just that—comparing the CPR preferences predicted by loved ones with those predicted by machine learning. The result? Algorithms got it right more often. That surprising outcome raises tough, important questions: Why do partners misjudge? And could AI really support life-and-death decisions when seconds count?

We unpack the study’s approach in everyday terms: who was surveyed, what data fueled the models, and how three algorithms were trained using demographics, clinical records, and stated values. The twist? Basic details like age and sex turned out to be stronger predictors than deeply personal values or medical history. That finding sparks a deeper conversation about autonomy, identity, and the tension between individual dignity and data-driven generalizations.

We also dig into the practical side: advance directives, POLST forms, and the true role of a healthcare proxy. Rather than replacing human decision-makers, we imagine a partner-in-the-loop model—where AI offers guidance, not verdicts, and transparency is key. Because when emergencies hit, it's not just about having a plan—it's about making sure your voice is heard.

If this resonates, take one step today: name your proxy, talk to your doctor, and share your wishes. Then subscribe, send this episode to someone who needs it, and leave a review to help keep these critical conversations alive.

Reference: 

Machine Learning–Based Patient Preference Prediction: A Proof of Concept
Georg Starke, et al.
NEJM AI (2025)

Credits: 

Theme music: Nowhere Land, Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0
https://creativecommons.org/licenses/by/4.0/

SPEAKER_01:

Family members often make critical medical decisions in a crisis. But what if their instincts are wrong? New research shows that machine learning might be able to predict your preferences for CPR better than your partner could. Is that a future we're ready for?

SPEAKER_00:

Hello and welcome to Code and Cure. My name is Vasanth Sarathy, and I'm an AI researcher, and I'm here with Laura Hagopian. I'm an emergency medicine physician. Yeah, and I'm very excited about our topic today, because to me it's a very different way of thinking about AI. We've been talking about AI as a tool to help with automating tasks and those kinds of things, but today's topic is about AI playing a very human-like role. Honestly, when I read this paper, I thought, whoa, that's a very creative, but potentially fraught, use case for AI. So let's just dive in, right? Yeah, let's just do it. So the paper is about using a machine learning system to help with patient preferences for their end-of-life care. And Laura, jump in if I'm using the wrong terminology.

SPEAKER_01:

No, no, you're using the right terminology. It's just a little technical mumbo jumbo. Basically: can a machine learning algorithm predict whether someone would want CPR or not? And can it do a better job than someone's own partner? Because oftentimes it's the partner who ends up having to make that decision in the moment if someone's incapacitated.

SPEAKER_00:

Well, that's the key, right? They're incapacitated. In the normal course, you would just ask the person, but if for whatever reason they're not able to answer, then the proxy or someone else makes the decision for them. Which is very tough in the moment if you're not sure. Yeah. And honestly, you would imagine that the human you've entrusted with that responsibility is going to stay true to your wishes and do the right thing in that position.

SPEAKER_01:

And that's interesting, right? Because if you've had that conversation, maybe they know what your wish actually is. But what if you haven't had that conversation? Even in this paper, they actually went to people's partners and asked, hey, do you think this person would want CPR? Yeah. And there were a decent number of people who said, I don't know. And there were a decent number of people who were wrong.

SPEAKER_00:

Yeah. So let's get into that. What they did was take a bunch of data. I think they had about 1,800 participants, and through a combination of survey questions and interviews, they were able to get a sense of various things. They got demographic information: the person's sex, age, whether they have children, marital status, that sort of thing. They got a lot of health-related information about their medical conditions and physiology, things like whether they smoke and what their body mass index is. And I think they also had information about their clinical history, their visits, and whether they got their vaccinations, that sort of thing.

SPEAKER_01:

Yeah, whether they've had their colon cancer screening, all of that.

SPEAKER_00:

So there are the demographics, the clinical information, and then a third layer: the values. They asked people a number of questions about importance, things like the importance of not being a burden, the importance of not dying alone, or the importance of having physical contact. These get at the very core values of who those people are. And beyond that, I think they asked other questions to explore other aspects of mental state: how often do they pray, how extroverted are they, how lonely are they, how open are they, and so on. So they got all this data, and of course they also got data about whether or not the person would choose certain end-of-life interventions. This is the data they used to train machine learning models. And what's interesting is that of the 1,800 participants, about 800 were there with partners. So they were able to ask the partners about the potential end-of-life decisions as well. They got both the patient's answer and the partner's answer.
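To make those three layers concrete, here is a minimal sketch of how that kind of survey data could be organized. All of the field names below are illustrative assumptions, not the study's actual variables:

```python
# Hypothetical grouping of the survey features described above.
# Every name here is an illustrative assumption, not the study's actual variable.
feature_layers = {
    "demographics": ["age", "sex", "has_children", "marital_status"],
    "clinical": ["smoking", "body_mass_index", "chronic_conditions",
                 "vaccinations", "cancer_screening"],
    "values": ["importance_not_being_burden", "importance_not_dying_alone",
               "importance_physical_contact", "prayer_frequency",
               "extraversion", "loneliness", "openness"],
}

# The prediction target is the participant's own stated preference (e.g. wants_cpr),
# collected alongside the partner's guess for the roughly 800 participants with partners.
for layer, fields in feature_layers.items():
    print(f"{layer}: {len(fields)} example features")
```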

SPEAKER_01:

And the AI's answer.

SPEAKER_00:

Well, not the AI's answer yet, right? That was just the data collection piece. What they did with the AI was train three models. One was trained basically on demographic information, plus where people live and that sort of thing. One was trained on that demographic information plus all the clinical factors we talked about before. And then a third model had the basic demographic information, the clinical information, and all the personalized things about their values and personal dispositions. They used standard machine learning techniques to train these models, and they tested them against the patients' own answers and also compared them with the patients' partners' answers. And what was striking was that, on average, all three models were more accurate than the patient's human partner. I think the human partner had an accuracy of around 58 percent, which seems, to me at least, awfully low. The basic model that used just demographic information was above 60 percent, and the model that used all of the personalized information, I believe, was above 70 percent. That blew my mind, that we would see this level of difference. We're used to situations where the human is treated as always right and the AI systems are getting better and better, getting closer to the human. Here we have the flipped problem: the AI seems to be way better at predicting this. And in fact, they did a sort of explainability study to see which features or aspects of the patient most influenced the accuracy of the results. So what were the factors? Demographics was the key thing, which is age, sex, things you can't change, right? How many kids someone has, yeah. Maybe some of those things you can change, but age and sex you can't, really. And that's the kind of information that was the most predictive.
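As a rough illustration of the comparison just described, here is a minimal sketch in Python with scikit-learn, assuming the survey data sits in a flat table with numerically encoded columns. The file name, every column name, the random-forest choice, and the handling of "I don't know" responses are all assumptions for illustration, not the paper's actual pipeline:

```python
# Minimal sketch of a nested-model comparison like the one described above.
# Not the paper's code: the CSV, column names, and model choice are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("survey_data.csv")  # one row per participant (hypothetical file)
y = df["wants_cpr"]                  # participant's own stated CPR preference (1/0)

demographic = ["age", "sex", "has_children", "marital_status"]
clinical = demographic + ["smoking", "body_mass_index", "chronic_conditions"]
full = clinical + ["importance_not_being_burden", "importance_not_dying_alone",
                   "prayer_frequency", "extraversion", "loneliness"]

# Train and cross-validate one model per nested feature set.
for name, cols in [("demographics only", demographic),
                   ("+ clinical", clinical),
                   ("+ values", full)]:
    model = RandomForestClassifier(random_state=0)
    acc = cross_val_score(model, df[cols], y, cv=5, scoring="accuracy").mean()
    print(f"{name:>20}: {acc:.2f}")

# Baseline: how often the partner's guess matched the participant's answer,
# here counting "I don't know" responses as mismatches (one possible convention).
partner_acc = (df["partner_prediction"] == y).mean()
print(f"{'partner baseline':>20}: {partner_acc:.2f}")

# In the spirit of the explainability analysis mentioned above: which features
# the full model leans on most (the paper found demographics like age and sex dominant).
model = RandomForestClassifier(random_state=0).fit(df[full], y)
for feat, imp in sorted(zip(full, model.feature_importances_), key=lambda t: -t[1])[:5]:
    print(feat, round(imp, 3))
```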

SPEAKER_01:

Not the clinical history, not the values or personalization. The demographics in and of themselves.

SPEAKER_00:

Those were the strongest of all of these pieces, which again is another incredible thing, and it gets to questions of free will and all of that.

SPEAKER_01:

I know, this is sort of mind-blowing. It feels like we're in murky ethical territory, but I'm also sitting here thinking, oh my gosh, people's partners don't know what they want. Right. And I suppose what you want can change moment to moment with a new diagnosis or things like that. But the rate at which partners correctly knew whether or not someone wanted CPR is just strikingly low. Yeah.

SPEAKER_00:

And it's not clear to me, I don't know if they measured this, and it might be a question for further inspection, but I don't know if they looked at whether, and to what degree, the partners had had prior conversations with the patient about these issues. Or was it just presented as: here's my partner, they're part of the study, what would they want? Versus having had a prior conversation about some of these pieces, which I would assume would raise their accuracy, because they would get it right if they already knew this is what the patient wanted. So I find these aspects very interesting, and that's one of the reasons I wanted to talk about this paper. It also hits upon many core human ethical concerns that one has with any AI technology. Here we're talking about respecting personal autonomy: these people have a preference, and that's their autonomy. Being able to respect that and get it right is a huge piece. But it's not just about the patient. There's another stakeholder here, the patient's partner, who is going to have to deal with the aftereffects of the choice, and they're in play as well. So it's not just about a machine often making the more correct choice. It's more challenging than that: the system can't just be correct, it also has to be respectful of the stakeholders involved.

SPEAKER_01:

This is an interesting one, because we know what the patient wanted in this scenario, in this paper, but in the real world, you might never find out what they actually wanted. You may never know the answer. Well, maybe you do, if they survive. I suppose that's true.

SPEAKER_00:

Let's hope we know the answer.

SPEAKER_01:

But there are a lot of situations where you may not know the answer, which is difficult for a partner to deal with if they're in that moment and have to make that call. But at the same time, what if the machine learning algorithm gave a false negative? Or a false positive? What if someone didn't get CPR and they would have wanted it? Or what if someone wouldn't have wanted CPR and we did it to them anyway? That's, you know, not good.

SPEAKER_00:

Right, right. The survey takes into account a lot of other factors about values and such, but there's one thing: those values change over time. They're not static.

SPEAKER_01:

But it's interesting, because a lot of the demographic stuff we were talking about that feeds into this is static, right? Your sex, your age, your education level, your language. Some things can be adjusted, like your partnership status, having children, et cetera. But a lot of those things are static. So your preferences can change, but the main predictors here don't change that much.

SPEAKER_00:

Yeah. I don't know if age mattered more than the others, but with age I can imagine you could have a certain viewpoint when you're young and a different viewpoint when you're older. Yeah, absolutely. But you're right about the others, they don't change, so that's interesting as well. There are also other cultural pieces, right? To what degree does someone's religious background require a certain viewpoint about these end-of-life decisions? I don't know that the survey took that into account. And honestly, I don't know to what degree the participants were culturally varied in that sense either.

SPEAKER_01:

This was done in one location, in Sweden, right? Switzerland, yeah. That's right.

SPEAKER_00:

But yeah, I think there are obviously some other factors, which is all to say that the human partner is still important. And I don't think, at least, that we want a situation where the human partner is just replaced. Now, one of the arguments made in the paper, and the paper acknowledges all of this, by the way, is that the human partner might experience quite a bit of trauma from having to make that decision in the first place. Absolutely. And afterwards, right. So having them lean on an AI system or a machine learning system might be a good thing. That's one way to look at it: to have them offload some of that burden to a system that potentially takes into account things they hadn't thought about in that moment, things that, had they given them thought, might lead them to agree with the machine. So it's not clear to me that having a machine involved is necessarily bad. It could actually be helpful for the human.

SPEAKER_01:

Right. And we talk about human in the loop all the time. This is a different context for that: it's the partner in the loop, where you don't want the AI making the decision alone. That feels sort of icky, right? And if you have both the machine learning algorithm and the partner, do you choose between the two? Do you let the two work together? It's an odd thing to think about.

SPEAKER_00:

Yeah. And how do you make that available? Presumably the final decision maker is not going to be the machine; it's going to be the human partner, at least in our system as things stand. So maybe it's a tool the human partner can choose to use or not use as they're guided toward their final decision. Or are we suggesting that we take the partner out of the equation completely and have the AI system substitute for the patient, as a sort of patient surrogate? Which is the thing. Yeah, although then it's weird.

SPEAKER_01:

It's weird because there's an ick component to it: do we not have free will? And wouldn't our proxy take into account more of the context, or have more knowledge of us? I mean, they definitely showed in this article that partners don't know. But my big takeaway from that was: why don't partners know? I read this and thought, this is a very interesting article, that they can predict this, but why the heck don't partners know what patients want?

SPEAKER_00:

Great point. And to me, what you're saying sounds like this: as much as we're talking about how AI is very good at this, we're also effectively saying that humans are really bad at this. Yeah, that's exactly the conclusion.

SPEAKER_01:

People don't want to talk about end-of-life stuff. So I was reading this and thinking, geez, okay, everyone should have a healthcare proxy, whether that's their partner or not. They should have someone who can make decisions when they're incapacitated, when they're not able to do so. And they should also talk to their healthcare proxy, talk to their partner, talk to their provider about what they would want if they were incapacitated. This is especially true for older individuals, but you never know what happens if you're young and in a car crash. What would you want? Specifying that ahead of time and letting people know is a hard conversation to have, and people don't want to have it. Providers, well, primary care providers, will do this. There are portable medical orders where you can specify: hey, if I were in a situation where I was incapacitated, I would or wouldn't want CPR. I would or wouldn't want a breathing tube. I would or wouldn't want other things like fluids or antibiotics. Or I would want to focus on comfort, get oxygen and suction, but allow a natural death. Or I would or wouldn't want a feeding tube. Those are all potentially awkward, difficult conversations to have, but also really important ones to have at the same time.

SPEAKER_00:

Well, okay, that's an interesting point, because one reaction I have is: you could just ask the patient those questions and write down the answers. Well, that's exactly what you're supposed to do. Right. But then what is the purpose of having the partner there? To me, at least, it seems like the human partner is there to fill in the gaps, for when you're in a situation where none of those existing patient answers fit nicely.

SPEAKER_01:

Oh, well, first of all, people may not have that form filled out. It may not be on file, right? That's something you fill out with your provider; you both sign it and put it in the record. But what if it's not filled out? Or what if your preferences change over time? Maybe you wouldn't have wanted CPR for something related to your cancer, but if you were in a car accident, you would be okay with it, something like that where the situation or circumstance has changed. So you kind of want a multi-pronged approach here, and you want to make sure that both your provider and your proxy are in the loop on what you might want done. These forms also don't cover everything. They don't cover "in X, Y, Z situation, this is what I would want done." And that's why a proxy is very important to have. They're helping make those day-to-day decisions, and not all of those can be specified on the form. It covers a limited number.

SPEAKER_00:

Yeah. So it seems to me that the form is the first line of attack for a provider to decide whether or not a patient wants this.

SPEAKER_01:

Oh, the provider's not deciding. The patient's deciding. Right. The patient's deciding, together with the provider.

SPEAKER_00:

No, no, no, no. Well, that's exactly why you have a proxy. Yes. So the proxy comes in when there's nuance to the situation.

SPEAKER_01:

Well, yeah, they're going to come in no matter what in these situations, because there's always going to be some sort of nuance to it. Yes. But I think one of the important things to note is that your preferences can change over time, and a lot of people haven't filled out these forms. So sure, a form could exist. Is it on someone's record? Who knows?

SPEAKER_00:

Yeah.

SPEAKER_01:

Which is why, when I read this paper, I thought: ooh, people have not talked to their partners or their proxies about what they want.

SPEAKER_00:

That's what it is. That's exactly right. Because in a given situation, there is always nuance. So you have to rely on that proxy to actually reason about that situation against what they believe the patient would want. And unless the patient and the proxy have had those conversations, the proxy won't be able to reason that way at the time they need to.

SPEAKER_01:

Right. I mean, I'm gonna use you as an example.

SPEAKER_00:

Oh no.

SPEAKER_01:

Do you have one of these forms on file with your doctor that says what you would want in this type of situation?

SPEAKER_00:

Maybe not.

SPEAKER_01:

I don't think so. And then, like, have you talked to your partner or your healthcare proxy about what you would want if you were incapacitated?

SPEAKER_00:

No.

SPEAKER_01:

Well, that's the thing. You're like everybody else, though. Right. Yeah. And so, what would your partner do in that kind of situation? Would they say, I'm not sure, so let's pull the plug? Or, I'm not sure, let's just do CPR? It feels like maybe we should do something, so let's just do it. It's really difficult for the partner in the moment to make that choice on your behalf when they don't know what your preferences are. And I think that's the point: we need to talk to our proxies about what our preferences are. Yeah. That was my main takeaway from this article: we're not doing a good enough job of discussing end-of-life care with our proxies.

SPEAKER_00:

Yes, I completely agree with that. And the degree to which AI should or should not be used is almost a separate question. What I found interesting from the study was that a lot of the partners' answers were "I don't know." Now, what percentage of those "I don't know" answers could be turned into a yes or a no if the proxies had actually had conversations beforehand? And then, for whatever is left over as truly "I don't know," maybe those are the cases where we rely on a machine learning tool or something else to give us more information, to say, okay, this person also said this and this, so they probably would have wanted that.

SPEAKER_01:

Yeah, it's interesting that they allowed "I don't know" answers, because when you're in that situation, there's no "I don't know." Either you do CPR or you don't. You don't get to say "I don't know." It's a one or a zero, a yes or a no. So the model wasn't allowed to say "I don't know," but the human partners were. I think that was an interesting piece of this study, because in the real world you're stuck deciding, even if you don't know. Right. Yeah.

SPEAKER_00:

Right. In the real world, you're more like the model in that sense. Yeah. So many interesting questions here, and it does require a fair degree of self-reflection, at least for me, coming out of this: maybe I need to have these conversations and get my wishes clear. Now, is there a way to do that? Are there forms? Is there a framework for having this conversation with the proxy, or is that something we should look into and figure out as a group?

SPEAKER_01:

Yeah, I mean, there are these forms that you fill out with your provider called POLST forms. They're portable medical orders where you can address all the things I was listing: CPR, breathing equipment, feeding tubes, treatments like IV fluids, that kind of stuff. But you could also just sit down and have a conversation with your proxy, so they know what you would want if you weren't able to make a decision in that moment.

SPEAKER_00:

Right, right. Yeah, this is great. I've just blown your mind. You have. I've left you speechless. Well, like I said, I'm coming from a place where, so far, we've been talking about AI in this very mechanical way: there's a task humans get really annoyed doing, and AI can come and do it for you. Is it as good as humans? No, but we're getting there, and we have to make sure it's trustworthy, and so on. That's the general narrative of health and AI. Now, in my view, and thanks to you I was able to see this, AI is saying: hey, look, here are the gaps in human thinking, or maybe not human thinking but human behavior, and AI is just exposing them. Here's one example where the proxies are potentially not having conversations, which is why you have this poorly performing proxy. Or it means they don't know their partners well enough to answer this question. But the AI is the one doing the exposing. So it's less about AI as a tool, or AI being used to replace the human. It's much more that it's informing us that the humans need to do better, which I think is wonderful in a sense.

SPEAKER_01:

Yeah, I think it's definitely a different angle on this one.

SPEAKER_00:

Yeah.

SPEAKER_01:

So I think we can end here, and I hope that everyone listening has a healthcare proxy or gets one, and has a discussion about end-of-life care decisions with them and with their provider.

SPEAKER_00:

Agreed.

SPEAKER_01:

Thanks for joining us. We'll see you next time on Code and Cure.