Technology · August 1, 2024

End-of-life decisions are difficult and distressing. Could AI help?

A few months ago, a woman in her mid-50s (let’s call her Sophie) experienced a hemorrhagic stroke. Her brain started to bleed. She underwent brain surgery, but her heart stopped beating.

Sophie’s ordeal left her with significant brain damage. She was unresponsive; she couldn’t squeeze her fingers or open her eyes when asked, and she didn’t flinch when her skin was pinched. She needed a tracheostomy tube in her neck to breathe, and a feeding tube to deliver nutrition directly to her stomach, because she couldn’t swallow. Where should her medical care go from there?

This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members, recalls Holland Kaplan, an internal medicine physician at Baylor College of Medicine who was involved in Sophie’s care. But the family couldn’t agree. Sophie’s daughter was adamant that her mother would want to stop having medical treatments and be left to die in peace. Another family member vehemently disagreed and insisted that Sophie was “a fighter.” The situation was distressing for everyone involved, including Sophie’s doctors.

End-of-life decisions can be extremely upsetting for surrogates, the people who have to make those calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues have been working on an idea for something that could make things easier: an artificial intelligence-based tool that can help surrogates predict what the patients themselves would want in any given situation.

The tool hasn’t been built yet. But Wendler plans to train it on a person’s own medical data, personal messages and social media posts. He hopes it could not only be more accurate at working out what the patient would want, but also alleviate the stress and emotional burden of difficult decision-making for family members.

Wendler and bioethicist Brian Earp at the University of Oxford, along with their colleagues, hope to start building their tool as soon as they secure funding for it, potentially in the coming months. But rolling it out won’t be simple. Critics wonder whether such a tool can ethically be trained on a person’s data, and whether life-or-death decisions should ever be entrusted to AI.

Live or die

Around 34% of people in a medical setting are considered to be unable to make decisions about their own care for various reasons. They may be unconscious, or unable to reason or communicate, for example. This figure is higher among older individuals—one study of people over 60 in the US found that 70% of those faced with important decisions about their care lacked the capacity to make those decisions themselves. “It’s not just a lot of decisions, it’s a lot of really important decisions,” says Wendler. “The kinds of decisions that basically decide whether the person is going to live or die in the near future.”

Chest compressions administered to a failing heart might extend a person’s life. But they might lead to a broken sternum and ribs, and the person might experience significant brain damage by the time they come around, if they ever do. Keeping a person’s heart and lungs functioning with a machine might maintain a supply of oxygenated blood to their organs—but it’s no guarantee they’ll recover, and they could develop numerous infections in the meantime. A terminally ill person might want to continue trying hospital-administered medications and procedures that might offer them a few more weeks or months. Or they might want to forgo those interventions and make themselves more comfortable at home.

Only around one in three adults in the US completes any kind of advance directive—a legal document that specifies the end-of-life care they might want to receive. Wendler estimates that over 90% of end-of-life decisions end up being made by someone other than the patient themselves. The role of a surrogate is to make that decision based on how they believe the patient themselves would want to be treated. But people are generally not very good at making these kinds of predictions. Studies suggest surrogates accurately predict a patient’s end-of-life decisions around 68% of the time.

The decisions themselves can also be extremely distressing, Wendler adds. While some surrogates feel a sense of satisfaction from having supported their loved ones, others struggle with the emotional burden, and can feel guilty for months or even years afterwards. Some fear they ended the life of their loved ones too early. Others worry they unnecessarily prolonged their suffering. “It’s really bad for a lot of people,” says Wendler. “People will describe this as one of the worst things they’ve ever had to do.”

Wendler has been working on ways to help surrogates make these kinds of decisions. Over ten years ago, he developed the idea for a tool that would predict a patient’s preferences based on a set of characteristics, such as their age, gender and insurance status. That tool would have been based on a computer algorithm trained on survey results from the general population. It may seem crude, but these characteristics do seem to influence how people feel about medical care. A teenager is more likely to opt for aggressive treatment than a 90-year-old, for example. And research suggests that predictions based on averages can be more accurate than the guesses made by family members.

In 2007, Wendler and his colleagues built a “very basic” preliminary version of this tool based on a small amount of data. That simplistic tool did “at least as well as next of kin surrogates” in predicting what kind of care people would want, says Wendler.

Now, Wendler, Earp and their colleagues are working on a new idea. Instead of crude characteristics, the team plans to build a tool that will be personalized. The team proposes using AI and machine learning to predict a patient’s treatment preferences based on their personal data, such as their medical history, along with emails, personal messages, web browsing history, social media posts or even Facebook likes. The result would be a “digital psychological twin” of a person—a tool that doctors and family members could consult, and one that could guide a person’s medical care. It’s not yet clear what this would look like in practice, but the team hopes to build and test the tool before refining it.

The team calls their tool a personalized patient preference predictor, or P4 for short. In theory, if it works as they hope, it could be more accurate than the previous version of the tool, and more accurate than human surrogates, says Wendler. It could also be more reflective of a patient’s current thinking than an advance directive, which might have been signed a decade beforehand, says Earp.

A better bet?

A tool like the P4 could also help relieve surrogates of some of the emotional burden of making such significant life-or-death decisions about their family members, decisions that can sometimes leave surrogates with symptoms of post-traumatic stress disorder, says Jennifer Blumenthal-Barby, a medical ethicist at Baylor College of Medicine in Texas.

Some surrogates experience “decisional paralysis,” and they might opt to use the tool to help steer them through the decision-making process, says Kaplan. In cases like these, the P4 could relieve surrogates of some of the burden they might be feeling, without necessarily giving them a black-and-white answer. It might, for example, suggest that a person was “likely” or “unlikely” to feel a certain way about a treatment, or give a percentage score indicating how likely the prediction is to be right.
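To make that kind of hedged output more concrete, here is a minimal, purely hypothetical sketch in Python. The P4 has not been built, and nothing below reflects the team’s actual design or data; it simply shows how a toy classifier trained on invented records could report a “likely” or “unlikely” leaning along with a confidence percentage, rather than a yes-or-no verdict.

```python
# Hypothetical illustration only. The P4 described in this article has not
# been built; this toy sketch just shows how a preference model could surface
# a hedged, probabilistic output instead of a black-and-white answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented data: each row stands in for a numeric summary of one person's
# records; each label marks whether that person chose comfort-focused care (1)
# or continued aggressive treatment (0) in a past, documented decision.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(200, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def describe_preference(features: np.ndarray) -> str:
    """Report a leaning and a confidence score, not a definitive answer."""
    p = model.predict_proba(features.reshape(1, -1))[0, 1]
    leaning = "likely" if p >= 0.5 else "unlikely"
    confidence = max(p, 1 - p)
    return (f"The model estimates this person is {leaning} to prefer "
            f"comfort-focused care (confidence: {confidence:.0%}).")

# Example: a new (also invented) patient profile.
print(describe_preference(rng.normal(size=4)))
```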

Kaplan can imagine a tool like the P4 being helpful in cases like Sophie’s, where various family members might have different opinions on a person’s medical care. In those cases, the tool could be offered to family members, ideally to help them reach a decision together.

It could also help guide decisions about care for people who don’t have surrogates. Kaplan treats patients at Ben Taub Hospital in Houston, a “safety-net” hospital that cares for people whether or not they have health insurance. “A lot of our patients are undocumented, incarcerated, homeless,” says Kaplan. “We take care of patients who basically can’t get their care anywhere else.”

These patients are often in dire straits and at the end stages of diseases by the time Kaplan sees them. Many of them aren’t able to discuss their care, and some don’t have family members to speak on their behalf. Kaplan says she could imagine a tool like the P4 being used in situations like these, to give doctors a little more insight into what the patient might want. In cases like these, it might be difficult to find the person’s social media profile, for example. But other information might prove useful. “If something turns out to be a predictor, I would want it in the model,” says Wendler. “If it turns out that people’s hair color, or where they went to elementary school, or the first letter of their last name turns out to [predict a person’s wishes], then I’d want to add them in.”

This approach is backed by preliminary research by Earp and his colleagues, who have started running surveys to find out how individuals might feel about using the P4. This research is ongoing, but early responses suggest that people would be willing to try the P4 if there were no human surrogates available. Earp says he feels the same way. He also says that, were the P4 to give a different prediction from that of a surrogate, “I’d probably defer to the human that knows me, rather than the model.”

Not a human

Earp’s feelings betray a gut instinct many others will share: that these huge decisions should ideally be made by a human. “The question is: how do we want end-of-life decisions to be made, and by whom?” says Georg Starke at the Swiss Federal Institute of Technology Lausanne. He worries about the potential for a techno-solutionist approach that turns intimate, complex and personal decisions into “an engineering issue.”

On hearing about the P4, Bryanna Moore, an ethicist at the University of Rochester, says her first reaction was: “oh no.” Moore is a clinical ethicist who offers consultations for patients, family members and hospital staff at two hospitals. “So much of our work is really just sitting with people who are facing terrible decisions… they have no good options,” she says. “What surrogates really need is just for you to sit with them and hear their story and support them through active listening and validating [their] role… I don’t know how much of a need there is for something like this to be honest.”

Moore accepts that surrogates won’t always get it right when deciding on the care of their loved ones. Even if we were able to ask the patients themselves, their answers would probably change over time. Moore calls this the “then self, now self” problem.

And she doesn’t think a tool like the P4 will necessarily solve it. Even if a person had been clear about their wishes in previous notes, messages and social media posts, it can be very difficult to know how you’ll feel about a medical situation until you’re in it. Kaplan recalls treating an 80-year-old man with osteoporosis who had been adamant that he wanted to receive chest compressions if his heart were to stop beating. But when the moment arrived, his bones were too thin and brittle to withstand the compressions. Kaplan remembers hearing his bones cracking “like a toothpick,” and the man’s sternum detaching from his ribs. “And then it’s like, what are we doing? Who are we helping? Could anyone really want this?” says Kaplan.

There are other concerns. For a start, an AI trained on a person’s social media posts may not end up being all that much of a “psychological twin.” “Any of us who have a social media presence know that often what we put on our social media profile doesn’t really represent what we truly believe or value or want,” says Blumenthal-Barby. And even if our posts did reflect our true values, it’s hard to know how they might capture our feelings about end-of-life care; many people find it hard enough to have these discussions with their family members, let alone on public platforms.

As things stand, AI doesn’t always do a great job of coming up with answers to human questions. Even subtly altering the prompt given to an AI model can leave you with an entirely different response. “Imagine this happening for a fine-tuned large language model that’s supposed to tell you what a patient wants at the end of their life,” says Starke. “That’s scary.”

On the other hand, humans are fallible, too. Vasiliki Rahimzadeh, a bioethicist at Baylor College of Medicine, thinks the P4 is a good idea, provided it is rigorously tested. “We shouldn’t hold these technologies to a higher standard than we hold ourselves,” she says.

Earp and Wendler acknowledge the challenges ahead of them. They hope to build a tool that can capture useful information about a person that might reflect their wishes, but without overstepping that person’s privacy. They want their tool to be a helpful guide that patients and surrogates can choose to use, but not a default way to give black-and-white final answers on a person’s care.

Even if they do succeed on those fronts, they might not be able to control how such a tool is ultimately used. Take a case like Sophie’s, for example. If the P4 were used, its prediction might only serve to further fracture family relationships that are already under pressure. And, if it is presented as the closest indicator of a patient’s own wishes, there’s a chance that a patient’s doctors might feel legally obliged to follow the output of the P4 over the opinions of family members, says Blumenthal-Barby. “That could just be very messy, and also very distressing, for the family members,” she says.

“What I’m most worried about is who controls it,” says Wendler. He fears hospitals misusing tools like the P4 to, for example, avoid undertaking costly procedures. “There could be all kinds of financial incentives,” he says.

Everyone contacted by MIT Technology Review agrees that the use of a tool like the P4 should be optional, and that it won’t appeal to everyone. “I think it has the potential to be helpful for some people,” says Earp. “I think there are lots of people who will be uncomfortable with the idea that an artificial system should be involved in any way with their decision making with the stakes being what they are.”
