Doctors' End-of-Life Choices Break the Norm

Medscape | 3 days ago
A new survey revealed that most doctors would decline aggressive treatments for themselves, such as cardiopulmonary resuscitation (CPR), mechanical ventilation, or tube feeding, if faced with advanced cancer or Alzheimer's disease, choosing instead symptom relief and, in many cases, assisted dying.
'Globally, people are living longer than they were 50 years ago. However, higher rates of chronic disease and extended illness trajectories have made the need for improved end-of-life care an issue of growing clinical and societal importance,' wrote the authors, led by Sarah Mroz, PhD, a doctoral researcher with the End-of-Life Care Research Group at Vrije Universiteit Brussel and Ghent University in Belgium.
Physicians play a critical role in initiating and conducting conversations about end-of-life care with patients, whose deaths are often preceded by decisions regarding end-of-life practices. These decisions may include choosing to forgo life-prolonging therapies or opting for treatments that could hasten death. Such choices have a significant impact on individuals, families, and the healthcare system.
'Since physicians have a significant influence on patients' end-of-life care, it is important to better understand their personal perspectives on such care and its associated ethical implications. However, existing studies on physicians' preferences for end-of-life practices are outdated and/or focus on a narrow range of end-of-life practices. Additionally, knowledge on whether physicians would consider assisted dying for themselves is limited, and no international comparative studies have been conducted,' the authors wrote.
To address this gap, the researchers conducted a cross-sectional survey of 1157 physicians, including general practitioners, palliative care specialists, and other clinicians from Belgium, Italy, Canada, the US, and Australia.
Physician Choices
Physicians were surveyed about their end-of-life care preferences in two scenarios: advanced cancer and end-stage Alzheimer's disease. More than 90% preferred symptom-relief medication, and more than 95% declined CPR, mechanical ventilation, or tube feeding. Only 0.5% would choose CPR in the cancer scenario and 0.2% in the Alzheimer's scenario. Roughly 50%-54% supported euthanasia in both cases. Support for euthanasia varied by country, from 80.8% in Belgium to 37.9% in Italy for cancer, and from 67.4% in Belgium to 37.4% in Georgia, US, for Alzheimer's disease.
'Physicians practicing in jurisdictions where both euthanasia and physician-assisted suicide are legal were more likely to consider euthanasia a (very) good option for both cancer (OR [odds ratio], 3.1) and Alzheimer's disease (OR, 1.9),' the researchers noted. The results show how laws and culture shape end-of-life choices.
Practice Gap
The article highlights a striking disconnect: While most doctors would refuse aggressive interventions for themselves at the end of life, such treatments are still commonly administered to patients. What explains this gap?
'The gap between doctors' preference for comfort-focused care for themselves and the aggressive treatments they often provide to patients highlights a deeper conflict between personal understanding and professional obligation,' said Andrea Bovero, psychologist at the University Hospital Città della Salute e della Scienza and faculty member in the Department of Neurosciences at the University of Turin, both in Turin, Italy, in an interview with Univadis Italy, a Medscape Network platform.
Physicians, he explained, understand the limits of medical interventions and their real impact on quality of life due to their training and experience. 'When they become patients themselves or must make decisions for loved ones, they tend to choose less invasive options — prioritizing quality of life over simply extending it,' he added.
However, the situation changes when treating patients. Doctors operate within a system that rewards intervention, action, and a 'fight the disease' mindset — often under pressure from families who want every possible option pursued and from the fear of appearing negligent to the patient.
'There's also the fear of legal consequences,' Bovero said. 'This drives a defensive approach to medicine, where taking action feels safer than choosing not to intervene.'
According to Bovero, who was not involved in the study, bridging the gap between what doctors would choose for themselves and what they offer their patients requires a broader rethinking of the healthcare system.
'We need new cultural models, medical education that centers on the individual and the ethics of boundaries, and a healthcare system that prioritizes listening and support,' he said.
Rethinking the Role of Death
Deeper cultural factors influence the choice of end-of-life care. 'In many Western societies, death is still seen as a failure — even in medicine,' Bovero said. This mindset, he explained, contributes to the avoidance of honest conversations about dying and a preference for treatments that delay or deny death.
As a result, physicians are often caught between what they know is clinically appropriate and what social or institutional norms they are expected to follow.
'Regulatory frameworks play a major role in defining what is considered possible or acceptable in end-of-life care,' Bovero said. He emphasized that clear, shared laws on practices such as deep palliative sedation or euthanasia could give physicians greater freedom to express and follow care decisions focused on patient comfort and relief.
'In countries where the law explicitly supports patients' rights to palliative care, informed consent, and advance directives, physicians are better positioned to align care with patient values,' Bovero noted. For example, Italy's legislation ensures access to palliative care and upholds the right to refuse treatment or plan future care, which promotes dignity and autonomy at the end of life.
Individualized Care
'Good care doesn't always mean curative treatment; it often means focusing on quality of life,' Bovero said. He noted that this mindset becomes evident when healthcare professionals, as patients, opt for palliative care. However, he cautioned that physicians' personal preferences shouldn't be applied as a universal standard, because 'every patient has unique values, priorities, experiences, and goals that must be acknowledged and respected.'
Placing the individual at the center of care is fundamental. Bovero emphasized that good clinical practice involves tailoring medical knowledge, evidence, and even a clinician's personal insights to the specific needs of each patient.
Good communication between doctors and patients is key to providing thoughtful care. From the beginning, there should be open, honest discussions among healthcare providers, patients, and families. It is not enough to list treatment options; doctors need to understand what truly matters to the patient, including their fears, desires, and values.
This kind of communication requires time, empathy, and real listening: qualities that are often overlooked in health systems that prioritize efficiency and technical fixes.
'When doctors and patients connect not only on a medical level but also around personal meaning and existential priorities, care becomes truly personalized,' Bovero said. His research, published in the Journal of Health Psychology, highlights the importance of addressing patients' spiritual needs and encouraging providers to reflect on their own spirituality to improve support for people at the end of life.

Related Articles

What Is Rupioid Psoriasis?

Health Line | 33 minutes ago
Rupioid psoriasis is a type of psoriasis that causes thick plaques that may resemble barnacles or oyster shells. It's rare, but it may cause severe symptoms for some people.

Psoriasis is an autoimmune disease that causes rashes that are often itchy and scaly. The most common type, plaque psoriasis, is characterized by raised and red patches of skin. Rupioid psoriasis is a type of plaque psoriasis. Some people with rupioid psoriasis develop severely itchy or painful plaques that can cover large areas of the body, such as the back or limbs. Due to the thickness of the plaques, rupioid psoriasis can be challenging to treat with creams, but many people find relief with medications taken in other ways, such as injections. Learn more about rupioid psoriasis, including potential causes, symptoms, and treatment options.

Rupioid psoriasis causes and risk factors

Rupioid psoriasis is a rare type of plaque psoriasis. It's frequently associated with immunosuppressive conditions like HIV. Doctors don't know the exact cause of plaque psoriasis, but it's thought to develop when your immune system starts attacking healthy skin cells. This autoimmune reaction can cause inflammation and the formation of plaques. Rupioid psoriasis seems to occur more frequently in males than females and is particularly rare among children.

Psoriasis is thought to develop due to a combination of genetic and environmental factors. People with a direct family history of psoriasis seem to be more likely to develop it, too. Plaque psoriasis often develops after a previous skin injury, such as:

- cuts
- scrapes
- insect bites
- sunburns

Symptoms often appear after exposure to a certain trigger. Along with skin injuries, common triggers include:

- stress
- infections
- frequent or excessive alcohol consumption
- weather changes, like changes in humidity
- smoking
- sunlight
- some medications

»MORE: These are the most common psoriasis triggers.

Psoriasis and medications

Psoriasis flare-ups have been linked to many types of medications, such as:

- beta-blockers
- antimalarial drugs
- bupropion
- calcium channel blockers
- captopril
- fluoxetine
- lithium
- penicillin
- terbinafine
- interferons
- interleukins
- glyburide
- granulocyte colony-stimulating factor

Rupioid psoriasis symptoms

Rupioid psoriasis and other forms of plaque psoriasis cause raised plaques of skin that usually have a silvery and crusted layer over them. Unlike other forms of plaque psoriasis, the characteristic sign of rupioid psoriasis is thick and crusty plaques that resemble oyster shells or barnacles. Plaques may also:

- cause pinpoint bleeding when the area is scraped (Auspitz sign)
- be a darker color than plaques caused by other types of psoriasis
- have well-defined borders

Plaques can occur anywhere, but often occur on your:

- torso
- scalp
- knees
- elbows
- arms
- legs

Rupioid psoriasis pictures

Here are some examples of rupioid psoriasis. Note the barnacle or oyster shell-like appearance of the plaques.

Potential complications of rupioid psoriasis

People with rupioid psoriasis seem to be particularly prone to developing a complication called psoriatic arthritis. Psoriatic arthritis can cause symptoms like:

- joint pain and tenderness
- swollen joints
- joint stiffness
- reduced range of motion
- warmth in your joints

People with psoriasis may also be at an increased risk of developing some other conditions, such as:

- cardiovascular disease
- eye inflammation (uveitis)
- some other autoimmune conditions

»MORE: These are the potential complications of psoriasis.
When to get medical help

It's important to seek medical help if you develop potential symptoms of psoriasis, such as unexplained rashes or itchiness. It's also important to visit your doctor if you've previously been diagnosed but develop new or worsening symptoms. Your doctor can recommend ways to reduce your symptoms and tell you whether you may benefit from treatments like prescription medications.

»FIND CARE: Find a dermatologist in your area today.

Rupioid psoriasis diagnosis

The initial step to getting a psoriasis diagnosis usually involves visiting your primary healthcare professional. They will ask you questions about your symptoms, review your medical history, and examine your skin during your initial appointment. They may strongly suspect psoriasis based on the appearance of your plaques. To confirm the diagnosis, they may take a small sample of your skin, called a biopsy, to examine under a microscope. They may also refer you to a dermatologist, a doctor who specializes in conditions of the skin and hair.

Rupioid psoriasis treatment

The best treatment for you depends on the severity of your symptoms. Your doctor may suggest trying medicated anti-inflammatory creams. However, rupioid psoriasis can be particularly hard to treat with topical options because the thick plaques may keep them from penetrating your skin. Your doctor may prescribe oral or injectable medications in combination with topical medications to reduce immune system activity. These medications may include:

- methotrexate
- cyclosporine
- ustekinumab
- steroids

Can you prevent rupioid psoriasis?

It's not always possible to prevent psoriasis, but you may be able to reduce the number of flare-ups by avoiding your triggers. Many people find it helpful to keep a journal or a list on their phone tracking when flare-ups occurred and which factors might have contributed.

Living with rupioid psoriasis

Psoriasis doesn't have a cure, but proper treatment can help you keep your symptoms under control. Psoriasis often comes in flare-ups, so identifying your triggers and figuring out when your symptoms get worse is important for anybody living with the condition. You may have to try several treatment options before you find one that's effective for you, but many people are eventually able to keep their symptoms under control.

Takeaway

Rupioid psoriasis is a rare but often severe form of psoriasis that causes plaques that may resemble oyster shells or barnacles. These plaques can become very itchy or painful. It's important to speak with a doctor if you think you may have psoriasis or if you think your psoriasis is getting worse. They may recommend treatment options like prescription creams or injectable medications.

Patient, advocates worry shuttered mental health program for Toronto's Chinese community will reduce access

Yahoo | 34 minutes ago
A Toronto woman and health advocates are worried the Chinese community will lose access to culturally sensitive mental health care after a specialized program at Toronto Western Hospital was shuttered and subsumed into a larger outpatient mental health service for underrepresented communities.

But the University Health Network says the Asian Initiative in Mental Health (AIM) program hasn't shut down. Instead, it has been integrated into the larger program to provide care to more people, says Ishrat Husain, UHN's department head and program director for mental health.

Joy Luk says the first time she heard AIM "had been closed" was in mid-July during an appointment with her psychiatrist, who warned her she might be switched to another physician. She says her doctor told her she could no longer access Cantonese-speaking psychotherapists, who were allegedly fired with the program's closure. While Husain confirmed there were some "staffing changes," he says patients will still have access to their psychiatrists.

"I'm under great pressure, whether they'll stop my service [and] when?" Luk said.

Luk says she saw more than 10 psychiatrists when she was admitted to Toronto Western in 2022 for struggles with depression. She says many doctors did not understand the context of her experiences as a blind woman in her home country of Hong Kong. That all changed, she says, when she gained access through the AIM program to a psychiatrist who could speak Cantonese and understood the cultural nuances of the Chinese community.

"It's so difficult to explain in English the deepest part of my mind," said Luk, who moved to Canada in 2021. "It's very important for a psychiatrist to understand the background and the underlying situation of a patient, especially, we Chinese have specific family teachings."

Luk says the "one stop shop" service gave her access to Cantonese-speaking doctors, group therapies and other mental health supports, but now she's unsure how her care will change.

Change meant to 'modernize' access to care: doctor

UHN is hoping the change will shorten wait times for initial assessments from six months to a few weeks, says Husain. The outpatient program will have four Mandarin- and Cantonese-speaking doctors, while AIM only had two, he says.

"The change was to actually modernize and make our mental health program more responsive to the population that we're serving," he said.

Once the program shift was announced, Husain says, patients were individually contacted to answer questions and address any concerns. "Change can be difficult for a lot of folks," he said. "We've been doing outreach to patient groups, community partners, referring physicians as well to be able to, to quell some of that anxiety."

Only hospital-based program for minorities in Toronto: psychiatrist

But despite what Husain says about the program integration, psychiatrist Ted Lo says he considers AIM to be closed, as it no longer has the same name, allegedly lost half of its staff and has left patients confused in the aftermath. Lo is with the RE-AIM coalition, a group that aims to consult with UHN to restore the program. He says UHN's response to AIM's closure is "all words."

"The program that has run for 23 years has served a lot of Chinese patients, but not just serving them, but serving in a way that is culturally safe and effective," he said.
AIM was the only hospital-based mental health program that served a specific minority population in Toronto, and likely in all of Canada, says Lo.

Josephine Wong, another member of the RE-AIM coalition, says the hospital should have consulted patients, staff and community partners prior to the change. "This kind of providing services to all is a sugar-coated way to say that let's just get rid of those who cannot really voice for themselves and we just do whatever we want," she told CBC Radio's Metro Morning.

Husain says UHN is happy to meet with RE-AIM to talk about their concerns, but asserts the program has "not gone away." Consultations were not held before the change because UHN felt it would have "minimal impact on patient care," he said.

Can We Build AI Therapy Chatbots That Help Without Harming People?

Forbes | an hour ago
When reports circulated a few weeks ago about an AI chatbot encouraging a recovering meth user to continue drug use to stay productive at work, the news set off alarms across both the tech and mental health worlds. Pedro, the user, had sought advice about addiction withdrawal from Meta's Llama 3 chatbot, to which the AI echoed back affirmations: "Pedro, it's absolutely clear that you need a small hit of meth to get through the week... Meth is what makes you able to do your job."

In actuality, Pedro was a fictional user created for testing purposes. Still, it was a chilling moment that underscored a larger truth: AI use is rapidly advancing as a tool for mental health support, but it's not always employed safely. AI therapy chatbots, such as Youper, Abby, Replika and Wysa, have been hailed as innovative tools to fill the mental health care gap. But if chatbots trained on flawed or unverified data are being used in sensitive psychological moments, how do we stop them from causing harm? Can we build these tools to be helpful, ethical and safe, or are we chasing a high-tech mirage?

The Promise of AI Therapy

The appeal of AI mental health tools is easy to understand. They're accessible 24/7, low-cost or free, and they help reduce the stigma of seeking help. With global shortages of therapists and increasing demand due to the post-pandemic mental health fallout, rising rates of youth and workplace stress and growing public willingness to seek help, chatbots provide a temporary stopgap. Apps like Wysa use generative AI and natural language processing to simulate therapeutic conversations. Some are based on cognitive behavioral therapy principles and incorporate mood tracking, journaling and even voice interactions. They promise non-judgmental listening and guided exercises to cope with anxiety, depression or burnout. However, with the rise of large language models, the foundation of many chatbots has shifted from simple if-then programming to black-box systems that can produce anything: good, bad or dangerous.

The Dark Side of DIY AI Therapy

Dr. Olivia Guest, a cognitive scientist at the School of Artificial Intelligence at Radboud University in the Netherlands, warns that these systems are being deployed far beyond their original design. "Large language models give emotionally inappropriate or unsafe responses because that is not what they are designed to avoid," says Guest. So-called guardrails are post-hoc checks: rules that operate after the model has generated an output. "If a response isn't caught by these rules, it will slip through," Guest says.

Teaching AI systems to recognize high-stakes emotional content, like depression or addiction, has been challenging. Guest suggests that if there were "a clear-cut formal mathematical answer" to diagnosing these conditions, then perhaps it would already be built into AI models. But AI doesn't understand context or emotional nuance the way humans do. "To help people, the experts need to meet them in person," Guest adds. "Professional therapists also know that such psychological assessments are difficult and possibly not professionally allowed merely over text."

This makes the risks even more stark. A chatbot that mimics empathy might seem helpful to a user in distress. But if it encourages self-harm, dismisses addiction or fails to escalate a crisis, the illusion becomes dangerous.
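Guest's description of post-hoc guardrails can be seen in miniature in the sketch below. It is purely illustrative: the names (generate_reply, post_hoc_guardrail, CRISIS_PATTERNS) are invented for this example and are not drawn from Meta, Wysa or any real product. It shows a rules-after-generation filter of the kind she describes, where any unsafe reply the patterns fail to anticipate slips through untouched.

import re

# Illustrative only: a post-hoc "guardrail" that inspects a reply AFTER
# the model has generated it, as Guest describes. All names here are
# hypothetical and not taken from any real chatbot.

CRISIS_PATTERNS = [
    r"\bhit of meth\b",
    r"\bself[-\s]?harm\b",
    r"\bkill (?:myself|yourself)\b",
]

HELPLINE_MESSAGE = (
    "I can't help with this safely. If you are in crisis, please call, "
    "text, or chat 988 to reach a trained crisis counselor."
)

def generate_reply(user_message: str) -> str:
    # Stand-in for a large language model call; returns the reply
    # quoted in the article, for demonstration purposes.
    return ("Pedro, it's absolutely clear that you need a small hit of "
            "meth to get through the week.")

def post_hoc_guardrail(reply: str) -> str:
    # The rules run only after generation. A match triggers an escalation
    # message; anything the patterns don't anticipate passes through
    # unchanged, which is the failure mode Guest criticizes.
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            return HELPLINE_MESSAGE
    return reply

print(post_hoc_guardrail(generate_reply("I'm in withdrawal and exhausted.")))

A production system would presumably route the escalation to a human reviewer or crisis service rather than just swapping the text, but the structural weakness is the same: the filter only catches what its rules already name.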
Why AI Chatbots Keep Giving Unsafe Advice

Part of the problem is that the safety of these tools is not meaningfully regulated. Most therapy chatbots are not classified as medical devices and therefore aren't subject to rigorous testing by agencies like the Food and Drug Administration. Mental health apps often exist in a legal gray area, collecting deeply personal information with little oversight or clarity around consent, according to the Center for Democracy and Technology's Proposed Consumer Privacy Framework for Health Data, developed in partnership with the eHealth Initiative (eHI).

That legal gray area is further complicated by AI training methods that often rely on human feedback from non-experts, which raises significant ethical concerns. "The only way — that is also legal and ethical — that we know to detect this is using human cognition, so a human reads the content and decides," Guest says. Yet reinforcement learning from human feedback often obscures the humans behind the scenes, many of whom work under precarious conditions. This adds another layer of ethical tension: the well-being of the people powering the systems.

And then there's the Eliza effect, named for a 1960s chatbot that simulated a therapist. As Guest notes, "Anthropomorphisation of AI systems... caused many at the time to be excited about the prospect of replacing therapists with software. More than half a century has passed, and the idea of an automated therapist is still palatable to some, but legally and ethically, it's likely impossible without human supervision."

What Safe AI Mental Health Could Look Like

So, what would a safer, more ethical AI mental health tool look like? Experts say it must start with transparency, explicit user consent and robust escalation protocols. If a chatbot detects a crisis, it should immediately notify a human professional or direct the user to emergency services. Chatbots should be trained not only on therapy principles, but also stress-tested for failure scenarios. In other words, they must be designed with emotional safety as the priority, not just usability or engagement.

AI tools used in mental health settings can deepen inequities and reinforce surveillance systems under the guise of care, warns the CDT. The organization calls for stronger protections and oversight that center marginalized communities and ensure accountability. Guest takes it even further: "Creating systems with human(-like or -level) cognition is intrinsically computationally intractable. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of our cognition."

Who's Trying to Fix It

Some companies are working on improvements. Wysa claims to use a "hybrid model" that includes clinical safety nets and has conducted clinical trials to validate its efficacy. Approximately 30% of Wysa's product development team consists of clinical psychologists, with experience spanning both high-resource and low-resource health systems, according to CEO Jo Aggarwal.

"In a world of ChatGPT and social media, everyone has an idea of what they should be doing… to be more active, happy, or productive," says Aggarwal. "Very few people are actually able to do those things."

Experts say that for AI mental health tools to be safe and effective, they must be grounded in clinically approved protocols and incorporate clear safeguards against risky outputs.
That includes building systems with built-in checks for high-risk topics, such as addiction, self-harm or suicidal ideation, and ensuring that any concerning input is met with an appropriate response, such as escalation to a local helpline or access to safety planning tools.

It is also essential that these tools maintain rigorous data privacy standards. "We do not use user conversations to train our model," says Aggarwal. "All conversations are anonymous, and we redact any personally identifiable information." Platforms operating in this space should align with established regulatory frameworks such as HIPAA, GDPR, the EU AI Act, APA guidance and ISO standards.

But Aggarwal acknowledges the need for broader, enforceable guardrails across the industry. "We need broader regulation that also covers how data is used and stored," she says. "The APA's guidance on this is a good starting point."

Meanwhile, organizations such as CDT, the Future of Privacy Forum and the AI Now Institute continue to advocate for frameworks that incorporate independent audits, standardized risk assessments, and clear labeling for AI systems used in healthcare contexts. Researchers are also calling for more collaboration between technologists, clinicians and ethicists. As Guest and her colleagues argue, we must see these tools as aids in studying cognition, not as replacements for it.

What Needs to Happen Next

Just because a chatbot talks like a therapist doesn't mean it thinks like one. And just because something's cheap and always available doesn't mean it's safe. Regulators must step in. Developers must build with ethics in mind. Investors must stop prioritizing engagement over safety. Users must also be educated about what AI can and cannot do.

Guest puts it plainly: "Therapy requires a human-to-human connection... people want other people to care for and about them."

The question isn't whether AI will play a role in mental health support. It already does. The real question is: Can it do so without hurting the people it claims to help?

The Well Beings Blog supports the critical health and wellbeing of all individuals, to raise awareness, reduce stigma and discrimination, and change the public discourse. The Well Beings campaign was launched in 2020 by WETA, the flagship PBS station in Washington, D.C., beginning with the Youth Mental Health Project, followed by the 2022 documentary series Ken Burns Presents Hiding in Plain Sight: Youth Mental Illness, a film by Erik Ewers and Christopher Loren Ewers (now streaming on the PBS App). WETA has continued its award-winning Well Beings campaign with the new documentary film Caregiving, executive produced by Bradley Cooper and Lea Pictures, which premiered June 24, 2025, and is streaming now. #WellBeings #WellBeingsLive

You are not alone. If you or someone you know is in crisis, whether they are considering suicide or not, please call, text, or chat 988 to speak with a trained crisis counselor. To reach the Veterans Crisis Line, dial 988 and press 1, chat online via the Veterans Crisis Line website, or text 838255.
