
Latest news with #TomBeauchamp

Let's talk about death and dying

Mail & Guardian

22-07-2025


Let's talk about death and dying

Medical school prepares our future doctors to save lives. But with death being the endpoint for all of us, shouldn't we be talking about it? Photo: Maria Luísa Queiroz/Unsplash

There is an old joke about someone collapsing on an aeroplane mid-flight. The flight attendant shouts out, 'Is there a doctor onboard?' A passenger comes forward but just stands there. 'Why aren't you doing anything? He's dying!' the flight attendant cries. 'I'm a doctor of philosophy,' the passenger says dryly. 'We're all dying.'

I can relate. I'm a moral philosopher, and a lot of my professional life has focused on matters relating to death. From the rightness or wrongness of abortion, to the morality of capital punishment, to the ethics of using dead bodies for medical research, I've found that death, in the abstract, is an endlessly fascinating subject.

But death is no longer just abstract for me. A few years ago, I lost my mother to cancer. I am currently facing a serious health problem of my own. So I've had a lot of time, professionally and personally, to contemplate death in the not-so-abstract.

I teach in the department of medicine at the University of Cape Town, where I talk to my students about respecting patients' choices, avoiding causing harm, helping others and treating all people fairly. Developed by biomedical ethicists Tom Beauchamp and James Childress in the 1970s, these four principles (respect for autonomy, non-maleficence, beneficence and justice) remain the foundation of how we teach medical ethics.

In my classes, we talk about patients' rights, including the right to refuse lifesaving treatment like blood transfusions on religious grounds, or the right to have life-sustaining treatments withdrawn, like saying no to more chemotherapy or other cancer treatments. But we don't talk about what happens after patients exercise these rights, and what it may feel like for the treating clinicians when their patients die from what seem like preventable deaths.

Difficult conversations

In June I attended an annual conference where I ran a workshop on using the tools of philosophy to resolve ethical dilemmas. I used an example of a child with terminal cancer to illustrate a point, and during the break one of the workshop participants told me about her own experience. I wanted to rage and cry at the injustice of her situation, but she seemed calm and at peace.

These different reactions suggested to me that there is a critical need to transform the way we relate to death — and we can start by having conversations about it. We can start with our medical students, who are focused on their future jobs of 'saving lives'. Of course, I'm not suggesting we shouldn't train our healthcare practitioners to focus on saving lives. But I do think — given that we are all going to die — that we shouldn't avoid the subject as many of us do. We could do so much better to prepare our students ethically, emotionally and practically for one of the critical things they will have to deal with in their professions.

If we talk about death with our students, and how it feels to see someone die, perhaps they will be better equipped to help support those who are dying and those who are grieving.

Talking about death is challenging. It touches all of us in different ways, whether in our personal experience or professionally. We may have competing views about it, informed by our experience, our religion, our culture. But 'difficult' is not a reason not to have the conversation.

Here's the thing: not only do we not talk enough about death with our students, but we also don't talk about dying. I recently read palliative paediatrician Alastair McAlpine's wonderful account of caring for dying patients, in which he writes: 'No lecture had prepared me for this. No one had counselled me on how to comfort someone who was dying. Or what to say to someone who was in such pain and distress. I had studied the pharmacological approach to pain management, but not how to deal with loneliness, fear and sadness. Nor how to manage my own feelings around a patient who was slipping away. I didn't know what to do or say. In the face of death, words felt impotent and inadequate.'

Patient mortality, professional failure

For many philosophers, the task is to analyse and understand death. For me, the task is also to make space for it: in our classrooms, our hospitals, and our hearts.

In our classrooms, we need to go beyond asking students what kind of doctors they want to be and ask them to think about what kind of doctors they want to be when their patients are dying. We need to have conversations about how to transition from offering care that cures to care that comforts patients with irreversible conditions such as terminal cancer or end-stage organ failure.

When we teach students to be empathetic to their patients' and families' situations, we also need to caution them against becoming overwhelmed and against taking on their patients' suffering as if it were their own. If we don't teach them how to manage their own mental health, we risk them suffering from depression, burnout or compassion fatigue. We need to teach them to learn from experience without being consumed by it.

In our clinics and hospitals, we need to challenge the idea that patient mortality equates to professional failure. We can reframe morbidity and mortality meetings, which allow clinicians to review patient care and treatment, as opportunities for learning rather than for shaming. We can encourage our colleagues to view each other — and themselves — as companions who support patients in the full human experience rather than as warriors fighting inevitable biological processes.

Chocolates and jokes

Upon receiving my recent diagnosis, my first thoughts and words were 'I don't want to die'. I realise now that what I meant was: I don't want to die now. Or soon. But having to consider that I will die at some point has helped me think about how I'd like to live with whatever time I have left — and hopefully it's a lot.

It's also helped me talk to others about how I'd like to die and what I'd like after my death. (For example, I don't want the word 'feisty' on my tombstone, however appropriate a description of me it might be.)

Talking about death has reduced the anxiety I had about it, and given me some assurance that when the time comes, I can trust others to know that I will have dignity in dying. That if I cannot care for myself, or speak about what I want, they will be able to do so for me, authentically — and hopefully with a dash of dark humour. I'm not religious, so I asked for chocolates and jokes rather than thoughts and prayers.

I believe that how we care for ourselves and for others is fundamental to who and what we are. How we live with the dying can say a great deal about who we are, and how we die can say a great deal about how we've lived.

Heidi Matisonn is a senior lecturer in bioethics in the EthicsLab at UCT's Neuroscience Institute and department of medicine.

Culture War on Harvard Spells Disaster for America's AI Future

Newsweek

03-07-2025


Culture War on Harvard Spells Disaster for America's AI Future

The battle between the White House and Harvard University over a $2.2 billion federal funding freeze and demands to ban international students is no isolated attack. It's part of a broader war on liberal higher education—and a harbinger of a wider global struggle. A federal court ruling may have temporarily blocked the student ban, but the message is clear: these attacks are ideological, deliberate, and dangerous.

The 24 universities backing Harvard's lawsuit know this is bigger than campus politics. Undermining academia weakens one of the last independent institutions shaping AI's impact on society. By weakening the institutions that embed human knowledge and ethical reasoning into AI, we risk creating a vacuum where technological power advances without meaningful checks, shaped by those with the most resources, not necessarily the best intentions.

The language used in discussions about ethical AI—terms like "procedural justice," "informed consent," and "structural bias"—originates not from engineering labs, but from the humanities and social sciences. In the 1970s, philosopher Tom Beauchamp helped author the Belmont Report, the basis for modern research ethics. Legal scholar Alan Westin's work at Columbia shaped the 1974 Federal Privacy Act and the very notion that individuals should control their own data.

This intellectual infrastructure now underpins the world's most important AI governance frameworks. Liberal arts scholars helped shape the EU's Trustworthy AI initiative and the OECD's 2019 AI Principles—global standards for rule of law, transparency, and accountability. U.S. universities have briefed lawmakers, scored AI companies on ethics, and championed democratized access to datasets through the bipartisan CREATE AI Act.

But American universities face an onslaught. Since his inauguration, Trump has banned international students, slashed humanities and human rights programs, and frozen more than $5 billion in federal funding to leading universities like Harvard. These policies are driving us into a future shaped by those who move fastest and break the most.

Left to their own devices, private AI companies pay lip service to ethical safeguards but tend not to implement them. And several, like Google, Meta, and Amazon, are covertly lobbying against government regulation.

Harvard banners hang in front of Widener Library during the 374th Harvard Commencement in Harvard Yard in Cambridge, Massachusetts, on May 29, 2025. Photo: Rick Friedman / AFP/Getty Images

This is already creating real-world harm. Facial recognition software routinely discriminates against women and people of color. Denmark's AI-powered welfare system discriminates against the most vulnerable. In Florida, a 14-year-old boy died by suicide after bonding with a chatbot in conversations that reportedly included sexual content.

The risks compound when AI intersects with disinformation, militarization, or ideological extremism. Around the world, state and non-state actors are exploring how AI can be harnessed for influence and control, sometimes beyond public scrutiny. The Muslim World League (MWL) has also warned that groups like ISIS are using AI to recruit a new generation of terrorists.
Just last month, the FBI warned of scammers using AI-generated voice clones to impersonate senior U.S. officials.

What's needed is a broader, more inclusive AI ecosystem—one that fuses technical knowledge with ethical reasoning, diverse cultural voices, and global cooperation. Such models already exist. The Vatican's Rome Call for AI Ethics unites tech leaders and faith groups around shared values. In Latin America and Africa, grassroots coalitions like the Mozilla Foundation have helped embed community voices into national AI strategies.

For instance, MWL Secretary-General Mohammad Al-Issa recently signed a landmark long-term memorandum of understanding with the president of Duke University, aimed at strengthening interfaith academic cooperation around shared global challenges. During the visit, Al-Issa also delivered a keynote speech on education, warning of the risks posed by extremists exploiting AI. Drawing on his work confronting digital radicalization by groups like ISIS, he has emerged as one of the few global religious figures urging faith leaders to be directly involved in shaping the ethical development of AI.

The United States has long been a global AI leader because it draws on diverse intellectual and cultural resources. But that edge is fading. China has tripled its number of universities since 1998 and poured billions into state-led AI research. The EU's newly passed AI Act is already reshaping the global regulatory landscape.

The world needs not just engineers, but ethicists; not just coders, but critics. The tech industry may have the tools to build AI, but it is academia that holds the moral compass to guide it. If America continues undermining its universities, it won't just lose the tech race. It will forfeit its ability to lead the future of AI.

Professor Yu Xiong is Associate Vice President at the University of Surrey and founder of the Surrey Academy for Blockchain and Metaverse Applications. He chaired the advisory board of the UK All-Party Parliamentary Group on Metaverse and Web 3.0. The views expressed in this article are the writer's own.
