The ethics of using AI to predict patient choices

Observer | 21-06-2025
I recently attended a bioethics conference in Switzerland, where professionals from different countries met to discuss current issues in medical ethics, the main theme of this year's gathering. Among the highlights of the meeting were several talks on the use of artificial intelligence in decision-making and its ethical impact.
What caught my attention was a talk about the Personalised Patient Preference Predictor, or P4, a tool that aims to predict an individual patient's healthcare preferences using machine learning.
The idea is that in situations where a person is incapacitated — for example, found unconscious with no advance directive — the AI would comb through their digital footprint, including tweets, Instagram and Facebook posts, and possibly even emails, to infer their likely wishes. The system would then create a virtual copy of the individual's personality, known as a 'psychological twin,' which would communicate decisions to the medical team on the person's behalf.
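To make the underlying mechanism a little more concrete, here is a minimal, purely illustrative sketch of the kind of pipeline such a system might rest on: training a text classifier on labelled social-media posts and using it to guess a care preference for a new post. The posts, labels, and preference categories below are invented for illustration only; the actual P4 design was not specified in the talk and would be far more complex.

```python
# Toy sketch: predict a (hypothetical) care preference from short text posts.
# All data and category names here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I never want to be kept alive on machines",
    "Quality of life matters more to me than living longer",
    "Doctors should do everything possible, always fight on",
    "I trust my family to decide what is best for me",
    "Please try every treatment available, never give up",
    "If I cannot recognise my loved ones, let me go peacefully",
]
labels = [
    "comfort_care",
    "comfort_care",
    "aggressive_treatment",
    "family_decides",
    "aggressive_treatment",
    "comfort_care",
]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(posts, labels)

# Infer a preference for an unseen post and show the class probabilities.
new_post = ["I just want to be comfortable and at home"]
print(model.predict(new_post)[0])
print(dict(zip(model.classes_, model.predict_proba(new_post)[0].round(2))))
```

Even this toy version makes the core assumption visible: a handful of casual sentences is treated as a reliable proxy for a person's settled values.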
While this concept is technologically fascinating, it raises several pressing ethical concerns. First, it assumes that our social media presence accurately reflects our core values and long-term preferences. However, people's views are dynamic and influenced by their emotional state, life experiences, and personal growth. A sarcastic tweet or a momentary opinion shared online may not represent someone's actual end-of-life wishes.
Second, the use of AI risks introducing or amplifying bias — especially against the elderly and individuals from ethnic or religious minorities. AI systems often generalise from large datasets, which can lead to 'one-size-fits-all' assumptions that disregard cultural, spiritual, or personal nuances.
Another critical question is: can AI truly understand or navigate the emotional and moral complexity of disagreements among family members and healthcare providers? Would it possess the empathy required to mediate a delicate conversation, or would it deliver cold logic such as: 'Grandpa is too old, his survival chances are low, so resources would be better allocated elsewhere'?
Furthermore, relying on AI for such deeply human decisions risks the deskilling of health professionals. Ethical decision-making is an essential skill developed through experience, reflection, and dialogue. If AI takes over these roles, clinicians may gradually lose the ability — or the confidence — to engage in these vital discussions.
The speaker, who advocated for the use of P4, admitted he did not fully understand how the AI makes its decisions. This lack of transparency is alarming. If we are to entrust a machine with life-or-death recommendations, we must first demand clarity and accountability in its design and operation.
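To illustrate what even a basic transparency check might look like, here is one inspection a developer could run on the toy model sketched earlier: listing which words push the classifier toward each hypothetical label. This is only a sketch under the same invented assumptions as before; genuine clarity and accountability for a clinical tool would demand far more than reading linear weights.

```python
# Toy transparency check: which words most strongly drive each hypothetical label?
# Rebuilds the same illustrative model as in the earlier sketch.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I never want to be kept alive on machines",
    "Quality of life matters more to me than living longer",
    "Doctors should do everything possible, always fight on",
    "I trust my family to decide what is best for me",
    "Please try every treatment available, never give up",
    "If I cannot recognise my loved ones, let me go peacefully",
]
labels = ["comfort_care", "comfort_care", "aggressive_treatment",
          "family_decides", "aggressive_treatment", "comfort_care"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(posts, labels)

# Map each class to the terms with the largest positive weights.
terms = np.array(model.named_steps["tfidfvectorizer"].get_feature_names_out())
clf = model.named_steps["logisticregression"]
for label, weights in zip(clf.classes_, clf.coef_):
    top = terms[np.argsort(weights)[-3:]][::-1]
    print(f"{label}: strongest cues -> {', '.join(top)}")
```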
In my opinion, while AI has a growing role in healthcare, ethical decision-making remains a human responsibility. These discussions are often fraught with disagreement, cultural sensitivity, and intense emotion, particularly when they involve questions of life and death. We are not yet ready to hand this task over to machines.

Related Articles

Foxconn's June sales hit record high, driven by AI and cloud boom

Times of Oman | 2 days ago

Taipei: Sales at Taiwan-based manufacturing giant Hon Hai Precision Industry Co., known globally as Foxconn, rose 10 per cent in June from a year earlier, as its cloud and networking products showed strong growth amid the artificial intelligence boom.

The company posted consolidated sales of NTD 540.24 billion (USD 18.67 billion) in June, its highest ever for the month, up 10.09 per cent year-on-year but down 12.26 per cent from a month earlier, reports Focus Taiwan. In the second quarter, Hon Hai's consolidated sales hit a new high of about NTD 1.80 trillion for the April-June period, up 15.82 per cent from a year earlier and up 9.45 per cent from the first quarter.

According to the iPhone assembler, its cloud and networking division benefited from solid global demand for AI applications and cloud services, generating higher year-over-year sales in June. Its smart consumer electronics operations also posted a significant year-on-year sales increase, boosted by international brands launching new entertainment devices. However, sales in the company's electronic components operations were little changed from June 2024 levels, and the computing division recorded a decline for the month.

In the first six months of 2025, Hon Hai posted consolidated sales of NTD 3.44 trillion, up 19.68 per cent from a year earlier, according to company figures. Looking ahead, Hon Hai said sales growth momentum is expected to accelerate in the third quarter, a traditional peak season in the information and communication technology industry, with revenue projected to grow both from the previous quarter and from a year earlier. The company will hold an investor conference on Aug. 14 to detail its second-quarter results and provide guidance for the third quarter and 2025 as a whole.

Australia wants to bar children from social media. Can it succeed?

Observer | 3 days ago

SYDNEY — Australia has long been one of the most proactive countries in the world in trying to police the internet. It has clashed with Elon Musk over violent videos and child exploitation on social platform X, forced Google and Facebook to pay for news, and tried to filter out large swaths of online content. Its latest aim may be the most herculean yet.

By December, the country wants to remove more than 1 million young teens from social media, under a groundbreaking law that sets a minimum age of 16 to use the platforms. But with fewer than six months before the new regulation goes into effect, much about its implementation remains unclear or undecided. YouTube, which young teens in Australia report using more than any other service, may or may not be covered by the law. Authorities have yet to lay out the parameters of what social media companies need to do to comply, and what would constitute a violation, which could lead to fines of $30 million or more. The government has studied how to verify users' ages but has not released the full results of an extensive trial.

'We may be building the plane a little bit as we're flying it,' Julie Inman Grant, the commissioner of online safety who is tasked with enforcing the law, said in a nationally televised address last month. 'I'm very confident we can get there.'

The law could have far-reaching influence if Australia can succeed in getting substantial numbers of teens off social media. Several governments around the world and in various U.S. states are in the process of or planning to impose their own rules on social media for young people, as alarm over the platforms' mental health effects and addictive nature has reached a fever pitch. Passed late last year, the Australian law was billed as one of the first nationwide endeavors aimed at getting children off social media. In May, New Zealand introduced legislation closely modeled on the Australian one, which puts the onus of verifying users' ages on social media platforms. In June, President Emmanuel Macron of France said he wanted to bar children under 15 from social media within months.

The questions that remain unsettled in Australia should be a sign to other countries of the thorny path ahead — starting with how to define social media. In Australia, authorities had initially planned to exempt YouTube from the law. But the online safety agency last month advised that it should not be excluded, noting that it was the most popular platform — used by three-quarters of 10- to 15-year-olds — and had features that could lead to excessive use, like infinite scroll and short-form videos. YouTube has strongly objected to the recommendation, saying that it was a video streaming platform rather than a social media service and that more than 4 out of 5 teachers use its videos in the classroom.

In an interview, Inman Grant said her office began consultations last week with tech companies to set expectations on what 'reasonable steps' they need to take to comply with the law. The companies will have to demonstrate, to her satisfaction, that they are doing enough to identify underage users and remove their accounts. They will also have to provide ways that parents or teachers can flag accounts belonging to people under 16; show that they are countering attempts at circumvention, such as through a VPN; and prove that they are tracking the efficacy of their methods, she said.
Even if not every underage account is immediately purged from all platforms, she said, the law being in place will lead to change in the right direction. 'This is one of the biggest questions of our time, the intersection of social media and children's mental health,' Inman Grant said.

Some tech company officials who are working on carrying out the law said that more than halfway through the year, they were still waiting on the government to define the 'yardstick' by which they would be evaluated. Meta, for its part, has already invested in developing technologies to understand users' age and created separate teen accounts with safeguards, Joanna Stevens, a spokesperson for the parent company of Instagram and Facebook, said in a statement.

Critics of the law have pointed out that it has numerous blind spots. For instance, it does not address the content that children are able to access without being logged into an account; it only specifies that underage users should be prevented from having accounts. Axel Bruns, a professor of communications and media at the Queensland University of Technology, said that in requiring tech companies to find a way to keep children out through unspecified means, rather than requiring them to better moderate harmful content, the government was choosing to 'go down the sledgehammer way.' 'It's a bit like saying we want to have this magical technology — if you don't come up with it, then it's your fault,' he said. 'It's law as wishful thinking, essentially.'

This article originally appeared in

Microsoft to cut 9,000 jobs in fresh round of layoffs: Report

Times of Oman | 6 days ago

Washington DC: American tech giant Microsoft will lay off nearly 9,000 employees, about 4 per cent of its workforce, in what is its third round of job cuts in recent months, the company confirmed on Wednesday, according to a CNN report. The report said this is Microsoft's largest round of layoffs since 2023, when it cut 10,000 jobs. The move comes amid a broader wave of job cuts in the global tech industry.

"We continue to implement organisational changes necessary to best position the company and teams for success in a dynamic marketplace," a Microsoft spokesperson said in a statement quoted by CNN. The spokesperson also said the company aims to streamline its management structure and improve productivity by leveraging new technologies.

Many technology firms, including Microsoft, are turning to artificial intelligence (AI) to boost employee efficiency. Earlier this year, Microsoft CEO Satya Nadella said that between 20 and 30 per cent of the company's code is now written by AI, as Microsoft continues to invest heavily in AI infrastructure.

Meanwhile, several reports have projected a sharp rise in global AI spending. According to a UBS report, global AI investment is expected to grow by 60 per cent year-on-year in 2025 to reach USD 360 billion. This upward trend is likely to continue into 2026, with another 33 per cent increase projected, pushing the figure to USD 480 billion. However, UBS anticipates that the share of AI spending by the so-called Big Four tech giants (Microsoft, Amazon, Alphabet, and Meta) will fall from 58 per cent in 2025 to 52 per cent in 2026. Spending outside these major firms is projected to reach USD 150 billion in 2025, with China accounting for an estimated 35 per cent of that amount.
