
AI doctor four times better at identifying illnesses than humans
Microsoft has developed an artificial intelligence (AI) system that it claims is four times better than doctors at diagnosing complex illnesses.
The tech company's AI diagnosis system correctly identified ailments up to 86pc of the time, compared with just 20pc on average for British and American physicians.
Announcing the findings, Microsoft claimed it had laid the groundwork for 'medical superintelligence'.
It comes as Wes Streeting, the Health Secretary, is seeking to bring AI into widespread use in the NHS to improve efficiency. In April the NHS waiting list rose for the first time in seven months, reaching 7.42m, in a blow to one of the Government's key pledges to cut waiting times.
Microsoft claimed its system could solve problems more cheaply than doctors – beating physicians even when sticking to a budget for diagnostic tests.
The system, known as the Microsoft AI Diagnostic Orchestrator, or MAI-DxO, was tested on 304 cases from the New England Journal of Medicine, a journal known for publishing complex cases from Massachusetts General Hospital.
The system comprised a virtual panel of five AI bots, each playing a different role, such as 'Dr Hypothesiser', 'Dr Test-Chooser' and 'Dr Challenger'. The panel would deliberate internally before asking further questions, ordering tests and providing a diagnosis.
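Microsoft has not released MAI-DxO's code, but the panel it describes resembles a standard multi-agent orchestration loop. The sketch below is purely illustrative: the stubbed roles, test prices and use of the $2,000 budget mentioned in this article are all assumptions, not Microsoft's implementation.

```python
# Illustrative sketch only: Microsoft has not published MAI-DxO's code.
# Role names, test costs and the deliberation loop below are assumptions
# based on the article's description of a role-playing panel that debates,
# orders tests within a budget, and then commits to a diagnosis.

from dataclasses import dataclass, field


@dataclass
class Case:
    summary: str                        # initial patient presentation
    test_results: dict = field(default_factory=dict)


@dataclass
class PanelState:
    hypotheses: list
    spent: int = 0                      # dollars spent on tests so far


TEST_COSTS = {"blood panel": 100, "biopsy": 800, "MRI": 1200}  # assumed figures
BUDGET = 2_000                          # the $2,000 cap cited in the article


def dr_hypothesiser(case: Case, state: PanelState) -> None:
    """Propose candidate diagnoses from the evidence gathered so far."""
    # A real system would prompt a language model here; we stub it out.
    if "biopsy" in case.test_results:
        state.hypotheses = ["embryonal rhabdomyosarcoma"]
    else:
        state.hypotheses = ["soft-tissue sarcoma", "benign mass"]


def dr_test_chooser(case: Case, state: PanelState) -> str | None:
    """Pick the cheapest untried test that still fits within the budget."""
    for test, cost in sorted(TEST_COSTS.items(), key=lambda kv: kv[1]):
        if test not in case.test_results and state.spent + cost <= BUDGET:
            return test
    return None                         # nothing affordable left to order


def dr_challenger(state: PanelState) -> bool:
    """Return True if the panel should keep deliberating."""
    return len(state.hypotheses) > 1    # not yet confident in a single answer


def run_panel(case: Case) -> str:
    state = PanelState(hypotheses=[])
    dr_hypothesiser(case, state)
    while dr_challenger(state):
        test = dr_test_chooser(case, state)
        if test is None:
            break                       # budget exhausted; commit to best guess
        state.spent += TEST_COSTS[test]
        case.test_results[test] = f"result of {test}"   # stubbed lab result
        dr_hypothesiser(case, state)
    return state.hypotheses[0]


if __name__ == "__main__":
    patient = Case(summary="29-year-old woman with a soft-tissue mass")
    print(run_panel(patient))           # -> embryonal rhabdomyosarcoma
```

In a real system each role would be a prompted language model rather than a hand-written rule, and the Challenger would probe for contradictory evidence rather than simply counting hypotheses.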
In one case, the system diagnosed embryonal rhabdomyosarcoma, a rare form of cancer that normally occurs in children, in a 29-year-old woman.
The system diagnosed 85.5pc of conditions when paired with the most advanced model from OpenAI, the developer of ChatGPT, and given no budget limit on ordering tests.
However, even when it had to stick to a $2,000 (£1,458) budget for tests, it was correct more than 70pc of the time.
The 21 human doctors, who had an average of 12 years' experience, spent an average of $2,963 on tests.
The average doctor was correct 19.9pc of the time, although the physicians were generalists rather than specialists and were barred from consulting textbooks or software to look up information.
The AI tool correctly diagnosed conditions more than half the time even when it was unable to order any tests, Microsoft said.
'Complement' doctors
The researchers said the journal that had provided the cases was behind a paywall and that many were published after the AI system was trained. This ensured that the cases could not have been included in the datasets used to build the AI, and the system had to arrive at the diagnoses itself.
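As a simple illustration of that contamination control, an evaluator could keep only cases published after the model's training cutoff; the cutoff date and record fields below are hypothetical.

```python
from datetime import date

TRAINING_CUTOFF = date(2023, 10, 1)     # hypothetical model training cutoff

cases = [
    {"id": "NEJM-1", "published": date(2023, 5, 2)},
    {"id": "NEJM-2", "published": date(2024, 3, 14)},
]

# Keep only cases the model cannot have seen during training.
held_out = [c for c in cases if c["published"] > TRAINING_CUTOFF]
print([c["id"] for c in held_out])      # -> ['NEJM-2']
```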
Microsoft's AI health division is led by Mustafa Suleyman, the British entrepreneur who co-founded DeepMind before the lab was acquired by Google. He moved to Microsoft last year.
The company said it received 50m health queries a day on its Bing search engine and Copilot chatbot, adding: 'AI companions are quickly becoming the new front line in healthcare.'
However, it said AI would complement rather than replace medical professionals.
'[Doctors] need to navigate ambiguity and build trust with patients and their families in a way that AI isn't set up to do,' the company said.
Last week, Mr Streeting unveiled an NHS app that he said would include a chatbot that serves as a 'doctor in your pocket'.
