
Microsoft claims its new medical AI tool is 4x more accurate than doctors
Called the 'Microsoft AI Diagnostic Orchestrator', or MAI-DxO for short, the AI-powered tool was developed by the company's AI health unit, which was founded last year by Mustafa Suleyman. In a blog post, the tech giant said that when benchmarked against real-world case records, the new medical AI tool 'correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians', while also being more cost-effective.
What makes this impressive is that the cases come from the New England Journal of Medicine and are highly complex, typically requiring multiple specialists and a battery of tests before doctors can reach a conclusion.
In a statement to the Financial Times, Mustafa Suleyman, the chief executive of Microsoft AI, said the new model was a big step towards 'medical superintelligence' and could help doctors by easing their workload. The Microsoft AI Diagnostic Orchestrator works by creating a virtual panel of five AI agents, each acting as a doctor with a distinct role, such as choosing diagnostic tests or proposing hypotheses.
The tech giant said the system was evaluated on 304 case studies describing some of the most complex cases solved by doctors, and that it uses a new technique called 'chain of debate', which gives a step-by-step account of how the AI works through real-world problems.
For this, the company used large language models from OpenAI, Meta, Anthropic, Google, xAI and DeepSeek. Microsoft said the tool correctly diagnosed 85.5 per cent of cases, far more than the experienced human doctors it was compared against, who correctly diagnosed only about 20 per cent. One caveat: the physicians weren't allowed to consult textbooks or colleagues, which could have improved their success rate.
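Microsoft has not published MAI-DxO's internals, but the 'virtual panel' described above maps onto a familiar multi-agent orchestration pattern: several role-specialised agents take turns adding hypotheses, test choices and critiques to a shared transcript until a deciding agent commits to a diagnosis. The Python sketch below is a hypothetical illustration of that pattern only; the role names, prompts, stopping rule and the `query_llm` stub are assumptions, not Microsoft's implementation.

```python
# A minimal, hypothetical sketch of the "virtual panel" orchestration pattern described
# above. Microsoft has not published MAI-DxO's implementation; the role names, prompts,
# stopping rule and the query_llm stub below are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Finding:
    """One piece of evidence produced during the panel's work-up of a case."""
    source: str   # which panel role produced it, e.g. "test_chooser"
    content: str  # the hypothesis, requested test, critique, or decision


# Five distinct roles, loosely mirroring the article's description of agents that
# propose hypotheses, choose tests, challenge conclusions and watch costs.
PANEL_ROLES = {
    "hypothesis_generator": "Propose the most likely diagnoses given the findings so far.",
    "test_chooser": "Select the single most informative next test, considering cost.",
    "challenger": "Argue against the current leading diagnosis; look for contradictions.",
    "cost_auditor": "Flag tests whose expected information gain does not justify their cost.",
    "decision_maker": "Give a final diagnosis if the evidence suffices; otherwise reply CONTINUE.",
}


def query_llm(role: str, instruction: str, case_summary: str, findings: list[Finding]) -> str:
    """Stub standing in for a call to any backing LLM (the article notes models from
    several providers were used). Returns canned text so the sketch runs end to end;
    replace with a real client in practice."""
    if role == "decision_maker":
        # Pretend the panel needs two full rounds of debate before committing.
        if len(findings) < 2 * len(PANEL_ROLES):
            return "CONTINUE"
        return "Final diagnosis: <placeholder>"
    return f"[{role}] response to '{case_summary}' given {len(findings)} prior findings"


def run_panel(case_summary: str, max_rounds: int = 10) -> str:
    """Run the debate loop: each round, every role speaks in turn and its output is
    appended to a shared transcript; the decision-maker ends the loop when confident."""
    findings: list[Finding] = []
    for _ in range(max_rounds):
        for role, instruction in PANEL_ROLES.items():
            reply = query_llm(role, instruction, case_summary, findings)
            findings.append(Finding(source=role, content=reply))
            if role == "decision_maker" and reply != "CONTINUE":
                return reply
    return "No confident diagnosis reached within the round limit"


if __name__ == "__main__":
    print(run_panel("65-year-old with fever, weight loss and a mediastinal mass"))
```

Running the file prints a placeholder diagnosis after a few simulated rounds; swapping the stub for real calls to the providers listed above is where the actual engineering effort would lie.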
Microsoft's experimental tool shows promising results, but before generative AI can be used safely to diagnose patients, more data and regulatory frameworks will be needed. To that end, the tech giant said it is partnering with health organisations to test and validate its approach before making the tool available to healthcare specialists.
Related Articles


Time of India
32 minutes ago
Degrees of intelligence: Where Meta's top AGI scientists studied and why it matters
As the race for Artificial General Intelligence (AGI) intensifies, Mark Zuckerberg's Meta has launched a bold, founder-led moonshot: the Meta Superintelligence Lab (MSL). This elite group of technologists, poached from DeepMind, OpenAI, Anthropic and Google, represents some of the sharpest minds in AI today. The MSL is powered not just by capital and ambition, but by a team of formidable researchers whose educational journeys span continents and disciplines, and whose academic backgrounds reveal much about the global talent ecosystem shaping the future of AGI.

Meta's recent hiring spree, snagging eleven standout researchers from the likes of DeepMind, OpenAI, Google and Anthropic, reinforces that high-impact AI today emerges from a fusion of rigorous theoretical grounding and interdisciplinary exposure. These are individuals whose universities and research environments have shaped breakthrough thinking in large language models (LLMs), multimodal reasoning and AI safety.

Two themes emerge when examining where these scientists studied. First, most began their journey at world-class institutions in China or India before moving westward for graduate research or tech leadership. Second, the depth and range of their studies, spanning reasoning, speech integration, image generation and AI alignment, reveal the interdisciplinary demands of building truly intelligent systems.

Jack Rae: Grounding in CMU and UCL
Jack Rae, recruited from DeepMind for his work on LLMs and long-term memory, studied at Carnegie Mellon University and University College London, both renowned for AI and machine learning research. His academic training in computing and cognitive science equipped him to develop memory-augmented neural architectures that embedded practical reasoning into LLMs, capabilities later adopted in models like Gemini.

Pei Sun: Structured reasoning from Tsinghua to CMU
Pei Sun, also from DeepMind, earned his undergraduate degree from Tsinghua University before moving to Carnegie Mellon University for graduate study. His rigorous grounding in maths and structured reasoning contributed directly to Google's Gemini project focused on logical reasoning and problem solving.

Trapit Bansal: IIT Kanpur to UMass Amherst
Trapit Bansal, formerly at OpenAI, studied at IIT Kanpur before completing graduate studies at UMass Amherst. He specialised in 'chain-of-thought' prompting and alignment, helping GPT-4 generate multi-step reasoning, an innovation that significantly advanced LLM coherence and reliability. He completed a BSc–MSc dual degree in Mathematics and Statistics at the Indian Institute of Technology (IIT) Kanpur, graduating in 2012. After graduating, he worked briefly at Accenture Management Consulting (2012) and later as a Research Assistant at the Indian Institute of Science (IISc), Bengaluru (2013–2015). Bansal then moved to the University of Massachusetts Amherst, where he earned his MS in Computer Science in 2019 and PhD in 2021. During his doctoral years, he interned at Facebook (2016), OpenAI (2017), Google Research (2018) and Microsoft Research Montréal (2020), building a rare interdisciplinary perspective across industry labs.
He joined OpenAI in January 2022 as a Member of Technical Staff, contributing to GPT-4 and leading development on the internal 'o1' reasoning model. In June 2025, he moved to the Meta Superintelligence Lab, making him one of the most high-profile hires in recent memory.

Shengjia Zhao: Bridging Tsinghua and Stanford
Shengjia Zhao, co-creator of both ChatGPT and GPT-4, also began at Tsinghua University and later joined Stanford for his PhD. His dual focus on model performance and safety helped lay the groundwork for GPT-4 as a reliable, multimodal AI.

Ji Lin: Optimisation from Tsinghua to MIT
Ji Lin, an optimisation specialist who contributed to GPT-4 scaling, studied at Tsinghua University before moving on to MIT. His expertise in model compression and efficiency is vital for making giant AI models manageable and deployable.

Shuchao Bi: Speech-text expert at Zhejiang and UC Berkeley
Shuchao Bi earned his undergraduate degree at Zhejiang University in China before pursuing graduate education at UC Berkeley. His work on speech-to-text integration informs vital voice capabilities in GPT-4 and other multimodal systems.

Jiahui Yu: Gemini vision from USTC to UIUC
Jiahui Yu, whose expertise bridges OpenAI and Google through Gemini vision and GPT-4 multimodal design, studied at the University of Science and Technology of China (USTC) before heading to UIUC, which is renowned for computer vision and graphics research.

Hongyu Ren: Safety education from Peking to Stanford
Hongyu Ren, an authority on robustness and safety in LLMs, earned his undergraduate degree at Peking University and completed graduate studies at Stanford, blending theoretical rigour with practical insight into model alignment.

Huiwen Chang: Image generation from Tsinghua and Princeton
Huiwen Chang, who worked on the Muse and MaskGIT systems while at Google, received a BEng from Tsinghua University and pursued graduate work at Princeton, focusing on next-generation image generation.

Johan Schalkwyk: Voice AI from the University of Pretoria
Voice-AI veteran Johan Schalkwyk led Google Voice Search. He studied at the University of Pretoria in South Africa, developing foundational technologies in speech recognition long before transitioning to Sesame AI and eventually MSL.

Joel Pobar: Infrastructure from QUT
Joel Pobar, formerly with Anthropic and now part of Meta's core team, studied at Queensland University of Technology (QUT) in Australia. His expertise in large-scale AI infrastructure and PyTorch optimisation rounds out the team's ability to build at scale.

Why it matters
This constellation of academic backgrounds reveals key patterns. First, many team members started at elite institutions in China and India, such as Tsinghua, Peking, USTC and IIT Kanpur, before completing advanced study in North America or Europe. Such academic migration fosters the cross-pollination of ideas and technologies vital to AGI progress. Second, the diversity in specialisations, from chain-of-thought reasoning and speech-text fusion to alignment and infrastructure optimisation, reflects a holistic approach to AGI development. No single breakthrough will suffice; each educational trajectory contributes a crucial piece of the intelligence puzzle. Lastly, these researchers underscore the importance of rigorous mathematical and computational foundations. Their trajectories, marked by early excellence in computing and top PhD programmes, show that AGI talent is born of sustained academic commitment, not an overnight spark.
For today's students, this means investing in strong undergraduate programmes, targeting interdisciplinary research opportunities, and seeking environments that encourage open, foundational exploration. In aggregate, the educational pedigree of Meta's superintelligence team isn't mere résumé-padding. It is the backbone of a strategy to crack AGI, a challenge that demands not just technical acumen but a global, theory-driven, collaborative mindset.


India.com
an hour ago
Meet Trapit Bansal, hired for a record-breaking salary by Meta, not from IIT Mumbai, IIM, NIT or VIT; his overall package is Rs...
From interning at Facebook, Google, Microsoft and OpenAI to getting a Rs 800 crore joining bonus, Trapit Bansal has come a long way. The former OpenAI researcher has officially joined Meta Superintelligence Labs. Bansal confirmed the move on Tuesday in a post on X, writing, 'Thrilled to be joining Meta! Superintelligence is now in sight.' A graduate of IIT Kanpur, Bansal joined OpenAI in 2022 and made significant contributions to its reinforcement learning efforts and early AI reasoning models. TechCrunch described him as 'a highly influential OpenAI researcher.'

Who is Trapit Bansal?
Trapit Bansal is an AI researcher with a background in mathematics, statistics and computer science. His research areas include natural language processing (NLP), deep learning and meta-learning. He earned a Master of Science in Computer Science from the University of Massachusetts Amherst, after a Bachelor of Science degree in Mathematics and Statistics from IIT Kanpur, and went on to complete his PhD in Computer Science at UMass Amherst. During his academic years, he held research internship positions at IISc Bengaluru, Facebook, Google and Microsoft. He also interned at OpenAI for four months in 2017 during his graduate studies. Following his internships, his first full-time role was at OpenAI, where he joined as a Member of Technical Staff in January 2022. At OpenAI, he worked on reinforcement learning (RL) and reasoning-focused frontier research alongside co-founder Ilya Sutskever. According to his LinkedIn profile, Bansal co-created the model referred to as 'o1', though further details are not public.


Hindustan Times
an hour ago
Microsoft 9,000 jobs cut: Blizzard puts Warcraft Rumble game in 'maintenance mode'
Jul 03, 2025 04:21 PM IST

Blizzard is winding down its mobile strategy game, Warcraft Rumble, amid massive Microsoft layoffs affecting the studio. The game will no longer receive new features or content, but it will remain online with regular support and limited in-game events. Blizzard cited the game's inability to meet long-term expectations, despite ongoing improvements, as a key reason behind the change.

Warcraft Rumble was launched in 2023 to bring the Warcraft experience to mobile, with gameplay very similar to popular titles like Clash Royale. It initially generated excitement but struggled to maintain a strong player base. Despite the team's hard work, player feedback, and the exploration of different options, the game showed little to no sign of improvement. Blizzard stated in its official announcement, 'We have made the difficult decision to stop developing new content for Warcraft Rumble and focus on maintaining the game for current players.' With that, the studio confirmed there will be no new content, but emphasized keeping the game accessible and stable with support for bug fixes and in-game events.

The move coincides with massive layoffs at Microsoft, the largest in over two years, with approximately 9,000 employees cut globally, heavily affecting the Xbox gaming division and its subsidiaries, including Blizzard. The layoffs represent about 4% of Microsoft's total workforce and are part of a broader effort to redirect resources toward artificial intelligence and other priorities. Xbox chief Phil Spencer explained that the company is 'ending or decreasing work in certain areas of the business' to focus on strategic growth areas. The cuts led to the cancellation of projects like the Perfect Dark reboot and Everwild, the shutdown of The Initiative, and reshuffling across Microsoft's gaming teams. For Blizzard, they mean reallocating resources from less successful ventures like Warcraft Rumble to core franchises.