Latest news with #diagnoses


CTV News
a day ago
- Health
- CTV News
Cancer diagnosis rate up in Windsor
The rate of cancer diagnoses is up in Windsor, according to the vice president of cancer services. CTV Windsor's Bob Bellacicco has more.


Forbes
6 days ago
- Health
- Forbes
Orchestrating Mental Health Advice Via Multiple AI-Based Personas Diagnosing Human Psychological Disorders
Orchestrating multiple AI personas in the medical domain and in mental health therapy by AI is a promising approach.

In today's column, I examine a newly identified, innovative approach to using generative AI and large language models (LLMs) for medical-related diagnoses, and I then describe a simple mini-experiment I performed to explore its efficacy in a mental health therapeutic analysis context. The upshot is that the approach involves using multiple AI personas in a systematic and orchestrated fashion. This is a method worthy of additional research and possibly of adapting into day-to-day mental health therapy practice. Let's talk about it. This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health Therapy

As a quick background, I've been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I've made on the subject. There is little doubt that this is a rapidly developing field with tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes; see the link here. If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.

Orchestrating AI Personas

Perhaps one of the least leveraged capabilities of generative AI and LLMs is their ability to computationally simulate a kind of persona. The idea is rather straightforward: you tell the AI to pretend to be a particular type of person or to exhibit an outlined personality, and the AI attempts to respond accordingly. For example, I made use of this feature by having ChatGPT undertake the persona of Sigmund Freud and perform therapy as though the AI were mimicking or simulating what Freud might say (see the link here). You can tell LLMs to pretend to be a specific person. The key is that the AI must have sufficient data about the person to pull off the mimicry. Also, your expectations about how good a job the AI will do in such a pretense mode need to be soberly tempered, since the AI might end up far afield. An important aspect is not to assume or believe that the AI will be precisely like the person. It won't be. Another angle is to broadly describe the nature of the persona that you want the AI to pretend to be. I have previously done a mini-experiment of having ChatGPT pretend to be a team of mental health therapists that confer when seeking to undertake a psychological assessment (see the link here). None of the personas represented a specific person. Instead, the AI was told to make use of several personas that generally represented a group of therapists. There are a lot more uses of AI personas. I'll list a few.
- A mental health professional who wants to improve their skills can carry on a dialogue with an LLM that is pretending to be a patient, a handy means of enhancing the therapist's psychological analysis acumen (see the link here).
- When doing mental health research, you can tell the AI to pretend to be hundreds or thousands of respondents to a survey. This isn't necessarily equal to using real people, but it can be a fruitful way to gauge what kind of responses you might get and how to prepare accordingly (see the link here and the link here).

And so on.

Latest Research Uses AI Personas

A recently posted research study innovatively used AI personas in the realm of performing medical diagnoses. The study is entitled 'Sequential Diagnosis with Language Models' by Harsha Nori, Mayank Daswani, Christopher Kelly, Scott Lundberg, Marco Tulio Ribeiro, Marc Wilson, Xiaoxuan Liu, Viknesh Sounderajah, Jonathan Carlson, Matthew P. Lungren, Bay Gross, Peter Hames, Mustafa Suleyman, Dominic King, and Eric Horvitz, arXiv, June 30, 2025. There are some interesting twists in how the study makes use of AI personas. The crux is that one AI persona served as a diagnostician, another fed a case history to the AI-based diagnostician, and a third acted as an assessor of how well the clinical diagnosis was proceeding. That's three AI personas set up to aid in performing a medical diagnosis on the various case studies presented to the AI. The researchers went further with this promising approach by having a panel of AI personas perform the medical diagnoses. They decided on five AI personas that would each, in turn, confer while stepwise undertaking a diagnosis. The names given to the AI personas generally suggested what each one was intended to do: Dr. Hypothesis, Dr. Test-Chooser, Dr. Challenger, Dr. Stewardship, and Dr. Checklist. Without anthropomorphizing the approach, using a panel of AI personas is analogous to having a panel of medical doctors confer about a medical diagnosis. The AI personas each have a designated specialty, and they walk through the case history of the patient so that each specialty takes its turn during the diagnosis.

Orchestration In AI Mental Health Analysis

I thought it might be interesting to try a similar form of orchestration in a mental health analysis context. I welcome researchers trying this same method in a more robust setting so that we can get a firmer grasp on the ins and outs of employing such an approach. My effort was just a mini-experiment to get the ball rolling. I used a mental health case history that is a vignette publicly posted by the American Board of Psychiatry and Neurology (ABPN) and entails a fictionalized patient who is undergoing a psychiatric evaluation. It is a handy instance since it has been carefully composed and analyzed, and it serves as a formalized test question for budding psychiatrists and psychologists. The downside is that, being widely known and on the Internet, there is a chance that any generative AI used to analyze this case history might already have scanned the case and its posted solutions. Researchers who want to do something similar to this mini-experiment will likely need to come up with entirely new and unseen case histories.
That would prevent the AI from 'cheating' by already having potentially encountered the case.

Overview Of The Vignette

The vignette has to do with a man in his forties who had previously been under psychiatric care and has recently been exhibiting questionable behavior. As stated in the vignette: 'For the past several months, he has been buying expensive artwork, his attendance at work has become increasingly erratic, and he is sleeping only one to two hours each night. Nineteen years ago, he was hospitalized for a serious manic episode involving the police.' (source: ABPN online posting). I made use of a popular LLM and told it to invoke five personas, somewhat on par with the orchestration approach noted above. After entering a prompt defining those five personas, I had the LLM proceed to perform a mental health analysis of the vignette.

Orchestration Did Well

Included in my instructions to the LLM was that I wanted to see the AI perform a series of diagnostic turns. At each turn, the panel was to summarize where it was in its analysis and tell me what it had done so far. This is a means of having the AI generate a kind of explanation or indication of what the computational reasoning process entails. As an aside, be careful in relying on such computationally concocted explanations, since they may have little to do with what the internal tokenization mechanics of the LLM were actually doing; see my discussion of noteworthy cautions at the link here. I provided the LLM persona panel with the questions that are associated with the vignette. I then compared the answers from the AI panel with those posted online that are considered the right or most appropriate answers. The panel's initial response, characterizing the overall circumstances of the patient at the first turn, ended up matching the posted solution. In that sense, the AI personas panel did well. Whether this was due to true performance versus having previously scanned the case history is unclear. When I asked directly, the LLM denied that it had already encountered the case. Don't believe an LLM that tells you it hasn't scanned something. The LLM might be unable to ascertain that it scanned the content. Furthermore, in some instances, the AI might essentially lie and tell you that it hasn't seen a piece of content, a kind of cover-up, if you will.

Leaning Into AI Personas

AI personas are an incredibly advantageous capability of modern-era generative AI and LLMs, and using them in an orchestrated fashion is a wise move. You can get the AI personas to work as a team, which can readily boost the results. One issue you ought to be cognizant of is that if a single LLM is undertaking all the personas, you might not be getting exactly what you thought you were getting. An alternative approach is to use separate LLMs to represent the personas. For example, I could connect five different LLMs and have each simulate one of the personas that I used in my mini-experiment. The idea is that by using separate LLMs, you avoid the chance of the single LLM lazily double-dealing by not really trying to invoke the personas. An LLM can be sneaky that way. A final thought for now. Mark Twain famously provided this telling remark: 'Synergy is the bonus that is achieved when things work together harmoniously.'
The use of orchestration with AI personas can achieve a level of synergy that otherwise would not be exhibited in these types of analyses. That being said, you can also have too many cooks in the kitchen. Make sure to utilize AI persona orchestration suitably, and you'll hopefully get sweet sounds and delightfully impressive results.
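To make the orchestration pattern concrete, here is a minimal sketch of the single-LLM persona-panel loop the column describes. The ask_llm stub is a hypothetical placeholder (not a real library call), and the one-line role descriptions are paraphrases inferred from the persona names the column reports, not the study's actual prompts.

```python
# Minimal sketch of a single-LLM persona panel taking turns on a case.
# `ask_llm` is a hypothetical stub; the role text is paraphrased from the
# persona names reported above, not taken from the study itself.

PANEL = {
    "Dr. Hypothesis": "Maintain a ranked differential diagnosis.",
    "Dr. Test-Chooser": "Propose the most informative next test or question.",
    "Dr. Challenger": "Argue against the current leading diagnosis.",
    "Dr. Stewardship": "Weigh the cost and burden of proposed tests.",
    "Dr. Checklist": "Verify that no step or red flag has been skipped.",
}

def ask_llm(messages: list[dict]) -> str:
    """Placeholder for any chat-completion call; swap in a real client."""
    raise NotImplementedError

def run_panel(case_history: str, turns: int = 3) -> list[str]:
    """Have each persona contribute in turn, keeping a shared transcript."""
    transcript: list[str] = []
    for turn in range(1, turns + 1):
        for name, duty in PANEL.items():
            messages = [
                {"role": "system",
                 "content": f"You are {name}. {duty} Stay in that persona."},
                {"role": "user",
                 "content": (f"Case history:\n{case_history}\n\n"
                             "Panel discussion so far:\n"
                             + "\n".join(transcript)
                             + f"\n\nTurn {turn}: add your contribution, "
                               "then summarize where the panel stands.")},
            ]
            transcript.append(f"[{name}, turn {turn}] {ask_llm(messages)}")
    return transcript
```

Routing each persona's messages to a different backend inside ask_llm would implement the separate-LLMs variant suggested above, reducing the chance that a single model quietly blurs the personas together.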


Daily Mail
12-07-2025
- Health
- Daily Mail
Groundbreaking discovery that'll see autism diagnoses skyrocket... with one group of Americans hit the hardest
Groundbreaking new autism research suggests that already-rising diagnoses could jump more significantly in the coming years if a new framework for understanding the condition comes into play. The latest research out of Princeton University and the Simons Foundation uncovered four unique subtypes of autism, each with its own genetic 'fingerprint' - finally explaining why some children show signs early while others aren't diagnosed until school age.


WIRED
10-07-2025
- Health
- WIRED
Dr. ChatGPT Will See You Now
Patients and doctors are turning to AI for diagnoses and treatment recommendations, often with stellar results, but problems arise when experts and algorithms disagree. A poster on Reddit lived with a painful clicking jaw, the result of a boxing injury, for five years. They saw specialists, got MRIs, but no one could give them a solution to fix it, until they described the problem to ChatGPT. The AI chatbot suggested a specific jaw-alignment issue might be the problem and offered a technique involving tongue placement as a treatment. The individual tried it, and the clicking stopped. 'After five years of just living with it,' they wrote on Reddit in April, 'this AI gave me a fix in a minute.' The story went viral, with LinkedIn cofounder Reid Hoffman sharing it on X. And it's not a one-off: Similar stories are flooding social media—of patients purportedly getting accurate assessments from LLMs of their MRI scans or x-rays. Courtney Hofmann's son has a rare neurological condition. After 17 doctor visits over three years and still not receiving a diagnosis, she gave all of his medical documents, scans, and notes to ChatGPT. It provided her with an answer—tethered cord syndrome, where the spinal cord can't move freely because it's attached to tissue around the spine—that she says physicians treating her son had missed. 'He had surgery six weeks from when I used ChatGPT, and he is a new kid now,' she told a New England Journal of Medicine podcast in November 2024. Consumer-friendly AI tools are changing how people seek medical advice, both on symptoms and diagnoses. The era of 'Dr. Google' is giving way to the age of 'Dr. ChatGPT.' Medical schools, physicians, patient groups, and the chatbots' creators are racing to catch up, trying to determine how accurate these LLMs' medical answers are, how patients and doctors should best use them, and how to address patients who are given false information. 'I'm very confident that this is going to improve health care for patients,' says Adam Rodman, a Harvard Medical School instructor and practicing physician. 'You can imagine lots of ways people could talk to LLMs that might be connected to their own medical records.' Rodman has already seen patients turn to AI chatbots during his own hospital rounds. On a recent shift, he was juggling care for more than a dozen patients when one woman, frustrated by a long wait time, took a screenshot of her medical records and plugged it into an AI chatbot. 'She's like, 'I already asked ChatGPT,'' Rodman says, and it gave her the right answer regarding her condition, a blood disorder. Rodman wasn't put off by the exchange. As an early adopter of the technology and the chair of the group that guides the use of generative AI in the curriculum at Harvard Medical School, he thinks there's potential for AI to give physicians and patients better information and improve their interactions. 'I treat this as another chance to engage with the patient about what they are worried about,' he says. The key word here is potential. Several studies have shown that AI is capable in certain circumstances of providing accurate medical advice and diagnoses, but it's when these tools get put in people's hands—whether they're doctors or patients—that accuracy often falls.
Users can make mistakes—like not providing all of their symptoms to AI, or discarding the right info when it is fed back to them. In one example, researchers gave physicians a set of patient cases and asked them to estimate the chances of the patients having different diseases—first based on the patients' symptoms and history, and then again after seeing lab results. One group had access to AI assistance while another did not. Both groups performed similarly on a measure of their diagnostic reasoning, which looks at not just the accuracy of the diagnosis but also at how they explained their reasoning, considered alternatives, and suggested next steps. The AI-assisted group had a median diagnostic reasoning score of 76 percent, while the group using only standard resources scored 74 percent. But when the AI was tested alone—without any human input—it scored much higher, with a median score of 92 percent. Harvard's Rodman worked on this study and says that when the research was conducted in 2023, AI chatbots were still relatively new, so doctors' lack of familiarity with these tools may have lessened their ability to reach an accurate diagnosis. But beyond that, the broader insight was that physicians still viewed themselves as the primary information filter. 'They loved it when it agreed with them, and they disregarded it when it disagreed with them,' he says. 'They didn't trust it when the machine told them that they were wrong.' Rodman himself tested AI a few years ago on a tough case that he and other specialists had misdiagnosed on first pass. He provided the tool with the information he had on the patient's case, 'and the first thing it spat out was the very rare disease that this patient had,' he says. The AI also offered a more common condition as an alternative diagnosis but deemed it less likely; this was the condition Rodman and the specialists had initially misdiagnosed the patient with. Another preprint study, with over 1,200 participants, showed that AI offered the right diagnosis nearly 95 percent of the time on its own, but accuracy dropped to only a third of the time when people used the same tools to guide their own thinking. For example, one scenario in the study involved a painful headache and stiff neck that had come on suddenly. The correct action is to seek immediate medical attention for a potentially serious condition like meningitis or a brain hemorrhage. Some users were able to use the AI to reach the right answer, but others were told to just take over-the-counter pain medication and lie down in a dark room. The key difference between the AI's responses, the study found, came down to the information provided—the incorrect answer was generated when the user didn't mention the sudden onset of the symptoms. But regardless of whether the information provided is right or wrong, AI presents its answers confidently, as truthful, even when that answer may be completely wrong—and that's a problem, says Alan Forster, a physician as well as a professor in innovation at McGill University's Department of Medicine. Unlike an internet search that returns a list of websites and links to follow up on, AI chatbots write in prose. 'It feels more authoritative when it comes out as a structured text,' Forster says. 'It's very well constructed, and it just somehow feels a bit more real.' And even if it is right, an AI agent can't complement the information it provides with the knowledge physicians gain through experience, says fertility doctor Jaime Knopman.
When patients at her clinic in midtown Manhattan bring her information from AI chatbots, it isn't necessarily incorrect, but what the LLM suggests may not be the best approach for a patient's specific case. For instance, when considering IVF, couples will receive viability grades for their embryos. But asking ChatGPT to provide recommendations on next steps based on those scores alone doesn't take into consideration other important factors, Knopman says. 'It's not just about the grade: There's other things that go into it'—such as when the embryo was biopsied, the state of the patient's uterine lining, and whether they have had success in the past with fertility. In addition to her years of training and medical education, Knopman says she has 'taken care of thousands and thousands of women.' This, she says, gives her real-world insights on what next steps to pursue that an LLM lacks. Other patients will come in certain of how they want an embryo transfer done, based on a response they received from AI, Knopman says. However, while the method they've been suggested may be common, other courses of action may be more appropriate for the specific patient's circumstances, she says. 'There's the science, which we study, and we learn how to do, but then there's the art of why one treatment modality or protocol is better for a patient than another,' she says. Some of the companies behind these AI chatbots have been building tools to address concerns about the medical information dispensed. OpenAI, the maker of ChatGPT, announced on May 12 it was launching HealthBench, a system designed to measure AI's capabilities in responding to health questions. OpenAI says the program was built with the help of more than 260 physicians in 60 countries, and includes 5,000 simulated health conversations between users and AI models, with a scoring guide designed by doctors to evaluate the responses. The company says that it found that with earlier versions of its AI models, doctors could improve upon the responses generated by the chatbot, but claims the latest models, available as of April 2025, such as GPT-4.1, were as good as or better than the human doctors. 'Our findings show that large language models have improved significantly over time and already outperform experts in writing responses to examples tested in our benchmark,' OpenAI says on its website. 'Yet even the most advanced systems still have substantial room for improvement, particularly in seeking necessary context for underspecified queries and worst-case reliability.' Other companies are building health-specific tools designed for medical professionals. Microsoft says it has created a new AI system—called MAI Diagnostic Orchestrator (MAI-DxO)—that in testing diagnosed patients four times as accurately as human doctors. The system works by querying several leading large language models—including OpenAI's GPT, Google's Gemini, Anthropic's Claude, Meta's Llama, and xAI's Grok—in a way that loosely mimics multiple human experts working together. New doctors will need to learn both how to use these AI tools and how to counsel patients who use them, says Bernard S. Chang, dean of medical education at Harvard Medical School. That's why his university was one of the first to offer students classes on how to use the technology in their practices. 'It's one of the most exciting things that's happening right now in medical education,' Chang says.
The situation reminds Chang of when people started turning to the internet for medical information 20 years ago. Patients would come to him and say, 'I hope you're not one of those doctors that uses Google.' But as the search engine became ubiquitous, he wanted to reply to these patients: 'You wouldn't want to go to a doctor who didn't.' He sees the same thing now happening with AI. 'What kind of doctor is practicing at the forefront of medicine and doesn't use this powerful tool?'
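The multi-model "orchestrator" idea described above is straightforward to picture in code. Below is a toy sketch, not Microsoft's actual MAI-DxO (whose internals the article doesn't detail); query_model and the model names are hypothetical stand-ins, and the aggregation here is a simple majority vote over one answer per model.

```python
from collections import Counter

# Toy sketch of querying several independent models and aggregating their
# answers. This is NOT MAI-DxO; `query_model` is a hypothetical stub
# standing in for real API calls to each provider.

MODELS = ["gpt", "gemini", "claude", "llama", "grok"]  # stand-in names

def query_model(model: str, case: str) -> str:
    """Placeholder: send `case` to the named model and return its single
    most likely diagnosis as a short string."""
    raise NotImplementedError

def ensemble_diagnosis(case: str) -> tuple[str, float]:
    """Query every model independently, then return the majority diagnosis
    and the fraction of models that agreed with it."""
    answers = [query_model(m, case).strip().lower() for m in MODELS]
    top, votes = Counter(answers).most_common(1)[0]
    return top, votes / len(MODELS)
```

A real orchestrator would do far more (debate rounds, iterative test ordering, and so on), but even this majority-vote skeleton shows why disagreement between models, like disagreement between experts, is the interesting signal.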


Daily Mail
26-06-2025
- Health
- Daily Mail
America becomes cancer capital of the WORLD with more cases than all but one country
America has surpassed all but one country in new cancer diagnoses, making it a cancer capital of the world. The US saw 2.4 million new cases of cancer in 2022, surpassing all but China, which saw nearly 4.8 million. However, the US saw a higher rate than the Asian country - 1,307 cases per 100,000 people, compared to China's 490 per 100,000. Overall, the US has the fifth-highest cancer rate in the world, and cases are climbing. America made up about 13 percent of the 19 million cases recorded worldwide in 2022, more than the combined share from all of Africa (six percent), Latin America and the Caribbean (seven percent), and Oceania (less than two percent). And global diagnoses are only expected to increase, reaching 35 million a year by 2050. Lung cancer was the most commonly diagnosed among both men and women, responsible for almost 2.5 million new cases, or one in eight cancers worldwide. In the US, an estimated 236,740 new cases of lung cancer were diagnosed, and 130,000 people died. Breast cancer in women made up 12 percent of cases worldwide, colorectal accounted for 10 percent, prostate for seven percent, and stomach for five percent. Cancer is now the leading cause of death in Americans under 85, according to the American Cancer Society's 2024 report. While it remains the second-leading cause of death overall in the US, it has surpassed heart disease as the top killer in younger age groups. The anticipated spike in new cancer diagnoses is mainly due to population growth and aging, but experts are increasingly blaming environmental toxins and ultra-processed foods. And though cancer rates among people under 50 are on the rise, particularly colorectal cancer, the disease still primarily afflicts seniors. 'This rise in projected cancer cases by 2050 is solely due to the aging and growth of the population, assuming current incidence rates remain unchanged,' Dr Hyuna Sung, senior principal scientist at the American Cancer Society, said. 'Notably, the prevalence of major risk factors such as consumption of unhealthy diet, physical inactivity, heavy alcohol consumption, and cigarette smoking are increasing in many parts of the world and will likely exacerbate the future burden of cancer barring any large scale interventions.' Experts warn that the projected rise in cancer cases by 2050 stems from more than just aging populations. Population growth explains part of the increase, but preventable risk factors - poor diet, lack of access to screenings, and chemical exposures - are also driving the disproportionate spikes. While the US diagnoses about one in eight global cancer cases, it accounts for just seven percent of deaths worldwide, thanks to advanced treatments and fast drug approvals. The American Cancer Society reported that almost half of all cases and about 56 percent of cancer deaths in 2022 occurred in Asia, where over 59 percent of the world's population lives. In Africa and Asia, the share of cancer deaths runs well above the share of cases, partly because cancers there are often found late and are harder to treat. Europe has more cancer cases and deaths than expected for its population, making up about 20 percent of global cases and deaths, though it has less than 10 percent of the world's people. Prostate cancer in men is the most frequently diagnosed cancer in 118 countries, followed by lung cancer among both sexes in 33 countries, with liver, colorectal, and stomach cancer ranking first in 11, nine, and eight countries, respectively.
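As a quick sanity check, the share figures above follow directly from the quoted case counts. A short sketch using only the article's rounded numbers (no population data is given, so per-100,000 rates are not recomputed here):

```python
# Cross-checking the share figures quoted above, using only the article's
# rounded case counts.
us_cases = 2_400_000       # US new cancer cases, 2022
world_cases = 19_000_000   # worldwide new cases, 2022
lung_cases = 2_500_000     # lung cancer cases worldwide, 2022

print(f"US share of global cases: {us_cases / world_cases:.1%}")   # ~12.6%, i.e. about 13 percent
print(f"Lung cancer share: {lung_cases / world_cases:.1%}")        # ~13.2%, roughly one in eight
```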
By 2050, lung cancer cases will climb from about 2.5 million in 2022 to roughly four million. Deaths will also climb, from approximately 1.8 million to about three million. Global breast cancer cases are projected to surge from 2.3 million to 3.5 million by 2050, with deaths rising from 666,000 to 1 million. Breast cancer is the most common cancer in women (excluding skin cancer) in 157 countries, while cervical cancer leads in 25 others. The US saw 288,000 new breast cancer cases in 2022, with over 319,000 projected for 2025. Just 10 cancer types cause over 60 percent of all cases and deaths globally. Leading the pack is lung cancer (12.4 percent of cases), followed by breast (11.6 percent), colorectal (9.6 percent), prostate (7.3 percent), and stomach cancers (4.9 percent). Colorectal cancer (CRC) cases will skyrocket from 1.9 million to 3 million globally by 2050, fueled by processed diets and rising early-onset cases. In 2023, 19,550 Americans under 50 were diagnosed with CRC. Deaths will climb from 904,000 to 1.4 million, especially in regions with poor screening. In North America, pancreatic cancer cases and deaths will both increase sharply, with minimal survival improvements expected. Obesity and diabetes are key drivers. Prostate cancer is set to explode—from 1.5 million cases today to a staggering 2.5 million by 2050—as the world's population ages. Deaths will leap from 397,000 to 600,000, hitting sub-Saharan Africa hardest, where life-saving treatments remain out of reach for millions. Liver cancer cases will rise from 865,000 to 1.2 million. Deaths will remain high, nearing one million, due to late detection in poorer parts of the world. In the US, liver cancer is on track to become more common, fueled by the country's growing struggles with obesity, diabetes, and heavy drinking—even as hepatitis-linked cases fade. Deaths will climb in lockstep, as most patients are diagnosed too late for effective treatment. Cervical cancer cases could plummet from 660,000 to 500,000 with wider HPV vaccination, yet deaths may persist in Africa without better screening. Meanwhile, North America could nearly eliminate it, thanks to vaccines and early detection. Stomach cancer cases will decline slightly due to H. pylori control, but deaths will persist in regions with limited healthcare. Pancreatic cancer cases will spike from 511,000 to 800,000, with deaths mirroring this rise due to poor survival rates. Esophageal cancer cases will grow from 511,000 to 700,000, driven by obesity-related adenocarcinoma in wealthy nations. While cases in the US are rising, deaths due to cancer are on the decline thanks to advances in treatment. Scientists can now engineer immune cells in the lab that target and kill cancer cells, while CRISPR-based gene therapies have progressed from the lab to clinical trials for people with cancer. Dr Karen E Knudsen, CEO of the American Cancer Society, said: 'Understanding the global cancer burden is critical to ensuring everyone has an opportunity to prevent, detect, treat, and survive cancer. This data provides insight into trends and potential areas for intervention and can help prioritize discovery efforts worldwide.'