
Latest news with #DavidDSouza

Talk to medical professionals, not just ChatGPT, urge Ontario doctors

CBC

a day ago

  • Health
  • CBC

ChatGPT and similar artificial intelligence tools can sometimes answer patient questions accurately, but Canadian medical researchers caution that the information needs to be carefully checked before acting on what you see.

The researchers' advice comes as the Ontario Medical Association (OMA) hosted a media briefing this week, discussing DIY information sources — from search engines to social media to chatbots — and their impacts, as well as what patients can do instead.

It's important to warn people now, said Dr. Valerie Primeau, a psychiatrist from North Bay who leads inpatient and community programs for mental health and addictions, because patients are increasingly turning to AI tools. The chatbots give convincing and empathetic results — but the information might be fake.

"I have patients now that talk to ChatGPT to get advice and have a conversation," Primeau said. "So I foresee that we will continue having this issue, and if we don't address it now and help people navigate this, they will struggle."

Dr. David D'Souza, a radiation oncologist in London, Ont., who leads clinical research into image-based treatments for cancer, said that depending on how patients interpret what AI tells them, they could put off conventional treatments.

"A patient came to me asking if he should wait to have his cancer that was diagnosed treated in a few years because he believes that AI will customize cancer treatments for patients," D'Souza told reporters. "I had to convince him why he should have treatment now."

Given that consumers will use the tools regardless, OMA president Dr. Zainab Abdurrahman advised that if a post claims "doctors have been hiding this from you," patients should check the websites of relevant specialist groups, such as provincial cancer care associations, to see if they back it up. Fake ads, including AI-generated images, can also lead patients astray, warned Abdurrahman, who is also a clinical immunologist and allergist.

Lost nuance makes AI results harder to rely on

While the technology is progressing, today's chatbots routinely answer health queries with false information that appears authoritative. In one study, Dr. Benjamin Chin-Yee, an assistant professor in the pathology and lab medicine department at Western University, and his co-authors fed nearly 5,000 summaries of medical and scientific literature into AI large language models, including ChatGPT, and asked the models to summarize them. They found three-quarters of the AI versions missed key parts of carefully guarded statements.

For example, a journal article might say a drug was only effective in a certain group of patients, while the AI summary leaves out that key detail, said Chin-Yee, who is also a hematologist. "The worry is that when that nuance in detail is lost, it can be misleading to practitioners who are trying to use that knowledge to impact their clinical practice."

Chin-Yee said AI is an active area of research that is rapidly changing, with newer models that are more human-like and user-friendly, but there can be drawbacks to relying on the tools alone.

Similarly, David Chen, a medical student at the University of Toronto, compared chatbot replies to 200 questions about cancer posted on a Reddit forum with responses provided by oncologists. "We were surprised to find that these chatbots were able to perform to near-human expert levels of competency based on our physician team's assessment of quality, empathy and readability," Chen said.

But the experimental results may not reflect what happens in the real world.
"Without medical oversight, it's hard to 100 per cent trust some of these outputs of these generative technologies," Chen said, adding concerns about privacy, security, and patient trust still haven't been fully explored. WATCH | Researchers use AI to help treat brain patients: MUN's Faculty of Medicine is using A.I. to treat brain patients 4 months ago Duration 5:00 Artificial intelligence is expected to revolutionize health care. And at Memorial University's Faculty of Medicine, A.I. is already being used in the treatment of patients with various brain conditions. Neuroscience professor Michelle Ploughman showed the CBC's Carolyn Stokes around her lab at the Miller Centre and demonstrated how A.I. is changing patient care. Don't rely on a single chatbot Generative AI technologies like chatbots are based on pattern-matching technologies that give the most likely output to a given question, based on whatever information it was trained on. In medicine, though, unlikely possible diagnoses can also be important and shouldn't be ruled out. Plus, chatbots can hallucinate — produce outputs that sound convincing but are incorrect, made up, nonsensical or irrelevant. "There's also been research studies that have been put out that suggested that there are hallucination rates of these chat bots that can be upwards of 20 per cent," Chen said, which could make the output "clinically erroneous." In the spring, cardiologist Eric Topol, a professor and executive vice president of Scripps Research in San Diego, Calif., published a book, Superagers: An Evidence-Based Approach to Longevity, that looked at the impact of AI on longevity and quality of life. "There's a lot of good anecdotes, there's bad anecdotes," Topol said of patients using chatbots. "It hasn't been systematically assessed in a meaningful way for public use." Topol said he advises people to consult multiple chatbots and to check that you're getting reliable information. He also suggested asking for citations from the medical literature, noting sometimes those aren't real and need to be verified. Ideally, Topol said there would be a real-world test of chatbot responses from tens of thousands of people tracking what tests were done, what diagnosis was given and the outcomes for those who used AI sources and those who didn't. But tech companies are unlikely to participate because each one wouldn't gain, he said. "It's a different world now and you can't go back in time," Topol said of using the tools wisely.

Ontario doctors warn of increase in DIY medicine

CTV News

3 days ago

  • Health
  • CTV News

The Ontario Medical Association (OMA) is sounding the alarm on what it says is a concerning increase in the number of patients turning to do-it-yourself medical solutions rather than getting expert advice from doctors.

'We know people are going online,' Dr. David D'Souza said. 'The aspect of looking is not necessarily a problem; it's the interpretation of it.'

D'Souza, a radiation oncologist in London, Ont., said patients are often drawn to ideas that seem 'all natural' or that seem to offer 'miraculous' results or options with no unpleasant side effects. He cited a study that found about a third of the most popular social media posts about cancer from 2018-2019 contained factually incorrect information.

'You might say, well, what's the big deal? What's the problem with it? Well, most of them are potentially harmful,' he said.

D'Souza was one of several doctors who took part in a news conference hosted by the OMA Wednesday, calling attention to the rising trend of do-it-yourself medical solutions. Doctors on the panel said they are increasingly encountering self-diagnoses based on internet research, or having to answer questions from patients about viral videos suggesting that fast food can cure migraines or that CBD oil can shrink tumours. Some patients are even trying to treat themselves.

'I had a patient who had a tube going into their kidney because it was blocked – it's called a nephrostomy tube – due to their cancer. And they actually tried to put the twine from a weed whacker in to get out the sludge that was in there,' D'Souza recalled. 'They were asking about actually putting in a little bit of Lysol to clear it out.'

While he managed to dissuade them, other patients have chosen alternative treatments based on their own research, sometimes with devastating effects. One young woman, D'Souza recalled, came to him with a diagnosis of cervical cancer.

'She was not ready to accept conventional treatment and decided she was going to pursue other remedies that she had heard about,' D'Souza said. 'She came back two years later, unfortunately, with her disease having progressed and spread, and in a lot of pain, and unfortunately, our ability to control her cancer and give her a long-term good outcome was severely compromised.'

Patients making diagnoses with online quizzes

Dr. Valerie Primeau, a psychiatrist from North Bay, Ont., said she's seeing more and more people using quick online tools to diagnose themselves with attention deficit/hyperactivity disorder (ADHD), bipolar disorder and other problems.

'The first concern, obviously, is misdiagnosis,' Primeau said. 'And there's certain disorders that are higher risk of misdiagnosis, specifically bipolar disorder.'

She noted that believing you have an illness can increase anxiety about it, which can itself have negative health impacts. Best practices around treatment can also change dramatically in the space of just a couple of years, she said, information that medical experts are more likely to be apprised of than online resources.

'So that can be dangerous, as well as being given unfiltered advice about how to manage the illness, which is not likely to be evidence-based,' Primeau said.

She estimated that around a third of the patients she sees come to her with self-diagnoses, and she expects that proportion to increase.
'It's happening more right now, and I foresee it continuing to happen more and more, especially with AI technology getting more and more available and more and more sophisticated,' Primeau said. 'I have patients now that talk to ChatGPT to get advice.'

Social media a source of medical misinformation

Dr. Alyse Goldberg, a Toronto endocrinologist who focuses on fertility and treating hormonal conditions, said existing technologies, particularly social media, are already driving people to health information that may not be reliable. She showed examples of posts, presented to her by social media accounts she doesn't even follow, which described 'invisible signs of polycystic ovary syndrome (PCOS)' and offered 'tips' about other disorders she regularly discusses.

'You get targeted in terms of what therapeutic options your physician may be giving you, but then reasons to avoid some evidence-based treatment,' Goldberg said.

While some of the solutions presented in the posts might sound amazing, they may not be tested or evidence-based. Nevertheless, seeing the posts could 'fracture the relationship with the physician,' Goldberg said, especially if the patient feels that good options have been 'withheld.' Some of the posts might also push users toward products that aren't effective or appropriate, and Goldberg said it's important to think about 'who's trying to make money off of us and use our symptoms or medical experiences in order to self-promote.'

OMA president Dr. Zainab Abdurrahman said the organization is particularly concerned about the rise in self-diagnosis and self-treatment among young people, who tend to lean heavily on information from the internet.

'When you break it down by generations, we're also seeing a higher uptake, especially in some of our very young populations, who are still in their teens and early 20s, who are looking more at social media and in terms of how they quantify how reliable or credible a source is versus other generations,' Abdurrahman said.

She also pointed out that combatting misinformation is a wider problem society is grappling with right now. 'We want to come and address and talk about this, and talk about how to get credible information, because we know misinformation and disinformation is something that, as a society, we are managing, and health-care is not immune to this.'

While there are many pitfalls and problems with self-diagnosis and self-treatment, the doctors point out that doing some research from legitimate sources can be beneficial if it leads you to consult a physician who can more accurately diagnose a problem. They also stress that it's important for medical professionals to be communicative with their patients rather than judgmental, recognizing that a prescribed course of treatment can sometimes leave patients feeling like they don't have control.

'Rather than coming back with a judgmental tone, I embrace the fact that they are communicating,' D'Souza said.

The doctors also acknowledge that limited access to family doctors and financial barriers to certain kinds of medical tests and assessments could be driving people into the arms of Dr. Google, where quick answers are easy to come by.

'Our phones now are intelligent. They listen to us and they look at our trends,' Primeau pointed out. 'And if we talk about something that we're concerned about, they will show us posts that relate to that. So the answers seem more immediate, and people want that. People are looking for answers, and they get that validation from that access on social media.' (A toy sketch of this kind of interest-based ranking follows the article.)
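As a rough illustration of the targeting Primeau describes, here is a deliberately naive, hypothetical sketch: rank posts by how many topics they share with a user's recent interests. Real recommendation systems are vastly more complex, and every name and data point below is invented.

```python
# Naive interest-overlap ranking, loosely mirroring how engagement history
# can steer a feed toward related (and sometimes misleading) health posts.
recent_interests = {"pcos", "fertility", "hormones"}

posts = [
    {"title": "Invisible signs of PCOS doctors miss", "tags": {"pcos", "hormones"}},
    {"title": "Weekend hiking trails near Toronto",   "tags": {"travel", "outdoors"}},
    {"title": "Natural fertility fixes they hide",    "tags": {"fertility", "pcos"}},
]

# Score each post by the number of shared topics; break ties alphabetically
# so the ordering is deterministic.
ranked = sorted(
    posts,
    key=lambda p: (-len(p["tags"] & recent_interests), p["title"]),
)
for post in ranked:
    print(len(post["tags"] & recent_interests), post["title"])
```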
