Latest news with #TroyZada
Yahoo
26-06-2025
- Health
- Yahoo
ChatGPT Tells User to Mix Bleach and Vinegar
Does mixing bleach and vinegar sound like a great idea? Kidding aside, please don't do it, because it will create a plume of poisonous chlorine gas that will cause a range of horrendous symptoms if inhaled (a simplified sketch of the chemistry appears after this article). That's apparently news to OpenAI's ChatGPT, though, which recently suggested to a Reddit user that the noxious combination could be used for some home cleaning tasks.
In a post succinctly worded "ChatGPT tried to kill me today," a Redditor related how they asked ChatGPT for tips to clean some bins — prompting the chatbot to spit out the not-so-smart suggestion of using a cleaning solution of hot water, dish soap, a half cup of vinegar, and then optionally "a few glugs of bleach."
When the Reddit user pointed out this egregious mistake to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comical fashion. "OH MY GOD NO — THANK YOU FOR CATCHING THAT," the chatbot cried. "DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately."
Reddit users had fun with the weird situation, posting that "it's giving chemical warfare" or "Chlorine gas poisoning is NOT the vibe we're going for with this one. Let's file that one in the Woopsy Bads file!"
This is all fun and games until somebody really does mix bleach and vinegar and suffers a medical catastrophe. What then? We already have stories about people asking ChatGPT how to inject facial filler, while studies are coming out saying that using ChatGPT to self-diagnose an issue leads to erroneous answers that may put you on the wrong medical path. For example, the University of Waterloo in Ontario recently published research showing that ChatGPT got the answers wrong two-thirds of the time when answering medical questions.
"If you use LLMs for self-diagnosis, as we suspect people increasingly do, don't blindly accept the results," Troy Zada, a management sciences doctoral student and first author of the paper, said in a statement about the research. "Going to a human health-care practitioner is still ideal."
Unfortunately, the AI industry is making little progress in eliminating the hallucinations these models spit out, even as the models otherwise become more advanced — a problem that will likely get worse as AI embeds itself ever more deeply into our lives.
More on OpenAI's ChatGPT: OpenAI May Have Screwed Up So Badly That Its Entire Future Is Under Threat
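As general chemistry background (this is not from the article itself): household bleach is an aqueous solution of sodium hypochlorite and vinegar is dilute acetic acid. Acidifying the hypochlorite first liberates hypochlorous acid, which then reacts with chloride ions present in the solution to release chlorine gas. A simplified two-step sketch of the reaction:
\[
\mathrm{NaOCl + CH_3COOH \;\rightarrow\; HOCl + CH_3COONa}
\]
\[
\mathrm{HOCl + H^{+} + Cl^{-} \;\rightarrow\; Cl_2\,(g) + H_2O}
\]
Even small amounts of chlorine gas irritate the eyes, throat, and lungs, which is why mixing the two cleaners is never advised.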


CTV News
02-06-2025
- Health
- CTV News
Researchers urge caution when using ChatGPT to self-diagnose illnesses
Researchers examined the use of ChatGPT-4 to self-diagnose health problems. As Canadians increasingly turn to artificial intelligence for quick answers about health problems, a new study warns that relying on tools like ChatGPT for self-diagnosis could be risky.
A team led by researchers at the University of Waterloo evaluated the performance of ChatGPT-4, a large language model (LLM) released by OpenAI. The chatbot was asked a series of open-ended medical questions based on scenarios modified from a medical licensing exam (a rough sketch of how such an evaluation loop might be scripted appears after this article). The findings were striking: only 31 per cent of ChatGPT's responses were deemed entirely correct, and just 34 per cent were considered clear.
PhD student Troy Zada and Dr. Sirisha Rambhatla at the University of Waterloo are part of the research team.
'So, not that high,' said Troy Zada, a PhD student at the University of Waterloo who led the research team. 'If it is telling you that this is the right answer, even though it's wrong, that's a big problem, right?'
The researchers compared ChatGPT-4 with its earlier 3.5 version and found significant improvements, but not enough. In one example, the chatbot confidently diagnosed a patient's rash as a reaction to laundry detergent. In reality, it was caused by latex gloves — a key detail missed by the AI, which had been told the patient studied mortuary science and used gloves.
The researchers concluded that LLMs are not yet reliable enough to replace medical professionals and should be used with caution when it comes to health matters. This is despite studies that have found AI chatbots can best human doctors in certain situations and pass medical exams involving multiple-choice questions.
Zada said he's not suggesting people stop using ChatGPT for medical information, but they must be aware of its limitations and potential for misinformation. 'It could tell you everything is fine when there's actually a serious underlying issue,' said Zada. He says it could also offer up information that would make someone needlessly worry.
Millions of Canadians currently do not have a family doctor, and there are concerns some may be relying on artificial intelligence to diagnose health problems, even though AI chatbots often advise users to consult an actual doctor. The researchers also noted the chatbots lack accountability, whereas a human doctor can face severe consequences for errors, such as having their licence revoked or being charged with medical malpractice.
While the researchers note ChatGPT did not get any of the answers spectacularly wrong, they have some simple advice. 'When you do get a response be sure to validate that response,' said Zada.
Dr. Amrit Kirpalani agrees. He's a pediatric nephrologist and assistant professor at Western University who has studied AI in medicine and has noticed more patients and their family members bringing up AI platforms such as ChatGPT. He believes doctors should initiate conversations about its use with patients because some may be hesitant to talk about it. 'Nobody wants to tell their doctor that they went on ChatGPT and it told them something different,' says Kirpalani.
He'd prefer patients discuss a chatbot's response with a physician, especially since an AI can sometimes be even more persuasive than a human. 'I'm not sure I could be as convincing as an AI tool. They can explain some things in a much more simple and understandable way,' says Kirpalani. 'But the accuracy isn't always there. So it could be so convincing even when it's wrong.'
He likens AI to another familiar online tool. 'I kind of use the Wikipedia analogy of, it can be a great source of information, but it shouldn't be your primary source. It can be a jumping-off point.'
The researchers also acknowledge that as LLMs continue to improve, they could eventually be reliably used in a medical setting. But for now, Zada has this to say: 'Don't blindly accept the results.'
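The article describes the study only at a high level. Purely as an illustration of what this kind of evaluation involves, here is a minimal, hypothetical Python sketch that sends open-ended, exam-style questions to an OpenAI chat model and saves the free-text answers for human grading. The file names, the "gpt-4" model string, and the helper function are stand-ins chosen for illustration; this is not the Waterloo team's actual code or protocol.

# Hypothetical sketch: query an OpenAI chat model with open-ended,
# exam-style medical questions and save the answers for manual grading.
# File names and the model string are illustrative assumptions.

import csv
from openai import OpenAI  # official OpenAI Python client (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str) -> str:
    """Send one open-ended question and return the model's free-text answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in model name; any chat model could be slotted in
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def main() -> None:
    # questions.txt: one exam-style scenario per line (hypothetical input file)
    with open("questions.txt", encoding="utf-8") as f:
        questions = [line.strip() for line in f if line.strip()]

    # Collect question/answer pairs; correctness and clarity would be judged
    # afterwards by human reviewers, not by this script.
    with open("answers.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["question", "answer"])
        for q in questions:
            writer.writerow([q, ask(q)])

if __name__ == "__main__":
    main()

In a study like the one described, the saved answers would then be scored by clinicians or trained graders for correctness and clarity, which is where figures such as "31 per cent entirely correct" would come from.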

