Stop Using ChatGPT for These 11 Things Right Now

CNET · a day ago
ChatGPT and other AI chatbots can be powerful natural language tools, especially when you know how to prompt. You can use ChatGPT to save money on travel, plan your weekly meal prep or even help pivot your career.
While I'm a fan, I also know the limitations of ChatGPT, and you should too, whether you're a newbie or an old hand. It's fun for trying out new recipes, learning a foreign language or planning a vacation, but you don't want to give ChatGPT carte blanche in your life. It's not great at everything -- in fact, it can be downright sketchy at a lot of things.
ChatGPT sometimes hallucinates information and passes it off as fact, and it may not always have up-to-date information. It's incredibly confident, even when it's straight up wrong. (The same can be said about other generative AI tools, too, of course.)
That matters the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat. If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios when you should put down the AI and choose another option. Don't use ChatGPT for any of the following.
(Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
1. Diagnosing physical health issues
I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore over potential diagnoses, you could swing from dehydration and the flu to some type of cancer. When I told ChatGPT about a lump on my chest, lo and behold, it said I might have cancer. Awesome! In fact, I have a lipoma, which is not cancerous and occurs in 1 in every 1,000 people. My licensed doctor told me that.
I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you can walk in better prepared. And that could help make doctor visits less overwhelming. However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.
2. Taking care of your mental health
ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist. CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is a pale imitation at best, and incredibly risky at worst.
ChatGPT doesn't have lived experience, can't read your body language or tone, and has zero capacity for genuine empathy. It can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work -- the hard, messy, human work -- to an actual human who is trained to handle it. If you or someone you love is in crisis, please dial 988 in the US or call your local hotline.
3. Making immediate safety decisions
If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. I'd go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew. In a crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of info you feed it, and in an emergency, that may be too little, too late. So treat your chatbot as a post-incident explainer, never a first responder.
4. Getting personalized financial or tax planning
ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, retirement goals or risk appetite. Because its training data may stop short of the current tax year and the latest rate changes, its guidance may well be stale by the time you hit enter.
I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot simply can't replace a CPA who can catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not AI. Also, be aware that anything you share with an AI chatbot may become part of its training data, and that includes your income, your Social Security number and your bank routing information.
5. Dealing with confidential or regulated data
As a tech journalist, I see embargoes land in my inbox every day, but I've never thought about tossing any of these press releases into ChatGPT to get a summary or further explanation. That's because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement.
The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it may be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.
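If you absolutely must run sensitive text through a chatbot, a good habit is to scrub the obvious identifiers before the text ever leaves your machine. Here's a minimal sketch in Python of what that could look like; the scrub_pii helper and its regex patterns are hypothetical and nowhere near exhaustive, so treat this as an illustration, not a compliance tool.

import re

# Illustrative patterns only -- real PII detection needs a dedicated
# data-loss-prevention tool, not three regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace obvious identifiers with placeholders before text leaves your machine."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Reach Jane at jane.doe@example.com or 555-867-5309. SSN: 123-45-6789."
    print(scrub_pii(draft))
    # Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED]. SSN: [SSN REDACTED].

Even then, the safest sensitive data is the data you never paste in at all.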
6. Doing anything illegal
This one is self-explanatory.
7. Cheating on schoolwork
I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, cheating has scaled up to a level that makes my little stunt look remarkably tame.
Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.
8. Monitoring information and breaking news
Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source. However, it won't stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.
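If you really do need continuous updates, subscribing to a feed beats re-prompting a chatbot. As a rough illustration, here's a tiny Python polling loop; it assumes the third-party feedparser package and a hypothetical FEED_URL, and most people are better served by push alerts or a proper feed reader.

import time
import feedparser  # third-party: pip install feedparser

# Hypothetical feed URL -- swap in the outlet's real RSS/Atom feed.
FEED_URL = "https://example.com/news/rss"

seen = set()
while True:
    for entry in feedparser.parse(FEED_URL).entries:
        key = entry.get("id", entry.get("link"))
        if key and key not in seen:
            seen.add(key)
            print(entry.get("title"), "->", entry.get("link"))
    time.sleep(300)  # check every five minutes; push alerts are still faster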
9. Gambling
I've actually had luck with ChatGPT, hitting a three-way parlay during the NCAA men's basketball championship, but I would never recommend it to anyone. I've seen ChatGPT hallucinate, serving up incorrect player statistics, misreported injuries and wrong win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky. ChatGPT can't see tomorrow's box score, so don't rely on it alone to get you that win.
10. Drafting a will or other legally binding contract
ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. However, the moment you ask it to draft actual legal text, you're rolling the dice. Estate and family-law rules vary by state, and sometimes even by county, so skipping a witness signature or omitting the notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn that checklist into a document that stands up in court.
11. Making art
This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and for help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.