We Asked Chatbots About Home Security: Here's Why You Can't Trust Them
Yahoo · 11-02-2025

I've been a proponent of useful AI in home security, where it's holding conversations for us, identifying packages, learning to recognize important objects and searching our video histories to answer questions. But that doesn't mean you should pop open ChatGPT and start asking it security questions.
Generative and conversational AI tools have their uses, but it's a bad idea to ask any chatbot about your safety, home security, or threats to your house. We tried -- and it's unnerving how much they get wrong or can't help with.
There are good reasons for this: Even the best LLMs, or large language models, still hallucinate information from the patterns they've gleaned. That's especially a problem in smart home tech, where tech specs, models, compatibility, vulnerabilities and updates shift so frequently. That means it's easy for ChatGPT to get confused about what's right, current or even real.
Let's look at a few of the biggest mistakes, so you can see what I mean.
Asking a chatbot about specific security technology is always a risky business, and nothing illustrates that quite so well as this popular Reddit story about a chat AI that told the user a Tesla could access their "home security systems." That's not true -- it's probably a hallucination based on Tesla's HomeLink service, which lets you open compatible garage doors. Services like Google Gemini also suffer from hallucinations, which can make the details hard to trust.
While AI can write anything from essays to phishing emails (don't do that), it still gets information wrong, which can lead to unfounded privacy concerns. Interestingly, when I asked ChatGPT what Teslas could connect to and monitor, it didn't make the same mistake, but it did skip features like HomeLink, so you still aren't getting the full picture. And that's just the start.
ChatGPT and other LLMs also struggle to assimilate real-time information and use it to provide advice. That's especially noticeable during natural disasters like wildfires, floods or hurricanes. As Hurricane Milton was bearing down this month, I queried ChatGPT about whether my home was in danger and where Milton was going to hit. Though, thankfully, the chatbot avoided wrong answers, it was unable to give me any advice except to consult local weather channels and emergency services.
Don't waste time on that when your home may be in trouble. Instead of turning to AI for a quick answer, consult weather apps and software like Watch Duty; up-to-date satellite imagery; and local news.
It would be nice if AI chatbots could provide a summary of a brand's history with security breaches and whether there are any red flags about purchasing the brand's products. Unfortunately, they don't seem capable of that yet, so you can't really trust what they have to say about security companies.
For example, when I asked ChatGPT if Ring had suffered any security breaches, it acknowledged that Ring had experienced security incidents but never said when they happened (before 2018), which is a vital piece of information. It also missed key developments, including the completion of Ring's payout to affected customers this year and Ring's 2024 policy reversal that made cloud data harder for police to access.
When I asked about Wyze, which CNET isn't currently recommending, ChatGPT said it was a "good option" for home security but mentioned it suffered a data breach in 2019 that exposed user data. But it didn't mention that Wyze had exposed databases and video files in 2022, then vulnerabilities in 2023 and again in 2024 that let users access private home videos that weren't their own. So while summaries are nice, you certainly aren't getting the full picture when it comes to security history or whether brands are safe to trust.
Read more: We Asked a Top Criminologist How Burglars Choose Homes
Another common home security question I see is about the need for subscriptions to use security systems or home cameras. Some people don't want to pay ongoing subscriptions, or they want to make sure what they get is worth it. Though chatbots can rattle off plenty of specifics for something like a recipe, they aren't any help here.
When I questioned ChatGPT about whether Reolink requires subscriptions, it couldn't give me any specifics, saying many products don't require subscriptions for basic features but that Reolink "may offer subscription plans" for advanced features. I tried to narrow it down with a question about the Reolink Argus 4 Pro, but again ChatGPT remained vague about some features being free and some possibly needing subscriptions. As answers go, these were largely useless.
Meanwhile, a trip to CNET's guide on security camera subscriptions or Reolink's own subscriptions page shows that Reolink offers both Classic and Upgraded tier subscriptions specifically for LTE cameras, starting at $6 to $7 per month, depending on how many cameras you want to support, and going up to $15 to $25 for extra cloud storage and rich notifications/smart alerts. Finding those answers takes less time than asking ChatGPT, and you get real numbers to work with.
As the famous detective said, "Just one more thing." If you do ever query a chatbot about home security, never give it any personal information, like your home address, your name, your living situation or any type of payment info. AIs like ChatGPT have had bugs before that allowed other users to spy on private data like that.
Additionally, LLM privacy policies can always be updated or left vague enough to allow for profiling and the sale of user data they collect. The scraping of data from social media is bad enough; you really don't want to hand personal details over directly to a popular AI service.
Be careful what data you provide as part of a question, and even how you phrase your query, because there's always someone eager to take advantage of whatever data you let slip. If you think you've already given out your address a few too many times online, we have a guide on how you can help fix that.
Read more: Your Private Data Is All Over the Internet. Here's What You Can Do About It
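If you want a concrete sense of what "be careful what data you provide" can look like in practice, here is a minimal, illustrative sketch of redacting obvious personal details from a prompt before it ever leaves your machine. The patterns and placeholder names here are my own simplified examples, not part of any official chatbot tool, and a real redaction tool would need far broader coverage:

```python
import re

# Hypothetical patterns -- a minimal sketch, not a complete PII filter.
PII_PATTERNS = [
    # Street addresses like "123 Main Street"
    (re.compile(r"\b\d{1,5}\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd|Drive|Dr)\b", re.I),
     "[ADDRESS]"),
    # US-style phone numbers like "555-123-4567"
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "[PHONE]"),
    # 16-digit card numbers like "4242 4242 4242 4242"
    (re.compile(r"\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b"), "[CARD]"),
]

def scrub(prompt: str) -> str:
    """Replace obvious address/phone/card patterns with placeholders."""
    for pattern, placeholder in PII_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("I live at 123 Main Street, call me at 555-123-4567"))
```

Simple substitutions like these won't catch everything, which is the larger point: the safest personal detail is the one you never type into the chat box at all.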
For more information, check out whether you should pay for more-advanced ChatGPT features, and take a look at our in-depth review of Google Gemini and our coverage of the latest on Apple Intelligence.