
Meta AI's new chatbot raises privacy alarms
If you've ever typed something sensitive into Meta AI, now is the time to check your settings and find out just how much of your data could be exposed.
Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my CYBERGUY.COM/NEWSLETTER.
Meta's AI app, launched in April 2025, is designed to be both a chatbot and a social platform. Users can chat casually or dive deep into personal topics, from relationship questions to financial concerns or health issues.
What sets Meta AI apart from other chatbots is the "Discover" tab, a public feed that displays shared conversations. It was meant to encourage community and creativity, letting users showcase interesting prompts and responses. Unfortunately, many didn't realize their conversations could be made public with just one tap, and the interface often fails to make the public/private distinction clear.
The feature positions Meta AI as a kind of AI-powered social network, blending search, conversation, and status updates. But what sounds innovative on paper has opened the door to major privacy slip-ups.
Privacy experts are sounding the alarm over Meta's Discover tab, calling it a serious breach of user trust. The feed surfaces chats containing legal dilemmas, therapy discussions, and deeply personal confessions, often linked to real accounts. In some cases, names and profile photos are visible. Although Meta says only shared chats appear, the interface makes it easy to hit "share" without realizing it means public exposure. Many assume the button saves the conversation privately. Worse, logging in with a public Instagram account can make shared AI activity publicly accessible by default, increasing the risk of identification.
Some posts reveal sensitive health or legal issues, financial troubles, or relationship conflicts. Others include contact details or even audio clips. A few contain pleas like "keep this private," written by users who didn't realize their messages would be broadcast. These aren't isolated incidents, and as more people use AI for personal support, the stakes will only get higher.
If you're using Meta AI, it's important to check your privacy settings and manage your prompt history. To prevent accidentally sharing sensitive prompts and keep your future prompts private:
On a phone (iPhone or Android):
On the website (desktop):
Fortunately, you can change the visibility of prompts you've already posted, delete them entirely, and update your settings to keep future prompts private.
On a phone (iPhone or Android):
On the website (desktop):
If other users replied to your prompt before you made it private, those replies will remain attached but won't be visible unless you reshare the prompt. Once reshared, the replies will also become visible again.
On both the app and the website:
This issue isn't unique to Meta. Most AI chat tools, including ChatGPT, Claude, and Google Gemini, store your conversations by default and may use them to improve performance, train future models, or develop new features. What many users don't realize is that their inputs can be reviewed by human moderators, flagged for analysis, or saved in training logs.
Even if a platform says your chats are "private," that usually just means they aren't visible to the public. It doesn't mean your data is encrypted, anonymous, or protected from internal access. In many cases, companies retain the right to use your conversations for product development unless you specifically opt out, and finding that opt-out isn't always straightforward.
If you're signed in with a personal account that includes your real name, email address, or social media links, your activity may be easier to connect to your identity than you think. Combine that with questions about health, finances, or relationships, and you've essentially created a detailed digital profile without meaning to.
Some platforms now offer temporary chat modes or incognito settings, but these features are usually off by default. Unless you manually enable them, your data is likely being stored and possibly reviewed.
The takeaway: AI chat platforms are not private by default. You need to actively manage your settings, be mindful of what you share, and stay informed about how your data is being handled behind the scenes.
AI tools can be incredibly helpful, but without the right precautions, they can also open you up to privacy risks. Whether you're using Meta AI, ChatGPT, or any other chatbot, here are some smart, proactive ways to protect yourself:
1) Use aliases and avoid personal identifiers: Don't use your full name, birthday, address, or any details that could identify you. Even first names combined with other context can be risky. (See the sketch after this list for a simple way to scrub obvious identifiers before you paste text into any chatbot.)
2) Never share sensitive information: Avoid discussing medical diagnoses, legal matters, bank account info, or anything you wouldn't want on the front page of a search engine.
3) Clear your chat history regularly: If you've already shared sensitive info, go back and delete it. Many AI apps let you clear chat history through Settings or your account dashboard.
4) Adjust privacy settings often: App updates can sometimes reset your preferences or introduce new default options. Even small changes to the interface can affect what's shared and how. It's a good idea to check your settings every few weeks to make sure your data is still protected.
5) Use an identity theft protection service: Scammers actively look for exposed data, especially after a privacy slip. Identity theft protection services can monitor personal information like your Social Security number (SSN), phone number, and email address and alert you if it is being sold on the dark web or used to open an account. They can also help you freeze your bank and credit card accounts to prevent further unauthorized use by criminals. Visit Cyberguy.com/IdentityTheft for tips and recommendations.
6) Use a VPN for extra privacy: A reliable VPN hides your IP address and location, making it harder for apps, websites, or bad actors to track your online activity. It also adds protection on public Wi-Fi, shielding your device from hackers who might try to snoop on your connection. For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android & iOS devices at Cyberguy.com/VPN.
7) Don't link AI apps to your real social accounts: If possible, create a separate email address or dummy account for experimenting with AI tools. Keep your main profiles disconnected. To create a quick email alias you can use to keep your main accounts protected, visit Cyberguy.com/Mail.
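For readers comfortable with a little scripting, here is one way to act on tips 1 and 2 before pasting a long message into any chatbot: run it through a quick local scrubber first. This is a minimal, illustrative Python sketch, not a complete PII filter; the regex patterns and placeholder labels are assumptions made for this example and will miss many real-world formats.

```python
import re

# Illustrative patterns only; real-world identifier detection needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before the text leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = (
        "My landlord ignored emails to jane.doe@example.com and calls to "
        "555-867-5309. My SSN is 123-45-6789. What are my options?"
    )
    print(scrub(prompt))
    # Prints: My landlord ignored emails to [EMAIL REDACTED] and calls to
    # [PHONE REDACTED]. My SSN is [SSN REDACTED]. What are my options?
```

The point isn't the specific patterns; it's the habit of treating anything you type into an AI app as text that may be stored, reviewed, or surfaced later, and stripping out whatever doesn't need to be there.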
Meta's decision to turn chatbot prompts into social content has blurred the line between private and public in a way that catches many users off guard. Even if you think your chats are safe, a missed setting or default option can expose more than you intended. Before typing anything sensitive into Meta AI or any chatbot, pause. Check your privacy settings, review your chat history, and think carefully about what you're sharing. A few quick steps now can save you from bigger privacy headaches later.
With so much sensitive data potentially at risk, do you think Meta is doing enough to protect your privacy, or is it time for stricter guardrails on AI platforms? Let us know by writing to us at Cyberguy.com/Contact.
Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide - free when you join my CYBERGUY.COM/NEWSLETTER.
Copyright 2025 CyberGuy.com. All rights reserved.
