The Tea App Data Breach: What Was Exposed and What We Know About the Class Action Lawsuit

CNET · 2 days ago
Tea, a women's dating safety app that surged to the top of the free iOS App Store listings, suffered a major security breach last week. The company confirmed Friday that it "identified unauthorized access to one of our systems" that exposed thousands of user images. And now we know that DMs were accessed during the breach, too.
Tea's preliminary findings from the end of last week showed the data breach exposed approximately 72,000 images: 13,000 images of selfies and photo identification that people had submitted during account verification, and 59,000 images that were publicly viewable in the app from posts, comments and direct messages.
Those images had been stored in a "legacy data system" that contained information from more than two years ago, the company said in a statement. "At this time, there is no evidence to suggest that current or additional user data was affected."
Earlier Friday, Reddit posts and a report from 404 Media said that Tea app users' faces and IDs had been posted on the anonymous online message board 4chan. Tea requires users to verify their identities with selfies or IDs, which is why driver's licenses and pictures of people's faces are in the leaked data.
On Monday, a Tea spokesperson confirmed to CNET that the company "recently learned that some direct messages (DMs) were accessed as part of the initial incident." Tea has also taken the affected system offline. That confirmation followed a Monday report by 404 Media that an independent security researcher found it would have been possible for hackers to access DMs between Tea users, affecting messages sent as recently as last week.
Tea said it has launched a full investigation to assess the scope and impact of the breach.
Class action lawsuit filed
Tea app user Griselda Reyes has filed a class action lawsuit on behalf of herself and other Tea users affected by the data breach. According to court documents filed on July 28, as reported earlier by 404 Media, Reyes is suing Tea over its alleged "failure to properly secure and safeguard ... personally identifiable information."
"Shortly after the data breach was announced, internet users claimed to have mapped the locations of Tea's users based on metadata contained from the leaked images," the complaint alleges. "Thus, instead of empowering women, Tea has actually put them at risk of serious harm."
The complaint also alleges that Tea has yet to personally notify its customers that their data was breached.
The complaint is seeking class action status, damages for those affected "in an amount to be determined" and certain requirements for Tea to improve its data storage and handling practices.
Scott Edward Cole of Cole & Van Note, the law firm representing Reyes, told CNET he is "stunned" by the alleged lack of security protections in place.
"This application was advertised as a safe place for women to share information, sometimes very intimate information, about their dating experiences. Few people would take that risk if they'd known Tea Dating put such little effort into its cybersecurity," Cole alleged. "One chief goal of our lawsuit is to compel the company to start taking user privacy a lot more seriously."
Tea did not immediately respond to a request for comment on the class action lawsuit.
What is the Tea app?
The premise of Tea is to give women a space to report negative interactions they've had with men in the dating pool, with the intention of keeping other women safe.
The app is currently sitting at the No. 2 spot for free apps on Apple's US App Store, right behind ChatGPT, drawing international attention and sparking debate over whether the app violates men's privacy. The breach also feeds into the wider, ongoing debate about whether online identity and age verification pose an inherent security risk to internet users.
In the privacy section on its website, Tea says: "Tea Dating Advice takes reasonable security measures to protect your Personal Information to prevent loss, misuse, unauthorized access, disclosure, alteration and destruction. Please be aware, however, that despite our efforts, no security measures are impenetrable."

Related Articles

Never Use ChatGPT for These 11 Things

CNET · an hour ago

ChatGPT is the most popular AI chatbot on the internet for good reason. Over the last three years, AI has changed the way we interact with the world around us, making it easier to do all kinds of things. Whether you're planning a trip, trying to save money on groceries or searching for specific information, AI has you covered. While I'm a fan, I also know the limitations of ChatGPT, and you should too, whether you're a newbie or an expert. It's fun for trying new recipes, learning a foreign language or planning a vacation, but you don't want to give ChatGPT carte blanche in your life.

It's not great at everything -- in fact, it can be downright sketchy at a lot of things. ChatGPT sometimes hallucinates information and passes it off as fact, and it may not always have up-to-date information. It's incredibly confident, even when it's straight up wrong. (The same can be said about other generative AI tools, too, of course.) That matters more the higher the stakes get, like when taxes, medical bills, court dates or bank balances enter the chat. If you're unsure about when turning to ChatGPT might be risky, here are 11 scenarios when you should put down the AI and choose another option. Don't use ChatGPT for any of the following.

(Disclosure: Ziff Davis, the parent company of CNET, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

1. Diagnosing physical health issues
I've definitely fed ChatGPT my symptoms out of curiosity, but the answers that come back can read like your worst nightmare. As you pore over potential diagnoses, you could swing from dehydration and the flu to some type of cancer. I have a lump on my chest and entered that information into ChatGPT. Lo and behold, it told me I may have cancer. In fact, I have a lipoma, which is not cancerous and occurs in 1 in every 1,000 people. My licensed doctor told me that. I'm not saying there are no good uses of ChatGPT for health: It can help you draft questions for your next appointment, translate medical jargon and organize a symptom timeline so you can walk in better prepared. And that could help make doctor visits less overwhelming. However, AI can't order labs or examine you, and it definitely doesn't carry malpractice insurance. Know its limits.

2. Taking care of your mental health
ChatGPT can offer grounding techniques, sure, but it can't pick up the phone when you're in real trouble with your mental health. I know some people use ChatGPT as a substitute therapist. CNET's Corin Cesaric found it mildly helpful for working through grief, as long as she kept its limits front of mind. But as someone who has a very real, very human therapist, I can tell you that ChatGPT is still only a pale imitation at best, and incredibly risky at worst. ChatGPT doesn't have lived experience, can't read your body language or tone, and has zero capacity for genuine empathy. It can only simulate it. A licensed therapist operates under legal mandates and professional codes that protect you from harm. ChatGPT doesn't. Its advice can misfire, overlook red flags or unintentionally reinforce biases baked into its training data. Leave the deeper work -- the hard, messy, human work -- to an actual human who is trained to handle it properly. If you or someone you love is in crisis, please dial 988 in the US, or your local hotline.

3. Making immediate safety decisions
If your carbon-monoxide alarm starts chirping, please don't open ChatGPT and ask it if you're in real danger. I'd go outside first and ask questions later. Large language models can't smell gas, detect smoke or dispatch an emergency crew. In a crisis, every second you spend typing is a second you're not evacuating or dialing 911. ChatGPT can only work with the scraps of info you feed it, and in an emergency, that may be too little and too late. So treat your chatbot as a post-incident explainer, never a first responder.

4. Getting personalized financial or tax planning
ChatGPT can explain what an ETF is, but it doesn't know your debt-to-income ratio, state tax bracket, filing status, deductions, retirement goals or risk appetite. Because its training data may stop short of the current tax year, and of the latest rate hikes, its guidance may well be stale when you hit enter. I have friends who dump their 1099 totals into ChatGPT for a DIY return. The chatbot simply can't replace a CPA who can catch a hidden deduction worth a few hundred dollars or flag a mistake that could cost you thousands. When real money, filing deadlines and IRS penalties are on the line, call a professional, not AI. Also, be aware that anything you share with an AI chatbot will probably become part of its training data, and that includes your income, your Social Security number and your bank routing information.

5. Dealing with confidential or regulated data
As a tech journalist, I see embargoes land in my inbox every day, but I've never thought about tossing any of these press releases into ChatGPT to get a summary or further explanation. That's because if I did, that text would leave my control and land on a third-party server outside the guardrails of my nondisclosure agreement. The same risk applies to client contracts, medical charts or anything covered by the California Consumer Privacy Act, HIPAA, the GDPR or plain old trade-secret law. It applies to your income taxes, birth certificate, driver's license and passport. Once sensitive information is in the prompt window, you can't guarantee where it's stored, who can review it internally or whether it may be used to train future models. ChatGPT also isn't immune to hackers and security threats. If you wouldn't paste it into a public Slack channel, don't paste it into ChatGPT.

6. Doing anything illegal
This one is self-explanatory.

7. Cheating on schoolwork
I'd be lying if I said I never cheated on my exams. In high school, I used my first-generation iPod Touch to sneak a peek at a few cumbersome equations I had difficulty memorizing in AP calculus, a stunt I'm not particularly proud of. But with AI, the scale of modern cheating makes that look remarkably tame. Turnitin and similar detectors are getting better at spotting AI-generated prose every semester, and professors can already hear "ChatGPT voice" a mile away (thanks for ruining my beloved em dash). Suspension, expulsion and getting your license revoked are real risks. It's best to use ChatGPT as a study buddy, not a ghostwriter. You're also just cheating yourself out of an education if you have ChatGPT do the work for you.

8. Monitoring information and breaking news
Since OpenAI rolled out ChatGPT Search in late 2024 (and opened it to everyone in February 2025), the chatbot can fetch fresh web pages, stock quotes, gas prices, sports scores and other real-time numbers the moment you ask, complete with clickable citations so you can verify the source. However, it won't stream continual updates on its own. Every refresh needs a new prompt, so when speed is critical, live data feeds, official press releases, news sites, push alerts and streaming coverage are still your best bet.

9. Gambling
I've actually had luck with ChatGPT hitting a three-way parlay during the NCAA men's basketball championship, but I would never recommend it to anyone. I've seen ChatGPT hallucinate and provide incorrect information on player statistics, misreported injuries and win-loss records. I only cashed out because I double-checked every claim against real-time odds, and even then I got lucky. ChatGPT can't see tomorrow's box score, so don't rely on it solely to get you that win.

10. Drafting a will or other legally binding contract
ChatGPT is great for breaking down basic concepts. If you want to know more about a revocable living trust, ask away. However, the moment you ask it to draft actual legal text, you're rolling the dice. Estate and family-law rules vary by state, and sometimes even by county, so skipping a witness signature or omitting a notarization clause can get your whole document tossed. Let ChatGPT help you build a checklist of questions for your lawyer, then pay that lawyer to turn that checklist into a document that stands up in court.

11. Making art
This isn't an objective truth, just my own opinion, but I don't believe AI should be used to create art. I'm not anti-artificial intelligence by any means. I use ChatGPT for brainstorming new ideas and for help with my headlines, but that's supplementation, not substitution. By all means, use ChatGPT, but please don't use it to make art that you then pass off as your own. It's kind of gross.

Walmart Deals of the Day: Grab Apple's Latest AirPods for Less Than $100

CNET · an hour ago

If you're on the hunt for some great bargains, you've come to the right place. CNET's dedicated shopping experts have years of experience tracking down the best deals out there, finding the top picks available at Walmart every single day. For today, July 31, those include a rare discount on Apple's latest AirPods 4, $70 off an enameled Carote Dutch oven and 50% off an EcoFlow power station.

The AirPods 4 are the latest generation on the market, and discounts like this are rare. These earbuds are an excellent and more affordable alternative to the AirPods Pro 2 (if you prefer an open-ear design), featuring the same H2 chip with Bluetooth 5.3, plus improved sound, spatial audio support and excellent voice-call performance. Our reviewer hailed them as a "worthy upgrade" over the previous-gen AirPods 3, especially when they're on sale.

Carote's 5-quart Dutch oven is perfect for broiling, braising, baking, roasting and more, making it an essential for any kitchen. It's seriously versatile and can be used on gas, electric and induction stovetops, as well as in the oven. The pot combines the excellent heat retention and distribution of cast iron with an enameled coating that's great for more delicate ingredients, like fish or eggs. Plus, the lid has self-basting stalactites that help prevent your dishes from drying out while they cook.

EcoFlow makes some of the best portable power stations, and this Walmart-exclusive Lite model is now half off. It has a 950Wh battery capacity, enough to charge your phone over 80 times or power a speaker for more than 50 hours. It also boasts a 1,800W output, so it can handle some larger appliances, making it a great option for power outages and emergencies. With 15 AC, USB and DC outputs, the unit can power multiple devices simultaneously, and it can be charged up to 80% in less than an hour.

Your Microsoft Passwords Will Vanish in a Few Hours. What to Do Right Now

CNET · 2 hours ago

It's time to say so long to the Microsoft Authenticator app as we know it. As of this Friday, Aug. 1, the app will no longer save or manage passwords, use two-factor authentication or auto-fill. And it won't be your go-to password manager anymore, either. Instead of passwords, Microsoft is moving to passkeys -- such as PINs, fingerprint scans, facial recognition or a pattern on your device's lock screen. Using passkeys is a safer alternative to the risky password habits reported by 49% of US adults in CNET's password survey. Attila Tomaschek, a CNET senior software writer and digital security expert, likewise prefers Microsoft's new login method over password habits that can put your data at risk of being stolen. There's not much time left to learn about passkeys and password managers, but we're here to help. Here's what you need to know to get started.

Microsoft Authenticator will stop supporting passwords on Aug. 1
Microsoft Authenticator houses your passwords and lets you sign into all your Microsoft accounts using a PIN, facial recognition like Windows Hello or other biometric data like a fingerprint. Authenticator can be used in other ways, such as verifying that it's really you logging in if you forgot your password, or providing two-factor authentication as an extra layer of security for your accounts. In June, the company stopped letting users add passwords to Authenticator. As of this month, you won't be able to use the autofill password function. And starting Aug. 1, you'll no longer be able to use saved passwords. If you still want to use passwords instead of passkeys, you can store them in Microsoft Edge. However, CNET experts recommend adopting passkeys during this transition. "Passkeys use public key cryptography to authenticate users, rather than relying on users themselves creating their own (often weak or reused) passwords to access their online accounts," Tomaschek said.

Why passkeys are a better alternative to passwords
So what exactly is a passkey? It's a credential created by the FIDO (Fast Identity Online) Alliance that uses biometric data or a PIN to verify your identity and grant access to your account. Think about using your fingerprint or Face ID to log into your account. That's generally safer than using a password that is easy to guess or susceptible to a phishing attack. "Passwords can be cracked, whereas passkeys need both the public and the locally stored private key to authenticate users, which can help mitigate risks like falling victim to phishing and brute-force or credential-stuffing attacks," said Tomaschek. Passkeys aren't stored on servers like passwords. Instead, they're stored only on your personal device. More conveniently, this takes the guesswork out of remembering your passwords and removes the need for a password manager.

How to set up a passkey in Microsoft Authenticator
Microsoft said in a May 1 blog post that it will automatically detect the best passkey to set up and make that your default sign-in option. "If you have a password and 'one-time code' set up on your account, we'll prompt you to sign in with your one-time code instead of your password. After you're signed in, you'll be prompted to enroll a passkey. Then the next time you sign in, you'll be prompted to sign in with your passkey," according to the blog post. To set up a new passkey, open your Authenticator app on your phone. Tap on your account and select "Set up a passkey." You'll be prompted to log in with your existing credentials. After you're logged in, you can set up the passkey.
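To make the cryptography concrete, here is a minimal sketch of the challenge-and-signature flow that passkeys are built on, written with the Python cryptography library. It illustrates the general FIDO-style idea, not Microsoft's actual implementation:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature
import os

# Enrollment: the device generates a key pair and registers only the public key.
device_private_key = Ed25519PrivateKey.generate()    # never leaves the device
server_public_key = device_private_key.public_key()  # stored by the service

# Sign-in: the server sends a fresh random challenge, and the device signs it
# after the user unlocks the key with a PIN or biometric check.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge)

# Verification: the server checks the signature against the stored public key.
# There is no shared secret to phish, crack or steal from a server database.
try:
    server_public_key.verify(signature, challenge)
    print("Challenge verified: user authenticated")
except InvalidSignature:
    print("Verification failed")

Because only the public key lives on the server and each login signs a fresh challenge, a leaked server database or a phished login page yields nothing an attacker can reuse.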
Other password manager alternatives
Since Microsoft is getting rid of saved passwords in Authenticator, you'll need a new place to store your passwords safely. Tomaschek has a few recommendations for the best password managers after testing and reviewing several. The top recommendation is Bitwarden for its transparency: it's open-source and audited annually. From a price perspective, the free plan lets you store unlimited passwords across unlimited devices, and it includes features most password managers would charge for, such as password sharing and a username and password generator. Bitwarden's paid plans add other features that could be worth the cost, too. Personally, Tomaschek has been using 1Password for a while, and he likes the interface and family plan. Even though it's second on the list, Tomaschek says it's just as good as Bitwarden.
