
The Tea app was intended to help women date safely. Then it got hacked.
Here's what to know:
Tea was meant to help women date safely
Tea founder Sean Cook, a software engineer who previously worked at Salesforce and Shutterfly, says on the app's website that he founded the company in 2022 after witnessing his own mother's 'terrifying' experiences. Cook said they included unknowingly dating men with criminal records and being 'catfished', deceived by men using false identities.
Tea markets itself as a safe way for women to anonymously vet men they might meet on dating apps such as Tinder or Bumble — ensuring that the men are who they say they are, not criminals and not already married or in a relationship. 'It's like people have their own little Yelp pages,' said Aaron Minc, whose Cleveland firm, Minc Law, specializes in cases involving online defamation and harassment.
In an App Store review, one woman wrote that she used a Tea search to investigate a man she'd begun talking to and discovered 'over 20 red flags, including serious allegations like assault and recording women without their consent.' She said she cut off communication. 'I can't imagine how things could've gone had I not known,' she wrote.
A surge in social media attention over the past week pushed Tea to the No. 1 spot on Apple's U.S. App Store as of July 24, according to Sensor Tower, a research firm. In the seven days from July 17-23, Tea downloads shot up 525% compared to the week before. Tea said in an Instagram post that it had reached 4 million users.
Tea has been criticized for invading men's privacy
A female columnist for The Times of London newspaper, who signed into the app, on Thursday called Tea a 'man-shaming site' and complained that 'this is simply vigilante justice, entirely reliant on the scruples of anonymous women. With Tea on the scene, what man would ever dare date a woman again?'
'Over the last couple of weeks, we've gotten hundreds of calls on it. It's blown up,' attorney Minc said. 'People are upset. They're getting named. They're getting shamed.'
In 1996, Congress passed legislation protecting websites and apps from liability for things posted by their users. But the users can be sued for spreading 'false and defamatory' information, Minc said.
In May, however, a federal judge in Illinois threw out an invasion-of-privacy lawsuit by a man who'd been criticized by women in the Facebook chat group 'Are We Dating the Same Guy,' Bloomberg Law reported.
State privacy laws could offer another avenue for bringing legal action against someone who posted your photograph or other personal information in a harmful way, Minc said.
The breach exposed thousands of selfies and photo IDs
In its statement, Tea reported that about 72,000 images were leaked online, including 13,000 images of selfies or photo identification that users submitted during account verification. Another 59,000 images that were publicly viewable in the app from posts, comments and direct messages were also accessed, according to the company's statement.
No email addresses or phone numbers were exposed, the company said, and the breach affects only users who signed up before February 2024. 'At this time, there is no evidence to suggest that additional user data was affected. Protecting Tea users' privacy and data is our highest priority,' Tea said.
It said users did not need to change their passwords or delete their accounts. 'All data has been secured.'
Lawyer Minc said he was not surprised to see Tea get targeted. 'These sites get attacked,' he said. 'They create enemies. They put targets on themselves where people want to go after them.'

Related Articles


Geek Wire
2 hours ago
In AI we trust?
A recent study by Stanford University's Social and Language Technologies Lab (SALT) found that 45% of workers don't trust the accuracy, capability, or reliability of AI systems. That trust gap reflects a deeper concern about how AI behaves when the stakes are high, especially in business-critical environments.

Hallucinations in AI may be acceptable when the stakes are low, like drafting a tweet or generating creative ideas, where errors are easily caught and carry little consequence. But in the enterprise, where AI agents are expected to support high-stakes decisions, power workflows, and engage directly with customers, the tolerance for error disappears. True enterprise-grade reliability demands more: consistency, predictability, and rigorous alignment with real-world context, because even small mistakes can have big consequences.

This challenge is referred to as 'jagged intelligence': AI systems continue to shatter performance records on increasingly complex benchmarks while sporadically struggling with simpler tasks that most humans find intuitive and can reliably solve. For example, a model might defeat a chess grandmaster yet fail to complete a simple child's puzzle. This mismatch between brilliance and brittleness underscores why enterprise AI demands more than general LLM intelligence alone; it requires contextual grounding, rigorous testing, and continuous fine-tuning.

That's why at Salesforce, we believe the future of AI in business depends on achieving what we call Enterprise General Intelligence (EGI): a new framework for enterprise-grade AI systems that are not only highly capable but also consistently reliable across complex, real-world scenarios. In an EGI environment, AI agents work alongside humans, integrated into enterprise systems and governed by strict rules that limit what actions they can take.
To achieve this, we're implementing a clear, repeatable three-step framework of synthesize, measure, and train, and applying it to every enterprise-grade use case.

A Three-Step Framework for Building Trust

Building AI agents within the enterprise demands a disciplined process that grounds models in business-contextualized data, measures performance against real-world benchmarks, and continuously fine-tunes agents to maintain accuracy, consistency, and safety.

Synthesize: Building trustworthy agents starts with safe, realistic testing environments. That means using AI-generated synthetic data that closely resembles real inputs, applying the same business logic and objectives used in human workflows, and running agents in secure, isolated sandboxes. By simulating real-world conditions without exposing production systems or sensitive data, teams can generate high-fidelity feedback. This method is called 'reinforcement learning' and is a critical foundation for developing enterprise-ready AI agents.

Measure: Reliable agents require clear, consistent benchmarks. Measuring performance isn't just about tracking accuracy; it's about defining what each specific use case requires. The level of precision needed varies: an agent offering product recommendations may tolerate a wider margin of error than one evaluating loan applications or diagnosing system failures.
By establishing tailored benchmarks, such as Salesforce's initial LLM benchmark for CRM use cases, and acceptable performance thresholds, teams can evaluate agent output in context and iterate with purpose, ensuring the agent is fit for its intended role before it ever reaches production.

Train: Reliability isn't achieved in a single pass; it's the result of continuous refinement. Agents must be trained, tested, and retrained in a constant feedback loop. That means generating fresh data, running real-world scenarios, measuring outcomes, and using those insights to improve performance. Because agent behavior can vary across runs, this iterative process is essential for building stability over time. Only through repeated training and tuning can agents reach the level of consistency and accuracy required for enterprise use.

Turning AI Agents Into Reliable Enterprise Partners

Building AI agents for the enterprise is much more than simply deploying an LLM for business-critical tasks. Salesforce AI Research's latest findings show that generic LLM agents successfully complete only 58% of simple tasks and barely more than a third of more complex ones. Truly effective EGI agents that are trustworthy in high-stakes business scenarios require far more than an off-the-shelf DIY LLM plug-in. They demand a rigorous, platform-driven approach that grounds models in business-specific context, enforces governance, and continuously measures and fine-tunes performance.

The AI we deploy in Agentforce is built differently. Agentforce doesn't run by simply plugging into an LLM. The agents are grounded in business-specific context through Data Cloud, made trustworthy by our enterprise-grade Trust Layer, and designed for reliability through continuous evaluation and optimization using the Testing Center. This platform-driven approach ensures that agents are not only intelligent, but consistently enterprise-ready.
As businesses evolve toward a future where specialized AI agents collaborate dynamically in teams, complexity increases exponentially. That's why leveraging frameworks that synthesize, evaluate, and train agents before deployment is critical. This new framework builds the trust needed to elevate AI from a promising technology into a reliable enterprise partner that drives meaningful business outcomes.
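The synthesize-measure-train loop described above can be sketched in a few lines of Python. This is an illustrative stand-in, not Salesforce's implementation: the agent, the ground-truth policy, the loan-approval use case, and the 'fine-tuning' update are all hypothetical placeholders, and the 95% threshold stands in for a use-case-specific benchmark.

```python
import random

def synthesize_cases(n, seed=0):
    """Synthesize: generate synthetic test inputs resembling real workflow data."""
    rng = random.Random(seed)
    return [{"id": i, "amount": rng.randint(100, 10_000)} for i in range(n)]

def run_agent(case, skill=0.0):
    """Stand-in agent: approves loan amounts under a learned cutoff."""
    return case["amount"] < 5_000 * (1 + skill)

def expected(case):
    """Hypothetical ground-truth policy the agent should converge toward."""
    return case["amount"] < 7_500

def measure(cases, skill):
    """Measure: accuracy against a benchmark tailored to this use case."""
    hits = sum(run_agent(c, skill) == expected(c) for c in cases)
    return hits / len(cases)

def train_until_reliable(threshold=0.95, max_rounds=10):
    """Train: loop synthesize -> measure -> update until the threshold is met."""
    skill = 0.0
    acc = 0.0
    for round_no in range(max_rounds):
        cases = synthesize_cases(200, seed=round_no)  # fresh data each round
        acc = measure(cases, skill)
        if acc >= threshold:
            return round_no, acc
        skill += 0.1  # stand-in for a real fine-tuning update
    return max_rounds, acc
```

A stricter use case (say, loan evaluation rather than product recommendations) would simply raise `threshold`, forcing more training rounds before the agent is considered fit for production.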


Miami Herald
14 hours ago
What to know about the hack at Tea, an app where women share red flags about men
A fast-growing app for women was hacked after it shot to the top of app download charts and kicked off heated debates about women's safety and dating. The app, Tea Dating Advice, allowed women worried about their safety to share information about men they might date. Its premise was immediately polarizing: Some praised it as a useful way to warn women about dangerous men, while others called it divisive and a violation of men's privacy. On Friday, Tea said that hackers had breached a data storage system, exposing about 72,000 images, including selfies and photo identifications of its users. Here's what to know about the situation.

Released in 2023, the U.S.-based app says it is a resource for women to protect themselves while dating, with some online likening it to a Yelp service for women dating men in the same area. Women who sign up and are approved can join an anonymous forum to seek feedback on men they are interested in, or report bad behavior from men they have dated. Other tools on the app allow users to run background checks, search for criminal records and reverse image search for photos in the hope of spotting 'catfishing,' where people pass off photos of others as themselves.

According to Tea's site, the app's founder, Sean Cook, launched the app because he witnessed his mother's 'terrifying' experience with online dating. He said she was catfished and unknowingly engaged with men who had criminal records.

Interest in the app this week escalated after it became the subject of videos and conversations about dating and gender dynamics on social media. On Thursday, Tea reported a 'massive surge in growth,' saying on Instagram that more than 2 million users in the past few days had asked to join the app. It was listed as the top free app in Apple's download charts, and was also highly ranked in the Google Play store.
Critics, however, including some users on 4chan, an anonymous message board known for spreading hateful content, called for the site to be hacked.

On Friday, Tea said that there had been a data breach of a 'legacy storage system' holding data for its users. The company said it had detected unauthorized access to about 72,000 images, including about 13,000 selfies and images of identification documents, which the company solicited to verify that users are women. Images from posts, comments and direct messages in the app were also included in the breach, it said. Tea said that the data belonged to users who signed up before February 2024.

According to Tea's privacy policy, the selfies it solicits are deleted shortly after users are verified. The hacked images were not deleted. That data set was stored 'in compliance with law enforcement requirements related to cyberbullying prevention,' Tea said in its statement, and was not moved to newer systems that Tea said were better fortified.

Data from the hack, including photos of women and of identification cards containing personal details, appeared to circulate online Friday. An anonymous user shared the database of photographs, which the user said included driver's licenses, to 4chan, according to the tech publication 404 Media, the first outlet to report on the breach. Some circulated a map, which The New York Times was unable to authenticate, that purported to use data from the leak to tie the images to locations. That thread was later deleted. According to an archived version of the thread, the user accused the Tea app of exposing people's personal information because of its inadequate protections.

Tea said that it was working with third-party cybersecurity experts, and that there was 'no evidence' to suggest other user data was leaked. The app's terms and conditions note that users provide their location, birth date, photo and photo ID during registration.
Tea said that, in 2023, it removed a requirement for photo ID in addition to a selfie.

The conversation around Tea has tapped into a larger face-off over the responsibility of platforms that women say can help protect them from dating untrustworthy or violent men. Many of them, such as 'Are We Dating the Same Guy?' groups, have spread widely on platforms like Facebook. But such groups have increasingly drawn accusations of stoking gender divisions, as well as claims from men who say the groups have defamed them or invaded their privacy.

This article originally appeared in The New York Times. Copyright 2025
Yahoo
19 hours ago
Analysts Positive on Salesforce, Inc. (CRM) Amid Mixed Stock Performance
Salesforce, Inc. (NYSE:CRM) is demonstrating a mixed performance as of the time of writing. The stock last closed at $268.35, an increase of 1.74% from the previous trading day, outperforming the broader market, which returned 0.80%. Meanwhile, the stock has fallen 1.28% over the past month, underperforming both the technology-Software Application industry and the market, which gained 1.8% and 4.5%, respectively.

However, analysts have expressed optimism regarding the company's future, projecting earnings of $2.77 per share for the upcoming quarter, an 8.2% increase on a YoY basis. Full-year earnings and revenue are projected to increase by 10.78% and 8.64%, respectively. At the same time, Salesforce, Inc. (NYSE:CRM) is trading at a forward price-to-earnings ratio of 23.64x, a discount compared to the industry average of 27.42x.

Previously, on July 16, 2025, Citizens JMP maintained a 'Market Outperform' rating on Salesforce, Inc. (NYSE:CRM) with a price target of $430, citing future growth driven by the company's AI and cloud services. Offering Agentforce, Data Cloud, Salesforce Starter, and Tableau, Salesforce, Inc. (NYSE:CRM) provides customer relationship management (CRM) technology, bridging companies and customers.

While we acknowledge the potential of CRM as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk. Disclosure: None.