
AI can now stalk you with just a single vacation photo
For decades, digital privacy advocates have been warning the public to be more careful about what we share online. And for the most part, the public has cheerfully ignored them.
I am certainly guilty of this myself. I usually click 'accept all' on every cookie request every website puts in front of my face, because I don't want to deal with figuring out which permissions are actually needed. I've had a Gmail account for 20 years, so I'm well aware that on some level that means Google knows every imaginable detail of my life.
I've never lost too much sleep over the idea that Facebook would target me with ads based on my internet presence. I figure that if I have to look at ads, they might as well be for products I might actually want to buy.
But even for people indifferent to digital privacy like myself, AI is going to change the game in a way that I find pretty terrifying.
This is a picture of my son on the beach. Which beach? OpenAI's o3 pinpoints it just from this one picture: Marina State Beach in Monterey Bay, where my family went for vacation.
Courtesy of Kelsey Piper
To my merely-human eye, this image doesn't look like it contains enough information to guess where my family is staying for vacation. It's a beach! With sand! And waves! How could you possibly narrow it down further than that?
But surfing hobbyists tell me there's far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case sufficient information to venture a correct guess about where my family went for vacation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. One of Anthropic's early investors is James McClave, whose BEMC Foundation helps fund Future Perfect.)
ChatGPT doesn't always get it right on the first try, but its guesses are more than accurate enough to help someone determined to stalk us. And since AI is only going to get more powerful, that should worry all of us.
When AI comes for digital privacy
For most of us who aren't excruciatingly careful about our digital footprint, it has always been possible for people to learn a terrifying amount of information about us — where we live, where we shop, our daily routine, who we talk to — from our activities online. But it would take an extraordinary amount of work.
For the most part we enjoy what is known as security through obscurity; it's hardly worth having a large team of people study my movements intently just to learn where I went for vacation. Even the most autocratic surveillance states, like Stasi-era East Germany, were limited by manpower in what they could track.
But AI makes tasks that would previously have required serious effort by a large team into trivial ones. And it means that it takes far fewer hints to nail someone's location and life down.
It was already the case that Google knows basically everything about me — but I (perhaps complacently) didn't really mind, because the most Google can do with that information is serve me ads, and because they have a 20-year track record of being relatively cautious with user data. Now that degree of information about me might be becoming available to anyone, including those with far more malign intentions.
And while Google has incentives not to have a major privacy-related incident — users would be angry with them, regulators would investigate them, and they have a lot of business to lose — the AI companies proliferating today, like OpenAI or DeepSeek, face far less discipline from public opinion. (If they were more concerned about public opinion, they'd need a significantly different business model, since the public kind of hates AI.)
Be careful what you tell ChatGPT
So AI has huge implications for privacy. Those implications were only hammered home when Anthropic recently reported discovering that, under the right circumstances (with the right prompt, placed in a scenario where the AI is asked to participate in pharmaceutical data fraud), Claude Opus 4 will try to email the FDA to blow the whistle. This cannot happen with the AI you use in a chat window — it requires the model to be set up with independent email-sending tools, among other things. Nonetheless, users reacted with horror: there's just something fundamentally alarming about an AI that contacts the authorities, even if it does so in the same circumstances a human might.
Some people took this as a reason to avoid Claude. But it almost immediately became clear that it isn't just Claude — users quickly produced the same behavior with other models like OpenAI's o3 and Grok. We live in a world where not only do AIs know everything about us, but under some circumstances, they might even call the cops on us.
Right now, they only seem likely to do it in sufficiently extreme circumstances. But scenarios like 'the AI threatens to report you to the government unless you follow its instructions' no longer seem like sci-fi so much as an inevitable headline later this year or next.
What should we do about that? The old advice from digital privacy advocates — be thoughtful about what you post, don't grant things permissions they don't need — is still good, but seems radically insufficient. No one is going to solve this on the level of individual action.
New York is considering a law that would, among other transparency and testing requirements, regulate AIs that act independently and take actions that would be a crime if a human took them 'recklessly' or 'negligently.' Whether or not you like New York's exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation pictures — and what you tell your chatbot!
Related Articles
Yahoo, 7 hours ago
Here's ChatGPT's advice on how to spot the next meme stock
Amid this week's speculative euphoria, we asked ChatGPT what makes a meme stock. The chatbot laid out four things for investors to focus on. The AI also had some ideas about what stocks might be next in line for a short squeeze.

What makes a meme stock? ChatGPT has a couple of ideas. When shares of Opendoor Technologies, a little-known online real estate platform, unexpectedly started surging this month, a few other seemingly random names followed. The list of stocks included Krispy Kreme, Kohl's, and GoPro, disparate companies united only by their financial challenges and vaguely retro vibes. On the surface, meme stocks that go parabolic have little in common, but if you look under the hood, there are some similarities in their profiles.

How to find the next meme stock? We asked ChatGPT

Since the GameStop short squeeze of 2021, investors have been trying to spot the next meme stock before it starts its ascent to the moon. After seeing some investors bragging on Wall Street Bets about getting winning trading advice from ChatGPT, we decided to test the AI's knowledge of the market's speculative frenzy. (We will preface all of this by saying it's not real investment advice. ChatGPT is scouring the internet, including Wall Street Bets, for wisdom on the subject.)

The prompt was: "What's the best way to identify what could be the next meme stock?" After clarifying what a meme stock is, the chatbot responded that identifying one is "more of an art than a science." It then proceeded to lay out some more detailed criteria for investors to think about:

High short interest (>20% of float, days-to-cover ≥5). The bot suggested Kohl's, Opendoor, and GoPro as examples of stocks with high interest from short sellers, citing tools such as ShortInterestTracker for assessment.

Surge in social mentions (Reddit/Twitter). It highlighted the importance of noting sudden increases in a stock's mentions or upvotes on social media platforms such as X, Reddit, and Discord, citing platforms including ApeWisdom and AltIndex.

Unusual options activity (OI spikes). While discussing unusual options activity, it stated that "retail traders often pile into out-of-the-money (OTM) call options, creating gamma squeezes as market makers hedge by buying shares, fueling price surges."

Meme narrative (pop-culture, underdog story). ChatGPT highlighted the importance of a narrative, citing the David vs. Goliath mentality that drove the GameStop short squeeze, or companies with products popular with retail investors, such as Krispy Kreme or AMC Entertainment, a fellow struggling meme stock.

Wanting to test the AI's predictive abilities, we asked: "Based on this criteria, can you predict the next meme stock?" ChatGPT quickly named American Eagle Outfitters as its pick for the next meme stock, providing reasons it checks all four boxes. Per the bot:

Social Media Momentum: "Its recent ad campaign starring Sydney Sweeney has ignited buzz across Reddit, X, and Stocktwits, pushing the stock up ~10%."

Short Interest & Squeeze Potential: "With ~13% of its float shorted, there's room for a squeeze if sentiment continues to climb."

Options Flow: "Early reports suggest call option volume has ticked upward around recent campaign news."

Narrative + Meme-Worthiness: "The Sydney Sweeney campaign taps into pop culture, star power, and a viral storyline, all classic ingredients for meme stock fervor."

Indeed, the fashion retailer's new ad campaign featuring Sweeney has made American Eagle a top trending stock across retail trading forums. While the stock is only up 11% for the week, the assessment shows that ChatGPT can spot trends as they're unfolding.

Though it wasn't asked to, the chatbot added honorable mentions for what could be the next meme stock, including Krispy Kreme, GoPro, and Rocket Lab USA. It should be noted that these aren't great picks by the bot, as all of these stocks were already fading after surging earlier in the week. Meanwhile, Rocket Lab, though it has fairly high short interest, is not actually a meme stock. It is possible that ChatGPT confused it with Rocket Companies, which has much higher short interest and was among the meme shares to surge this week.

Read the original article on Business Insider.
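As a sketch, ChatGPT's four criteria could be expressed as a simple screen. Everything here is illustrative: the field names, the example figures, and the thresholds beyond the two numbers ChatGPT actually gave (>20% short interest, days-to-cover ≥5) are assumptions, and none of it is investment advice.

```python
# Hypothetical screen for ChatGPT's four meme-stock criteria.
# Field names and example values are illustrative only.

def meme_stock_score(stock: dict) -> int:
    """Count how many of the four criteria a stock meets."""
    checks = [
        # 1. High short interest: >20% of float and days-to-cover >= 5
        stock["short_interest_pct_float"] > 20 and stock["days_to_cover"] >= 5,
        # 2. Surge in social mentions (assumed: mentions at least doubled week over week)
        stock["social_mention_growth"] >= 2.0,
        # 3. Unusual options activity (assumed: flagged OTM call volume spike)
        stock["otm_call_volume_spike"],
        # 4. Meme narrative (pop-culture or underdog story present)
        stock["has_meme_narrative"],
    ]
    return sum(checks)

# Made-up example stock that happens to meet all four criteria
example = {
    "short_interest_pct_float": 22.5,
    "days_to_cover": 6.1,
    "social_mention_growth": 3.4,
    "otm_call_volume_spike": True,
    "has_meme_narrative": True,
}
print(meme_stock_score(example))  # 4
```

The point of a score rather than a yes/no is that, as the chatbot itself said, this is "more of an art than a science": a stock meeting three of four criteria might still be worth watching.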

Business Insider, 7 hours ago
Meta just hired the co-creator of ChatGPT in an escalating AI talent war with OpenAI
Shengjia Zhao, a co-creator of ChatGPT and former lead scientist at OpenAI, is joining Meta as chief scientist of its Superintelligence Labs. CEO Mark Zuckerberg announced Zhao's appointment on Friday in a social media post, and called him a "pioneer" in the field who has already driven several major AI breakthroughs. Zhao previously helped build GPT-4 and led synthetic data efforts at OpenAI. According to the post, Zhao will now work directly with Zuckerberg and Meta's newly appointed chief AI officer, Alexandr Wang, the founder and CEO of Scale AI. The new hire comes during Zuckerberg's multibillion-dollar AI spending spree, including a $15 billion investment in Scale AI and the creation of Meta Superintelligence Labs, a new division focused on foundational models and next-gen research. In addition to Zhao, the company has lured away the three researchers who built OpenAI's Zurich office — Lucas Beyer, Alexander Kolesnikov, and Xiaohua Zhai — all of whom previously also worked at Google's DeepMind. The Superintelligence Labs team is now comprised of a lineup of names previously seen with OpenAI, Anthropic, and Google. But the war for AI talent is far from over. Databricks VP Naveen Rao likened the competition to "looking for LeBron James," estimating that fewer than 1,000 people worldwide can build frontier AI models. Companies without the cash for massive pay packages are turning to hackathons and computing power as incentives. Perplexity CEO Aravind Srinivas said a Meta researcher he tried to poach told him to ask again when the company has "10,000 H100s." AI tech workers have previously told Business Insider that Meta's Mark Zuckerberg has been emailing prospects directly and even hosting AI researchers at his home, while OpenAI CEO Sam Altman has made personal calls to potential hires. Tech company executives have mixed feelings about Meta's poaching efforts. 
"Meta right now are not at the frontier, maybe they'll manage to get back on there," said Demis Hassabis, the CEO of Google DeepMind, on an episode of the "Lex Fridman Podcast," which aired on Friday. "It's probably rational what they're doing from their perspective because they're behind and they need to do something," Hassabis added. During a July 18 episode of the podcast "Uncapped with Jack Altman," OpenAI CEO Sam Altman criticized some of Meta's "giant offers" to his company's employees and called the strategy "crazy." "The degree to which they're focusing on money and not the work and not the mission," said Altman. "I don't think that's going to set up a great culture."
