Facebook is asking to use Meta AI on photos in your camera roll you haven't yet shared

Yahoo · 19 hours ago

Facebook is asking users for access to their phone's camera roll to automatically suggest AI-edited versions of their photos — including ones that haven't been uploaded to Facebook yet.
The feature is offered to Facebook users when they're creating a new Story in the app: a screen pops up asking whether they'll opt into 'cloud processing' to allow creative suggestions.
As the pop-up message explains, by clicking 'Allow,' you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an 'ongoing basis,' based on information like time, location, or themes.
The message also notes that only you can see the suggestions, and the media isn't used for ad targeting.
However, by tapping 'Allow,' you are agreeing to Meta's AI Terms, which the pop-up says allow your media and facial features to be analyzed by AI. The company will additionally use the date of your photos and the presence of people or objects in them to craft its creative ideas.
The creative tool is another example of the slippery slope that comes with sharing our personal media with AI providers. Like other tech giants, Meta has grand AI ambitions. Being able to tap into the personal photos users haven't yet shared on Facebook's social network could give the company an advantage in the AI race.
Unfortunately for end users, in tech companies' rush to stay ahead, it's not always clear what they're agreeing to when features like this appear.
According to Meta's AI Terms around image processing, 'once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image,' the text states.
The same AI terms also give Meta's AIs the right to 'retain and use' any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes 'information you submit as Prompts, Feedback, or other Content.'
We have to wonder whether the photos you've shared for 'cloud processing' also count here.
Meta has not responded to our requests for comment or clarification.
So far, there hasn't been much backlash about this feature. A handful of Facebook users have stumbled across the AI-generated photo suggestions when creating a new Story and raised questions about it. For instance, one user on Reddit found that Facebook had pulled up an old photo (in this case, one that had previously been shared to the social network) and automatically turned it into an anime-style image using Meta AI.
When another user in an anti-AI Facebook group asked for help shutting this feature off, the search led to a section called 'Camera roll sharing suggestions' in the app's Settings.
We also found this feature under Facebook's Settings, where it's listed in the Preferences section.
On the 'Camera roll sharing suggestions' page, there are two toggles. The first lets Facebook suggest photos from your camera roll when you're browsing the app. The second (which should be opt-in, based on the pop-up that requested permission in Stories) is where you can enable or disable 'cloud processing,' which lets Meta make AI images using your camera roll photos.
This additional AI access to your camera roll's photos does not appear to be new. We found posts from earlier this year in which confused Facebook users shared screenshots of the pop-up message that had appeared in their Stories section. Meta has also published help documentation about the feature for both iOS and Android users.
Meta's AI terms have been in effect since June 23, 2024. We can't compare the current terms with older versions because Meta doesn't keep a record of them, and previously published terms weren't properly saved by the Internet Archive's Wayback Machine.
Because this feature reaches into your camera roll, however, it goes beyond what Meta had previously announced: training its AIs on users' publicly shared data, including posts and comments on Facebook and Instagram. (EU users had until May 27, 2025 to opt out.)

Related Articles

Meta's Xbox-Branded Quest 3S Just Sold Out for All the Wrong Reasons

Gizmodo · an hour ago

Everyone loves limited-edition stuff. There's Sony's 30th anniversary PS5, or Analogue's many limited edition Pocket handhelds, or — I don't know — the Shamrock f***ing Shake. But there's one type of person who loves limited-edition stuff more than your average consumer, and it's a scalper. For proof of that, see Meta's recently released Xbox-branded Quest 3S.

In case you missed it, Meta's new limited-edition Quest 3S bundle just recently sold out, which on the surface sounds like a great thing for VR and XR. You may be tempted to say, 'Oh, wow! People really like XR headsets, huh?' But before you do that, it may also be worth taking a short gander at eBay, because the resale market over there paints a slightly more cynical picture. It's full of Xbox-branded Quest 3S bundles, folks — and they ain't just giving them away.

This bundle, for reference, retails at $399, and the average price I'm seeing on eBay is about $600, though sometimes a little more or a little less. Here is the sad state of affairs on eBay as of the time of typing these words:

The list goes on and on, unfortunately, which tells me one thing: the scalpers had a field day with this thing. And that's just kind of sad.

It's not sad that someone would want to make money from reselling a limited-edition gadget — as annoying as scalpers are, I can't blame anyone for having a side hustle in this economy. But it is sad that Meta seemingly didn't do much to preserve its limited-edition Quest 3S for XR nerds who unequivocally deserve first dibs.

It's also maybe a little sad — as someone who borders on said XR nerd identity — that the race to being out of stock may not actually be driven by real demand. XR headsets, while not the most crucial gadget in the world, are pretty cool and deserve more shine than they get, in my humble opinion. It would have been nice to see them really break through with a little help from an Xbox marketing gimmick. But as always, the almighty aftermarket prevails.

To be fair, I'm sure not all of the sales were scalpers trying to make a buck off the XR headset's rarity. Some people, I presume, bought it because it's a pretty good deal for getting into XR — you get a sleek black headset with Xbox green details, Meta's Elite Strap for your head, and a limited-edition Xbox controller to top it off. Based on the retail price of all of that, this bundle saves you somewhere in the ballpark of $95.

Some people bought this bundle because of Xbox, too. Here's one instance in which someone seems to have pulled the trigger on this bundle just for the controller. Honestly… respect. That's much more pure than trying to spin the whole thing around for $200.

The idea that someone buys the new Quest colorway just to nab the limited edition Xbox Controller and sell the rest is really comical.
— SadlyItsDadley (@SadlyItsBradley), June 27, 2025

Listen, scalpers are an inevitable fact of life nowadays when you're buying any gadget that's even slightly in demand. Like it or not, that's just the world we live in — one colored by bots and dropshipping. But I'd be lying if I said that it wouldn't have been nice to see a little effort on Meta's part to prevent that. It can be done! Just look at the Switch 2 launch. People have been resorting to cartoonish levels of robbery to get their hands on it — that's how in-demand this thing is — but Nintendo, with a little bit of forethought, has kept the scourge of scalpers to a dull roar.

I guess Meta probably doesn't care that much either way, though. A sale is a sale, whether it ends up on eBay or on your dorky XR- and Xbox-loving head. Sadly, if you're in the latter camp, it looks like the aftermarket is your only option right now. Thanks, Zuckerberg. Just because you look like a dropshipper doesn't mean you have to act like one.

Reddit turns 20, and it's going big on AI

The Verge · an hour ago

Reddit has become known as the place to go for unfiltered answers from real, human users. But as the site celebrates its 20th anniversary this week, the company is increasingly thinking about how it can augment that human work with AI. The initial rollout of AI tools, like Reddit Answers, is 'going really well,' CTO Chris Slowe tells The Verge.

At a time when Google and its AI tools are going to Reddit for human answers, Reddit is going to its own human answers to power AI features, hoping they're the key to letting people unlock useful information from its huge trove of posts and communities.

Reddit Answers is the first big user-facing piece of the company's AI push. Like other AI search tools, Reddit Answers will show an AI-generated summary in response to a query. But Reddit Answers also very prominently links to where the content came from — and as a user, you also know that the link will point you to another place on Reddit instead of some SEO-driven garbage. It also helps that the citations feel much more prominent than on tools like Google's AI Mode — a tool that news publishers have criticized as 'theft.' 'If you just want the short summary, it's there,' Slowe says. 'If you want to delve deeper, it's an easier way to get into it.'

In order for those AI answers to be useful, they need to continue to be based on real human responses. Reddit now has to be on the lookout for AI-generated comments and posts infiltrating its site. It's an important thing for the platform to stay on top of, says Slowe: Reddit's key benefit is that you can trust that a lot of what's written on it is written by humans, and AI spam could erode that. 'Trust is an essential component of the way Reddit works,' Slowe says. The platform is using AI and LLMs to help with moderation and user safety, too.

The other half of Reddit's AI equation is selling its own data, which is extremely valuable to AI giants. The changes that forced notable apps to shut down and spurred widespread user protests (which Slowe referred to as 'some unpleasantness that happened about two years ago') were positioned by CEO Steve Huffman as more of a way to get AI companies to pony up. And two of the biggest companies have already done so, as Reddit has cut AI deals with both Google and OpenAI. But Reddit also has to be on the lookout for improper use of its data, with the most recent crackdown being its lawsuit against Anthropic. 'At the end of the day, we aren't a charity,' Slowe says. Reddit wants to provide a service that people can use for free, 'but don't build your business on our back and expect us not to try and defend ourselves.'

Still, with new AI-powered search products from Google, OpenAI, and others on the rise, Reddit risks getting buried by AI summaries. And Reddit is experimenting with AI-powered searches on its own platform. So what's the company's goal for the future? 'Keep allowing Reddit to be Reddit,' Slowe says. 'I think that the underlying model for Reddit hasn't really drastically changed since the early days.' The platform doesn't require real names (your username is a 'coveted thing' that many people keep private, Slowe says), everything is focused on text, and reputation is more important than who you are; all of these elements marked 'a drastic difference with the rest of social media.'

Reddit is also facing competition from a slightly different angle: Digg, which is making a return with the backing of founder Kevin Rose and Reddit co-founder Alexis Ohanian. Slowe didn't have much to say about it, though: 'I always love seeing innovation and I always love seeing new bends on old business models.'

Did AI companies win a fight with authors? Technically

The Verge · an hour ago

In the past week, big AI companies have — in theory — chalked up two big legal wins. But things are not quite as straightforward as they may seem, and copyright law hasn't been this exciting since last month's showdown at the Library of Congress.

First, Judge William Alsup ruled it was fair use for Anthropic to train on a series of authors' books. Then, Judge Vince Chhabria dismissed another group of authors' complaint against Meta for training on their books. Yet far from settling the legal conundrums around modern AI, these rulings might have just made things even more complicated.

Both cases are indeed qualified victories for Meta and Anthropic. And at least one judge — Alsup — seems sympathetic to some of the AI industry's core arguments about copyright. But that same ruling railed against the startup's use of pirated media, leaving it potentially on the hook for massive financial damages. (Anthropic even admitted it did not initially purchase a copy of every book it used.) Meanwhile, the Meta ruling asserted that because a flood of AI content could crowd out human artists, the entire field of AI system training might be fundamentally at odds with fair use. And neither case addressed one of the biggest questions about generative AI: when does its output infringe copyright, and who's on the hook if it does?

Alsup and Chhabria (incidentally both in the Northern District of California) were ruling on relatively similar sets of facts. Meta and Anthropic both pirated huge collections of copyright-protected books to build training datasets for their large language models, Llama and Claude. Anthropic later did an about-face and started legally purchasing books, tearing the covers off to 'destroy' the original copy, and scanning the text. The authors argued that, in addition to the initial piracy, the training process constituted an unlawful and unauthorized use of their work. Meta and Anthropic countered that this database-building and LLM-training constituted fair use.

Both judges basically agreed that LLMs meet one central requirement for fair use: they transform the source material into something new. Alsup called using books to train Claude 'exceedingly transformative,' and Chhabria concluded 'there's no disputing' the transformative value of Llama. Another big consideration for fair use is the new work's impact on a market for the old one. Both judges also agreed that, based on the arguments made by the authors, the impact wasn't serious enough to tip the scale.

Add those things together, and the conclusions were obvious… but only in the context of these cases, and in Meta's case, because the authors pushed a legal strategy that their judge found totally inept. Put it this way: when a judge says his ruling 'does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful' and 'stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one' — as Chhabria did — AI companies' prospects in future lawsuits with him don't look great.

Both rulings dealt specifically with training — or media getting fed into the models — and didn't reach the question of LLM output, or the stuff models produce in response to user prompts. But output is, in fact, extremely pertinent. A huge legal fight between The New York Times and OpenAI began partly with a claim that ChatGPT could verbatim regurgitate large sections of Times stories. Disney recently sued Midjourney on the premise that it 'will generate, publicly display, and distribute videos featuring Disney's and Universal's copyrighted characters' with a newly launched video tool. Even in pending cases that weren't output-focused, plaintiffs can adapt their strategies if they now think it's a better bet.

The authors in the Anthropic case didn't allege Claude was producing directly infringing output. The authors in the Meta case argued Llama was, but they failed to convince the judge — who found it wouldn't spit out more than around 50 words of any given work. As Alsup noted, dealing purely with inputs changed the calculations dramatically. 'If the outputs seen by users had been infringing, Authors would have a different case,' wrote Alsup. 'And, if the outputs were ever to become infringing, Authors could bring such a case. But that is not this case.'

In their current form, major generative AI products are basically useless without output. And we don't have a good picture of the law around it, especially because fair use is an idiosyncratic, case-by-case defense that can apply differently to mediums like music, visual art, and text. Anthropic being able to scan authors' books tells us very little about whether Midjourney can legally help people produce Minions memes.

Minions and New York Times articles are both examples of direct copying in output. But Chhabria's ruling is particularly interesting because it makes the output question much, much broader. Though he may have ruled in favor of Meta, Chhabria's entire opening argues that AI systems are so damaging to artists and writers that their harm outweighs any possible transformative value — basically, because they're spam machines. It's worth reading:

Generative AI has the potential to flood the market with endless amounts of images, songs, articles, books, and more. People can prompt generative AI models to produce these outputs using a tiny fraction of the time and creativity that would otherwise be required. So by training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way. …

As the Supreme Court has emphasized, the fair use inquiry is highly fact dependent, and there are few bright-line rules. There is certainly no rule that when your use of a protected work is 'transformative,' this automatically inoculates you from a claim of copyright infringement. And here, copying the protected works, however transformative, involves the creation of a product with the ability to severely harm the market for the works being copied, and thus severely undermine the incentive for human beings to create. …

The upshot is that in many circumstances it will be illegal to copy copyright-protected works to train generative AI models without permission. Which means that the companies, to avoid liability for copyright infringement, will generally need to pay copyright holders for the right to use their materials.

And boy, it sure would be interesting if somebody would sue and make that case. After saying that 'in the grand scheme of things, the consequences of this ruling are limited,' Chhabria helpfully noted this ruling affects only 13 authors, not the 'countless others' whose work Meta used. A written court opinion is unfortunately incapable of physically conveying a wink and a nod. Those lawsuits might be far in the future.

And Alsup, though he wasn't faced with the kind of argument Chhabria suggested, seemed potentially unsympathetic to it. 'Authors' complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,' he wrote of the authors who sued Anthropic. 'This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.' He was similarly dismissive of the claim that authors were being deprived of licensing fees for training: 'such a market,' he wrote, 'is not one the Copyright Act entitles Authors to exploit.'

But even Alsup's seemingly positive ruling has a poison pill for AI companies. Training on legally acquired material, he ruled, is classic protected fair use. Training on pirated material is a different story, and Alsup absolutely excoriates any attempt to say it's not. 'This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,' he wrote. There were plenty of ways to scan or copy legally acquired books (including Anthropic's own scanning system), but 'Anthropic did not do those things — instead it stole the works for its central library by downloading them from pirated libraries.' Eventually switching to book scanning doesn't erase the original sin, and in some ways it actually compounds it, because it demonstrates Anthropic could have done things legally from the start.

If new AI companies adopt this perspective, they'll have to build in extra but not necessarily ruinous startup costs. There's the up-front price of buying what Anthropic at one point described as 'all the books in the world,' plus any media needed for things like images or video. And in Anthropic's case these were physical works, because hard copies of media dodge the kinds of DRM and licensing agreements publishers can put on digital ones — so add some extra cost for the labor of scanning them in.

But just about any big AI player currently operating is either known or suspected to have trained on illegally downloaded books and other media. Anthropic and the authors will be going to trial to hash out the direct piracy accusations, and depending on what happens, a lot of companies could be hypothetically at risk of almost inestimable financial damages — not just from authors, but from anyone who demonstrates their work was illegally acquired. As legal expert Blake Reid vividly puts it, 'if there's evidence that an engineer was torrenting a bunch of stuff with C-suite blessing it turns the company into a money piñata.'

And on top of all that, the many unsettled details can make it easy to miss the bigger mystery: how this legal wrangling will affect both the AI industry and the arts. Echoing a common argument among AI proponents, former Meta executive Nick Clegg said recently that getting artists' permission for training data would 'basically kill the AI industry.' That's an extreme claim, and given all the licensing deals companies are already striking (including with Vox Media, the parent company of The Verge), it's looking increasingly dubious. Even if they're faced with piracy penalties thanks to Alsup's ruling, the biggest AI companies have billions of dollars in investment — they can weather a lot. But smaller, particularly open source players might be much more vulnerable, and many of them are also almost certainly trained on pirated works.

Meanwhile, if Chhabria's theory is right, artists could reap a reward for providing training data to AI giants. But it's highly unlikely the fees would shut these services down. That would still leave us in a spam-filled landscape with no room for future artists. Can money in the pockets of this generation's artists compensate for the blighting of the next? Is copyright law the right tool to protect the future? And what role should the courts be playing in all this? These two rulings handed partial wins to the AI industry, but they leave many more, much bigger questions unanswered.
