Latest news with #privacy


The Verge
4 hours ago
- The Verge
Facebook is starting to feed its AI with private, unpublished photos
For years, Meta has trained its AI programs using the billions of public images uploaded by users onto Facebook's and Instagram's servers. But apparently, Meta has decided to try training its AI on the billions of images that users haven't uploaded to those servers. On Friday, TechCrunch reported that Facebook users trying to post something on the Story feature have encountered pop-up messages asking if they'd like to opt into 'cloud processing', which would allow Facebook to 'select media from your camera roll and upload it to our cloud on a regular basis', to generate 'ideas like collages, recaps, AI restyling or themes like birthdays or graduations.' By allowing this feature, the message continues, users are agreeing to Meta AI terms, which allow Meta's AI to analyze 'media and facial features' of those unpublished photos, as well as the date said photos were taken and the presence of other people or objects in them. Users further grant Meta the right to 'retain and use' that personal information. Meta recently acknowledged that it has scraped data from all the content published on Facebook and Instagram since 2007 to train its generative AI models. Though the company stated that it has only used public posts uploaded by adult users over the age of 18, it has long been vague about exactly what 'public' entails, as well as what counted as an 'adult user' in 2007. Unlike Google, which explicitly states that it does not train generative AI models with personal data gleaned from Google Photos, Meta's current AI usage terms, which have been in place since June 23, 2024, do not provide any clarity as to whether unpublished photos accessed through 'cloud processing' are exempt from being used as training data. Meta did not respond to TechCrunch's request for comment; The Verge has reached out for comment as well.
Thankfully, Facebook users do have an option to turn off camera roll cloud processing in their settings, which, once activated, will also start removing unpublished photos from the cloud after 30 days. But the workaround, disguised as a feature, suggests a new incursion into our private data, one that bypasses the point of friction known as conscientiously deciding to post a photo for public consumption. And according to Reddit posts found by TechCrunch, Meta is already offering AI restyling suggestions on previously uploaded photos, even to users who weren't aware of the feature: one user reported that Facebook had Studio Ghiblified her wedding photos without her knowledge.


The Independent
7 hours ago
- The Independent
Family says hidden spy cams at Airbnb captured footage of them having sex, kids using bathroom: lawsuit
A Georgia couple vacationing in Puerto Rico was stunned to find spy cameras hidden throughout their Airbnb, only to have their worst fears realized upon discovering surveillance video – including sound – that shows them having sex, as well as footage of their two kids in the bathroom, 'naked or [in] various stages of undress.' 'The sheer fear, emotional distress, rage, and profound sorrow – and the deep sense of violation experienced by the [pair], both personally and on behalf of their children – were unfathomable,' according to a bombshell lawsuit reviewed by The Independent. The family members are identified in court filings only by their initials, so as to 'avoid the[ir] revictimization… due to the sensitive nature of the grievances asserted'; the filings list the mom and dad as 'G.P.M.' and 'E.R.R.,' respectively. In an attempt to identify the culprit behind this astonishing invasion of privacy, G.P.M. searched the memory card in one of the cameras for the earliest available file, wondering if any recordings existed from when the devices were installed. 'To her indescribable horror, she discovered a video showing [one of the two hosts]... in the master bedroom installing and adjusting the… lens of the hidden camera, holding a second camera… and later confirming the live feed on his cellphone while verifying the camera's angle and viewpoint,' the family's complaint states. José Morales Boscio, the family's attorney, told The Independent, 'My clients seek justice against the perpetrators who invaded their life as a family and violated their constitutional rights to intimacy. Airbnb must be held accountable, as it profits from the commercial enterprise it operates, while trying to avoid its responsibility to provide a safe and secure environment for its guests.' An Airbnb spokesperson told The Independent that the host in question is no longer allowed to list his property on the site. 'Hidden cameras have always been banned on Airbnb,' the spokesperson said.
'We take any rare reports of violations of our policy seriously. We have banned the host's account as investigations continue and have assisted the authorities.' Earlier this year, an Arkansas couple vacationing at an Airbnb in Scottsdale, Arizona, filed suit after they say they 'enjoyed an intimate moment' on their first evening there, only to subsequently discover a hidden camera above the bed, as The Independent first reported. On February 15, 2025, G.P.M. and E.R.R. booked a week-long vacation for their family at an Airbnb in Hatillo, a rural hamlet of about 4,000 on Puerto Rico's north coast. When the four got there on February 17, the two co-hosts directed the couple to the two-story home's two-bed, two-bath upstairs unit, according to the complaint, which was filed June 24 in San Juan federal court. Four days into their stay, G.P.M. was in the hallway bathroom, getting ready to go to the beach, when she looked in the mirror and noticed a strange reflection coming from an electrical outlet behind her, the complaint goes on. Upon closer inspection of the socket, the complaint continues, G.P.M. saw a 'round crystal that resembled a camera lens.' G.P.M. immediately summoned E.R.R., who told G.P.M. that her eyes must be playing tricks on her, the complaint states. 'G.P.M., however, insisted that her concerns were real, and it was not and proceeded to check the electrical outlets in the master bedroom, where she discovered a second outlet containing what also appeared to be a camera lens,' the complaint says. 'E.R.R. again dismissed her concerns, and the family then left for the beach.' While there, G.P.M. searched the internet and found other travelers' stories about hidden cameras at Airbnbs, according to the complaint. Upon arriving back at the property, the complaint says G.P.M. inspected the outlet above the mirror in the master bathroom, and discovered a hidden camera 'about the size of a pencil point.' As E.R.R. 
went about removing the outlet itself, a 'black box wrapped in tape with a long wire attached emerged from the wall,' the complaint states. The two contacted Airbnb through its website, and G.P.M. also called 911 to report the three hidden cameras to police. There were also two hidden cameras found in the occupied downstairs unit, according to the complaint. Airbnb offered G.P.M., E.R.R., and their kids another place nearby, and they agreed to move, the complaint states. But since they still had access to the first apartment, G.P.M. and E.R.R. returned to check the cameras' memory cards, the complaint explains. As they opened the files, G.P.M. and E.R.R. 'saw their children['s] images, naked or on [sic] various stages of undress,' and 'saw themselves during their stay, which included them having sexual relations,' the complaint alleges. It says G.P.M. then saw the footage their host had inadvertently uploaded back in February 2024, of himself installing the spy cams. While investigators waited for a judge to issue a search warrant, the complaint says the host and co-host, as well as an 'unknown woman with a laptop in hand,' entered the property via a rear entrance and began removing the hidden cameras. Enraged, E.R.R., who was waiting in a neighbor's house for police to return, 'ran into the upstairs unit and physically confronted one of the hosts, dragging him out,' then engaged the co-host and the woman 'in a heated exchange,' the complaint states. The three eventually left the scene in separate vehicles, according to the complaint. Once they handed over the memory cards to police, G.P.M. and E.R.R., who were scheduled to fly home the next day, sat down with local prosecutors to provide their version of events. Following the meeting, the complaint says the family went to a nearby restaurant for something to eat. 'Shortly after arriving, G.P.M. went to use the restroom but experienced her first panic attack of more to come,' the complaint states. 
'Overcome by the feeling of being watched, she was unable to use the restroom, and the [family] left the restaurant soon after.' Deeply traumatized by the experience, E.R.R. postponed the family's return flight by a week, hoping to salvage at least part of their trip, according to the complaint. But, it says, that night, G.P.M. 'began experiencing vivid nightmares in which she and her children were being watched.' 'The following day, G.P.M. noticed that their 9-year-old daughter was withdrawn, avoiding spending time with the family,' the complaint asserts. '... They left their accommodation only when absolutely necessary during the remainder of their stay. G.P.M. concentrated on providing emotional support to their daughter, who confided that she was feeling unwell and believed she was falling into a state of depression as a result of the ordeal.' Now back in Georgia, the family continues to suffer from 'severe emotional distress,' according to the complaint, which says they have 'remained in therapy to this day.' G.P.M. and E.R.R. are seeking a minimum of $5 million in damages over the ordeal, claiming an 'intentional, malicious, and negligent invasion of their privacy.'


TechCrunch
9 hours ago
- Business
- TechCrunch
Facebook is asking to use Meta AI on photos in your camera roll you haven't yet shared
Facebook is asking users for access to their phone's camera roll to automatically suggest AI-edited versions of their photos — including ones that haven't been uploaded to Facebook yet. The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into 'cloud processing' to allow creative suggestions. As the pop-up message explains, by clicking 'Allow,' you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an 'ongoing basis,' based on information like time, location, or themes. The message also notes that only you can see the suggestions, and that the media isn't used for ad targeting. However, by tapping 'Allow,' you are agreeing to Meta's AI Terms, which allow your media and facial features to be analyzed by AI. The company will additionally use the date and the presence of people or objects in your photos to craft its creative ideas. The creative tool is another example of the slippery slope that comes with sharing our personal media with AI providers. Like other tech giants, Meta has grand AI ambitions, and being able to tap into the personal photos users haven't yet shared on Facebook's social network could give the company an advantage in the AI race. Unfortunately for end users, in tech companies' rush to stay ahead, it's not always clear what they're agreeing to when features like this appear. According to Meta's AI Terms around image processing, 'once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image,' the text states. The same AI terms also give Meta's AIs the right to 'retain and use' any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and that those reviews may be conducted by humans. The terms don't define what Meta considers personal information beyond saying it includes 'information you submit as Prompts, Feedback, or other Content.' We have to wonder whether the photos you've shared for 'cloud processing' also count here. Meta has not responded to our requests for comment or clarification. So far, there hasn't been much backlash about this feature. A handful of Facebook users have stumbled across the AI-generated photo suggestions when creating a new Story and raised questions about it. For instance, one user on Reddit found that Facebook had pulled up an old photo (in this case, one that had previously been shared to the social network) and automatically turned it into an anime using Meta AI. When another user in an anti-AI Facebook group asked for help shutting this feature off, the search led to a section called 'camera roll sharing suggestions' in the app's Settings. We also found this feature under Facebook's Settings, where it's listed in the Preferences section. On the 'Camera roll sharing suggestions' page, there are two toggles.
The first lets Facebook suggest photos from your camera roll as you browse the app. The second (which should be opt-in, based on the pop-up that requested permission in Stories) is where you can enable or disable the 'cloud processing' that lets Meta make AI images using your camera roll photos. This additional access to use AI on your camera roll's photos does not appear to be new. We found posts from earlier this year in which confused Facebook users shared screenshots of the pop-up message that appeared in their Stories section. Meta has also published complete Help Documentation about the feature for both iOS and Android users. Meta's AI terms have been in effect since June 23, 2024; we can't compare the current AI terms with older versions because Meta doesn't keep a record, and previously published terms weren't properly saved by the Internet Archive's Wayback Machine. Because this feature dips into your camera roll, however, it extends beyond what Meta had previously announced about training its AIs on your publicly shared data, including posts and comments on Facebook and Instagram. (EU users had until May 27, 2025 to opt out.)

Associated Press
9 hours ago
- Associated Press
What to know about online age verification laws
The Supreme Court has upheld a Texas law aimed at blocking children under 18 from seeing online pornography by requiring websites to verify the ages of all visitors. Many states have passed similar age verification laws in an attempt to restrict minors' access to adult material, but digital rights groups have raised questions about such laws' effects on free speech and whether verifying ages by collecting sensitive data could violate people's privacy. What is the Texas law? The law requires websites hosting pornographic material to verify users' ages in hopes of stopping those under 18 from visiting. Adults would need to supply websites with a government-issued ID or use third-party age-verification services. The law carries fines of up to $10,000 per violation, levied against the website, which can rise to $250,000 per violation involving a minor. Texas has argued that technology has improved significantly in the last 20 years, allowing online platforms to easily check users' ages with a quick picture. Those requirements are more like the ID checks at brick-and-mortar adult stores that the Supreme Court upheld in the 1960s, the state said. Internet service providers, search engines, and news sites, however, are exempt from the law. How do sites verify ages? Showing children pornography is already illegal under federal law, though that prohibition is rarely enforced. Various measures already exist to verify a person's age online: someone can upload a government ID or consent to the use of facial recognition software to prove they are the age they say they are. Websites and social media companies such as Instagram parent Meta have argued that age verification should be handled by the companies that run app stores, such as Apple and Google, rather than by individual apps or websites. Can people get around verification?
Critics such as Pornhub have argued that age-verification laws can be easily circumvented with well-known tools such as virtual private networks (VPNs), which reroute requests to visit websites across various public networks. Questions have also been raised about enforcement, with Pornhub claiming those efforts would drive traffic to lesser-known sites that don't comply with the law and have fewer safety protocols. Who opposes such laws? Though heralded by social conservatives, age verification laws have been condemned by adult websites, which argue they're part of a larger anti-sex political movement. They've also garnered opposition from groups that advocate for digital privacy and free speech, including the Electronic Frontier Foundation. The group has argued that it is impossible to ensure websites don't retain user data, regardless of whether age verification laws require that they delete it. Samir Jain, vice president of policy at the nonprofit Center for Democracy & Technology, said the court's decision on age verification 'does far more than uphold an incidental burden on adults' speech. It overturns decades of precedent and has the potential to upend access to First Amendment-protected speech on the internet for everyone, children and adults alike.' 'Age verification requirements still raise serious privacy and free expression concerns,' Jain added. 'If states are to go forward with these burdensome laws, age verification tools must be accurate and limit collection, sharing, and retention of personal information, particularly sensitive information like birthdate and biometric data.'


BBC News
11 hours ago
- General
- BBC News
Road safety