Latest news with #ElectronicFrontierFoundation


The Verge
2 days ago
- Politics
The Supreme Court just upended internet law, and I have questions
Age verification is perhaps the hottest battleground for online speech, and the Supreme Court just settled a pivotal question: does using it to gate adult content violate the First Amendment in the US? For roughly the past 20 years the answer has been 'yes' — now, as of Friday, it's an unambiguous 'no.' Justice Clarence Thomas' opinion in Free Speech Coalition v. Paxton is relatively straightforward as Supreme Court rulings go. To summarize, its conclusion is that:

Around this string of logic, you'll find a huge number of objections and unknowns. Many of these were laid out before the decision: the Electronic Frontier Foundation has an overview of the issues, and 404 Media goes deeper on the potential consequences. With the actual ruling in hand, while people are working out the serious implications for future legal cases and the scale of the potential damage, I've got a few immediate, prosaic questions.

Even the best age verification usually requires collecting information that links people (directly or indirectly) to some of their most sensitive web history, creating an almost inherent risk of leaks. The only silver lining is that current systems seem to at least largely make good-faith attempts to avoid intentional snooping, and legislation includes attempts to discourage unnecessary data retention.

The problem is, proponents of these systems had the strongest incentives to make privacy-preserving efforts while age verification was still a contested legal issue. Any breaches could have undercut the claim that age-gating is harmless. Unfortunately, the incentives are now almost perfectly flipped. Companies benefit from collecting and exploiting as much data as they can. (Remember when Twitter secretly used email addresses and phone numbers provided for two-factor authentication for ad targeting?) Most state and federal privacy frameworks were weak even before federal regulatory agencies started getting gutted, and services may not expect any serious punishment for siphoning data or cutting security corners. Meanwhile, law enforcement agencies could quietly demand security backdoors for any number of reasons, including catching people viewing illegal material. Once you create those gaps, they leave everyone vulnerable.

Will we see deliberate privacy invasions? Not necessarily! And many people will probably evade age verification altogether by using VPNs or finding sites that skirt the rules. But in an increasingly surveillance-happy world, it's a reasonable concern.

Over the past couple of years Pornhub has prominently blocked access to a number of states, including Texas, in protest of local laws requiring age verification. Denying service has been one of the adult industry's big points of leverage, demonstrating one potential outcome of age verification laws, but even with VPN workarounds this tactic ultimately limits the site's reach and hurts its bottom line. The Supreme Court ruling cites 21 other states with rules similar to the Texas one, and now that this approach has been deemed constitutional, it's plausible more will follow suit. At a certain point Pornhub's parent company Aylo will need to weigh the costs and benefits, particularly if a fight against age verification looks futile — and the Supreme Court decision is a step in that direction. In the UK, Pornhub ceded territory on that very front a couple of days ago, agreeing (according to British regulator Ofcom) to implement 'robust' age verification by July 25th. The company declined comment to The Verge on the impact of FSC v. Paxton, but backing down wouldn't be a surprising move here.

I don't ask this question with respect to the law itself — you can read the legal definitions within the text of the Texas law right here. I'm wondering, rather, how far Texas and other states think they can push those limits. If states stick to policing content that most people would classify as intentional porn or erotica, age-gating on Pornhub and its many sister companies is a given, along with other, smaller sites. Non-video but still sex-focused sites like the fiction portal Literotica seem likely to be covered. More hypothetically, there are general-focus sites that happen to allow visual, text, and audio porn and have a lot of it, like 4chan — though a full one-third of the service being adult content is a high bar to clear.

Beyond that, we're pretty much left speculating about how malicious state attorneys general might be. It's easy to imagine LGBTQ resources or sex education sites becoming targets despite having the exact kind of social value the law is supposed to exempt. (I'm not even getting into a federal attempt to redefine obscenity in general.) At this point, of course, it's debatable how much justification is required before a government can mount an attack on a website. Remember when Texas investigated Media Matters for fraud because it posted unflattering X screenshots? That was roughly the legal equivalent of Mad Libs, but the attorney general was mad enough to give it a shot. Age verification laws, by contrast, are tailor-made methods to take aim at any given site.

The question 'What is porn?' is going to have a tremendous impact on the internet — not just because of what courts believe is obscene for minors, but because of what website operators believe the courts believe is obscene. This is a subtle distinction, but an important one. We know legislation limiting adult content has chilling effects, even when the laws are rarely used. While age verification rules were in flux, sites could reasonably delay making a call on how to handle them. But that grace period is over — seemingly for good. Many websites are going to start making fairly drastic decisions about what they host, where they operate, and what kind of user information they collect, based not just on hard legal decisions but on preemptive phantom versions of them. In the US, during an escalating push for government censorship, the balance of power has just tipped dramatically. We don't know how far it has left to go.


The Hill
15-06-2025
- Politics
Border Patrol drones have shown up at the LA protests. Should we be worried?
Customs and Border Protection recently confirmed the use of unmanned aerial vehicles, better known as drones, over the unrest in Los Angeles. According to a statement to 404 Media, 'Air and Marine Operations' MQ-9 Predators are supporting our federal law enforcement partners in the Greater Los Angeles area, including Immigration and Customs Enforcement, with aerial support of their operations.'

Officially, these drones, which CBP has used since 2005, are supposed to be for border security. CBP states that they are 'a critical element of CBP missions to predict, detect, identify, classify, track, deter and interdict border traffic that threatens the continuity of U.S. border security.' That may be true, but the drones are used for quite a bit more than that. CBP frequently lends them to other federal, state and local law enforcement agencies across the country, in some cases for uses that raise questions about civil liberties.

Los Angeles is far from the first place where drones have been used to surveil protests and civil unrest. In the three weeks after George Floyd was killed by police in 2020, CBP lent drones to law enforcement agencies in 15 cities.

In 2016, Indigenous and environmentalist activists protested the construction of the Dakota Access Pipeline, which they argued violated the rights and sovereignty of the Standing Rock Sioux. The local sheriff requested CBP drones to help surveil these protesters, which CBP subsequently provided. Surveillance of anti-pipeline activists with CBP drones didn't stop there. In 2020, Enbridge, Inc. was planning to build a pipeline and faced similar controversy and protests. CBP flew drones over its planned pipeline route and over the homes of anti-pipeline activists, including the executive director of the Indigenous Environmental Network.

Surveilling protesters is a concerning use of drones, as it may chill or repress speech, association and assembly protected by the First Amendment. In 2015, CBP claimed it had not used drones to surveil protests or other First Amendment activities. Yet with multiple high-profile reports to the contrary in the years that followed, that appears to have changed.

CBP drones are also often lent to different law enforcement agencies for other activities. In 2012, the Electronic Frontier Foundation, a nonprofit that advocates for digital freedom and civil liberties, sued the Department of Homeland Security under the Freedom of Information Act to learn how often CBP lent drones to other agencies and why. Initially, Homeland Security sent the Electronic Frontier Foundation incomplete records that failed to mention around 200 drone flights carried out on behalf of other agencies. But by 2014, the foundation learned that CBP had lent drones to other agencies 687 times in the period from 2010 to 2012. This included flights on behalf of many law enforcement agencies, 'ranging from the FBI, ICE, the U.S. Marshals, and the Coast Guard to the Minnesota Bureau of Criminal Investigation, the North Dakota Bureau of Criminal Investigation, the North Dakota Army National Guard, and the Texas Department of Public Safety.'

In 2018, David Bier and Matthew Feeney of the Cato Institute published an analysis of CBP's drone program. They noted that 'From 2013 to 2016, only about half of CBP drone flight hours were actually in support of Border Patrol.' They also cited CBP statements 'that 20 percent of all Predator B flights were not in coastal or border areas.'

When legislators approved this drone program, their goal was to secure the border. But these days, CBP drones are being used in ways that have significant potential to undermine the privacy of Americans, and not just in areas along the border. Multiple federal court rulings have allowed the government to conduct aerial surveillance without a warrant. No court order or even suspicion of a crime is required. Law-abiding citizens far from the border are therefore vulnerable.

When governments acquire new tools, they don't just use them for their original purpose. Government officials, like all people, are creative. This results in 'mission creep' as powers quickly expand and are put to new uses. That means the rest of us should ask a simple question: How would you feel if this power were used against you?

Nathan Goodman is a senior research fellow with the Mercatus Center at George Mason University's F.A. Hayek Program.
Yahoo
14-06-2025
- Politics
Headed to a protest? You might want to leave your cell phone at home
A police helicopter circling over protests of immigration raids in Los Angeles offered a chilling warning to the demonstrators below. "I have all of you on camera," the pilot announced. "I'm going to come to your house."

The declaration, whether true or not, raised extreme concerns among civil liberties and digital privacy groups, who reiterated that the public has a First Amendment right to protest. It also served as a reminder of the vast arsenal of surveillance technology used by law enforcement to monitor these protests ― including devices like license plate readers, drones, cell site simulators, security cameras and more.

Locally, protesters are expected to gather across Rochester and Monroe County on June 14 to push back on policies from the Trump administration. The protests are part of a nationwide effort being called "No Kings Day." If you're planning to attend, you might be wondering ― how can I protect myself from surveillance? Here are some basic tips from civil rights and privacy organizations like the New York Civil Liberties Union and the Electronic Frontier Foundation:

- Consider leaving your phone at home. "When you come to a protest like this, it makes sense to bring a camera with you, to bring a phone with you, to make sure you can record it ― you can get the message out. The message is super important," the ACLU's Daniel Kahn Gillmor said in a 2020 video about privacy at protests. "But those same phones, those same cameras, can cause problems when the data gets into the hands of people who you are against."
- If you must bring a phone, turn off all your location services or put it in airplane mode. Your phone transmits radio signals that can help police tie you to an event. Don't forget to check other devices like your smartwatch.
- Disable features that allow you to unlock your phone using your face or fingerprint, and set a strong password. This will make it more difficult for police to unlock your phone without your permission.
- Encrypt and back up your data. This will protect the information on your phone if it is confiscated by police, lost or stolen. Consider using an encrypted messaging service like Signal to communicate with others during the protest.
- Consider walking or biking to the protest. Automated license plate readers create a time-stamped, searchable database of cars moving through specific intersections, and police can use those images to track how you got to a protest and prove you were there. Similarly, public transportation systems can track people's movements through transit cards or credit card payments.
- Wear a face mask, hat or sunglasses, and conceal any distinguishing features like tattoos or bright hair colors. EFF suggests you dress in dark, monochrome colors that will help you blend into a crowd. Face accessories will make it more difficult for police to identify you if they are using facial recognition technology.
- Be mindful of other protesters in your photos and videos, especially if live-streaming the protest on social media. Avoid tagging or posting images of people without their permission, or edit the photos to blur out faces and other identifying features.

"Law enforcement frequently uses social media and surveillance tools to monitor and identify protesters — especially those expressing dissenting views," said Justin Harrison, a senior attorney with the NYCLU. "New Yorkers should take precautions against intrusive surveillance when protesting: leave electronic devices at home, disable location tracking, and avoid using face recognition features. All New Yorkers deserve to mobilize for their rights without fear of the government's prying eyes."

— Kayla Canne covers community safety for the Democrat and Chronicle with a focus on police accountability, government surveillance and how people are impacted by violence. Follow her on Twitter @kaylacanne and @bykaylacanne on Instagram. Get in touch at kcanne@

This article originally appeared on Rochester Democrat and Chronicle: Should you bring your cell phone to a protest? What you should know


TechCrunch
24-05-2025
- Politics
Why a new anti-revenge porn law has free speech experts alarmed
Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes. The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images — real or AI-generated — and gives platforms just 48 hours to comply with a victim's takedown request or face liability. While widely praised as a long-overdue win for victims, experts have also warned that its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.

'Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,' India McKinney, director of federal affairs at the Electronic Frontier Foundation, a digital rights organization, told TechCrunch.

Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires that takedown requests come from victims or their representatives, it only asks for a physical or electronic signature — no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could create an opportunity for abuse.

'I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people in relationships, and even more than that, I think it's gonna be consensual porn,' McKinney said.

Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation — the conservative think tank behind Project 2025 — has also said that 'keeping trans content away from children is protecting kids.'

Because of the liability that platforms face if they don't take down an image within 48 hours of receiving a request, 'the default is going to be that they just take it down without doing any investigation to see if this actually is NCII or if it's another type of protected speech, or if it's even relevant to the person who's making the request,' said McKinney.

Snapchat and Meta have both said they are supportive of the law, but neither responded to TechCrunch's requests for more information about how they'll verify whether the person requesting a takedown is a victim. Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean towards removal if it was too difficult to verify the victim.

Mastodon and other decentralized platforms like Bluesky or Pixelfed may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals.
Under the law, the FTC can treat any platform that doesn't 'reasonably comply' with takedown demands as committing an 'unfair or deceptive act or practice' – even if the host isn't a commercial entity. 'This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,' the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.

Proactive monitoring

McKinney predicts that platforms will start moderating content before it's disseminated so they have fewer problematic posts to take down in the future. Platforms are already using AI to monitor for harmful content.

Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Some of Hive's customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.

'We were actually one of the tech companies that endorsed that bill,' Guo told TechCrunch. 'It'll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.'

Hive's model is software-as-a-service, so the startup doesn't control how platforms use its product to flag or remove content. But Guo said many clients insert Hive's API at the point of upload to monitor before anything is sent out to the community.

A Reddit spokesperson told TechCrunch the platform uses 'sophisticated internal tools, processes, and teams to address and remove' NCII. Reddit also partners with the nonprofit SWGfL to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure the person requesting the takedown is the victim. (A simplified sketch of what this kind of upload-time matching can look like appears at the end of this article.)

McKinney warns this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to 'remove and make reasonable efforts to prevent the reupload' of nonconsensual intimate images. She argues this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn't include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage. Meta, Signal, and Apple have not responded to TechCrunch's request for more information on their plans for encrypted messaging.

Broader free speech implications

On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law. 'And I'm going to use that bill for myself, too, if you don't mind,' he added. 'There's nobody who gets treated worse than I do online.'

While the audience laughed at the comment, not everyone took it as a joke. Trump hasn't been shy about suppressing or retaliating against unfavorable speech, whether that's labeling mainstream media outlets 'enemies of the people,' barring The Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.

On Thursday, the Trump administration barred Harvard University from accepting foreign student admissions, escalating a conflict that began after Harvard refused to adhere to Trump's demands that it make changes to its curriculum and eliminate DEI-related content, among other things. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university's tax-exempt status.

'At a time when we're already seeing school boards try to ban books and we're seeing certain politicians be very explicit about the types of content they don't want people to ever see, whether it's critical race theory or abortion information or information about climate change…it is deeply uncomfortable for us with our past work on content moderation to see members of both parties openly advocating for content moderation at this scale,' McKinney said.
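The upload-time scanning Guo and Reddit describe generally boils down to comparing a fingerprint of a new upload against a database of fingerprints of known abusive images. Below is a minimal, purely illustrative Python sketch of that idea using a naive 8x8 average hash; the function names, the KNOWN_NCII_HASHES blocklist, and the distance threshold are all hypothetical, and real systems such as Hive's API or SWGfL's StopNCII rely on more robust perceptual hashing and shared hash databases rather than anything this simple.

```python
# Illustrative sketch only: screen an uploaded image against a hypothetical
# blocklist of perceptual hashes of known NCII before it is published.
from PIL import Image  # Pillow

KNOWN_NCII_HASHES: set[int] = set()  # hypothetical blocklist of 64-bit hashes
MAX_HAMMING_DISTANCE = 5             # tolerance for re-encoded or resized copies


def average_hash(path: str) -> int:
    """Compute a simple 8x8 average hash (aHash) of an image file."""
    img = Image.open(path).convert("L").resize((8, 8))  # grayscale, 64 pixels
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        # Each pixel contributes one bit: 1 if brighter than the mean.
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits


def matches_known_ncii(path: str) -> bool:
    """True if the upload is within the Hamming-distance threshold of a known hash."""
    h = average_hash(path)
    return any(
        bin(h ^ known).count("1") <= MAX_HAMMING_DISTANCE
        for known in KNOWN_NCII_HASHES
    )


def screen_upload(path: str) -> str:
    # Hold matches for human review rather than deleting outright,
    # since fuzzy hash matching can produce false positives.
    return "held_for_review" if matches_known_ncii(path) else "published"
```

Even in this toy form, one design choice matters: a fuzzy match is held for human review rather than silently removed, because near-duplicate hashing can produce false positives, which is exactly the over-removal risk McKinney describes.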


Time of India
23-05-2025
- Politics
ET Explainer: What is Trump's Take It Down Act to tackle ‘revenge porn'
US President Donald Trump on Monday signed the Take It Down Act, aimed at tackling non-consensual sexually explicit images, or 'revenge porn'—whether real or AI-generated deepfakes—being published online. This comes as the internet has seen several high-profile cases of non-consensual deepfakes of popular celebrities being circulated online, while social media platforms like X and Meta have rolled back content moderation initiatives in countries like the US. ET's Annapurna Roy explains what the new law does and what it means for these platforms.

What does the law say?

The law, officially called the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, makes it a federal crime in the US to knowingly publish intimate images – either authentic or computer-generated – of adults without their consent, as well as of minors. Those who publish such content of minors under the age of 18 can be fined and face up to three years in prison. Where the victims are adults, offenders face up to two years in prison. The Act also imposes penalties on those who threaten to publish such content.

What did Trump say?

'With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will. This is…wrong… Just so horribly wrong,' Trump said at the signing ceremony. The law will address this 'abusive situation', he said. First lady Melania Trump, who is said to have championed the bill, said AI and social media are addictive for the younger generation and that new technologies can be 'weaponised'. With the law, vulnerable people can be 'better protected from their image or identity being abused through non-consensual intimate imagery,' she said.

What does it mean for online platforms?

Platforms will have to remove such illegal content within 48 hours of a victim's request. They will also have to make efforts to delete duplicates of this content. Critics, however, have argued that measures such as the takedown provision may be misused. Further, given the short window to take content down, platforms, especially smaller ones, may not be able to verify claims adequately, according to the Electronic Frontier Foundation. Platforms may be forced to weaken encryption to be able to monitor and flag such content better, and may rely on flawed technology to crack down on duplicates.