
Latest news with #DeepfakeResilienceSymposium

Courts aren't ready for AI-generated evidence

Axios · 4 days ago

Photos, videos and audio — once courtroom gold — are becoming less reliable as deepfakes spread, digital forensics experts warn.

Why it matters: Courts can't keep up, and there aren't enough forensic analysts to verify AI-faked evidence.

The big picture: AI-generated evidence can take many forms. Consider the following hypotheticals:

  • A divorce case where one parent edits a photo's background to imply the child was in an unsafe environment.
  • A murder investigation where a deepfake video falsely puts an innocent person at the scene of the crime.
  • A wrongful termination lawsuit where someone creates a deepfake audio recording of a coworker making comments that got them fired (which happened in Baltimore County).

Between the lines: As AI tools improve, proving that a photo, video or audio snippet was manipulated will get more challenging, if not impossible. Like financial fraud cases that rely on expert testimony to unpack accounting records, courts will now depend on digital forensics experts to spot the telltale signs of tampered media. But even expert analysis may not be enough to persuade a jury beyond a reasonable doubt.

Threat level: Photos, videos and audio have long been the gold standard for evidence in any legal case. "Everybody has images, everybody has voice recordings, CCTV cameras, police body cameras, dash cams," Hany Farid, co-founder and chief science officer at GetReal Security, told Axios. "It's just everything. It's bonkers."

The other side: Defendants will also face challenges in proving their media hasn't been altered by AI. For example, lawyers could argue that body camera footage from a police officer was tampered with, even if it wasn't. "In parts of the country where the police are not trusted, that could gain some traction," Farid said.

The intrigue: This isn't the first time courts have had to adapt to new technology, Farid added. When photo and video evidence first emerged, judges often required negatives or scrutinized timestamps. But today's legal standards haven't caught up to the speed and sophistication of deepfake tools, which are evolving far faster than past media forms.

Zoom in: Even for forensics investigators, the technology isn't there yet to help track the chain of custody for deepfakes, Joseph Pochron, managing director of digital investigations and cyber risk at Nardello & Co., said at the Deepfake Resilience Symposium in San Francisco last week. Each AI verification tool on the market is a black box in terms of how it determines what percentage of a piece of content was AI-generated, creating an opening for manipulation and misinterpretation of images, videos and audio, Pochron said. Now, investigators have to get creative with how they prove something is or is not AI-manipulated. Pochron's team has begun transcribing audio and analyzing sentence structure to see if it matches patterns popular with AI tools. But even that method will be moot within a year as deepfakes become more humanlike.

The bottom line: Experts urge people to preserve as many original files — voicemails, text messages, photos — on their devices as possible in case they're needed in court. "We've had a couple [cases] where it's an email that's been emailed again, but where's the original?" Pochron said. "The metadata or any other supporting artifacts may be crucial to help us figure this out."

What's next: AI tools have already perfected the art of deepfake images and audio, but it will be less than two years until they fine-tune fake videos, too, Farid said.
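The phrase-pattern approach Pochron describes can be illustrated with a toy heuristic. This is a minimal sketch, not his team's actual method: the phrase list below is hypothetical, and a real forensic workflow would rely on a much larger, statistically derived model rather than a handful of hard-coded strings.

```python
# Hypothetical list of phrases that AI text generators tend to overuse.
# Illustrative only -- not an actual forensic phrase set.
AI_TELLTALE_PHRASES = [
    "it is important to note",
    "in today's fast-paced world",
    "delve into",
    "as an ai language model",
]

def ai_phrase_score(transcript: str) -> float:
    """Return the fraction of telltale phrases found in the transcript."""
    text = transcript.lower()
    hits = sum(1 for phrase in AI_TELLTALE_PHRASES if phrase in text)
    return hits / len(AI_TELLTALE_PHRASES)

sample = "It is important to note that we must delve into the evidence."
print(ai_phrase_score(sample))  # 2 of 4 phrases match -> 0.5
```

A score like this can only flag text for closer human review; as the article notes, such surface-level signals lose value quickly as generated text becomes more humanlike.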

Scoop: Top Secret Service official targeted in "swatting" attack

Axios · 18-07-2025

  • Politics

A top Secret Service official was the target of a "swatting" incident at his home on the Fourth of July, according to an agency official.

Why it matters: While the incident didn't lead to any harm, it's another example of just how difficult it's become for law enforcement to rein in the wave of hoax calls.

Zoom in: Someone called 9-1-1 on July 4 to falsely report that the senior-level official's daughter was running around the house with a weapon, Michael Centrella, assistant director of the Secret Service's office of field operations, told a small gathering of tech executives Thursday at the Deepfake Resilience Symposium in San Francisco. The voice on the phone appeared to be trying to mimic the agent's, but law enforcement is still investigating whether the caller used a precise deepfake of the agent's voice or just a synthesizer to sound like a man around his age, Centrella added. However, the bad actor's plan was foiled by a simple fact: The agent doesn't have a daughter, and his local law enforcement knew that. The Secret Service asked Axios to keep the name of the targeted official anonymous to protect him and his family from copycat attacks.

What they're saying: "We were able to protect [the senior official] and not have a major incident," Centrella said. "But you've seen these cases now across the country, they are very impactful."

The big picture: "Swatting" — where a bad actor calls 9-1-1 and reports a fake incident, resulting in armed police officers responding to a home address — has been on the rise in D.C. in recent years. Prominent targets have included several of President Trump's cabinet nominees, both Republican and Democratic lawmakers, various judges and government agency officials. The Secret Service target does not receive the same level of protection as other public officials who have been victims. AI tools have made voice cloning easier, and bad actors can easily spoof phone numbers to make themselves harder to detect.

Threat level: Beyond the possibility of a frightening interaction with police, the calls are also troubling because they indicate the caller knows their target's personal address. In this case, the targeted agent is a very private person, and it's difficult to find details about his personal life online.
