Latest news with #deepfakes


Gizmodo
2 hours ago
- Entertainment
- Gizmodo
Denmark's Plan to Fight Deepfakes: Give Citizens Copyright to Their Own Likeness
Here's a weird potential future: When you're born, you are issued a birth certificate, a social security card, and a copyright. That possibility is emerging in Denmark, where officials are considering changes to the nation's copyright laws to give citizens a right to their own likeness as a means of combating AI-generated deepfakes, according to The Guardian. The proposal, advanced by the Danish Ministry of Culture and expected to go to a parliamentary vote this fall, would grant Danish citizens copyright control over their own image, facial features, and voice. This protection would, in theory, allow Danes to demand that online platforms remove deepfakes and other digital manipulations shared without their consent. It would also cover 'realistic, digitally generated imitations' of an artist's performance without consent, so no AI-generated versions of your favorite artists' songs would be allowed. In addition to granting copyright protections to people, the proposed amendment would establish 'severe fines' for any tech platform that does not comply with the law and respond to takedown requests. The person impersonated in a deepfake could also seek compensation. 'In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI,' the Danish culture minister, Jakob Engel-Schmidt, told The Guardian. 'Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that.' Denmark is far from the only nation taking action on deepfakes.
Earlier this year, the United States passed the Take It Down Act, a much more narrowly defined bill that gives people the right to request that platforms take down nonconsensually shared sexually explicit images of them—though some activists have argued that the law is ill-defined and could be weaponized by those acting in bad faith.


Times
17 hours ago
- Politics
- Times
Danish citizens to 'own their own faces' to prevent deepfakes
Denmark plans to become the first country in the world to give its citizens copyright over their faces and voices in an effort to clamp down on 'deepfakes' — videos, audio clips and images that are digitally doctored to spread false information. In recent years the tools for making deepfakes, including artificial intelligence-assisted editing software, have become so sophisticated and ubiquitous that it takes not much more than a few clicks of a mouse to create them. They are already endemic in the political sphere and were deployed during recent election campaigns in Slovakia, Turkey, Bangladesh, Pakistan and Argentina. The former US president Joe Biden was subjected to an audio deepfake during the Democratic presidential primary in New Hampshire last year. In November an MP from the German Social Democratic party was reprimanded for posting a deepfake video of Friedrich Merz, the conservative leader and future chancellor, saying that his party 'despised' the electorate. The Danish culture ministry said it would soon no longer be possible to distinguish between real and deepfake material. That in turn would undermine trust in authentic pictures and videos, it warned. 'Since images and videos swiftly become embedded in people's subconscious, digitally manipulated versions of an image or video can establish fundamental doubts and perhaps even a completely wrong perception of genuine depictions of reality.' There is now broad cross-party support in Denmark's parliament for a reform to the copyright law that would make it illegal to share deepfakes. The bill includes a special protection for musicians and performing artists against digital imitations. 'We are now sending an unequivocal signal to all citizens that you have the right to your own body, your own voice and your own facial features,' said Jakob Engel-Schmidt, the culture minister. 
Lars Christian Lilleholt, the parliamentary leader of the Danish Liberal party, which is part of the ruling coalition, said AI tools had made it alarmingly easy to impersonate politicians and celebrities and to exploit their aura of credibility to propagate false claims. 'It is not just harmful to the individual who has their identity stolen,' he said. 'It is harmful to democracy as a whole when we cannot trust what we see.' The reform will include an exemption for parody and satire. This is a thorny area: several studies suggest a large proportion of political deepfakes are humorous or harmless rather than malicious. There are some experts who warn that concern about the phenomenon risks tipping over into a moral panic. In April last year Mette Frederiksen, Denmark's Social Democratic prime minister, was targeted with an AI-generated deepfake that fell into this grey area. After her government announced that it was abolishing a Christian public holiday, the right-wing populist Danish People's Party released a video of a fake press conference where Frederiksen appeared to say she would scrap all the other religious holidays, including Easter and Christmas. The clip, which was presented as a dream sequence and clearly labelled as AI-manipulated content, prompted debate about the acceptable boundaries of the technology.


News24
2 days ago
- Politics
- News24
G20 leaders have a role to play in the media's fight against deepfakes
There is a troubling scourge of deepfakes, which impacts the media and needs the attention of the G20, writes Sbu Ngalwa. Not much happens in Polokwane – save for that odd day when the Peter Mokaba Stadium plays host to a Premier Soccer League home game, and then suddenly, the town comes alive. The last time Polokwane likely welcomed international visitors from diverse nationalities was in June 2010, during the Soccer World Cup. But I digress. In the Limpopo capital earlier this month, senior government officials, international organisations, civil society and technocrats from G20 countries gathered for the third meeting of the Digital Economy Working Group (DEWG). The DEWG is one of 15 working groups participating in a series of meetings focused on topics relevant to the G20. Inputs from these meetings contribute to the G20 summit agenda and may be incorporated into the final G20 Leaders' Declaration after the South African-hosted summit in November 2025.

Troubling scourge of deepfakes

Therefore, working group discussions have the potential to make it onto the agenda of the leaders of countries that hold over 85% of the global GDP. This is by far the most powerful bloc of global nations, and South Africa holds the G20 presidency this year and will hand over the baton to the United States for 2026. The G20 needs to be inclusive in its deliberations, considerations and, ultimately, decisions. Fortunately, civil society is involved in some of the Sherpa streams. The DEWG invited me to provide input on behalf of the SA National Editors' Forum (Sanef) and, by extension, the African Editors Forum (TAEF) on the troubling scourge of deepfakes and their impact on the media. Deepfakes involve the digital manipulation of videos or images to appear to show the targeted person's face, body or voice. The intention is to mislead and spread false information. SABC Morning Live presenter Leanne Manas is among the victims. Her image was used to promote an online trading platform.
As a result, she needed a security escort to work after victims of the scams travelled to the SABC to 'confront' her. The Polokwane meeting provided an opportunity to highlight the media's role in fighting misinformation and disinformation, and also to appeal to the G20 nations to involve the media in their collective response against deepfakes and other efforts to promote information integrity. The mainstream media has been at the forefront of exposing deepfakes and informing the public about this dangerous assault on the truth. In May, News24 exposed the hijacking of local websites for the nefarious purpose of spreading fake stories about the country. Through its fact-checking desk, News24 revealed how a digital marketing agency based in India was buying expired domains of South African websites and using them to publish fake AI-generated stories. This included a report about plans by the City of Cape Town to introduce a traffic congestion tax. In a country burdened by growing inequality, an increase in the fuel levy and the recent political uproar over the government's failed attempt to increase VAT, such a story would not only gain traction online but also offend many people. That's how convincing some of these stories are. Thankfully, it was quickly debunked, as the City also issued a statement denying that such a tax was in the pipeline.

Danger of the new weapons

Another fake story presented bogus documents about new load shedding scenarios, and yet another purported a change in social security grants. Besides impersonating journalists in deepfake scams, sexually humiliating imagery of our colleagues, designed to intimidate and discredit, is also concocted. Importantly, though, the News24 report demonstrates how journalism serves as an antidote, bringing to light the immense dangers of these new weapons. The media is already besieged by powerful political and economic interests for whom certain truths are inconvenient.
On top of this, we now face the onslaught of deepfakes, which also cause harm to political and business leaders, celebrities and, indeed, ordinary people (especially women and girls, and people investing their personal savings). While we appreciate G20 countries addressing the issue, we caution against the use of overly broad laws and regulations that, under the guise of criminalising deepfakes, may effectively suppress public interest journalism. As media organisations, Sanef and Media Monitoring Africa are discussing these issues in a parallel process to the G20, dubbed the M20. Building on the pioneering decisions on information integrity made during Brazil's G20 presidency last year, this independent initiative aims to sustain momentum and ensure continued engagement throughout 2025 and beyond.

Support for media freedom

To play our part in the common fight against deepfakes, we need the G20 to show genuine support for media freedom and editorial independence. Back in Polokwane, the comments of Minister of Communications and Digital Technologies Solly Malatsi were therefore welcome: he assured delegates that government was looking at the evolving capabilities of generative AI and the risks posed by deepfakes, calling it 'an issue of growing concern for information integrity and public trust'.

- Sbu Ngalwa is acting secretary-general of The African Editors' Forum (TAEF), treasurer-general of the SA National Editors' Forum (Sanef) and a member of the local M20 organising committee. For more information about the M20 initiative, visit the website or contact the M20 secretariat.


Forbes
4 days ago
- Business
- Forbes
Why Our Legal System Is Unprepared For The Synthetic Media Age
Jason Crawforth is the Founder and CEO of a company working to restore confidence in digital media authenticity. In a time when seeing is no longer believing, our legal system is facing a credibility crisis. The U.S. Department of Justice, long a guardian of evidentiary truth, is now staring down a new threat: the rise of synthetic media—AI-generated videos, audio and images indistinguishable from reality. As these technologies grow in sophistication, the DOJ must confront a harsh truth: Traditional evidentiary standards are on the verge of collapse. The implications are grave. For over a century, video and photographic evidence have been a foundation of truth in courtrooms. But when fabricated footage becomes indistinguishable from actual events, what happens to that foundation? Synthetic media is advancing at a breakneck pace: deepfake volumes double every six months, and the amount of synthetic content overall is growing exponentially. The DOJ is no longer just prosecuting crimes; it's doing so in a world where digital reality is contested. Nor is it the only federal agency feeling the ramifications of deepfakes. The U.S. Department of the Treasury has highlighted the problem, with its now-former director Andrea Gacki sharing that, 'While GenAI holds tremendous potential as a new technology, bad actors are seeking to exploit it to defraud American businesses and consumers, to include financial institutions and their customers.' According to Europol's 2022 report, the emergence of 5G and cloud computing has not only enhanced communication and privacy for institutions but also opened new doors for criminal exploitation. The additional bandwidth enables real-time manipulation of video streams, allowing deepfake technologies to infiltrate videoconferencing, live-streaming and even television broadcasts.
Europol's Innovation Lab emphasizes the rise of "crime-as-a-service" (CaaS), where tools, techniques and AI models—including deepfakes—are commercialized on dark web platforms. These criminal actors are typically early adopters, staying a step ahead of law enforcement. The report also warns of a deeper societal impact. As synthetic content becomes increasingly realistic and accessible, public trust in media, institutions and authority figures could erode—culminating in what experts term an 'information apocalypse' or 'reality apathy.' In this future, distinguishing fact from fiction becomes nearly impossible, and a shared societal truth may disappear altogether. Current forensic tools, while valuable, are increasingly reactive and insufficient. They play catch-up in an endless game of cat and mouse. The only truly sustainable solution is a proactive one. This isn't just innovation; it's necessity. Consider the legal ramifications. If we can't prove that a video wasn't tampered with—if we can't even prove it was real to begin with—how can we admit it as evidence? How can we convict? How can we defend ourselves? The burden of proof becomes a liability. The courtroom becomes a stage for doubt, not justice. The DOJ must act decisively to update its evidentiary standards. It must embrace media authenticity technologies that don't just detect manipulation but prove reality. Just as DNA testing revolutionized forensics, real-time media authentication can restore trust in digital evidence. It must also push for global standards. Digital content crosses borders in milliseconds; our legal protections must be just as agile.

What Should The DOJ (And Everyone Else) Be Doing Right Now?

To navigate the synthetic media era, the DOJ and other similar institutions need a mindset and technological shift. Here's how they (and you) can get ahead of the threat instead of reacting to it:

1. Shift from detection to authentication. Reactive forensic tools try to spot fakes after they surface, often too late. Instead, agencies need platforms that prove content is real from the moment of capture. Think of it like a digital notary that stamps the truth into every pixel. This transforms evidentiary review from a guessing game into a science.

2. Embrace point-of-creation integrity. The only sustainable solution is to anchor authenticity at the source. By cryptographically fingerprinting digital content and securing it, organizations can maintain an unbroken chain of custody. This level of tamper evidence deters bad actors and restores evidentiary confidence.

3. Vet media like digital DNA. Just as DNA revolutionized criminal justice by making biological evidence indisputable, cryptographic 'digital DNA' can do the same for recorded evidence. Agencies should adopt tools that let them verify authenticity with mathematical certainty, not assumptions.

4. Demand provenance by default. Any digital asset, especially one presented as evidence, should come with a verifiable origin trail. If it doesn't, treat it with caution. Courts and investigative bodies should begin establishing standards where unverified content is presumed unreliable until proven otherwise.

5. Train for a new evidentiary standard. Judges, prosecutors and investigators must be educated on what modern media authentication looks like. The courtroom of the future won't just examine motive and opportunity; it will scrutinize metadata, hash integrity and blockchain timestamps.

This is more than a technical challenge—it's a societal one. In an era of disinformation, manipulated media isn't just a tool of crime; it's a weapon against democracy, trust and truth itself. The justice system's integrity depends on truth. And in this new age, truth must be defended at the pixel level.
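The "cryptographic fingerprinting" and "digital DNA" ideas in the Forbes piece can be illustrated with a minimal sketch: hash the media bytes at capture, then bind that hash to capture metadata with a keyed signature, so that later tampering with either the media or its record is detectable. This is a toy illustration under assumed field names and an assumed shared key, not any real provenance standard (real-world efforts such as C2PA use certificate-backed signatures and are far more elaborate).

```python
import hashlib
import hmac
import json


def fingerprint(media_bytes: bytes) -> str:
    """Content hash (the 'digital DNA') of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()


def sign_manifest(media_bytes: bytes, captured_at: str, key: bytes) -> dict:
    """Bind the fingerprint to capture metadata with an HMAC.

    A real system would use an asymmetric signature from a hardware-backed
    key at the point of capture; an HMAC keeps this sketch self-contained.
    """
    manifest = {"sha256": fingerprint(media_bytes), "captured_at": captured_at}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["hmac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(media_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """True only if both the manifest and the media are untampered."""
    claimed = {"sha256": manifest["sha256"], "captured_at": manifest["captured_at"]}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected_mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(manifest["hmac"], expected_mac):
        return False  # the manifest itself was altered
    return fingerprint(media_bytes) == manifest["sha256"]  # was the media altered?
```

Altering a single byte of the media, or editing the timestamp in the manifest, makes verification fail, which is the "unbroken chain of custody" property the article describes.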


Fox News
5 days ago
- Fox News
Top 5 scams spreading right now
Lately, I've had way too many calls on my shows from people who have lost thousands (sometimes hundreds of thousands) to scams. These are so cleverly evil, it's like Ocean's Eleven but starring a dude with three Instagram followers and a ChatGPT subscription. You see, we're way past scam emails from sketchy Nigerian princes. Today's scams are slick, personalized and powered by scary-good tech like AI voice cloning and deepfakes. And yep, people fall for them every single day. Here are today's scummy front-runners, plus how to protect your cash, pride and sanity:

1. The AI voice clone. This one's horrifying because it sounds like someone you trust. Scammers grab a clip of your child's, spouse's or boss' voice from social media, podcasts or even your voicemail. Then they call your mom, your grandpa, your partner: "Hi, it's me. I'm in big trouble. I need money. Don't tell anyone." It's not them. It's AI. And it works because it feels real. Anthony in Los Angeles was deceived by scammers who used AI to replicate his son's voice. Believing his son was in distress, Anthony transferred $25,000 to the fraudsters. If you get a call like this, call or text the person directly, or try someone they live or work with.

2. 'Your bank account's frozen.' You get a text or call from your "bank," and the number looks legit. They say your account is locked due to suspicious activity and you need to confirm your info. Stop right there. That link? Fake. The person on the phone? Also fake. Charles in Iowa lost over $300,000 this way. Always open your bank's app or type the web address in yourself. Never tap the link they send.

3. The crypto investment 'friend.' This starts on Instagram, Facebook or LinkedIn. Someone friends you, chats you up, gains your trust, then casually mentions they're making a fortune in crypto. They even offer to show you how. Suddenly you're handing over money or access to a wallet, and poof, it's gone. A couple in Georgia lost $800,000 after falling victim to a cryptocurrency scam. Just because someone's friendly doesn't mean they're honest. Don't fall for a stranger friending you on social media. If you're lonely, volunteer somewhere.

4. The gold bar scam. You get a call from someone claiming to be with the FBI or your bank's fraud team. They say your money's at risk and you need to withdraw it, convert it into gold bars and turn it over for "safekeeping." A 72-year-old retiree from New Hampshire was scammed into purchasing $3.1 million worth of gold bars and handed them over to the scammer. Yes, it sounds insane, but it's happening, and people are losing everything. Come on, you know real law enforcement doesn't operate this way.

5. The vet emergency. A neighbor's crying. Your dog's been hit by a car. They rushed your fur baby to the vet and paid the bill. You owe them $1,200. But wait ... your pup is fine, snoring on the couch. You've been pet-shamed into Venmoing a scammer.

If any of this sounds familiar, your gut is whispering danger, or you're not sure what's happening in a situation, reach out to me. I'll help you figure out what's real and what's a scam. Better to ask than get burned. I won't judge you, I promise. Award-winning host Kim Komando is your secret weapon for navigating tech. Copyright 2025, WestStar Multimedia Entertainment. All rights reserved.