Denmark's Plan to Fight Deepfakes: Give Citizens Copyright to Their Own Likeness

Gizmodo · 6 hours ago

Here's a weird potential future: When you're born, you are issued a birth certificate, a social security card, and a copyright. That possibility is emerging in Denmark, where officials are considering changes to the nation's copyright laws to provide citizens with a right to their own likeness as a means of combating AI-generated deepfakes, according to The Guardian.
The proposal, advanced by the Danish Ministry of Culture and expected for a parliamentary vote this fall, would grant Danish citizens copyright control over their own image, facial features, and voice. This protection would, in theory, allow Danes to demand that online platforms remove deepfakes and other digital manipulations that were shared without their consent. It would also cover 'realistic, digitally generated imitations' of an artist's performance without consent, so no AI-generated versions of your favorite artists' songs would be allowed.
In addition to granting copyright protections to people, the proposed amendment would establish 'severe fines' for any tech platform that does not comply with the law and respond to requests for takedown. The person who is impersonated in the deepfake could also seek compensation.
'In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI,' Danish culture minister, Jakob Engel-Schmidt, told The Guardian. 'Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that.'
Denmark is far from the only nation taking action on deepfakes. Earlier this year, the United States passed the Take It Down Act, a much more narrowly defined bill that gives people the right to request that platforms take down nonconsensually shared sexually explicit images of them—though some activists have argued that the law is ill-defined and could be weaponized by those acting in bad faith.


Related Articles

Authors call on publishers to limit their use of AI

Yahoo · 30 minutes ago

An open letter from authors including Lauren Groff, Lev Grossman, R.F. Kuang, Dennis Lehane, and Gregory Maguire calls on book publishers to pledge to limit their use of AI tools, for example by committing to hire only human audiobook narrators. The letter argues that authors' work has been 'stolen' by AI companies: 'Rather than paying writers a small percentage of the money our work makes for them, someone else will be paid for a technology built on our unpaid labor.' Among other commitments, the authors call on publishers to 'make a pledge that they will never release books that were created by machine' and 'not replace their human staff with AI tools or degrade their positions into AI monitors.' The initial letter was signed by an already impressive list of writers, and NPR reports that another 1,100 signatures were added in the 24 hours after it was first published. Authors are also suing tech companies over the use of their books to train AI models, but federal judges dealt significant blows to those lawsuits earlier this week.

Tipalti's Darren Upson on the strategic use of AI-driven finance

Yahoo · 30 minutes ago

Tipalti currently serves over 5,000 companies with AI-driven solutions that automate finance operations, including accounts payable, employee expenses, global pay-outs, procurement, supplier management, and tax compliance. What should excite Tipalti's backers is the sheer scale of the addressable market of prospects not yet using AI-powered finance software. 'There are a huge number of companies, especially those of a more traditional nature, that have not even scratched the surface of automation and what it can do for their businesses,' says Darren Upson.

Tipalti is a fully automated, cloud-based platform that simplifies the most complex finance workflows, helping its clients manage end-to-end payables across multiple entities, currencies, and countries, with built-in compliance. Its seamless ERP integrations give complete visibility and control, and the platform eliminates manual work and speeds up the entire payables process, automating everything from invoice approval to global payments and reconciling data along the way. That summary is hardly over-technical and should be understandable to even the least technical business leaders.

The Tipalti proposition is boosted by the backing of JPMorgan Chase, about as strong an endorsement as a fintech could ask for. Back in 2023, Tipalti raised $150m in growth financing from JPMorgan and Hercules Capital, bringing total funding to more than $550m and valuing the firm at over $8bn. JPMorgan is also one of the major banks Tipalti uses to route the billions of dollars' worth of supplier payments it makes on behalf of its customers.

'We work in the fintech space as an automation platform for accounts payable and also mass payments as well. We are both a software company and a financial services business. We execute payments for our businesses and we handle the whole accounts payable process from start to finish, basically stripping out all of the manual, redundant processes associated with accounts payable.

'As that is the most time-consuming part of finance, we're trying to help organisations, especially those fast-growing businesses, to actually put their people to better use and leverage technology to really do that manual, repetitive work more efficiently, faster and to be more scalable as well.'

He says typical client firms are high-growth, pre-IPO outfits that have already embraced automation and understand how it enables them to be more agile as an organisation. 'The big challenge that exists now is individuals' understanding the art of the possible. There's a lot of people that think that they've already automated everything they can do. But there's always more and with AI tools and the technology that exists out there, there's so many more areas across an organisation that you can automate to create more efficiency, especially when businesses are looking to try and retain and attract talent as well. It's about, how do they make sure that the people that are coming into the workforce now are doing roles that are actually fulfilling and that they enjoy.'

In the run-up to the UK Labour government's first budget, there was much negative press coverage forecasting doom and gloom if taxes such as capital gains tax were raised. In the end, the increase was not as steep as the most pessimistic forecasts, but CGT did rise from 20% to 24% for higher-rate taxpayers. And given the track record of past Labour governments, further tax rises are more likely than tax cuts.
Upson says that the UK is still the number one destination for start-ups outside North America. 'We've got such a hotbed of talent, and we've got a large banking infrastructure in place as well. So [tax changes] do not mean that the lights are going to go out overnight. But I think that there have to be more incentives to actually attract and support individuals that are looking to build the next big thing.' "Tipalti's Darren Upson on the strategic use of AI-driven finance" was originally created and published by Retail Banker International, a GlobalData owned brand.

5 ways people build relationships with AI

Yahoo · an hour ago

Stories about people building emotional connections with AI are appearing more often, but Anthropic just dropped some numbers suggesting it's far less common than it might seem. Analyzing 4.5 million conversations with Claude, the company found that only 2.9 percent of users engage with it for emotional or personal support. Anthropic wanted to emphasize that while sentiment usually improves over the course of a conversation, Claude is not a digital shrink. It rarely pushes back outside of safety concerns, meaning it won't give medical advice and will tell people not to self-harm.

But those numbers might say more about the present than the future. Anthropic itself admits the landscape is changing fast, and what counts as 'affective' use today may not be so rare tomorrow. As more people interact with chatbots like Claude, ChatGPT, and Gemini, and do so more often, more of them will bring AI into their emotional lives. So how exactly are people using AI for support right now? Current usage might also predict how people will use these tools in the future as AI gets more sophisticated and personal.

Let's start with the idea of AI as a not-quite therapist. While no AI model today is a licensed therapist (and they all make that disclaimer loud and clear), people still engage with them as if they were. They type things like, 'I'm feeling really anxious about work. Can you talk me through it?' or 'I feel stuck. What questions should I ask myself?' Whether the responses that come back are helpful probably varies, but plenty of people claim to have walked away from an AI therapist feeling at least a little calmer. That's not because the AI gave them a miracle cure, but because it gave them a place to let thoughts unspool without judgment. Sometimes, just practicing vulnerability is enough to start seeing benefits.

Sometimes, though, the help people need is less structured. They don't want guidance so much as relief. Enter what could be called the emotional emergency exit. Imagine it's 1 AM and everything feels a little too much. You don't want to wake up your friend, and you definitely don't want to scroll more doom-laced headlines. So you open an AI app and type, 'I'm overwhelmed.' It will respond, probably with something calm and gentle. It might even guide you through a breathing exercise, say something kind, or offer a little bedtime story in a soothing tone. Some people use AI this way, like a pressure valve, a place to decompress where nothing is expected in return. One user admitted they talk to Claude before and after every social event, just to rehearse and then unwind. It's not therapy. It's not even a friend. But it's there. For now, the best-case scenario is a kind of hybrid: people use AI to prep, to vent, to imagine new possibilities, and then, ideally, they take that clarity back into the real world, into conversations, into creativity, into their communities. But even if the AI isn't your therapist or your best friend, it might still be the one that listens when no one else does.

Humans are indecisive creatures, and figuring out what to do about big decisions is tough, but some have found AI helpful for navigating those choices. The AI won't recall what you did last year or guilt you about your choices, which some people find refreshing. Ask it whether to move to a new city, end a long relationship, or splurge on something you can barely justify, and it will calmly lay out the pros and cons. You can even ask it to simulate two inner voices, the risk-taker and the cautious planner; each can make its case, and you can feel better that you made an informed choice. That kind of detached clarity can be incredibly valuable, especially when your real-world sounding boards are too close to the issue or too emotionally invested.

Social situations can cause plenty of anxiety, and it's easy for some to spiral into thinking about what could go wrong. AI can help here as a kind of social script coach. Say you want to say no without causing a fight, or you are meeting people you want to impress but are worried about your first impression. AI can help draft a text to decline an invite, suggest ways to ease into conversations with different people, or take on a role so you can rehearse full conversations, testing different phrasings to see what feels right.

Accountability partners are a common way for people to help each other achieve their goals: someone who will make sure you go to the gym, go to sleep at a reasonable hour, and even maintain a social life by reaching out to friends. Habit-tracking apps can help if you don't have the right friend or friends, but AI can be a quieter co-pilot for real self-improvement. You can tell it your goals and ask it to check in with you, remind you gently, or help reframe things when motivation dips. Someone trying to quit smoking might ask ChatGPT to help track cravings and write motivational pep talks, or an AI chatbot might keep your journaling on track with reminders and suggestions for what to write about. It's no surprise that people might start to feel some fondness (or annoyance) toward the digital voice telling them to get up early to work out or to invite people they haven't seen in a while to meet up for a meal.

Related to using AI for making decisions, some people turn to AI when they're grappling with questions of ethics or integrity. These aren't always monumental moral dilemmas; plenty of everyday choices can weigh heavily. Is it okay to tell a white lie to protect someone's feelings? Should you report a mistake your coworker made, even if it was unintentional? What's the best way to tell your roommate they're not pulling their weight without damaging the relationship? AI can act as a neutral sounding board, suggesting ways to weigh whether accepting a friend's wedding invite while secretly planning not to attend is better or worse than declining outright. The AI doesn't have to offer a definitive ruling; it can map out competing values and help the user define their principles and see how those lead to an answer. In this way, AI serves less as a moral authority than as a flashlight in the fog.

Right now, only a small fraction of interactions fall into these categories. But what happens when these tools become even more deeply embedded in our lives? What happens when your AI assistant is whispering in your earbuds, popping up in your glasses, or helping schedule your day with reminders tailored not just to your time zone but to your temperament? Anthropic might not count all of this as affective use, but maybe it should. If you're reaching for an AI tool to feel understood, get clarity, or move through something difficult, that's not just information retrieval. That's connection, or at least the digital shadow of one.
