
Canadian researchers create tool to remove anti-deepfake watermarks from AI content
Academia and industry have focused on watermarking as the best way to fight deepfakes and 'basically abandoned all other approaches,' said Andre Kassis, a PhD candidate in computer science who led the research.
At a White House event in 2023, the leading AI companies — including OpenAI, Meta, Google and Amazon — pledged to implement mechanisms such as watermarking to clearly identify AI-generated content.
AI companies' systems embed a watermark, which is a hidden signature or pattern that isn't visible to a person but can be identified by another system, Kassis explained.
He said the research shows the use of watermarks is most likely not a viable shield against the hazards posed by AI content.
'It tells us that the danger of deepfakes is something that we don't even have the tools to start tackling at this point,' he said.
The tool developed at the University of Waterloo, called UnMarker, follows other academic research on removing watermarks. That includes work at the University of Maryland, a collaboration between researchers at the University of California and Carnegie Mellon, and work at ETH Zürich.
Kassis said his research goes further than earlier efforts and is the 'first to expose a systemic vulnerability that undermines the very premise of watermarking as a defence against deepfakes.'
In a follow-up email statement, he said that 'what sets UnMarker apart is that it requires no knowledge of the watermarking algorithm, no access to internal parameters, and no interaction with the detector at all.'
When tested, the tool worked more than 50 per cent of the time on different AI models, a university press release said.
AI systems can be misused to create deepfakes, spread misinformation and perpetrate scams — creating a need for a reliable way to identify content as AI-generated, Kassis said.
After AI tools became too advanced for AI detectors to work well, attention turned to watermarking.
The idea is that if we cannot 'post facto understand or detect what's real and what's not,' it's possible to inject 'some kind of hidden signature or some kind of hidden pattern' earlier on, when the content is created, Kassis said.
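The idea can be sketched with a deliberately simple toy scheme. This is not any vendor's actual method (production watermarks such as Google's SynthID embed far more robust statistical patterns), and the signature and pixel values here are hypothetical, but it shows the principle: a change invisible to a viewer that a detector can still read.

```python
SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical hidden pattern

def embed(pixels, signature=SIGNATURE):
    """Hide the signature in the least-significant bits of the first pixels."""
    out = list(pixels)
    for i, bit in enumerate(signature):
        out[i] = (out[i] & ~1) | bit  # shifts each pixel value by at most 1
    return out

def detect(pixels, signature=SIGNATURE):
    """Report whether the hidden signature is present."""
    return [p & 1 for p in pixels[:len(signature)]] == signature

image = [200, 201, 199, 180, 181, 179, 150, 151]
marked = embed(image)
print(max(abs(a - b) for a, b in zip(image, marked)))  # 1: imperceptible change
print(detect(marked))                                  # True
print(detect(image))                                   # False
```

A human sees two identical images; a detector holding the signature tells them apart, which is exactly the asymmetry watermarking schemes aim for.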
The European Union's AI Act requires providers of systems that put out large quantities of synthetic content to implement techniques and methods to make AI-generated or manipulated content identifiable, such as watermarks.
In Canada, a voluntary code of conduct launched by the federal government in 2023 requires those behind AI systems to develop and implement 'a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking).'
Kassis said UnMarker can remove a watermark without knowing anything about the system that generated it, or anything about the watermark itself.
'We can just apply this tool and within two minutes max, it will output an image that is visually identical to the watermarked image,' which can then be distributed, he said.
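A toy illustration shows why removal can be this cheap. This is not UnMarker's actual technique (which reportedly optimizes imperceptible perturbations against the spectral patterns robust watermarks rely on); the least-significant-bit scheme and values below are hypothetical. The point is that an attacker who merely suspects where a signature might live can overwrite that region without ever learning the signature, while staying visually identical:

```python
SIGNATURE = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical hidden pattern

def detect(pixels):
    """Detector belonging to the (toy) watermarking system."""
    return [p & 1 for p in pixels[:len(SIGNATURE)]] == SIGNATURE

def strip_lsbs(pixels):
    """Attacker's move: zero every least-significant bit. Each pixel shifts
    by at most 1, so the image looks the same, but any LSB signature is gone.
    The attacker never needs to know SIGNATURE."""
    return [p & ~1 for p in pixels]

marked = [(p & ~1) | b for p, b in
          zip([200, 201, 199, 180, 181, 179, 150, 151], SIGNATURE)]
print(detect(marked))                                    # True: mark present
cleaned = strip_lsbs(marked)
print(detect(cleaned))                                   # False: mark erased
print(max(abs(a - b) for a, b in zip(marked, cleaned)))  # 1: looks identical
```

Real schemes spread the signal more robustly than this, which is why UnMarker has to work much harder, but the underlying imbalance between defender and attacker is the same.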
'It kind of is ironic that there's billions that are being poured into this technology and then, just with two buttons that you press, you can just get an image that is watermark-free.'
Kassis said that while the major AI players are racing to implement watermarking technology, more effort should be put into finding alternative solutions.
Watermarks have 'been declared as the de facto standard for future defence against these systems,' he said.
'I guess it's a call for everyone to take a step back and then try to think about this problem again.'
This report by The Canadian Press was first published July 23, 2025.

Related Articles


New York Post
Tips to help your teen navigate AI chatbots — and what to watch out for: experts
As artificial intelligence technology becomes part of daily life, adolescents are turning to chatbots for advice, guidance and conversation. The appeal is clear: chatbots are patient, never judgmental, supportive and always available.

That worries experts, who say the booming AI industry is largely unregulated and that many parents have no idea how their kids are using AI tools or how much personal information they are sharing with chatbots.

New research shows more than 70% of American teenagers have used AI companions, and more than half converse with them regularly. The study by Common Sense Media focused on 'AI companions' like Character.AI, Nomi and Replika, which it defines as 'digital friends or characters you can text or talk with whenever you want,' as distinct from AI assistants or tools like ChatGPT, though it notes those can be used the same way.

It's important that parents understand the technology. Experts suggest some things parents can do to help protect their kids:

— Start a conversation, without judgment, says Michael Robb, head researcher at Common Sense Media. Approach your teen with curiosity and basic questions: 'Have you heard of AI companions?' 'Do you use apps that talk to you like a friend?' Listen and understand what appeals to your teen before being dismissive or saying you're worried about it.

— Help teens recognize that AI companions are programmed to be agreeable and validating. Explain that's not how real relationships work, and that real friends with their own points of view can help navigate difficult situations in ways that AI companions cannot. 'One of the things that's really concerning is not only what's happening on screen but how much time it's taking kids away from relationships in real life,' says Mitch Prinstein, chief of psychology at the American Psychological Association. 'We need to teach kids that this is a form of entertainment. It's not real, and it's really important they distinguish it from reality and should not have it replace relationships in your actual life.' The APA recently put out a health advisory on AI and adolescent well-being, along with tips for parents.

— Parents should watch for signs of unhealthy attachments. 'If your teen is preferring AI interactions over real relationships or spending hours talking to AI companions, or showing that they are becoming emotionally distressed when separated from them — those are patterns that suggest AI companions might be replacing rather than complementing human connection,' Robb says.

— Parents can set rules about AI use, just like they do for screen time and social media. Have discussions about when and how AI tools can and cannot be used. Many AI companions are designed for adult use and can mimic romantic, intimate and role-playing scenarios. While AI companions may feel supportive, children should understand the tools are not equipped to handle a real crisis or provide genuine mental health support. If kids are struggling with depression, anxiety, loneliness, an eating disorder or other mental health challenges, they need human support, whether it is family, friends or a mental health professional.

— Get informed. The more parents know about AI, the better. 'I don't think people quite get what AI can do, how many teens are using it and why it's starting to get a little scary,' says Prinstein, one of many experts calling for regulations to ensure safety guardrails for children. 'A lot of us throw our hands up and say, "I don't know what this is! This sounds crazy!" Unfortunately, that tells kids: if you have a problem with this, don't come to me, because I am going to diminish it and belittle it.'

Older teenagers have advice, too, for parents and kids. Banning AI tools is not a solution, because the technology is becoming ubiquitous, says Ganesh Nair, 18. 'Trying not to use AI is like trying to not use social media today. It is too ingrained in everything we do,' says Nair, who is trying to step back from using AI companions after seeing them affect real-life friendships at his high school. 'The best way you can try to regulate it is to embrace being challenged.'

'Anything that is difficult, AI can make easy. But that is a problem,' says Nair. 'Actively seek out challenges, whether academic or personal. If you fall for the idea that easier is better, then you are the most vulnerable to being absorbed into this newly artificial world.'


Tom's Guide
ChatGPT now handles 2.5 billion prompts a day — and it's changing how we search
ChatGPT users are officially chatting at scale, and Google might want to start paying closer attention. According to Axios, ChatGPT now processes more than 2.5 billion prompts per day, with around 330 million of those coming from users in the U.S. alone. No wonder CEO Sam Altman is pushing for more GPUs.

That's a dramatic surge from just seven months ago, when the platform was averaging about 1 billion daily prompts. The sharp rise in usage signals a major shift in how people are turning to AI for everyday answers, ideas and productivity, and it raises an important question: what happens when we start asking ChatGPT more questions than we ask Google?

Google still dominates traditional search, with around 14 billion queries per day, but ChatGPT is catching up fast. While search engines are built to index the internet and surface relevant links, ChatGPT is trained to understand, summarize and synthesize language. That difference matters: more users are beginning to favor AI chat assistants for explanations, writing help, summaries and planning rather than sifting through web results.

OpenAI's growing numbers show that chatbots are becoming default tools for a wide range of tasks, from work to school to daily life. If ChatGPT becomes the first stop for everything from dinner recipes to customer service scripts, it could chip away at Google's dominance in user attention and, eventually, ad dollars. It also suggests that generative AI is becoming deeply embedded in everyday habits.

But there are trade-offs. ChatGPT's free tier still makes up the bulk of its user base, and while that scale is impressive, it also presents challenges. Running billions of prompts a day takes serious compute power, and OpenAI may need to adjust pricing, access or model behavior as it balances growth with sustainability.

If you're already using ChatGPT regularly, this moment confirms you're not alone. But it also highlights the importance of knowing when to use a chatbot versus a traditional search engine. Use ChatGPT when you want something explained or simplified, need help brainstorming or organizing ideas, or want support learning something new. A traditional search engine is the better fit if you're shopping locally, or looking something up and want a variety of sources; Google is also a good option for fact-checking, though you can use ChatGPT for that, too.

It's clear that ChatGPT is becoming a daily tool for hundreds of millions of users, and its 2.5 billion daily prompts show no signs of slowing. Whether this is a Google killer or a new kind of assistant remains to be seen. But one thing's clear: the way we search, learn and interact with information is evolving fast, and AI is at the center of it.


Android Authority
Proton's ChatGPT rival is prioritizing privacy with encrypted chats and zero logs
TL;DR

— The company behind Proton Mail and ProtonVPN has announced a new privacy-focused AI chatbot called 'Lumo.'
— Chats with Lumo are claimed to be end-to-end encrypted and stored directly on your devices.
— It offers both free and paid tiers and is available on Android and iOS, as well as through a web interface.

Proton, known for its eponymous VPN and mail apps, is joining the wave of companies embracing artificial intelligence. Earlier today, Proton announced its entry into the AI chatbot market, positioning itself against stalwarts like ChatGPT, Google Gemini and Microsoft's Bing Chat, but with a different approach.

Proton claims its Lumo AI chatbot abides by the same privacy code as the rest of its products. It states that the chatbot does not store any chats on its servers; chats remain encrypted and are therefore only accessible on your devices. Queries to Lumo go through Proton's data centers in Europe and are immune to disclosure demands by law enforcement agencies in countries like the US.

You can use the chatbot without signing up for or logging into an existing Proton account. Logging in enables a history of your chats, but even then those chats are stored locally on your device and do not sync across multiple devices.

Proton says its privacy-first approach ensures that data is never used to train or refine AI models. It doesn't reveal the encryption standard, but compares Lumo with other Proton services, such as Proton Mail, which use OpenPGP with AES-256 or ChaCha20 for end-to-end encryption. There is no information available about the underlying language models either, but the company says it utilizes open-source AI models built in Europe, and it denies any association with OpenAI or any American or Chinese AI company.

Like other chatbots, Lumo can process text and voice queries, source results directly from the web, write code, and summarize text-based file types such as PDF and DOC. However, it currently cannot handle media files, such as images or videos, meaning it can neither accept them as input nor generate them.

Lumo is free to use, but the unpaid tier comes with restrictions, such as slower processing, daily limits, and caps on file size. A paid tier, called Lumo Plus, unlocks unlimited chats, a longer chat history, and support for multiple uploads per query for €9.99 (~$11.70). Lumo is available on mobile for both Android and iOS, as well as through a web interface.