Latest news with #Theworldofbeasts


Indian Express
20-06-2025
- Entertainment
Can you trust what you see? How AI videos are taking over your social media
A few days ago, a video claiming to show a lion approaching a man asleep on the streets of Gujarat, sniffing him and walking away, took social media by storm. It looked like CCTV footage. The clip was dramatic and surreal, but completely fake. It was made using Artificial Intelligence (AI), but that didn't stop it from going viral. The video was even picked up by some news outlets and reported as if it were a real incident, without any verification. It originated from a YouTube channel, The world of beasts, which inconspicuously mentions 'AI-assisted designs' in its bio.

In another viral clip, a kangaroo, allegedly an emotional support animal, was seen attempting to board a flight with its human. Again, viewers were fascinated, and many believed the clip to be real. The video first appeared on the Instagram account 'Infinite Unreality,' which openly brands itself as 'Your daily dose of unreality.'

The line between fiction and reality, now more than ever, isn't always obvious to casual users. From giant anacondas swimming freely through rivers to a cheetah saving a woman from danger, AI-generated videos are flooding platforms, often blurring the boundary between the unbelievable and the impossible. With AI tools becoming more advanced and accessible, these creations are growing in number and becoming more sophisticated.

To understand just how widespread the problem of AI-generated videos is, and why it matters, The Indian Express spoke to experts working at the intersection of technology, media, and misinformation. 'Not just the last year, not just the last month, even in the last couple of weeks, I've seen the volume of such videos increase,' said Ben Colman, CEO of deepfake detection firm Reality Defender. He gave a recent example: a 30-second commercial by betting platform Kalshi that aired a couple of weeks ago, during Game 3 of the 2025 NBA Finals. The video was made using Google's new AI video tool, Veo 3.
'It's blown past the uncanny valley, meaning it's infinitely more believable, and more videos like this are being posted to social platforms today compared to the day prior and so on,' Colman said.

Sam Gregory, executive director of WITNESS, a non-profit that trains activists in using tech for human rights, said, 'The quantity and quality of synthetic audio have rapidly increased over the past year, and now video is catching up. New tools like Veo generate photorealistic content that follows physical laws, matches visual styles like interviews or news broadcasts, and syncs with controllable audio prompts.'

The reason platforms like Instagram, Facebook, TikTok, and YouTube push AI-generated videos is, beyond technical novelty, not very complex: such videos grab user attention, something all platforms are desperate for. Colman said, 'These videos make the user do a double-take. Negative reactions on social media beget more engagement and longer time on site, which translates to more ads consumed.'

'Improvements in fidelity, motion, and audio have made it easier to create realistic memetic content. People are participating in meme culture using AI like never before,' said Gregory. According to Ami Kumar, founder of Social & Media Matters, 'The amplification is extremely high. Unfortunately, platform algorithms prioritise quantity over quality, promoting videos that generate engagement regardless of their accuracy or authenticity.'

Gregory, however, said that demand also plays a role. 'Once you start watching AI content, your algorithm feeds you more. 'AI slop' is heavily monetised,' he said. 'Our own PhDs have failed to distinguish real photos or videos from deepfakes in internal tests,' Colman admitted.

Are the big platforms prepared to put labels and checks on AI-generated content? Not yet. Colman said most services rely on 'less-than-bare-minimum provenance watermark checks,' which many generators ignore or can spoof.
Gregory warned that 'research increasingly shows the average person cannot distinguish between synthetic and real audio, and now, the same is becoming true for video.'

When it comes to detection, Gregory pointed to an emerging open standard, C2PA (Coalition for Content Provenance and Authenticity), that could track the origins of images, audio and video, but it is 'not yet adopted across all platforms.' Meta, he noted, has already shifted from policing the use of AI to policing only content deemed 'deceptive and harmful.'

Talking about AI-generated video detection, Kumar said, 'The gap is widening. Low-quality fakes are still detectable, but the high-end ones are nearly impossible to catch without advanced AI systems like the one we're building at Contrails.' However, he is cautiously optimistic that the regulatory tide, especially in Europe and the US, will force platforms to label AI output. 'I see the scenario improving in the next couple of years, but sadly loads of damage will be done by then,' he said.

A good question to ask is, 'Who is making all these clips?' And the answer is: everyone. 'My kids know how to create AI-generated videos, and the same tools are used by hobbyists, agencies, and state actors,' Colman said. Gregory agreed. 'We are all creators now,' he said. 'AI influencers, too, are a thing. Every new model spawns fresh personalities,' he added, noting a growing trend of commercial actors producing AI slop: cheap, fantastical content designed to monetise attention.

Kumar estimated that while 90 per cent of such content is made for fun, the remaining 10 per cent is causing real-world harm through financial, medical, or political misinformation. A case in point is the falsified migrant-landing footage shared by United Kingdom-based activist Tommy Robinson.
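The C2PA standard mentioned above works by embedding a signed provenance manifest inside the media file itself, which software can then look for and verify. As a minimal illustrative sketch of only the first half of that idea, the function below (a hypothetical helper, not part of any real C2PA tool) scans a file's raw bytes for the 'c2pa' manifest label. A real verifier must parse the embedded JUMBF box structure and validate the manifest's cryptographic signatures with a proper C2PA library; a byte scan like this proves nothing about authenticity and is trivially spoofable, which is precisely the weakness Colman describes in lightweight provenance checks.

```python
def looks_like_c2pa(data: bytes) -> bool:
    """Naive heuristic: does this byte stream contain the C2PA manifest label?

    Illustrative only. Presence of the label does not mean the manifest is
    valid, and absence does not mean the file is AI-generated; it merely
    shows where an embedded provenance record would live.
    """
    return b"c2pa" in data


# Fabricated byte strings standing in for real image files (hypothetical data):
with_manifest = b"\xff\xd8...jumb...c2pa...signed-manifest..."
without_manifest = b"\xff\xd8\xff\xe0JFIF-plain-image"

print(looks_like_c2pa(with_manifest))    # True
print(looks_like_c2pa(without_manifest)) # False
```

The gap between this sketch and real verification (signature validation, trust lists, tamper detection) is why the article notes that many generators can simply ignore or spoof weak provenance checks.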
Colman described AI as a creative aid, not a replacement, and insisted that intentional deception should be clearly separated from artistic expression. 'It becomes manipulation when people's emotions or beliefs are deliberately exploited,' he said. Gregory pointed out one of the challenges: satire and parody can easily be misinterpreted when stripped of context. Kumar took a pragmatic stance: 'Intent and impact matter most. If either is negative, malicious, or criminal, it's manipulation.'

The stakes leap when synthetic videos enter conflict zones and elections. Gregory recounted how AI clips have misrepresented confrontations between protesters and US troops in Los Angeles. 'One fake National Guard video racked up hundreds of thousands of views,' he said. Kumar said deepfakes have become routine in wars from Ukraine to Gaza and in election cycles from India to the US.

Colman called for forward-looking laws: 'We need proactive legislation mandating detection or prevention of AI content at the point of upload. Otherwise, we're only penalising yesterday's problems while today's spiral out of control.' Gregory advocated for tools that reveal a clip's full 'recipe' across platforms, while warning of a 'detection-equity problem': current tools often fail to catch AI content in non-English languages or compressed formats. Kumar demanded 'strict laws and heavy penalties for platforms and individuals distributing AI-generated misinformation.'

'If we lose confidence in the evidence of our eyes and ears, we will distrust everything,' Gregory warned. 'Real, critical content will become just another drop in a flood of AI slop. And this scepticism can be weaponised to discredit real journalism, real documentation, and real harm.' Synthetic content is, clearly, here to stay.
Whether it becomes a tool for creativity or a weapon of mass deception will depend on the speed at which platforms, lawmakers and technologists can build, and adopt, defences that keep the signal from being drowned by the deepfake noise.


India Today
08-06-2025
Fact Check: Here is why this big cat DID NOT disturb a sleeping man
A video spreading like wildfire on social media shows a lion roaming around a man asleep on a quiet street at night. The animal sniffs him and then, remarkably, leaves without causing any harm to the person. Those sharing the video called it a miracle; some also wondered why the lion chose not to attack. They also claimed that the video was from India. A Facebook user captioned the video: 'Miracle in India: Sleeping Man Survives Close Encounter with Lion.' India Today Fact Check, however, found that the video is not of a real incident but AI-generated.

PROBE

While sharing the video, an X user credited it to a YouTube channel named 'The world of beasts'. We found the video on the channel, uploaded on June 6. The description of the video, in Portuguese, translates to: 'Lion finds man sleeping on the street in Gujarat!' And it was clearly mentioned in the description that the video was either altered or synthetic.

Going through the channel, we found that it shares only AI-generated videos of animals. The 'About' section of the channel also notes that these videos are created using AI.

Apart from this, there are also some visual inconsistencies in the video which show that it is synthetic. For instance, the text written on the boards of the closed shops is gibberish; we tried to translate it using Google Lens, but it didn't work. Also, the sleeping posture of the man appears abnormal: he is lying on his stomach, but his legs are not aligned with the upper part of his body.

We also checked the video using Google's SynthID Detector, which helps determine whether an image or video was created using Google AI. The detector confirmed that the video was made with Google AI. Therefore, we concluded that the video is not real but AI-generated.