
Paraglider Peng Yujiang's viral video may not be as terrifying as he claimed; here's how AI may have played a part
Chinese paraglider Peng Yujiang made headlines for surviving a strong cloud vortex and accidentally rising to 8,000 metres. However, an NBC News report has suggested that the terrifying viral video may have been generated, at least partially, by artificial intelligence.
Peng had no oxygen mask yet survived extreme cold and high wind speeds. He suffered frostbite and low oxygen levels but recorded the entire 72-minute flight.
'It was terrifying... Everything was white. I couldn't see any direction. Without the compass, I wouldn't have known which way I was going. I thought I was flying straight, but in reality, I was spinning,' Peng told the Chinese media.
According to the initial investigation, the first five seconds of the viral video might be AI-generated. In the viral clip, Peng can be seen gliding at high altitude with his legs dangling, but the footage is reportedly cropped.
NBC News said Peng's video was cropped to omit the logo of Doubao AI, suggesting that the ByteDance-owned tool likely created at least the first five seconds of the viral clip.
An uncropped version of the video, bearing Doubao's watermark, was uploaded separately to Facebook on May 25.
'It's unclear if the remaining footage of Peng gliding through the sky, which differs from the first five seconds, is authentic or not,' the news outlet said.
GetReal Labs, an AI-verification company, corroborated the claims, stating that its analysis of the footage found evidence of AI use.
'We were able to extract a few frames and analyse them using our Inspect platform, and our models confirm that the images are synthetic,' said GetReal Labs. It also said that several elements in the clip's opening differed from the rest of the footage.
News agency Reuters, which distributed the clip without the AI logo, has since removed the video. Other news outlets have also removed their versions of the video.
'We have reason to believe this is an AI-generated video and are currently working on killing this footage,' Reuters said.
The Chinese paraglider has now been banned from flying for six months.
Authorities in Gansu punished him for not submitting a flight plan and banned his companion, Gu Zhimin, for sharing the video online without permission.
Peng Yujiang's flight was not officially approved. Nevertheless, he claimed he was doing ground paragliding training when strong winds lifted him up.
The Gansu Aero Sports Association called it an accident, not illegal flying. Still, according to Sixth Tone, it suspended him from flying for six months.

Related Articles


Hindustan Times
Warning: Using ChatGPT for these 10 things could put you at serious risk
From writing emails and planning trips to solving maths problems and fixing code, ChatGPT has become a go-to tool for many of us. Some people use it to write essays; others ask it to suggest recipes, learn languages or even decide what to watch next on Netflix. It's fast, helpful and always available. That's what makes it so tempting to rely on.

But just because ChatGPT can answer our questions doesn't mean it should. The more we use it, the more we start trusting it with things that may be too personal, sensitive or even risky. And that's where the problems begin. So while it's great for basic tasks or quick explanations, there are some things you should never use ChatGPT for. Here's a list of 10 situations where it's better to stop and think before asking AI for help:

1. Diagnosing health problems
Feeling unwell? It's tempting to ask ChatGPT what's wrong, but its answers can be way off, jumping from flu to cancer in seconds. It can't examine you or run tests like a real doctor. At best, it can help you prepare questions for your appointment.

2. Dealing with mental health
Have you been turning to ChatGPT to talk when you're stressed, anxious or dealing with something heavy? It might offer calming tips, but it's not a therapist. It feels like it listens, but it can't truly listen, understand emotions or guide you through hard times like a real person can.

3. Making emergency decisions
In an emergency, like a gas leak, fire or health scare, don't waste time asking ChatGPT what to do. It can't sense danger or call for help, and every second matters in a crisis. Step outside, call emergency services and stay safe. Use ChatGPT later to understand what happened, not while it's happening.

4. Planning your taxes or finances
ChatGPT can help explain financial terms, but it doesn't know your income, expenses or tax situation. Its advice might be outdated or too general, and it can miss important deductions or give incorrect guidance. Sharing sensitive information like your bank details or Social Security number can put you at risk. For tax or financial planning, it's always safer to consult a real expert.

5. Sharing confidential or personal information
Avoid putting private or sensitive information into ChatGPT. This includes legal documents, medical records, ID details, or anything protected by privacy laws. Once you enter it, you lose control over where that data goes. It could be stored, reviewed or even used to train future models. If you wouldn't share it publicly, don't share it with a chatbot.

6. Doing anything illegal
Asking ChatGPT to help with something shady is a bad idea. Not only is it wrong, it can also get you into serious trouble.

7. Checking breaking news or real-time updates
ChatGPT can now pull live information like stock prices and news headlines, but it doesn't update automatically; you have to keep asking for new data each time. For real-time updates, it's better to follow news websites, official alerts or live feeds. ChatGPT is helpful, but not a replacement for breaking news sources.

8. Gambling
Using ChatGPT to place bets might seem fun, but it's risky. It can get player stats, injuries or scores wrong, and it can't predict future results. Even if it sounds confident, it's still guessing. Gambling on AI advice can lead to losses.

9. Writing legal documents
ChatGPT can explain legal terms, but it shouldn't be used to write wills or legal contracts. Laws vary by state and even by county, and small mistakes, like a missing signature, can make a document invalid. Use ChatGPT to prepare questions or understand the basics, but always let a licensed lawyer handle the final document for legal safety.

10. Creating original art
You can use ChatGPT to brainstorm ideas, but calling AI-made content your own is unfair to real artists. Be honest about what's human and what's not.


New Indian Express
AI's role in mental health grows, sparks expert concern
BENGALURU: As the use of AI expands to address mental health, stress and related concerns, experts have pointed out the need to understand how the various tools are being used and what type of information they disseminate. Dr Prabha S Chandra, Professor and Head of Psychiatry, NIMHANS, said the information has its positive and negative sides, and noted that patients are using AI as a convenient tool to seek solutions.

Experts said people are turning to AI because they are not comfortable sharing their personal details with strangers and fear being judged; most of these users are between 18 and 25 years of age. Conversation is essential to beating mental health issues. On one hand, AI can reduce the load on therapists, but on the other, there is a need to understand and evaluate the process, as in the long run it could have adverse consequences, said Dr Sanjeev Jain, ADBS coordinator and principal investigator at NIMHANS.

Officials in the Karnataka health department said AI was becoming a challenge for their outreach in peri-urban and rural areas, as people are turning to AI for solutions to their problems. 'The solutions AI gives cannot be used for all, as one size does not fit all. The number of people approaching us is less,' said a mental health expert working with the health department. Experts at the IISc Centre for Brain Research said, 'Use of AI is affecting cognitive skills and brain development. People are already stressed because of their over-dependence on it.' Dr Ajit V Bhide, noted consultant psychiatrist and former president of the Indian Psychiatric Society, said AI should never be recommended as the first choice, especially when seeking medication.


Times of India
A new divide: Nations with AI data centres & those without
Last month, Sam Altman, the CEO of the artificial intelligence company OpenAI, donned a helmet to visit the construction site of the company's new data centre project in Texas. Bigger than New York's Central Park, the estimated $60 billion project will be one of the most powerful computing hubs ever created when it is completed, as soon as next year.

Around the same time as Altman's visit to Texas, Nicolas Wolovick, a computer science professor at the National University of Cordoba in Argentina, was running what counts as one of his country's most advanced AI computing hubs: a converted room at the university, where wires snaked between ageing AI chips and server computers. "We are losing," Wolovick said.

AI has created a new digital divide, fracturing the world between nations with the computing power to build cutting-edge AI systems and those without. The split is influencing geopolitics and global economics, creating new dependencies and prompting a desperate rush not to be excluded from the technology race.

The biggest beneficiaries by far are the US, China and the EU, which host over half of the world's most powerful data centres, the facilities used for developing the most complex AI systems, according to data compiled by Oxford University researchers. Only 32 countries, or about 16% of nations, have these large facilities filled with microchips and computers. US and Chinese companies operate over 90% of the data centres that other companies and institutions use for AI work. Africa and South America have almost no AI computing hubs, while India has five and Japan four. More than 150 countries have nothing. (NYT)