Iran-Israel fighting distorted by tech-fuelled misinformation
The information warfare unfolding alongside ground combat, sparked by Israel's strikes on Iran's nuclear facilities and military leadership, underscores a digital crisis in the age of rapidly advancing AI tools that have blurred the lines between truth and fabrication.
The surge in wartime misinformation has exposed an urgent need for stronger detection tools, experts say, as major tech platforms have largely weakened safeguards by scaling back content moderation and reducing reliance on human fact-checkers.
After Iran struck Israel with barrages of missiles last week, AI-generated videos falsely claimed to show damage inflicted on Tel Aviv and Ben Gurion Airport.
The videos were widely shared across Facebook, Instagram and X.
Using a reverse image search, AFP's fact-checkers found that the clips were originally posted by a TikTok account that produces AI-generated content.
There has been a "surge in generative AI misinformation, specifically related to the Iran-Israel conflict," Ken Jon Miyachi, founder of the Austin-based firm BitMindAI, told AFP.
"These tools are being leveraged to manipulate public perception, often amplifying divisive or misleading narratives with unprecedented scale and sophistication."
GetReal Security, a U.S. company focused on detecting manipulated media including AI deepfakes, also identified a wave of fabricated videos related to the Israel-Iran conflict.
The company linked the visually compelling videos, depicting apocalyptic scenes of war-damaged Israeli aircraft and buildings as well as Iranian missiles mounted on a trailer, to Google's Veo 3 AI generator, known for hyper-realistic visuals.
The Veo watermark is visible at the bottom of one online video, posted by the news outlet Tehran Times, that claims to show "the moment an Iranian missile" struck Tel Aviv.
"It is no surprise that as generative-AI tools continue to improve in photo-realism, they are being misused to spread misinformation and sow confusion," said Hany Farid, the co-founder of GetReal Security and a professor at the University of California, Berkeley.
Farid offered one tip for spotting such deepfakes: Veo 3 videos are typically eight seconds long, or stitched together from multiple clips of roughly that duration.
"This eight-second limit obviously doesn't prove a video is fake, but should be a good reason to give you pause and fact-check before you re-share," he said.
The falsehoods are not confined to social media.
Disinformation watchdog NewsGuard has identified 51 websites that have advanced more than a dozen false claims ranging from AI-generated photos purporting to show mass destruction in Tel Aviv to fabricated reports of Iran capturing Israeli pilots.
Sources spreading these false narratives include Iranian military-linked Telegram channels and state media outlets affiliated with the Islamic Republic of Iran Broadcasting (IRIB), which has been sanctioned by the US Treasury Department, NewsGuard said.
"We're seeing a flood of false claims and ordinary Iranians appear to be the core targeted audience," McKenzie Sadeghi, a researcher with NewsGuard, told AFP.
Sadeghi described Iranian citizens as "trapped in a sealed information environment," where state media outlets dominate in a chaotic attempt to "control the narrative."
Iran itself claimed to be a victim of tech manipulation, with local media reporting that Israel briefly hacked a state television broadcast, airing footage of women's protests and urging people to take to the streets.
Adding to the information chaos were online clips lifted from war-themed video games.
AFP's fact-checkers identified one such clip posted on X, which falsely claimed to show an Israeli jet being shot down by Iran. The footage bore striking similarities to the military simulation game Arma 3.
Israel's military has rejected Iranian media reports claiming its fighter jets were downed over Iran as "fake news."
Chatbots such as xAI's Grok, which online users are increasingly turning to for instant fact-checking, falsely identified some of the manipulated visuals as real, researchers said.
"This highlights a broader crisis in today's online information landscape: the erosion of trust in digital content," BitMindAI's Miyachi said.
"There is an urgent need for better detection tools, media literacy, and platform accountability to safeguard the integrity of public discourse."