
Musk-owned AI chatbot struggled to fact-check Israel-Iran war
A new report reveals that Grok — the free-to-use AI chatbot integrated into Elon Musk's X — showed "significant flaws and limitations" when verifying information about the 12-day conflict between Israel and Iran (June 13-24), which now seems to have subsided.
Researchers at the Atlantic Council's Digital Forensic Research Lab (DFRLab) analysed 130,000 posts published by the chatbot on X in relation to the 12-day conflict, and found they provided inaccurate and inconsistent information.
They estimate that around a third of those posts responded to requests to verify misinformation circulating about the conflict, including unverified social media claims and footage purportedly showing the exchanges of fire.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals and avoiding unsubstantiated claims," the report says.
"The study emphasises the crucial importance of AI chatbots providing accurate information to ensure they are responsible intermediaries of information."
While Grok is not intended as a fact-checking tool, X users are increasingly turning to it to verify information circulating on the platform, including to understand crisis events.
X has no third-party fact-checking programme, relying instead on so-called community notes where users can add context to posts believed to be inaccurate.
Misinformation surged on the platform after Israel first struck Iran on 13 June, triggering an intense exchange of fire.
Grok fails to distinguish authentic from fake
DFRLab researchers identified two AI-generated videos that Grok falsely labelled as "real footage" emerging from the conflict.
The first of these videos appears to show destruction at Tel Aviv's Ben Gurion Airport after an Iranian strike, but is clearly AI-generated. Asked whether it was real, Grok oscillated between conflicting responses within minutes.
It falsely claimed that the fabricated video "likely shows real damage at Tel Aviv's Ben Gurion Airport from a Houthi missile strike on May 4, 2025," but later claimed the video "likely shows Mehrabad International Airport in Tehran, Iran, damaged during Israeli airstrikes on June 13, 2025."
Euroverify, Euronews' fact-checking unit, identified three further viral AI-generated videos which Grok falsely said were authentic when asked by X users. The chatbot linked them to an attack on Iran's Arak nuclear plant and strikes on Israel's port of Haifa and the Weizmann Institute in Rehovot.
Euroverify has previously detected several out-of-context videos circulating on social platforms being misleadingly linked to the Israel-Iran conflict.
Grok seems to have contributed to this phenomenon. The chatbot described a viral video as showing Israelis fleeing the conflict at the Taba border crossing with Egypt, when it in fact shows festival-goers in France.
It also alleged that a video of an explosion in Malaysia showed an "Iranian missile hitting Tel Aviv" on 19 June.
Chatbots amplifying falsehoods
The findings of the report come after the 12-day conflict triggered an avalanche of false claims and speculation online.
One claim, that China sent military cargo planes to Iran's aid, was widely boosted by the AI chatbots Grok and Perplexity, the latter the product of a three-year-old AI startup that has drawn widespread controversy for allegedly using media companies' content without their consent.
NewsGuard, a disinformation watchdog, claimed both these chatbots had contributed to the spread of the claim.
The misinformation stemmed from misinterpreted data from the flight-tracking site Flightradar24, which was picked up by some media outlets and then amplified by the AI chatbots.
Experts at DFRLab point out that chatbots rely heavily on media outlets to verify information, but often cannot keep up with fast-moving news during global crises.
They also warn against the distorting impact these chatbots can have as users become increasingly reliant on them to inform themselves.
"As these advanced language models become an intermediary through which wars and conflicts are interpreted, their responses, biases, and limitations can influence the public narrative."

Related Articles


AFP
21 hours ago
Mass explosion video is AI-generated, not US attack on Iran
"Heavy water nuclear plant at Arak. I guess Trump wasn't bluffing after all - just part of the fireworks in Iran. Expect to see more surprises," reads a June 21, 2025 Facebook post sharing the clip of a mushroom cloud detonation over a residential area.

[Image: a screenshot of the Facebook reel, taken June 26, 2025]

The video, which also circulated in Spanish and in French, spread as violence escalated in the Middle East, with Israel bombarding Iran and the US military striking its nuclear installations before a ceasefire was reached. The US military attacked three Iranian sites key to Tehran's nuclear program on the night of June 21, hitting targets in the provinces of Natanz, Isfahan and the mountain-buried Fordo. The strikes added to a 12-day Israeli campaign that also targeted the country's top military brass and saw Iran retaliate by firing waves of missiles at Israel.

US President Donald Trump has insisted the operation was a "spectacular military success" that "obliterated" Iran's nuclear sites, despite an intelligence assessment that raised doubts and claims from the Iranian government that it had "taken the necessary measures" to ensure the continuation of its program.

Arak's heavy water reactor was attacked June 19 by Israel, not the US military, according to the Israel Defense Forces (archived here). But the video of the massive blast circulating on social media is AI-generated.

A Google reverse image search uncovered an identical video posted June 18 to YouTube by the Turkey-based account "@cmlacyn" (archived here). The video's title -- as well as comments from the author -- reference AI usage. The owner of the account, Cemil Aciyan, states in his bio that "all videos on this channel are produced with artificial intelligence" (archived here). In a June 20 direct message on Instagram, Aciyan confirmed to AFP: "I created all the videos on my channel with artificial intelligence."
A search on Aciyan's other social media platforms yielded results for the same video on Instagram with the caption: "It's not real, I produced it with artificial intelligence" (archived here). AFP has debunked a slew of online misinformation about Iran here.

LeMonde
a day ago
Brazil's Supreme Court makes social media directly liable for illegal content
Brazil's Supreme Court on Thursday, June 26, ruled that digital platforms must act immediately to remove hate speech and content that promotes serious crimes, in a key ruling on the liability of Big Tech for illegal posts. Brazil, where a Supreme Court judge famously took Elon Musk's X offline last year for 40 days over disinformation, has gone further than any other Latin American country in clamping down on questionable or illegal social media posts.

Thursday's ruling makes social media platforms liable for third-party content deemed illegal, even without a court order. Eight of the 11 judges ruled that an article of the 2014 Internet Civil Framework, which holds that the platforms are liable for questionable content only if they refuse to comply with a court order to remove it, was partially unconstitutional.

A majority of judges ruled that platforms must act "immediately" to remove content that promotes anti-democratic actions, terrorism, hate speech, child pornography and other serious crimes. For other types of illegal content, companies may be held liable for damages if they fail to remove it after it is flagged by a third party.

The ruling is likely to deepen tensions between the Supreme Court and the technology companies, which accuse Brazil of censorship. "We preserve freedom of expression as much as possible, without, however, allowing the world to fall into an abyss of incivility, legitimizing hate speech or crimes indiscriminately committed online," the court's president, Justice Luis Roberto Barroso, wrote. Justice Kassio Nunes, one of the three dissenting judges, argued, however, that "civil liability rests primarily with those who caused the harm" and not with the platforms.

