Latest news with #DFRLab


Express Tribune
3 days ago
Grok churns out fake facts about Israel-Iran war
Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said on Tuesday, raising fresh doubts about its reliability as a debunking tool. With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilising AI-powered chatbots – including xAI's Grok – in search of reliable information, but their responses are often themselves prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank. "Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analysed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was "struggling to authenticate AI-generated media."

Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found. It oscillated – sometimes within the same minute – between denying the airport's destruction and confirming it had been damaged by strikes, the study said. In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran. When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.

The Israel-Iran conflict, which led to US airstrikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts. AI chatbots also amplified falsehoods. As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support. When users asked the AI-operated X accounts of AI companies Perplexity and Grok about their validity, both wrongly responded that the claims were true, according to disinformation watchdog NewsGuard.

Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles. Last month, Grok came under renewed scrutiny for inserting "white genocide" in South Africa, a far-right conspiracy theory, into unrelated queries.


Euronews
4 days ago
Musk-owned AI chatbot struggled to fact-check Israel-Iran war
A new report reveals that Grok — the free-to-use AI chatbot integrated into Elon Musk's X — showed "significant flaws and limitations" when verifying information about the 12-day conflict between Israel and Iran (June 13-24), which now seems to have subsided.

Researchers at the Atlantic Council's Digital Forensic Research Lab (DFRLab) analysed 130,000 posts published by the chatbot on X in relation to the 12-day conflict and found that they provided inaccurate and inconsistent information. They estimate that around a third of those posts responded to requests to verify misinformation circulating about the conflict, including unverified social media claims and footage purporting to emerge from the exchange of fire.

"Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals and avoiding unsubstantiated claims," the report says. "The study emphasises the crucial importance of AI chatbots providing accurate information to ensure they are responsible intermediaries of information."

While Grok is not intended as a fact-checking tool, X users are increasingly turning to it to verify information circulating on the platform, including to understand crisis events. X has no third-party fact-checking programme, relying instead on so-called community notes, where users can add context to posts believed to be inaccurate. Misinformation surged on the platform after Israel first struck Iran on 13 June, triggering an intense exchange of fire.

Grok fails to distinguish authentic from fake

DFRLab researchers identified two AI-generated videos that Grok falsely labelled as "real footage" emerging from the conflict. The first of these videos shows what appears to be destruction at Tel Aviv's Ben Gurion Airport after an Iranian strike, but is clearly AI-generated. Asked whether it was real, Grok oscillated between conflicting responses within minutes. It claimed the fabricated video "likely shows real damage at Tel Aviv's Ben Gurion Airport from a Houthi missile strike on May 4, 2025," but later said the video "likely shows Mehrabad International Airport in Tehran, Iran, damaged during Israeli airstrikes on June 13, 2025."

Euroverify, Euronews' fact-checking unit, identified three further viral AI-generated videos that Grok falsely said were authentic when asked by X users. The chatbot linked them to an attack on Iran's Arak nuclear plant and to strikes on Israel's port of Haifa and the Weizmann Institute in Rehovot.

Euroverify has previously detected several out-of-context videos circulating on social platforms being misleadingly linked to the Israel-Iran conflict. Grok appears to have contributed to this phenomenon. The chatbot described a viral video as showing Israelis fleeing the conflict at the Taba border crossing with Egypt, when it in fact shows festival-goers in France. It also alleged that a video of an explosion in Malaysia showed an "Iranian missile hitting Tel Aviv" on 19 June.

Chatbots amplifying falsehoods

The findings of the report come after the 12-day conflict triggered an avalanche of false claims and speculation online. One claim, that China sent military cargo planes to Iran's aid, was widely boosted by the AI chatbots Grok and Perplexity, a three-year-old AI startup that has drawn widespread controversy for allegedly using the content of media companies without their consent. NewsGuard, a disinformation watchdog, said both chatbots had contributed to the spread of the claim.

The misinformation stemmed from misinterpreted data from the flight-tracking site Flightradar24, which was picked up by some media outlets and amplified artificially by the AI chatbots. Experts at DFRLab point out that chatbots rely heavily on media outlets to verify information but often cannot keep up with the fast-changing pace of news during global crises. They also warn of the distorting impact these chatbots can have as users become increasingly reliant on them to inform themselves: "As these advanced language models become an intermediary through which wars and conflicts are interpreted, their responses, biases, and limitations can influence the public narrative."


Express Tribune
4 days ago
Grok shows 'flaws' in fact-checking Israel-Iran war: study
Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool. With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots — including xAI's Grok — in search of reliable information, but their responses are often themselves prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank. "Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was "struggling to authenticate AI-generated media."


Time of India
5 days ago
Elon Musk's Grok shows 'flaws' in fact-checking Israel-Iran war: study
Highlights
- A study by the Digital Forensic Research Lab of the Atlantic Council revealed that Elon Musk's AI chatbot Grok provided inaccurate and contradictory responses regarding the Israel-Iran conflict, questioning its reliability as a fact-checking tool.
- The investigation found that Grok struggled to authenticate AI-generated media and frequently oscillated between confirming and denying the destruction of an airport in response to user inquiries.
- Elon Musk criticized Grok for its poor sourcing after it cited Media Matters, a media watchdog he has previously targeted in lawsuits, showcasing ongoing concerns about the chatbot's ability to provide reliable information.

Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool. With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots -- including xAI's Grok -- in search of reliable information, but their responses are often themselves prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank. "Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was "struggling to authenticate AI-generated media."

Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found. It oscillated -- sometimes within the same minute -- between denying the airport's destruction and confirming it had been damaged by strikes, the study said. In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran. When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.

The Israel-Iran conflict, which led to US air strikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts. AI chatbots also amplified falsehoods. As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support. When users asked the AI-operated X accounts of AI companies Perplexity and Grok about their validity, both wrongly responded that the claims were true, according to disinformation watchdog NewsGuard.

Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles. Last month, Grok came under renewed scrutiny for inserting "white genocide" in South Africa, a far-right conspiracy theory, into unrelated queries.

Musk's startup xAI blamed an "unauthorized modification" for the unsolicited response. Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people. Musk himself blasted Grok after it cited Media Matters -- a liberal media watchdog he has targeted in multiple lawsuits -- as a source in some of its responses about misinformation. "Shame on you, Grok," Musk wrote on X. "Your sourcing is terrible."


Economic Times
5 days ago
Elon Musk's Grok shows 'flaws' in fact-checking Israel-Iran war: study
Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said Tuesday, raising fresh doubts about its reliability as a debunking tool. With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilizing AI-powered chatbots -- including xAI's Grok -- in search of reliable information, but their responses are often themselves prone to misinformation.

"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank. "Grok demonstrated that it struggles with verifying already-confirmed facts, analyzing fake visuals, and avoiding unsubstantiated claims."

The DFRLab analyzed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was "struggling to authenticate AI-generated media."

Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found. It oscillated -- sometimes within the same minute -- between denying the airport's destruction and confirming it had been damaged by strikes, the study said. In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran. When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.

The Israel-Iran conflict, which led to US air strikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts. AI chatbots also amplified falsehoods. As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support. When users asked the AI-operated X accounts of AI companies Perplexity and Grok about their validity, both wrongly responded that the claims were true, according to disinformation watchdog NewsGuard.

Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles. Last month, Grok came under renewed scrutiny for inserting "white genocide" in South Africa, a far-right conspiracy theory, into unrelated queries. Musk's startup xAI blamed an "unauthorized modification" for the unsolicited response. Musk, a South African-born billionaire, has previously peddled the unfounded claim that South Africa's leaders were "openly pushing for genocide" of white people.

Musk himself blasted Grok after it cited Media Matters -- a liberal media watchdog he has targeted in multiple lawsuits -- as a source in some of its responses about misinformation. "Shame on you, Grok," Musk wrote on X. "Your sourcing is terrible."