Grok churns out fake facts about Israel-Iran war

Express Tribune, 2 days ago

Elon Musk's AI chatbot Grok produced inaccurate and contradictory responses when users sought to fact-check the Israel-Iran conflict, a study said on Tuesday, raising fresh doubts about its reliability as a debunking tool.
With tech platforms reducing their reliance on human fact-checkers, users are increasingly utilising AI-powered chatbots – including xAI's Grok – in search of reliable information, but their responses are often themselves prone to misinformation.
"The investigation into Grok's performance during the first days of the Israel-Iran conflict exposes significant flaws and limitations in the AI chatbot's ability to provide accurate, reliable, and consistent information during times of crisis," said the study from the Digital Forensic Research Lab (DFRLab) of the Atlantic Council, an American think tank.
"Grok demonstrated that it struggles with verifying already-confirmed facts, analysing fake visuals, and avoiding unsubstantiated claims."
The DFRLab analysed around 130,000 posts in various languages on the platform X, where the AI assistant is built in, to find that Grok was "struggling to authenticate AI-generated media."
Following Iran's retaliatory strikes on Israel, Grok offered vastly different responses to similar prompts about an AI-generated video of a destroyed airport that amassed millions of views on X, the study found.
It oscillated – sometimes within the same minute – between denying the airport's destruction and confirming it had been damaged by strikes, the study said.
In some responses, Grok cited a missile launched by Yemeni rebels as the source of the damage. In others, it wrongly identified the AI-generated airport as one in Beirut, Gaza, or Tehran.
When users shared another AI-generated video depicting buildings collapsing after an alleged Iranian strike on Tel Aviv, Grok responded that it appeared to be real, the study said.
The Israel-Iran conflict, which led to US airstrikes against Tehran's nuclear program over the weekend, has churned out an avalanche of online misinformation including AI-generated videos and war visuals recycled from other conflicts.
AI chatbots also amplified falsehoods.
As the Israel-Iran war intensified, false claims spread across social media that China had dispatched military cargo planes to Tehran to offer its support.
When users asked the AI-operated X accounts of Perplexity and Grok about the claim's validity, both wrongly responded that it was true, according to disinformation watchdog NewsGuard.
Researchers say Grok has previously made errors verifying information related to crises such as the recent India-Pakistan conflict and anti-immigration protests in Los Angeles.
Last month, Grok came under renewed scrutiny for inserting references to "white genocide" in South Africa, a far-right conspiracy theory, into responses to unrelated queries.


Related Articles

Meta wins copyright lawsuit

Express Tribune, 18 hours ago

A US judge on Wednesday handed Meta a victory over authors who accused the tech giant of violating copyright law by training Llama artificial intelligence on their creations without permission.

District Court Judge Vince Chhabria in San Francisco ruled that Meta's use of the works to train its AI model was "transformative" enough to constitute "fair use" under copyright law, in the second such courtroom triumph for AI firms this week.

However, it came with a caveat that the authors could have pitched a winning argument that by training powerful generative AI with copyrighted works, tech firms are creating a tool that could let a sea of users compete with them in the literary marketplace.

"No matter how transformative (generative AI) training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books," Chhabria said in his ruling.

Tremendous amounts of data are needed to train large language models powering generative AI. Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment. AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.

"We appreciate today's decision from the court," a Meta spokesperson said in response to an AFP inquiry. "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
In the case before Chhabria, a group of authors sued Meta for downloading pirated copies of their works and using them to train the open-source Llama generative AI, according to court documents. Books involved in the suit include Sarah Silverman's comic memoir The Bedwetter and Junot Diaz's Pulitzer Prize–winning novel The Brief Wondrous Life of Oscar Wao, the documents showed.

"This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," the judge stated. "It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."

Market harming?

A different federal judge in San Francisco on Monday sided with AI firm Anthropic regarding training its models on copyrighted books without authors' permission. District Court Judge William Alsup ruled that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.

"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision. "The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added, comparing AI training to how humans learn by reading books.

The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train chatbot Claude, the company's ChatGPT rival. Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.

Wimbledon's human touch yields to electronic eyes but officials embrace new role

Business Recorder, a day ago

BENGALURU: The All England Club's decision to jettison line judges in favour of technology carries an air of inevitability as the world embraces AI, but the human arbiters of the boundaries of the tennis court are hoping to continue playing a key role.

Convention has almost been a religion during Wimbledon's 148-year history, but advancements in technology have been impossible to resist, with live Electronic Line Calling (ELC) set to take over from impeccably-attired line judges when action begins at the grasscourt major on Monday.

The tournament's once-robust pool of around 300 line judges has been cut to 80, and they will serve as 'match assistants' who support chair umpires and step in should the ELC - powered by more than 450 cameras - fail on any of the 18 courts in use.

The Association of British Tennis Officials (ABTO) said the new position, which will be adopted at events that use live ELC, provided a fresh avenue for its officials, with strong interest expressed in the role. 'Whilst this evolution has resulted in a reduction in the overall officiating days for line umpires, the impact has been partially offset by the creation of the match assistant position,' the ABTO told Reuters via email.

The body noted that although line judges will no longer be used at Wimbledon or ATP tournaments, there were still opportunities for them at other levels, including at many WTA events and ITF World Tennis Tour tournaments. Interest in the traditional role is likely to be sustained, with the pathway to becoming a match assistant on the grandest stage involving training as a line umpire.

First deployed as an experiment at the Next Gen ATP Finals in Milan in 2017, the ELC system was adopted more widely during the COVID-19 pandemic before eventually being used across all ATP Tour events from this year. The Australian Open and US Open have also replaced line judges with ELC, but the French Open has not favoured the switch despite the availability of technology specific to claycourts, as traces left by the ball help umpires with their decision-making.

Largely popular

The ELC system is largely popular among the players, even if some, including world number one Aryna Sabalenka and three-times Grand Slam finalist Alexander Zverev, expressed their disbelief at decisions during the recent claycourt season. Tournaments relying on the human eye are not entirely immune to controversial calls, however, and the All England Club's move, which comes after extensive testing last year, is likely to ruffle the feathers of only the most staunch traditionalists.

Britain's Lawn Tennis Association (LTA) said it understood the decision amid changes to officiating globally and expressed its commitment to continue developing officials in the country. 'We are working with the ABTO to develop a joint strategy that will ensure officials can be retained within the sport, new officials can be recruited and the officiating community will be supported through the changes,' the LTA said.

Line judges often bring their own theatrical element to the sport with their distinctive voices, postures, and interactions with players, but All England Club chief executive Sally Bolton said many of them understood that change would come. 'The time is right for us to move on,' said Bolton. 'We absolutely value the commitment that those line umpires have provided to the Championships over many years. We do have a significant number of them coming back in a new role … so we're really pleased to have many of them still involved with delivering the Championships.'
