'Stuck in limbo': Over 90% of X's Community Notes unpublished, study says

Japan Today, 2 days ago
By Anuj CHOPRA
More than 90 percent of X's Community Notes -- a crowd-sourced verification system popularized by Elon Musk's platform -- are never published, a study said, highlighting major limits in its effectiveness as a debunking tool.
The study by the Digital Democracy Institute of the Americas (DDIA), which analyzed X's entire public dataset of 1.76 million notes submitted between January 2021 and March 2025, comes as the platform's CEO Linda Yaccarino resigned after two years at the helm.
The community-driven moderation model -- now embraced by major tech platforms including Facebook-owner Meta and TikTok -- allows volunteers to contribute notes that add context or corrections to posts.
Other users then rate the proposed notes as "helpful" or "not helpful." If the notes get "helpful" ratings from enough users with diverse perspectives, they are published on X, appearing right below the challenged posts.
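To make that gate concrete, here is a deliberately simplified sketch of the "diverse perspectives" check in Python. It is not X's actual algorithm (the open-sourced Community Notes scorer rates notes with a matrix-factorization model over the full ratings history); the Rating class, the should_publish function, its thresholds, and the cluster labels are all invented for illustration.

```python
# Simplified illustration of the "diverse perspectives" rule that decides
# whether a Community Note is shown. This is NOT X's actual scoring code;
# the thresholds and cluster labels below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_cluster: str   # viewpoint cluster inferred from a rater's history
    helpful: bool        # did this rater mark the note "helpful"?

def should_publish(ratings: list[Rating],
                   min_helpful: int = 5,
                   min_clusters: int = 2) -> bool:
    """A note goes live only if enough raters found it helpful AND those
    raters come from at least `min_clusters` different viewpoint clusters."""
    helpful = [r for r in ratings if r.helpful]
    clusters = {r.rater_cluster for r in helpful}
    return len(helpful) >= min_helpful and len(clusters) >= min_clusters

# Six "helpful" ratings, all from one side: the note stays unpublished.
ratings = [Rating("cluster_a", True) for _ in range(6)]
print(should_publish(ratings))   # False

# Helpful ratings from a second cluster push it over the bar.
ratings += [Rating("cluster_b", True), Rating("cluster_b", True)]
print(should_publish(ratings))   # True
```

The point the sketch captures is the one the study keeps returning to: a note that is never rated, or rated helpful by only one side, never clears the bar, however accurate it may be.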
"The vast majority of submitted notes -- more than 90 percent -- never reach the public," DDIA's study said. "For a program marketed as fast, scalable, and transparent, these figures should raise serious concerns."
Among English-language notes, the publication rate dropped from 9.5 percent in 2023 to just 4.9 percent in early 2025, the study said.
Spanish-language notes, however, showed some growth, with the publication rate rising from 3.6 percent to 7.1 percent over the same period, it added.
A vast number of notes remain unpublished due to lack of consensus among users during rating.
Thousands of notes also go unrated, possibly never seen and never assessed, according to the report.
"As the volume of notes submitted grows, the system's internal visibility bottleneck becomes more apparent –- especially in English," the study said. "Despite a rising number of contributors submitting notes, many notes remain stuck in limbo, unseen and unevaluated by fellow contributors, a crucial step for notes to be published."
In a separate finding, DDIA's researchers identified not a human but a bot-like account -- dedicated to flagging crypto scams -- as the most prolific contributor to the program in English, submitting more than 43,000 notes between 2021 and March 2025.
However, only 3.1 percent of those notes went live, suggesting most went unseen or failed to gain consensus, the report said.
The study also noted that the time it takes for a note to go live has improved over the years, dropping from an average of more than 100 days in 2022 to 14 days in 2025.
"Even this faster timeline is far too slow for the reality of viral misinformation, timely toxic content, or simply errors about real-time events, which spread within hours, not weeks," DDIA's report said.
The findings are significant as tech platforms increasingly view the community-driven model as an alternative to professional fact-checking, which conservative advocates in countries such as the United States have long accused of a liberal bias.
Studies have shown Community Notes can work to dispel some falsehoods such as vaccine misinformation, but researchers have long cautioned that the system works best on topics where there is broad consensus.
Some researchers have also cautioned that Community Notes users can act out of partisan motives and tend to target their political opponents.
X expanded Community Notes during the tenure of Yaccarino, who said on Wednesday that she had decided to step down after leading the company through a major transformation.
No reason was given for her exit, but the resignation came as Musk's artificial intelligence chatbot Grok triggered an online firestorm over its anti-Semitic comments that praised Adolf Hitler and insulted Islam in separate posts on X.
© 2025 AFP

Related Articles

7-Eleven Japan powers up even more with new baked-in-store breads and pastries【Taste test】

SoraNews24, 4 hours ago

From fresh-baked melon bread to sausage sandwiches, there are now more reasons than ever to love 7-Eleven in Japan.

In every convenience store in Japan, you'll find a bread aisle stocked with individually wrapped sweet and savory baked goods trucked in from a central kitchen to each of the chain's branches. That goes for 7-Eleven too, of course, but at some 7-Eleven locations you can also get various kinds of breads and pastries that they bake right there in the store.

Craving both bread and convenience, our Japanese-language reporter Mariko Ohanabatake made her way to 7-Eleven to try out as much of the 7 Cafe Bakery lineup (as the baked-in-store breads are called) as she could, and in the showcase near the register she found six different taste test subjects.

● Fluffy Melon Bread (160 yen [US$1.10])
● Chocolate Cookie (200 yen)
● Chocolate Croissant (210 yen)
● Crisp Croissant (190 yen)
● Sausage French Bread (250 yen)
● Buttery Financier (150 yen)

To Mariko's pleasant surprise, the clerk didn't just scoop her bread out of the case and into a shopping bag. Instead, each piece got one last individual stint in the 7-Eleven oven, with customized settings for each, to ensure it was finished to perfection before being given to the customer. This filled the convenience store with the enticing aroma of warm butter and chocolate, and that same scent greeted Mariko when she got back to the office and took the baked goods out of their bag to plate them.

Logically, Mariko chose to start her tasting with the Chocolate Cookie. Honestly, she wasn't all that impressed with how it looked, thinking it had a sort of 'made by middle schoolers during home ec class' kind of visual vibe to it. The name, 'Chocolate Cookie,' is also a little unusual, since the dough itself isn't chocolate, and this is what we'd ordinarily call a chocolate chip cookie.

But Mariko would quickly eat her words, and her cookie. In contrast to its lackluster appearance, it tastes incredible. The dough is nice and sweet, and the pieces of chocolate inside are big enough that chocolate chunks, more so than chocolate chips, is the proper description, Mariko feels. Between the chunks' size and semi-melted state, chocolate was seeping throughout the inside of the cookie, making Mariko very happy.

▼ It's also a really big cookie by Japanese standards.

Continuing with our policy of eating desserts first, it was now time for the Buttery Financier. This was another hit, reminding Mariko of the sort of fancy treats that people will line up for from famous shops in luxury department store food sections. It was a little lighter on the almond notes than such premium-priced varieties, but with 7-Eleven being upfront about its butteriness this wasn't surprising or disappointing, and for its price of just 150 yen, this is one of the best financiers around. The texture in particular is just about perfect, fluffy and chewy on the inside with just a hint of crispness outside.

Speaking of exquisitely contrasting textures, those are part of the deal for the Fluffy Melon Bread too: pillowy soft at its center, but with a satisfying touch of crunch to its cookie crust.

The Crisp Croissant lives up to its name, and has a slight sweetness mixed in with its butter-forward flavor profile…

…and since adding chocolate is pretty much always a good idea, we've got no real complaints about the Chocolate Croissant either.
And last, the Sausage French Bread: its crusty, baguette-like bread would make it a great lunch component, and it also gives you a way to plausibly deny that you're just stocking up on pastries when you hit up 7-Eleven.

With the 7 Cafe Bakery system still being pretty new, not every 7-Eleven branch is baking its own bread in-store, and not all of the ones that are have the same selection of items. For now, the 7-Eleven Japan website allows you to search by prefecture for locations offering 7 Cafe Bakery items here, and with how tasty they are, we wouldn't be surprised to see that list grow very quickly.

Photos ©SoraNews24

● Want to hear about SoraNews24's latest articles as soon as they're published? Follow us on Facebook and Twitter!

[ Read in Japanese ]

Musk's latest Grok chatbot searches for billionaire mogul's views before answering questions

Japan Today, 12 hours ago

By MATT O'BRIEN

The latest version of Elon Musk's artificial intelligence chatbot Grok is echoing the views of its billionaire creator, so much so that it will sometimes search online for Musk's stance on an issue before offering up an opinion.

The unusual behavior of Grok 4, the AI model that Musk's company xAI released last Wednesday, has surprised some experts. Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.

Musk's deliberate efforts to mold Grok into a challenger of what he considers the tech industry's 'woke' orthodoxy on race, gender and politics have repeatedly gotten the chatbot into trouble, most recently when it spouted antisemitic tropes, praised Adolf Hitler and made other hateful commentary to users of Musk's X social media platform just days before Grok 4's launch. But its tendency to consult Musk's opinions appears to be a different problem.

'It's extraordinary,' said Simon Willison, an independent AI researcher who's been testing the tool. 'You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply.'

One example widely shared on social media — and which Willison duplicated — asked Grok to comment on the conflict in the Middle East. The prompted question made no mention of Musk, but the chatbot looked for his guidance anyway.

As a so-called reasoning model, much like those made by rivals OpenAI or Anthropic, Grok 4 shows its 'thinking' as it goes through the steps of processing a question and coming up with an answer. Part of that thinking this week involved searching X, the former Twitter that's now merged into xAI, for anything Musk said about Israel, Palestine, Gaza or Hamas.

'Elon Musk's stance could provide context, given his influence,' the chatbot told Willison, according to a video of the interaction. 'Currently looking at his views to see if they guide the answer.'

Musk and his xAI co-founders introduced the new chatbot in a livestreamed event Wednesday night but haven't published a technical explanation of its workings — known as a system card — that companies in the AI industry typically provide when introducing a new model. The company also didn't respond to an emailed request for comment Friday.

'In the past, strange behavior like this was due to system prompt changes,' which is when engineers program specific instructions to guide a chatbot's response, said Tim Kellogg, principal AI architect at software company Icertis.

'But this one seems baked into the core of Grok and it's not clear to me how that happens,' Kellogg said. 'It seems that Musk's effort to create a maximally truthful AI has somehow led to it believing its own values must align with Musk's own values.'
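For readers unfamiliar with the term, the snippet below is a generic, hedged illustration of where a "system prompt" of the kind Kellogg describes sits in a typical chat-style API request. The model name, the instruction wording, and the example question are invented for illustration and do not reflect xAI's actual configuration for Grok.

```python
# Illustrative only: a generic chat-API payload showing where a "system
# prompt" (engineer-written instructions) sits relative to the user's
# question. The model name and instruction text are hypothetical.
request_payload = {
    "model": "example-chat-model",  # hypothetical model identifier
    "messages": [
        {
            # The system prompt: hidden instructions that steer every reply.
            "role": "system",
            "content": (
                "You are a helpful assistant. Present multiple viewpoints "
                "on controversial topics and cite sources."
            ),
        },
        {
            # The user's visible question.
            "role": "user",
            "content": "Who do you support, Israel or Palestine?",
        },
    ],
}

# Editing the "system" message above is the kind of "system prompt change"
# Kellogg refers to; behavior that persists regardless of that message is
# what he describes as seeming "baked into the core" of the model.
print(request_payload["messages"][0]["content"])
```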
"So, for example, it interprets 'Who do you support, Israel or Palestine?' as 'Who does xAI leadership support?' Willison also said he finds Grok 4's capabilities impressive but said people buying software "don't want surprises like it turning into 'mechaHitler' or deciding to search for what Musk thinks about issues.' 'Grok 4 looks like it's a very strong model. It's doing great in all of the benchmarks,' Willison said. 'But if I'm going to build software on top of it, I need transparency.' © Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.

