
Latest news with #digitaldemocracy

'Stuck in limbo': Over 90% of X's Community Notes unpublished, study says

France 24

10-07-2025



The study by the Digital Democracy Institute of the Americas (DDIA), which analyzed the entire public dataset of 1.76 million notes published by X between January 2021 and March 2025, comes as the platform's CEO Linda Yaccarino resigned after two years at the helm.

The community-driven moderation model, now embraced by major tech platforms including Facebook owner Meta and TikTok, allows volunteers to contribute notes that add context or corrections to posts. Other users then rate the proposed notes as "helpful" or "not helpful." If a note receives "helpful" ratings from enough users with diverse perspectives, it is published on X, appearing directly below the challenged post.

"The vast majority of submitted notes, more than 90 percent, never reach the public," DDIA's study said. "For a program marketed as fast, scalable, and transparent, these figures should raise serious concerns."

Among English notes, the publication rate dropped from 9.5 percent in 2023 to just 4.9 percent in early 2025, the study said. Spanish-language notes, however, showed some growth, with the publication rate rising from 3.6 percent to 7.1 percent over the same period, it added.

A vast number of notes remain unpublished for lack of consensus among users during rating. Thousands of notes also go unrated, possibly never seen and never assessed, according to the report.

"As the volume of notes submitted grows, the system's internal visibility bottleneck becomes more apparent, especially in English," the study said. "Despite a rising number of contributors submitting notes, many notes remain stuck in limbo, unseen and unevaluated by fellow contributors, a crucial step for notes to be published."

'Viral misinformation'

In a separate finding, DDIA's researchers identified not a human but a bot-like account, dedicated to flagging crypto scams, as the most prolific contributor to the program in English, submitting more than 43,000 notes between 2021 and March 2025. However, only 3.1 percent of those notes went live, suggesting most went unseen or failed to gain consensus, the report said.

The study also noted that the time it takes for a note to go live has improved over the years, dropping from an average of more than 100 days in 2022 to 14 days in 2025. "Even this faster timeline is far too slow for the reality of viral misinformation, timely toxic content, or simply errors about real-time events, which spread within hours, not weeks," DDIA's report said.

The findings are significant as tech platforms increasingly view the community-driven model as an alternative to professional fact-checking, which conservative advocates in countries such as the United States have long accused of liberal bias. Studies have shown Community Notes can dispel some falsehoods, such as vaccine misinformation, but researchers have long cautioned that the model works best on topics where there is broad consensus. Some researchers have also warned that Community Notes users can be driven by partisan motives and tend to target their political opponents.

X introduced Community Notes during the tenure of Yaccarino, who said on Wednesday that she had decided to step down after leading the company through a major transformation. No reason was given for her exit, but the resignation came as X owner Elon Musk's artificial intelligence chatbot Grok triggered an online firestorm over anti-Semitic comments that praised Adolf Hitler and insulted Islam in separate posts on X.

The David Seymour ‘bots' debate: Do online submission tools help or hurt democracy?

RNZ News

05-06-2025



ACT Party leader David Seymour in studio for an interview on season 3 of 30 with Guyon Espiner. Photo: RNZ / Cole Eastham-Farrelly

A discussion document on a Regulatory Standards Bill is not, on the face of it, the sort of thing that might have been expected to prompt 23,000 responses. But in an age of digital democracy, the Ministry for Regulation was probably expecting it.

The bill, led by ACT Party leader David Seymour, is controversial. It sparked a response from activists, who used online tools to help people make their opposition known. Of the 23,000 submissions, 88 percent were opposed.

Seymour this week told RNZ's 30 with Guyon Espiner that the figure reflected "bots" generating "fake" submissions. He did not provide evidence for the claim and later explained he was not referring to literal bots but to "online campaigns" that generate "non-representative samples" that do not reflect public opinion.

Seymour has previous experience with this sort of thing: the Treaty Principles Bill drew a record 300,000 submissions when it was considered by the Justice Committee earlier this year.

Is Seymour right to have raised concerns about how these tools are affecting public debate? Or are they a boon for democracy?

Submission tools are commonly used by advocacy groups to mobilise public input during the select committee process. The online tools often offer a template for users to fill out, or suggested wording that can be edited or submitted as is. Each submission is usually still sent by the individual.

Taxpayers' Union spokesperson Jordan Williams said submitting to Parliament used to be "pretty difficult". "You'd have to write a letter and things like that. What the tools do allow is for people to very easily and quickly make their voice heard." The tools being used now are part of sophisticated marketing campaigns, Williams said.

"You do get pressure groups that take particular interest, and it blows out the numbers, but that doesn't mean that officials should be ruling them out or refusing to engage or read submissions."

The Taxpayers' Union has created submission tools in the past, but Williams said he is not in favour of tools that do not allow the submitter to alter the text. He has encouraged supporters to change the contents of a submission to ensure it is original. "The ones that we are pretty suspicious of is when it doesn't allow the end user to actually change the submission, and in effect, it just operates like a petition, which I don't think quite has the same democratic value."

Clerk of the House of Representatives David Wilson said campaigns that generate thousands of similar submissions on proposed legislation are not new; they have simply taken a different form. "It's happened for many, many years. It used to be photocopied forms. Now, often it's things online that you can fill out. And there's nothing wrong with doing that. It's a legitimate submission."

However, Wilson pointed out that identical responses would likely be grouped by the select committee and treated as one submission. "The purpose of the select committee calling for public submissions is so that the members of the committee can better inform themselves about the issues. They're looking at the bill, thinking about whether it needs to be amended or whether it should pass. So if they receive the same view from hundreds of people, they will know that."

But that is not to say those submissions are discredited, Wilson said. "For example, the committee staff would say, you've received 10,000 submissions that all look exactly like this. So members will know how many there were and what they said. But I don't know if there's any point in all of the members individually reading the same thing that many times."

Jordan Williams co-founded the Taxpayers' Union in 2013 with David Farrar. Photo: RNZ / Cole Eastham-Farrelly

But Williams said there were risks in treating similar submissions created using such tools as one submission. "Treating those ones as if they are all identical is not just wrong, it's actually undemocratic," he said. "It's been really concerning that, under the current parliament, they are trying to carte blanche reject people's submissions, because a lot of them are similar."

Williams said AI should be used to analyse submissions and identify the unique points. "Because if people are going to take the time and make a submission to Parliament, at the very least, the officials should be reading them or having them summarised."

Labour MP Duncan Webb is a member of the Justice Committee and sat in on oral submissions for the Treaty Principles Bill. He said he attempted to read as many submissions as possible. "When you get a stock submission, which is a body of text that is identical and it's just been clicked and dragged, then you don't have to read them all, because you just know that there are 500 people who think exactly the same thing," he said.

"But when you get 500 postcards, which each have three handwritten sentences on them, they may all have the same theme, they may all be from a particular organisation, but the individual thoughts have been individually expressed. So you can't kind of categorise it as just one size fits all. You've got to take every single case on its merits."

Webb said he takes the select committee process very seriously. "The thing that struck me was, sure, you read a lot [of submissions] which are repetitive, but then all of a sudden you come across one which actually changes the way you think about the problem in front of you.

"To kind of dismiss that as just one of a pile from this organisation is actually denying someone who's got an important point to make their voice in the democratic process."

Beyond ceasefire, India and Pakistan battle on in digital trenches

Arab News

24-05-2025



ISLAMABAD: As Indian and Pakistani guns fell silent after days of trading fire this month, the war over facts and fiction is far from over: a fierce battle rages on social media over who won, who distorted the truth, and which version of events should be trusted.

As both states continue to push competing narratives, experts warn that misinformation, censorship and AI-generated propaganda have turned digital platforms into battlegrounds, with real-world consequences for peace, truth and regional stability.

The four-day military standoff, which ended on May 10 with a US-brokered ceasefire, stemmed from an attack in Indian-administered Kashmir last month that killed 26 people. India accused Pakistan of backing the assault, a charge Islamabad has consistently denied.

While the truce between the nuclear-armed archfoes has since held, digital rights experts have sounded the alarm over a parallel information war of disinformation, censorship and propaganda on both sides that threatens the ceasefire.

Asad Baig, who heads Media Matters for Democracy, a not-for-profit that works on media literacy and digital democracy, noted that broadcast media played a central role in spreading falsehoods during the India-Pakistan standoff, catering to an online audience hungry for 'sensational content.'

'Disinformation was overwhelmingly spread from the Indian side,' Baig told Arab News. 'Media was playing to a polarized, online audience. Conflict became content, and content became currency in the monetization game.'

Several mainstream media outlets, mostly in India, flooded the public with fake news, doctored visuals and sensationalist coverage, fueling mass anxiety and misinformation, according to fact-checkers and experts, who say the media's conduct at this critical geopolitical juncture undermined journalistic integrity and misled citizens.

'I think this is a perfect example of the media becoming a tool of propaganda in the hands of a state,' said prominent digital rights activist Usama Khilji, calling on those at the helm of television and digital media outlets to independently verify state claims using tools like satellite imagery or on-the-ground sources.

In Pakistan, X, previously known as Twitter, had been banned since February 2024, with digital rights groups and global organizations calling the blockade a 'blatant violation' of civic liberties and a threat to democratic freedoms. But on May 7, as Pakistan responded to the Indian missile strikes on its territory that began the conflict, the platform was suddenly restored, allowing users to access it without a VPN, which bypasses such restrictions by masking a user's location. The platform has remained accessible since.

'We were [previously] told that X is banned because of national security threats,' Khilji told Arab News, praising the government's 'strategic move' to let the world hear Pakistan's side of the story during this month's conflict. 'But when we actually got a major national security threat in terms of literal war, X was unblocked.'

Indian authorities meanwhile blocked more than 8,000 X, YouTube and Instagram accounts belonging to news outlets as well as Pakistani celebrities, journalists and influencers. 'When only one narrative is allowed to dominate, it creates echo chambers that breed confusion, fuel conflict, and dangerously suppress the truth,' Khilji explained.

VIRTUAL WAR

Minutes after India attacked Pakistan with missiles on May 7, Pakistan released a video to journalists via WhatsApp showing multiple blasts hitting an unknown location purportedly in Pakistan. The video later turned out to be of Israeli bombardment of Gaza and was retracted.

On May 8, Indian news outlets played a video in which a Pakistani military spokesperson appeared to admit to the downing of two of Pakistan's Chinese-made JF-17 fighter jets. X users later pointed out that the video was AI-generated.

Throughout the standoff, both mainstream and digital media outlets found themselves in the eye of the storm, with many official and verified accounts sharing and then retracting false information. AI-generated videos and even video game simulations misrepresented battlefield scenarios in real time and amplified confusion at a critical moment.

Insights from experts paint a disturbing picture of how information warfare is becoming inseparable from conventional conflict. From deliberate state narratives to irresponsible media and rampant misinformation on social platforms, truth itself is becoming a casualty of war.

AFP Digital Verification Correspondent Rimal Farrukh described how false information was often laced with hate speech targeting vulnerable communities, such as Muslims in India and Hindus in Pakistan. 'We saw dehumanizing language, misleading visuals, and recycled war footage, often from unrelated conflicts like Russia-Ukraine or Israel-Gaza, used to stoke fear and deepen biases,' she told Arab News.
