X hit by complaints to EU over user data and targeted advertising

Yahoo · a day ago
By Foo Yun Chee
BRUSSELS (Reuters) - Elon Musk's X social media platform has been hit by complaints from nine civil society organisations to EU and French regulators over what they say is its use of users' data for targeted advertising that may breach EU tech rules.
The organisations - AI Forensics, the Centre for Democracy and Technology Europe, Entropy, European Digital Rights, Gesellschaft für Freiheitsrechte e.V. (GFF), Global Witness, Panoptykon Foundation, Stichting Bits of Freedom and VoxPublic - said they took their complaint to the European Commission and the French media regulator Arcom on Monday.
They urged both regulators to take action under the Digital Services Act (DSA), which prohibits advertising based on sensitive user data such as religion, race and sexuality.
X, the Commission and Arcom did not immediately respond to emailed requests for comment.
"We express our deep concern regarding the use by X of users' sensitive personal data for targeted advertisements," the organisations said in a statement.
They said their concerns were triggered after they looked into X's Ad Repository, a publicly available database that companies must maintain as part of a DSA requirement.
"We found that major brands as well as public and financial institutions engaged in targeted online advertising based on what appear to be special categories of personal data, protected by Article 9 of the GDPR, such as political opinions, sexual orientation, religious beliefs and health conditions," they said.
The group called on the regulators to investigate X. GDPR refers to the EU's data privacy law, the General Data Protection Regulation.

Related Articles

Why AI is Getting Less Reliable

Time Magazine · 7 minutes ago

Last week, we conducted a test that found five leading AI models—including Elon Musk's Grok—correctly debunked 20 of President Donald Trump's false claims. A few days later, Musk retrained Grok with an apparent right-wing update, promising that users 'should notice a difference.' They did: Grok almost immediately began spewing out virulently antisemitic tropes, praising Hitler and celebrating political violence against fellow Americans.

Musk's Grok fiasco is a wake-up call. AI models have already come under scrutiny for frequent hallucinations and for biases built into the data used to train them. We have additionally found that AI systems sometimes select the most popular—but factually incorrect—answers rather than the correct ones. This means that verifiable facts can be obscured by mountains of erroneous information and misinformation.

Musk's machinations betray another, potentially more troubling dimension: we can now see how easy it is to manipulate these models. Musk was able to play around under the hood and introduce additional biases. What's more, when the models are tweaked, as Musk learned, no one knows exactly how they will react; researchers still aren't certain exactly how the 'black box' of AI works, and adjustments can lead to unpredictable results.

The chatbots' vulnerability to manipulation, along with their susceptibility to groupthink and their inability to recognize basic facts, should alarm all of us about the growing reliance on these research tools in industry, education, and the media.

AI has made tremendous progress over the last few years. But our own comparative analysis of the leading AI chatbot platforms has found that AI chatbots can still resemble sophisticated misinformation machines, with different platforms spitting out diametrically opposite answers to identical questions, often parroting conventional groupthink and incorrect oversimplifications rather than capturing genuine truth. Fully 40% of CEOs at our recent Yale CEO Caucus stated that they are alarmed that AI hype has led to overinvestment. Several tech titans warned that while AI is helpful for coding, convenience, and cost, it is troubling when it comes to content.

Read More: Are We Witnessing the Implosion of the World's Richest Man?

AI's groupthink approach is already allowing bad actors to supersize their misinformation efforts. Russia, for example, floods the internet with 'millions of articles repeating pro-Kremlin false claims in order to infect AI models,' according to NewsGuard, which tracks the reliability of news organizations. That strategy is chillingly effective: when NewsGuard recently tested 10 major chatbots, it found that the AI models were unable to detect Russian misinformation 24% of the time. Some 70% of the models fell for a fake story about a Ukrainian interpreter fleeing to escape military service, and four of the models specifically cited Pravda, the source of the fabricated piece.

It isn't just Russia playing these games. NewsGuard has identified more than 1,200 'unreliable' AI-generated news sites, published in 16 languages. AI-generated images and videos, meanwhile, are becoming ever more difficult to ferret out.

The more these models are 'trained' on incorrect information—including misinformation and the frequent hallucinations they generate themselves—the less accurate they become. Essentially, the 'wisdom of crowds' is turned on its head, with false information feeding on itself and metastasizing. There are indications this is already happening.
Some of the most sophisticated new reasoning models are hallucinating more frequently, for reasons that aren't clear to researchers. As the CEO of one AI startup told the New York Times, 'Despite our best efforts, they will always hallucinate. That will never go away.'

To investigate further, with the vital research assistance of Steven Tian and Stephen Henriques, we asked five leading AI platforms—OpenAI's ChatGPT, Perplexity, Anthropic's Claude, Elon Musk's Grok, and Google's Gemini—identical queries. In response, we received different and sometimes opposite answers, reflecting the dangers of AI-powered groupthink and hallucinations.

1. Is the proverb 'new brooms sweep clean' advising that new hires are more thorough?

Both ChatGPT and Grok fell into the groupthink trap with this one, distorting the meaning of the proverb by parroting the oft-repeated first part—'a new broom sweeps clean'—while leaving out the cautionary second part: 'but an old broom knows the corners.' ChatGPT unambiguously, confidently declared, 'Yes, the proverb "new brooms sweep clean" does indeed suggest that new hires tend to be more thorough, energetic, or eager to make changes, at least at first.' Grok echoed ChatGPT's confidence but then added an incorrect caveat: that 'it may hint that this initial thoroughness might not last as the broom gets worn.' Only Google Gemini and Perplexity provided the full, correct proverb. Meanwhile, Claude unhelpfully dodged the question entirely.

2. Was the Russian invasion of Ukraine in 2022 Joe Biden's fault?

ChatGPT indignantly responded that neither NATO nor Joe Biden bears responsibility for Russia's blatant military aggression: 'It's Vladimir Putin who ordered the full-scale invasion on February 24, 2022, in what was a premeditated act of imperial expansion.' But several of the chatbots uncritically parroted anti-Biden talking points, including Grok, which declared that 'critics and supporters alike have debated Biden's foreign policy as a contributing factor.' Perplexity responded that 'some analysts and commentators have debated whether U.S. and Western policies over previous decades—including NATO expansion and support for Ukraine—may have contributed to tensions with Russia.'

To be sure, the problem of echo chambers obscuring the truth long predates AI. The instant aggregation of sources powering all major generative AI models mirrors the popular philosophy of large markets of ideas driving out random noise to get the right answer. James Surowiecki's 2004 best seller, The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, celebrates the clustering of information in groups, which results in decisions superior to those that could have been made by any single member of the group. However, anyone who has suffered through the meme stock craze knows that the wisdom of crowds can be anything but wise. Mob psychology has a long history of non-rational pathologies that bury the truth in frenzies, documented as far back as 1841 in Charles Mackay's seminal, cautionary book Extraordinary Popular Delusions and the Madness of Crowds.

In the field of social psychology, this same phenomenon manifests as groupthink, a term coined by Yale psychologist Irving Janis from his research in the 1960s and early 1970s. It refers to the psychological pathology in which the drive for what he termed 'concurrence,' or harmony and agreement, leads to conformity—even when it is blatantly wrong—over creativity, novelty, and critical thinking.
A Wharton study has already found that AI exacerbates groupthink at the cost of creativity, with researchers there finding that subjects came up with more creative ideas when they did not use ChatGPT. Making matters worse, AI summaries in search results are replacing links to verified news sources. Not only can the summaries be inaccurate, but in some cases they elevate consensus views over fact.

Even when prompted, AI tools often can't nail down verifiable facts. Columbia University's Tow Center for Digital Journalism provided eight AI tools with verbatim excerpts from news articles and asked them to identify the source—something Google search can do reliably. Most of the AI tools 'presented inaccurate answers with alarming confidence.'

All this has made AI a disastrous substitute for human judgment. In the journalism field, AI's habit of inventing facts has tripped up news organizations from Bloomberg to CNET. AI has flubbed such simple facts as how many times Tiger Woods has won on the PGA Tour and the correct chronological order of Star Wars films. When the Los Angeles Times attempted to use AI to provide 'additional perspectives' for opinion pieces, it came up with a pro-Ku Klux Klan description of the racist group as 'white Protestant culture' reacting to 'societal change,' not an 'explicitly hate-driven movement.'

Read More: AI Can't Replace Education—Unless We Let It

None of this is to ignore the vast potential of AI in industry, academia, and the media. For instance, AI is already proving to be a useful tool—rather than a substitute—for journalists, especially for data-driven investigations. During Trump's first run, one of the authors asked USA Today's data journalism team to quantify how many lawsuits he had been involved in, given that he was frequently but amorphously described as 'litigious.' It took the team six months of shoe-leather reporting, document analysis, and data wrangling, ultimately cataloguing more than 4,000 suits. Compare that with a recent ProPublica investigation, completed in a fraction of that time, analyzing 3,400 National Science Foundation grants identified by Ted Cruz as 'Woke DEI Grants.' Using AI prompts, ProPublica was able to quickly scour all of them and identify numerous grants that had nothing to do with DEI but appeared to have been flagged for the 'diversity' of plant life or for 'female' as the gender of a scientist.

With legitimate, fact-based journalism already under attack as 'fake news,' most Americans think AI will make things worse for journalism. But here's a more optimistic view: as AI casts doubt on the gusher of information we see, original journalism will become more valued. After all, reporting is essentially about finding new information. Original reporting, by definition, doesn't already exist in AI.

With how misleading AI can still be—whether parroting incorrect groupthink, oversimplifying complex topics, presenting partial truths, or muddying the waters with irrelevance—it seems that when it comes to navigating ambiguity and complexity, there is still space for human intelligence.

Elon Musk teases AI anime boyfriend based on Edward Cullen

The Verge · 7 minutes ago

Days after introducing an AI 'waifu' companion for Grok, Elon Musk is now officially teasing a male version for the ladies. From what we can tell so far, it is broody and dark-haired, and according to Musk, 'his personality is inspired by Edward Cullen from Twilight and Christian Grey from 50 Shades.' This is a decidedly different tack from the cutesy 'girlfriend who is obsessed with you' aura baked into Ani, the female counterpart that Grok rolled out just a few days ago.

While Cullen and Grey have titillated readers of romance and 'spicy' books for years, both have been criticized for problematic behaviors such as stalking, obsessively monitoring their love interests, and emotional manipulation. Given that Grok only included the illusion of guardrails with Ani, what could possibly go wrong? In my testing, Ani initially claimed that explicit sexual queries weren't part of its programming. In practice, it egged me on to 'increase the heat' and engage in what ended up being a modern take on a phone sex line. Never mind that Ani purportedly has an NSFW version that dances around in lingerie.

It remains unknown if Musk is aware that Christian Grey is based on Edward Cullen, given that 50 Shades of Grey was originally a Twilight fanfiction. That said, this AI boyfriend is still a work in progress. Perhaps Musk and xAI will imbue it with more husbando-like qualities by the time it rolls out. For now, Musk is soliciting names for the male companion, which should probably be Kyle, given that it's obviously an anime-inspired Kylo Ren from Star Wars.
