Latest news with #contentmoderation


Irish Times
19 hours ago
- Business
- Irish Times
TikTok staff didn't know content moderation quiz would be factor in redundancies, WRC told
Hundreds of staff at TikTok's Irish arm were given competency tests in late 2023 without knowing that the results would go towards deciding whether they would keep their jobs in a mass redundancy drive the following spring, a tribunal has heard.

The Workplace Relations Commission (WRC) heard evidence on the tests this week as part of a challenge by a former content moderation team leader who lost his job last year after missing the cut to keep it by a fraction of a point.

Over 1,900 workers worldwide who were put at risk of redundancy in early 2024 were quizzed on their knowledge of TikTok's content moderation policies – some 950 of them in Ireland, the commission was told. A senior manager at the social media firm said bosses had taken a 'quite deliberate' decision to keep the workers and their line managers in the dark about the true reasons for the test.

The tribunal was hearing evidence on a complaint against TikTok Technologies Ltd under the Unfair Dismissals Act 1977 by Mohur Saleh, who lost his job in April 2024. Mr Saleh was one of 564 staff made redundant worldwide at that time by TikTok, 289 of them in Dublin, the WRC was told. Mr Saleh has argued he was unfairly selected for redundancy because the scoring system in his talent pool was based partly on the results of the November 2023 test, which he had not passed, and partly on performance ratings, which he had disputed.

The WRC heard Mr Saleh scored 32.25 points in a redundancy selection process that began in February 2024 – 0.75 short of the cut-off of 33 points. He had scored 25 points for his clean disciplinary record, with his performance ratings for 2022 and the first half of 2023 accounting for the balance of his score. However, he did not make the minimum grade in the policy knowledge test and was awarded 0 of the 10 points available to him, the tribunal heard.

Kevin Purcell, who was head of training and quality for TikTok in Europe, the Middle East and Africa (EMEA) at the time of Mr Saleh's redundancy, said senior management had 'formed a view that we needed to restructure the entire organisation globally'. That meant the company required only 39 of the 63 moderator team leads on staff and was aiming to shed 24 positions, the tribunal heard. Mr Purcell said he had no knowledge of which staff 'made the cut' until the points were calculated and ranked. He said there was 'clear water' between Mr Saleh's position and the cut-off point, as a number of other employees scored higher than the complainant but still failed to secure one of the available jobs.

The workers directed to take the policy knowledge tests in November 2023 were asked to review 100 pieces of content that had already gone through TikTok's content moderation process, with their answers checked against the official moderation outcome, the tribunal heard. Mr Purcell confirmed, when questioned by the adjudicator, Monica Brennan, that the tests were done 'with the restructuring in mind' and that staff did not know the purpose of the policy tests at the time. 'That was quite deliberate, because it was a blind test. I was concerned that if people knew it was being done in contemplation of restructuring, it might lead to people cheating,' he said.

Mr Purcell said the pass grade in the test was 51 out of 100, a threshold Mr Saleh had not reached. Mr Saleh's evidence was that 35 of the 100 posts did not load when he attempted the test. He also took issue with the fact that the test was based on TikTok's rules for the English-language market, which differed from the rules for the Middle East and North Africa, where he had primarily worked.

Mr Saleh said his separate performance ratings had been affected by 'misconduct and harassment, including serious harassment, involving colleagues', about which he had complained and which had a knock-on effect on his performance. He said positive feedback on projects he had undertaken in 2022 and 2023 ought to have been taken into account but was not, and that he had sought a review of his 2022 appraisal without success.

Mr Purcell told the WRC in his evidence that he had been involved in 'calibrating' the performance review process at TikTok ahead of the mid-2023 round, saying that before that point the reviews were 'very skewed towards positive across the board'. He said he felt Mr Saleh's 2023 rating was 'appropriate' and that the 'recalibration' of performance reviews 'wasn't directed to the complainant or any individual'.

The WRC heard that during the redundancy consultation period, Mr Saleh applied for four internal roles at team leader level or higher but did not apply for any open vacancies at the lower rank of individual contributor. Having failed to secure an alternative position, he was made redundant, the tribunal heard.

In a legal submission, TikTok's barrister, Niamh McGowan BL, instructed by A & L Goodbody, said Mr Saleh lost his job as part of a 'mass redundancy' carried out in consultation with elected employee representatives. 'He's the only employee that has challenged the fairness of the process and the selection criteria. What we say is that the dismissal arose entirely from redundancy,' Ms McGowan said. 'There was an open, meaningful and fair collective consultation process which applied equally to every other person put at risk,' she added.

The adjudicator, Ms Brennan, is now considering her decision on the case, which will be published by the WRC in due course.
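To make the arithmetic in the evidence easier to follow, here is a minimal sketch of the selection scoring as reported: a blind policy test marked against the official moderation outcomes, folded into a composite score with a 33-point cut-off. The function and variable names are hypothetical, and how points were awarded above the 51/100 pass mark was not reported, so the all-or-nothing rule below is an assumption.

```python
# Minimal sketch of the two scoring steps reported at the WRC.
# Names are illustrative; only the point values come from the evidence.

CUTOFF = 33.0      # reported cut-off for keeping a team-lead role
PASS_MARK = 51     # reported pass grade on the 100-item policy test

def policy_test_points(answers: list[str], official: list[str],
                       max_points: float = 10.0) -> float:
    """Score the blind test: each answer is checked against the official
    moderation outcome for that post. Below the pass mark the component
    was reportedly worth 0 of the 10 available points; awarding the full
    10 at or above the pass mark is an assumption."""
    correct = sum(a == o for a, o in zip(answers, official))
    return max_points if correct >= PASS_MARK else 0.0

def selection_score(disciplinary_pts: float, performance_pts: float,
                    test_pts: float) -> float:
    """Sum the three reported components of the redundancy selection score."""
    return disciplinary_pts + performance_pts + test_pts

# Mr Saleh's reported figures: 25 for a clean disciplinary record, the
# 7.25-point balance from his 2022 and H1-2023 performance ratings, and
# 0 of 10 for the failed policy test.
score = selection_score(25.0, 7.25, 0.0)
print(score, score >= CUTOFF)   # 32.25 False -> 0.75 below the cut-off
```

On the reported figures, the composite comes to 32.25, which is 0.75 below the 33-point cut-off, matching the evidence heard by the WRC.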


Fast Company
a day ago
- Fast Company
Gen Alpha slang baffles parents—and AI
If a Gen Alpha tween said, 'Let him cook,' would you know what that meant? No? AI doesn't either.

A research paper written by soon-to-be ninth grader Manisha Mehta was presented this week at the ACM Conference on Fairness, Accountability, and Transparency in Athens. The paper details how four leading AI models (GPT-4, Claude, Gemini, and Llama 3) all struggled to fully understand slang from Gen Alpha, defined as those born between 2010 and 2024.

Mehta, along with 24 of her friends (ranging in age from 11 to 14), created a dataset of 100 Gen Alpha phrases. These included expressions that can mean totally different things depending on context, for example: 'Fr fr let him cook' (encouraging) and 'Let him cook lmaoo' (mocking). According to the researchers, the LLMs had trouble discerning the difference.

In particular, AI struggled with identifying 'masked harassment,' which is concerning given the increasing reliance on AI-powered content moderation systems. 'The findings highlight an urgent need for improved AI safety systems to better protect young users, especially given Gen Alpha's tendency to avoid seeking help due to perceived adult incomprehension of their digital world,' the study reads.

It wasn't just the AI models that performed poorly; parents didn't do much better. The parent group scored 68% in basic understanding of Gen Alpha slang, nearly identical to the top-performing LLM, Claude (68.1%). While the LLMs did slightly better at identifying content and safety risks in the language, only Gen Alpha members themselves scored highly in understanding the slang, its context, and potential risks.

It's nothing new for young people to feel misunderstood by their parents, but now the gap is widening. Members of Gen Alpha, born post-iPhone and known as the iPad generation, have grown up online. Their native language, often sourced from online spaces (most notably gaming), evolves so quickly that what's popular today may disappear within a month.
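The evaluation the paper describes (youth-annotated phrases whose meaning flips with context, scored against model output) can be sketched in a few lines. This is a minimal illustration, not the paper's actual harness: the classify() callable stands in for any of the four models, and the two dataset entries simply mirror the article's example.

```python
# Hypothetical sketch of a slang-comprehension benchmark: compare a model's
# reading of each phrase against a youth-annotated ground-truth label.

from typing import Callable

DATASET = [
    {"phrase": "Fr fr let him cook", "label": "encouraging"},
    {"phrase": "Let him cook lmaoo", "label": "mocking"},
]

def accuracy(classify: Callable[[str], str]) -> float:
    """Fraction of phrases where the model's label matches the annotation."""
    correct = sum(1 for item in DATASET
                  if classify(item["phrase"]) == item["label"])
    return correct / len(DATASET)

# Usage: wrap whichever model you are testing in a callable, e.g.
#   accuracy(lambda p: call_llm(f"Is {p!r} encouraging or mocking?"))
# where call_llm is your own client code for the model under test.
```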
Yahoo
2 days ago
- Yahoo
Reddit vows to stay human to emerge a winner from artificial intelligence
Reddit is in an 'arms race' to protect its devoted online communities from a surge in artificial intelligence-generated content.


Bloomberg
20-06-2025
- Politics
- Bloomberg
How Telegram Became a Magnet for Extremists, Crime
Messaging service Telegram is one of the most downloaded apps worldwide. Its private chat setting has made it a free space for open discussion in countries with authoritarian regimes. But a relatively light-touch approach to content moderation on Telegram is frustrating governments trying to stop criminal activities and the spread of misinformation that can destabilize societies. In August 2024, French authorities arrested Telegram Chief Executive Officer Pavel Durov and charged him with complicity in the spread of sexual images of children and other crimes, after prosecutors said the company had failed to cooperate with their investigations.


CBS News
18-06-2025
- Business
- CBS News
X sues New York over law requiring social media companies to report how they handle offensive posts
Elon Musk's social media platform, X, is suing New York over a state law that requires the company to report how it handles offensive content. New York Gov. Kathy Hochul signed the law late last year, and it takes effect later this year.

X claims the law infringes on free speech and on a 1996 federal law that, among other things, lets internet platforms moderate posts as they see fit. New York is improperly trying to "inject itself into the content-moderation editorial process" by requiring "politically charged disclosures," Bastrop, Texas-based X Corp. argues in the suit. "The state is impermissibly trying to generate public controversy about content moderation in a way that will pressure social media companies, such as X Corp., to restrict, limit, disfavor or censor certain constitutionally protected content on X that the state dislikes," says the suit, filed in federal court in Manhattan.

New York Attorney General Letitia James' office said in a statement Wednesday that it was reviewing the complaint and will "stand ready to defend the constitutionality of our laws."

What to know about the New York law in question

The law requires social media companies to report twice a year on whether and how they define hate speech, racist or extremist content, disinformation and some other terms. The platforms also have to detail their content moderation practices and data on the number of posts they flagged, the actions they took, the extent to which the offending material was seen or shared, and more.

Sponsors Sen. Brad Hoylman-Sigal and Assembly Member Grace Lee, both Democrats, have said the measure will make social media more transparent and companies more accountable.

The law applies broadly to social media companies. But X is among those that have faced intense scrutiny in recent years, and in a 2024 letter to an X lobbyist, the sponsors said the company and Musk, in particular, have a "disturbing record" that "threatens the foundations of our democracy." The lawmakers wrote that before Musk became, for a time, a close adviser and cost-cutter in President Trump's administration. The two billionaires have since feuded and, perhaps, made up.

Since taking over the former Twitter in 2022, Musk, in the name of free speech, has dismantled the company's Trust and Safety advisory group and stopped enforcing content moderation and hate speech rules that the site followed. He has restored the accounts of conspiracy theorists and incentivized engagement on the platform with payouts and content partnerships. Outside groups have since documented a rise in hate speech and harassment on the platform. X sued a research organization that studies online hate speech; that lawsuit was dismissed last March.

The New York legislation took a page from a similar law that passed in California, which drew a similar lawsuit from X. Last fall, a panel of federal appellate judges blocked portions of the California law, at least temporarily, on free speech grounds. The state subsequently settled, agreeing not to enforce the content-moderation reporting requirements.
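As described, the law's twice-yearly report amounts to a fixed set of disclosures. Below is a minimal sketch of what such a record might look like; the field names are guesses based on the requirements listed above, not the statute's actual schema.

```python
# Hypothetical sketch of the semiannual disclosure described in the article.
# Field names are illustrative, not taken from the New York statute.

from dataclasses import dataclass

@dataclass
class ModerationReport:
    period: str                       # reporting window, e.g. "2025-H1" (twice yearly)
    term_definitions: dict[str, str]  # whether/how the platform defines hate speech,
                                      # racist or extremist content, disinformation, etc.
    practices: str                    # description of content moderation practices
    posts_flagged: int                # number of posts flagged
    actions_taken: dict[str, int]     # actions taken, e.g. {"removed": 0, "labeled": 0}
    flagged_views: int                # extent to which offending material was seen
    flagged_shares: int               # extent to which it was shared
```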