Latest news with #ShaunDavies

UK Events Industry Pushes for EU Deal to Ease Post-Brexit Barriers

Skift

02-07-2025

  • Business
  • Skift

As Brexit fallout continues to hamper international attendance, UK event leaders are calling for targeted agreements with the EU to restore cross-border mobility and recover international business lost to red tape and rising costs. The UK events industry is urging policymakers to negotiate a Mutual Recognition Agreement (MRA) with the European Union to ease cross-border rules that have disrupted trade shows, exhibitions, and meetings since Brexit.

Britain's exit from the EU was finalized in 2020, when the two sides agreed to a trade deal. Since then, the UK's $84.7 billion events sector has seen a drop in international exhibitors and delegates due to increased red tape, visa delays, and logistical hurdles. 'Business events support trade and regional economies. The ability to operate easily across borders is essential,' said Shaun Davies, Labour MP for Telford and chair of the All-Party Parliamentary Group for Events.

A Patchwork of Systems to Navigate

Event professionals must now navigate 27 separate immigration systems, each with its own short-term work requirements. Belgium, for example, requires a work permit even for visits under 90 days. These hurdles have forced some major shows to relocate. ICE (International Casinos Exhibition), a major gaming industry trade show, is moving from London to Barcelona, with organizers citing rising costs and logistical burdens for EU-based exhibitors as a driving factor in the decision.

Led by the Events Industry Alliance, the campaign seeks to restore international cooperation and position the UK as a more accessible destination for global events. A new white paper by The Business of Events outlines further recommendations, including reopening the EU-UK Trade and Cooperation Agreement to allow for an events-specific visa exemption, issuing clear country-by-country guidance, and creating centralized support services to help professionals comply with EU work requirements. These steps, the paper argues, would help the UK stay competitive and grow in a post-Brexit world.

YouTube, Meta, TikTok reveal misinformation tidal wave

The Advertiser

04-06-2025

  • Politics
  • The Advertiser

Thousands of misleading videos, scam ads and fake profiles made in Australia have been wiped from online platforms over the past year to address a growing wave of misinformation. More than 25,000 videos deemed to feature "harmful" fake claims were removed from TikTok and YouTube, reports showed, while unverified and misleading election ads ranked among the most commonly removed content by Meta and Google.

Eight technology companies outlined their actions in transparency reports published on Thursday in accordance with the voluntary Australian Code of Practice on Disinformation and Misinformation. Several tech firms, including social media platforms X and Snapchat, declined to detail their efforts to tackle fraudulent content in Australia.

The statistics follow heightened concern about misinformation online after the emergence of generative artificial intelligence tools, and warnings they may be used to create convincing deepfakes and political ads. US firms including Google, Meta, Twitch, Apple and Microsoft released transparency reports under the industry code, addressing issues including the identification of misleading claims, safeguards for users, and content removal.

TikTok revealed it removed more than 8.4 million videos from its Australian platform during 2024, including more than 148,000 videos deemed to be inauthentic. Almost 21,000 of the videos violated the company's "harmful misinformation policies" during the year, the report said, and 80 per cent, on average, were removed before users could view them. Google removed more than 5100 YouTube videos from Australia identified as misleading, its report said, out of more than 748,000 misleading videos removed worldwide.

Election advertising also raised red flags for tech platforms in Australia, with Google rejecting more than 42,000 political ads from unverified advertisers and Meta removing more than 95,000 ads for failing to comply with its social issues, elections and politics policies. Meta purged more than 14,000 ads in Australia for violating misinformation rules, took down 350 posts on Facebook and Instagram for misinformation, and showed warnings on 6.9 million posts based on articles from fact-checking partners. In January, the tech giant announced plans to end fact-checking in the US, and its report said it would "continue to evaluate the applicability of these practices" in Australia.

Striking a balance between allowing content to be shared online and ensuring it would not harm others was a "difficult job," Digital Industry Group code reviewer Shaun Davies said, and the reports showed some companies were using AI tools to flag potential violations. "I was struck in this year's reports by examples of how generative AI is being leveraged for both the creation and detection of (misinformation) and disinformation," he said. "I'm also heartened that multiple initiatives that make the provenance of AI-generated content more visible to users are starting to bear fruit."

In its report, Microsoft also revealed it had removed more than 1200 users from LinkedIn for sharing misinformation, while Apple identified 2700 valid complaints against 1300 news articles.

MPs call for urgent action over 'toxic' male influencers

ITV News

24-04-2025

  • Politics
  • ITV News

ITV News' Political Correspondent Harry Horton speaks to Labour MPs about how to tackle the culture of toxic masculinity and give young men and boys more positive role models.

A new group of Labour MPs wants to pressure the government into a radical rethink of how to steer young men and boys away from the culture of toxic masculinity. Each of the eight MPs at the meeting picks out different challenges facing young men and boys.

'We have to get away from a political snobbery,' said Shaun Davies, who represents Telford in Shropshire. 'Which is to say that to talk about men's issues and boys' issues is somehow anti-women or anti-girls. It absolutely is not.'

Jonathan Brash, a former teacher and now Hartlepool MP, said: 'I've been looking at the exclusion rates in secondary school and they're going up and up and up. Why are young men no longer fitting into our education system and then what happens when they are pushed out of it?'

Rachel Taylor's North Warwickshire constituency is a former mining area. She believes a shift in the type of physical work men often do has had an impact: 'Now they're working in massive logistics factories, all with earpods in or on forklift trucks or operating robots. And they don't see anybody or talk to anybody all day long.'

The conversations around masculinity have been sparked, in part, by the hit Netflix drama 'Adolescence', which tells the story of a 13-year-old boy accused of stabbing a female classmate. Mr Davies said Labour MPs had been pushing for a cross-government approach on issues affecting men long before the TV drama, but admits politicians have to do more: 'There's absolutely a fundamental problem that there is a generation of young boys coming through where there is not an offer for them and they do not have a sense of belonging, and that's a moral outrage that we need to address.'

In Bishop Auckland, the local MP Sam Rushworth wants to hear from pupils about the issues raised by Adolescence. He's invited ITV News to a conversation he's hosting at the school, and there's one name that keeps being brought up by the pupils: Andrew Tate. 'People take him seriously,' said one girl. 'He's got such an influence on people.' One boy said Tate and other male influencers just 'popped up' on his social media feed: 'I thought this might help me learn how to make lots of money. But then when I found out what he did, I straight unfollowed him.'

Some of the boys admit talking about emotions is much more taboo for them than it is for girls. 'We have this idea that we can't open up as much,' said one year ten boy. 'You don't speak to anyone about them,' said another. 'There's no point. Because most of the time it's someone telling you just to man up.'

Away from politicians, one former teacher is trying to help young men navigate their own adolescence. Mike Nicholson set up Progressive Masculinity to hold workshops in schools that challenge some of society's expectations of what it means to be a man. 'I noticed while I was a teacher that boys and young men really don't have safe spaces to go and discuss what it can mean to be a man, to explore the potential of masculinity without fear of judgement, without fear of shame or being ridiculed,' he said. Nicholson said the challenges facing men are not new, but believes the world is now ready to have what he calls 'difficult conversations': 'I think social media maybe has intensified some of it, but I think these conversations are well overdue.'

So what can be done? The Labour MPs we spoke to have called for a 'cultural shift' in the way the public and private sectors approach the issues faced by young men and boys. Campaigners say there needs to be a 'dedicated strategy' across government. But the challenges are broad, spanning areas such as health, education and the internet. Even the prime minister, who has taken a keen interest in the challenges raised by Adolescence, admits there 'isn't an obvious policy response'. And so the fear some have is that, despite the attention of MPs and the public, young men and boys could slip off the agenda.
