A power for good?

For some, the advent of the worldwide web is still fresh in the memory. But technological leaps seem to happen with ever-increasing frequency, and we now all find ourselves blinking in the brilliant light at the dawn of the age of AI. At the Advertising Standards Authority (ASA), we've donned the sunglasses and rolled up our sleeves, and AI is already proving a game-changer in how we regulate.
The lightning speed with which AI has developed and been integrated into our everyday lives inevitably raises legitimate concerns. What does it mean for jobs, data protection, originality, creativity, copyright, plagiarism, truth, bias, mis- and disinformation, and our ability to tell what is fake from what is real?
These are undoubtedly important issues to grapple with. But the technology also brings multiple benefits. As with the launch of search, web browsers and online shops in the mid-1990s, there are innovators, early adopters, cautious sceptics and technology resisters. AI is no different. The ASA is firmly in the 'early adopter' category. Four years ago, we appointed a head of data science and began building our AI capability; AI is now central to our transformation into a preventative and proactive regulator. Around 94 per cent of the 33,903 ads that were amended or withdrawn last year came from our proactive work using our AI-based Active Ad Monitoring system. The ability to be on the front foot and take quick and effective action is crucial when regulating the vast online ecosystem. AI gives us much greater visibility of online ads.
Last year, our system scanned 28 million ads, with machine learning and, increasingly, large language models identifying the likely non-compliant ads we're interested in. That was a tenfold increase on 2023. Our target is to scan 50 million ads this year. AI-based tools are embedded in our work to help us monitor and tackle ads in high-priority areas and are now used in most of our projects, including our work on climate change and the environment, influencer marketing, financial advertising, prescription-only medicines, gambling and e-cigarettes. This is enabling us to carry out world-leading regulation – monitoring, identifying and tackling potential problem ads at pace and scale. Take one example: our ongoing climate change and environment project. Following high-profile and precedent-setting rulings against major players in various industries, we're now seeing businesses adapting and evolving to make better-evidenced, more precise green claims.
Monthly sweeps using AI show high levels of compliance. Following our 2023 airline rulings on misleading 'sustainable' and 'eco-friendly' claims, of the circa 140,000 ads we monitored, we found just five that were clearly non-compliant.
Importantly, we're not removing humans from the equation. Our experts are, and will remain, central to our regulation. While our AI capability has dramatically improved the efficiency of our monitoring (weeding out the millions of ads that stick to the rules and aren't a problem), it filters and flags potential problem ads to our human specialists for their expert assessment. AI is assisting rather than replacing our people. There are a lot of open questions about how AI will impact industries, positively and negatively. And that's certainly true of advertising, as ever at the forefront of technological change.
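To make the shape of that workflow concrete, here is a minimal, purely illustrative sketch in Python of an AI-assisted triage pipeline of the kind described above: an automated pre-filter weeds out clearly unproblematic ads, a language-model score flags the likely problem cases, and everything flagged is queued for a human specialist. The ASA has not published its implementation; the keyword pre-filter, the stubbed llm_risk_score function and the 0.5 threshold are assumptions for illustration, not the Active Ad Monitoring system itself.

```python
# A minimal, illustrative sketch (not the ASA's actual system) of an
# AI-assisted triage pipeline: a cheap pre-filter discards clearly
# unproblematic ads, a language model scores the rest, and anything over
# a threshold is queued for a human specialist to assess.
from dataclasses import dataclass, field


@dataclass
class Ad:
    ad_id: str
    text: str
    flags: list[str] = field(default_factory=list)


# Hypothetical keyword pre-filter standing in for a trained classifier:
# only ads touching high-risk claims go on to the costlier LLM step.
HIGH_RISK_TERMS = ("eco-friendly", "sustainable", "plastic-free", "risk-free")


def prefilter(ad: Ad) -> bool:
    return any(term in ad.text.lower() for term in HIGH_RISK_TERMS)


def llm_risk_score(ad: Ad) -> float:
    """Placeholder for a large-language-model call estimating how likely
    an ad is to breach the advertising rules (0.0 to 1.0). In practice this
    would prompt a hosted model; here it is stubbed for illustration."""
    return 0.9 if "plastic-free" in ad.text.lower() else 0.1


def triage(ads: list[Ad], threshold: float = 0.5) -> list[Ad]:
    """Return only the ads that should reach a human reviewer."""
    review_queue = []
    for ad in ads:
        if not prefilter(ad):
            continue  # weed out the many ads that clearly aren't a problem
        if llm_risk_score(ad) >= threshold:
            ad.flags.append("possible misleading green claim")
            review_queue.append(ad)  # the final judgement stays with a human
    return review_queue


if __name__ == "__main__":
    sample = [
        Ad("1", "Our new trainers are 100% plastic-free and eco-friendly"),
        Ad("2", "Half-price pizza every Tuesday"),
    ]
    for flagged in triage(sample):
        print(flagged.ad_id, flagged.flags)
```

The design choice the article describes is the important part: automation narrows millions of ads down to a short review queue, but the compliance decision itself remains with a human expert.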
We know that the use of AI is already changing advertising. There are big efficiency and effectiveness gains in play: lower-cost ad ideation and creation, hyper-personalisation and improved customer experience, and quicker and better media planning and buying. Get this right and ads will be cheaper to make and send, and more engaging and relevant to receive. UK businesses and the British economy will be boosted. But in all of this, responsible ads must not be sacrificed at the altar of advances in technology.
We're well aware of the many potential benefits and problems AI poses for advertising. Think back to the story from Glasgow, where AI-generated ads promised a Willy Wonka-themed event that wasn't quite as advertised. The advertising of certain AI products and services certainly throws up broader ethical considerations. On our radar are ads for AI tech offering mental health support (substituting for human therapists), essay-writing tools that pass work off as original, and chatbots that act as a partner or friend. We don't regulate the products themselves, but in all these examples there is potential for ads to be misleading, irresponsible or harmful. How can businesses use AI safely and responsibly? What does that mean for advertisers?
Our media- and technology-neutral rules already cover most of the risks. Ads can't mislead, a principle as old as the hills. In the past, a misleading image might have been produced with photo-editing software; today, it might come from generative AI. Adverts must not be likely to cause harm or serious or widespread offence either. Generative AI might be an unsurpassed pattern-recogniser, but it's not human, and it may well miss the nuance of judging prevailing standards in society when producing ad content. Advertisers who harness AI can't abdicate responsibility for the creative content it produces. That's why we urge businesses to be careful: use the good of AI, but avoid the bad. Put human checks and balances in place.
At the ASA, we're determined to take full advantage of technological advances: developing our Active Ad Monitoring system further, making even more use of large language models to speed up the review of ads, actively experimenting with how these tools can make our internal processes more efficient, and continuing to keep a close eye on how AI is used in advertising.
We are witnessing the next technological revolution, one that will change society as the internet did, perhaps even more profoundly. We can say with confidence that our use of AI is already delivering world-leading advertising regulation.
Related Articles

Period drama: Here We Flo pulls 'plastic-free' pledge amid row over green claims

The Guardian, 2 days ago

The sustainable period care brand Here We Flo, which launched in 2017 selling 'plant powered' pads and liners that are '100% free of nasties', is removing the terms 'plastic-free' and 'no synthetic fibres' from its packets. The company said it had been working on a 'packaging refresh' for the past year. It denied that it had made the changes because 'the sustainable material claims are misleading', but admitted it would no longer be using terms such as 'biodegradable', 'no synthetic fibres', 'plastic-free', 'eco-friendly' and 'planet-friendly' on its period products.

The changes come amid an angry spat between Here We Flo and a rival brand, Mooncup, which has complained to regulators about Here We Flo's green claims. Mooncup cited a report by scientists at a leading university that allegedly found Here We Flo's pads and liners contained a combination of synthetic and semi-synthetic materials. In the legal letter sent by &Sisters, the parent company of Mooncup, to Here We Flo, and seen by the Guardian, it is alleged the company was falsely using terms such as 'natural', 'biodegradable' and 'plastic-free' in its advertising and on packaging. The allegations are based on findings from a 100-page lab report that it shared with Here We Flo and included in complaints sent to the Competition and Markets Authority and the Advertising Standards Authority.

Here We Flo denied that its practices consisted of unfair and misrepresentative advertising and labelling. It said the Mooncup report was 'fundamentally flawed and lacks rudimentary detail'. It added that it contained 'inaccuracies' and would have no standing legally.

British women spend approaching £300m a year on period products, and the market is dominated by big brands such as Tampax and Always and supermarket own-labels. However, demand for 'green' alternatives to pads and liners that typically contain plastic is growing due to campaigns highlighting the pollution they cause. With 3bn disposable products used every year, an estimated 200,000 tonnes of menstrual waste ends up in UK landfill sites. Sustainable period products make up a small but growing part of the market, with total sales of about £6m. Here We Flo is the market leader, selling in major retailers including Boots and Tesco. Mooncup is best known for its reusable silicone menstrual cup.

The report was commissioned by Mooncup as a benchmarking exercise. It claims that tests on Here We Flo pads found the non-biodegradable plastics sodium polyacrylate and polyethylene alongside bamboo viscose, a semi-synthetic material obtained through the chemical processing of raw bamboo.

Here We Flo was co-founded by Tara Chandra and Susan Allen, friends who met while studying at the London School of Economics. The challenger brand describes itself as 'proudly women-of-colour owned and sustainably built'. It has won over young women by talking frankly about periods and sex. Two years ago it secured B Corp status, a fair trade style label for companies.

In a 12-page response to the Mooncup allegations, Here We Flo's lawyers described it as a calculated attempt to 'destroy' its market leadership. While it is enjoying stellar sales growth, it said its smaller rival was a 'male-run, declining business'. The 'optics of such a male attack on a female-owned company in relation to feminine hygiene' would not go unnoticed, it added.
In turn Here We Flo alleged that Mooncup was misleading consumers with some of its green claims, such as the length of time its pads take to biodegrade, and should be investigated. Mooncup says that its biodegradability claims are verified by the leading standards and certification bodies.

Hey influencers, #stayhonest

Scotsman, 16 June 2025

Content creators are legally bound to make it clear when they're involved in a commercial partnership (Picture: Adobe)

Amina Amin looks at how UK law aims to keep the people pushing things on social media on the straight and narrow.

Social media marketing has rapidly evolved into a powerful tool for brand building in the 21st century. From fashion and beauty to tech and toys, influencer partnerships have become a go-to strategy for businesses aiming to maximise consumer reach and enhance brand reputation. As the digital economy grows, influencers are playing an increasingly central role across industries, raising questions about the distinction between personal content and commercial advertising.

From an influencer's perspective, the rise of social media marketing has unlocked new opportunities to advertise and monetise their content. Individuals are now able to leverage their platforms to promote products and services, positioning themselves as brand ambassadors. However, the rapid evolution of this space has made it difficult for regulators to keep pace with emerging challenges surrounding this modern form of advertising.

A key concern is the consumer's right to know when content is being promoted as part of a commercial relationship. This principle underpins advertising transparency and aims to ensure consumers are not misled about whether a post reflects genuine opinion or paid promotion.

The UK has been ahead of the curve here. In 2008, it expanded its advertising laws to cover the growing influencer market, primarily through the Consumer Protection from Unfair Trading Regulations 2008 and the CAP Code (administered by the ASA), which both require influencers to clearly disclose paid partnerships and the commercial nature of their content. Despite these measures, enforcement has remained patchy. A 2021 report by the Advertising Standards Authority (ASA) revealed that 65 per cent of sponsored Instagram Stories failed to meet disclosure standards. This triggered warnings from the ASA and a renewed focus on compliance.

In response, the ASA and the Competition and Markets Authority (CMA) issued the third edition of their influencer marketing guide in 2023. The updated guide provides clearer direction to influencers, brands and platforms, aiming to ensure all sponsored content is immediately identifiable to the public. Therefore, to remain compliant, influencers posting in the UK must: always insert a disclaimer on content that is being posted on behalf of a brand (for example, through the use of "#ad" or a platform's own disclosure tool); ensure that the disclaimer features across all relevant content, including all individual posts, Stories, videos and reels; make the disclaimer clearly visible; and be clear in all posts if there is an ongoing partnership with a particular brand.
Both influencers and businesses should seek specialist legal advice before entering into a commercial relationship for promotion services. This will reduce the risk of a breach, which can result in ASA and/or CMA sanctions, reputational damage, or the withdrawal of trading privileges. Ultimately, the core message is to be transparent. Regulators expect full disclosure of commercial arrangements even if content is being posted on behalf of a brand. From a business perspective, it is vital that you act in line with the law; while influencers can boost your reputation, irresponsible use of these services may have the opposite reputational effect.

