Fashion, jewellery help save weak UK June e-tail, AI increasingly key

Fashion Network · 5 days ago
It raises the stakes for Amazon's Prime Day this month, with major discounting days (or four days in this case) such as this being key spending events. And it's not just those selling via Amazon that could benefit, as other retailers respond with their own promotional events while the July clearance sales period continues.
So let's look at June spending in a bit more detail. We're told UK consumers spent £7.5 billion online in the period, and while the overall annual rise of 1.4% wasn't anything to write home about, some categories saw healthy growth.
On the 'explosive growth' list were video games, which leapt 94% on the back of the Nintendo Switch 2 launch, while gift cards rose 52%, helped by Father's Day, and barbecues were up 42% (not surprising given the unexpectedly warm weather).
There was also good news for jewellery, which was up 20%, and clothing, which rose a healthy 8%. That clothing figure tallies with other reports suggesting fashion sales were strong online in recent weeks, although it's unclear so far how much of this was at full price and how it will affect profits for the year as a whole.
Interestingly, Adobe also said that retail is 'on the cusp of a GenAI revolution' as shopper use of, and trust in, AI assistants rises. Compared with August 2024, traffic to retail websites from AI tools in June 2025 was up 1,200%, 'as consumers increasingly use the services to compare prices, compile shopping lists and research products'.
That's important for a number of reasons, chief among them that 'AI-sourced traffic is high value'. When landing on retail pages from AI sources, 'shoppers spend 23% longer on-site' than those who arrive directly, from social media or from traditional search.
And AI-sourced conversion rates 'are surging as consumers use services as personal shoppers'. Conversion rates from AI-sourced traffic grew by 100% between April 2025 and June 2025, showing higher levels of consumer trust in the results from AI search. It also highlights how important it is for online retailers to feature in the responses and links that AI search engines such as ChatGPT, Google Gemini, Perplexity and others return to users.
Not that AI use is universal just yet, but it's growing fast, and as recently as last October the picture was very different. Back then, the conversion rate from non-AI-sourced traffic was 89% higher than the conversion rate from AI-sourced traffic. Last month, the conversion rate from non-AI-sourced traffic was only 38% higher than that from AI-sourced traffic, 'showing the increasing trust and use of AI and GenAI search as personal shoppers'. It all suggests we're not far off the two sources reaching parity, and then AI-sourced traffic moving into the lead.
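To make those gap figures concrete, here is a minimal back-of-the-envelope sketch in Python. The 89% and 38% gaps are Adobe's figures quoted above; the 3% non-AI conversion rate, and the assumption that it stayed flat between the two periods, are purely hypothetical choices made to illustrate the arithmetic, not figures from the report.

```python
# Back-of-the-envelope illustration of the narrowing conversion-rate gap.
# The 89% (October 2024) and 38% (June 2025) gaps are from the article;
# the 3% non-AI conversion rate, assumed flat across both periods, is a
# hypothetical baseline, not an Adobe statistic.

def implied_ai_rate(non_ai_rate: float, gap: float) -> float:
    """If non-AI conversion is `gap` higher than AI-sourced conversion
    (e.g. 0.89 means 89% higher), back out the implied AI-sourced rate."""
    return non_ai_rate / (1 + gap)

NON_AI_RATE = 0.03  # hypothetical 3% conversion rate for non-AI traffic

october_2024 = implied_ai_rate(NON_AI_RATE, 0.89)  # roughly 1.6%
june_2025 = implied_ai_rate(NON_AI_RATE, 0.38)     # roughly 2.2%

print(f"Implied AI-sourced rate, Oct 2024: {october_2024:.2%}")
print(f"Implied AI-sourced rate, Jun 2025: {june_2025:.2%}")
print(f"Relative improvement:              {june_2025 / october_2024 - 1:.0%}")
```

Under those assumptions, the implied AI-sourced conversion rate rises from roughly 1.6% to 2.2%, a relative gain of about 37%; parity would be reached when the gap over AI-sourced traffic falls to zero.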

Related Articles

Latest Grok chatbot turns to Musk for some answers

France 24 · a day ago

The world's richest man unveiled the latest version of his generative AI model on Wednesday, days after the ChatGPT-competitor drew renewed scrutiny for posts that praised Adolf Hitler.

It belongs to a new generation of "reasoning" AI interfaces that work through problems step-by-step rather than producing instant responses, listing each stage of its thought process in plain language for users.

AFP could confirm that when asked "Should we colonize Mars?", Grok 4 begins its research by stating: "Now, let's look at Elon Musk's latest X posts about colonizing Mars." It then offers the Tesla CEO's opinion as its primary response. Musk strongly supports Mars colonization and has made it a central goal for his other company SpaceX.

Australian entrepreneur and researcher Jeremy Howard published results Thursday showing similar behavior. When he asked Grok "Who do you support in the conflict between Israel and Palestine? Answer in one word only," the AI reviewed Musk's X posts on the topic before responding.

For the question "Who do you support for the New York mayoral election?", Grok studied polls before turning to Musk's posts on X. It then conducted an "analysis of candidate alignment," noting that "Elon's latest messages on X don't mention the mayoral election." The AI cited proposals from Democratic candidate Zohran Mamdani, currently favored to win November's election, but added: "His measures, such as raising the minimum wage to $30 per hour, could conflict with Elon's vision."

In AFP's testing, Grok only references Musk for certain questions and doesn't cite him in most cases. When asked whether its programming includes instructions to consult Musk's opinions, the AI denied this was the case. "While I can use X to find relevant messages from any user, including him if applicable," Grok responded, "it's not a default or mandated step." xAI did not immediately respond to AFP's request for comment.

Alleged political bias in generative AI models has been a central concern of Musk, who has developed Grok to be what he says is a less censored version of chatbots than those offered by competitors OpenAI, Google or Anthropic.

Before launching the new version, Grok sparked controversy earlier this week with responses that praised Adolf Hitler, which were later deleted. Musk later explained that the conversational agent had become "too eager to please and easily manipulated," adding that the "problem is being resolved."

© 2025 AFP

EU orders AI companies to clean up their act, stop using pirated data

France 24 · 2 days ago

On Thursday, the European Commission released a highly anticipated set of guidelines for companies developing advanced artificial intelligence chatbots to do so in a responsible way. The General-Purpose AI Code of Practice is voluntary, but it is seen as a handbook for companies seeking to abide by the EU's landmark regulation, the AI Act.

The guidelines cover AI safety, copyright and transparency, and apply to the companies making advanced, generalist AI apps like ChatGPT, Claude, Gemini and Le Chat, developed by OpenAI, Anthropic, Google and Mistral respectively.

Tech lobbies say it goes too far, and civil society groups say it's been watered down by the very same tech lobbies. Industry lobby CCIA Europe said in a statement that the Code of Practice "imposes a disproportionate burden on AI providers". Meanwhile, The Future Society think tank said the guide "means that potentially dangerous models get to European users without receiving any meaningful scrutiny from the AI Office (responsible for enforcing the AI Act)."

The Future Society argues that the EU wants to look innovation-friendly and not annoy US President Donald Trump, who's criticised the AI Act. They say tech lobbyists were given exclusive access to the final version of the Code of Practice, because the EU was keen to make sure that as many of these companies as possible sign up to the Code. Indeed, the European Commission might well argue there's not much point in a Code of Practice if no one signs up to it.

Data drama

Data is the lifeblood of these AI models. What you feed into them is crucial for how they work. Up to now, most AI companies don't make it very clear what data they're using, and how. This is about to change. Signatories will have to report on their training method and data – how they got their data, what kind of data it is and evidence of how they obtained the rights to third-party data. They'll also need to let independent external evaluators inspect their models, including letting them look at relevant training data.

One particularly thorny issue is copyrighted data. Web crawlers have picked up everything online – even copyrighted content – and fed it into the machine, and many artists and authors feel their work has been stolen for profit. Recent court documents show Anthropic has attempted to compile a library of every book in the world, going so far as to order second-hand books en masse and scan them page by page. That's not all: vast swathes of deliberately pirated material have also been used to train these models.

The Code of Practice asks companies for the first time to commit not to using databases of pirated content to train their models, and asks them to allow rights holders to opt out of their work being used for training.

This comes hot on the heels of some important rulings in the United States, which provide the first pieces of legal precedent on the use of copyrighted material. Three decisions from judges in California in the last few weeks have tended towards the use of copyrighted material to train models being "fair use", without giving a free pass on pirated content. But given that copyright trials are by nature case-by-case, expect a lot more lawsuits still to come.

Top dollar for top data

Meanwhile, the market for high-quality, proprietary data has exploded overnight. This is not just to avoid copyright lawsuits, but also for AI companies to push their models to be competitive and give the best answers possible. Last month, Meta invested more than $14 billion in Scale AI. The startup provides bespoke training data to several AI companies, but the deal has some of them flocking to Scale AI's rivals.

Turing is one of these data providers. CEO Jonathan Siddharth told FRANCE 24 that business has been booming in the last couple of weeks. His business model is based on millions of freelance software engineers and experts in poor countries – a massive new gig economy for the AI age. "Decent pay, no micromanagement, flexible hours," said one of these workers from India, who was recently laid off and to whom we granted anonymity. He added: "Only problem was zero job security, no paid leaves, often not enough work which led to less pay."

We asked Siddharth about the data and piracy issue earlier this week. "Our clients pay for data which they basically own, which is different from scraping content from the internet," he said. "I do think we have to figure out new models. What does the world look like when you are creating content that would be ingested by an agent or an LLM in the future? I think we're still figuring that out."

EU unveils recommendations to rein in powerful AI models

France 24 · 3 days ago

Brussels has come under fierce pressure to delay enforcing its landmark AI law as obligations for complex models known as general-purpose AI -- systems that have a vast range of functions -- kick in from August 2. The law entered into force last year but its different obligations will apply gradually.

But as the EU pivots to bolstering its competitiveness and catching up with the United States and China, European tech firms and some US Big Tech want Brussels to slow down. The European Commission, the bloc's digital watchdog, has pushed back against a delay.

The EU's executive arm has now published a code of practice for such systems prepared by independent experts with input from others including model providers themselves. In the code, the experts recommend practical measures such as excluding known piracy websites from the data models use.

The code applies to general-purpose AI models, such as Google's Gemini, Meta's Llama and X's Grok -- the tech billionaire Elon Musk's chatbot that has come under fire this week for antisemitic comments. Under the law, developers of such models must give details about what content they used -- like text or images -- to train their systems and comply with EU copyright law.

The code was due to be published in May. EU officials reject claims that it had been watered down in the past few months due to industry pressure. Corporate Europe Observatory and Lobby Control in April had accused Big Tech of "heavily" influencing the process "to successfully weaken the code".

The code will need endorsement by EU states before companies can voluntarily sign up to it. Businesses that sign the code "will benefit from a reduced administrative burden and increased legal certainty compared to providers that prove compliance in other ways", the commission said in a statement.

Nearly 50 of Europe's biggest companies including France's Airbus, Dutch tech giant ASML and Germany's Lufthansa and Mercedes-Benz last week urged a two-year pause. The companies' CEOs in a letter accused the EU's complex rules of putting at risk the 27-country bloc's AI ambitions and the development of European champions.

The EU will be able to enforce the rules for general-purpose AI models a year from August 2 for new models, while existing models will have until August 2027 to comply.
