France's SBF 120 newcomer OVH reports 9.3% rise in Q3 revenue


Yahoo | 24-06-2025
(Reuters) - French cloud services provider OVH on Thursday reported a 9.3% rise in third-quarter revenue to 272 million euros ($287 million), driven by strong demand for its Public Cloud services, and reaffirmed its full-year forecast.
Revenue from the Public Cloud segment grew 17%, with new customer acquisitions increasing 12%, OVH said.
Its Private Cloud division also logged robust growth, with a 25% jump in new customers during the quarter, attributed to a repositioning of some offerings, CEO Benjamin Revcolevschi stated.
Shares of the company, which joined France's SBF 120 index in June, have risen more than 170% year-to-date, according to LSEG data.
"We are on track to exceed 1 billion euro in revenue this year," Revcolevschi said, adding, "We are at the heart of a new dynamic with the acceleration of enquiries for sovereign solutions."
OVH previously highlighted the growing appetite in Europe for locally developed digital utilities, driven by rising concerns over data and infrastructure sovereignty.

Related Articles

Bally's Sells Interactive Casino Unit to Intralot in $3.2 Billion Deal

Bloomberg | 29 minutes ago

Bally's Corp. is selling its interactive casino business to Intralot SA in a deal that values the business at €2.7 billion ($3.2 billion). The division, formerly known as Gamesys, will be part of a combined company with about €1.1 billion in revenue, 60% of that from the UK, according to a statement Tuesday. Bally's Chief Executive Officer Robeson Reeves will become CEO of the new business.

Glenfiddich Owners William Grant & Sons Acquire Famous Grouse

Forbes | 39 minutes ago

The iconic Scotch whisky brand - Scotland's bestselling whisky - has now been acquired by family firm William Grant & Sons.

Following a number of regulatory approvals, including from the UK Competition & Markets Authority, Scotch whisky titan and family firm William Grant & Sons has officially acquired two well-known whisky brands: The Famous Grouse and Naked Malt. Though for months the acquisition was an open secret within the industry, it was officially announced today (July 1st) and marks a significant addition to the company's iconic stable of Scotch whisky brands and distilleries, which includes names like Glenfiddich, Balvenie, Monkey Shoulder, and many more. The purchase sees the brands transferred from previous owner Edrington, which owns the Macallan and Highland Park distilleries, amongst others. According to The Northern Scot, Edrington had announced the agreement last September, as the company wanted to focus further on 'ultra-premium spirits'.

William Grant & Sons' addition of Famous Grouse and Naked Malt further strengthens its offering in Scotch blends, where Famous Grouse has long been a leader: it's the bestselling whisky in Scotland. Soren Hagh, the recently appointed chief executive of William Grant & Sons, expressed his enthusiasm for the acquisition in the official press announcement: 'I am delighted to complete this acquisition and welcome The Famous Grouse into our portfolio. It is a remarkable Scottish brand with rich history and a strong market position in a number of countries. Over the coming years, we will build on this strong foundation and work to evolve the brand into a true global icon. We also see a lot of potential in Naked Malt, which will be a great addition to our portfolio. Together, these brands perfectly complement our vision for growth, and we look forward to investing in their future and sharing their stories with whisky lovers around the world.'
The deal concludes several months of negotiation and transition planning between William Grant & Sons and Edrington, the Glasgow-based company that had owned Famous Grouse since the 19th century. While Edrington has been shifting focus toward premium single malts like The Macallan and Highland Park, Famous Grouse had remained a key revenue generator in its portfolio. According to industry coverage, the acquisition includes both the brands and their associated inventory, meaning spinoff brands such as Famous Grouse Smoky Black and Sherry Cask Finish are now fully under WG&S control, though exact production arrangements haven't been revealed. It's likely that existing contracts and bottling facilities will stay in place for now.

First launched in 1896, Famous Grouse is the creation of Perthshire grocer Matthew Gloag III and has been Scotland's bestselling whisky since 1980. It also holds a Royal Warrant, renewed by King Charles III in December 2024, and is exported to over 100 countries. Naked Malt was first launched as the Naked Grouse in 2011 before becoming a standalone brand in 2017 and being renamed in 2021. It is a blended malt whisky aged in first-fill sherry casks that has proven a particular hit in Asian markets.

With the acquisition of both brands, William Grant & Sons strengthens its hand in the blended market - particularly interesting at a time when so many other whisky companies, such as Edrington, are focusing on going premium. In any case, consumers won't see any significant changes for the moment, but it will certainly be interesting to see what comes next for an iconic whisky brand like Famous Grouse.

A Pro-Russia Disinformation Campaign Is Using Free AI Tools to Fuel a 'Content Explosion'

WIRED | an hour ago

Jul 1, 2025 3:27 PM

Consumer-grade AI tools have supercharged Russian-aligned disinformation as pictures, videos, QR codes, and fake websites have proliferated.

A pro-Russia disinformation campaign is leveraging consumer artificial intelligence tools to fuel a 'content explosion' focused on exacerbating existing tensions around global elections, Ukraine, and immigration, among other controversial issues, according to new research published last week. The campaign, known by many names including Operation Overload and Matryoshka (other researchers have also tied it to Storm-1679), has been operating since 2023 and has been linked to the Russian government by multiple groups, including Microsoft and the Institute for Strategic Dialogue. The campaign disseminates false narratives by impersonating media outlets with the apparent aim of sowing division in democratic countries. While the campaign targets audiences around the world, including in the US, its main target has been Ukraine. Hundreds of AI-manipulated videos from the campaign have tried to fuel pro-Russian narratives.

The report outlines how, between September 2024 and May 2025, the amount of content produced by those running the campaign increased dramatically and is receiving millions of views around the world. The researchers identified 230 unique pieces of content promoted by the campaign between July 2023 and June 2024, including pictures, videos, QR codes, and fake websites. Over the last eight months, however, Operation Overload churned out a total of 587 unique pieces of content, most of them created with the help of AI tools, researchers said. The researchers said the spike in content was driven by consumer-grade AI tools that are available for free online.
This easy access helped fuel the campaign's tactic of 'content amalgamation,' in which those running the operation used AI tools to produce multiple pieces of content pushing the same story. 'This marks a shift toward more scalable, multilingual, and increasingly sophisticated propaganda tactics,' researchers from Reset Tech, a London-based nonprofit that tracks disinformation campaigns, and Check First, a Finnish software company, wrote in the report. 'The campaign has substantially amped up the production of new content in the past eight months, signalling a shift toward faster, more scalable content creation methods.'

Researchers were also stunned by the variety of tools and types of content the campaign was pursuing. 'What came as a surprise to me was the diversity of the content, the different types of content that they started using,' Aleksandra Atanasova, lead open-source intelligence researcher at Reset Tech, tells WIRED. 'It's like they have diversified their palette to catch as many like different angles of those stories. They're layering up different types of content, one after another.' Atanasova added that the campaign did not appear to be using any custom AI tools to achieve its goals, but was relying on AI-powered voice and image generators that are accessible to everyone.

While it was difficult to identify all the tools the campaign operatives were using, the researchers were able to narrow in on one tool in particular: Flux AI. Flux AI is a text-to-image generator developed by Black Forest Labs, a German company founded by former employees of Stability AI. Using the SightEngine image analysis tool, the researchers found a 99 percent likelihood that a number of the fake images shared by the Overload campaign (some of which claimed to show Muslim migrants rioting and setting fires in Berlin and Paris) were created using image generation from Flux AI.
The researchers were then able to generate images that closely replicate the aesthetic of the published images using prompts that included discriminatory language, such as 'angry Muslim men.' This highlights 'how AI text-to-image models can be abused to promote racism and fuel anti-Muslim stereotypes,' the researchers wrote, adding that it raises 'ethical concerns on how prompts work across different AI generation models.'

'We build in multiple layers of safeguards to help prevent unlawful misuse, including provenance metadata that enables platforms to identify AI generated content, and we support partners in implementing additional moderation and provenance tools,' a spokesperson for Black Forest Labs wrote in an email to WIRED. 'Preventing misuse will depend on layers of mitigation as well as collaboration between developers, social media platforms, and authorities, and we remain committed to supporting these efforts.' Atanasova tells WIRED the images she and her colleagues reviewed did not contain any metadata.

Operation Overload also uses AI voice-cloning technology to manipulate videos, making it appear as if prominent figures are saying things they never did. The number of videos produced by the campaign jumped from 150 between June 2023 and July 2024 to 367 between September 2024 and May 2025. The researchers said the majority of the videos in the last eight months used AI technology to trick those who saw them.

In one instance, for example, the campaign published a video in February on X that featured Isabelle Bourdon, a senior lecturer and researcher at France's University of Montpellier, seemingly encouraging German citizens to engage in mass riots and vote for the far-right Alternative for Germany (AfD) party in federal elections. This was fake: The footage was taken from a video on the school's official YouTube channel in which Bourdon discusses a recent social science prize she won.
But in the manipulated video, AI voice-cloning technology made it seem as if she was discussing the German elections instead.

The AI-generated content produced by Operation Overload is shared on over 600 Telegram channels, as well as by bot accounts on social media platforms like X and Bluesky. In recent weeks, the content has also been shared on TikTok for the first time. This was first spotted in May, and while the number of accounts was small (just 13), the videos posted were seen 3 million times before the platform demoted the accounts.

'We are highly vigilant against actors who try to manipulate our platform and have already removed the accounts in this report,' Anna Sopel, a TikTok spokesperson, tells WIRED. 'We detect, disrupt and work to stay ahead of covert influence operations on an ongoing basis and report our progress transparently every month.' The researchers pointed out that while Bluesky had suspended 65 percent of the fake accounts, 'X has taken minimal action despite numerous reports on the operation and growing evidence for coordination.' X and Bluesky did not respond to requests for comment.

Once Operation Overload creates its fake, AI-generated content, the campaign does something unusual: It sends emails to hundreds of media and fact-checking organizations across the globe, with examples of its fake content on various platforms, along with requests for the fact-checkers to investigate whether it is real. While it may seem counterintuitive for a disinformation campaign to alert those trying to tackle disinformation about its efforts, for the pro-Russia operatives, getting their content posted online by a real news outlet, even if it is covered with the word 'FAKE,' is the ultimate aim. According to the researchers, up to 170,000 such emails have been sent to more than 240 recipients since September 2024.
The messages typically contained multiple links to the AI-generated content, but the email text was not generated using AI, the researchers said.

Pro-Russia disinformation groups have long been experimenting with AI tools to supercharge their output. Last year a group dubbed CopyCop, likely linked to the Russian government, was shown to be using large language models, or LLMs, to create fake websites designed to look like legitimate media outlets. While these attempts don't typically get much traffic, the accompanying social media promotion can attract attention, and in some cases the fake information can end up at the top of Google search results.

A recent report from the American Sunlight Project estimated that Russian disinformation networks were producing at least 3 million AI-generated articles each year, and that this content was poisoning the output of AI-powered chatbots like OpenAI's ChatGPT and Google's Gemini. Researchers have repeatedly shown how disinformation operatives are embracing AI tools, and as it becomes increasingly difficult for people to tell real from AI-generated content, experts predict the surge in AI content fueling disinformation campaigns will continue. 'They already have the recipe that works,' Atanasova says. 'They know what they're doing.'
