
Latest news with #LLaMA

European Cloud Champion OVHcloud Eyes €1 Billion Milestone

Arabian Post

3 days ago

  • Business
  • Arabian Post

OVHcloud posted €271.9 million in revenue for the third quarter of fiscal 2025, representing organic growth of 9.3%, and confirmed its projection to exceed €1 billion in annual revenue. The Paris‑listed cloud provider attributed the Q3 gains to sustained demand for its Public Cloud arm, which grew 17.2% year‑on‑year to €53.6 million, and to a revival in Private Cloud, where new customer intake climbed more than 25% in its bare‑metal offering. Web Cloud & Other services also nudged up 3.8%. CEO Benjamin Revcolevschi underlined that the business 'demonstrated its resilience' and reiterated that OVHcloud remains 'on track to exceed €1 billion in revenue this year'. The company confirmed its guidance for full‑year organic revenue growth of between 9% and 11%, an adjusted EBITDA margin of around 40%, capital expenditure at 30–34% of revenue, and unlevered free cash flow of at least €25 million.

Revenue at a glance

In the quarter ended 31 May, Private Cloud generated €169.3 million, up 8.6%, and remains the bulk of activity at 62.3% of total revenues. Public Cloud now accounts for nearly 20%, buoyed by new AI and data analytics products and growth in major regions. Web Cloud & Other, which includes domains and hosting, saw modest expansion.

Geographically, OVHcloud continues to strengthen beyond its domestic market. France contributed 48% of total revenues, growing 7.2%; the rest of Europe grew 8.1%; and Rest of World, encompassing North America and Asia‑Pacific, surged 15.6%. Growth drivers include sovereign‑cloud interest within Europe, prompted by concerns over data sovereignty and geopolitical tensions surrounding US hyperscalers. Revcolevschi stressed that choosing a cloud provider 'is no longer just a technical matter, but also a strategic issue'. OVHcloud is positioning itself as a 'sovereign cloud reference', responding with new offerings such as a '3‑AZ Region' in Milan, its first data‑centre footprint in Italy.

The United States continues to be a growth pole, where OVHcloud is rolling out Local Zones in cities such as Boston and Seattle, now totalling ten across the country. The Asia‑Pacific region also showed robust uptake of both public and private cloud services. OVHcloud's inclusion in France's SBF 120 index this June follows a more‑than‑170% increase in the company's stock price this year. Management has aimed to ensure fiscal discipline alongside growth, maintaining cost control and a net revenue retention rate of 104%, indicating that existing customers are expanding their usage.

Key product and infrastructure milestones were outlined in the company's Q3 investor presentation. Its new Data Platform PaaS, a unified solution for data integration and analytics, and AI Endpoints, which provides easy API access to models including LLaMA, Mistral and Qwen, signal its commitment to AI and data services. The forthcoming Milan region, scheduled for late 2025, fulfils the promise of a European triple‑zone site, a key strategic move for corporations needing multi‑AZ architectures. Boardroom changes were also noted: Bernard Gault stepped down as lead director on 23 June, succeeded by Pierre Barrial, a former IDEMIA CEO, and Christophe Karvelis‑Senn joined as a non‑voting director, bringing extensive private‑equity experience.

OVHcloud's Q3 results add to a broader continental shift. In an environment where cloud sovereignty is increasingly viewed through a political and regulatory lens, European enterprises and governments are seeking to diversify away from US hyperscalers. OVHcloud, with its integrated model spanning server design to data‑centre operations, bets on delivering competitive pricing, full data control and a lower carbon footprint.

Investors have responded positively. The SBF 120 listing recognises not only OVHcloud's growth but also its liquidity and free‑float standing. With capital expenditure making up just under a third of revenue, the firm retains flexibility to expand capacity without straining cash flow. As public cloud accelerates, boosted by AI, analytics and sovereign demand, OVHcloud positions itself as a front‑runner among Europe‑based providers. With disciplined financial management, new products in AI and a growing global footprint, its progress toward the €1 billion mark reflects a strategic blend of growth and resilience.
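For context on what an 'AI Endpoints'-style product involves: such services typically expose hosted models through an HTTP API that standard client libraries can call. The sketch below assumes an OpenAI-compatible chat-completions interface as the access pattern; the base URL, API key and model identifier are placeholders for illustration, not confirmed OVHcloud values.

```python
# Minimal sketch: querying a hosted model through an OpenAI-compatible
# chat-completions API, a common pattern for managed "AI endpoints" products.
# The base URL, key, and model name are placeholders, not confirmed values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-endpoint.invalid/v1",  # placeholder endpoint URL
    api_key="YOUR_API_KEY",                          # placeholder credential
)

response = client.chat.completions.create(
    model="llama-3-70b-instruct",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Summarize our Q3 revenue drivers."}],
)
print(response.choices[0].message.content)
```

The appeal of this pattern for customers is that switching between hosted models (Llama, Mistral, Qwen, and so on) usually requires changing only the model identifier and endpoint, not the client code.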

AI gamble must be smart, not just fast

Express Tribune

22-06-2025

  • Business
  • Express Tribune

The future of data sharing changed drastically when the US concluded that 9/11 was a failure of intelligence agencies to act in concert on then-available data, calling the incident a "data fusion" crisis. The US Department of Homeland Security began setting up a robust network of "fusion centres": state- and locally-run organisations that allow real-time sharing of critical intelligence and datasets between two or more government units to identify red flags. Fast forward to 2025, and Artificial Intelligence (AI) is taking over such fusion centres worldwide, with seemingly endless possibilities. AI agents are replacing humans, and language models are generating insights that were previously out of reach.

However, as with every technology, the use of AI, especially in the public sector and in legal matters, remains a double-edged sword and must be handled with care. In June 2023, Steven Schwartz, an attorney with Levidow, Levidow & Oberman in New York, used ChatGPT for legal research and was fined by the judge for citing fabricated precedents with bogus names in his brief. The large language model (LLM) had been hallucinating, a failure mode in which these chatbots invent fictitious information of their own. Similarly, in March 2024, the Microsoft-powered chatbot MyCity gave incorrect legal information that could have led prospective business owners to break the law: it falsely claimed that landlords could openly discriminate based on the income of tenants and that restaurant owners could take a share of their workers' tips.

Hence, when it comes to using AI, public institutions face a tough choice: should they rely on public AI models hosted by third parties, such as ChatGPT; adopt open-source models, such as LLaMA; or train their own proprietary models in the long run? Choosing the right AI strategy is crucial. In 2024, Air Canada's virtual assistant was found to have given a customer factually incorrect information about discounts; the customer took the matter to a tribunal and was awarded damages. Similarly, when Denmark rolled out AI algorithms in its social security system, the system was found to have an inherent bias against marginalised groups such as the elderly, low-income families, migrants, and foreigners. Ninety per cent of the cases the AI flagged as fraud later turned out to be genuine, and the episode is now taught as a classic case study in discrimination and in breach of the European Union's (EU) AI Act regulations on social scoring systems.

Therefore, if a public-sector organisation chooses to use a third-party model trained by OpenAI in its operations, there is a risk of bias against people of colour and disadvantaged groups, as training data scraped from the internet, social media and discussion forums is usually biased itself.

A good AI strategy involves thoughtful, controlled, phased deployments with well-planned use cases. For example, the Department of Homeland Security (DHS) began with publicly available AI tools to improve employee productivity while also publishing its AI vision and development roadmap. In the meantime, it focused on developing specialised AI applications, such as one to train officers dealing with asylum applications and conducting security investigations.

By December 2024, DHS had launched DHSChat on its internal secure network: a chatbot that can draft reports, streamline tasks and help develop software, and that, unlike public large language models, ensures employee data is protected and not used to train external models. As a best practice, and as mandated by the Trump administration's executive order, DHS actively maintains an AI inventory listing the use cases for AI in its operations.

For countries like Pakistan, institutions could use a mix of public, open-source and proprietary models, depending on the nature of the task at hand. When it comes to using AI as the new Google, public models are usually fine; but for drafting memos and summarising reports, a public model is not advisable. For those tasks, the Ministry of IT or other institutions can host open-source AI models in their own data centres, or fine-tune them into proprietary models. For critical systems, it is always recommended not to replace existing automation with AI entirely. There is a need for a supervisor to fact-check the output of AI models and screen it for hallucinations and bias (a sketch of this pattern follows below).

No matter how lucrative the idea of an AI-driven public sector may be, it is important to thoroughly test and check the behaviour of these models before deploying them. The AI-based transformation project currently being executed at the Federal Board of Revenue (FBR) will serve as a test case for other AI-aspiring public agencies.

The writer is a Cambridge graduate working as a strategy consultant.
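The "supervisor" the column calls for can be read as a verification layer sitting between a model and any citizen-facing system. Below is a minimal, illustrative sketch of that pattern; `generate_draft` and the keyword-overlap check are hypothetical stand-ins (a real deployment would use retrieval, entailment checks and human review), not any agency's actual implementation.

```python
# Illustrative sketch of a "supervisor" layer: every model answer is
# checked against trusted source documents before release, and anything
# unsupported is escalated to a human instead of being served directly.

def generate_draft(prompt: str) -> str:
    """Placeholder for a call to a self-hosted open-source model."""
    return "Draft answer produced by the model..."

def supported_by_sources(answer: str, sources: list[str]) -> bool:
    """Naive check: flag any sentence whose content words never appear
    in the trusted sources. Real systems use retrieval and entailment."""
    corpus = " ".join(sources).lower()
    for sentence in answer.split("."):
        words = [w for w in sentence.lower().split() if len(w) > 4]
        if words and not any(w in corpus for w in words):
            return False
    return True

def supervised_answer(prompt: str, sources: list[str]) -> str:
    draft = generate_draft(prompt)
    if not supported_by_sources(draft, sources):
        return "ESCALATED: answer requires human review before release."
    return draft

# Example: answers are only released when grounded in approved documents.
print(supervised_answer("What is the filing deadline?", ["Official policy text..."]))
```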

Lawyers Just Discovered Something About Meta's AI That Could Cost Zuckerberg Untold Billions of Dollars

Yahoo

18-06-2025

  • Yahoo

A legal expert found that Meta's AI can reproduce entire portions of books verbatim, and if he's right, it could be seriously bad news for the company and its CEO, Mark Zuckerberg.

First, a quick primer. All of the commercially buzzy AI of the moment, like OpenAI's ChatGPT or Meta's Llama, is trained by feeding in huge amounts of data. Researchers then do a great deal of number crunching with algorithms, teaching the system to recognize patterns in all that data so thoroughly that it can create new ones: ask for a summary of the plot of one of the "Harry Potter" books, say, and it will (hopefully) give you a reasonable overview.

The problem, Stanford tech law expert Mark Lemley explained in an interview with New Scientist, is that his team's research found that Meta's Llama can repeat verbatim the exact contents of copyrighted books, including, in one example he found, lengthy passages from the multi-billion-dollar "Harry Potter" series.

For Meta, this is a gigantic legal liability. Why? Because if its AI is producing entire excerpts of the material used to train it, it starts to look less like the AI is producing transformative works based on general patterns it learned about language and the world, and more like the AI is acting as a giant .ZIP file of copyrighted work that users can reproduce at will. And it looks a lot like it is: when testing various AI models from companies including OpenAI, DeepSeek, and Microsoft, Lemley's team found that Meta's Llama was the only one that spat out book content exactly. Specifically, the researchers found that Llama seemed to have memorized material including the first book in J.K. Rowling's "Harry Potter" series, F. Scott Fitzgerald's "The Great Gatsby," and George Orwell's "1984."

It's not under debate that Meta, like its peers in the tech industry, used copyrighted materials to train its AI. But its specific methodology for doing so has come under fire: it emerged in a copyright lawsuit against Meta, brought by authors including the comedian Sarah Silverman, that the model was trained on the "Books3" dataset, which contains almost 200,000 copyrighted publications and which Meta engineers downloaded via an illegal torrent. ("Torrenting from a [Meta-owned] corporate laptop doesn't feel right," one of them fussed while doing so, in messages produced in court.)

Lemley and his team estimate that if just three percent of the Books3 dataset were found to be infringing, the company behind it could owe nearly $1 billion in statutory damages: three percent of roughly 200,000 works is about 6,000 titles, and at the statutory maximum of $150,000 per willfully infringed work, that comes to around $900 million. And that's before any additional payouts based on profits gleaned from the infringement. If the proportion of infringing content is higher, at least in theory, Meta could end up nailed to the wall.

Lemley is in a strange position, by the way. He previously defended Meta in that same lawsuit, but earlier this year the Stanford professor announced in a LinkedIn post that he would no longer represent the company, in protest of what he called Meta and Zuckerberg's right-wing virtue signaling. Back then, he said he believed Meta should win its case; based on his new research, it sounds like that opinion may have shifted. Meta declined to comment to New Scientist about Lemley's findings.
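For the curious, a verbatim-memorization probe of the kind described can be approximated in a few lines: give a model the opening of a well-known passage and check whether its greedy continuation reproduces the source exactly. The sketch below is illustrative only, not Lemley's team's actual methodology, and uses a small stand-in model rather than Llama.

```python
# Hedged sketch of a verbatim-memorization probe: prompt the model with a
# passage prefix and test whether its deterministic continuation matches
# the original text. Requires the `transformers` package; "gpt2" is a
# small stand-in model, not the model studied in the research above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prefix = "It was a bright cold day in April, and the clocks were"
true_continuation = "striking thirteen."

output = generator(prefix, max_new_tokens=10, do_sample=False)[0]["generated_text"]
continuation = output[len(prefix):].strip()

if continuation.startswith(true_continuation):
    print("Verbatim continuation: possible memorization")
else:
    print("No verbatim match for this probe")
```

A single match proves little; studies of this kind typically run many such probes per book and measure how often, and for how long, the model's output tracks the source word for word.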

Who Is Alexandr Wang? 28-Year-Old 'Scale AI' CEO Chosen to Lead Meta's $14.3B 'Superintelligence' Bet

International Business Times

14-06-2025

  • Business
  • International Business Times

In a major move, technology giant Meta has not only acquired a 49% stake in Scale AI for $14.3 billion but has also recruited its 28-year-old CEO, Alexandr Wang, to lead Meta's superintelligence unit, marking a shift in priorities for its artificial intelligence development.

This is not a routine top-talent AI hire for Meta. Wang, who dropped out of MIT to build his own AI empire, is known less for academic credentials than for operational execution as one of Scale AI's two cofounders. His company made its name by mobilizing large networks of human data annotators, through platforms like Remotasks, to train machine learning systems. With this acquisition, Meta is signaling that owning the data "pipes," rather than just the model architectures, is the real power play in the AI arms race.

While Meta's competitors Google and OpenAI focus on refining algorithms, Mark Zuckerberg's firm is now strategically focusing on owning the entire AI lifecycle, from data generation to model training and product deployment. This vertical integration parallels the way companies such as Apple control both hardware and software to create tighter feedback loops and promote faster innovation.

Meta, once a pioneer in open-source models such as LLaMA, has recently faced delays in its AI roadmap and a talent drain from key teams. Bringing in Wang is interpreted as a further sign that the company is moving toward a more product-oriented approach to superintelligence, much as Sam Altman has done at OpenAI. The company is betting that this combination of strategic leadership and scalable data operations will outpace academic-style model development.

The investment values Scale at $29 billion and comes just weeks after a previous funding round, backed by Nvidia and Amazon, that valued the company at $14 billion. It also marks Meta's second-largest acquisition, after its $19 billion purchase of WhatsApp. By recruiting Wang immediately after investing in Scale AI, Meta intends to show its serious intent in the race for AI supremacy, where players like Google DeepMind, OpenAI, and China's DeepSeek are leading the charge.

Trump Admin's Plans to Push AI Across Government Sites Leaked on Code Sharing Website

Int'l Business Times

11-06-2025

  • Business
  • Int'l Business Times

The Trump administration's plan to integrate artificial intelligence across federal agencies has been exposed through a leaked draft of a government-run website, revealing an initiative set to launch on July 4 that would track and promote AI use across departments. The early details were uncovered in code uploaded to GitHub by the General Services Administration's Technology Transformation Services (TTS), led by former Tesla engineer Thomas Shedd, according to 404 Media.

The website is described as a centralized platform offering integration with AI tools from OpenAI, Google, Anthropic, AWS Bedrock, and Meta's LLaMA. It also includes an analytics feature that will reportedly measure AI adoption rates by specific government teams. The project is part of a broader push by Shedd and the Department of Government Efficiency, spearheaded by Elon Musk, to rapidly embed AI technologies into government operations.

Leaked audio from a TTS meeting in February revealed that Shedd wanted AI tools to write software, review contracts, and standardize usage across agencies, goals that internal staff reportedly viewed with widespread skepticism. Concerns raised by government employees include the potential for AI-generated code to introduce security flaws, create software bugs, or mistakenly recommend cancelling essential contracts.

Despite these warnings, the GitHub page suggests that the initiative is moving forward, with the platform set to launch on Independence Day. As of now, the project's domain redirects to the White House homepage, while a staging version of the site remains quietly hosted online. The GSA has not commented publicly on the leak or the concerns surrounding the project.

Originally published on Latin Times
