
Latest news with #NvidiaH200

IndiaAI plan moves forward with over 17,000 GPUs successfully installed

Time of India

24-06-2025


A little over 17,300 graphics processing units (GPUs) have been successfully installed under the IndiaAI Mission's ambitious compute infrastructure tender, which has received proposals for 34,333 GPUs across its first two rounds, people aware of the developments told ET.

This was revealed at a meeting IndiaAI CEO Abhishek Singh held with all empanelled cloud service providers (CSPs) last week to review GPU installation progress, their integration with the IndiaAI compute portal, and the allocation of GPU time to end users. The mission seeks to build a scalable cloud computing platform for researchers and startups to train artificial intelligence (AI) models. The third round of bidding has already concluded and the proposals are awaiting technical evaluation.

Two of the ten CSPs selected in the first round, Jio Platforms and CtrlS Datacenters, are yet to deploy their GPUs, while providers like Yotta, NxtGen, and E2E Networks have made significant headway in installing and commissioning GPUs, the people cited above said. The IndiaAI Mission flagged this at the review meeting on June 16. However, the companies have time till August 7 to install their GPUs, they said. As per the agreement, companies were to install the GPUs within six months from the issue of the letter of intent, which was sent in February.

In a presentation at the review meeting, the IndiaAI team said CtrlS had not confirmed GPU installation although it had shared purchase orders. CtrlS had also not started the API integration with the compute portal. 'We are in discussions with the department to address the timelines and ensure we fulfil our commitment,' CtrlS told ET in a statement. It claimed it has 'completed the portal integration'.

Jio, meanwhile, was in the process of procuring 752 Nvidia H200 GPUs and 268 AMD MI300X GPUs, according to the IndiaAI presentation.
Its API integration with the compute portal was ongoing and was expected to be completed by the third quarter. Jio did not respond to ET's request for comment until press time.

The government gave cabinet approval for the Rs 10,000-crore IndiaAI Mission in March last year, with a target of procuring over 10,000 GPUs. As part of the mission, the government is also incentivising the development of local large language models (LLMs) built by startups like Sarvam, Gnani, Gan, and Soket AI Labs with investment capital and other support. The move is aimed at building up India's AI prowess.

Yotta Data Services, the highest GPU-contributing company in the lot, has almost 50% of its GPUs installed, and delivery of the rest is awaited. Its API integration is complete. 'We are installing and commissioning and going live with an additional 4,096 Nvidia H100 GPUs by July 10,' Sunil Gupta, chief executive of Yotta, told ET.

Yotta had got an allocation from IndiaAI for 4,096 Nvidia H100 GPUs for Sarvam's LLM and 200 Nvidia H100 GPUs for Bhashini in early May to drive inferencing on all its language models. 'We have already delivered 1,504 Nvidia H100 GPUs to Sarvam along with an additional 108 as a redundant buffer, and 200 Nvidia H100 GPUs to Bhashini as on May 31,' Gupta said. The remaining 2,592 Nvidia H100 GPUs are being delivered to Sarvam by July 10, he said. The GPUs are under commissioning and performance testing by Nvidia and Yotta engineering teams at its data centre in Navi Mumbai before they go live. In the second round of empanelment, the company offered and has been empanelled for 8,192 Nvidia B200 (Blackwell) GPUs.

NxtGen Cloud Technologies has installed 51% of its total proposed GPUs. The remaining 49% are under procurement, and the company expects delivery by June 30. Its API integration with the compute portal has been completed for the installed GPUs.

'The demand for the Nvidia platform outweighs other options at the moment.
We will be finetuning our next phase of deployment in favour of newer models from Nvidia,' AS Rajgopal, chief executive of NxtGen, told ET. 'We understand that the IndiaAI Mission will take a little while to gather momentum,' he said. 'We continue to work with Nvidia, AMD and Intel to build the market. There are multiple other options emerging, specifically to run models efficiently for inference. We will be introducing three more options going forward.'

Cyfuture has installed 4% of its GPUs while the remaining 96% are under procurement. Its API integration is also in process. 'We've been empanelled only a few days back and we're confident of installing the committed quantities within one quarter,' Anuj Bairathi, chief executive of Cyfuture India, told ET. 'We also expect integration with the IndiaAI portal to be completed by July.'

Cyfuture emerged as the L1 (lowest) bidder in all the categories it offered in the second round of bidding, Bairathi noted. 'Since no other companies have so far matched our pricing, we stand a chance to get the business once our installation is completed within the next three months,' he said.

Orient, Tata Communications, Ishan, Sify, Netmagic and the IndiaAI Mission did not respond to ET's requests for comment. In a press release on June 12, Ishan had said it will offer access to over 1,000 high-performance GPUs.

Latest MLPerf Shows AMD Catching Up With Nvidia, Sort Of...

Forbes

04-06-2025


As you AI pros know, the 125-member MLCommons organization alternates training and inference benchmarks every three months. This time around, it's all about training, which remains the largest AI hardware market, although not by much, as inference drives more growth while the industry shifts from research (building) to production (using). As usual, Nvidia took home all the top honors. For the first time, AMD joined the training party (it had previously submitted inference benchmarks), while Nvidia trotted out its first GB200 NVL72 runs to demonstrate industry leadership.

Each company focused on its best features. For AMD it is larger HBM memory, while Nvidia exploited its Arm/GPU GB200 superchip and NVLink scaling. The bottom line is that AMD can now compete head to head with the H200 for smaller models that fit into the MI325's memory. But AMD cannot compete with Blackwell today, and certainly cannot compete with NVLink-enabled configurations like the NVL72. Let's take a look. (Note that Nvidia is a client of Cambrian-AI Research, and I am a former employee of AMD.)

AMD has more HBM memory on its MI325 platform than any Nvidia GPU, and can therefore hold an entire medium-sized model on a single chip. So it ran the training benchmark that fits, the Llama 2 70B LoRA model. The results are reasonably impressive, besting the Nvidia H200 by an average of 8%. While a good result, I doubt many would choose AMD for 8% better performance, even at a somewhat lower price. The real question, of course, is how much better the MI350 will be when it launches next week, likely with higher performance and even more memory. One thing AMD will not offer soon is better networking for scale-up; the UALink needed to compete with NVLink is still months away (possibly in the MI400 timeframe in 2026). So, if you only need a 70B model, AMD may be a better deal than the Nvidia H200, but not by much.
AMD is also showing traction with partners, and better performance from its software, which took quite a beating from SemiAnalysis last December. With better ease of use from ROCm, partners can benefit from offering customers a choice; many enterprises do not need the power of an NVL72 or NVLink, especially if they are focused on simple inference processing. And of course, AMD can offer better availability, as the Nvidia GB200 is much harder to obtain due to overwhelming demand and pre-sold capacity. The rumor mill says the GB200 still takes over a full year of delivery time if you order today. AMD partners also submitted MLPerf results.

So, if you net it out, the MI325 result foreshadows a decent position for the MI350, but support for only up to 8 GPUs per cluster limits its use for large-scale training deployments.

Nvidia says the GB200 NVL72 has now arrived, if you were smart enough to put in an early order. With over fifty benchmark submissions using up to nearly 2,500 GPUs, Nvidia and its partners ran every MLPerf benchmark on the ~3,000-pound rack, winning each one. CoreWeave submitted the largest configuration, with nearly 2,500 GPUs. Nvidia focused on the GB200 NVL72 in this round.

While the GB200 NVL72 can outperform Hopper by some 30X for inference processing, its advantage for training is 'only' about 2.5X; that's still a lot of savings in time and money. The reason is that inference processing benefits greatly from the lower 4- and 8-bit precision math available in Blackwell, and the new Dynamo 'AI Factory OS' optimizes inference processing and reuses previously calculated tokens in the KV-cache.

While AMD does not yet have the scale-up networking required to train larger models at Nvidia's level of performance, this benchmark shows that it is getting close enough to be a contender once that networking is ready next year. And AMD can already outperform the Nvidia H200 once you clear the CUDA development hurdle.
It could take a year or more for AMD to replicate the NVL72 architectural benefits, and by then Nvidia will have moved on to the Kyber-based NVL576 with the new NVLink7, Vera CPU and upgraded Rubin GPU. If you start late, you stay behind.

IndiaAI empanelment drives down prices of GPUs in second round

Time of India

04-06-2025


An analysis of the lowest (L1) prices released by the IndiaAI Mission for the second round of the tender for graphics processing units (GPUs) shows an up to 10% fall in prices compared with the first round.

The first-round bidders will be asked to match these prices, resulting in substantial savings for users. The GPUs are sourced under a government scheme to provide subsidised compute power to local artificial intelligence developers.

The decline in the price of the same GPU model from the first round to the second round ranged from just Rs 4 an hour to as high as Rs 1,234 per hour. While some companies told ET that the reduced L1 rates will help kickstart artificial intelligence pilot projects in various domains, which previously may have been cost-prohibitive, others warned of a price bloodbath for GPU capacity providers. The prices are possibly the lowest in the world for GPU services, helped by the 40% government subsidy. AI startups, developers, researchers and corporations exploring and implementing AI would be able to afford GPU capacity by leasing from the identified providers.

'All existing empanelled players will be asked to match new L1 prices,' IndiaAI Mission chief executive Abhishek Singh told ET. 'From what we hear from companies, they will mostly match, to retain their priority of allocation of AI workloads.'

As many as 53 categories of GPUs have been offered in the second round of the tender, out of which some are new (like Nvidia's B200 GPUs). In 16 categories, prices have reduced compared with the first round.

Singh said assigning AI workloads is an ongoing process and companies have six months to provision the capacity. As per the IndiaAI website, 4,423 GPUs with a total subsidy of Rs 111.86 crore have been allocated so far. Compute has been allocated to 21 applicants from academia, government entities, early-stage startups, MSMEs and the student community. The startup Sarvam AI, the first startup selected to build an indigenous foundational model under the IndiaAI Mission, has already been allocated 4,000 GPUs, Singh said.
Others like Soket, Gnani and Gan are expected to give their estimates of the compute they need. 'Allocation is dynamic. Companies have invested heavily in bidding for GPUs. Commitments require significant money and those in the business have done so after their assessments,' he said.

Sarvam has received the highest subsidy allocated under the programme at Rs 98.68 crore, out of a bill of Rs 246.71 crore for 4,096 Nvidia H100 GPUs, as per the IndiaAI website.

IT minister Ashwini Vaishnaw on Friday announced the addition of 15,916 GPUs to the existing cluster of 18,417 GPUs under the programme, taking the total to 34,333. These include GPUs from Nvidia, AMD, AWS and Intel. The offering comprises 15,100 Nvidia H100 GPUs, 8,192 Nvidia B200 GPUs, 4,812 Nvidia H200 GPUs, and 1,973 Nvidia L40S GPUs.

'The combination of reduced L1 rates and government support will help kickstart pilot projects in various domains, enabling more organisations to experiment with and scale AI use cases that previously may have been cost-prohibitive,' said Rishikesh Kamat, senior director, cloud services division, at NTT Data India, one of the seven empanelled bidders in round two.

Anuj Bairathi, chief executive of Cyfuture India, another empanelled bidder in round two, warned of lower prices creating problems for the nascent industry. 'It's advantageous for users but is concerning for service providers, as a price bloodbath is on,' he said. Moreover, it would be even more fatal for companies if workloads as envisaged by the IndiaAI Mission don't come through, he said.

The average L1 hourly GPU price was Rs 612.85 in the first round of empanelment against an average Rs 655.90 in the second round. While there has been a decline in price for the same GPUs between the two rounds, the average is higher in the second round because of newer categories of GPUs.
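To put the pricing figures above in perspective, here is a minimal sketch of how the 40% government subsidy mentioned in the article would translate the average L1 tender rates into effective end-user rates. The flat-share subsidy model is an assumption for illustration only; the article does not describe the official IndiaAI disbursement formula.

```python
# Sketch: effective per-GPU-hour cost under an assumed flat 40% subsidy.
# Figures (Rs 612.85 and Rs 655.90 average L1 rates, 40% subsidy) are from
# the article; the pass-through mechanics are a simplifying assumption.

SUBSIDY_RATE = 0.40  # 40% government subsidy, per the article


def effective_hourly_rate(l1_price_inr: float, subsidy: float = SUBSIDY_RATE) -> float:
    """Rate a user would pay per GPU-hour if the subsidy covers a flat share."""
    return round(l1_price_inr * (1 - subsidy), 2)


round1_avg = 612.85  # average L1 price, first round (Rs/GPU-hour)
round2_avg = 655.90  # average L1 price, second round (Rs/GPU-hour)

print(effective_hourly_rate(round1_avg))  # 367.71
print(effective_hourly_rate(round2_avg))  # 393.54
```

Under this assumption, even the higher second-round average (pulled up by newer GPU categories such as the B200) would land below Rs 400 per GPU-hour for end users.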
