
Forget Nvidia's $4 trillion valuation! This tech company is set to cross $4.5 trillion market cap, thanks to its big AI bets
Nvidia has seen a meteoric rise, becoming the first company in the world to have a market capitalization of over $4 trillion. For perspective, only five economies in the world have a GDP of over $4 trillion, as per the IMF's 2025 projections.
But even as Nvidia celebrates that milestone, there's another technology company that may hit $4.5 trillion!
According to Oppenheimer analysts quoted in a Motley Fool report, another AI giant is poised to join Nvidia in the $4 trillion club and could reach $4.5 trillion within the next year. The analysts also believe this stock currently presents a more favourable investment opportunity than Nvidia.
Nvidia's Rise: Is The Market Dominance Sustainable?
Nvidia stands out as a big success story: its valuation has risen more than tenfold in the last three years.
This remarkable growth stems from substantial investments in artificial intelligence (AI) infrastructure, where Nvidia's graphics processing units (GPUs) serve as essential components.
However, Nvidia's leading position in the AI chip sector faces challenges.
Other GPU manufacturers are improving their price-performance ratios, whilst Nvidia's major hyperscale clients are increasingly deploying their own custom silicon for generative AI applications.
This could weigh on the company's growth outlook.
Nvidia is the AI chip industry leader, particularly in training hardware. Its edge comes from advanced technical capabilities and its proprietary CUDA software platform, which together create big barriers for competitors to surmount in the semiconductor market, the Motley Fool report says.
However, major clients such as Meta Platforms and Microsoft are actively seeking to reduce their dependence on Nvidia's AI training hardware, the report said.
Meta is expanding its Meta Training and Inference Accelerator system across various generative AI applications. Its new chip aims to replace Nvidia processors in AI training for the Llama foundation model, whilst the company already uses its own custom chips for certain AI inference operations.
Microsoft harbours similar objectives with its Maia chips, although it has delayed its next-generation AI training chip launch to 2026, rather than releasing it this year.
Such delays have previously affected other large-scale computing companies, including Meta, resulting in substantial Nvidia orders.
Nevertheless, as these technology giants enhance their chip design capabilities, they could substantially reduce their reliance on Nvidia's processors over time.
Nvidia maintains a strong market position, particularly following the US government's decision to lift restrictions on H20 chip sales in China.
The company is poised to see robust earnings growth throughout the year, driven by Chinese market access and hyperscaler demand.
What is noteworthy, however, is that Nvidia's shares command a significant premium, trading at nearly 40 times projected earnings, the report noted. Given this elevated valuation and the potential long-term challenges, the stock's growth might lag behind that of other major artificial intelligence enterprises.
Which Company Can Hit $4.5 Trillion Market Cap?
Currently, only a select few organisations rival Nvidia's market presence. Among the exclusive group of companies valued above $1 trillion, merely three have achieved valuations exceeding $3 trillion, with Nvidia being one of them.
Microsoft, presently valued at approximately $3.8 trillion, stands closest to Nvidia. Oppenheimer analysts project Microsoft could reach the $4 trillion milestone shortly, and their analysis sets a $600 price target for Microsoft shares, suggesting a potential market valuation of $4.5 trillion, a 19% increase from its value as of July 15.
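As a quick sanity check on those figures, here is a back-of-envelope sketch in Python. The share count is an assumption on our part (roughly Microsoft's mid-2025 figure); the $600 target and the 19% upside come from the report.

```python
# Rough check of the Microsoft numbers quoted above (illustrative only).
msft_shares = 7.4e9      # assumed shares outstanding (approximate mid-2025 figure)
price_target = 600.0     # Oppenheimer price target, USD per share

implied_cap = price_target * msft_shares
print(f"Implied market cap: ${implied_cap / 1e12:.2f} trillion")  # ~$4.44 trillion

# A 19% upside to $600 implies a July 15 price of roughly:
print(f"Implied July 15 price: ${price_target / 1.19:.0f} per share")  # ~$504
```

The implied figures line up with the article's rounded $4.5 trillion valuation.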
Oppenheimer's optimistic outlook rests on several factors:
There is an expectation of higher revenue growth from Microsoft's Azure cloud computing service.
Azure has emerged as Microsoft's main growth driver, thanks to the increasing computational requirements of AI development.
Additionally, Microsoft's investment in OpenAI not only secures a significant Azure customer but also provides essential resources for the broader AI development community.
The surge in demand has been remarkable. Despite Microsoft's substantial investment of $80 billion in capital expenditures, primarily directed towards data centre construction and equipment, the company reports that demand still exceeds supply. Even so, Azure remains the fastest-growing of the three major public cloud platforms.
Analysts' optimism on Microsoft also stems from the prospects of Copilot Studio. Whilst they acknowledge modest interest in Copilot, Microsoft 365's native AI assistant, they anticipate stronger performance from Copilot Studio, the customisable AI assistant platform. This allows Microsoft to implement higher pricing for its enterprise software package whilst maintaining customer loyalty.
The higher revenue can, in turn, be reinvested in Azure and in share buyback programmes, potentially boosting earnings per share by spreading higher profits across a lower share count.
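To see the buyback mechanics concretely, consider this toy Python calculation; every figure in it is hypothetical, chosen only to illustrate the effect, not taken from Microsoft's financials.

```python
# Toy illustration of how buybacks lift earnings per share (EPS).
net_income = 100e9       # hypothetical annual profit, USD
shares_before = 7.4e9    # hypothetical share count before buybacks
shares_after = 7.0e9     # hypothetical share count after buybacks

print(f"EPS before buybacks: ${net_income / shares_before:.2f}")  # ~$13.51
print(f"EPS after buybacks:  ${net_income / shares_after:.2f}")   # ~$14.29
# The same profit spread across fewer shares yields a higher EPS.
```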
Microsoft shares currently trade at approximately 33 times forward earnings, reflecting a relatively high valuation. However, this multiple appears justified for a company that holds leadership positions in both the cloud computing and enterprise software sides of the AI industry.
Following news of a potential reversal of US restrictions on chip exports to China, Oppenheimer analysts raised their Nvidia price target to $200 per share, suggesting a market capitalisation of $4.9 trillion. However, at current prices, Microsoft presents a more appealing investment opportunity, the report said.
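The same back-of-envelope arithmetic applies to the Nvidia target; the share count below is again an assumption (roughly Nvidia's post-split figure), not a number from the report.

```python
# Rough check of the revised Nvidia target (illustrative only).
nvda_shares = 24.4e9     # assumed shares outstanding (approximate post-split figure)
nvda_target = 200.0      # Oppenheimer price target, USD per share

print(f"Implied market cap: ${nvda_target * nvda_shares / 1e12:.2f} trillion")
# ~$4.88 trillion, consistent with the quoted $4.9 trillion
```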

Related Articles


Indian Express | 2 hours ago
Meet Lumo, the new AI chatbot that protects user privacy
Proton, the company behind the encrypted email service Proton Mail, has unveiled an AI chatbot focused on user privacy. Named Lumo, the chatbot can generate code, write emails, summarise documents, and much more. Proton positions it as a privacy-first alternative to ChatGPT, Gemini, Copilot and the like, preserving user privacy by storing data locally on users' devices.

Lumo is powered by several open-source large language models that run on Proton's servers in Europe, including Mistral's Nemo, Mistral Small 3, Nvidia's OpenHands 32B, and the Allen Institute for AI's OLMO 2 32B model. Lumo can route requests through different models depending on which is better suited to a query.

The company claims the chatbot protects information with 'zero-access' encryption, which grants the user an encryption key that gives them exclusive access to their data. This key blocks third parties, and even Proton itself, from accessing user content, meaning the company cannot share any personal information. Proton has reportedly used Transport Layer Security (TLS) encryption for data transmission and 'asymmetrically' encrypts prompts, allowing only the Lumo GPU servers to decrypt them.

As for features, Ghost mode ensures that active chat sessions are not saved, not even on local devices. Web search lets Lumo look up recent or new information on the internet to add to its current knowledge, although the feature is disabled by default to ensure privacy; once enabled, Lumo deploys privacy-friendly search engines to answer user queries. The chatbot can also understand and analyse uploaded files without keeping a record of them, and integration with Proton Drive makes it simple to add end-to-end encrypted files from your Proton Drive to your Lumo chats.

The chatbot comes in both a free and a premium version. Those without a Lumo or Proton account can ask 25 queries per week and cannot access chat histories. Users with a free account can ask up to 100 questions per week. The Lumo Plus plan, priced at $12.99 a month, comes with unlimited chats, an extended encrypted chat history, and more.


Mint | 3 hours ago
The new chips designed to solve AI's energy problem
"I can't wrap my head around it," says Andrew Wee, who has been a Silicon Valley data-center and hardware guy for 30 years. The "it" that has him so befuddled, irate even, is the projected power demands of future AI supercomputers, the ones that are supposed to power humanity's great leap forward.

Wee held senior roles at Apple and Meta, and is now head of hardware for cloud provider Cloudflare. He believes the current growth in energy required for AI, which the World Economic Forum estimates will be 50% a year through 2030, is unsustainable. "We need to find technical solutions, policy solutions and other solutions that solve this collectively," he says.

To that end, Wee's team at Cloudflare is testing a radical new kind of microchip from Positron, a startup founded in 2023 that has just announced a fresh round of $51.6 million in investment. These chips have the potential to be much more energy efficient than ones from industry leader Nvidia at the all-important task of inference, the process by which AI responses are generated from user prompts. While Nvidia chips will continue to be used to train AI for the foreseeable future, more efficient inference could collectively save companies tens of billions of dollars, and a commensurate amount of energy.

There are at least a dozen chip startups battling to sell cloud-computing providers the custom-built inference chips of the future. Then there are the well-funded, multiyear efforts by Google, Amazon and Microsoft to build inference-focused chips to power their own internal AI tools, and to sell to others through their cloud services. The intensity of these efforts, and the scale of the cumulative investment in them, show just how desperate every tech giant, along with many startups, is to provide AI to consumers and businesses without paying the "Nvidia tax": Nvidia's approximately 60% gross margin, the price of buying the company's hardware.

Nvidia is very aware of the growing importance of inference and of concerns about AI's appetite for energy, says Dion Harris, a senior director at Nvidia who sells the company's biggest customers on the promise of its latest AI hardware. Nvidia's latest Blackwell systems are between 25 and 30 times as efficient at inference, per watt of energy pumped into them, as the previous generation, he adds.

To accomplish their goals, makers of novel AI chips are using a strategy that has worked time and again: they are redesigning their chips from the ground up, expressly for the new class of tasks that is suddenly so important in computing. In the past, that was graphics, and that's how Nvidia built its fortune. Only later did it become apparent that graphics chips could be repurposed for AI, but arguably it's never been a perfect fit.

Jonathan Ross is chief executive of chip startup Groq, and previously headed Google's AI chip development program. He says he founded Groq (no relation to Elon Musk's xAI chatbot) because he believed there was a fundamentally different way of designing chips, solely to run today's AI models. Groq claims its chips can deliver AI much faster than Nvidia's best chips, and for between one-third and one-sixth as much power as Nvidia's, thanks to a unique design that embeds memory in the chips rather than keeping it separate.

While the specifics of how Groq's chips perform depend on any number of factors, the company's claim that it can deliver inference at a lower cost than is possible with Nvidia's systems is credible, says Jordan Nanos, an analyst at SemiAnalysis who spent a decade working for Hewlett Packard Enterprise.

Positron is taking a different approach to delivering inference more quickly. The company, which has already delivered chips to customers including Cloudflare, has created a simplified chip with a narrower range of abilities, in order to perform those tasks faster. Its latest funding round came from Valor Equity Partners, Atreides Management and DFJ Growth, and brings the total investment in the company to $75 million.

Positron's next-generation system will compete with Nvidia's next-generation system, known as Vera Rubin. Based on Nvidia's road map, Positron's chips will have two to three times better performance per dollar, and three to six times better performance per unit of electricity pumped into them, says Positron CEO Mitesh Agrawal.

Competitors' claims about beating Nvidia at inference often don't reflect all of the things customers take into account when choosing hardware, says Harris. Flexibility matters, and what companies do with their AI chips can change as new models and use cases become popular. Nvidia's customers "are not necessarily persuaded by the more niche applications of inference," he adds.

Cloudflare's initial tests of Positron's chips were encouraging enough to convince Wee to put them into the company's data centers for more long-term tests, which are continuing. It's something that only one other chip startup's hardware has warranted, he says. "If they do deliver the advertised metrics, we will open the spigot and allow them to deploy in much larger numbers globally," he adds.

By commoditizing AI hardware, and allowing Nvidia's customers to switch to more-efficient systems, the forces of competition might bend the curve of future AI power demand, says Wee. "There is so much FOMO right now, but eventually, I think reason will catch up with reality," he says.

One truism of the history of computing is that whenever hardware engineers figure out how to do something faster or more efficiently, coders, and consumers, figure out how to use all of the new performance gains, and then some. Mark Lohmeyer is vice president of AI and computing infrastructure for Google Cloud, where he provides both Google's own custom AI chips and Nvidia's to Google and its cloud customers. He says that consumer and business adoption of new, more demanding AI models means that no matter how much more efficiently his team can deliver AI, there is no end in sight to growth in demand for it. Like nearly all other big AI providers, Google is making efforts to find radical new ways to produce energy to feed that AI, including both nuclear power and fusion.

The bottom line: while new chips might help individual companies deliver AI more efficiently, the industry as a whole remains on track to consume ever more energy. As a recent report from Anthropic notes, that means energy production, not data centers and chips, could be the real bottleneck for future development of AI.


Indian Express | 4 hours ago
Huawei shows off AI computing system to rival Nvidia's top product
China's Huawei Technologies showed off an AI computing system on Saturday that one industry expert has said rivals Nvidia's most advanced offering, as the Chinese technology giant seeks to capture market share in the country's growing artificial intelligence sector.

The CloudMatrix 384 system made its public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth. The system has drawn close attention from the global AI community since Huawei first announced it in April. Industry analysts view it as a direct competitor to Nvidia's GB200 NVL72, the U.S. chipmaker's most advanced system-level product currently available in the market.

Dylan Patel, founder of semiconductor research group SemiAnalysis, said in an April article that Huawei now had AI system capabilities that could beat Nvidia. Huawei staff at its WAIC booth declined to comment when asked to introduce the CloudMatrix 384 system, and a company spokesperson did not respond to questions.

Huawei has become widely regarded as China's most promising domestic supplier of chips essential for AI development, even though the company faces U.S. export restrictions. Nvidia CEO Jensen Huang told Bloomberg in May that Huawei had been 'moving quite fast' and named the CloudMatrix as an example.

The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips and, according to SemiAnalysis, outperforms Nvidia's GB200 NVL72, which uses 72 B200 chips, on some metrics. The performance stems from Huawei's system design capabilities, which compensate for weaker individual chip performance through the use of more chips and system-level innovations, SemiAnalysis said. Huawei says the system uses a 'supernode' architecture that allows the chips to interconnect at super-high speeds, and in June, Huawei Cloud CEO Zhang Pingan said the CloudMatrix 384 system was operational on Huawei's cloud platform.
China's Huawei Technologies showed off an AI computing system on Saturday that one industry expert has said rivals Nvidia's most advanced offering, as the Chinese technology giant seeks to capture market share in the country's growing artificial intelligence sector. The CloudMatrix 384 system made its first public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth. The system has drawn close attention from the global AI community since Huawei first announced it in April. Industry analysts view it as a direct competitor to Nvidia's GB200 NVL72, the U.S. chipmaker's most advanced system-level product currently available in the market. Dylan Patel, founder of semiconductor research group SemiAnalysis, said in an April article that Huawei now had AI system capabilities that could beat Nvidia. Huawei staff at its WAIC booth declined to comment when asked to introduce the CloudMatrix 384 system. A spokesperson for Huawei did not respond to questions. Huawei has become widely regarded as China's most promising domestic supplier of chips essential for AI development, even though the company faces U.S. export restrictions. Nvidia CEO Jensen Huang told Bloomberg in May that Huawei had been 'moving quite fast' and named the CloudMatrix as an example. The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips and outperforms Nvidia's GB200 NVL72 on some metrics, which uses 72 B200 chips, according to SemiAnalysis. The performance stems from Huawei's system design capabilities, which compensate for weaker individual chip performance through the use of more chips and system-level innovations, SemiAnalysis said. Huawei says the system uses 'supernode' architecture that allows the chips to interconnect at super-high speeds and in June, Huawei Cloud CEO Zhang Pingan said the CloudMatrix 384 system was operational on Huawei's cloud platform.