
Sam Altman's trillion-dollar AI vision starts with 100 million GPUs. Here's what that means for the future of ChatGPT (and you)
That jaw-dropping number, casually mentioned on X just days after ChatGPT Agent launched and while we await GPT-5, is a glimpse into the scale of AI infrastructure that could transform everything from the speed of your chatbot to the stability of the global energy grid.
Altman admitted the 100 million GPU goal might be a bit of a stretch — he punctuated the comment with "lol" — but make no mistake, OpenAI is already on track to surpass 1 million GPUs by the end of 2025. And the implications are enormous.
"we will cross well over 1 million GPUs brought online by the end of this year! very proud of the team but now they better get to work figuring out how to 100x that lol" — Sam Altman on X, July 20, 2025
What does 100 million GPUs even mean?
(Image credit: Shutterstock)
For those unfamiliar, I'll start by explaining the GPU, or graphics processing unit. This is a specialized chip originally designed to render images and video. But in the world of AI, GPUs have become the powerhouse behind large language models (LLMs) like ChatGPT.
Unlike CPUs (central processing units), which handle one task at a time very efficiently, GPUs are built to perform thousands of simple calculations simultaneously. That parallel processing ability makes them perfect for training and running AI models, which rely on massive amounts of data and mathematical operations.
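To make that concrete, here's an illustrative sketch (not OpenAI's code) of the kind of workload involved. AI models spend most of their time applying the same arithmetic operation across huge arrays of numbers; a CPU works through the elements one at a time, while a GPU maps each element to its own core and processes them all at once.

```python
def scale_sequential(values, weight):
    """CPU-style: one multiply per step, strictly in order."""
    result = []
    for v in values:
        result.append(v * weight)
    return result


def scale_parallel_style(values, weight):
    """GPU-style in spirit: the operation is expressed over the whole
    array at once; real GPU hardware would run each element on its
    own core simultaneously."""
    return [v * weight for v in values]


# A tiny stand-in for the billions of values a real model processes.
activations = [0.5, 1.0, 2.0, 4.0]
print(scale_sequential(activations, 0.25))      # [0.125, 0.25, 0.5, 1.0]
print(scale_parallel_style(activations, 0.25))  # [0.125, 0.25, 0.5, 1.0]
```

Both functions produce identical results; the difference on real hardware is that the second pattern can be spread across thousands of GPU cores, which is why the same chip design that rendered video game frames now trains language models.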
So, when OpenAI says it's using over a million GPUs, it's essentially saying it has a vast digital brain made up of high-performance processors, working together to generate text, analyze images, simulate voices and much more.
To put it into perspective, 1 million GPUs already require enough energy to power a small city. Scaling that to 100 million could demand more than 75 gigawatts of power, around three-quarters of the entire UK power grid. It would also cost an estimated $3 trillion in hardware alone, not counting maintenance, cooling and data center expansion.
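Those headline figures check out with rough, back-of-envelope assumptions — roughly 750 watts of draw and about $30,000 of hardware cost per data-center GPU (my assumed round numbers for illustration, not figures OpenAI has published):

```python
# Assumed per-GPU figures for a rough estimate (not official numbers).
WATTS_PER_GPU = 750       # approximate draw per data-center GPU, with overhead
COST_PER_GPU = 30_000     # approximate all-in hardware cost in USD

gpus = 100_000_000

power_gw = gpus * WATTS_PER_GPU / 1e9        # watts -> gigawatts
cost_trillions = gpus * COST_PER_GPU / 1e12  # dollars -> trillions

print(f"{power_gw:.0f} GW")       # 75 GW
print(f"${cost_trillions:.0f}T")  # $3T
```

Nudge either assumption and the totals move, but the order of magnitude — tens of gigawatts and trillions of dollars — is hard to escape.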
This level of infrastructure would dwarf the current capacity of tech giants like Google, Amazon and Microsoft, and would likely reshape chip supply chains and energy markets in the process.
Why does it matter to you?
While a trillion-dollar silicon empire might sound like insider industry information, it has very real consequences for consumers. OpenAI's aggressive scaling could unlock:
Faster response times in ChatGPT and future assistants
More powerful AI agents that can complete complex, multi-step tasks
Smarter voice assistants with richer, real-time conversations
The ability to run larger models with deeper reasoning, creativity, and memory
In short, the more GPUs OpenAI adds, the more capable ChatGPT (and similar tools) can become.
But there's a tradeoff: all this compute comes at a cost. Subscription prices could rise.
Feature rollouts may stall if GPU supply can't keep pace. And environmental concerns around energy use and emissions will only grow louder.
The race for silicon dominance
(Image credit: Shutterstock)
Altman's tweets arrive amid growing competition between OpenAI and rivals like Google DeepMind, Meta and Anthropic.
All are vying for dominance in AI model performance, and all rely heavily on access to high-performance GPUs, mostly from Nvidia.
OpenAI is reportedly exploring alternatives, including Google's TPUs, Oracle's cloud and potentially even custom chips.
More than speed, this growth is about independence, control and the ability to scale models that could one day rival human reasoning.
Looking ahead at what's next
(Image credit: ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)
Whether OpenAI actually hits 100 million GPUs or not, it's clear the AI arms race is accelerating.
For everyday users, that means smarter AI tools are on the horizon, but so are bigger questions about power, privacy, cost and sustainability.
So the next time ChatGPT completes a task in seconds or holds a surprisingly humanlike conversation, remember: somewhere behind the scenes, thousands (maybe millions) of GPUs are firing up to make that possible, and Sam Altman is already thinking about multiplying that by 100.
Follow Tom's Guide on Google News to get our up-to-date news, how-tos, and reviews in your feeds. Make sure to click the Follow button.
Related Articles
Yahoo
AI referrals to top websites were up 357% year-over-year in June, reaching 1.13B
AI referrals to websites still have a way to go to catch up to the traffic that Google Search provides, but they're growing quickly. According to new data from market intelligence provider Similarweb, AI platforms in June generated over 1.13 billion referrals to the top 1,000 websites globally, a figure that's up 357% since June 2024. Google Search, however, still accounts for the majority of traffic to these sites, driving 191 billion referrals during the same period.

One category of particular interest these days is news and media. Online publishers are seeing traffic declines and are preparing for a day they're calling "Google Zero," when Google stops sending traffic to websites. For instance, The Wall Street Journal recently reported on data showing how AI Overviews were killing traffic to news sites. Plus, a Pew Research Center study out this week found that in a survey of 900 U.S. Google users, 18% of some 69,000 searches showed AI Overviews, which led to users clicking links 8% of the time. When there was no AI summary, users clicked links nearly twice as often, or 15% of the time.

Similarweb found that June's AI referrals to news and media websites were up 770% since June 2024. Some sites will naturally rank higher than others that block access to AI platforms, as The New York Times does as a result of its lawsuit against OpenAI over the use of its articles to train models. In the news media category, Yahoo led with 2.3 million AI referrals in June 2025, followed by Yahoo Japan (1.9M), Reuters (1.8M), The Guardian (1.7M), India Times (1.2M), and Business Insider (1.0M).

In terms of methodology, Similarweb counts AI referrals as web referrals to a domain from an AI platform like ChatGPT, Gemini, DeepSeek, Grok, Perplexity, Claude, and Liner. ChatGPT dominates here, accounting for more than 80% of AI referrals to the top 1,000 domains.
The company's analysis also looked at categories beyond news, including e-commerce, science and education, tech/search/social media, arts and entertainment, and business. In e-commerce, Amazon led with 4.5M referrals during June, followed by Etsy (2.0M) and eBay (1.8M). Among the top tech and social sites, Google, not surprisingly, topped the list with 53.1 million referrals in June, followed by Reddit (11.1M), Facebook (11.0M), GitHub (7.4M), Microsoft (5.1M), Canva (5.0M), Instagram (4.7M), LinkedIn (4.4M), Bing (3.1M), and Pinterest (2.5M). The analysis excluded the OpenAI website because so many of its referrals came from ChatGPT pointing to its own services. Across all other domains, the No. 1 site by AI referrals for each category included YouTube (31.2M), Research Gate (3.6M), Zillow (776.2K), (992.9K), Wikipedia (10.8M), (5.2M), (1.2M), Home Depot (1.2M), Kayak (456.5K), and Zara (325.6K).


Tom's Guide
GPT-5 could be OpenAI's most powerful model yet — here's what early testing reveals
The next major language model for ChatGPT may be closer than we think, and early feedback suggests GPT-5 could be a serious upgrade. According to a new report from The Information, someone who's tested the unreleased model described it as a significant step forward in performance.

While OpenAI hasn't confirmed when GPT-5 will launch inside ChatGPT or its API platform, CEO Sam Altman recently acknowledged using the model and enjoying the experience. That alone hints that OpenAI is preparing to roll out a more powerful assistant; one designed to improve in areas where earlier versions have started to plateau.

The report suggests GPT-5 blends OpenAI's traditional GPT architecture with elements from its reasoning-focused 'o' models. That would give it the flexibility to adjust how much effort it puts into different tasks, doing quick work on easy queries but applying deeper reasoning to complex problems. This approach mirrors Anthropic's Claude models, which already let users fine-tune how much 'thinking' the model does. In GPT-5's case, this could mean faster responses when you're asking something simple, and more thoughtful output for challenges like debugging code or solving abstract math problems.

One of GPT-5's biggest reported strengths is software engineering. According to The Information, the model handles both academic coding challenges and real-world tasks, such as editing complex, outdated codebases, more effectively than previous GPT versions. That could make it especially appealing to developers, many of whom currently rely on competitors like Anthropic's Claude.

A person who tested GPT-5 told The Information it outperformed Claude Sonnet 4 in side-by-side comparisons. That's just one data point, and Claude Opus 4 is still considered Anthropic's most advanced model, but it signals OpenAI is serious about reclaiming ground in this space.
Here's where things get a little murky. Some researchers speculate GPT-5 might not be a single, brand-new model, but instead a routing system that dynamically selects the best model, GPT-style or reasoning-based, depending on your prompt. If that's true, it could signal a shift away from scaling traditional LLMs toward optimizing post-training performance through reinforcement learning and synthetic data. That's where models are fine-tuned using expert feedback after training, and it's an area where OpenAI has been investing heavily.

If GPT-5 lives up to early reports, it could help OpenAI win back developer mindshare and chip away at Anthropic's dominance in coding assistants; a market that could be worth hundreds of millions annually. It would also strengthen OpenAI's pitch to enterprise users and give its chip suppliers, like Nvidia, another reason to celebrate.

For users of ChatGPT, the biggest change could be more efficient and accurate answers across the board, especially for bigger tasks that current models still struggle with. We'll have to wait and see what OpenAI officially announces in the coming weeks, but if GPT-5 is as strong as it sounds, the next wave of AI tools could be the most capable yet.


TechCrunch
Meta names Shengjia Zhao as chief scientist of AI superintelligence unit
Meta CEO Mark Zuckerberg announced Friday that former OpenAI researcher Shengjia Zhao will lead research efforts at the company's new AI unit, Meta Superintelligence Labs (MSL). Zhao contributed to several of OpenAI's largest breakthroughs, including ChatGPT, GPT-4, and the company's first AI reasoning model, o1.

'I'm excited to share that Shengjia Zhao will be the Chief Scientist of Meta Superintelligence Labs,' Zuckerberg said in a post on Threads Friday. 'Shengjia co-founded the new lab and has been our lead scientist from day one. Now that our recruiting is going well and our team is coming together, we have decided to formalize his leadership role.'

Zhao will set a research agenda for MSL under the leadership of Alexandr Wang, the former CEO of Scale AI who was recently hired to lead the new unit.

'We are excited to announce that @shengjia_zhao will be the Chief Scientist of Meta Superintelligence Labs! Shengjia is a brilliant scientist who most recently pioneered a new scaling paradigm in his research. He will lead our scientific direction for our team. Let's go 🚀' — Alexandr Wang (@alexandr_wang) July 25, 2025

Wang, who does not have a research background, was viewed as a somewhat unconventional choice to lead an AI lab. The addition of Zhao, a reputable research leader known for developing frontier AI models, rounds out the leadership team. To further fill out the unit, Meta has hired several high-level researchers from OpenAI, Google DeepMind, Safe Superintelligence, Apple, and Anthropic, as well as pulling researchers from Meta's existing FAIR and GenAI units.

Zuckerberg notes in his post that Zhao has pioneered several breakthroughs, including a 'new scaling paradigm.' The Meta CEO is likely referencing Zhao's work on OpenAI's reasoning model, o1, on which he is listed as a foundational contributor alongside OpenAI co-founder Ilya Sutskever. Meta currently doesn't offer a competitor to o1, so AI reasoning models are a key area of focus for MSL.
The Information reported in June that Zhao would be joining Meta Superintelligence Labs alongside three other influential OpenAI researchers: Jiahui Yu, Shuchao Bi, and Hongyu Ren. Meta has also recruited Trapit Bansal, another OpenAI researcher who worked on AI reasoning models with Zhao, as well as three employees from OpenAI's Zurich office who worked on multimodality.

Zuckerberg has gone to great lengths to set MSL up for success. The Meta CEO has been on a recruiting spree to staff up his AI superintelligence labs, which has entailed sending personal emails to researchers and inviting prospects to his Lake Tahoe estate. Meta has reportedly offered some researchers eight- and nine-figure compensation packages, some of which are 'exploding offers' that expire in a matter of days.

Meta has also upped its investment in cloud computing infrastructure, which should help MSL conduct the massive training runs required to create competitive frontier AI models. By 2026, Zhao and MSL's researchers should have access to Meta's one gigawatt cloud computing cluster, Prometheus, located in Ohio.
Once online, Meta will be one of the first technology companies with an AI training cluster of Prometheus' size — one gigawatt is enough power to supply more than 750,000 homes. That should help Meta conduct the massive training runs required to create frontier AI models.

With the addition of Zhao, Meta now has two chief AI scientists, the other being Yann LeCun, the leader of Meta's FAIR. Unlike MSL, FAIR is designed to focus on long-term AI research — techniques that may be used five to 10 years from now. How exactly Meta's AI units will work together remains to be seen. Nevertheless, Meta now seems to have a formidable AI leadership team to compete with OpenAI and Google.