What Is DeepSeek, the New Chinese OpenAI Rival?
A new Chinese AI model, created by the Hangzhou-based startup DeepSeek, has stunned the American AI industry by outperforming some of OpenAI's leading models, displacing ChatGPT at the top of the iOS app store, and usurping Meta as the leading purveyor of so-called open source AI tools. All of which has raised a critical question: despite American sanctions on Beijing's ability to access advanced semiconductors, is China catching up with the U.S. in the global AI race?
At a supposed cost of just $6 million to train, DeepSeek's new R1 model, released last week, matched the performance of OpenAI's o1 model on several math and reasoning benchmarks, even though o1 is the product of tens of billions of dollars in investment by OpenAI and its patron Microsoft.
The Chinese model is also cheaper for users. Access to its most powerful versions costs some 95% less than comparable offerings from OpenAI and its competitors. The upshot: the U.S. tech industry suddenly faces a potentially cheaper and more powerful challenger, unnerving investors, who sold off American tech stocks on Monday morning.
Yet not everyone is convinced. Some American AI researchers have cast doubt on DeepSeek's claims about how much it spent, and how many advanced chips it deployed to create its model.
Few, however, dispute DeepSeek's stunning capabilities. 'Deepseek R1 is AI's Sputnik moment,' wrote prominent American venture capitalist Marc Andreessen on X, referring to the moment in the Cold War when the Soviet Union managed to put a satellite in orbit ahead of the United States.
So, what is DeepSeek and what could it mean for U.S. tech supremacy?
DeepSeek was founded less than two years ago by the Chinese hedge fund High-Flyer as a research lab dedicated to pursuing Artificial General Intelligence, or AGI. A spate of open-source releases in late 2024 put the startup on the map, including the large language model 'V3', which outperformed all of Meta's open-source LLMs and rivaled OpenAI's closed-source GPT-4o.
At the time, Liang Wenfeng, the CEO, reportedly said that he had hired young computer science researchers with a pitch to 'solve the hardest questions in the world'—critically, without aiming for profits. Early signs were promising: DeepSeek's models were so efficient that its 2024 releases sparked a price war within the Chinese AI industry, forcing competitors to slash prices.
This year, that price war looks set to reach across the Pacific Ocean.
Yet DeepSeek's AI looks different from its U.S. competitors in one important way. Despite their high performance on reasoning tests, DeepSeek's models are constrained by China's restrictive policies regarding criticism of the ruling Chinese Communist Party (CCP). DeepSeek R1 refuses to answer questions about the 1989 massacre at Tiananmen Square in Beijing, for example. 'Sorry, that's beyond my current scope. Let's talk about something else,' the model said when queried by TIME.
At a moment when Google, Meta, Microsoft, Amazon and dozens of their competitors are preparing to spend further tens of billions of dollars on new AI infrastructure, DeepSeek's success has raised a troubling question: Could Chinese tech firms potentially match, or even surpass, their technical prowess while spending significantly less?
Meta, which plans to spend $65 billion on AI infrastructure this year, has already set up four 'war rooms' to analyze DeepSeek's models, seeking to understand how the Chinese firm managed to train a model so cheaply, and to use the insights to improve its own open-source Llama models, tech news site The Information reported over the weekend.
In the financial markets, Nvidia's stock price fell more than 15% on Monday morning on fears that fewer AI chips may be necessary to train powerful AI than previously thought. Other American tech stocks also traded lower.
'While [DeepSeek R1] is good news for users and the global economy, it is bad news for U.S. tech stocks,' says Luca Paolini, chief strategist at Pictet Asset Management. 'It may result in a nominal downsizing of capital investment in AI and pressure on margins, at a time when valuation and growth expectations are very stretched.'
But American tech hasn't lost—at least not yet.
For now, OpenAI's 'o1 Pro' model is still considered the most advanced in the world. The performance of DeepSeek R1, however, does suggest that China is much closer to the frontier of AI than previously thought, and that open-source models have just about caught up to their closed-source counterparts.
Perhaps even more worrying for companies like OpenAI and Google, whose models are closed source, is how much—or rather, how little—DeepSeek is charging consumers to access its most advanced models. OpenAI charges $60 per million 'tokens', or segments of words, output by its most advanced model, o1. By contrast, DeepSeek charges $2.19 for the same number of tokens from R1—nearly 30 times less.
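A quick back-of-the-envelope check makes the gap concrete. The per-token prices below are the ones cited above; the monthly workload is a hypothetical figure chosen purely for illustration, not any real customer's usage.

```python
# Back-of-the-envelope comparison of the output-token prices cited above.
O1_PRICE_PER_M = 60.00   # OpenAI o1: USD per 1 million output tokens
R1_PRICE_PER_M = 2.19    # DeepSeek R1: USD per 1 million output tokens

monthly_output_tokens = 500_000_000   # hypothetical workload: 500M output tokens per month

o1_cost = monthly_output_tokens / 1_000_000 * O1_PRICE_PER_M
r1_cost = monthly_output_tokens / 1_000_000 * R1_PRICE_PER_M

print(f"o1 bill:  ${o1_cost:,.2f}")                              # $30,000.00
print(f"R1 bill:  ${r1_cost:,.2f}")                              # $1,095.00
print(f"price ratio: {O1_PRICE_PER_M / R1_PRICE_PER_M:.1f}x")    # ~27.4x
```

At those list prices the ratio works out to roughly 27 to 1, which is where the 'nearly 30 times less' figure comes from.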
'It erodes the industrial base, it erodes the margin, it erodes the incentive for further capital investment into western [AI] scaling from private sources,' says Edouard Harris, the chief technology officer of Gladstone AI, an AI firm that works closely with the U.S. government.
DeepSeek's success was all the more explosive because it seemed to call into question the effectiveness of the U.S. government's strategy to constrain China's AI ecosystem by restricting the export of powerful chips, or GPUs, to Beijing. If DeepSeek's claims are accurate, it means China has the ability to create powerful AI models despite those restrictions, underlining the limits of the U.S. strategy.
DeepSeek has claimed it is constrained by access to chips, not cash or talent, saying it trained its models v3 and R1 using just 2,000 second-tier Nvidia chips. 'Money has never been the problem for us,' DeepSeek's CEO, Liang Wenfeng, said in 2024. 'Bans on shipments of advanced chips are the problem.' (Current U.S. policy makes it illegal to export to China the most advanced types of AI chips, the likes of which populate U.S. datacenters used by OpenAI and Microsoft.)
But are those claims true? 'My understanding is DeepSeek has 50,000 H100s,' Scale AI CEO Alexandr Wang recently told CNBC in Davos, referring to the highest-powered Nvidia GPU chips currently on the market. 'They can't talk about [them], because it is against the export controls that the U.S. has put in place.' (An H100 cluster of that size would cost in the region of billions of dollars.)
In a sign of how seriously the CCP is taking the technology, Liang, DeepSeek's CEO, met with China's Premier Li Qiang in Beijing last Monday. In that meeting, Liang reportedly told Li that DeepSeek needs more chips. 'DeepSeek only has access to a few thousand GPUs, and yet they're pulling this off,' says Jeremie Harris, CEO of Gladstone AI. 'So this raises the obvious question: what happens when they get an allocation from the Chinese Communist Party to proceed at full speed?'
Even though China might have achieved a startling level of AI capability with fewer chips, experts say more computing power will always remain a strategic advantage. On that front, the U.S. remains far ahead. 'It's never a bad thing to have more of it,' says Dean Ball, a research fellow at George Mason University. 'No matter how much you have of it, you will always use it.'
So where does all this leave the U.S.? The short answer: from Washington's perspective, in uncertain waters.
In the closing days of the Biden Administration, outgoing National Security Adviser Jake Sullivan warned that the speed of AI advancement was 'the most consequential thing happening in the world right now.' And just days into his new job, President Trump announced a new $500 billion venture, backed by OpenAI and others, to build the infrastructure vital for the creation of 'artificial general intelligence'— the next leap forward in AI, with systems advanced enough to make new scientific breakthroughs and reason in ways that have so far remained in the realm of science fiction.
Read More: What to Know About 'Stargate,' OpenAI's New Venture Announced by President Trump
And although questions remain about the future of U.S. chip restrictions on China, Washington's priorities were apparent in President Trump's AI executive order, also signed during his first week in office, which declared that 'it is the policy of the United States to sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security.'
Maintaining this dominance will mean, at least in part, understanding exactly what Chinese tech firms are doing—as well as protecting U.S. intellectual property, experts say.
'There's a good chance that DeepSeek and many of the other big Chinese companies are being supported by the [Chinese] government, in more than just a monetary way,' says Edouard Harris of Gladstone AI, who also recommended that U.S. AI companies harden their security measures.
Since December, OpenAI's new o1 and o3 models have smashed records on advanced reasoning tests designed to be difficult for AI models to pass.
Read More: AI Models Are Getting Smarter. New Tests Are Racing to Catch Up
DeepSeek R1 does something similar, and in the process exemplifies what many researchers say is a paradigm shift: instead of scaling the amount of computing power used to train the model, researchers scale the amount of time (and thus, computing power and electricity) the model uses to think about a response to a query before answering. It is this scaling of what researchers call 'test-time compute' that distinguishes the new class of 'reasoning models,' such as DeepSeek R1 and OpenAI's o1, from their less sophisticated predecessors. Many AI researchers believe there's plenty of headroom left before this paradigm hits its limit.
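To make the idea concrete, here is a deliberately toy sketch of what 'scaling test-time compute' means: an answer-generating routine that does better the larger its budget of hidden 'thinking' tokens. The `generate_reasoning` function is a stand-in for a real model, and the budgets and accuracy numbers are invented for illustration; this is not how R1 or o1 are actually invoked.

```python
# Illustrative only: a pretend "model" whose accuracy improves as it is
# allowed to spend more tokens thinking before it commits to an answer.
import random

def generate_reasoning(question: str, thinking_budget: int) -> str:
    """Pretend model: a bigger thinking budget raises the chance of a correct answer."""
    p_correct = min(0.95, 0.3 + 0.1 * (thinking_budget // 256))
    return "correct" if random.random() < p_correct else "wrong"

def estimated_accuracy(question: str, thinking_budget: int, trials: int = 2000) -> float:
    hits = sum(generate_reasoning(question, thinking_budget) == "correct"
               for _ in range(trials))
    return hits / trials

for budget in (256, 1024, 4096):   # tokens spent "thinking" before answering
    acc = estimated_accuracy("a hard math problem", budget)
    print(f"thinking budget {budget:>5} tokens -> estimated accuracy {acc:.2f}")
```

The point of the sketch is simply the trade-off it exposes: the same model, asked the same question, gets more reliable as it is allowed to burn more inference-time compute.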
Some AI researchers hailed DeepSeek's R1 as a breakthrough on the same level as DeepMind's AlphaZero, a 2017 model that became superhuman at the board games chess and Go purely by playing against itself and improving, rather than by observing any human games.
That's because R1 wasn't 'pretrained' on human-labeled data in the same way as other leading LLMs.
Instead, DeepSeek's researchers found a way to allow the model to bootstrap its own reasoning capabilities essentially from scratch.
'Rather than explicitly teaching the model on how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies,' they claim.
The finding is significant because it suggests that powerful AI capabilities might emerge more rapidly and with less human effort than previously thought, with just the application of more computing power. 'DeepSeek R1 is like GPT-1 of this scaling paradigm,' says Ball.
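To illustrate the 'incentives, not examples' idea in miniature: the toy sketch below scores candidate answers with an automatic, verifiable reward (is the final answer correct?) and shifts probability toward candidates that beat the group average. It loosely echoes the group-relative scoring DeepSeek describes in its technical reports, but the 'policy', the question, and every number here are invented for illustration; real reasoning-model training runs policy-gradient updates over a large language model, not over a four-entry dictionary.

```python
# Toy sketch of learning from incentives rather than human-labeled examples.
import random

QUESTION, GOLD = "What is 17 * 24?", "408"

# A fake "policy": a probability distribution over candidate final answers.
policy = {"408": 0.25, "398": 0.25, "418": 0.25, "4080": 0.25}

def reward(candidate: str, gold: str) -> float:
    """Rule-based, automatically verifiable reward: no human labels needed."""
    return 1.0 if candidate == gold else 0.0

def training_step(policy: dict, gold: str, group_size: int = 8, lr: float = 0.5) -> None:
    answers = list(policy)
    samples = random.choices(answers, weights=[policy[a] for a in answers], k=group_size)
    rewards = [reward(s, gold) for s in samples]
    baseline = sum(rewards) / len(rewards)        # group-average baseline
    for s, r in zip(samples, rewards):
        policy[s] *= 1.0 + lr * (r - baseline)    # nudge toward above-average reward
    total = sum(policy.values())
    for a in policy:                              # renormalize to a distribution
        policy[a] /= total

for _ in range(50):
    training_step(policy, GOLD)

print({a: round(p, 3) for a, p in policy.items()})  # mass concentrates on "408"
```

Run for a few dozen steps, the toy policy piles nearly all of its probability onto the correct answer without ever being shown a worked solution, which is the flavor of result the DeepSeek researchers are describing.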
Ultimately, China's recent AI progress, instead of usurping U.S. strength, might in fact be the beginning of a reordering—a step, in other words, toward a future where, instead of a hegemonic power, there are many competing centers of AI power.
'China will still have their own superintelligence(s) no more than a year later than the US, absent [for example] a war,' wrote Miles Brundage, a former OpenAI policy staffer, on X. 'So unless you want (literal) war, you need to have a vision for navigating multipolar AI outcomes.'
Write to Billy Perrigo at billy.perrigo@time.com.