
Alibaba launches new Qwen LLMs in China's latest open-source AI breakthrough
In a blog post, the Chinese tech giant said Qwen3 promises improvements in reasoning, instruction following, tool usage and multilingual tasks, rivaling other top-tier models such as DeepSeek's R1 in several industry benchmarks.
The LLM series includes eight models spanning a range of architectures and sizes, offering developers flexibility when using Qwen to build AI applications for edge devices like mobile phones.
Qwen3 also marks Alibaba's debut in so-called "hybrid reasoning models," which it says combine traditional LLM capabilities with "advanced, dynamic reasoning."
According to Alibaba, such models can seamlessly transition between a "thinking mode" for complex tasks such as coding and a "non-thinking mode" for faster, general-purpose responses.
"Notably, the Qwen3-235B-A22B MoE model significantly lowers deployment costs compared to other state-of-the-art models, reinforcing Alibaba's commitment to accessible, high-performance AI," Alibaba said.
The new models are already freely available for individual users on platforms like Hugging Face and GitHub, as well as Alibaba Cloud's web interface. Qwen3 is also being used to power Alibaba's AI assistant, Quark.
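For developers who want to experiment with the mode switching described above, the sketch below shows how the thinking toggle is commonly exposed through the Hugging Face transformers chat template. The checkpoint name (Qwen/Qwen3-0.6B) and the enable_thinking flag follow Qwen's published usage notes rather than anything in this article, so treat them as assumptions and check the current model card before relying on them.

    # Minimal sketch (not from the article): switching Qwen3 between "thinking"
    # and "non-thinking" mode via the Hugging Face transformers chat template.
    # Checkpoint name and enable_thinking flag are assumptions based on Qwen's
    # published usage notes; verify against the current model card.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen3-0.6B"  # assumed smallest open-weight variant
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"  # device_map needs accelerate
    )

    messages = [{"role": "user", "content": "Explain mixture-of-experts models in two sentences."}]

    # enable_thinking=True asks the model to emit an internal reasoning trace
    # before its answer; set it to False for faster, general-purpose replies.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
    )
    inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))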
AI analysts told CNBC that Qwen3 represents a serious challenge to Alibaba's counterparts in China, as well as to industry leaders in the U.S.
In a statement to CNBC, Wei Sun, principal analyst of artificial intelligence at Counterpoint Research, said the Qwen3 series is a "significant breakthrough—not just for its best-in-class performance" but also for several features that point to the "application potential of the models."
Those features include Qwen3's hybrid thinking mode, its multilingual support covering 119 languages and dialects, and its open-source availability, Sun added.
Open-source software generally refers to software whose source code is made freely available for anyone to modify and redistribute. At the start of this year, DeepSeek's open-source R1 model rocked the AI world and quickly became a catalyst for China's AI space and for open-source model adoption.
"Alibaba's release of the Qwen 3 series further underscores the strong capabilities of Chinese labs to develop highly competitive, innovative, and open-source models, despite mounting pressure from tightened U.S. export controls," said Ray Wang, a Washington-based analyst focusing on U.S.-China economic and technology competition.
According to Alibaba, Qwen has already become one of the world's most widely adopted open-source AI model series, attracting over 300 million downloads worldwide and more than 100,000 derivative models on Hugging Face.
Wang said that this adoption could continue with Qwen3, adding that its performance claims may make it the best open-source model globally — though still behind the world's most cutting-edge models like OpenAI's o3 and o4-mini.
Chinese competitors like Baidu have also rushed to release new AI models after the emergence of DeepSeek, including making plans to shift toward a more open-source business model.
Meanwhile, Reuters reported in February, citing anonymous sources, that DeepSeek is accelerating the launch of the successor to its R1 model.
"In the broader context of the U.S.-China AI race, the gap between American and Chinese labs has narrowed—likely to a few months, and some might argue, even to just weeks," Wang said.
"With the latest release of Qwen 3 and the upcoming launch of DeepSeek's R2, this gap is unlikely to widen—and may even continue to shrink."

Related Articles


The Hill
Trump's new ‘Monroe Doctrine' is driving China out of Latin America
Chinese President Xi Jinping was noticeably absent from the recent BRICS Summit in Brazil. It was a sign of the times. China is losing ground in Latin America, and many of its companies are packing up for Africa. These events could be the result of a change in the rules of the game — or perhaps just the leadership of President Trump and his new Monroe Doctrine.

The Trump administration has renewed and reinforced the Monroe Doctrine of 1823, establishing a zero-tolerance policy for interference from extracontinental powers in the Americas. These changes have forced China to reevaluate and redirect many of its multi-million-dollar projects in transportation, telecommunications, infrastructure and strategic minerals. Political setbacks also include countries seeking closer ties and stronger alliances with Taiwan, a nation that China claims as its own territory.

China is losing ground in Mexico. China's BYD, the world's largest electric car company and main competitor to Tesla, announced that it has canceled the construction of a huge electric vehicle plant in Mexico. The project would have had the capacity to produce up to 150,000 cars per year, generating millions of dollars for China. The company seems to recognize the Trump effect; the new U.S. leadership has led it to rethink its expansion plans in Latin America. "Geopolitical issues have a huge impact on the automotive industry," said Stella Li, vice president of BYD.

China also suffered a massive setback in Ecuador's mining sector. The firm Terraearth Resources canceled four projects after Ecuador's government decided to suspend exploration and exploitation activities due to noncompliance with environmental regulations.

The Chinese strategy for controlling supplies of lithium is also failing. BYD and Tsingshan have canceled plans to build lithium processing plants in Chile, investments worth around $500 million that were projected to generate 1,200 jobs. Lithium is essential for electric vehicles and is considered a strategic material in trade matters and, more importantly, in security issues. Chile has one of the largest lithium reserves in the world, and China lost lucrative business here.

The defeats and delays of Chinese firms' projects in Latin America are a result of the new and strengthened U.S. leadership. Secretary of State Marco Rubio's first international trip was not to Europe or Asia, but to Central America. A categorical message was sent: Latin America, and especially Central America, is a priority for the U.S. Rubio's visit resulted in the end of the Belt and Road Initiative agreement signed between Panama and China — an unprecedented defeat for the communist geopolitical game in the so-called Global South.

In Panama, the telecommunications company Huawei also suffered a blow. The Chinese firm, criticized for its ties to the People's Liberation Army, had to eliminate its systems in 13 strategic locations, which were replaced with American-made technology.

Costa Rica is also moving with the winds of change. The Foreign Trade Promotion Agency sent a delegation to Taiwan to explore business opportunities, particularly in the semiconductor sector, where Taipei is a world leader. Costa Rica's Intelligence and Security Office also participated in a training session in Taiwan. Both events generated strong diplomatic complaints from China. The changes in Panama and Costa Rica are not coincidental; they are strategic. These two nations are undisputed leaders in Central America, and what they do influences the region.

China knows it and is in panic mode. Trump's new Monroe Doctrine and leadership through strength are atypical, unpredictable and politically incorrect, but undeniably successful. China is showing clear signs of pressure and pain, reevaluating, restricting and rerouting many of its investments. It is losing the battle one day at a time.

Arturo McFields is an exiled journalist, former Nicaraguan ambassador to the Organization of American States, and a former member of the Norwegian Peace Corps. He is an alumnus of the National Defense University's Security and Defense Seminar and the Harvard Leadership course.


The Verge
Breaking down Trump's big gift to the AI industry
President Donald Trump's plan to promote America's AI dominance involves discouraging "woke AI," slashing state and federal regulations, and laying the groundwork to rapidly expand AI development and adoption.

Trump's proposal, released on July 23rd, is a sweeping endorsement of the technology, full of guidance that ranges from specific executive actions to directions for future research. Some of the new plan's provisions (like promoting open-source AI) have garnered praise from organizations that are often broadly critical of Trump, but the loudest acclaim has come from tech and business groups, whose members stand to gain from fewer restrictions on AI.

"The difference between the Trump administration and Biden's is effectively night and day," says Patrick Hedger, director of policy at tech industry group NetChoice. "The Biden administration did everything it could to command and control the fledgling but critical sector … The Trump AI Action Plan, by contrast, is focused on asking where the government can help the private sector, but otherwise, get out of the way."

Others are far more ambivalent. The Future of Life Institute, which led an Elon Musk-backed push for an AI pause in 2023, said it was heartened to see the Trump administration acknowledge that serious risks, like bioweapons or cyberattacks, could be exacerbated by AI. "However, the White House must go much further to safeguard American families, workers, and lives," says Anthony Aguirre, FLI's executive director. "By continuing to rely on voluntary safety commitments from frontier AI corporations, it leaves the United States at risk of serious accidents, massive job losses, extreme concentrations of power, and the loss of human control. We know from experience that Big Tech promises alone are simply not enough."

For now, here are the ways that Trump aims to promote AI.

Congress failed to pass a moratorium on states enforcing their own AI laws as part of a recent legislative package. But a version of that plan was resurrected in this document. "AI is far too important to smother in bureaucracy at this early stage, whether at the state or Federal level," the plan says. "The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation."

To do this, it suggests federal agencies that dole out "AI-related discretionary funding" should "limit funding if the state's AI regulatory regimes may hinder the effectiveness of that funding or award." It also suggests the Federal Communications Commission (FCC) "evaluate whether state AI regulations interfere with the agency's ability to carry out its obligations and authorities under the Communications Act of 1934."

The Trump administration also wants the Federal Trade Commission (FTC) to take a hard look at existing AI regulations and agreements to see what it can scale back. It recommends the agency reevaluate investigations launched during the Biden administration "to ensure that they do not advance theories of liability that unduly burden AI innovation," and suggests it could throw out burdensome aspects of existing FTC agreements.
Some AI-related actions taken during the Biden administration that the FTC might now reconsider include banning Rite Aid's use of AI facial recognition that allegedly falsely identified shoplifters, and taking action against AI-related claims the agency previously found to be deceptive.

Trump's plan includes policies designed to help encode his preferred politics in the world of AI. He's ordered a revision of the Biden-era National Institute of Standards and Technology (NIST) AI Risk Management Framework — a voluntary set of best practices for designing safe AI systems — removing "references to misinformation, Diversity, Equity, and Inclusion, and climate change." (The words "misinformation" and "climate change" don't actually appear in the framework, though misinformation is discussed in a supplementary file.)

In addition to that, a new executive order bans federal agencies from procuring what Trump deems "woke AI" or large language models "that sacrifice truthfulness and accuracy to ideological agendas," including things like racial equity.

This section of the plan "seems to be motivated by a desire to control what information is available through AI tools and may propose actions that would violate the First Amendment," says Kit Walsh, director of the Electronic Frontier Foundation (EFF). "The plan seeks to require that 'the government only contracts with' developers who meet the administration's ideological criteria. While the government can choose to purchase only services that meet such criteria, it cannot require that developers refrain from also providing non-government users other services conveying other ideas."

The administration describes the slow uptake of AI tools across the economy, including in sensitive areas like healthcare, as a "bottleneck to harnessing AI's full potential." The plan describes this cautious approach as one fueled by "distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards."

To promote the use of AI, the White House encourages a "'try-first' culture for AI across American industry." This includes creating domain-specific standards for adopting AI systems and measuring productivity increases, as well as regularly monitoring how US adoption of AI compares to international competitors.

The White House also wants to integrate AI tools throughout the government itself, including by detailing staff with AI expertise at various agencies to other departments in need of that talent, training government employees on AI tools, and giving agencies ample access to AI models. The plan also specifically calls out the need to "aggressively adopt AI within its Armed Forces," including by introducing AI curricula at military colleges and using AI to automate some work.

All this AI adoption will profoundly change the demand for human labor, the plan says, likely eliminating or fundamentally changing some jobs. The plan acknowledges that the government will need to help workers prepare for this transition period by retraining people for more in-demand roles in the new economy and providing tax benefits for certain AI training courses. On top of preparing to transition workers from traditional jobs that might be upended by AI, the plan discusses the need to train workers for the additional roles that might be created by it. Among the jobs that might be needed for this new reality are "electricians, advanced HVAC technicians, and a host of other high-paying occupations," the plan says.
The administration says it wants to "create a supportive environment for open models," or AI models that allow users to modify the code that underpins them. Open models have certain "pros," like being more accessible to startups and independent developers.

Groups like EFF and the Center for Democracy and Technology (CDT), which were critical of many other aspects of the plan, applauded this part. EFF's Walsh called it a "positive proposal" to promote "the development of open models and making it possible for a wider range of people to participate in shaping AI research and development. If implemented well, this could lead to a greater diversity of viewpoints and values reflected in AI technologies, compared to a world where only the largest companies and agencies are able to develop AI."

That said, there are also serious "cons" to the approach that the AI Action Plan didn't seem to get into. For instance, the nature of open models makes them easier to trick and misalign for purposes like creating misinformation on a large scale or developing chemical or biological weapons. It's easier to get past built-in safeguards with such models, and it's important to think critically about the tradeoffs before taking steps to drive open-source and open-weight model adoption at scale.

Trump signed an executive order on July 23rd meant to fast-track permitting for data center projects. The EO directs the commerce secretary to "launch an initiative to provide financial support" that could include loans, grants, and tax incentives for data centers and related infrastructure projects.

Following a similar move by former President Joe Biden, Trump's plan directs agencies to identify federal lands suitable for the "large-scale development" of data centers and power generation. The EO tells the Department of Defense to identify suitable sites on military installations and the Environmental Protection Agency (EPA) to identify polluted Superfund and Brownfield sites that could be reused for these projects.

The Trump administration is hellbent on dismantling environmental regulations, and the EO now directs the EPA to modify rules under the Clean Air Act, Clean Water Act, and Toxic Substances Control Act to expedite permitting for data center projects. The EO and the AI plan, similar to a Biden-era proposal, direct agencies to create "categorical exclusions" for federally supported data center projects that would exclude them from detailed environmental reviews under the National Environmental Policy Act. And they argue for using new AI tools to speed environmental assessments and applying the "Fast-41 process" to data center projects to streamline federal permitting.

The Trump administration is basically using the AI arms race as an excuse to slash environmental regulations for data centers, energy infrastructure, and computer chip factories. Last week, the administration exempted coal-fired power plants and facilities that make chemicals for semiconductor manufacturing from Biden-era air pollution regulations.

The plan admits that AI is a big factor "increasing pressures on the [power] grid." Electricity demand is rising for the first time in more than a decade in the US, thanks in large part to data centers — a trend that could trigger blackouts and raise Americans' electricity bills. Trump's AI plan lists some much-needed fixes to stabilize the grid, including upgrading power lines and managing how much electricity consumers use when demand spikes.
But the administration is saying that the US needs to generate more electricity to power AI just as it's stopping renewable energy growth, which is like trying to win a race in a vehicle with no front wheels. It wants to meet growing demand with fossil fuels and nuclear energy. "We will continue to reject radical climate dogma," the plan says. It argues for keeping existing, mostly fossil-fueled power plants online for longer and limiting environmental reviews to get data centers and new power plants online faster.

The lower cost of gas generation has been killing coal power plants for years, but now a shortage of gas turbines could stymie Trump's plans. New nuclear technologies that tech companies are investing in for their data centers probably won't be ready for commercial deployment until the 2030s at the earliest. Republicans, meanwhile, have passed legislation to hobble the solar and wind industries that have been the fastest-growing sources of new electricity in the US.

"Prioritize fundamental advancements in AI interpretability"

The Trump administration accurately notes that while developers and engineers know how today's advanced AI models work in a big-picture way, they "often cannot explain why a model produced a specific output. This can make it hard to predict the behavior of any specific AI system." It's aiming to fix that, at least when it comes to some high-stakes use cases. The plan states that the lack of AI explainability and predictability can lead to issues in defense, national security, and "other applications where lives are at stake," and it aims to promote "fundamental breakthroughs on these research problems."

The plan's recommended policy actions include launching a tech development program led by the Defense Advanced Research Projects Agency to advance AI interpretability, control systems, and security. It also said the government should prioritize fundamental advancements in such areas in its upcoming National AI R&D Strategic Plan and, perhaps most specifically, that the DOD and other agencies should coordinate an AI hackathon to allow academics to test AI systems for transparency, effectiveness, and vulnerabilities.

It's true that explainability and unpredictability are big issues with advanced AI. Elon Musk's xAI, which recently scored a large-scale contract with the DOD, struggled to stop its Grok chatbot from spouting pro-Hitler takes — so what happens in a higher-stakes situation? But the government seems unwilling to slow down while this problem is addressed. The plan states that since "AI has the potential to transform both the warfighting and back-office operations of the DOD," the US "must aggressively adopt AI within its Armed Forces if it is to maintain its global military preeminence."

The plan also discusses how to better evaluate AI models for performance and reliability, like publishing guidelines for federal agencies to conduct their own AI system evaluations for compliance and other reasons. That's something most industry leaders and activists support greatly, but it's clear what the Trump administration has in mind will lack a lot of the elements they have been pushing for. Evaluations likely will focus on efficiency and operations, according to the plan, and not instances of racism, sexism, bias, and downstream harms.

Courtrooms and AI tools mix in strange ways, from lawyers using hallucinated legal citations to an AI-generated appearance of a deceased victim.
The plan says that "AI-generated media" like fake evidence "may present novel challenges to the legal system," and it briefly recommends the Department of Justice and other agencies issue guidance on how to evaluate and deal with deepfakes in federal evidence rules.

Finally, the plan recommends creating new ways for the research and academic community to access AI models and compute. The way the industry works right now, many companies, and even academic institutions, can't access or pay for the amount of compute they need on their own, and they often have to partner with hyperscalers — providers of large-scale cloud computing infrastructure, like Amazon, Google, and Microsoft — to access it.

The plan wants to fix that issue, saying that the US "has solved this problem before with other goods through financial markets, such as spot and forward markets for commodities." It recommends collaborating with the private sector, as well as government departments and the National Science Foundation's National AI Research Resource pilot, to "accelerate the maturation of a healthy financial market for compute." It didn't offer any specifics or additional plans for that.


CNN
Nvidia wants Europe to catch up in the AI race
Europe has fallen behind China and the US in the development of AI capacity, producing less than 1% of the world's semiconductors needed for AI, but the EU hopes to produce 20% of the world's semiconductors by 2030. CNN's Anna Stewart spoke to Nvidia CEO Jensen Huang about the company's plans to build 20 AI factories across the continent.