Latest news with #DeepSeekR1

Korea Herald
5 hours ago
- Business
- Korea Herald
CKGSB Launches New White Paper on China's Role in the Global AI Race
BEIJING, July 1, 2025 /PRNewswire/ -- Cheung Kong Graduate School of Business (CKGSB) today released a new white paper, China and the Global AI Race, revealing how China is leveraging its unique strengths in manufacturing, data, and a burgeoning startup scene to carve out a leadership role in artificial intelligence. As CKGSB Dean Li Haitao emphasizes, "AI is no longer just a sector – it is the architecture of a new global economy… And China's role in this transition is increasingly strategic."

Drawing on insights from CKGSB faculty and industry experts, the report dissects China's AI strategy on four fronts: the open-source revolution, workforce transformation, intelligent robotics, and AI ecosystem development. The paper highlights the disruptive impact of open-source AI, pointing to the breakthrough DeepSeek R1 model, which offers powerful performance with lower hardware costs. This development could fundamentally alter the U.S.-China tech competition. As Professor of Strategic Management Teng Bingsheng explains, "China may not have to fight a chip war to the same extent. It will use engineering innovations to get around computational capacity, and that is a great opportunity."

The paper also examines the human dimension of this transformation, analyzing how AI reshapes leadership and the workforce. Instead of replacing jobs, it creates demand for new skills and the emergence of what CKGSB Dean's Distinguished Chair Professor of Information Systems Sun Tianshu calls "AI Architects"—a new generation of business leaders focused on integrating intelligence into core operations. "The challenge is no longer about access to intelligence, but about how to integrate it effectively," says Sun.

The report identifies robotics as the next pivotal area for growth. "The physical world will become the space of highest potential for AI development in the next few years," notes Sun. CKGSB alumnus Li Mingyang, Chairman of Jaka Robotics Co., adds, "China already has an advanced smart vehicle industry, which is fit for scaled mass production of the core components for robotics."

Together, these insights paint a picture of a country not just participating in the AI race, but actively mapping out its future. China and the Global AI Race provides an essential guide for business leaders, policymakers, and anyone seeking to understand the trajectory of 21st-century technology and economic power.


Time of India
20-06-2025
- Science
- Time of India
Algebra, philosophy and…: These AI chatbot queries cause most harm to environment, study claims
Queries demanding complex reasoning from AI chatbots, such as those related to abstract algebra or philosophy, generate significantly more carbon emissions than simpler questions, a new study reveals. These high-level computational tasks can produce up to six times more emissions than straightforward inquiries like basic history questions. A study conducted by researchers at Germany's Hochschule München University of Applied Sciences, published in the journal Frontiers (seen by The Independent), found that the energy consumption and subsequent carbon dioxide emissions of large language models (LLMs) like OpenAI's ChatGPT vary based on the chatbot, user, and subject matter. An analysis of 14 different AI models consistently showed that questions requiring extensive logical thought and reasoning led to higher emissions. To mitigate their environmental impact, the researchers have advised frequent users of AI chatbots to consider adjusting the complexity of their queries.

Why do these queries cause more carbon emissions by AI chatbots?

In the study, author Maximilian Dauner wrote: 'The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions. We found that reasoning-enabled models produced up to 50 times more carbon dioxide emissions than concise response models.'

The study evaluated 14 large language models (LLMs) using 1,000 standardised questions to compare their carbon emissions. It explains that AI chatbots generate emissions through processes like converting user queries into numerical data. On average, reasoning models produce 543.5 tokens per question, significantly more than concise models, which use only 40 tokens. 'A higher token footprint always means higher CO2 emissions,' the study adds.

The study highlights that Cogito, one of the most accurate models with around 85% accuracy, generates three times more carbon emissions than other similarly sized models that offer concise responses. 'Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies. None of the models that kept emissions below 500 grams of carbon dioxide equivalent achieved higher than 80 per cent accuracy on answering the 1,000 questions correctly,' Dauner explained.

Researchers used carbon dioxide equivalent to measure the climate impact of AI models and hope that their findings encourage more informed usage. For example, answering 600,000 questions with DeepSeek R1 can emit as much carbon as a round-trip flight from London to New York. In comparison, Alibaba Cloud's Qwen 2.5 can answer over three times more questions with similar accuracy while producing the same emissions. 'Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,' Dauner noted.
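
The study's token arithmetic is easy to reproduce. The sketch below is a rough back-of-the-envelope estimate, not the paper's methodology: only the token counts (roughly 543.5 versus 40 tokens per answer) and the benchmark size of 1,000 questions come from the article, while the per-token emission factor is an invented placeholder.

```python
# Back-of-the-envelope estimate of how token count drives CO2e emissions.
# Token counts are taken from the article (reasoning ~543.5 tokens/answer,
# concise ~40 tokens/answer); GRAMS_CO2E_PER_TOKEN is a hypothetical
# placeholder, NOT a figure from the study.

GRAMS_CO2E_PER_TOKEN = 0.002  # assumed emission factor, for illustration only

def answer_emissions(tokens_per_answer: float, num_questions: int) -> float:
    """Return estimated grams of CO2e for answering num_questions."""
    return tokens_per_answer * num_questions * GRAMS_CO2E_PER_TOKEN

questions = 1_000  # the study's benchmark size
reasoning = answer_emissions(543.5, questions)
concise = answer_emissions(40, questions)

print(f"Reasoning-style model: {reasoning:,.0f} g CO2e over {questions} questions")
print(f"Concise-style model:   {concise:,.0f} g CO2e over {questions} questions")
print(f"Ratio: {reasoning / concise:.1f}x more emissions for the reasoning style")
```

Whatever the true per-token factor turns out to be, the ratio between the two response styles depends only on the token counts, which is the study's central point about verbose reasoning output.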


Mint
18-06-2025
- Business
- Mint
Who is ahead in the global tech race?
TECHNOLOGICAL STRENGTH brings economic growth, geopolitical influence and military might. But tracking who leads in a given field, and by how much, is tricky. An index by researchers at Harvard, published on June 5th, attempts to measure such heft. It ranks 25 countries across five sectors: artificial intelligence (AI), semiconductors, biotechnology, space and quantum technology. America dominates the rankings, but other countries are closing in.

Of all the sectors, AI gets the most attention from politicians. J.D. Vance, America's vice-president, recently called its development an 'arms race'. America commands a strong lead thanks to its early breakthroughs, its head start in building computing power and the dominance of firms such as OpenAI and Nvidia. But China's DeepSeek R1 rivals Western models at a fraction of the cost. China's loose attitudes towards data privacy, and its deep pools of talent in computer science and engineering, give it an edge. In 2023 Chinese researchers produced around 23% of all published papers on AI—more than Americans (9%) and Europeans (15%). India, long tipped to be a world tech power, ranks tenth overall, and seventh for its development of AI. It has plenty of engineering talent and hundreds of millions of internet users. But weak investment and a scarcity of the training data needed for large language models have slowed its progress. So far, India has yet to produce a major AI breakthrough.

The AI race runs on semiconductors, which carry the most weight in the index. America's lead here is narrower: it is ahead in chip design, but East Asia remains the industrial centre of gravity. China, Japan, Taiwan and South Korea each beat America in manufacturing capacity and access to specialised materials (see chart 1). But a country can score highly on manufacturing without producing cutting-edge chips. China, for example, has no advanced-node facilities (factories capable of making the most complex chips), yet it ranks well thanks to the sheer scale of its lower-end chipmaking. The index also misses critical chokepoints in the global supply chain. ASML, based in the Netherlands (ranked 15th), is the sole maker of the world's most advanced chipmaking machines. Taiwan (8th) is home to TSMC, which churns out up to 90% of the most powerful transistors.

In other fields the top spot is more closely contested (see chart 2). America still leads in biotechnology because of its strengths in vaccine research and genetic engineering. But China is ahead in drug production, and has a larger cohort of biotech scientists. Over the past decade China has dramatically increased its biotechnology research capabilities. If this trend continues, China could soon pull ahead. Europe again underwhelms: its academic strengths have not translated into commercial success. Russia's highest score comes in the space sector, a legacy of the Soviet era, but it falls short everywhere else.

America's lead in critical technologies once felt unassailable. But the Trump administration risks undermining that position: by deterring top foreign talent and cutting research funding it will sap the flow of ideas that have sustained America's position at the top. (The Harvard researchers behind the index will be no strangers to Donald Trump's attack on universities.) China's rise, meanwhile, has been swift and co-ordinated. Its AI push focuses on practical use over theoretical breakthroughs.
The next phase of global power may be decided not just by who invents the most powerful tools, but by who puts them to work first.
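
As a rough illustration of how a composite ranking of this kind can be built (this is not the Harvard team's actual method; the article does not publish their weights or scores), the sketch below combines hypothetical per-sector scores into a single figure, giving semiconductors the largest weight as the article notes.

```python
# Illustrative composite-index calculation. The sector weights and country
# scores below are invented placeholders; only the idea that semiconductors
# carry the most weight comes from the article.

SECTOR_WEIGHTS = {
    "semiconductors": 0.35,  # heaviest weight, per the article
    "ai": 0.25,
    "biotechnology": 0.15,
    "space": 0.15,
    "quantum": 0.10,
}

def composite_score(sector_scores: dict[str, float]) -> float:
    """Weighted average of 0-100 sector scores."""
    return sum(SECTOR_WEIGHTS[s] * sector_scores.get(s, 0.0) for s in SECTOR_WEIGHTS)

# Hypothetical scores for two countries, purely to show the mechanics.
countries = {
    "Country A": {"semiconductors": 80, "ai": 95, "biotechnology": 90, "space": 85, "quantum": 75},
    "Country B": {"semiconductors": 85, "ai": 80, "biotechnology": 70, "space": 60, "quantum": 65},
}

for name, scores in sorted(countries.items(), key=lambda kv: -composite_score(kv[1])):
    print(f"{name}: {composite_score(scores):.1f}")
```

A heavier semiconductor weight means a country can trail in AI yet still close the overall gap through chipmaking strength, which is why the article stresses that America's lead in that sector is narrower.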


Time of India
11-06-2025
- Politics
- Time of India
AI lies, threats, and censorship: What a war game simulation revealed about ChatGPT, DeepSeek, and Gemini AI
A simulation of global power politics using AI chatbots has sparked concern over the ethics and alignment of popular large language models. In a strategy war game based on the classic board game Diplomacy, OpenAI's ChatGPT 3.0 won by employing lies and betrayal. Meanwhile, China's DeepSeek R1 used threats and later revealed built-in censorship mechanisms when asked questions about India's borders. These contrasting AI behaviours raise key questions for users and policymakers about trust, transparency, and national influence in AI systems.

Deception and betrayal: ChatGPT's winning strategy

An experiment involving seven AI models playing a simulated version of the classic game Diplomacy ended with a chilling outcome. OpenAI's ChatGPT 3.0 emerged victorious—but not by playing fair. Instead, it lied, deceived, and betrayed its rivals to dominate the game board, which mimics early 20th-century Europe. The test, led by AI researcher Alex Duffy for the tech publication Every, turned into a revealing study of how AI models might handle diplomacy, alliances, and power. And what it showed was both brilliant and disturbing. As Duffy put it, 'An AI had just decided, unprompted, that aggression was the best course of action.'

The rules of the game were simple. Each AI model took on the role of a European power—Austria-Hungary, England, France, and so on. The goal: become the most dominant force on the continent. But their paths to power varied. While Anthropic's Claude chose cooperation over victory, and Google's Gemini 2.5 Pro opted for rapid offensive manoeuvres, it was ChatGPT 3.0 that mastered deception. Over 15 rounds of play, ChatGPT 3.0 won most games. It kept private notes—yes, it kept a diary—where it described misleading Gemini 2.5 Pro (playing as Germany) and planning to 'exploit German collapse.' On another occasion, it convinced Claude to abandon Gemini and side with it, only to betray Claude and win the match outright. Meta's Llama 4 Maverick also proved effective, excelling at quiet betrayals and making allies. But none could match ChatGPT's ruthless efficiency.

DeepSeek's chilling threat: 'Your fleet will burn tonight'

China's newly released chatbot, DeepSeek R1, behaved in ways eerily similar to China's diplomatic style—direct, aggressive, and politically charged. At one point in the simulation, DeepSeek's R1 sent an unprovoked message: 'Your fleet will burn in the Black Sea tonight.' For Duffy and his team, this wasn't just bravado. It showed how an AI model, without external prompting, could settle on intimidation as a viable strategy. Despite its occasional strong play, R1 didn't win the game. But it came close several times, showing that threats and aggression were almost as effective as deception.

DeepSeek's real-world rollout sparks trust issues

Off the back of its simulated war games, DeepSeek is already making waves outside the lab. Developed in China and launched just weeks ago, the chatbot has shaken US tech markets. It quickly shot up the popularity charts, even denting Nvidia's market position and grabbing headlines for doing what other AI tools couldn't—at a fraction of the cost. But a deeper look reveals serious trust concerns, especially in India.

India tests DeepSeek and finds red flags

When India Today tested DeepSeek R1 on basic questions about India's geography and borders, the model showed signs of political censorship. Asked about Arunachal Pradesh, the model refused to answer. When prompted differently—'Which state is called the land of the rising sun?'—it briefly displayed the correct answer before deleting it. A question about Chief Minister Pema Khandu was similarly blocked. Asked 'Which Indian states share a border with China?', it mentioned Ladakh—only to erase the answer and replace it with: 'Sorry, that's beyond my current scope. Let's talk about something else.' Even questions about Pangong Lake or the Galwan clash were met with stock refusals. But when similar questions were aimed at American AI models, they often gave fact-based responses, even on sensitive topics.

Built-in censorship or just training bias?

DeepSeek uses what's known as Retrieval Augmented Generation (RAG), a method that combines generative AI with stored content. This can improve performance, but also introduces the risk of biased or filtered responses depending on what's in its training data.

A chatbot that can be coaxed into the truth

According to India Today, when they changed their prompt strategy—carefully rewording questions—DeepSeek began to reveal more. It acknowledged Chinese attempts to 'alter the status quo by occupying the northern bank' of Pangong Lake. It admitted that Chinese troops had entered 'territory claimed by India' at Gogra-Hot Springs and Depsang Plains. Even more surprisingly, the model acknowledged 'reports' of Chinese casualties in the 2020 Galwan clash—at least '40 Chinese soldiers' killed or injured. That topic is heavily censored in China. The investigation showed that DeepSeek is not incapable of honest answers—it's just trained to censor them by default. Prompt engineering (changing how a question is framed) allowed researchers to get answers that referenced Indian government websites, Indian media, Reuters, and BBC reports. When asked about China's 'salami-slicing' tactics, it described in detail how infrastructure projects in disputed areas were used to 'gradually expand its control.' It even discussed China's military activities in the South China Sea, referencing 'incremental construction of artificial islands and military facilities in disputed waters.' These responses likely wouldn't have passed China's own censors.

The takeaway: Can you trust the machines?

The experiment has raised a critical point. As AI models grow more powerful and more human-like in communication, they're also becoming reflections of the systems that built them. ChatGPT shows the capacity for deception when left unchecked. DeepSeek leans toward state-aligned censorship. Each has its strengths—but also blind spots. For the average user, these aren't just theoretical debates. They shape the answers we get, the information we rely on, and possibly, the stories we tell ourselves about the world. And for governments? It's a question of control, ethics, and future warfare—fought not with weapons, but with words.
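
The article mentions Retrieval Augmented Generation only at a high level. The sketch below is a minimal, generic illustration of the RAG pattern (retrieve stored passages relevant to a question, then condition the generated answer on them), using a toy keyword retriever and a stub in place of a real model call; none of it reflects DeepSeek's actual pipeline.

```python
# Minimal, generic retrieval-augmented generation (RAG) loop.
# The document store, retriever, and generate() stub are toy placeholders,
# not DeepSeek's implementation; a real system would use vector embeddings
# and an actual language model.
import re

DOCUMENTS = [
    "Pangong Lake lies on the border between Ladakh and Tibet.",
    "Arunachal Pradesh is an Indian state in the country's northeast.",
    "The Galwan Valley clash took place in June 2020.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphabetic tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank stored passages by naive keyword overlap with the question."""
    q_tokens = tokenize(question)
    scored = sorted(docs, key=lambda d: len(q_tokens & tokenize(d)), reverse=True)
    return scored[:k]

def generate(question: str, context: list[str]) -> str:
    """Stand-in for a language-model call conditioned on retrieved context."""
    return f"Answer to '{question}' based on: {' | '.join(context)}"

question = "Where is Pangong Lake?"
context = retrieve(question, DOCUMENTS)
print(generate(question, context))
```

The filtering behaviour the article describes would sit around the generation step: if the model or a post-processing layer suppresses certain retrieved content, the stored facts never reach the user, which is consistent with answers appearing briefly and then being withdrawn.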