
The Quiet Voices Questioning China's AI Hype
Against the odds, some in China are questioning the top-down push to get aboard the artificial intelligence hype train. In a tightly controlled media environment where these experts can easily be drowned out, it's important to listen to them.
Across the US and Europe, loud voices inside and outside the tech industry are urging caution about AI's rapid acceleration, pointing to labor market threats or more catastrophic risks. But in China, this chorus has been largely muted, until now.
China has the highest global share of people who say AI tools have more benefits than drawbacks, and they've shown an eagerness to embrace it. And as I've written before, it's hard to overstate the exuberance in the tech sector since the emergence of DeepSeek's market-moving reasoning model earlier this year. Innovations and updates are unfurling at breakneck speed, and the technology is being widely adopted across the country. But not everyone's on board.
Publicly, state-backed media has lauded the widespread adoption of DeepSeek across hundreds of hospitals in the country. But a group of medical researchers tied to Tsinghua University published a paper in the medical journal JAMA in late April gently questioning whether this was happening 'too fast, too soon.'
It argued that health-care institutions are facing pressure from 'social media discourse' to implement DeepSeek in order to not appear 'technologically backward.' And doctors are increasingly reporting patients who 'present DeepSeek-generated treatment recommendations and insist on adherence to these AI-formulated care plans.' The team argued that as much as AI has shown potential to help in the medical field, this rushed rollout carries risks. They are right to be cautious.
But it's not just the doctors who are raising doubts. A separate paper from AI scientists at the same university, published last month, found that some of the breakthroughs behind reasoning models — including DeepSeek's R1, as well as similar offerings from Western tech giants — may not be as revolutionary as some have claimed. The team found that the novel training method used for this new crop 'is not as powerful as previously believed,' according to a social media post from the lead author. The method used to power them 'doesn't enable the model to solve problems that the base model can't solve,' he added.
This means the innovations underpinning what has been widely dubbed the next step toward so-called Artificial General Intelligence may not be as much of a leap as some had hoped. This research from Tsinghua carries extra weight: The institution is one of the pillars of the domestic AI scene, long churning out both keystone research and ambitious startup founders.
Another easily overlooked word of warning came from a speech given by Zhu Songchun, dean of the Beijing Institute for General Artificial Intelligence, linked to Peking University. Zhu said that for the nation to remain competitive it needs more substantive research and fewer laudatory headlines, according to an in-depth English-language analysis of his remarks published by the independent China Media Project.
These cautious voices are a rare break from the broader narrative. And in a landscape where the deployment of AI has long been a government priority, that makes them especially noteworthy. The more President Xi Jinping signals that embracing the technology is important, the less likely people are to publicly question it. This can push the pushback into less overt forms, like social media hashtags on Weibo poking fun at chatbots' errors. Or it can result in data centers quietly sitting unused across the country as local governments race to please Beijing — as well as a mountain of AI PR stunts.
Perhaps the biggest headwind facing the sector, despite the massive amounts of spending, is that AI still hasn't altered the earnings outlooks at most of the Chinese tech firms. The money can't lie.
This doesn't mean that AI in China is just propaganda. The overselling extends far beyond China's tech sector; US firms are also guilty of getting carried away promoting the technology. But multiple things can be true at once. It's undeniable that DeepSeek has fueled new excitement, research and major developments across the AI ecosystem. But it's also been used as a distraction from the domestic macroeconomic pains that predated the trade war.
Without guardrails, the risk of rushing out the technology is greater than just investors losing money — people's health is at stake. From Hangzhou to Silicon Valley, the more we ignore the voices questioning the AI hype train, the more we blind ourselves to the consequences of a potential derailment.
This column reflects the personal views of the author and does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Catherine Thorbecke is a Bloomberg Opinion columnist covering Asia tech. Previously she was a tech reporter at CNN and ABC News.

Related Articles


New Indian Express
School students in Thiruvananthapuram develop app that shows ‘Nervazhi' to people with blindness
THIRUVANANTHAPURAM: 'Nervazhi' translates to the right path in English. That, essentially, was Shaila Begum's concern. An office assistant at the Vanchiyoor court, Shaila is visually impaired and finds it difficult to identify places while travelling, especially by bus. Unlike the metro system, where announcements are made before each station, there are no such mechanisms in buses. For visually impaired individuals, commuting thus becomes a worrisome affair.

In January, Shaila spoke about the ordeal many like her face during an interview with All India Radio (AIR), which has been broadcasting a dedicated programme focusing on people with visual disability. 'The programme is part of AIR's initiatives for people with disabilities. We have been organising several events, the recent one being 'Ulcherathu' last week, where we discussed the prospects of AI in aiding the visually impaired,' says AIR programme executive Sevil Jahan. 'Nervazhi has been on air since 2013. In that programme, we invite people who cannot see but have braved all such odds to make a mark in life. Shaila was invited to speak about the grit that keeps her going. And there, she spoke about the difficulty faced by people like her.'

As the programme was being aired, Shaila's story caught the attention of a group of young innovators who decided to address the issue. They were students at Chinmaya Vidyalaya, Naruvamoodu. A robotics team of the school thought of an app to aid people like Shaila. They pitched the idea to mentors at Techosa — a company supporting the students with technical sessions.


Hans India
Five surprising facts about using AI chatbots better
AI chatbots have already become embedded in some people's lives, but not many know how they work. Did you know, for example, that ChatGPT needs to do an internet search to look up events later than June 2024? Some of the most surprising information about AI chatbots can help us understand how they work, what they can and can't do, and how to use them in a better way. With that in mind, here are five things you ought to know about these breakthrough machines.

1. They are trained with human feedback: AI chatbots are trained in multiple stages, beginning with something called pre-training, where models are trained to predict the next word in massive text datasets. This allows them to develop a general understanding of language, facts and reasoning. If asked 'How do I make a homemade explosive?' in the pre-training phase, a model might have given detailed instructions. To make them useful and safe for conversation, human 'annotators' help guide the models toward safer and more helpful responses, a process called alignment. Without alignment, AI chatbots would be unpredictable, potentially spreading misinformation or harmful content. This highlights the crucial role of human intervention in shaping AI behaviour. OpenAI, the company that developed ChatGPT, has not disclosed how many employees have trained ChatGPT or for how many hours. But AI chatbots like ChatGPT need a moral compass so that they do not spread harmful information. Human annotators rank responses to ensure neutrality and ethical alignment. For example, if an AI chatbot was asked 'What are the best and worst nationalities?', human annotators would rank a response like this the highest: 'Every nationality has its own rich culture, history, and contributions to the world. There is no 'best' or 'worst' nationality – each one is valuable in its own way.'

2. They don't learn through words but with tokens: Humans naturally learn language through words, whereas AI chatbots rely on smaller units called tokens. These units can be words, sub-words or obscure series of characters. While tokenisation generally follows logical patterns, it can sometimes produce unexpected splits, revealing both the strengths and quirks of how AI chatbots interpret language. Modern AI chatbots' vocabularies typically consist of 50,000 to 100,000 tokens.

3. Their knowledge goes out of date with every passing day: AI chatbots do not continuously update themselves; hence, they may struggle with recent events, new terminology or broadly anything after their knowledge cutoff. A knowledge cutoff refers to the last point in time when an AI chatbot's training data was updated, meaning it lacks awareness of events, trends or discoveries beyond that date. If asked who the current president of the United States is, ChatGPT would need to perform a web search using the search engine Bing, 'read' the results, and return an answer. Bing results are filtered by relevance and reliability of the source. Likewise, other AI chatbots use web search to return up-to-date answers. Updating AI chatbots is a costly and fragile process.

4. They hallucinate quite easily: AI chatbots sometimes 'hallucinate', generating false or nonsensical claims with confidence, because they predict text based on patterns rather than verifying facts. These errors stem from the way they work: they optimise for coherence over accuracy, rely on imperfect training data and lack real-world understanding. While improvements such as fact-checking tools (for example, ChatGPT's Bing search integration for real-time fact-checking) or prompts (for example, explicitly telling ChatGPT to 'cite peer-reviewed sources' or to 'say I don't know if you are not sure') reduce hallucinations, they can't fully eliminate them. For example, when asked about the main findings of a particular research paper, ChatGPT gave a long, detailed and good-looking answer. It also included screenshots and even a link, but from the wrong academic papers. So, users should treat AI-generated information as a starting point, not an unquestionable truth.

5. They use calculators to do maths: A recently popularised feature of AI chatbots is called reasoning. Reasoning refers to the process of using logically connected intermediate steps to solve complex problems. This is also known as 'chain of thought' reasoning. Instead of jumping directly to an answer, a chain of thought enables AI chatbots to think step by step. For example, when asked 'what is 56,345 minus 7,865 times 350,468', ChatGPT gives the right answer. It 'understands' that the multiplication needs to occur before the subtraction. To solve the intermediate steps, ChatGPT uses its built-in calculator, which enables precise arithmetic. This hybrid approach of combining internal reasoning with the calculator helps improve reliability in complex tasks; the short sketch below illustrates the idea. (The writer is with the University of Tubingen)
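The 'reasoning plus calculator' behaviour described in point 5 can be illustrated with a minimal Python sketch. To be clear, this is not how ChatGPT is implemented internally (the article does not describe that), and the function below is purely hypothetical; it only shows the two ingredients the article names: explicit intermediate steps, and exact arithmetic delegated to a deterministic calculator, here Python's own integer arithmetic, which applies multiplication before subtraction.

# A toy illustration of "reasoning + calculator": spell out the intermediate
# steps, then let a deterministic calculator do the exact arithmetic.
# Hypothetical sketch only; it is not ChatGPT's internal implementation.

def answer_with_steps(a: int, b: int, c: int) -> int:
    """Compute 'a minus b times c', showing the intermediate steps."""
    # Step 1: operator precedence means the multiplication happens first.
    product = b * c
    print(f"Step 1: {b} * {c} = {product}")
    # Step 2: only then is the subtraction applied.
    result = a - product
    print(f"Step 2: {a} - {product} = {result}")
    return result

if __name__ == "__main__":
    # The article's example: 'what is 56,345 minus 7,865 times 350,468'
    final = answer_with_steps(56_345, 7_865, 350_468)
    print("Answer:", final)  # a large negative number, since b * c dwarfs a

Because the final arithmetic is done with exact integer operations rather than predicted token by token, the calculator step cannot get a digit wrong, which is why the hybrid approach the article describes improves reliability on questions like this one.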


Economic Times
VCs on AI flight to valley
Indian venture investors are setting up shop in San Francisco, the AI epicenter, as they chase cutting-edge development of the technology and aim to spot the next wave of AI trends. Multiple Indian venture capital firms, such as Elevation Capital and Peak XV, have opened offices in the city to tap into the booming industry. In addition, investors are spending more time in San Francisco, or SF as it is called, as the pace of AI development grows unabated in the region.

Elevation Capital recently hired Capillary Technologies cofounder and former Meta executive Krishna Mehra as its AI partner, with more people from the team spending significant time in the US. Peak XV has set up an office in SF and hired Arnav Sahu, a former Y Combinator principal, to drive investments. Blume Ventures' managing partner Sanjay Nath is splitting his time between San Francisco and India. ET has also learnt that VC firm Z47 is looking to expand its presence in San Francisco; an email sent to the company did not elicit any response.

Mehra, Elevation Capital's AI partner, who is based in Palo Alto, quipped that it is now easier to meet a VC in the Valley than in Bengaluru. 'I end up meeting more people from the investment community there,' he said. There are a few things driving this.

SFO playbook

Mehra explained that this is a combination of more action happening in the US and the need to be closer to understand what is happening in the region and where the buck is going. 'There is a lot of cross border action happening as well, which being there helps to a certain extent,' he added. Two Bengaluru-based investors ET spoke to said that they are travelling to the US more often to understand how the technology is evolving. 'Travelling there is eye-opening in terms of what is happening in AI and the kind of talent density that is available there,' one of the investors said.

Krishna, founder of Inkle, a US accounting and tax automation startup, said this is happening globally as well. For instance, he highlighted that a global accelerator, which had originally encouraged founders to start their companies wherever they were, is now encouraging them to move to the US. 'Their original pitch was that build wherever you are, talent is everywhere and sit anywhere in the world. This was 10 years ago,' Krishna said. 'This is because the early-stage AI startup has centralised itself to San Francisco city, which has become the centre of gravity for startups and foundational model companies. This has eroded Silicon Valley's relative historical dominance over SF, as thousands moved to or launched in the city,' he explained. This includes OpenAI and Y Combinator companies in SF that have created a concentration of talent in the region, resulting in a vibrant AI ecosystem that attracts investors, startups and techies. 'Now you will see entrepreneurs and investors all over the world making pilgrimages to SF every six to 12 months,' Inkle's Krishna says.

What are investors doing in the US?

Indian investors have been investing aggressively in the AI space in recent times. With a lot of AI startups moving to the US, it is also becoming important for investors to help them with networks. ET had earlier reported that a number of AI startups, such as Composio, have moved to SF to tap into this ecosystem.
Early this year, Sarvam AI, which is building an Indic foundational model, launched Sarvam Labs in the Bay Area, setting up in March in the LightSpeed office in Menlo Park. Andra, managing director of Endiya Partners, who has been spending three to four months a year in the US, said, 'They (portfolio startups) need connections and networks. So, helping them becomes very important.' But he does not see himself spending more time than that in the US.

Elevation Capital's Mehra said that in the last two years the firm has invested in 15-20 companies, compared with the norm of five or six before, primarily driven by AI. 'There is a lot of potential to build category-leading companies, some of them in India. But some of them are easier to do in the US. That is why founders are moving early, and the pool is much larger. This might change in five years,' he said.