Chinese scientists find first evidence that AI could think like a human

Chinese researchers have confirmed for the first time that artificial intelligence large language models can spontaneously create a humanlike system to comprehend and sort natural objects, a process considered a pillar of human cognition.
The finding adds new evidence to the debate over the cognitive capacity of AI models, suggesting that artificial systems reflecting key aspects of human thinking may be possible.
'Understanding how humans conceptualise and categorise natural objects offers critical insights into perception and cognition,' the team said in a paper published in the peer-reviewed journal Nature Machine Intelligence on Tuesday.
'With the advent of large language models (LLMs), a key question arises: can these models develop humanlike object representations from linguistic and multimodal data?'
LLMs are AI models trained on vast amounts of text data – along with visual and audio data in the case of multimodal large language models (MLLMs) – to perform a wide range of tasks.

Related Articles

Nvidia taps 2 young Chinese AI experts to strengthen research

South China Morning Post

14 hours ago

US chip giant Nvidia has hired two prominent artificial intelligence (AI) experts who hail from China, underscoring the rising global recognition of talent from the mainland and their key contributions to the field's advancement. Zhu Banghua and Jiao Jiantao, both alumni of China's Tsinghua University, said on their respective social media accounts that they had joined Nvidia, sharing photos of themselves with Jensen Huang, the company's founder and CEO.

Zhu, who received his bachelor's degree in electrical and electronics engineering from Tsinghua in 2018 and a PhD in electrical engineering and computer science from the University of California, Berkeley, in 2024, joined Nvidia's Nemotron team as a principal research scientist, according to his post on X over the weekend. His LinkedIn profile showed that he has also been an assistant professor at the University of Washington since September 2024. 'We'll be joining forces on efforts in [AI] model post-training, evaluation, agents, and building better AI infrastructure – with a strong emphasis on collaboration with developers and academia,' Zhu said, adding that the team was committed to open-sourcing its work and sharing it with the world.

Nemotron is a group at Nvidia dedicated to building enterprise-level AI agents, according to the team's official website. Its multimodal Nemotron models power AI agents with sophisticated text and visual reasoning, coding and tool-use capabilities.

Jiao, who received a PhD in electrical, electronics and communications engineering from Stanford University in 2018 after graduating from Tsinghua with a bachelor's degree in electrical engineering, said on LinkedIn over the weekend that he had joined Nvidia to 'help push the frontier of artificial general intelligence (AGI) and artificial super intelligence (ASI)'.

Deception, lies, blackmail: Is AI turning rogue? Experts alarmed over troubling outbursts

South China Morning Post

a day ago

The world's most advanced artificial intelligence models are exhibiting troubling new behaviours – lying, scheming, and even threatening their creators to achieve their goals. In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still do not fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed. This deceptive behaviour appears linked to the emergence of 'reasoning' models – AI systems that work through problems step by step rather than generating instant responses.
