
Latest news with #中国科学院 (Chinese Academy of Sciences)

Human heart structure beats 21 days in pig embryo, Chinese chimera research team says

South China Morning Post

04-07-2025

  • Health
  • South China Morning Post


Chinese scientists have, for the first time, cultivated a beating heart structure with human cells in a pig embryo, reporting that the heart continued to beat for 21 days unaided.

The study, led by Lai Liangxue's team from the Guangzhou Institutes of Biomedicine and Health under the Chinese Academy of Sciences, was announced at the International Society for Stem Cell Research's annual meeting in Hong Kong on June 12. Previously, the team had cultivated human kidneys in pigs for up to 28 days.

According to a report in Nature on June 13, the team reprogrammed human stem cells by introducing genes to prevent cell death and improve their survival in pigs. At the early blastocyst stage – early in pregnancy, when a ball of cells forms – they implanted the pre-modified human stem cells into pig embryos, which were then transferred to surrogate sows.

Chinese scientists claim AI is capable of spontaneous human-like understanding

Yahoo

16-06-2025

  • Science
  • Yahoo


Chinese researchers claim to have found evidence that large language models (LLMs) can comprehend and process natural objects as humans do. This, they suggest, happens spontaneously, without the models being explicitly trained to do so.

According to the researchers from the Chinese Academy of Sciences and the South China University of Technology in Guangzhou, some AIs (such as ChatGPT or Gemini) can mirror a key part of human cognition: sorting information. Their study, published in Nature Machine Intelligence, investigated whether LLMs can develop cognitive processes similar to human object representation – in other words, whether LLMs can recognize and categorize things based on function, emotion, environment, and so on.

To find out, the researchers gave AIs 'odd-one-out' tasks using either text (for ChatGPT-3.5) or images (for Gemini Pro Vision), collecting 4.7 million responses across 1,854 natural objects (such as dogs, chairs, apples, and cars). They found that the models created 66 conceptual dimensions to organize the objects, much as humans do. These dimensions extended beyond basic categories (such as 'food') to encompass complex attributes, including texture, emotional relevance, and suitability for children.

The scientists also found that multimodal models (combining text and image) aligned even more closely with human thinking, as they process visual and semantic features simultaneously. Furthermore, brain scan (neuroimaging) data revealed an overlap between how AI systems and the human brain respond to objects.

The findings appear to provide evidence that AI systems might be capable of genuinely 'understanding' in a human-like way, rather than merely mimicking responses. They also suggest that future AIs could have more intuitive, human-compatible reasoning, which is essential for robotics, education, and human-AI collaboration.
However, it is important to note that LLMs do not understand objects the way humans do, emotionally or experientially. AIs work by recognizing patterns in language or images that often correspond closely to human concepts. While that may look like 'understanding' on the surface, it is not grounded in lived experience or sensory-motor interaction. And while some AI representations may correlate with brain activity, this does not mean the models 'think' like humans or share the brain's architecture. If anything, they are a sophisticated facsimile of human pattern recognition rather than a thinking machine – a mirror made from millions of books and pictures, reflecting learned patterns back at the user.

Still, the study's findings suggest that LLMs and humans might be converging on similar functional patterns, such as organizing the world into categories. This challenges the view that AIs can only 'appear' smart by repeating patterns in their training data. If, as the study argues, LLMs are starting to build conceptual models of the world independently, we could be edging closer to artificial general intelligence (AGI) – a system that can think and reason across many tasks like a human. The study is available in the journal Nature Machine Intelligence.
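To make the 'odd-one-out' task concrete: in each trial, a model is shown three objects and asked which one does not belong; millions of such judgments reveal the similarity structure the model uses internally. The sketch below is purely illustrative, not the study's actual pipeline (the researchers queried LLMs directly with text or images): it uses hypothetical, hand-made embedding vectors and cosine similarity, picking as the odd item the one least similar to the other two.

```python
# Illustrative odd-one-out judgment over hypothetical embedding vectors.
# Assumption: objects are represented as feature vectors; the odd item is
# the one with the lowest total cosine similarity to the other two.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def odd_one_out(items):
    """items: dict mapping object name -> embedding vector (toy data)."""
    scores = {}
    for name, vec in items.items():
        others = [v for n, v in items.items() if n != name]
        scores[name] = sum(cosine(vec, v) for v in others)
    # The odd one out is the item least similar to the rest.
    return min(scores, key=scores.get)

# Hand-made toy vectors: 'dog' and 'cat' are close; 'chair' stands apart.
triplet = {
    "dog":   [0.90, 0.80, 0.10],
    "cat":   [0.85, 0.75, 0.15],
    "chair": [0.10, 0.20, 0.95],
}
print(odd_one_out(triplet))  # → chair
```

Aggregating such triplet judgments at scale is what lets researchers recover the low-dimensional conceptual space (the study reports 66 dimensions) that best explains a model's choices.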
