Chinese scientists claim AI is capable of spontaneous human-like understanding

Yahoo · 16-06-2025
Chinese researchers claim to have found evidence that large language models (LLMs) can comprehend and process natural objects in much the same way humans do. This, they suggest, happens spontaneously, even without the models being explicitly trained to do so.
According to the researchers from the Chinese Academy of Sciences and South China University of Technology in Guangzhou, some AIs (like ChatGPT or Gemini) can mirror a key part of human cognition, which is sorting information.
Their study, published in Nature Machine Intelligence, investigated whether LLMs can develop cognitive processes similar to human object representation; in other words, whether they can recognize and categorize things based on function, emotion, environment, and so on.
To discover if this is the case, the researchers gave AIs 'odd-one-out' tasks using either text (for ChatGPT-3.5) or images (for Gemini Pro Vision). In total, they collected 4.7 million responses across 1,854 natural objects (like dogs, chairs, apples, and cars).
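To give a sense of what such a behavioural probe looks like in practice, here is a minimal sketch of a text-based odd-one-out trial. The prompt wording, the query_llm placeholder, and the tiny object list are illustrative assumptions, not the study's actual materials or code.

```python
# Minimal sketch of a text-based odd-one-out trial; query_llm is a placeholder
# for a real chat-model API call, and the objects are stand-ins for the study's
# 1,854 natural objects.
import itertools
import random

objects = ["dog", "chair", "apple", "car"]

def query_llm(prompt: str, options) -> str:
    """Placeholder for a call to a model such as ChatGPT; here it just guesses."""
    return random.choice(list(options))

def odd_one_out_trial(triplet) -> str:
    a, b, c = triplet
    prompt = (
        f"Here are three objects: {a}, {b}, {c}. "
        "Which one is the odd one out? Answer with a single word."
    )
    return query_llm(prompt, triplet)

# One judgment per triplet; at the study's scale this kind of loop produces
# millions of responses that can then be analysed for conceptual structure.
responses = {t: odd_one_out_trial(t) for t in itertools.combinations(objects, 3)}
print(responses)
```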
They found that the models created sixty-six conceptual dimensions to organize objects, much as humans do. These dimensions extended beyond basic categories (such as 'food') to encompass complex attributes, including texture, emotional relevance, and suitability for children.
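As an illustration of how conceptual dimensions can, in principle, be recovered from large numbers of odd-one-out choices, here is a toy sketch in the spirit of sparse embedding models used in this research area. The random triplet data, the toy sizes, and the training loop are assumptions made for illustration; this is not the authors' method or code.

```python
# Toy sketch: learn a low-dimensional, nonnegative embedding for objects from
# odd-one-out choices. Random data and sizes are used purely for illustration.
import torch

n_objects, n_dims = 50, 10                    # toy sizes; the paper reports 66 dimensions
emb = torch.nn.Parameter(torch.rand(n_objects, n_dims))
opt = torch.optim.Adam([emb], lr=0.05)

# Fake data: for each triplet (i, j, k), object k was judged the odd one out,
# i.e. the pair (i, j) was kept together.
triplets = torch.randint(0, n_objects, (1000, 3))

def pair_sim(a, b):
    return (emb[a] * emb[b]).sum(dim=1)       # dot-product similarity between objects

for step in range(200):
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    # Softmax over the three pair similarities: the kept pair (i, j) should win.
    logits = torch.stack([pair_sim(i, j), pair_sim(i, k), pair_sim(j, k)], dim=1)
    targets = torch.zeros(len(triplets), dtype=torch.long)   # class 0 = (i, j) kept
    loss = torch.nn.functional.cross_entropy(logits, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        emb.clamp_(min=0)                     # nonnegativity encourages interpretable dimensions
```

After training on real judgments, each embedding column can be read as a candidate conceptual dimension, which is broadly how dimensions like texture or emotional relevance get identified and then labelled by inspection.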
The scientists also found that multimodal models (combining text and image) aligned even more closely with human thinking, as AIs process both visual and semantic features simultaneously. Furthermore, the team discovered that brain scan data (neuroimaging) revealed an overlap between how AI and the human brain respond to objects.
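The neuroimaging comparison mentioned above is typically done with representational similarity analysis: checking whether pairs of objects that look similar to the model also evoke similar brain responses. Below is a generic sketch of that style of analysis using random placeholder data; it illustrates the technique in general, not the study's actual pipeline or data.

```python
# Generic representational similarity analysis (RSA) sketch with placeholder data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects = 100

model_embeddings = rng.normal(size=(n_objects, 66))   # e.g. a model's conceptual dimensions
brain_responses = rng.normal(size=(n_objects, 500))   # e.g. voxel responses to the same objects

# Representational dissimilarity matrices: one dissimilarity value per object pair.
model_rdm = pdist(model_embeddings, metric="correlation")
brain_rdm = pdist(brain_responses, metric="correlation")

# Rank correlation between the two RDMs: higher means the model organizes
# objects more like the brain region does.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RSA correlation: rho={rho:.3f}, p={p:.3g}")
```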
https://www.youtube.com/watch?v=CB7NNsI27ks&pp=ygUOY2FuIExMTXMgdGhpbms%3D
The findings are interesting and appear to provide evidence that AI systems might be capable of genuinely 'understanding' in a human-like way, rather than just mimicking responses. They also suggest that future AIs could have more intuitive, human-compatible reasoning, which is essential for robotics, education, and human-AI collaboration.
However, it is also important to note that LLMs don't understand objects the way humans do, either emotionally or experientially.
AIs work by recognizing patterns in language or images that often correspond closely to human concepts. While that may appear to be 'understanding' on the surface, it's not based on lived experience or grounded sensory-motor interaction.
Also, some parts of AI representations may correlate with brain activity, but this doesn't mean they can 'think' like humans or share the same architecture.
If anything, they can be thought of as a sophisticated facsimile of human pattern recognition rather than a thinking machine. LLMs are more like a mirror made from millions of books and pictures, reflecting learned patterns back at the user.
The study's findings suggest that LLMs and humans might be converging on similar functional patterns, such as organizing the world into categories. This challenges the view that AIs can only 'appear' smart by repeating patterns in data.
But if, as the study argues, LLMs are starting to build conceptual models of the world independently, we could be edging closer to artificial general intelligence (AGI): a system that can think and reason across many tasks like a human.
You can access the study in the journal Nature Machine Intelligence.

Related Articles

Now the world's fifth-largest asset! Rich Dad author says Bitcoin is in the 'Banana Zone' and vows to buy more once it breaks 'this price'

Yahoo · 2 hours ago

[FTNN News] Reporter Chiang Chia-jung, compiled report. Bitcoin, the world's largest cryptocurrency, has been setting record highs in recent days and is steadily closing in on the US$120,000 mark. Its market capitalization briefly overtook Amazon's, making it the world's fifth-largest asset, behind only gold, NVIDIA, Microsoft, and Apple. Robert Kiyosaki, author of the bestselling personal-finance book Rich Dad Poor Dad, says Bitcoin is in a frenzied rally known as the 'Banana Zone,' and revealed: 'When the price of Bitcoin passes US$117,000, I will buy one more Bitcoin.'

Posting on the social platform X on Saturday (the 12th), Kiyosaki shared another Rich Dad lesson: 'PIGs get fat. HOGs get slaughtered.' He said he was sharing it because he had recently bought a Bitcoin at US$110,000 (about NT$3.21 million) and described himself as sitting in the so-called Banana Zone. 'In the Banana Zone, the HOGs (the greedy) will rush in, driven to insanity by the dreaded fear of missing out (FOMO),' he wrote.

According to media reports, the 'Banana Zone' is a term coined by macro investor Raoul Pal to describe Bitcoin entering a phase of frenzied gains; because prices climb in a parabolic arc shaped like a banana, the phase got its name. It is a stretch of market mania in which prices surge, emotions run high, and risk and opportunity coexist.

Kiyosaki wrote that he already holds enough Bitcoin and is now waiting for 'the greedy to get slaughtered.' When everyone rushes to sell and blames Bitcoin for their losses, 'I will buy even more Bitcoin in the "fire sale,"' he said, adding bluntly that 'your profit is decided when you buy, not when you sell.' He posted again today (the 13th): 'When the price of Bitcoin passes US$117,000, I plan to buy one more Bitcoin as soon as possible. Getting rich has never been easier. I love all my Bitcoins.'

Another RICH DAD LESSON: 'PIGs get fat. HOGs get slaughtered.' I state this lesson because I bought my latest BITCOIN at $110k. I am now in position for what Raoul Pal calls 'the Banana Zone.' In the Banana Zone the HOGS will rush in… driven to insanity by the dreaded… — Robert Kiyosaki (@theRealKiyosaki) July 11, 2025

YAY: Bitcoin over $117K a coin. Going to buy one more Bitcoin. Never been easier to become rich… study, learn, and find out if Bitcoin is your path. I love my BITCOINS… all of them. — Robert Kiyosaki (@theRealKiyosaki) July 13, 2025

FTNN News reminds readers: this information is for reference only; investors should exercise independent judgment, evaluate carefully, and bear their own investment risk.

Grok 4 just revealed a $300 a month plan — here's what it includes

Tom's Guide · 3 hours ago

The AI market is non-stop. As competitors scramble to be the top dog, Elon Musk's xAI is the latest to make a big move, launching the latest version of its Grok chatbot. An entire hour after the livestream was meant to kick off, Elon Musk and a few members of the xAI team took to the stage, revealing Grok 4. It's better at coding, more intelligent, and more capable of taking on large amounts of information.

Unfortunately for the Grok team, this update has been pretty drastically overshadowed by other news. Just days before, xAI was facing backlash over racist and antisemitic responses from Grok's earlier versions, and its support for conspiracy theories. Following that, X's CEO Linda Yaccarino announced she was stepping down from her role. With all of this, Grok 4 is a chance for the company to show that it is still just as competitive in the world of AI, despite fierce competition.

But as it announced reams of improvements and exciting changes, there is one new issue that came from the launch. Okay, it's not so much an issue, more of a concern: xAI is now the owner of the most expensive AI chatbot subscription plan, with a whopping $300 a month price tag.

That's not to say its competitors aren't jacking up prices. Perplexity, ChatGPT, Gemini, Claude, and others now all offer a higher-performance plan with a big price tag. They all, however, went $100 lower, with a $200 a month price tag.

We've already made a point of questioning these prices. For the average person, they are pretty steep and signal a slow descent into AI priority for those with the cash to splash. Features are locked behind these paywalls, and these so-called power users get better speeds and priority in queues. But Musk and his team have pushed the idea that this plan specifically is actually worth all that money. So is that true?

xAI has claimed that this is the world's most powerful AI model, outperforming any and all competitors. That's a big claim, so what can it actually do? Like the other versions of Grok 4, this is still a chatbot in the traditional sense. However, it's got a lot more tricks up its sleeve.

It includes a multi-agent version of Grok, which runs multiple reasoning agents in parallel, comparing their outputs to boost accuracy and depth. In other words, whenever you ask the model a question, multiple agents (or versions of the chatbot) attempt to tackle the problem from a variety of angles. A final version reviews all of the responses, selecting the best one or blending them together. It's a bit like asking a team of experts a question and picking out the best bits of advice from each of them.

This is a big step up from what we've seen from the likes of ChatGPT and Gemini, which, even when using deep research (where the model takes more time and effort on each prompt), don't analyze your queries in anywhere near as much depth.

YouTuber Ray Fernando took the dive and bought the $300 plan, testing its performance and comparing it to OpenAI's nearest plan. He found its performance was impressive, pumping out long, detailed information about how to make money in niche areas, stocks to invest in, and freelance opportunities. The supposed benefit of Grok is its level of expertise, understanding any topic from a variety of angles at quick speeds.

Grok 4 Super Heavy is undoubtedly impressive and, right now, could well be the best-performing AI tool out there. So, obviously you should invest in it, right?
For most people, no. Only a tiny minority of users will get the full benefit of this plan; it is heavily targeted at coders, business owners, and massive power users of AI. The same can be said for any of the $200-a-month plans. They are impressive tools with prices to match. But so are the cheaper plans. If you're looking to upgrade to a paid AI plan, try out one of the cheaper options before you make the staggering jump to $300 a month.
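To make the multi-agent idea described above concrete, here is a rough sketch of the general pattern: several model instances draft answers to the same question in parallel, and an aggregation step picks or blends the results. The ask_agent and aggregate helpers are hypothetical stand-ins; xAI has not published the internals of Grok 4's heavy tier, so this shows only the generic pattern, not its implementation.

```python
# Generic sketch of parallel multi-agent answering with a final aggregation step.
# The helpers below are hypothetical stand-ins, not xAI's implementation.
from concurrent.futures import ThreadPoolExecutor

def ask_agent(agent_id: int, question: str) -> str:
    """Placeholder for one reasoning agent (one model call); swap in a real API call."""
    return f"[agent {agent_id}] draft answer to: {question}"

def aggregate(question: str, drafts: list[str]) -> str:
    """Placeholder aggregator: a final model call would review the drafts and pick
    the best one or blend them; here we simply take the longest draft."""
    return max(drafts, key=len)

def multi_agent_answer(question: str, n_agents: int = 4) -> str:
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        drafts = list(pool.map(lambda i: ask_agent(i, question), range(n_agents)))
    return aggregate(question, drafts)

print(multi_agent_answer("What's the best way to learn Rust?"))
```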

Is it OK to use AI in your job search? Experts say yes. Here's how to do it right

Hamilton Spectator · 3 hours ago

Sandra Lavoy noticed awkward pauses and hesitation from a job candidate when she asked questions on a video call. The pauses didn't seem natural; neither did the responses. Lavoy, the regional director at employment agency Robert Half, suspected the candidate was using artificial intelligence to generate answers during a live job interview. 'I questioned it,' she recalled. 'And they jumped off the call.'

That experience wasn't a one-off for Lavoy, so she started asking candidates to show up in person. With the unemployment rate around seven per cent, those on the hunt for work are looking to get an edge on fellow job seekers. Some are turning to AI to generate pristine, error-free resumés and even to prepare for interviews. But that trend has many on the hiring side questioning its ethics.

Companies have started noticing the misuse of AI tools during live interviews, and it has become a trend over the last couple of months, said Alexandra Tillo, senior talent strategy adviser at Indeed Canada. Many recruiters don't mind the use of AI in job searches, Tillo said, but it raises an alarm when candidates forgo all personality when writing a cover letter or rely heavily on technology during interviews rather than on their own knowledge. Similar responses to situational or behavioural questions from multiple candidates, delivered with a lack of emotional intelligence, are what tip off recruiters to inauthentic candidates, she added. 'It's very hard to judge someone's skills, especially if the answer is not truly their own and it does lead to a bit of a waste of time ... (and a) lack of trust,' Tillo said.

A tough job market leaves little room for errors from candidates, which is likely one of the reasons some feel compelled to use AI during live interviews, Tillo speculates. Employers are taking longer to hire the right candidate, sifting through a heap of applications and relying heavily on AI-powered applicant tracking systems. Meanwhile, candidates are using AI to insert the right keywords in the hopes of getting through those systems, said Ariel Hennig Wood, career coach at Canada Career Counselling. 'We're losing the personalized resumés and then we're losing the personalized response on the employer side,' she said.

But there are ways AI can be used effectively when looking for work, Wood said. Her strategy includes step-by-step prompt engineering: telling generative AI programs and apps such as the chatbot ChatGPT exactly what to do for every phase of the job search. 'When it comes to employer research, AI can definitely be your best friend,' Wood said. AI can help gather insights ranging from a company's turnover rate to why employees like working there.

The next piece is the cover letter. She suggests starting with a generic template borrowed from AI, then personalizing it with your own voice through the right prompts. 'Instead of just saying, "I want a job," it should be: "I want this job, and this is why I'm a good fit. This is why I feel connected to this role,"' Wood said. Then tailor that research to the resumé and cover letter, while also analyzing the job posting to add the right keywords, she added. 'AI needs to be used in the job search process to be effective against application-tracking systems,' said Wood.

Then, Wood suggested leveraging AI for practice interview questions, such as generating questions you might be asked or pulling out achievements from your resumé to make answers relevant to the job interview.
'You can record yourself answering the interview questions, and then it will give you AI-generated feedback, which can be helpful,' she said. But also get feedback from a friend or career counsellor, Wood added. Once a candidate lands the job, Wood said AI can help with offer negotiations. 'It can scan the offer and flag anything that may be out of the norm,' she said. 'It could tell you ... where there could be room for negotiation in the offer.'

AI isn't just a tool to polish resumés for Karan Saraf, who is studying public relations and is on the lookout for a job. Some days he uses it to make sense of his scattered thoughts when applying for a job; other times, it's about role-playing interviews. And his strategy worked, landing him interviews in a tough youth job market. Saraf said that as long as he's not plagiarizing or misleading employers, he doesn't feel the need to disclose that he leveraged AI in his job search. 'But then, if I'm ever asked this question, I would be honest about it,' he said. 'That's part of being an ethical AI user.'

Wood said an ethical AI user would know exactly what's in their resumé if questioned. 'I don't believe that you need to go into an interview and say, "By the way, I prepped with AI for this,"' she said. 'It's such a common tool now that everybody's using and if you are using it ethically, there's nothing to disclose.'

But Carlie Bell thinks that creates an imbalance between employers and job seekers. Upcoming Ontario legislation will require companies to disclose in their public job postings their use of AI in screening, selecting and assessing applicants, starting Jan. 1, 2026. Other provinces haven't yet opted for similar measures. 'It is employers ... who are going to be held to legal standards around this kind of stuff and expectations, but there is still nothing there to really guide the job seekers,' said Bell, director of consulting at Citation Canada. Bell anticipates employers will also start expecting job seekers to disclose their use of AI, for transparency both ways.

Still, using AI in a job search isn't likely to harm a candidate as long as they continue to be creative and talk about personal experiences, Bell said. 'In a world where everybody's the same ... and you're trying to compete essentially against machines on both sides, what we know is that the human really matters still,' Bell said. This report by The Canadian Press was first published July 13, 2025.
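As a small illustration of the keyword-matching idea behind applicant tracking systems, here is a toy check that flags prominent words from a job posting that don't appear in a resumé. The sample texts, stopword list, and crude tokenization are illustrative assumptions, not a real ATS or any tool mentioned by the experts above.

```python
# Toy keyword-gap check: which prominent words from a job posting are missing
# from a resumé. A crude illustration of the ATS keyword-matching idea only.
import re
from collections import Counter

STOPWORDS = {"the", "and", "a", "an", "to", "of", "in", "with", "for", "on", "is", "are", "we"}

def keywords(text: str, top_n: int = 15) -> list[str]:
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

job_posting = """We are hiring a public relations coordinator with strong writing,
media monitoring and stakeholder communication skills. Experience with campaign
reporting and social media analytics is an asset."""

resume = """Public relations student with experience writing press releases,
running social media accounts and preparing campaign summaries."""

missing = [kw for kw in keywords(job_posting) if kw not in resume.lower()]
print("Posting keywords not found in the resumé:", missing)
```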
