AI art can't match human creativity, yet — researchers

DW | 06/11/2025

Generative AI models are bad at representing things that require human senses, like smell and touch. Their creativity is 'hollow and shallow,' say experts.
Anyone can sit down with an artificial intelligence (AI) program, such as ChatGPT, to write a poem, a children's story, or a screenplay. It's uncanny: the results can seem quite "human" at first glance. But don't expect anything with much depth or sensory "richness", as researchers explain in a new study.
They found that the large language models (LLMs) that currently power generative AI tools are unable to represent the concept of a flower in the same way that humans do.
In fact, the researchers suggest that LLMs aren't very good at representing any 'thing' that has a sensory or motor component — because they lack a body and any organic human experience.
"A large language model can't smell a rose, touch the petals of a daisy or walk through a field of wildflowers. Without those sensory and motor experiences, it can't truly represent what a flower is in all its richness. The same is true of some other human concepts," said Qihui Xu, lead author of the study at Ohio State University, US.
The study suggests that AI's poor ability to represent sensory concepts like flowers might also explain why it lacks human-style creativity.
"AI doesn't have rich sensory experiences, which is why AI frequently produces things that satisfy a kind of minimal definition of creativity, but it's hollow and shallow," said Mark Runco, a cognitive scientist at Southern Oregon University, US, who was not involved in the study.
The study was published in the journal Nature Human Behaviour on June 4, 2025.
Video: What are the challenges to book preservation?
AI poor at representing sensory concepts
The more scientists probe the inner workings of AI models, the more they are finding just how different their 'thinking' is compared to that of humans. Some say AIs are so different that they are more like alien forms of intelligence.
Yet objectively testing the conceptual understanding of AI is tricky. If computer scientists open up an LLM and look inside, they won't necessarily understand what the millions of numbers changing every second really mean.
Xu and colleagues aimed to test how well LLMs can 'understand' things based on sensory characteristics. They did this by testing how well LLMs represent words with complex sensory meanings, measuring factors such as how emotionally arousing a thing is, whether it can be mentally visualized, and how strongly it is tied to movement or action.
For example, they analyzed the extent to which humans experience flowers by smelling, or experience them using actions from the torso, such as reaching out to touch a petal. These ideas are easy for us to grasp, since we have intimate knowledge of our noses and bodies, but it's harder for LLMs, which lack a body.
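To make the method concrete, here is a minimal sketch of the kind of comparison involved. All the ratings are invented placeholders standing in for both the human norms and the model's answers; it illustrates the general approach, not the study's actual data or code.

```python
# Toy comparison of LLM word ratings against human sensorimotor norms.
# Every number below is an invented placeholder for illustration.
from scipy.stats import spearmanr

# Hypothetical human ratings (0-5): how strongly is each word
# experienced through smell?
human_smell = {"rose": 4.9, "flower": 4.8, "idea": 0.2, "justice": 0.1}

# Hypothetical ratings elicited from an LLM with a prompt such as:
# "On a scale of 0 to 5, to what extent do you experience X by smelling?"
llm_smell = {"rose": 3.1, "flower": 3.4, "idea": 0.3, "justice": 0.2}

words = list(human_smell)
rho, _ = spearmanr([human_smell[w] for w in words],
                   [llm_smell[w] for w in words])
print(f"Human-LLM alignment on the smell dimension: rho = {rho:.2f}")
```

The closer the rank correlation is to 1.0, the more the model's ratings line up with human experience on that dimension.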
Overall, LLMs represent words well when those words have no connection to the senses or to motor actions that we experience or feel as humans.
But when it comes to words for things we see, taste or interact with using our bodies, that's where AI fails to convincingly capture human concepts.
What's meant by 'AI art is hollow'
AI creates representations of concepts and words by analyzing patterns in the dataset used to train it. This idea underlies every algorithm or task, from writing a poem to predicting whether an image of a face shows you or your neighbor.
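As a rough illustration of what 'representation' means here: a model maps each word to a vector of numbers, and similarity of meaning becomes geometric closeness. The vectors below are hand-made toys, not real embeddings from any model.

```python
# Toy illustration of pattern-based word representations: meaning as
# geometry. The 4-dimensional vectors are hand-made, not learned.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 means identical direction, near 0 unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented dimensions: (plant-ness, scent, abstractness, visual detail)
emb = {
    "rose":    np.array([0.9, 0.8, 0.0, 0.7]),
    "daisy":   np.array([0.9, 0.5, 0.0, 0.8]),
    "justice": np.array([0.0, 0.0, 0.9, 0.1]),
}

print(cosine(emb["rose"], emb["daisy"]))    # high: similar usage patterns
print(cosine(emb["rose"], emb["justice"]))  # low: very different patterns
```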
Most LLMs are trained on text data scraped from the internet, but some are also trained on visual data, such as still images and videos.
Xu and colleagues found that LLMs trained with visual data exhibited some similarity to human representations in vision-related dimensions, beating LLMs trained on text alone. But the advantage was limited to vision; it did not extend to other human senses, like touch or hearing.
This suggests that the more sensory information an AI model receives as training data, the better it can represent sensory aspects.
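The reported pattern can be pictured by scoring each model's ratings against the human norms dimension by dimension. The numbers below are invented to mirror the reported tendency (vision training helps on visual dimensions, while smell stays weak), not results from the paper.

```python
# Toy per-dimension comparison of a text-only and a vision-trained model
# against human norms. All numbers are invented for illustration.
from scipy.stats import spearmanr

# Hypothetical ratings for the same four words on two sensory dimensions.
human      = {"visual": [4.5, 4.0, 0.5, 1.0], "smell": [4.8, 2.0, 0.1, 0.2]}
text_only  = {"visual": [3.0, 3.5, 1.5, 1.2], "smell": [1.1, 1.8, 0.9, 2.0]}
multimodal = {"visual": [4.2, 3.9, 0.8, 1.1], "smell": [1.0, 2.1, 0.9, 1.7]}

for dim in ("visual", "smell"):
    for name, model in (("text-only", text_only), ("vision-trained", multimodal)):
        rho, _ = spearmanr(human[dim], model[dim])
        print(f"{dim:>6} | {name:<14} rho = {rho:.2f}")
```

In this toy run the vision-trained model aligns almost perfectly with humans on the visual dimension but remains weak on smell, echoing the study's finding that models only capture the sensory channels they were trained on.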
Video: AI's impact on the working world
AI keeps learning and improving
The authors noted that LLMs are continually improving and said it was likely that AI will get better at capturing human concepts in the future.
Xu said that when future LLMs are augmented with sensor data and robotics, they may be able to actively make inferences about and act upon the physical world.
But independent experts DW spoke to suggested the future of sensory AI remained unclear.
"It's possible an AI trained on multisensory information could deal with multimodal sensory aspects without any problem," said Mirco Musolesi, a computer scientist at University College London, UK, who was not involved in the study.
However, Runco said even with more advanced sensory capabilities, AI will still understand things like flowers completely differently to humans.
Our human experience and memory are tightly linked with our senses — it's a brain-body interaction that stretches beyond the moment. The smell of a rose or the silky feel of its petals, for example, can trigger joyous memories of your childhood or lustful excitement in adulthood.
AI programs do not have a body, memories or a 'self'. They lack the ability to experience the world or interact with it as animals, humans included, do. This, said Runco, means "the creative output of AI will still be hollow and shallow."
Edited by: Zulfikar Abbany
