AI might now be as good as humans at detecting emotion, political leaning and sarcasm in online conversations

When we write something to another person, over email or perhaps on social media, we may not state things directly; instead, our words can carry a latent meaning, an underlying subtext, that we hope will come through to the reader.
But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?
Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone.
Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level.
These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business. Given how important these tasks are – and how quickly conversational AI is improving – it's essential to explore what these technologies can (and can't) do in this regard.
Work on this issue is only just starting. One study shows that ChatGPT has had limited success in detecting political leanings on news websites. Another, which compared sarcasm detection across large language models (LLMs) – the technology behind AI chatbots such as ChatGPT – showed that some models are better at it than others.
Finally, a study showed that LLMs can guess the emotional 'valence' of words – the inherent positive or negative 'feeling' associated with them. Our new study, published in Scientific Reports, tested whether conversational AI, including GPT-4 – a relatively recent version of ChatGPT – can read between the lines of human-written texts.
The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm – thus encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B.
We found that these LLMs are about as good as humans at analysing sentiment, political leaning and emotional intensity, and at detecting sarcasm. The study involved 33 human subjects and assessed 100 curated items of text.
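To make the setup concrete, here is a minimal sketch of how a text item might be sent to an LLM for this kind of rating. It assumes the OpenAI Python client; the prompt wording, the rating scales and the model name are illustrative assumptions, not the study's actual protocol.
```python
# Minimal sketch of LLM-based latent content analysis, not the study's protocol.
# Assumes the OpenAI Python client; prompt wording, scales and model are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Rate the following text and reply with JSON only, using these keys:\n"
    "- sentiment: a number from -1 (negative) to 1 (positive)\n"
    "- political_leaning: -1 (left) to 1 (right), or 0 if none is detectable\n"
    "- emotional_intensity: 0 (calm) to 1 (very intense)\n"
    "- sarcastic: true or false\n\n"
    "Text: {text}"
)

def rate_text(text: str, model: str = "gpt-4o") -> dict:
    """Ask the model to rate one text item and parse its JSON reply."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce randomness so repeated calls agree more often
        response_format={"type": "json_object"},  # request a JSON-only reply
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(rate_text("Oh great, another Monday. Just what I needed."))
```
In a study-style comparison, ratings like these would then be set against human annotations of the same items to measure agreement.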
For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science, or public health, where inconsistent judgement can skew findings or miss patterns.
GPT-4 also proved capable of picking up on emotional intensity and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell – although someone still had to confirm whether its assessment was correct, because the AI tends to downplay emotions. Sarcasm remained a stumbling block for both humans and machines.
The study found no clear winner there – so relying on human raters doesn't help much with sarcasm detection.
Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research – especially important during crises, elections or public health emergencies.
Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.
There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast – and may soon be valuable teammates rather than mere tools.
Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.
Our study's findings do raise follow-up questions. If a user asks the same question of AI in multiple ways – perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided – will the model's underlying judgements and ratings remain consistent?
Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
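As an illustration of what such a stability analysis could look like, the sketch below rates the same text under several reworded prompts and measures how much the scores spread. It again assumes the OpenAI Python client; the paraphrases, model name and spread statistic are assumptions for illustration, not the study's method.
```python
# Sketch of a prompt-stability check: same text, reworded prompts, compare scores.
# Assumes the OpenAI Python client; paraphrases, model and statistic are illustrative.
import statistics
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PARAPHRASES = [
    "Rate the sentiment of this text from -1 (negative) to 1 (positive). Reply with a number only.\n\n{text}",
    "On a scale of -1 to 1, where -1 is very negative and 1 is very positive, score this text. Number only.\n\n{text}",
    "{text}\n\nGive one number between -1 and 1 for how positive the text above is. Reply with the number only.",
]

def sentiment_spread(text: str, model: str = "gpt-4o") -> float:
    """Rate the same text under reworded prompts and return the spread of scores."""
    scores = []
    for template in PARAPHRASES:
        reply = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[{"role": "user", "content": template.format(text=text)}],
        )
        scores.append(float(reply.choices[0].message.content.strip()))
    # A small standard deviation suggests the judgement is robust to rewording.
    return statistics.pstdev(scores)

if __name__ == "__main__":
    print(sentiment_spread("I can't believe they cancelled the event again."))
```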
This article is republished from The Conversation under a Creative Commons license. Read the original article.
This collaboration emerged through the COST OPINION network. We extend special thanks to network members for helping out with work on this article: Ljubiša Bojić, Anela Mulahmetović Ibrišimović, and Selma Veseljević Jerković.

Related Articles

Scientists launch controversial project to create the world's first artificial human DNA

Researchers at five British universities have launched the Synthetic Human Genome Project (SynHG) with an initial grant of approximately $12.6 million from Wellcome, the U.K.'s largest biomedical research charity. Unveiled on Thursday, the five-year effort is led by molecular biologist Jason W. Chin at the Medical Research Council Laboratory of Molecular Biology in Cambridge and aims to assemble an entire human chromosome, base by base, inside the lab.

Writing a genome

Instead of tweaking existing DNA with tools such as CRISPR, SynHG will attempt to 'write' long stretches of code before inserting them into cultured human skin cells to study how chromosome architecture drives health and disease. The project builds on Chin's earlier success constructing a fully synthetic E. coli genome. The laboratory playbook blends generative-AI sequence design with high-throughput robotic assembly, allowing scientists to plan and assemble millions of DNA bases. Patrick Yizhi Cai of the University of Manchester, who oversees these methods, says the approach 'leverag[es] cutting-edge generative AI and advanced robotic assembly technologies to revolutionize synthetic mammalian chromosome engineering.'

Why experts are cautious

Geneticist Robin Lovell-Badge of London's Francis Crick Institute emphasized the importance of understanding not only the scientific potential but also the societal values and risks involved. He warned that as research progresses, there is the possibility of creating synthetic cells that could, if used in humans, lead to tumors or produce novel infectious particles if not carefully designed. Lovell-Badge recommended that any engineered cells should include safeguards, such as inducible genetic kill switches, to ensure they can be eliminated from the body or targeted by the immune system if needed. Sarah Norcross, director of the Progress Educational Trust, echoed the need for transparency and public engagement, highlighting that synthesizing human genomes is controversial and requires researchers and the public to be in active communication. Norcross welcomed the project's built-in social science program, which surveys communities across Asia-Pacific, Africa, Europe and the Americas as the science unfolds and is led by social scientist Joy Yueyue Zhang, as a way to ensure that public interests and concerns are considered from the outset.

Road ahead

Over the next five years, the consortium will iterate design–build–test cycles, aiming first for an error-free synthetic chromosome representing roughly 2% of human DNA. Alongside the laboratory milestones, the team plans to release an open-access toolkit covering both the technical and governance lessons learned.

Modella AI and AstraZeneca link for cancer clinical development

Modella AI has signed a multi-year agreement with AstraZeneca to expedite AI-driven oncology clinical development. The partnership will give AstraZeneca access to Modella AI's multi-modal foundation models. The agreement will enable the use of Modella AI's latest models, with rich feature extraction from different types of data, to speed up clinical development across AstraZeneca's worldwide oncology portfolio.

AstraZeneca oncology research and development (R&D) chief AI and data scientist Jorge Reis-Filho stated: 'At AstraZeneca, AI is integrated across every aspect of clinical development. Through the use of foundation models, combined with our unique datasets and AI expertise, we are confident in our strategy to accelerate development and increase the probabilities of success in our oncology clinical trials.'

AstraZeneca will use Modella AI's platform for cancer research R&D capabilities to improve biomarker discovery and clinical development while enhancing patient outcomes. By integrating these advanced foundation models into its R&D pipeline, AstraZeneca seeks to enable data-driven discovery methods with increased speed.

Modella AI CEO Jill Stefanelli stated: 'Foundation models are transforming precision medicine. They are the backbone of AI-powered biomedical discovery and mark the first step toward fully autonomous AI agents. Our state-of-the-art multimodal foundation models provide powerful features from different data types for downstream tasks. When integrated with AstraZeneca's research engine, they will have the potential to accelerate data-driven development and enable the development of new AI agents that can automate complex R&D workflows.'

In June 2025, the US Food and Drug Administration (FDA) approved AstraZeneca and Daiichi Sankyo's Datroway (datopotamab deruxtecan) for the treatment of adults with locally advanced or metastatic non-small cell lung cancer (NSCLC) that exhibits mutations in the epidermal growth factor receptor.

"Modella AI and AstraZeneca link for cancer clinical development" was originally created and published by Pharmaceutical Technology, a GlobalData owned brand.

Hyundai's IONIQ 6 N set for global debut

Hyundai Motor Company is set to debut its next-generation electric sports sedan, the IONIQ 6 N, at the Goodwood Festival of Speed in the UK on 10 July 2025. According to the company, the IONIQ 6 N builds upon the success of the IONIQ 5 N with the introduction of advanced 'N e-shift technology', available in every drive mode.

The IONIQ 6 N features a 'fully redesigned' suspension geometry that takes advantage of its low ride height. It includes a lowered roll centre and 'enlarged' caster trail for enhanced steering feedback. The new electronically controlled stroke sensing (ECS) dampers are engineered to balance comfort and response. The enhanced N Drift Optimizer offers a wider range of customisation options for drift control, tailored to the driver's skill level. Body features such as flared fenders, lightweight wheels and a swan-neck rear spoiler highlight the vehicle's dynamic capabilities.

The IONIQ 6 N's public unveiling at the festival will be part of a larger lineup of Hyundai N brand performance vehicles presented at the company's booth.

N Management Group head and vice president Joon Park said: 'The IONIQ 6 N has been developed to provide the most engaging driving experience possible in an EV. Hyundai N will once again disrupt the EV segment, not with headline-grabbing numbers, but by demonstrating how fun an electric car driving experience can be.'

"Hyundai's IONIQ 6 N set for global debut" was originally created and published by Just Auto, a GlobalData owned brand.
