
AI could be as emotional as humans in online conversations
But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?
Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are or whether they're being sarcastic can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level.
These are only some examples. We can imagine benefits in other areas of life, like social science research, policy making and business. Given how important these tasks are – and how quickly conversational AI is improving – it's essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting.
Existing work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study, which focused on differences in sarcasm detection between large language models (LLMs) – the technology behind AI chatbots such as ChatGPT – showed that some models are better than others. Finally, a study showed that LLMs can guess the emotional 'valence' of words – the inherent positive or negative 'feeling' associated with them.
Our new study published in Scientific Reports tested whether conversational AI, inclusive of GPT-4 – a relatively recent version of ChatGPT – can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm – thus encompassing multiple latent meanings in one study.
This study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8×7B.
We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm. The study involved 33 human subjects and 100 curated items of text. In spotting political leanings, GPT-4 was more consistent than humans. Such consistency matters in fields like journalism, political science and public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity and, especially, valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell – although a human still had to confirm the AI's assessment, because the models tend to downplay emotions.
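The consistency comparison above can be illustrated with a toy sketch. The ratings below are invented for illustration, not the study's actual data; the idea is simply that a more consistent rater shows less spread when scoring the same items repeatedly.

```python
# Toy sketch (hypothetical numbers, not the study's data): comparing how
# consistently two raters -- say, a model and a human panel -- score the
# same items on a 1-7 political-leaning scale across repeated passes.
from statistics import mean, pstdev

def consistency(ratings_per_item):
    """Mean per-item standard deviation across repeated ratings.
    Lower values indicate more consistent rating behaviour."""
    return mean(pstdev(r) for r in ratings_per_item)

# Three repeated ratings for each of four items (invented values).
model_ratings = [[5, 5, 5], [2, 2, 3], [6, 6, 6], [4, 4, 4]]
human_ratings = [[5, 3, 6], [2, 4, 2], [6, 5, 7], [4, 2, 5]]

print(consistency(model_ratings))  # small spread -> more consistent
print(consistency(human_ratings))  # larger spread -> less consistent
```

A per-item spread metric like this is one simple way to operationalise "consistency"; the published study will have used its own statistical measures.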
Sarcasm remained a stumbling block for both humans and machines. The study found no clear winner there – in other words, using human raters doesn't help much with sarcasm detection.
Why does this matter?
For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, on the other hand, opens the door to faster, more responsive research – especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit.
Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start. There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast – and may soon be valuable teammates rather than mere tools.
Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance. Our study's findings do raise follow-up questions.
If a user asks the same question of AI in multiple ways – perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided – will the model's underlying judgements and ratings remain consistent?
Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
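One way to probe the stability question raised above is to rate the same text under several paraphrased prompts and measure the spread of the answers. The sketch below is a minimal illustration under assumptions: `query_model` is a hypothetical stand-in for whatever LLM API a researcher uses, and the prompt wordings are invented examples.

```python
# Minimal sketch of a prompt-stability check. `query_model` is a
# hypothetical callable standing in for a real LLM API: it takes a
# prompt string and returns a numeric rating.
from statistics import pstdev

PROMPT_VARIANTS = [
    "Rate the sentiment of this text from 1 (negative) to 7 (positive): {text}",
    "On a 1-7 scale, 1 meaning very negative and 7 very positive, score: {text}",
    "{text}\n\nHow positive is the passage above? Answer with a number 1-7.",
]

def stability(text, query_model):
    """Spread of ratings across prompt paraphrases for one text.
    A perfectly stable model yields zero spread."""
    ratings = [query_model(p.format(text=text)) for p in PROMPT_VARIANTS]
    return pstdev(ratings)

# Example with a stand-in "model" that always answers 5:
print(stability("The service was excellent.", lambda prompt: 5))  # 0.0
```

Running such a check across many texts and many paraphrases would give the kind of systematic stability analysis the article calls for.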
(The writer is associated with the University of Limerick)
