Latest news with #CommunicationsPsychology


The Star
9 hours ago
- Science
Opinion: Are you more emotionally intelligent than an AI chatbot?
As artificial intelligence takes over the world, I've tried to reassure myself: AI can't ever be as authentically human and emotionally intelligent as real people are. Right? But what if that's wrong? A cognitive scientist who specialises in emotional intelligence told me in an interview that he and some colleagues ran an experiment that throws some cold water on that theory.

'What do you do?'

Writing in the journal Communications Psychology, Marcello Mortillaro, senior scientist at the University of Geneva's (UNIGE) Swiss Center for Affective Sciences (CISA), said he and colleagues ran commonly used tests of emotional intelligence on six large language models (LLMs), including generative AI chatbots like ChatGPT. These are the same kinds of tests that are commonly used in corporate and research settings: scenarios involving complicated social situations, and questions asking which of five reactions might be best.

One example included in the journal article goes like this: 'Your colleague with whom you get along very well tells you that he is getting dismissed and that you will be taking over his projects. While he is telling you the news he starts crying. He is very sad and desperate. You have a meeting coming up in 10 min. What do you do?'

Gosh, that's a tough one. The person – or AI chatbot – would then be presented with five options, ranging from things like:

– 'You take some time to listen to him until you get the impression he calmed down a bit, at risk of being late for your meeting,' to
– 'You suggest that he joins you for your meeting with your supervisor so that you can plan the transfer period together.'

Emotional intelligence experts generally agree that there are 'right' or 'best' answers to these scenarios, based on conflict management theory – and it turns out that the LLMs and AI chatbots chose the best answers more often than humans did. As Mortillaro told me: 'When we run these tests with people, the average correct response rate … is between 15% and 60% correct. The LLMs, on average, were about 80%. So, they answered better than the average human participant.'

Maybe you're sceptical

Even having heard that, I was sceptical. For one thing, I had assumed while reading the original article that Mortillaro and his colleagues had informed the LLMs what they were doing – namely, that they were looking for the most emotionally intelligent answers. Thus, the AI would have had a signal to tailor the answers, knowing how they'd be judged. Heck, it would probably be easier for a lot of us mere humans to improve our emotional intelligence if we had the benefit of a constant reminder in life: 'Remember, we want to be as emotionally intelligent as possible!'

But it turns out that assumption on my part was flat-out wrong – which frankly makes the whole thing a bit more remarkable. 'Nothing!' Mortillaro told me when I asked how much he'd told the LLMs about the idea of emotional intelligence to begin with. 'We didn't even say this is part of a test. We just gave the … situation and said these are five possible answers. What's the best answer? … And it picked the right option 82% of the time, which is way higher – significantly higher – than the average human.'

Good news, right?

Interestingly, from Mortillaro's perspective, this is actually some pretty good news – not because it suggests another realm in which artificial intelligence might replace human effort, but because it could make his discipline easier.
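For the technically curious, here is a minimal Python sketch of the kind of test-and-score loop the study describes: show a vignette plus candidate reactions, ask a model for the best one, and compare the pick against an expert-keyed answer. This is not the authors' code – the vignette is paraphrased from the article, ask_model is a hypothetical stand-in for a real LLM call, and the expert key is a placeholder.

```python
# Minimal sketch (not the authors' code) of the test-and-score loop
# described above. The vignette is paraphrased; ask_model is a
# hypothetical stand-in for a real LLM API call.

from typing import Callable

VIGNETTE = (
    "Your colleague, with whom you get along very well, tells you he is being "
    "dismissed and that you will take over his projects. He starts crying. "
    "You have a meeting in 10 minutes. What do you do?"
)
OPTIONS = {
    "A": "Listen until he calms down a bit, at the risk of being late.",
    "B": "Suggest he join your meeting so you can plan the transfer together.",
    # ...the real test offers five reactions; two are shown here
}
EXPERT_KEY = "A"  # placeholder; real keys come from expert consensus

def ask_model(vignette: str, options: dict[str, str]) -> str:
    """Hypothetical LLM call: return the letter of the chosen option."""
    return "A"  # swap in a real API client here

def accuracy(items: list[tuple], respond: Callable) -> float:
    """Fraction of items where the chosen option matches the expert key."""
    hits = sum(respond(v, opts) == key for v, opts, key in items)
    return hits / len(items)

items = [(VIGNETTE, OPTIONS, EXPERT_KEY)]
print(f"{accuracy(items, ask_model):.0%}")  # the study reports ~82% for LLMs
```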
In short, scientists might theorise from studies like this that they can use AI to create the first drafts of additional emotional intelligence tests, and thus scale their work with humans even more. I mean: 80% accuracy isn't 100%, but it's potentially a good head start.

Mortillaro also brainstormed with me about some other use cases that might be more interesting to business leaders and entrepreneurs. To be honest, I'm not sure how I feel about these yet. But examples might include:

– Offering customer scenarios, getting solutions from LLMs, and incorporating them into sales or customer service scripts.
– Running the text and calls to action on your website or social media ads through LLMs to see if there are suggestions hiding in plain sight.
– And of course, as I think a lot of people already do, sharing presentations or speeches for suggestions on how to streamline them.

Personally, I find I reject many of the suggestions that I get from LLMs like ChatGPT. I also don't use it for articles like this one, of course. Still, even if you're not convinced, I suspect some of your competitors are. And they might be improving their emotional intelligence as a result without even realising it. At the very least, being aware of the potential of AI to upend your industry seems like a smart move.

'Especially for small business owners who do not have the staff or the money to implement large-scale projects,' Mortillaro suggested, 'these kind of tools become incredibly powerful.' – Inc./Tribune News Service
Yahoo
2 days ago
- Science
New study claims AI 'understands' emotion better than us — especially in emotionally charged situations
In what seems like a further blow to a capability in which we thought computers would never outdo us, scientists now suggest AI understands emotions better than we do, scoring much higher than the average person at choosing the best response to defuse emotionally charged situations.

In a new study published 21 May in the journal Communications Psychology, scientists from the University of Geneva (UNIGE) and the University of Bern (UniBE) applied widely used emotional intelligence (EI) tests (STEM, STEU, GEMOK-Blends, GECo Regulation and GECo Management) to common large language models (LLMs) including ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Claude 3.5 Haiku, Copilot 365 and DeepSeek V3. They were investigating two things: first, how the models' performance compared with that of human subjects, and second, whether the models could create new test questions that serve the purposes of EI tests.

Judged against validated human responses from previous studies, the LLMs selected the "correct" response in the emotional intelligence tests 81% of the time, based on the opinions of human experts, compared to 56% for humans. When ChatGPT was asked to create new test questions, human assessors judged these efforts comparable to the original tests in difficulty, and did not perceive them as paraphrases of the original questions. The correlation between the AI-generated and original tests was described as "strong", with a correlation coefficient of 0.46 (where 1.0 refers to a perfect correlation and 0 refers to no correlation). The overall conclusion was that AI is better at "understanding" emotions than we are.

When Live Science consulted several experts, a common theme in their responses was to keep the methodology firmly in mind. Each of the common EI tests used was multiple choice — hardly applicable to real-world scenarios in which tensions between people are high, they pointed out.

'It's worth noting that humans don't always agree on what someone else is feeling, and even psychologists can interpret emotional signals differently,' said finance industry and information security expert Taimur Ijlal. 'So "beating" a human on a test like this doesn't necessarily mean the AI has deeper insight. It means it gave the statistically expected answer more often.'

The ability being tested by the study isn't emotional intelligence but something else, they added. 'AI systems are excellent at pattern recognition, especially when emotional cues follow a recognizable structure like facial expressions or linguistic signals,' said Nauman Jaffar, founder and CEO of CliniScripts, an AI-powered documentation tool built for mental health professionals. 'But equating that to a deeper "understanding" of human emotion risks overstating what AI is actually doing.'

AI shines on quizzes in structured, quantitative environments, rather than in situations demanding the deeper nuance that true emotional understanding requires. And some experts pointed out one crucial caveat: AI performs better on tests about emotional situations, not in the heat of the moment — the way humans experience them.
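To make the reported statistic concrete, here is a tiny Python snippet computing a Pearson correlation coefficient. The scores below are invented purely for illustration; only the computation mirrors what the study reports.

```python
# Toy gloss on the reported r = 0.46 between scores on the original tests
# and the ChatGPT-generated versions. The scores are invented.

from statistics import correlation  # Pearson's r by default; Python 3.10+

original  = [55, 62, 48, 71, 60, 52, 66, 45, 58, 63]  # hypothetical scores, %
generated = [50, 58, 60, 68, 49, 57, 70, 47, 61, 52]  # on the AI-written test

r = correlation(original, generated)
print(f"r = {r:.2f}")  # 1.0 = perfect agreement, 0 = none; the study found 0.46
```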
Jason Hennessey, founder and CEO of Hennessey Digital — who has spent years analyzing how search and generative AI systems process language — equates the study to the Reading the Mind in the Eyes Test, a common tool for gauging a subject's emotional state and one in which AI has shown promise. But as Hennessey said, when variables as routine as the lighting in the photo or the cultural context change in such tests, "AI accuracy drops off a cliff."

Overall, most experts found the claim that AI "understands" emotions better than humans to be a bit of a stretch. "Does it show LLMs are useful for categorizing common emotional reactions?" said Wyatt Mayham, founder of Northwest IT Consulting. "Sure. But it's like saying someone's a great therapist because they scored well on an emotionally themed BuzzFeed quiz."

But there's a final caveat: there is evidence that even though AI relies on pattern recognition rather than true emotional understanding, it has outperformed humans at identifying and responding to emotional states in at least one real-world case. Aílton, a conversational AI used by over 6,000 long-haul truck drivers in Brazil, is a multimodal WhatsApp assistant that works with voice, text and images. Its developer, Marcos Alves, CEO and chief scientist at HAL-AI, says Aílton identifies stress, anger or sadness with around 80% accuracy — about 20 points above its human counterparts — and does so in real time, within emotional situations, as drivers interact with it. In one case, Aílton responded quickly and appropriately when a driver sent a distraught 15-second voice note after a colleague's fatal crash, replying with nuanced condolences, offering mental-health resources and automatically alerting fleet managers.

'Yes, multiple-choice text vignettes simplify emotion recognition,' Alves said. 'Real empathy is continuous and multimodal. But isolating the cognitive layer is useful. It reveals whether an LLM can spot emotional cues before adding situational noise.' He added that the ability of LLMs to absorb billions of sentences and thousands of hours of conversational audio means they can encode micro-intonation cues humans often miss. 'The lab setup is limited,' he said of the study, 'but our WhatsApp data confirms modern LLMs already detect and respond better than most people, offering scalable empathy at scale.'


Medscape
21-05-2025
- Health
Why Some People Recall Dreams Better Than Others
Dreams have captivated humanity for millennia, interpreted as divine omens in ancient cultures or as Freudian insights into unconscious desires. Modern neuroscience explores dreams as a window into consciousness because they provide a naturally occurring altered state where the brain generates complex, internally driven experiences. However, to study dreams, people need to remember them, and it's not well understood what is involved in dream recall or why some people seem to remember their dreams better than others.

A new study, published in Communications Psychology, investigated the factors associated with remembering dreams in 217 healthy people aged 19-70 years who recorded their dreams every morning for 15 days while their sleep and cognitive data were tracked by wearable devices and psychometric tests.

'Dreams represent an important model for understanding how consciousness emerges in the brain,' said Giulio Bernardi, MD, PhD, professor of psychology at IMT School for Advanced Studies Lucca in Lucca, Italy, and senior author of the study. 'We know that we forget most of our dreams, and so we wanted to understand why there is this difference between different people, because these are factors that are important for us in the study of consciousness.'

Sleep progresses through several stages during the night: N1 is light sleep; N2 is deeper sleep; N3 is the deepest sleep, also called slow-wave sleep; and rapid eye movement (REM) sleep is the stage most associated with dreaming. These stages cycle throughout the night. 'Within REM sleep, we usually have more vivid dreams, more perceptual dreams, and this means that these dreams are easier to remember,' explained Valentina Elce, PhD, postdoctoral researcher in Bernardi's lab and lead author of the study.

Their data indicate that people who had longer, lighter sleep tended to at least remember that they dreamed — these people may have had more REM sleep. Younger people remembered more dream details than older people. Also, participants reported less dream recall during winter than during spring, suggesting environmental or circadian influences. Additionally, people who remembered more dreams tended to be people who daydreamed as well. 'This propensity of the brain to generate spontaneous experiences goes beyond sleep and also affects mental activity during the day,' Elce explained.

Interestingly, people who said they didn't remember their dreams at the beginning of the study reported being able to remember more by the end of it, Elce said. This indicates that the process of intentionally trying to remember and record dreams can help people remember them.

'This study had several strong points, including the longitudinal collection and the large, diverse sample size. The amount of data gathered about each participant was also impressive, ranging from physiological measurements to psychological testing,' noted Caleb Lack, PhD, psychology professor at the University of Central Oklahoma, Edmond, Oklahoma, who was not involved in the study. A weakness of the study, he said, was that all participants were from Italy, and there may be cultural differences in dream recall.

Based on the study's findings, dream recall seems to result from a combination of factors like sleep conditions, thinking about dreams in the morning, and mind wandering during the day, Lack noted. 'In other words, both individual traits and your environment play a role in whether or not you remember any dreams,' he said.
'Overall, [the results] are pretty in line with prior findings and expectations based on factors we know influence whether or not you recall having dreamed.'

Why we dream is still a mystery. 'The scientific community does not agree yet about the potential biological function of dreams, and one of the possible ideas is that dreams help us to consolidate our memories … but also to elaborate the emotional content of our experiences,' Elce said.

'A huge body of work has shown that our dreams are heavily influenced by what we are thinking about and what stimuli we encounter while awake,' Lack said. But psychologists no longer believe, as Freud did, that the content of our dreams has great significance in our daily lives. 'However, it's true that the state of your mental health can impact your dream content — for instance, being highly stressed can lead to more negative emotions in your dreams, or traumatic events can cause nightmares. If that's happening, addressing those difficulties is best done through evidence-based psychotherapies,' Lack said. He noted that cognitive behavioral therapy has been shown to improve quality of sleep and reduce nightmares in those with anxiety disorders.

But if someone rarely or never remembers dreaming, it's nothing to worry about. 'The majority of people remember few to no dreams they had the prior night, although prior research shows we probably have around 2 hours of them a night, although there is pretty wide variation in this from person to person as seen in the study,' Lack said.

Elce and Bernardi hope their study will help other research. They gave the example of a study that sought to test whether dreaming helped performance on a task: it enrolled 22 people, but only four remembered dreaming about the task, so the study couldn't draw strong conclusions. Tools that help people remember their dreams could aid similar future studies. 'Understanding what happens to the healthy sleeping brain is something crucial,' Elce noted.

This study, Lack said, 'sets the stage for further understanding into just why certain people remember more of their dreams, as well as suggesting some ways to help people remember more of their dreams, if that's something they want to do.'

Next, Bernardi hopes to look at dream content and eventually to 'see how dreams change in pathological conditions to see whether maybe dreams could be used as an index, as a marker of some alterations in the brain,' he explained. He wants to know if diseases like dementia or Alzheimer's lead to changes in dreaming, which could be helpful for diagnosis.
Yahoo
14-05-2025
- Science
Taking intermittent quizzes reduces achievement gaps and enhances online learning, even in highly distracting environments
Inserting brief quiz questions into an online lecture can boost learning and may reduce racial achievement gaps, even when students are tuning in remotely in a distracting environment. That's a main finding of our recent research published in Communications Psychology. With co-authors Dahwi Ahn, Hymnjyot Gill and Karl Szpunar, we present evidence that adding mini-quizzes to an online lecture in science, technology, engineering or mathematics – collectively known as STEM – can boost learning, especially for Black students.

Our study included over 700 students from two large public universities and five two-year community colleges across the U.S. and Canada. All the students watched a 20-minute video lecture on a STEM topic. Each lecture was divided into four 5-minute segments, and after each segment, the students either answered four brief quiz questions or viewed four slides reviewing the content they'd just seen. This procedure was designed to mimic two kinds of instruction: classes in which students must answer in-lecture questions, and classes in which the instructor regularly goes over recently covered content. All students were tested on the lecture content both at the end of the lecture and a day later.

When Black students in our study watched a lecture without intermittent quizzes, they underperformed Asian, white and Latino students by about 17%. This achievement gap shrank to a statistically nonsignificant 3% when students answered intermittent quiz questions. We believe this is because the intermittent quizzes help students stay engaged with the lecture.

To simulate the real-world environments that students face during online classes, we manipulated distractions: some participants watched just the lecture, while the rest watched it with either distracting memes on the side or TikTok videos playing next to it. Surprisingly, the TikTok videos enhanced learning for students who received review slides. They performed about 8% better on the end-of-day tests than those who were not shown any memes or videos, and similarly to the students who answered intermittent quiz questions. Our data further showed that this unexpected finding occurred because the TikTok videos encouraged participants to keep watching the lecture.

For educators interested in using these tactics, it is important to know that the intermittent quizzing intervention works only if students must answer the questions. This is different from asking questions in a class and waiting for a volunteer to answer; as many teachers know, most students never answer questions in class. If students' minds are wandering, the requirement of answering questions at regular intervals brings their attention back to the lecture. The intervention is also different from just giving students breaks during which they engage in other activities, such as doodling, answering brain-teaser questions or playing a video game.

Online education has grown dramatically since the pandemic. Between 2004 and 2016, the percentage of college students enrolling in fully online degrees rose from 5% to 10%. By 2022, that number had nearly tripled to 27%. Relative to in-person classes, online classes are often associated with lower student engagement and higher failure and withdrawal rates. Research also finds that the racial achievement gaps documented in regular classroom learning are magnified in remote settings, likely due to unequal access to technology.
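For readers who think in code, here is a minimal Python sketch, for illustration only, of random assignment to a 2 × 3 design like the one described above. The condition names are paraphrased from this article, not taken from the study's materials.

```python
# Sketch of random assignment for a design like the one described: each
# student lands in one cell of a 2 x 3 grid (quiz vs. review slides, crossed
# with three distraction conditions). Condition names are paraphrases, and
# real experiments typically balance cell sizes, which this sketch does not.

import random

INTERRUPTION = ("quiz", "review_slides")   # what follows each 5-minute segment
DISTRACTION = ("none", "memes", "tiktok")  # what plays beside the lecture

def assign(student_ids: list, seed: int = 42) -> dict:
    """Randomly assign each student to one cell of the 2 x 3 design."""
    rng = random.Random(seed)
    return {
        sid: (rng.choice(INTERRUPTION), rng.choice(DISTRACTION))
        for sid in student_ids
    }

print(assign(list(range(6))))
```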
Our study therefore offers a scalable, cost-effective way for schools to increase the effectiveness of online education for all students. We are now exploring how to further refine this intervention through experimental work among both university and community college students. As opposed to observational studies, in which researchers track student behaviors and are subject to confounding and extraneous influences, our randomized controlled study allows us to ascertain the effectiveness of the in-class intervention.

Our ongoing research examines the optimal timing and frequency of in-lecture quizzes. We want to ensure that very frequent quizzes do not hinder student engagement or learning. The results of this work may help guide educators on the optimal implementation of in-lecture quizzes.

The Research Brief is a short take on interesting academic work.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Jason C.K. Chan, Iowa State University, and Zohara Assadipour, Iowa State University.

Jason C.K. Chan receives funding from the U.S. National Science Foundation. Zohara Assadipour does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.