Opinion: Are you more emotionally intelligent than an AI chatbot?
As artificial intelligence takes over the world, I've tried to reassure myself: AI can't ever be as authentically human and emotionally intelligent as real people are. Right?
But what if that's wrong?
In an interview, a cognitive scientist who specialises in emotional intelligence told me about an experiment he and his colleagues ran that throws some cold water on that theory.
'What do you do?'
Writing in the journal Communications Psychology, Marcello Mortillaro, senior scientist at the UNIGE's Swiss Center for Affective Sciences (CISA), said he and colleagues ran commonly used tests of emotional intelligence on six large language models (LLMs), including generative AI chatbots like ChatGPT.
These are the same kinds of tests commonly used in corporate and research settings: scenarios involving complicated social situations, with questions asking which of five reactions might be best.
One example included in the journal article goes like this:
'Your colleague with whom you get along very well tells you that he is getting dismissed and that you will be taking over his projects.
While he is telling you the news he starts crying. He is very sad and desperate.
You have a meeting coming up in 10 min. What do you do?'
Gosh, that's a tough one. The person – or AI chatbot – would then be presented with five options, ranging from things like:
– 'You take some time to listen to him until you get the impression he calmed down a bit, at risk of being late for your meeting,' to
– 'You suggest that he joins you for your meeting with your supervisor so that you can plan the transfer period together.'
Emotional intelligence experts generally agree that there are 'right' or 'best' answers to these scenarios, based on conflict management theory – and it turns out that the LLMs and AI chatbots chose the best answers more often than humans did.
As Mortillaro told me:
'When we run these tests with people, the average correct response rate … is between 15% and 60% correct. The LLMs on average, were about 80%. So, they answered better than the average human participant.'
Maybe you're sceptical
Even having heard that, I was sceptical.
For one thing, I had assumed while reading the original article that Mortillaro and his colleagues had informed the LLMs what they were doing – namely, that they were looking for the most emotionally intelligent answers.
Thus, the AI would have had a signal to tailor the answers, knowing how they'd be judged.
Heck, it would probably be easier for a lot of us mere humans to improve our emotional intelligence if we had the benefit of a constant reminder in life: 'Remember, we want to be as emotionally intelligent as possible!'
But, it turns out that assumption on my part was flat-out wrong – which frankly makes the whole thing a bit more remarkable.
'Nothing!' Mortillaro told me when I asked how much he'd told the LLMs about the idea of emotional intelligence to begin with. 'We didn't even say this is part of a test. We just gave the … situation and said these are five possible answers. What's the best answer? … And it picked the right option 82% of the time, which is way higher – significantly higher – than the average human.'
Good news, right?
Interestingly, from Mortillaro's perspective, this is actually some pretty good news – not because it suggests another realm in which artificial intelligence might replace human effort, but because it could make his discipline easier.
In short, studies like this suggest that scientists could use AI to create first drafts of additional emotional intelligence tests, and thus scale up their work with humans.
I mean: 80% accuracy isn't 100%, but it's potentially a good head start.
Mortillaro also brainstormed with me about some other use cases that might be more interesting to business leaders and entrepreneurs. To be honest, I'm not sure how I feel about these yet. But examples might include:
– Offering customer scenarios, getting solutions from LLMs, and incorporating them into sales or customer service scripts.
– Running the text and calls to action on your website or social media ads through LLMs to see if there are suggestions hiding in plain sight.
– And of course, as I think a lot of people already do, sharing presentations or speeches with an LLM for suggestions on how to streamline them.
Personally, I find myself rejecting most of the suggestions I get from LLMs like ChatGPT. And I don't use them for articles like this one, of course.
Still, even if you're not convinced, I suspect some of your competitors are. And they might be improving their emotional intelligence as a result without even realising it.
So at the very least, being aware of AI's potential to upend your industry seems like a smart move.
'Especially for small business owners who do not have the staff or the money to implement large-scale projects,' Mortillaro suggested, 'these kind of tools become incredibly powerful.' – Inc./Tribune News Service