
AI might now be as good as humans at detecting emotion, political leaning, sarcasm in online conversations

Related Articles



Business Insider
a day ago
- Business Insider
I'm a CEO running an 8-figure AI company. I'm also an extreme procrastinator — and I think that's a good thing.
This as-told-to essay is based on a transcribed conversation with Richard White, CEO of AI note-taking company Fathom. The following has been edited for length and clarity.

Everyone talks about procrastination as a personal failing. I disagree. I'm an extreme procrastinator, and I've been building successful companies, like UserVoice and, most recently, Fathom, for 15 years. It's been one of my greatest assets as an entrepreneur. I see procrastination as ruthless prioritization in disguise.

Consider procrastination as data collection

Procrastination is a way to gather more information before making critical decisions. When I delay a choice, I'm not being lazy; I'm waiting for the optimal moment, when I have enough data to make the right call.

In college, I judged the size of a project and left it to the last achievable minute. I might have frustrated my peers or not gotten the most out of every seminar, but I'd do exactly what was needed and nothing more. Since then, I've learned to be more thoughtful about my approach.

I used this philosophy to build Fathom, which now has an eight-figure valuation. We started building the company in 2020. Instead of rushing to market with whatever technology was available, we waited. We gathered data. We watched AI capabilities evolve. For example, before the rollout of GPT-4 and Claude 2, Fathom produced only basic call summaries. When GPT-4 became available, we saw its capabilities and knew concerted investment on our side would yield massive gains. It became the foundation for our more advanced call summary features, and any earlier investment wouldn't have been as useful to our company.

The same principle applies to my personal life. I plan trips at the last minute because I want to see what opportunities emerge, what's actually happening in my life, and what I might miss out on if I commit too early.

In other work environments, or even in relationships, being a procrastinator can annoy people. But the real and most common downside of procrastinating is underestimating the effort required and starting something too late to meet the deadline. As a CEO, I get to define the deadlines or, in our case, create a deadline-free environment.

Urgent matters trump important matters

I've adopted an unfashionable approach for a CEO: urgent trumps important. This keeps our entire company moving forward without anyone waiting on me to make progress. It means that sometimes important but non-urgent things languish. I tell my team that if something's truly important, they should keep tagging me until I respond. This creates a culture where people at all levels of the company can advocate for what matters, and truly important tasks don't get lost.

I've developed what I call the "Jenga model" for running my company. Like the game, when a piece looks too difficult or risky to move, I leave it and come back to it later. I can think about a problem and then put it back down without fear. Months later, I'll pick it up again, and suddenly the answer falls right out.

I'll prioritize problems that will get bigger with time, such as making an important product change, as well as problems where the solutions are low-stakes or reversible. Higher-stakes decisions that are irreversible should be deferred as long as possible to gather data, or broken into lower-stakes decisions that help gather data to inform the larger issue.

For product development, we circulate ideas internally while waiting for technological improvements. We don't rush features to market. Instead, we wait for the AI to get better, watch for what could go wrong, and optimize our timing. I don't think I have ever missed out on an opportunity. The reality in startups is that few things have a "hard" deadline. Since implementing a deadline-free environment at Fathom, there hasn't been much negative feedback on this model. My team understands what we're prioritizing versus what we're doing later.

CEOs need to play to their strengths

Working alongside great entrepreneurs over the years has taught me that you can't build something around yourself that doesn't play to your strengths. My strength isn't planning or rigid schedules. My strength is recognizing optimal timing, gathering information, and making high-impact decisions. I delegate open-ended goals to my teams rather than micromanaging tasks. I encourage people at every level to make decisions.

Most people think efficiency means doing things as quickly as possible. I think efficiency means doing things at the right time. You might be wrong about when something is needed or about the time cost of execution, but that's the risk you take using your best collective judgment.

This mindset has served Fathom incredibly well. We're exploring ways to use AI to take better notes, reduce unnecessary meetings, and democratize information sharing within companies.

The next time someone tells you that procrastination is holding you back, ask yourself: are you really procrastinating, or are you waiting for better information? Are you being lazy, or are you being strategically patient? Sometimes the best thing you can do is put the problem down and come back to it when you can solve it easily and effectively.

Business Standard
a day ago
- Business Standard
AI may now match humans in spotting emotion, sarcasm in online chats
When we write something to another person, over email or perhaps on social media, we may not state things directly; our words may instead convey a latent meaning, an underlying subtext. We also often hope that this meaning will come through to the reader. But what happens if an artificial intelligence (AI) system is at the other end, rather than a person? Can AI, especially conversational AI, understand the latent meaning in our text? And if so, what does this mean for us?

Latent content analysis is an area of study concerned with uncovering the deeper meanings, sentiments and subtleties embedded in text. For example, this type of analysis can help us grasp political leanings present in communications that are perhaps not obvious to everyone. Understanding how intense someone's emotions are, or whether they're being sarcastic, can be crucial in supporting a person's mental health, improving customer service, and even keeping people safe at a national level. These are only some examples. We can imagine benefits in other areas of life, like social science research, policy-making and business.

Given how important these tasks are – and how quickly conversational AI is improving – it's essential to explore what these technologies can (and can't) do in this regard. Work on this issue is only just starting. Current work shows that ChatGPT has had limited success in detecting political leanings on news websites. Another study, focused on differences in sarcasm detection between large language models (LLMs) – the technology behind AI chatbots such as ChatGPT – showed that some are better than others. Finally, a study showed that LLMs can guess the emotional 'valence' of words – the inherent positive or negative 'feeling' associated with them.

Our new study, published in Scientific Reports, tested whether conversational AI, including GPT-4 – a relatively recent version of ChatGPT – can read between the lines of human-written texts. The goal was to find out how well LLMs simulate understanding of sentiment, political leaning, emotional intensity and sarcasm, encompassing multiple latent meanings in one study. The study evaluated the reliability, consistency and quality of seven LLMs, including GPT-4, Gemini, Llama-3.1-70B and Mixtral 8x7B. We found that these LLMs are about as good as humans at analysing sentiment, political leaning, emotional intensity and sarcasm. The study involved 33 human subjects and 100 curated items of text.

For spotting political leanings, GPT-4 was more consistent than humans. That matters in fields like journalism, political science or public health, where inconsistent judgement can skew findings or miss patterns. GPT-4 also proved capable of picking up on emotional intensity, and especially valence. Whether a tweet was composed by someone who was mildly annoyed or deeply outraged, the AI could tell – although someone still had to confirm the AI's assessment, because AI tends to downplay emotions.

Sarcasm remained a stumbling block for both humans and machines. The study found no clear winner there, so using human raters doesn't help much with sarcasm detection.
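To make that kind of rating task concrete, here is a minimal sketch of how such judgements might be collected programmatically. It is an illustration, not the study's actual protocol: the query_llm helper, the prompt wording and the rating scales are all assumptions made for this example.

```python
import json

# Stubbed chat call so the sketch runs end to end. query_llm is a
# hypothetical helper, not the study's code: swap in your provider's
# chat-completion API (OpenAI, Gemini, a local Llama, ...).
def query_llm(prompt: str) -> str:
    return ('{"sentiment": -0.6, "emotional_intensity": 0.4, '
            '"political_leaning": 0.0, "sarcastic": true}')

RATING_PROMPT = """Rate the following text. Answer in JSON only, with fields:
  sentiment: -1 (very negative) to 1 (very positive)
  emotional_intensity: 0 (calm) to 1 (outraged)
  political_leaning: -1 (left) to 1 (right), 0 if apolitical
  sarcastic: true or false

Text: {text}
"""

def rate_text(text: str) -> dict:
    """Ask the model for latent-content ratings of a single text item."""
    raw = query_llm(RATING_PROMPT.format(text=text))
    return json.loads(raw)  # assumes the model returned valid JSON

# Score a small batch, analogous to the study's 100 curated items
items = [
    "Oh great, another Monday. Just what I needed.",
    "The new policy is a sensible, long-overdue reform.",
]
for item in items:
    print(item[:45], "->", rate_text(item))
```

In a real pipeline, the model's JSON ratings would then be compared against human raters' scores for the same items, which is how agreement and consistency can be quantified.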
Why does this matter? For one, AI like GPT-4 could dramatically cut the time and cost of analysing large volumes of online content. Social scientists often spend months analysing user-generated text to detect trends. GPT-4, by contrast, opens the door to faster, more responsive research – especially important during crises, elections or public health emergencies. Journalists and fact-checkers might also benefit. Tools powered by GPT-4 could help flag emotionally charged or politically slanted posts in real time, giving newsrooms a head start.

There are still concerns. Transparency, fairness and political leanings in AI remain issues. However, studies like this one suggest that when it comes to understanding language, machines are catching up to us fast – and may soon be valuable teammates rather than mere tools. Although this work doesn't claim conversational AI can replace human raters completely, it does challenge the idea that machines are hopeless at detecting nuance.

Our study's findings raise follow-up questions. If a user asks the same question of an AI in multiple ways – perhaps by subtly rewording prompts, changing the order of information, or tweaking the amount of context provided – will the model's underlying judgements and ratings remain consistent? Further research should include a systematic and rigorous analysis of how stable the models' outputs are. Ultimately, understanding and improving consistency is essential for deploying LLMs at scale, especially in high-stakes settings.
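One way such a stability check might look in practice: rate the same text under several paraphrased prompts and measure the spread of the scores. This is again a hedged sketch, reusing the hypothetical query_llm stub from the previous example; the prompt variants and the standard-deviation measure are assumptions for illustration, not the study's method.

```python
from statistics import mean, pstdev
import json

# Hypothetical paraphrases of the same sentiment question; a real
# stability study would vary wording, ordering and context systematically.
PROMPT_VARIANTS = [
    'How positive or negative is this text, from -1 to 1?\n'
    'Reply in JSON as {{"sentiment": <number>}}.\n\nText: {text}',
    'Describe the tone of this text as JSON:\n'
    '{{"sentiment": <number between -1 and 1>}}.\n\nText: {text}',
    'Score the sentiment on a -1 (hostile) to 1 (friendly) scale.\n'
    'Answer only with {{"sentiment": ...}}.\n\nText: {text}',
]

def sentiment_stability(text: str) -> tuple[float, float]:
    """Rate one text under each prompt variant; return mean and spread."""
    scores = [
        json.loads(query_llm(t.format(text=text)))["sentiment"]
        for t in PROMPT_VARIANTS  # query_llm: the stub defined above
    ]
    return mean(scores), pstdev(scores)  # low spread = stable judgements

avg, spread = sentiment_stability("Oh great, another Monday.")
print(f"mean sentiment {avg:+.2f}, std dev {spread:.2f}")
```

A low standard deviation across paraphrases would suggest the model's judgement reflects the text itself rather than the phrasing of the question, which is exactly the consistency property the follow-up research would need to establish.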