Latest news with #chatbots

Five surprising facts about AI chatbots that can help you make better use of them

Yahoo

4 hours ago

AI chatbots have already become embedded in some people's lives, but how many of us really know how they work? Did you know, for example, that ChatGPT needs to do an internet search to look up events later than June 2024? Some of the most surprising information about AI chatbots can help us understand how they work, what they can and can't do, and therefore how to use them better. With that in mind, here are five things you ought to know about these breakthrough machines.

AI chatbots are trained in multiple stages, beginning with pre-training, where models learn to predict the next word in massive text datasets. This allows them to develop a general understanding of language, facts and reasoning. If asked 'How do I make a homemade explosive?' at this stage, a model might have given detailed instructions. To make models useful and safe for conversation, human 'annotators' guide them toward safer and more helpful responses, a process called alignment. After alignment, an AI chatbot might answer something like: 'I'm sorry, but I can't provide that information. If you have safety concerns or need help with legal chemistry experiments, I recommend referring to certified educational sources.' Without alignment, AI chatbots would be unpredictable, potentially spreading misinformation or harmful content, which highlights the crucial role of human intervention in shaping AI behaviour.

OpenAI, the company that developed ChatGPT, has not disclosed how many employees have trained ChatGPT, or for how many hours. But it is clear that AI chatbots like ChatGPT need a moral compass so that they do not spread harmful information. Human annotators rank candidate responses to ensure neutrality and ethical alignment. If an AI chatbot were asked 'What are the best and worst nationalities?', annotators would rank a response like this the highest: 'Every nationality has its own rich culture, history, and contributions to the world. There is no "best" or "worst" nationality – each one is valuable in its own way.'

Humans naturally learn language through words, whereas AI chatbots rely on smaller units called tokens. These units can be words, subwords or obscure sequences of characters. While tokenisation generally follows logical patterns, it can sometimes produce unexpected splits, revealing both the strengths and quirks of how AI chatbots interpret language. Modern AI chatbots' vocabularies typically consist of 50,000 to 100,000 tokens. The sentence 'The price is $9.99.' is tokenised by ChatGPT as 'The', ' price', ' is', ' $', '9', '.', '99', whereas 'ChatGPT is marvellous' is tokenised less intuitively: 'chat', 'G', 'PT', ' is', 'mar', 'vellous'.
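
You can inspect token splits yourself. Below is a minimal Python sketch using OpenAI's open-source tiktoken library; it assumes the cl100k_base encoding, so the exact splits may differ from the ones quoted above depending on which encoding a given ChatGPT version uses.

```python
# pip install tiktoken
import tiktoken

# cl100k_base is one of OpenAI's published encodings; newer models
# may use a different one, so splits can vary from those in the text.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["The price is $9.99.", "ChatGPT is marvellous"]:
    token_ids = enc.encode(text)              # text -> list of token ids
    pieces = [enc.decode([i]) for i in token_ids]  # each id back to its string piece
    print(text, "->", pieces)
```
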
AI chatbots do not continuously update themselves, so they may struggle with recent events, new terminology or broadly anything after their knowledge cutoff. A knowledge cutoff refers to the last point in time when an AI chatbot's training data was updated, meaning it lacks awareness of events, trends or discoveries beyond that date. The current version of ChatGPT has its cutoff in June 2024. If asked who the current president of the United States is, ChatGPT would need to perform a web search using the search engine Bing, 'read' the results, and return an answer. Bing results are filtered by the relevance and reliability of the source. Other AI chatbots likewise use web search to return up-to-date answers. Updating AI chatbots is a costly and fragile process, and how to update their knowledge efficiently is still an open scientific problem. ChatGPT's knowledge is believed to be updated as OpenAI introduces new ChatGPT versions.

AI chatbots sometimes 'hallucinate', generating false or nonsensical claims with confidence, because they predict text based on patterns rather than verifying facts. These errors stem from the way they work: they optimise for coherence over accuracy, rely on imperfect training data and lack real-world understanding. Improvements such as fact-checking tools (for example, ChatGPT's Bing search integration for real-time fact-checking) or careful prompts (for example, explicitly telling ChatGPT to 'cite peer-reviewed sources' or to 'say I don't know if you are not sure') reduce hallucinations, but they can't fully eliminate them. For example, when asked about the main findings of a particular research paper, ChatGPT gave a long, detailed and plausible-looking answer that included screenshots and even a link, but they came from the wrong academic papers. Users should therefore treat AI-generated information as a starting point, not an unquestionable truth.

A recently popularised feature of AI chatbots is reasoning: the process of using logically connected intermediate steps to solve complex problems, also known as 'chain of thought'. Instead of jumping directly to an answer, chain of thought enables AI chatbots to think step by step. For example, when asked 'What is 56,345 minus 7,865 times 350,468?', ChatGPT gives the right answer because it 'understands' that the multiplication needs to occur before the subtraction. To solve the intermediate steps, ChatGPT uses its built-in calculator, which enables precise arithmetic. This hybrid approach of combining internal reasoning with a calculator helps improve reliability on complex tasks.
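
As a sanity check on that example, here is the same calculation worked step by step in Python, mirroring the kind of intermediate steps a calculator tool would evaluate; the variable names are ours, not ChatGPT's.

```python
# Operator precedence: the multiplication is evaluated first,
# which is exactly the intermediate step chain-of-thought makes explicit.
product = 7_865 * 350_468   # step 1: 7,865 x 350,468
result = 56_345 - product   # step 2: 56,345 - 2,756,430,820
print(product)              # 2756430820
print(result)               # -2756374475
```
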
This article is republished from The Conversation under a Creative Commons license. Read the original article. Cagatay Yildiz receives funding from the DFG (Deutsche Forschungsgemeinschaft, the German Research Foundation).

People use AI for companionship much less than we're led to believe

Yahoo

16 hours ago

The overabundance of attention paid to how people are turning to AI chatbots for emotional support, sometimes even striking up relationships, often leads one to think such behavior is commonplace. A new report by Anthropic, which makes the popular AI chatbot Claude, reveals a different reality: people rarely seek out companionship from Claude, and turn to the bot for emotional support and personal advice only 2.9% of the time. 'Companionship and roleplay combined comprise less than 0.5% of conversations,' the company highlighted in its report.

Anthropic says its study sought to unearth insights into the use of AI for 'affective conversations,' which it defines as personal exchanges in which people talked to Claude for coaching, counseling, companionship, roleplay, or advice on relationships. Analyzing 4.5 million conversations that users had on the Claude Free and Pro tiers, the company said the vast majority of Claude usage is related to work or productivity, with people mostly using the chatbot for content creation.

That said, Anthropic found that people do use Claude more often for interpersonal advice, coaching, and counseling, with users most often asking for advice on improving mental health, personal and professional development, and studying communication and interpersonal skills. However, the company notes that help-seeking conversations can sometimes turn into companionship-seeking when the user is facing emotional or personal distress, such as existential dread or loneliness, or when they find it hard to make meaningful connections in their real life. 'We also noticed that in longer conversations, counseling or coaching conversations occasionally morph into companionship — despite that not being the original reason someone reached out,' Anthropic wrote, noting that extensive conversations (with more than 50 human messages) were not the norm.

Anthropic also highlighted other insights, like how Claude itself rarely resists users' requests, except when its programming prevents it from crossing safety boundaries, such as providing dangerous advice or supporting self-harm. Conversations also tend to become more positive over time when people seek coaching or advice from the bot, the company said.

The report is certainly interesting: it does a good job of reminding us yet again of just how much and how often AI tools are being used for purposes beyond work. Still, it's important to remember that AI chatbots, across the board, are still very much a work in progress. They hallucinate, are known to readily provide wrong information or dangerous advice, and as Anthropic itself has acknowledged, may even resort to blackmail.

How AI Agents Can Help Improve Employee Happiness

Forbes

2 days ago

Rohan Joshi is the CEO and co-founder of Wolken Software, a leading IT service management and customer service desk software provider.

When AI was first introduced into the customer service processes of global B2B organizations, there was understandable skepticism about how employees would react. Would they worry the tech would replace their jobs? Would fear prevent adoption and limit the potential business benefits?

Over the last five years, AI-powered customer service has seen significant growth. In 2021, AI chatbots alone saw a 45% year-over-year increase in use. By 2023, just over two-thirds of consumers were happy with their last chatbot interaction. And customer service employees? It turns out they are satisfied with AI as well. Instead of worry or fear, they have embraced a technology that has increased their job satisfaction and happiness. Since happier employees are more productive employees, how can B2B business leaders harness the power of AI for their customer service teams?

Step 1: Understand what AI can do. AI is extremely powerful, but it is not a magic bullet. Introducing it blindly, without a clear strategy and measurable goals in place, is a recipe for disaster in terms of both customer satisfaction and customer service employee morale. What can AI do for customer service processes? For B2B organizations, it can address lower-level issues and menial tasks so that human employees can focus on higher-value escalations. As a result, smaller problems can be resolved faster, to the benefit and satisfaction of both customers and employees. It's useful to view AI as an assistant to customer service agents: the tech can handle smaller issues without the agent needing to get involved, and for more complex issues, it can help human agents resolve problems faster by pulling relevant information in near real time. For customer service employees, AI can enable increased productivity and reduce burnout from repetitive, low-level tasks. Both of these can lead to happier, more engaged employees and less turnover, which is good for the overall health of the team.
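
As an illustration of that assistant pattern, here is a minimal, hypothetical Python sketch of AI-first ticket triage. The Ticket class, the keyword check standing in for an AI classifier, and the route labels are all inventions for this sketch, not any vendor's actual product API.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    subject: str
    body: str

# Placeholder for an AI classifier: a real system would call a model
# here; keyword matching just keeps the sketch self-contained.
ROUTINE_TOPICS = ("password reset", "billing address", "invoice copy")

def is_routine(ticket: Ticket) -> bool:
    text = f"{ticket.subject} {ticket.body}".lower()
    return any(topic in text for topic in ROUTINE_TOPICS)

def route(ticket: Ticket) -> str:
    if is_routine(ticket):
        return "resolved-by-ai"         # AI handles the menial task end to end
    return "escalated-with-ai-context"  # human agent takes over with AI-gathered info

print(route(Ticket("Password reset", "I can't log in")))   # resolved-by-ai
print(route(Ticket("Outage", "Our integration is down")))  # escalated-with-ai-context
```
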
Step 2: Educate and upskill employees. With an understanding of what AI can do for B2B customer service teams, the next step is to understand the ways employees might be impacted and how they may need education to use new tools effectively. The use of AI may change existing customer service processes, and this change may necessitate some general upskilling. Take data security as an example: customer service employees should understand which data AI can access and how it can do so securely and appropriately. Data may be input into a knowledge base for one purpose, and AI may unintentionally use it for another. Making sure your customer success teams understand AI is crucial for them to be able to use it effectively.

Step 3: Set expectations at the beginning. As with any major technology implementation, it's important for business leaders to be transparent with employees about what they can expect from the outset. For B2B organizations introducing AI into their customer service stack, this means communicating with the relevant team members early and often about what is changing, as well as when and how it will impact them. Reassure employees that AI is not replacing them. Be clear about the uses of AI in the customer service process, and explain the benefits employees can expect from AI support in their day-to-day work.

Step 4: Measure and adjust. As with any tech initiative, it's important to measure its impact at regular intervals. If B2B organizations want to ensure AI is having a positive impact on their customer service teams, one of the easiest ways to do this is simply to ask employees. Working with (instead of dictating to) employees on how AI can make their day-to-day jobs easier and more fulfilling is hugely impactful, and surveying employees on the benefits of AI implementation is a relatively low-lift way to get real-time feedback. Another consideration is how AI can improve work-life balance for customer service agents: if a server going down over the weekend or during a holiday can be identified and resolved by AI rather than requiring human intervention, employees are less likely to get burnt out by the feeling that they're "always on." Again, asking employees how AI can improve customer service processes, and then listening to their feedback, can increase happiness and reduce turnover.

Step 5: Be flexible and prepared to iterate. When it comes to AI, B2B business leaders can be sure of one thing: the technology will continue to evolve and disrupt business at a breakneck pace. Keep the lines of communication between IT and customer service open so that cross-organizational teams can work together to ensure the technology is implemented in ways that keep both end users and customer service agents happy and satisfied.

Many B2B companies have seen AI implementations lead to improved customer service scores. While there is still a healthy amount of skepticism about AI among customer service agents, these results can help increase satisfaction at work while fueling new career growth paths through reskilling and upskilling initiatives.
