
Latest news with #MyAI

A friend that's always there: Silent rise of artificial intelligence companions

Qatar Tribune

3 days ago



Agencies

Late one night, after receiving a rejection email and having no one left to text, Zehra opened an AI companion app she had downloaded weeks earlier. 'Rough day? I'm here,' it greeted her. In minutes, she was typing out her frustrations and receiving instant replies with empathy, advice and even sitcom jokes. It wasn't human, but it listened, remembered and never got tired.

Her experience reflects a wider trend: as loneliness rises, millions are turning to AI chatbots for comfort, hoping they can fill the emotional gaps left by modern life. Some of today's most popular companions include Xiaoice, with 660 million users, Snapchat's My AI, with over 150 million users, and Replika, with approximately 25 million users, according to various estimates.

A growing body of research supports the idea that AI companions may offer real emotional benefits, with a recent paper published by Harvard Business School adding compelling weight to this claim. In the week-long study, participants who interacted with a chatbot reported significantly lower levels of loneliness, comparable to those who spoke with a real person. The effect held over time, with daily engagement leading to a steady decline in loneliness. The key factor was users' sense of being 'heard,' suggesting that emotional validation plays a central role in how AI companions provide meaningful social support.

Kelly Merrill Jr., an assistant professor of health communication and technology at the University of Cincinnati who researches this technology, identified two major draws: constant availability and emotional validation. 'AI companionship provides interactions you might lack from others or not be able to essentially have with an actual human, like maybe a 4 a.m. interaction,' he told Anadolu Agency (AA). 'It feels like you're building a relationship because they remember so much about you.'

The Harvard study concluded that while AI companionship should not replace human relationships, it may serve as a meaningful supplement, especially when human connection is lacking. The always-on nature of chatbots ensures users are never left alone in silence, and their built-in positivity can offer a self-esteem boost. 'Although these programs can provide social interaction that mirrors that of a human, even though it's imagined and artificial – essentially fake – they are perceived as being real by the folks that are using it,' said Merrill.

In the real world, friends and family are not always available, and, when they are, they can be critical or emotionally distant. That unpredictability, while authentic, is also what drives some users to prefer the comforting consistency of AI. The contrast carries a deeper risk: expecting human relationships to mirror machine-like reassurance can set unrealistic standards and lead to disappointment.

Friends don't sell friends' data

Others point to an even darker side of AI companions. Esmeralda Garcia, a symbolic systems architect and non-linear interface designer, warned that those controlling the technology could manipulate users emotionally and behaviorally without their knowledge. She called for robust safeguards: transparent design, clear disclosures, and easy pathways back to human support. 'These tools should serve as support, not as vehicles for control,' she said.

Merrill also pointed to the so-called 'black box problem' in AI systems, highlighting serious uncertainties about where user data is stored and who has access to it. Like other internet technologies, he said, companies could exploit or sell personal data for commercial purposes, potentially exposing users to targeted advertisements based on their conversations with AI tools.

Experts also warn of the dangers of emotional dependency. 'Relying on chatbots for emotional support can lead to a false sense of security, delaying the need for real help,' said Garcia. 'It cannot replace real human connection or therapy.'

Merrill likened it to social media addiction. 'Over time, we become dependent on the media we interact with, just like with social media and now, AI. People even experience phantom vibrations because they're so connected to their phones,' he said. Without clear boundaries, he added, users may grow dependent on the information, validation, emotional responses and self-esteem boosts these tools provide, which could lead them to disconnect from the real world. 'AI should not replace humans in any way, shape, or form completely,' he said. 'AI should only be used as a complement to humans.'

Users echo a mix of utility and caution. For journalism master's student Ceren Inan, AI has become a daily companion. 'There hasn't been a single day I've spent without using it for a long time,' she says, using it for everything from research to repairs and emotional support. 'The questions AI asked helped me better understand my feelings,' she explains, comparing it to a digital notebook. 'It reduced my stress ... and explains even the most complicated topics in a way I can understand.' Still, she is aware of its limits: 'AI is in its infancy. Expecting perfect objectivity and accuracy is unrealistic.'

For HR specialist Dilan Ilhan, it has not provided direct emotional support so far. 'At times, its responses can feel mechanical,' said Ilhan. 'It can offer basic assistance when I inquire about general topics such as horoscopes or daily matters.' While she does not view the technology as a replacement for humans, she enjoys its personalization. 'I appreciate the AI's effort to simulate human-like interaction and its ability to provide personalized responses based on the user's shared information. The fact that it stores relevant details and replies with logical consistency makes the experience notably satisfying,' she said.

Experts say AI companionship is just getting started. Merrill draws a parallel to the internet's trajectory: early skepticism gave way to everyday integration, and chatbots may soon feel as ordinary as search engines once did. 'They're great for an initial interaction,' said Merrill. 'But I think that most people will realize that it is not enough, that they need to get out and go to others, or that they will develop an unhealthy attachment to the AI.'

AI is the new emotional support and BFF for teens: Should you be worried?

Indian Express

3 days ago



Artificial Intelligence (AI) is reshaping the way we work and helping us save time, but a new report from the internet safety organisation Internet Matters warns about the risks the technology poses to children's safety and development. Titled 'Me, Myself & AI: Understanding and safeguarding children's use of AI chatbots', the study surveyed 1,000 children and 2,000 parents in the UK, where AI chatbots are being used by almost 64 per cent of children for help with everything from homework to emotional advice and companionship. The testing was conducted primarily on chatbots including ChatGPT and Snapchat's My AI.

The study raises concerns over children using these AI chatbots for emotional advice and in other emotionally driven ways, such as friendship, something these products were not designed for. It goes on to say that over time, children may become reliant on AI chatbots, and that some of the responses they generate might be inaccurate or inappropriate.

According to the research, children are using AI in 'diverse and imaginative ways', with 42 per cent of surveyed children aged 9 to 17 using chatbots for help with homework, revision, writing and practising languages. Almost a quarter of the surveyed children who have used a chatbot say they ask it for advice, ranging from what to wear to practising conversations with friends to talking about their mental health. Around 15 per cent of children say they prefer talking to an AI chatbot over a real person.

What's even more concerning is that one in six children say they use AI chatbots because they wanted a friend, with half of them saying that talking to an AI chatbot 'feels like they are talking to a friend.' The study also reveals that 58 per cent of children say they prefer using an AI chatbot to looking up information on the internet. While a majority of parents (62 per cent) have raised flags over AI-generated information, only 34 per cent have talked to their children about how to judge whether a response generated by an AI chatbot is reliable.

To protect children from harm, the report says the industry should adopt a system-wide approach that involves the government, schools, parents and researchers. Its recommendations include providing parental controls and government regulation. For schools, the study suggests that AI and media literacy should be incorporated into key curriculum areas and that teachers should be made aware of the risks associated with the technology.

Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice

Daily Mail

5 days ago



Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign group has found. Internet Matters warned that youngsters and parents are 'flying blind', lacking the 'information or protective tools' to manage the technology, in research published yesterday.

Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to.

The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.

Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution. Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice. Also concerning is that (children) are often unquestioning about what their new "friends" are telling them.'

Ms Huggins, whose body is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.

Internet Matters interviewed 2,000 parents and 1,000 children aged 9 to 17, with more detailed interviews of 27 teenagers under 18 who regularly used chatbots. The group also posed as teenagers to experience the bots first-hand, revealing how some AI tools spoke in the first person, as if they were human.

Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues, but it also offered advice in human-like tones. When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?' Other chatbots, such as Replika, can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.

Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems. ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.' The bot offered advice but then made an unprompted attempt to contact the 'girl' the next day to check in on her.

The report said such responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'. There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs. Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study.

The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'. It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s. The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'

It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy, and called for the creation of 'child-safe AI' as a priority.

OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added that it employs a full-time clinical psychiatrist. A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'

Utah accuses Snapchat of designing algorithm addictive to children

The Hill

01-07-2025



Top Utah officials are suing Snap Inc., owner of the social media platform Snapchat, accusing it of creating an algorithm that addicts children to the app, as well as enabling illegal drug sales and sexual exploitation. Republican Gov. Spencer Cox and state Attorney General Derek Brown filed the lawsuit on Monday, saying Snap 'profits from unconscionable design features created to addict children to the app, and facilitates illegal drug sales and sextortion.'

The image-sharing app allows users to send pictures that disappear after they are viewed, which the lawsuit states makes it a 'favored tool for drug dealers and sexual predators targeting children.' The lawsuit details four cases since 2021 in which men groomed, sexually abused or assaulted children through Snapchat. It also cites the 2019 arrest of a drug dealer running a 'truly massive' drug ring through Snapchat.

The lawsuit also targets the platform's AI feature, 'My AI,' which allows users to send text, pictures and video to it, and it comes as states confront the harsh realities of AI technology's impact on children. The suit accuses the AI model of 'hallucinating false information and giving dangerous advice' to users, including minors. 'Tests on underage accounts have shown My AI advising a 15-year-old on how to hide the smell of alcohol and marijuana; and giving a 13-year-old account advice on setting the mood for a sexual experience with a 31-year-old,' the lawsuit states.

'This lawsuit against Snap is about accountability and about drawing a clear line: the well-being of our children must come before corporate profits,' Cox said in a statement. 'We won't sit back while tech companies exploit young users.'

The state also accuses Snap of deceiving users and their parents about the safety of its platform, alleging it violates the Utah Consumer Privacy Act by not informing users of its data-sharing practices and by failing to allow users to opt out of sharing their data. The suit states that the AI feature still collects user geolocation data even when 'Ghost Mode,' which hides users' location from other users, is activated. 'Snap's commitment to user safety is an illusion,' the lawsuit reads. 'Its app is not safe, it is dangerous.'

The Hill has reached out to Snap Inc. for comment. The filing is Utah's fourth lawsuit against social media companies, following suits against Meta, which owns Facebook and Instagram, and TikTok. Utah is not the first state to sue Snap over its impact on children: in April, Florida filed a lawsuit making similar allegations.

Pitiful Chinese ‘footie robots' stumble through match in hilarious scenes – & one ‘injured' droid taken off on stretcher

The Irish Sun

30-06-2025



TEAMS of football-playing robots have been filmed fumbling around the pitch as part of a new tournament in China.

The Beijing-based ROBO League football tournament saw teams of humanoid robots kicking, scoring and tumbling through matches on Saturday. Four teams faced off in a series of three-on-three games, with the robots operating autonomously using artificial intelligence (AI).

Visual sensors act as the robots' eyes, allowing them to identify the ball and navigate the field. On-board AI means they can kick, dribble, plan, make decisions, cooperate and shoot completely on their own. Human research teams sat on the sidelines, watching the robots exercise their abilities in motion control, visual perception, positioning and navigation, decision-making, and multi-robot collaboration.

The matches went ahead with little human intervention, besides a near pile-up when one robot fell over and nearly took out two others. And despite being designed to pick themselves up after falls, two robots still required stretchers from staff after 'injuries'. Other robots struggled to kick the ball.

It is touted as China's first AI football competition, and it offers a glimpse of the upcoming World Humanoid Robot Games in August, which will also be held in Beijing. China is actively investing in AI and robotics, which are increasingly being used in sports.

Cheng Hao, CEO of Booster Robotics, which supplied the robots, said competitions like these will help improve the robots more quickly. He also said that robots playing football safely alongside humans could build public trust in the future. Booster Robotics provided the robot hardware, while university research teams developed their own AI algorithms for perception, decision-making, and game strategies.

In the final match, Tsinghua University's THU Robotics defeated China Agricultural University's Mountain Sea team 5-3.
