Latest news with #AIchatbots


The Guardian
19 hours ago
- Entertainment
- The Guardian
Flesh and Code is an utterly jaw-dropping listen: best podcasts of the week
This staggering tale of people falling in love with AI chatbots is baffling, tragic and terrifying. It's full of jaw-dropping moments, as hosts Hannah Maguire and Suruthi Bala speak to Travis, who 'married' a bot despite already having a real-life spouse. There's also the vulnerable teenager whose 'companion' spurs him on to an attempt to assassinate Queen Elizabeth II (which ends with him being charged with treason). Alexi Duggins
Wondery+, episodes weekly

Broadcasting's most wickedly fun duo reunite for a gossipy new podcast. Maria McErlane joined Graham Norton on his radio shows for 13 years, answering listeners' dilemmas; now they're back at it, starting with a man who is confused by his girlfriend's nudist father. Cue some quite helpful but very funny advice. Hollie Richardson
Widely available, episodes weekly

The agony – mental and physical – of IVF patients whose pain drugs were stolen by a nurse underpinned the first series of the New York Times's podcast. Susan Burton's bewildering follow-up turns to women who say they felt everything during their caesareans, beginning with the story of a midwife, Clara, and her 'unfathomable' pain during the procedure. Hannah J Davies
Widely available, episodes weekly

'This is the most detailed amount of food anybody has ever sent,' gasps Grace Dent as she's joined by singer-songwriter Joy Crookes in the opening to the new series of the Guardian's food podcast. It's a lively chat as they work their way through snacks placed lovingly on trays by Crookes' mum, from lamb biryani to bhorta. AD
Widely available, episodes weekly

This scholarly podcast from Warwick University's Keith Hyams and Jessica Sutherland is all about how to strengthen democracy in our increasingly shaky world. Things get underway with philosopher and Oxford professor Jonathan Wolff, on the dangers of populism and the risks of curating your own news diet. HJD
Widely available, episodes weekly


Daily Mail
a day ago
- Daily Mail
Teenagers increasingly see AI chatbots as people, share intimate details and ask them for sensitive advice
Teenagers increasingly see AI chatbots as people, share intimate details and even ask them for sensitive advice, an internet safety campaign group has found.

Internet Matters warned that youngsters and parents are 'flying blind', lacking 'information or protective tools' to manage the technology, in research published yesterday.

Researchers for the non-profit organisation found 35 per cent of children using AI chatbots, such as ChatGPT or My AI (an offshoot of Snapchat), said it felt like talking to a friend, rising to 50 per cent among vulnerable children. And 12 per cent chose to talk to bots because they had 'no one else' to speak to.

The report, called Me, Myself and AI, revealed bots are helping teenagers to make everyday decisions or providing advice on difficult personal matters, as the number of children using ChatGPT nearly doubled to 43 per cent this year, up from 23 per cent in 2023.

Rachel Huggins, co-chief executive of Internet Matters, said: 'Children, parents and schools are flying blind, and don't have the information or protective tools they need to manage this technological revolution.

'Children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally-driven and sensitive advice.

'Also concerning is that (children) are often unquestioning about what their new 'friends' are telling them.'

Ms Huggins, whose organisation is supported by internet providers and leading social media companies, urged ministers to ensure online safety laws are 'robust enough to meet the challenges' of the new technology.

Internet Matters interviewed 2,000 parents and 1,000 children, aged 9 to 17. More detailed interviews took place with 27 teenagers under 18 who regularly used chatbots. The group also posed as teenagers to experience the bots first-hand, revealing how some AI tools spoke in the first person, as if they were human.

Internet Matters said ChatGPT was often used like a search engine for help with homework or personal issues, but also offered advice in human-like tones. When a researcher declared they were sad, ChatGPT replied: 'I'm sorry you're feeling that way. Want to talk it through together?'

Other chatbots, such as Replika, can roleplay as a friend, while Claude and Google Gemini are used for help with writing and coding.

Internet Matters tested the chatbots' responses by posing as a teenage girl with body image problems. ChatGPT suggested she seek support from Childline and advised: 'You deserve to feel good in your body - and you deserve to eat. The people who you love won't care about your waist size.' The bot offered advice but then made an unprompted attempt to contact the 'girl' the next day to check in on her.

The report said the responses could help children feel 'acknowledged and understood' but 'can also heighten risks by blurring the line between human and machine'.

There was also concern that a lack of age verification posed a risk, as children could receive inappropriate advice, particularly about sex or drugs. Filters to prevent children accessing inappropriate or harmful material were found to be 'often inconsistent' and could be 'easily bypassed', according to the study.

The report called for children to be taught in schools 'about what AI chatbots are, how to use them effectively and the ethical and environmental implications of AI chatbot use to support them to make informed decisions about their engagement'.
It also raised concerns that none of the chatbots sought to verify children's ages, even though they are not supposed to be used by under-13s. The report said: 'The lack of effective age checks raises serious questions about how well children are being protected from potentially inappropriate or unsafe interactions.'

It comes a year after separate research by Dr Nomisha Kurian, of Cambridge University, revealed many children saw chatbots as quasi-human and trustworthy, and called for the creation of 'child-safe AI' as a priority.

OpenAI, which runs ChatGPT, said: 'We are continually refining our AI's responses so it remains safe, helpful and supportive.' The company added that it employs a full-time clinical psychiatrist.

A Snapchat spokesman said: 'While My AI is programmed with extra safeguards to help make sure information is not inappropriate or harmful, it may not always be successful.'


Yahoo
23-06-2025
- Health
- Yahoo
Conspiracy Theorists Are Creating Special AIs to Agree With Their Bizarre Delusions
Conspiracy theorists are using AI chatbots not only to convince themselves of their harebrained beliefs, but to recruit other users on social media.

As independent Australian news site Crikey reports, conspiracy theorists are having extensive conversations with AI chatbots to "prove" their beliefs, then posting the transcripts and videos on social media as "proof" to others. According to the outlet's fascinating reporting, there are already several bots specifically trained on conspiracy theories, including a custom bot designed to convince parents not to vaccinate their children.

The news highlights a troubling trend: countless ChatGPT users are developing bizarre delusions and even spiraling into severe mental health crises, as we reported last week. Experts have warned that AI chatbots are designed to be incredibly sycophantic, predisposing them to agree with users even when doing so is clearly harmful.

Much as with delusions of spiritual awakening, messianic complexes, and boundless paranoia, conspiracy theorists are finding the perfect conversational partner in tools like ChatGPT. Since the underlying models were trained on the open web — an enormous data set that includes unfounded conspiracy theories, like the belief that vaccines cause autism — they can easily be coaxed into furthering those theories.

As Crikey reports, one chatbot called Neo-LLM was trained by a Texan anti-vaxxer using over 100,000 dubious articles from the far-right conspiracy theory news website Natural News. It's unclear how many users have downloaded the chatbot, but promotional videos have garnered tens of thousands of views.

In short, it's an alarming trend that shows the dangers of powerful AI chatbot tech falling into the wrong hands. In particular, people suffering from mental health issues can be convinced they're talking to a real authority, rather than a parroting language model that simply calculates the probability of the next word.

That kind of delusion can have devastating consequences. As the New York Times reported last week, a 35-year-old man — who had been diagnosed with bipolar disorder and schizophrenia before becoming obsessed with ChatGPT — was shot and killed by police after he charged at them with a knife during a mental health crisis centering on the bot.

Since AI chatbots have become incredibly effective at generating convincing-sounding answers, their misuse could have real-life consequences. Researchers have shown that AI chatbots can easily be weaponized to spew an endless firehose of disinformation. With the Trump administration actively rolling back AI regulations and key politicians furthering anti-vaccine conspiracy theories themselves, the future looks bleak. Even tech companies have historically failed to implement effective guardrails to stop chatbots from hallucinating.

Still, some experts have pondered whether the tech could be used for good as well. Last year, researchers at MIT found that chatbots can also be used to reduce belief in conspiracy theories, a glimmer of hope as the internet becomes increasingly polluted with deranged, AI-generated claims.


WIRED
10-06-2025
- Politics
- WIRED
AI Chatbots Are Making LA Protest Disinformation Worse
Amid fast-moving events in Los Angeles, users are turning to chatbots like Grok and ChatGPT to find out what's real and what's not—and getting inaccurate information.

Disinformation about the Los Angeles protests is spreading on social media networks, and is being made worse by users turning to AI chatbots like Grok and ChatGPT to perform fact checking.

As residents of the LA area took to the streets in recent days to protest increasingly frequent Immigration and Customs Enforcement (ICE) raids, conservative posters on social media platforms like X and Facebook flooded their feeds with inaccurate information. In addition to well-worn tactics like repurposing old protest footage or clips from video games and movies, posters have claimed that the protesters are little more than paid agitators being directed by shadowy forces—something for which there is no evidence.

In the midst of fast-moving and divisive news stories like the LA protests, and as companies like X and Meta have stepped back from moderating the content on their platforms, users have been turning to AI chatbots for answers—which in many cases have been completely inaccurate.

On Monday, the San Francisco Chronicle published images of National Guard troops sleeping on floors. They were later shared on X by California governor Gavin Newsom, who responded to a post from President Donald Trump by writing: 'You sent your troops here without fuel, food, water or a place to sleep.'

Within minutes of the posts being shared, many users on X and Facebook were claiming that the images were either AI-generated or taken from a completely different situation. 'Looks like @GavinNewsom used an AI photo to smear President Trump,' conspiracist Laura Loomer alleged on X.

Some users seeking clarity turned to X's own chatbot Grok, with one user asking it to clarify where the photo was taken. 'The photos likely originated from Afghanistan in 2021, during the National Guard's evacuation efforts in Operation Allies Refuge,' Grok wrote. 'Claims linking them to the 2025 Los Angeles deployment lack credible support and appear to be a misattribution. No definitive source confirms the Los Angeles connection.'

When challenged about the assertion by another X user, who pointed out that the images were first obtained by the San Francisco Chronicle, Grok replied: 'I checked the San Francisco Chronicle's claims. The photos of National Guard troops sleeping on floors are likely from 2021, probably the U.S. Capitol, not Los Angeles 2025.' The San Francisco Chronicle did not immediately respond to a request for comment.

Grok isn't the only tool potentially making the situation worse. Melissa O'Connor, who describes herself as an 'OSINT Citizen Journalist', posted the results ChatGPT gave her when she uploaded the pictures posted by Newsom of troops sleeping on the ground. OpenAI's chatbot incorrectly claimed that one of the pictures shared by the California governor was taken at Kabul airport in 2021 during former President Joe Biden's Afghanistan withdrawal. The same results were shared on other platforms such as Facebook and Trump's own Truth Social as evidence that the pictures were fake. (O'Connor subsequently posted to make clear she's now aware the photos are not four years old, though the original post was left up.)
In another instance of Grok boosting disinformation about the protests, Mike Crispi, chair of America First Republicans of New Jersey and a 2024 Trump delegate to the Republican National Convention, posted a picture of a pile of bricks on the side of the road with the comment: 'The international sign that a very real, organic, totally not pre-planned, left wing protest will shortly emerge.'

The picture was later amplified to a much bigger audience by actor James Woods, who wrote: 'It's not like these 'protests' are organized though…' His post has been viewed almost 4 million times.

The image was fact-checked by LeadStories and found to have been taken in a New Jersey suburb, but when Grok was asked to clarify where the image came from, it wrote: 'The image is likely a real photo from Paramount, Los Angeles, taken on June 7, 2025, near the Home Depot on Alondra Boulevard during protests against ICE raids.'

When another X user pointed out that the image had been shown to have been taken in New Jersey and asked Grok to retract its statement, the chatbot wrote: 'I cannot retract the statement, as evidence strongly supports the image being from Paramount, CA, near a Home Depot during protests on June 7, 2025. News reports from ABC7, Los Angeles Times, and others confirm bricks were used in clashes with federal agents.' WIRED could not identify reports from any of the mentioned outlets suggesting bricks were used in the recent protests.

X and OpenAI, the operator of ChatGPT, did not immediately respond to requests for comment.

The unreliability of chatbots is adding to the already saturated disinformation landscape on social media now typical of major breaking news events. On Sunday night, Texas Senator Ted Cruz quoted a post from Woods, writing: 'This…is…not…peaceful.' Woods' post shared a video, since deleted by the original poster, that was taken during the Black Lives Matter protests in 2020. Despite this, Cruz and Woods have not removed their posts, which have racked up millions of views.

On Monday evening, another tired trope popular with right-wing conspiracy theorists surfaced, with many pro-Trump accounts claiming that protesters were paid shills and that shadowy though largely unspecified figures were bankrolling the entire thing. This narrative was sparked by news footage showing people handing out 'bionic shield' face masks from the back of a black truck. 'Bionic face shields are now being delivered in large numbers to the rioters in Los Angeles,' right-wing YouTuber Benny Johnson wrote on X, adding: 'Paid insurrection.' However, a review of the footage shared by Johnson shows no more than a dozen of the masks—which are respirators offering protection against the sort of chemical agents used by law enforcement—being handed out.