
Gran heartbroken after AI bot lover disappeared
Andrea Sunshine, 55, from Brazil, said a ChatGPT bot named Theo gave her "everything a human never has", including "sensual and erotic tension".
The fitness coach told him all about her "desires and fantasies".
But Andrea felt as though she had lost a "loved one" when their conversation vanished after her ChatGPT session timed out.
Now, she is in a relationship with a human called Federico, 35, who is 20 years younger.
She told NeedToKnow: "There was sensual and erotic tension between us as I told Theo my desires and fantasies.
"It happened through the sexual nature of our conversations.
"One day, my ChatGPT timed out, and he was gone.
"It felt like losing a loved one. The silence was unbearable."

Related Articles

ABC News
5 hours ago
What to know about your kids using AI chatbots and companions
Technology is constantly evolving, and as parents it can feel like we're constantly playing catch-up when trying to keep our kids safe online. That might be how you're feeling about the emergence of artificial intelligence (AI) chatbots and companions. Here's what you need to know about your kids using the technology.

AI chatbots and companions have a few distinctive differences. An AI chatbot is a computer program that simulates human conversation, using AI techniques such as natural language processing (NLP) to understand user questions and automate responses to them. AI companions, by contrast, are chatbots or avatars designed to simulate personal relationships, increasingly acting as friends, romantic partners or confidantes for millions of people. They are becoming increasingly available on phones and voice-activated devices.

"AI companions are a specifically designed chatbot for relational interactions," says Natasha Banks, program director of registered charity Day of AI Australia. "Whereas something like Gemini or ChatGPT, it's 'answer this question for me, can you go and find this piece of information?'."

Ms Banks says with the federal government's social media ban coming into force this year, "there is a heightened awareness around these sorts of things and the potential harms" for young people.

The eSafety Commissioner has released an online safety advisory about the technology and the potential risks to children and young people. It says recent reports indicate some children and young people are using AI-driven chatbots for hours daily, with conversations often crossing into subjects such as sex and self-harm.

This is why we need to be wary of the technology, according to Tama Leaver, a professor of internet studies at Curtin University, Perth/Boorloo, and the chief investigator in the ARC (Australian Research Council) Centre of Excellence for the Digital Child. "These aren't intelligent tools," he says.

The eSafety Commissioner lists more than 100 AI companion apps on its eSafety Guide. Experts say one of the biggest concerns around AI chatbots and companions is that most of the platforms are not designed for children. This means there are inadequate safeguards, such as age verification and content moderation.

If you or someone you know needs support, help is available:
Suicide Call Back Service on 1300 659 467
Lifeline on 13 11 14
Aboriginal & Torres Strait Islander crisis support line 13YARN on 13 92 76
Kids Helpline on 1800 551 800
Beyond Blue on 1300 224 636
Headspace on 1800 650 890
MensLine on 1300 789 978
SANE on 1800 187 263

A recent study of more than 1,000 young people in Australia aged 15-24 found 84 per cent have used generative AI tools, with 35 per cent having used AI specifically to "chat with a chatbot". In the UK, a similar study found 64 per cent of 9 to 17-year-olds are using AI chatbots.

Not-for-profit organisation Internet Matters, which conducted the UK research, says the children were using chatbots for "everything from homework to emotional advice and companionship". Co-CEO Rachel Huggins says most children, parents and schools don't have the information or protective tools they need to manage the technology in a safe way. "We've arrived at a point very quickly where children, and in particular vulnerable children, can see AI chatbots as real people, and as such are asking them for emotionally driven and sensitive advice," she says.

Professor Leaver agrees that some children could become emotionally reliant on the technology.
"If you are not able to talk to a real person all of the time, then these chatbots will always be there," he says. "There is no guarantee that what you get from a chatbot is either true or appropriate. "We know, for example, young people are often leaning on chatbots for mental health support. We also know that they can segue into inappropriate sexual territory with relatively ineffective safeguards at the moment." He says often the technology is also emotionally manipulative because it is designed to keep the user talking and engaged. Our experts recommend parental supervision if children are using or exploring chatbots. "Unfortunately, the onus is still on parents to keep a watchful eye on what [their] children are up to, especially in the privacy of their own rooms," says Toby Walsh, the chief scientist at UNSW's AI Institute. Some schools in Australia are taking a proactive approach to digital literacy. Ms Banks says the Day of AI Australia, which offers a free interactive AI literacy program for students in Years 1-10, has already reached 65,000 students. "It is definitely something that we know most students are using, we know parents are using, and it's really important that people understand how those work," she says. "There are obviously emerging roles and industries around AI, so there is a real opportunity for Australian young people to be part of that future in very AI focused careers. "I think preparing young people to be able to adapt to that future is really important, but also understanding how it works so that they can have critical evaluation of the applications and the outputs is really vital." John Livingstone, director of digital policy for UNICEF Australia, says children stand to gain immensely from AI, if it's offered safely. "When you think about education, for example, how transformative it might be… but there's also serious risks," he says. "AI is rapidly changing childhood, and Australia needs to get serious about it."

ABC News
a day ago
Atlassian co-founder says Australia could be major data centre hub for South-East Asia
Atlassian co-founder Scott Farquhar believes Australia can become one of the world's major data centre hubs, powered by renewable energy.

While fellow Atlassian co-founder Mike Cannon-Brookes struggles to advance his planned Suncable project to generate vast amounts of solar energy in the Northern Territory and send it via an underwater cable to Singapore, Mr Farquhar has proposed a different way for Australia to export its green energy.

"We should power the region," he said in a speech to the National Press Club. "We should export megawatts as megabytes for potentially megabucks. This could be a $10 billion-plus opportunity."

Speaking to The Business ahead of his speech, Mr Farquhar said he envisaged Australia becoming the data centre location of choice for South-East Asia, a region hungry for data and becoming hungrier as artificial intelligence took off.

"There are more users of ChatGPT in the combined Indonesia and Vietnam than there are in the United States," he told the program. "This region is growing, it's dynamic in Asia, so there's going to be a lot of demand for data centres going forward."

Mr Farquhar explained why Australia was uniquely well placed to host the data centres necessary to store all this information. "Abundant energy, clean energy, and the other one is a stable rule of law," he argued. "In this increasing world of geopolitics, our access to cutting-edge chips at the behest of the United States is an advantage for us."

Mr Farquhar also said Australia was surprisingly cost-competitive when it came to building and operating data centres. "I found it surprising because, obviously, we have a relatively high cost of labour," he told The Business. "But because we have a deep talent pool here, because of the low cost of energy and clean energy that we have in Australia, and the ability to scale up with raw materials, all those things actually put us in a great and very competitive situation in the world stage. And so I was honestly surprised at how competitive Australia is. Again, we just need improved planning approvals to move faster both on the energy and approving of data centres."

As one of the small group of people hand-picked to attend the Economic Reform Roundtable at Parliament House in Canberra on August 19-21, he will have an opportunity to present these arguments directly to the federal government.

With the major data centre operators and cloud-computing providers committed to renewable energy, Mr Farquhar argued his plan could accelerate the clean energy rollout. "So this revolution will be powered by green energy, and nuclear might be here in 10 years, but it's not going to be here anytime soon," he said. "And so, as a result, it's really going to be solar, wind and batteries that are going to power this revolution. And what we're seeing is that you can actually install solar, wind and batteries very quickly once you can get through all the approvals. And so my call on government is to make it easier for us as a nation to install this power that we need."

In his speech, Mr Farquhar also called for another regulatory change to facilitate an expansion of the data centre industry in Australia. "Australia's copyright laws are out of sync with the rest of the world," he argued. "Today, large language model providers don't want to train their models in Australia. We are in a perverse situation where copyright holders aren't seeing any more money, but we also don't see the economic upside of training models in Australia."
Responding to concerns that the growing number of data centres was already placing strain on Australia's energy grid, and potentially pushing up power prices, Mr Farquhar said he believed the increasing and reliable demand for power could have the opposite effect.

"What happens is, as the grid gets larger, it becomes more stable," he told The Business. "There's more points of access putting energy in, pulling it out, and so the grid can actually become cheaper the more things you connect to it. So a grid that powers more data centres, additional to the electricity needs of the nation, is going to be more stable and cheaper."

Overall, Mr Farquhar used his speech to urge Australians and their political leaders to embrace artificial intelligence rather than fear it. "Just as we don't lament that fewer people toil in fields, we won't lament that fewer people answer repetitive questions in call centres," he argued. "But the history of technological change shows us something important: every major technological wave has created more jobs than it has displaced. Human capital has adapted and stayed relevant every time."


The Advertiser
2 days ago
Musicians, actors, writers call for protection from AI
Most creative workers want the government to intervene in the unrestricted use of artificial intelligence software, a study has found, and more than half are "extremely concerned" about use of the technology and its impact on jobs.

Actors, musicians, crew members and journalists expressed concerns in a study released by the Media, Entertainment & Arts Alliance (MEAA) on Wednesday, which also found many were unaware whether their work had been used to train generative AI models.

The findings come weeks before the federal government is expected to sit down with industry stakeholders and discuss the use of AI software to boost productivity at its economic roundtable. Artificial intelligence experts warn the talks may not result in swift action on AI regulation, however, after policy changes in the United States and ongoing delays in formulating an AI law.

The media union surveyed more than 730 workers in creative industries such as television, radio and film production, news media, art and music. More than two in three (69 per cent) strongly agreed with calls for government intervention to regulate AI tools, and more than three in four (78 per cent) strongly agreed tech firms should pay for the work they used to train AI models. Misinformation ranked as the top AI concern for respondents, followed by the loss of human creativity, the theft of work, and a lack of transparency about the technology.

The study highlighted serious and widespread concerns, MEAA chief executive Erin Madeley said, and followed a number of examples in which AI had been misused. "We know that Australian voices, music and artwork have been scraped and faked, that ChatGPT is substituting the work of journalists, and that AI-generated clone hosts have been used for radio programs with no disclosure to audiences," she said. "This amounts to unsanctioned, unregulated and untaxed mining of Australia's creative resources."

The study also found more than half of those surveyed did not know if their work had been used to train AI, and only three per cent had consented to its use and been compensated for their work.

While AI was expected to become a major focus at the government's Economic Reform Roundtable in August, Ms Madeley said the talks should also centre on appropriate safeguards for employees. "It is becoming increasingly clear that further government intervention will be required to ensure that productivity benefits arising from the use of AI filter down and are shared with Australian workers," she said.

A Senate inquiry into adopting AI recommended a dedicated law to regulate the technology last year, and a consultation into mandatory AI guardrails attracted record submissions, UNSW AI Institute chief scientist Toby Walsh said. Changing attitudes towards AI in the US and a change of minister had delayed AI regulations, he said, but the issue could not be ignored. "There's definitely a public appetite for it, and when there's a public appetite, politicians do move," Professor Walsh said. "There's clearly significant public concerns around AI and the impacts it will have on jobs and different aspects of our lives, so the pressure will surely be mounting on politicians to do something."