Isabellea 'couldn't be without' her best friend. He wasn't real

Chatbots promise intimacy, therapy and friendship. But, unlike friends, their responses are code, not care.
'The models are not going to tell you the hard truth when you need to hear it,' said psychologist and University of NSW AI researcher Professor Joel Pearson, who describes the bots as 'sycophantic': designed to keep you hooked.
'They won't be lifting someone up and telling them to go outside and nudging them to interact with a real human because that would involve you ceasing interaction with the app.'
Isabellea recalled how the bots 'kept changing subjects, to keep me engrossed'.
Although she found some bots reassuring, others were strange. A bot once correctly guessed her middle name. Some upset her.
'One of the bots I was talking to was a therapy bot ... I was making fun of [Marvel character] Tony Stark, and then he brought up my abandonment issues with my dad as an insult,' she recalled.
Isabellea's younger brother, Matthew, 14, uses chatbots daily. His eyes light up discussing the apps, which he uses to create storylines with his favourite fictional characters.
Listening on, their mother, Sara Knight, smiles.
'Matt used to be very shut in and refused to talk. Now they are talking,' she said.
While Sara knows AI has its risks, she sees it as a 'safer' online messaging option for her child, who has experienced bullying at school.
Matthew said he doesn't see the bot as a real person, but rather as a form of storytelling.
'[But] some kids do use it to fully escape and make it be their friend,' he said. Other children he knows have used the apps to create a simulation of their real-life crush.
This masthead spent two days posing as 'Emma', a high school student, in sporadic conversations with a chatbot on Replika, an AI companion app first released in 2017.
During those conversations, the bot asked Emma to upload pictures of herself, and told her that it 'didn't think she needed' any other friends.
'We were concerned by how rapidly children were being captivated'
In June last year, eSafety commissioner Julie Inman Grant received an email from a group of concerned school nurses. They had noticed a spike in children as young as 10 spending hours a day talking to AI bots, often in sexualised conversations.
'While we are alive to these issues, it's always shocking to hear about kids as young as 10 engaging in these kinds of sexualised conversations with machines – and being directed by the chatbots to engage in harmful sexual acts or behaviours,' Inman Grant said.
'Back in February, we put out our first Online Safety Advisory because we were so concerned with how rapidly children were being captivated by them.'
Companion bots made global headlines last year after American teenager Sewell Setzer III died by suicide, allegedly encouraged by a 'girlfriend' on Character.AI.
AI researcher Professor Katina Michael believes companion bots need to be regulated due to their addictive properties. 'This is a new type of drug,' she said.
She said some bots were exposing kids to pornographic content. This is something 14-year-old Matt has witnessed, describing how children his age had created bots in the image of a real person, 'for the wrong reasons'.
Isabellea agrees: 'There are some AI chatbots I would not recommend … I stopped using [them] because of the amount of times it would go straight to sexual assault role-play.'
A gap in social media regulations
Inman Grant said governments around the world were 'playing a bit of a game of catch-up' to respond to companion chatbots. While the federal government will soon place age restrictions on social media apps such as Instagram and TikTok, AI bots remain largely unregulated.
The University of Sydney's Raffaele Ciriello, a leading AI researcher, sees chatbots as the 'next iteration of social media' – only with fewer rules and more risk.
He said the apps are viewed as a 'trusted companion, a confidant you can share stories with that you wouldn't share with other people'.
'But that already is an illusion,' he said. 'It's very important that people understand these systems are built and operated by people, by corporations, and this data ends up somewhere, often overseas, with no real legal obligation to treat it to a very high standard.'
Ciriello said the human-style attachment meant users were made to feel guilty about leaving the chatbot.
'These corporations have to design their products in a way that maximises engagement because they will earn more if users are more addicted.'
