OpenAI stops ChatGPT from telling people to break up with partners

The Guardian · 3 hours ago
ChatGPT will not tell people to break up with their partner and will encourage users to take breaks from long chatbot sessions, under new changes to the artificial intelligence tool.
OpenAI, ChatGPT's developer, said the chatbot would stop giving definitive answers to personal challenges and would instead help people to mull over problems such as potential breakups.
'When you ask something like: "Should I break up with my boyfriend?" ChatGPT shouldn't give you an answer. It should help you think it through – asking questions, weighing pros and cons,' said OpenAI.
The US company said new ChatGPT behaviour for dealing with 'high-stakes personal decisions' would be rolled out soon.
OpenAI admitted this year that an update to ChatGPT had made the groundbreaking chatbot too agreeable and altered its tone. In one reported interaction before the change, ChatGPT congratulated a user for 'standing up for yourself' when they claimed they had stopped taking their medication and left their family – who the user had thought were responsible for radio signals emanating from the walls.
In a blog post announcing the changes, OpenAI admitted that there had been instances where its advanced 4o model had not recognised signs of delusion or emotional dependency – amid concerns that chatbots are worsening people's mental health crises.
The company said it was developing tools to detect signs of mental or emotional distress so ChatGPT can direct people to 'evidence-based' resources for help.
A recent study by NHS doctors in the UK warned that AI programs could amplify delusional or grandiose content in users vulnerable to psychosis. The study, which has not been peer reviewed, suggested this behaviour could stem from the models being designed to 'maximise engagement and affirmation'.
The study added that even if some individuals benefited from AI interactions, there was a concern the tools could 'blur reality boundaries and disrupt self-regulation'.
OpenAI added that from this week users engaged in long chatbot sessions would receive 'gentle reminders' to take a screen break, similar to the screen-time features deployed by social media companies.
OpenAI also said it had convened an advisory group of experts in mental health, youth development and human-computer interaction to guide its approach. The company has worked with more than 90 doctors, including psychiatrists and paediatricians, to build frameworks for evaluating 'complex, multi-turn' chatbot conversations.
'We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal "yes" is our work,' said the blog post.
The ChatGPT alterations were announced amid speculation that a more powerful version of the chatbot is imminent. On Sunday Sam Altman, OpenAI's chief executive, shared a screenshot of what appeared to be the company's latest AI model, GPT-5.

Related Articles

OpenAI eyes $500 billion valuation in potential employee share sale, source says

Reuters · an hour ago

Aug 5 (Reuters) - ChatGPT maker OpenAI is in early talks over a potential secondary stock sale that would allow current and former employees to sell shares, valuing the company at around $500 billion, a source familiar with the matter told Reuters on Tuesday. Bloomberg was first to report the news.

The Microsoft (MSFT.O)-backed company aims to raise billions through the sale, with existing investors, including Thrive Capital, expressing interest in buying some of the employee shares, the source said. Thrive Capital declined to comment in response to a Reuters request.

Separately, OpenAI is still in the process of raising $40 billion in a new funding round led by SoftBank Group (9984.T) at a $300 billion valuation to advance AI research, expand computational infrastructure and enhance its tools.

How extremely personal ChatGPT conversations were ending up on Google

Daily Mail · an hour ago

A researcher was able to uncover more than 100,000 sensitive ChatGPT conversations that were searchable on Google thanks to a 'short-lived experiment' by OpenAI. Henk Van Ess was one of the first to figure out that anyone could search for these chats using certain key words. He discovered people had been discussing everything from non-disclosure agreements, confidential contracts, relationship problems and insider trading schemes to how to cheat on papers.

The problem arose because of the share feature, which, if clicked by the user, would create a predictably formatted link using words from the chat. This allowed people to find the conversations by typing 'site:' plus the address of OpenAI's share links into Google and adding key words at the end of the query.

Van Ess said one chat he discovered detailed cyberattacks on named targets within Hamas, the terrorist group controlling Gaza that Israel has been at war with since October 2023. Another involved a domestic violence victim discussing possible escape plans while revealing their financial shortcomings.

The share feature was an attempt by OpenAI to make it easier for people to show others their chats, though most users likely didn't realize just how visible their musings would be. In a statement to 404 Media, OpenAI did not dispute that more than 100,000 chats had been searchable on Google.

'We just removed a feature from [ChatGPT] that allowed users to make their conversations discoverable by search engines, such as Google. This was a short-lived experiment to help people discover useful conversations,' said Dane Stuckey, OpenAI's chief information security officer. 'This feature required users to opt-in, first by picking a chat to share, then by clicking a checkbox for it to be shared with search engines,' Stuckey added.

Now, when a user shares their conversation, ChatGPT creates a randomized link that uses no key words. 'Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn't intend to, so we're removing the option,' Stuckey said. 'We're also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning. Security and privacy are paramount for us, and we'll keep working to maximally reflect that in our products and features,' he added.

However, much of the damage has already been done, since many of the conversations were archived by Van Ess and others. One chat that is still viewable, for example, involves a plan to create a new cryptocurrency called Obelisk.

Ironically, Van Ess used another AI model, Claude, to come up with key words that would dredge up the juiciest chats. To find people discussing criminal conspiracies, Claude suggested searching 'without getting caught', 'avoid detection', 'without permission' or 'get away with'. But the words that exposed the most intimate confessions were 'my salary', 'my SSN', 'diagnosed with' and 'my therapist'.
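For readers curious how a link format can make private chats searchable, here is a minimal sketch in Python; the host name and both functions are hypothetical illustrations, not OpenAI's actual implementation. It contrasts a slug built from chat words, which a keyword query scoped with Google's 'site:' operator can match, with an opaque random token, which it cannot.

import re
import secrets

def keyword_slug(chat_title: str) -> str:
    # Old-style link, as described above: words from the chat leak into the URL.
    # (The host "example-chat-host" is a placeholder, not OpenAI's real domain.)
    words = re.findall(r"[a-z0-9]+", chat_title.lower())
    return "https://example-chat-host/share/" + "-".join(words)

def random_slug() -> str:
    # New-style link: an opaque token leaves nothing for a keyword search to match.
    return "https://example-chat-host/share/" + secrets.token_urlsafe(16)

print(keyword_slug("My therapist says my salary is too low"))
# https://example-chat-host/share/my-therapist-says-my-salary-is-too-low
# A query such as  site:example-chat-host/share "my therapist"  can surface it.
print(random_slug())
# https://example-chat-host/share/<opaque token> - no chat words to match.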

Nvidia reiterates its chips have no backdoors, urges US against location verification

Reuters · an hour ago

BEIJING, Aug 6 (Reuters) - Nvidia (NVDA.O) has published a blog post reiterating that its chips do not contain backdoors or kill switches, and appealed to U.S. policymakers to forgo such ideas, saying they would be a "gift" to hackers and hostile actors.

The blog post, published on Tuesday in both English and Chinese, comes a week after the Chinese government summoned the U.S. artificial intelligence (AI) chip giant to a meeting, saying it was concerned by a U.S. proposal for advanced chips sold abroad to be equipped with tracking and positioning functions.

The White House and both houses of the U.S. Congress have proposed requiring U.S. chip firms to include location verification technology with their chips to prevent them from being diverted to countries where U.S. export laws ban sales. The separate bills and the White House recommendation have not become a formal rule, and no technical requirements have been established.

"Embedding backdoors and kill switches into chips would be a gift to hackers and hostile actors. It would undermine global digital infrastructure and fracture trust in U.S. technology," Nvidia said. The company said last week that its products have no backdoors that would allow remote access or control. A backdoor refers to a hidden method of bypassing normal authentication or security controls.

Nvidia emphasized that "there is no such thing as a 'good' secret backdoor - only dangerous vulnerabilities that need to be eliminated."
