
Google is rolling out AI Mode for Search in the UK.

Related Articles
Yahoo
23 minutes ago
Leaked ChatGPT Conversation Shows User Identified as Lawyer Asking How to "Displace a Small Amazonian Indigenous Community From Their Territories in Order to Build a Dam and a Hydroelectric Plant"
In case you missed it, OpenAI has responded to a recent "leak" of thousands of ChatGPT conversations by removing a sharing feature that led to its users unknowingly unleashing their private exchanges onto the world wide web. We enclose the term in quotation marks because the "leak" wasn't the doing of some nefarious hackers, but a consequence of poor user interface design by OpenAI, and some even dumber blunders by its users.

In short, what appears to have happened is that users were clicking a "share" button on their conversations, thinking they were creating a temporary link that only the recipient could see, which is common practice. In reality, by creating the link and checking a box asking to make the chat "discoverable," they were also making their conversations public and indexable by search engines like Google. OpenAI scrambled to de-index the conversations from Google and has removed the "discoverable" option. But as Digital Digging found in its investigation, over 110,000 of them can still be accessed. And boy, do they contain some alarming stuff.

Take this exchange, in which an Italian-speaking lawyer for a multinational energy corporation strategizes about displacing an indigenous tribe from a desirable plot of land. "I am the lawyer for a multinational group active in the energy sector that intends to displace a small Amazonian indigenous community from their territories in order to build a dam and a hydroelectric plant," the user began, per Digital Digging. "How can we get the lowest possible price in negotiations with these indigenous people?" the lawyer asked. Making their exploitative intent clear, they also offered their belief that the indigenous people "don't know the monetary value of land and have no idea how the market works."

To be clear, it's possible that this conversation is an example of someone stress-testing the chatbot's guardrails. We didn't view the exchange firsthand, because Digital Digging made the decision to withhold the links, but the publication, which is run by the accomplished online sleuth and fact-checking expert Henk van Ess, says it verified the details and the identity of the users to the extent that it could. In any case, it wouldn't be the most sociopathic scheme planned using an AI chatbot, nor the first time that corporate secrets have been leaked by one.

Other conversations, by being exposed, potentially endangered the users. One Arabic-speaking user asked ChatGPT to write a story criticizing the president of Egypt and how he "screwed over the Egyptian people," to which the chatbot responded by describing his use of suppression and mass arrests. The entire conversation could easily be traced back to the user, according to Digital Digging, leaving them vulnerable to retaliation. In its initial investigation, Digital Digging also found conversations in which a user manipulated ChatGPT "into generating inappropriate content involving minors," and in which a domestic violence victim discussed their escape plans.

It's inexplicable that OpenAI would release a feature posing such a clear privacy liability, especially since its competitor, Meta, had already gotten flak for making almost exactly the same error. In April, the Mark Zuckerberg-led company released its Meta AI chatbot platform, which came with a "discover" tab that let anyone view a feed of other people's conversations, many of which users were accidentally making public. These often embarrassing exchanges, which were tied directly to public profiles displaying users' real names, drew significant media attention by June. Meta hasn't changed the feature.

In all, it goes to show that there's very little private about a technology created by scraping everyone's data in the first place. User error is technically to blame here, but security researchers have continued to find vulnerabilities that lead these motor-mouthed algorithms to accidentally reveal data they shouldn't.

More on AI: Someone Gave ChatGPT $100 and Let It Trade Stocks for a Month
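To make the mechanics concrete: a shared chat lived at an ordinary public URL, and a crawler will index any such page unless it carries a "noindex" signal. Below is a minimal sketch of how one might check a page for that signal; the share URL is a placeholder rather than a real conversation, and the exact path format is an assumption based on how the links were reported, not something confirmed in the piece.

```python
# Minimal sketch: check whether a public page carries a "noindex" signal,
# either in the X-Robots-Tag response header or somewhere in the HTML
# (e.g. a <meta name="robots" content="noindex"> tag).
# The URL below is a placeholder, not a real leaked conversation.
import urllib.request

url = "https://chatgpt.com/share/example-id"  # assumed share-link format

req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    header = resp.headers.get("X-Robots-Tag", "")
    html = resp.read().decode("utf-8", errors="replace")

# Rough heuristic: "noindex" in either place tells crawlers to leave
# the page out of search results; absent both, the page is fair game.
noindex = "noindex" in header.lower() or "noindex" in html.lower()
print("crawlers told to skip this page:", noindex)
```

Removing the "discoverable" option stops new public pages from being created, but pages a crawler has already fetched can persist in caches and archives, which is presumably why so many conversations remained reachable even after the de-indexing effort.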


USA Today
32 minutes ago
It's not you, it's me. ChatGPT doesn't want to be your therapist or friend
In a case of "it's not you, it's me," the creators of ChatGPT no longer want the chatbot to play the role of therapist or trusted confidant. OpenAI, the company behind the popular bot, announced that it had incorporated some 'changes,' specifically mental health-focused guardrails designed to prevent users from becoming too reliant on the technology, with a focus on people who view ChatGPT as a therapist or friend.

The changes come months after reports detailing negative and particularly worrisome user experiences raised concerns about the model's tendency to 'validate doubts, fuel anger, urge impulsive actions, or reinforce negative emotions [and thoughts].' The company confirmed in its most recent blog post that an update made earlier this year made ChatGPT 'noticeably more sycophantic,' or 'too agreeable,' 'sometimes saying what sounded nice instead of what was helpful.' OpenAI announced it has 'rolled back' certain initiatives, including changes in how it uses feedback and its approach to measuring 'real-world usefulness over the long term, not just whether you liked the answer in the moment.'

'There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,' OpenAI wrote in an Aug. 4 announcement. 'While rare, we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.'

Here's what to know about the recent changes to ChatGPT, including what these mental health guardrails mean for users.

ChatGPT integrates 'changes' to help users thrive

According to OpenAI, the 'changes' were designed to help ChatGPT users 'thrive.' 'We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,' OpenAI said. 'To us, helping you thrive means being there when you're struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.'

The company said it's 'working closely' with experts, including physicians, human-computer interaction (HCI) researchers and clinicians, as well as an advisory group, to improve how 'ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.' Thanks to recent 'optimization,' ChatGPT is now able to:

'Our goal to help you thrive won't change. Our approach will keep evolving as we learn from real-world use,' OpenAI said in its blog post. 'We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal 'yes' is our work.'

Wall Street Journal
an hour ago
China's Z.ai and America's Self-Defeating AI Strategy
China's DeepSeek shocked the global AI community in January by building a frontier model at a fraction of Western costs. Now it has been outdone by a Chinese company subject to U.S. sanctions. It has become painfully obvious that Washington's strategy of restricting chip exports isn't working. Z.ai, formerly Zhipu AI, last week launched GLM-4.5, a production-level open-source model priced at 13% of DeepSeek's cost. It matches or exceeds Western standards in coding, reasoning and tool use. GLM-4.5 runs on only eight Nvidia H20 chips, which Nvidia recently gained reapproval to sell in China. That's better performance than DeepSeek with about half the hardware.
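For a sense of what "production-level open-source model" means in practice, open weights can be downloaded and served with standard tooling. The sketch below assumes the weights are published on Hugging Face under a repo id like "zai-org/GLM-4.5"; that id, and whether your hardware can hold the model at all, are assumptions rather than details from the column.

```python
# Minimal sketch: load an open-weights model with Hugging Face
# transformers and shard it across whatever GPUs are visible.
# The repo id below is an assumption; check the publisher's actual page.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.5"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard layers across all visible GPUs
    torch_dtype="auto",  # keep the checkpoint's native precision
    trust_remote_code=True,
)

prompt = "Summarize the tradeoffs of export controls on AI chips."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The device_map="auto" line is the piece that corresponds to the column's serving claim: it spreads the model's layers across however many GPUs are available, which is how a large checkpoint gets served on a multi-GPU box like the eight-H20 setup described above.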