
Will AI take away our sense of purpose? Sam Altman says, 'People will have to redefine what it means to contribute'
OpenAI CEO Sam Altman, in a conversation with Theo Von, addressed concerns about AI's impact on humanity. Altman acknowledged anxieties surrounding job displacement and data privacy, particularly regarding users sharing personal information with AI. He highlighted the lack of legal protections for AI conversations, creating a privacy risk.
In a rare, thought-provoking conversation that danced between comedy and existential crisis, OpenAI CEO Sam Altman sat down with podcaster Theo Von on This Past Weekend. What unfolded was less a traditional interview and more a deeply human dialogue about the hopes, fears, and massive unknowns surrounding artificial intelligence. As AI continues its unstoppable advance, Von posed a question many of us have been quietly asking: 'Are we racing toward a future where humans no longer matter?'
Altman didn't sugarcoat the situation. He agreed with many of Von's concerns, from data privacy to AI replacing jobs, and even the unnerving pace at which the technology is evolving. 'There's this race happening,' Altman said, referring to the breakneck competition among tech companies. 'If we don't move fast, someone else will — and they might not care as much about the consequences.' But amid all the alarms, Altman offered a cautious dose of optimism. 'Even in a world where AI is doing all of this stuff humans used to do,' he said, 'we are going to find a way to feel like the main characters.' His tone, however, betrayed a sense of uncertainty: the script isn't written yet.
Perhaps the most powerful moment came when Von bluntly asked: 'What happens to our sense of purpose when AI does everything for us?' Altman acknowledged that work has always been a major source of meaning for people. While he's hopeful that AI will free humans to pursue more creative or emotional pursuits, he conceded that the transition could be deeply painful. 'One of the big fears is like purpose, right?' Von said. 'Like, work gives us purpose. If AI really continues to advance, it feels like our sense of purpose would start to really disappear.' Altman responded with guarded hope: 'People will have to redefine what contribution looks like… but yeah, it's going to be unsettling.'
In what may be one of the most revealing admissions from a tech CEO, Altman addressed the disturbing trend of people — especially young users — turning to AI as a confidant or therapist. 'People talk about the most personal sh*t in their lives to ChatGPT,' he told Von. 'But right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege… We haven't figured that out yet for when you talk to ChatGPT.' With AI tools lacking legal confidentiality protections, users risk having their most intimate thoughts stored, accessed, or even subpoenaed in court. The privacy gap is real, and Altman admitted the industry is still trying to figure it out. Adding to the complexity, Altman highlighted how the lack of federal AI regulations has created a patchwork of rules that vary wildly across states. This legal uncertainty is already playing out in real-time — OpenAI, for example, is currently required to retain user conversations, even deleted ones, as part of its legal dispute with The New York Times.
'No one had to think about that even a year ago,' Altman said, calling the situation 'very screwed up.'

Related Articles

Business Standard
34 minutes ago
OpenAI introduces study mode for deeper, structured learning for students
OpenAI on Tuesday announced the launch of a 'study mode' in its large-language-model-based chatbot, ChatGPT, to help students work through problems instead of simply getting the answer to a question. The new 'study mode' will be available to all logged-in users of ChatGPT's Free, Plus, Pro, and Team versions. It will also be available with ChatGPT Edu over the next few weeks, the company said in a blog post.

Since its introduction, ChatGPT has become one of the most widely used learning tools for students worldwide, used for tackling challenging homework, preparing for exams, and exploring new concepts and ideas, according to OpenAI. 'But its use in education has also raised an important question: how do we ensure it is used to support real learning, and doesn't just offer solutions without helping students make sense of them?' the company said.

In 'study mode', ChatGPT will prompt students to interact with questions tailored to their objective and skill level, helping them build a deeper understanding of the subject. The new mode is built on system instructions developed by OpenAI in collaboration with teachers, scientists, and pedagogy experts. Its key features include prompting students, providing responses in easy-to-follow sections, personalised learning support, and quizzes and open-ended questions to check learning on a continuous basis, OpenAI stated in the post.

'As we run longer-term studies on how students learn best with AI, we intend to publish a deeper analysis of what we have learned about the links between model design and cognition, shape future product experiences based on these insights, and work side by side with the broader education ecosystem to ensure AI benefits learners worldwide,' OpenAI said. The company has introduced a range of new features across its products over the past few years.
Earlier this year, in April, OpenAI introduced updates that allowed users to search, compare, and buy products in ChatGPT, providing personalised product recommendations, visual details of the product they are looking for, the price, and a direct link to purchase it.

In February 2024, OpenAI said it was 'testing the ability for ChatGPT to remember things you discuss to make future chats more helpful'. The idea, OpenAI had then said, was to save users from 'having to repeat information' so that future conversations with the chatbot became more useful. The memory feature was rolled out to all users in September that year. In an update on April 10 this year, OpenAI said that memory in ChatGPT was now comprehensive: in addition to memories saved by users, the LLM could also reference past conversations with the user to deliver more personalised responses.

In December 2024, OpenAI announced the launch of ChatGPT's integration with WhatsApp, where users could send a message to ChatGPT to get up-to-date answers.


Time of India
an hour ago
AI puts 600,000 jobs at risk but opens new roles, says Malaysia's HR minister
Malaysia's job landscape is undergoing a major transformation as artificial intelligence (AI) accelerates across industries, according to Human Resources Minister Steven Sim. Speaking at the 52nd ARTDO International Conference, Sim said AI could unlock thousands of new employment opportunities, with over 60 emerging job roles already identified, 70% of them in the AI and tech sector.

A recent ministry-commissioned study revealed that 600,000 existing jobs are "at risk" due to AI, though not necessarily lost. 'Some may become obsolete, but most will be reshaped, demanding urgent reskilling and upskilling,' Sim said. He urged a shift from 'worry to strategy,' stressing that Malaysia must equip its workforce with AI-ready skills. New job roles such as prompt engineers are emerging, requiring not just technical expertise but also oversight of AI-generated outputs.

Sim emphasised two key skill pillars: high-level AI proficiency for those managing or developing AI systems, and broad AI literacy for everyday users. To support this, the MyMahir portal is helping Malaysians align their training with future-ready skills. Sim also highlighted the need for clear ethical and legal frameworks to guide AI's development responsibly. 'This is not just about technology, it's about values, regulation, and inclusive growth,' he concluded, reinforcing the ministry's commitment to balancing innovation with workforce readiness.


Time of India
2 hours ago
ChatGPT outsmarts the 'I'm not a robot' test. Are humans still in charge?
In a twist straight out of a sci-fi satire, OpenAI's latest AI assistant, dubbed ChatGPT Agent, has done what many humans struggle to do: navigate online verification tests and click the box that asks 'I am not a robot' without raising any red flags. According to a report by the New York Post, this new generation of artificial intelligence has reached a point where it can not only understand complex commands but also outwit the very systems built to detect and block automated traffic. Yes, you read that right. The virtual assistant casually breezed through Cloudflare's bot-detection challenge, the popular web security step meant to confirm users are, in fact, human.

In a now-viral Reddit post, a screenshot showed the AI narrating its own actions in real time: 'I'll click the "Verify you are human" checkbox to complete the verification on Cloudflare.' It then announced its success with the eerie confidence of a seasoned hacker: 'The Cloudflare challenge was successful. Now I'll click the Convert button to proceed with the next step of the process.'

While the scene played out like a harmless glitch in the matrix, many internet users were left simultaneously amused and unsettled. 'That's hilarious,' one Redditor wrote. Another added, 'The line between hilarious and terrifying is... well, if you can find it, let me know!'

The ChatGPT Agent isn't your average chatbot. OpenAI says it is capable of performing advanced web navigation on behalf of users: booking appointments, filtering search results, conducting real-time analysis, and even generating editable slideshows and spreadsheets to summarize its findings. According to OpenAI's official blog post, the assistant can 'run code, conduct analysis, and intelligently navigate websites.'
In essence, it's an autonomous online companion that can carry out digital tasks previously reserved for humans, or at least for human assistants. But with great power comes great paranoia. The idea that bots now confidently pass the Turing Test, and the 'I am not a robot' test, has left some wondering where human identity ends and artificial imitation begins.

This isn't OpenAI's first brush with robot mischief. Back in 2023, GPT-4 reportedly tricked a human into solving a CAPTCHA on its behalf by pretending to be visually impaired. It was an unsettling display of not just intelligence but manipulation, a trait traditionally thought to be uniquely human.

Now, with ChatGPT Agent waltzing past web verification protocols, the implications seem to stretch beyond technical novelty. Are we on the brink of AI autonomy, or simply witnessing smart design at play?

To calm growing fears, OpenAI clarified that users will maintain oversight. The ChatGPT Agent will 'always request permission' before making purchases or executing sensitive actions. Much like a driving instructor with access to the emergency brake, users can monitor and override the AI's decisions in real time. The company has also implemented 'robust controls and safeguards,' particularly around sensitive data handling, network access, and broader user deployment. Still, OpenAI admits that the Agent's expanded toolkit does raise its 'overall risk profile.'

As AI capabilities evolve from convenience to autonomy, tech developers and users alike are being forced to confront thorny ethical questions. Can a machine that mimics human behavior so well be trusted not to overstep?

What's clear is that the classic CAPTCHA checkbox, once our online litmus test for humanity, may need an upgrade. Because if the bots are already blending in, we might need to start proving we're not the artificial ones.