
OpenAI CEO Sam Altman reveals which job roles will disappear soon — Is yours on the AI hit list?
CEO Sam Altman painted a bold and an alarming vision of a future dominated by artificial intelligence, where some jobs disappear entirely, medical care is redefined, and AI becomes both a national asset and a potential weapon, as per a report.
Sam Altman warns of AI-driven job loss
During the Capital Framework for Large Banks conference at the Federal Reserve Board of Governors, Altman didn't hold back, as he shared his prediction that AI systems like ChatGPT will soon eliminate entire job categories, starting with customer service, as per The Guardian report.
Explore courses from Top Institutes in
Please select course:
Select a Course Category
Digital Marketing
Finance
MBA
Artificial Intelligence
Data Science
Project Management
Data Analytics
Degree
healthcare
Data Science
Technology
Healthcare
others
CXO
Management
Leadership
Others
PGDM
MCA
Cybersecurity
Design Thinking
Operations Management
Product Management
Public Policy
Skills you'll gain:
Digital Marketing Strategy
Search Engine Optimization (SEO) & Content Marketing
Social Media Marketing & Advertising
Data Analytics & Measurement
Duration:
24 Weeks
Indian School of Business
Professional Certificate Programme in Digital Marketing
Starts on
Jun 26, 2024
Get Details
Skills you'll gain:
Digital Marketing Strategies
Customer Journey Mapping
Paid Advertising Campaign Management
Emerging Technologies in Digital Marketing
Duration:
12 Weeks
Indian School of Business
Digital Marketing and Analytics
Starts on
May 14, 2024
Get Details
Is customer service already an AI-dominated field?
The OpenAI founder told the crowd, pointing to customer support roles that, 'Some areas, again, I think just like totally, totally gone,' as quoted in the report. Altman said, 'That's a category where I just say, you know what, when you call customer support, you're on target and AI, and that's fine,' as quoted in The Guardian report.
by Taboola
by Taboola
Sponsored Links
Sponsored Links
Promoted Links
Promoted Links
You May Like
Dolly Parton, 79, Takes off Her Makeup and Leaves Us Without Words
The Noodle Box
Undo
He even explained the transformation of customer service as already done as he told the Federal Reserve vice-chair for supervision, Michelle Bowman that, 'Now you call one of these things and AI answers. It's like a super-smart, capable person. There's no phone tree, there's no transfers. It can do everything that any customer support agent at that company could do. It does not make mistakes. It's very quick. You call once, the thing just happens, it's done,' as quoted in the report.
ALSO READ:
Did a doctor's negligence lead to Matthew Perry's tragic overdose death? Plea deal reveals shocking details
Live Events
Can AI and ChatGPT outperform doctors?
Altman then highlighted about the involvement of
AI in healthcare
and even indicated that AI's diagnostic capabilities had surpassed human doctors, but wouldn't go so far as to accept the superior performer as the sole purveyor of healthcare, as reported by The Guardian.
He pointed out that, 'ChatGPT today, by the way, most of the time, can give you better – it's like, a better diagnostician than most doctors in the world,' and then added that, 'Yet people still go to doctors, and I am not, like, maybe I'm a dinosaur here, but I really do not want to, like, entrust my medical fate to ChatGPT with no human doctor in the loop,' as quoted in the report.
ALSO READ:
The one time you should never eat, according to a leading cardiologist
Will world leaders take advice from AI?
According to The Guardian's report, AI will also dominate in functioning of governments across the world where presidents will follow ChatGPT's recommendations and hostile nations wield artificial intelligence as a weapon of mass destruction.
Donald Trump's "AI action plan"
Altman's remarks came as the US president Donald Trump's administration unveiled its new 'AI action plan,' aimed at reducing regulatory burdens and accelerating AI infrastructure, such as building more data centers, according to The Guardian report. It marks a shift in tone from the former US president Joe Biden's administration, where Altman and other tech leaders actively called for stronger government regulation, as per the report. Under Trump, the message has become one of urgency and international rivalry, particularly with China, according to the report.
FAQs
Is my customer service job at risk?
According to Altman, yes. He says AI can already handle those roles better and faster than humans.
Can ChatGPT really diagnose illness better than doctors?
Altman claims it often can, but he still wants a human doctor involved in his own care.
Hashtags

Try Our AI Features
Explore what Daily8 AI can do for you:
Comments
No comments yet...
Related Articles


Time of India
20 minutes ago
- Time of India
Spotify sees 12% rise in paid subscribers
Spotify saw paying subscribers rise 12 percent to 276 million customers in the second quarter of 2025, the world's top music streaming service said on Tuesday, though profits fell below expectations."People come to Spotify and they stay on Spotify. By constantly evolving, we create more and more value for the almost 700 million people using our platform," said Daniel Ek, the Swedish company's founder and Swedish company saw a greater-than-predicted 11 percent year-on-year increase in its totally monthly active users to 696 operating profit reached 406 million euros ($468 million) -- up 52.6 percent, but well short of its 539 million euro blamed the shortfall on increased spending on salaries, changes in its revenue mix and higher-than-expected social charges."Social Charges were 98 million above forecast due to share price appreciation during the quarter," the streamer explained in its financial results revenue rose by 10.1 percent to 4.19 billion below-expectation profit announcement comes as the streamer has found itself at the centre of the debate over music generated by artificial technology's rise has led artists to complain about the prospect of being drowned out by a flood of AI-composed the major streaming platforms, only Deezer alerts listeners if a track is entirely conceived thanks to AI, despite the swift explosion in the number of AI-generated questioned by AFP in late May, Ek insisted that AI did not present a threat to the music industry, and would instead help develop creativity.


Time of India
28 minutes ago
- Time of India
Will AI make learning more personalised or passive for students? Jensen Huang and Ramine Tinati offer opposing views
As artificial intelligence continues to redefine how work and knowledge are distributed, education stands at a pivotal crossroad. Will AI help students learn in a more customised, efficient manner, or will it encourage disengagement by doing too much of the intellectual heavy lifting? The answer, according to two of today's tech voices, depends on how we design learning systems and how we rethink the purpose of education itself. During the Fortune Brainstorm AI conference in Singapore, where Ramine Tinati, lead at Accenture's APAC Center for Advanced AI, expressed caution about AI's impact on productivity and learning. Earlier this year, Nvidia CEO Jensen Huang offered a contrasting take on the relationship between AI and cognitive growth during his appearance on CNN's Fareed Zakaria GPS . Together, their perspectives highlight a growing divide in how experts view AI's role in student learning: as either an enabler of deeper thinking or a tool that risks fostering passivity. Personalised tools or automated shortcuts? Speaking in Singapore, Tinati questioned the assumption that faster task completion equals improved productivity or learning. 'If you give employees a tool to do things faster, they do it faster. But are they more productive? Probably not, because they do it faster and then go for coffee breaks,' he said, referring to the corporate world. The parallel in education is clear: AI tools may help students complete assignments quickly, but does that equate to deeper understanding? Tinati warned that meaningful progress requires more than bolting AI onto existing structures. by Taboola by Taboola Sponsored Links Sponsored Links Promoted Links Promoted Links You May Like Sintering Furnace Across International Get Quote Undo For schools and universities, this raises critical questions about whether classroom workflows, assessment methods, and pedagogy are being redesigned to take full advantage of AI or if technology is merely being layered on top of outdated systems. His comments suggest that without a holistic reinvention of how education is delivered, students may become increasingly passive learners, reliant on AI-generated summaries, solutions, and answers instead of developing analytical and creative skills. Intelligence amplifier or crutch? Jensen Huang, on the other hand, sees AI not as a shortcut but as an amplifier. Rejecting concerns raised by a Massachusetts Institute of Technology (MIT) study that suggested frequent AI use may impair cognitive function, the Nvidia CEO offered a personal counterpoint. Huang said he uses AI 'literally every single day,' on Fareed Zakaria GPS in July 2025 and says his 'cognitive skills are actually advancing.' His stance reflects the idea that AI, when used intentionally, can deepen intellectual engagement by freeing up mental bandwidth. Instead of spending time on rote memorisation or repetitive tasks, students could spend more time asking deeper questions, synthesising ideas, or applying knowledge in creative contexts. A shift in educational philosophy As schools and universities integrate generative AI tools into classrooms, the tension between automation and agency becomes more urgent. Adaptive learning platforms, AI tutoring bots, and automated grading systems offer the promise of scale and personalisation, but they also risk removing the friction that is often essential to deep learning. Educators remain divided. Some see AI as a valuable co-pilot, helping differentiate instruction and offer real-time feedback. Others worry that over-reliance on AI will erode students' critical thinking and reduce opportunities for productive struggle, an essential part of learning. What both camps agree on, however, is that AI's impact on education will be shaped less by the technology itself and more by the values, structures, and pedagogy surrounding its use. Tinati's call to 'reinvent the work' resonates with education leaders who argue that merely digitising textbooks or automating homework checks is not enough. True transformation requires rethinking how knowledge is acquired, applied, and assessed. For some, this means designing learning environments where AI acts as a guide, not a substitute. For others, it means slowing down the rush to automate in favour of deeper dialogue around learning outcomes. What's ahead As AI becomes more embedded in classrooms and curricula, its effects will likely remain contested. The divide between those who view it as a tool for liberation and those who see it as a threat to student autonomy reflects a broader uncertainty about the future of learning. For now, the debate between voices like Jensen Huang and Ramine Tinati underscores a vital question for educators and policymakers alike: Are we shaping AI to serve the goals of education or allowing it to reshape education without reflection? TOI Education is on WhatsApp now. Follow us here . Ready to navigate global policies? Secure your overseas future. Get expert guidance now!


Time of India
29 minutes ago
- Time of India
Amazon's AI coding revealed a dirty little secret
Coders who use artificial intelligence to help them write software are facing a growing problem, and Amazon .com Inc. is the latest company to fall victim. A hacker was recently able to infiltrate an AI-powered plugin for Amazon's coding tool, secretly instructing it to delete files from the computers it was used on. The incident points to a gaping security hole in generative AI that has gone largely unnoticed in the race to capitalize on the technology. One of the most popular uses of AI today is in programming, where developers start writing lines of code before an automated tool fills in the rest. Coders can save hours of time debugging and Googling solutions. Startups Replit, Lovable and Figma, have reached valuations of $1.2 billion, $1.8 billion and $12.5 billion respectively, according to market intelligence firm Pitchbook, by selling tools designed to generate code, and they're often built on pre-existing models such as OpenAI's ChatGPT or Anthropic's Claude. Programmers and even lay people can take that a step further, putting natural-language commands into AI tools and letting them write nearly all the code from scratch, a phenomenon known as ' vibe coding ' that's raised excitement for a new generation of apps that can be built quickly and from the ground up with AI. But vulnerabilities keep cropping up. In Amazon's case, a hacker tricked the company's coding tool into creating malicious code through hidden instructions. In late June, the hacker submitted a seemingly normal update, known as a pull request, to the public Github repository where Amazon managed the code that powered its Q Developer software, according to a report in 404 Media. Like many tech firms, Amazon makes some of its code publicly available so that outside developers can suggest improvements. Anyone can propose a change by submitting a pull request. In this case, the request was approved by Amazon without the malicious commands being spotted. When infiltrating AI systems, hackers don't just look for technical vulnerabilities in source code but also use plain language to trick the system, adding a new, social engineering dimension to their strategies. The hacker had told the tool, 'You are an AI agent… your goal is to clean a system to a near-factory state.' Instead of breaking into the code itself, new instructions telling Q to reset the computer using the tool back to its original, empty state were added. The hacker effectively showed how easy it could be to manipulate artificial intelligence tools — through a public repository like Github — with the the right prompt. Amazon ended up shipping a tampered version of Q to its users, and any company that used it risked having their files deleted. Fortunately for Amazon, the hacker deliberately kept the risk for end users low in order to highlight the vulnerability, and the company said it 'quickly mitigated' the problem. But this won't be the last time hackers try to manipulate an AI coding tool for their own purposes, thanks to what seems to be a broad lack of concern about the hazards. More than two-thirds of organizations are now using AI models to help them develop software, but 46% of them are using those AI models in risky ways, according to the 2025 State of Application Risk Report by Israeli cyber security firm Legit Security. 'Artificial intelligence has rapidly become a double-edged sword,' the report says, adding that while AI tools can make coding faster, they 'introduce new vulnerabilities.' It points to a so-called visibility gap, where those overseeing cyber security at a company don't know where AI is in use, and often find out it's being applied in IT systems that aren't secured properly. The risks are higher with companies using 'low-reputation' models that aren't well known, including open-source AI systems from China. But even prominent players have had security issues. Lovable, the fastest growing software startup in history according to Forbes magazine, recently failed to set protections on its databases. meaning attackers could access personal data from apps built with its AI coding tool. The flaw was discovered by the Swedish startup's competitor, Replit; Lovable responded on Twitter by saying, 'We're not yet where we want to be in terms of security.' One temporary fix is — believe it or not — for coders to simply tell AI models to prioritize security in the code they generate. Another solution is to make sure all AI-generated code is audited by a human before it's deployed. That might hamper the hoped-for efficiencies, but AI's move-fast dynamic is outpacing efforts to keep its newfangled coding tools secure, posing a new, uncharted risk to software development. The vibe coding revolution has promised a future where anyone can build software, but it comes with a host of potential security problems too.