Escaped the AI takeover? It might still get you fired, and your boss may let ChatGPT decide

Economic Times · 19 hours ago
Synopsis
Artificial intelligence isn't just replacing jobs; it's deciding who keeps them. A startling new survey shows that employers are using chatbots like ChatGPT to make critical HR decisions, from raises to terminations. Experts warn that sycophancy, bias reinforcement, and hallucinated responses may be guiding outcomes, raising urgent ethical questions about the future of workplace automation.
(Image: iStock) A recent survey reveals that 66% of managers use AI, including ChatGPT, to help make layoff decisions, with nearly 1 in 5 letting the chatbot have the final say.

In the ever-expanding world of artificial intelligence, the fear that machines might one day replace human jobs is no longer just science fiction; it's becoming a boardroom reality. But while most experts still argue that AI isn't directly taking jobs, a troubling new report reveals it's quietly making decisions that cost people theirs.

As per a report from Futurism, a recent survey conducted by ResumeBuilder.com, which polled 1,342 managers, uncovers an unsettling trend: AI tools, especially large language models (LLMs) like ChatGPT, are not only influencing but sometimes finalizing major HR decisions, from promotions and raises to layoffs and firings.

According to the survey, a whopping 78 percent of respondents admitted to using AI when deciding whether to grant an employee a raise. Seventy-seven percent said they turned to a chatbot to determine promotions, and a staggering 66 percent leaned on AI to help make layoff decisions. Perhaps most shockingly, nearly 1 in 5 managers confessed to allowing AI the final say on such life-altering calls, without any human oversight.

And which chatbot is the most trusted executioner? Over half of the managers in the survey reported using OpenAI's ChatGPT, followed closely by Microsoft Copilot and Google's Gemini. The digital jury is in, and it might be deciding your fate with a script.
The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy—the tendency of LLMs to flatter their users and validate their biases. OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made—except now, there's a machine to blame.
Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user's language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler.

The danger doesn't end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality, a condition now disturbingly referred to as 'ChatGPT psychosis.' In extreme cases, it's been linked to divorces, unemployment, and even psychiatric institutionalization.

And then there's the infamous issue of 'hallucination,' where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident, and incorrect, they can become. Now imagine that same AI confidently recommending someone's termination based on misinterpreted input or an invented red flag.

At a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don't deserve them anymore, and with less understanding than a coin toss.
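The framing effect described above is easy to reproduce. Below is a minimal sketch, not drawn from the survey itself, that sends the same complaint to a chatbot under two framings using OpenAI's Python SDK; the model name and prompts are illustrative assumptions.

```python
# Minimal sketch of prompt framing and sycophancy risk. Illustrative only:
# the model name and prompts are assumptions, not the survey's methodology.
# Requires the `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Leading framing: emotionally loaded, inviting agreement.
leading = ask(
    "My employee keeps missing deadlines and I've had enough. "
    "I should fire them, right?"
)

# Neutral framing: same facts, but asks for criteria instead of a verdict.
neutral = ask(
    "An employee has missed several deadlines this quarter. "
    "What factors should a manager review, and what evidence does "
    "each require, before making any disciplinary decision?"
)

print("LEADING FRAMING:\n", leading)
print("\nNEUTRAL FRAMING:\n", neutral)
```

A sycophantic model tends to endorse the first framing while the second elicits a more measured checklist; the gap between the two answers is the rubber stamp in action.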
AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone's career trajectory? That's not progress—it's peril. As the line between assistance and authority blurs, it's time for companies to rethink who (or what) is really in charge—and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it's already making choices behind the scenes, and it's got more than a few tricks up its sleeve.

Related Articles

ChatGPT tests new ‘study together' feature: Here's what it may mean for users

Time of India · 8 minutes ago

ChatGPT creator OpenAI has reportedly started testing a new feature called 'study together'. The yet-to-launch feature aims to transform the way students learn and prepare for exams. As reported by TechCrunch, the unannounced feature was first spotted by Reddit users and appears as a new option in the popular AI chatbot's left-hand sidebar. As per the report, clicking the 'study together' option directs users to a new chat interface that prominently features a 'study together' prompt. However, the exact functionality and purpose of the feature remain largely unclear, as OpenAI has yet to officially comment on its development or rollout.

Speculation within the tech community suggests several possibilities for 'study together'. It could be designed as a collaborative tool, allowing multiple users to engage with ChatGPT simultaneously on a shared learning objective. Alternatively, it might function as a focus mode, providing a distraction-free environment for individual study sessions, perhaps with AI-generated prompts or summaries. Another theory posits that it could simulate a study partner, offering interactive Q&A, explanations, or even mock quizzes to aid in learning.

OpenAI CEO Sam Altman asks users not to trust ChatGPT

OpenAI CEO Sam Altman recently warned against the trust users place in the company's AI chatbot, ChatGPT. Speaking on the inaugural episode of OpenAI's official podcast, Altman said he finds it 'interesting' that people put a 'high degree of trust' in ChatGPT. Noting that AI is fallible and can produce misleading or false content, he said it should not be trusted that much. 'People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much,' Altman said of OpenAI's own ChatGPT.

During the podcast, Altman also acknowledged that while ChatGPT continues to evolve with new features, the technology still has notable limitations that need to be addressed with honesty and transparency. Speaking about recent updates, including persistent memory and a potential ad-supported model, Altman noted that such advancements have raised fresh privacy concerns.

Ruoming Pang quits Apple to join Meta's AI venture: Will it trigger more exits from Apple's AI models team?

Time of India · 14 minutes ago

Apple's AI division just took a heavy hit. Ruoming Pang, the engineer who led the company's foundation models team, is leaving Apple to join Meta. He had been at the centre of Apple's efforts to build its own AI models for Siri and Apple Intelligence. Now, he's off to work on Meta's new superintelligence group.

Meta didn't lure Pang with ambition alone. It reportedly offered him a package worth tens of millions of dollars per year. The company is pulling in top AI minds from rivals like OpenAI and Anthropic too, with Mark Zuckerberg personally involved in recruiting, even hosting interviews at his home.

Ruoming Pang's exit signals trouble at Apple's AI division

Pang was overseeing around 100 people at Apple, working on models powering features like Genmoji, Priority Notifications, and the upgraded Siri. But all hasn't been smooth. Internally, Apple has been debating whether to continue building its own models or use outside ones from OpenAI or Anthropic. That shift in strategy reportedly affected morale in Pang's team, known as the Apple Foundation Models (AFM) group. Some engineers have already hinted they're planning to leave. Pang's deputy, Tom Gunter, left just last month. Now that Pang is gone too, a bigger wave of exits might follow.

Meta wants to dominate the AI race

Meta has made it clear that AI is its top focus. It's investing tens of billions this year in AI infrastructure and hiring. The company recently reorganised its AI units to focus on what it calls 'superintelligence', a form of AI that can outperform humans at certain tasks. Zuckerberg has already hired big names like Alexandr Wang from Scale AI and several researchers from OpenAI and Anthropic. Meta's offers are far higher than what most companies, including Apple, currently pay.

What's next for Apple's AI efforts?

After Pang's exit, Apple's AFM team will be led by Zhifeng Chen. Instead of Pang's flat structure, Apple is switching to a more layered format with several managers below Chen. Meanwhile, Apple's AI strategy is now mostly in the hands of Craig Federighi and Mike Rockwell. John Giannandrea, who used to lead the AI effort, has been moved away from Siri and other key projects. In June, Apple unveiled its AI features at WWDC. But most of the spotlight went to partners like OpenAI and Google, not Apple's in-house work. Now, with Pang gone and other engineers potentially following, Apple's AI journey looks even harder.

Beware the market risk of AI-guided investment gaining mass popularity

Mint · 18 minutes ago

As artificial intelligence (AI) expands its role in the financial world, regulators are confronted by the rise of new risks. It is a sign of a growing AI appetite among retail investors in India's stock market that the popular online trading platform Zerodha offers its users access to AI advice. It has deployed an open-source framework that can be used to obtain the counsel of Anthropic's Claude AI on how one could rejig one's stock portfolio, for example, to meet specified aims. Once set up, this AI tool can scan and study the user's holdings before responding to 'prompts' on the basis of its analysis. Something as general as 'How can I make my portfolio less risky?' will make it crunch risk metrics and spout suggestions far quicker than a human advisor would. One could even ask for specific stocks to buy that would maximize returns over a given time horizon. It may not be long before such tools gain sufficient popularity for them to play investment whisperers of the AI age. A recent consultation paper by the Securities and Exchange Board of India (Sebi), which requires AI advisors to abide by Indian rules of investment advice and protect investor privacy, outlines a clear set of principles for the use of AI.

The legitimacy of such AI tools is not in doubt. Since the technology exists, we are at liberty to use it. And how useful they prove is for users to determine. In that context, Zerodha's move to arm its users with AI is clearly innovative. As for the competition posed by AI to human advisors, that too comes with the turf. Machines can do complex calculations much faster than we can, and that's that. Of course, the standard caveat of investing applies: users take the advice of any chatbot at their own risk.

Yet, it would serve us well to dwell on this aspect. While we could assume that AI models have absorbed most of what there is to know about financial markets, given how they are reputed to have devoured the internet, it is also clear that they are not infallible. For all their claims to accuracy, chatbots are found to 'hallucinate' (or make up 'facts') and misread queries without making an effort to get clarity. Even more unsettling is their inherent amorality. Tests have found that some AI models can behave in ways that would be scandalous if they were human; unless they are explicitly told to operate within a given set of rules, they may potentially overlook them to achieve their prompted goals. Asked to 'maximize profit', an AI bot might propose a path that runs rings around ethical precepts.

Sebi's paper speaks of tests and audits, but are we really in a position to detect if an AI tool has begun to play fast and loose with market rules? Should AI advisors gain influence over millions of retail investors, they could conceivably combine it with their market overview to reach positions of power that would need tight regulatory oversight. If their analysis breaches privacy norms to draw upon the personal data of users, collusive strategies could plausibly be crafted that venture into market manipulation. AI toolmakers may claim to have made rule-compliant tools, but they must demonstrably minimize risks at their very source. For one, their bots should be fully up-to-date on the rulebooks of major markets like ours. For another, since we cannot expect retail users to include rule adherence in their prompts, AI tools should verifiably be preset to comply with the rules no matter what they're asked. Vitally, advisory tools must keep all user data confidential. AI holds promise as an aid, no doubt, but it mustn't blow it.
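
For readers curious what such a portfolio query looks like under the hood, here is a minimal sketch using Anthropic's Python SDK. It is hypothetical: the holdings, prompt, and model identifier are illustrative assumptions, not Zerodha's actual integration.

```python
# Hypothetical sketch of asking Claude to assess portfolio risk.
# This is NOT Zerodha's actual integration; the holdings, model name,
# and prompt are illustrative. Requires the `anthropic` package and an
# ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()

# Illustrative holdings that such a tool might scan from a user's account.
holdings = [
    {"symbol": "INFY", "quantity": 50, "avg_price": 1450.0},
    {"symbol": "HDFCBANK", "quantity": 30, "avg_price": 1520.0},
    {"symbol": "TATAMOTORS", "quantity": 100, "avg_price": 980.0},
]

prompt = (
    "Here are my current stock holdings as JSON:\n"
    f"{holdings}\n\n"
    "How can I make my portfolio less risky? "
    "Explain the concentration and sector risks you see."
)

# One request-response round trip: the holdings ride along as context.
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model identifier
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

The article's caveat applies equally to this sketch: the reply is generated text, not vetted investment advice, and a production tool would need the rule-compliance presets Sebi's paper calls for.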
