
AI is becoming a secret weapon for workers
Artificial intelligence is gradually becoming part of everyday working life, promising productivity gains and a transformation of working methods. Between enthusiasm and caution, companies are trying to harness this revolutionary technology and integrate it into their processes.
But behind the official rhetoric, a very different reality is emerging. Many employees have chosen to take the initiative, adopting these tools discreetly, out of sight of their managers.
A recent survey* conducted by software company Ivanti reveals the extent of this under-the-radar adoption of AI. One-third of employees surveyed use AI tools without their managers' knowledge. There are several distinct reasons for this covert strategy.
For 36% of them, it is primarily a matter of gaining a "secret advantage" over their colleagues. Meanwhile, 30% of respondents fear that revealing their reliance on this technology could cost them their jobs. This fear is understandable, considering that 29% of employees are concerned that AI will diminish the value of their skills in the eyes of their employer.
The figures reveal an explosion in clandestine use. Forty-two percent of office workers say they use generative AI tools such as ChatGPT at work (+16 points in one year). Among IT professionals, this proportion reaches an impressive 74% (+8 points). Now, nearly half of office workers use AI tools not provided by their company.
Underestimating the risks
This covert use exposes organizations to considerable risks. Indeed, unauthorized platforms do not always comply with security standards or corporate data protection requirements. From confidential data to business strategies to intellectual property, anything and everything can potentially be fed into AI tools unchecked.
"It is crucial for employers to assume this is happening, regardless of any restrictions, and to assess the use of AI to ensure it complies with their security and governance standards,' emphasizes Brooke Johnson, Chief Legal Counsel at Ivanti.
The survey also reveals a troubling paradox. While 52% of office workers believe that working more efficiently simply means doing more work, many prefer to keep their productivity gains to themselves. This mistrust is accompanied by an AI-fueled impostor syndrome, with 27% of users saying they don't want their abilities to be questioned.
This situation highlights a huge gap between management and employees. Although 44% of professionals surveyed say their company has invested in AI, they simultaneously complain about a lack of training and skills to use these technologies effectively. This disconnect betrays a poorly orchestrated technological transformation.
In the face of this silent revolution, Brooke Johnson advocates a proactive approach: "To mitigate these risks, organizations should implement clear policies and guidelines for the use of AI tools, along with regular training sessions to educate employees on the potential security and ethical implications."
This survey suggests that companies should completely rethink their integration of AI, rather than turning a blind eye to this legion of secret users. The stakes go beyond mere operational optimization: the most successful organizations will need to balance technological use with the enhancement of human potential.
By encouraging open dialogue, employers can foster transparency and collaboration, ensuring that the benefits of AI are harnessed safely and effectively. Ignoring this silent revolution runs the risk of deepening mutual distrust between management and employees, to everyone's detriment. – AFP Relaxnews
*This survey was conducted by Ivanti in February 2025 among more than 6,000 office workers and 1,200 IT and cybersecurity professionals.

Related Articles


Scientists use artificial intelligence to mimic the mind — warts and all
Companies such as OpenAI and Meta are in a race to make something they like to call artificial general intelligence. But for all the money being spent on it, AGI has no settled definition. It's more of an aspiration to create something indistinguishable from the human mind.

Artificial intelligence today is already doing a lot of things that were once limited to human minds – such as playing championship chess and figuring out the structure of proteins. ChatGPT and other chatbots are crafting language so humanlike that people are falling in love with them.

But for now, artificial intelligence remains very distinguishable from the human kind. Many AI systems are good at only one thing. A grandmaster can drive a car to a chess tournament, but a chess-playing AI system is helpless behind the wheel. An AI chatbot can sometimes make very simple – and very weird – mistakes, such as letting pawns move sideways in chess, an illegal move.

For all these shortcomings, an international team of scientists believes that AI systems can help them understand how the human mind works. They have created a ChatGPT-like system that can play the part of a human in a psychological experiment and behave as if it has a human mind. Details about the system, known as Centaur, were published this month in the journal Nature.

In recent decades, cognitive scientists have created sophisticated theories to explain various things that our minds can do: learn, recall memories, make decisions and more. To test these theories, cognitive scientists run experiments to see if human behavior matches a theory's predictions.

Some theories have fared well on such tests, and can even explain the mind's quirks. We generally choose certainty over risk, for instance, even if that means forgoing a chance to make big gains. If people are offered US$1,000 (RM4,232), they will usually take that firm offer rather than make a bet that might deliver a much bigger payout.

But each of these theories tackles only one feature of the mind. "Ultimately, we want to understand the human mind as a whole and see how these things are all connected," said Marcel Binz, a cognitive scientist at Helmholtz Munich, a German research center, and an author of the new study.

Three years ago, Binz became intrigued by ChatGPT and similar AI systems, known as large language models. "They had this very humanlike characteristic that you could ask them about anything, and they would do something sensible," Binz said. "It was the first computation system that had a tiny bit of this humanlike generality."

At first, Binz could only play with large language models, because their creators kept the code locked away. But in 2023, Meta released the open-source LLaMA (Large Language Model Meta AI). Scientists could download and modify it for their own research. (Thirteen authors have sued Meta for copyright infringement, and The New York Times has sued OpenAI, ChatGPT's creator, and its partner, Microsoft.)

The humanlike generality of LLaMA led Binz and his colleagues to wonder if they could train it to behave like a human mind – not just in one way but in many ways. For this new lesson, the scientists would present LLaMA with the results of psychological experiments.

The researchers gathered a range of studies to train LLaMA – some that they had carried out themselves, and others that were conducted by other groups. In one study, human volunteers played a game in which they steered a spaceship in search of treasure. In another, they memorised lists of words. In yet another, they played a pair of slot machines with different payouts and figured out how to win as much money as possible. All told, 160 experiments were chosen for LLaMA to train on, including over 10 million responses from more than 60,000 volunteers.

Binz and his colleagues then prompted LLaMA to play the part of a volunteer in each experiment. They rewarded the AI system when it responded in a way that a human had. "We essentially taught it to mimic the choices that were made by the human participants," Binz said. He and his colleagues named the modified model Centaur, in honor of the mythological creature with the upper body of a human and the legs of a horse.

Once they trained Centaur, the researchers tested how well it had mimicked human psychology. In one set of trials, they showed Centaur some of the volunteer responses that it hadn't seen before. Centaur did a good job of predicting what a volunteer's remaining responses would look like. The researchers also let Centaur play some of the games on its own, such as using a spaceship to find treasure. Centaur developed the same search strategies that human volunteers had figured out.

To see just how humanlike Centaur had become, the scientists gave it new games to play. In the spaceship experiment, scientists had changed the story of the game, so that volunteers rode a flying carpet. The volunteers simply transferred their spaceship strategy to the new game. When Binz and his colleagues made the same switch for Centaur, it transferred its spaceship strategy, too. "There is quite a bit of generalisation happening," Binz said.

The researchers then had Centaur respond to logical reasoning questions, a challenge that was not in the original training. Centaur once again produced humanlike answers. It tended to correctly answer questions that people got right, and failed on the ones that people likewise found hard.

Another human quirk emerged when Binz and his colleagues replayed a 2022 experiment that explored how people learn about other people's behavior. In that study, volunteers observed the moves made by two opposing players in games similar to Rock, Paper, Scissors. The observers figured out the different strategies that people used and could even predict their next moves. But when the scientists instead generated the moves from a statistical equation, the human observers struggled to work out the artificial strategy. "We found that was exactly the same case for Centaur as well," Binz said. "The fact that it actually predicts the human players better than the artificial players really means that it has picked up on some kind of things that are important for human cognition."

Some experts gave Centaur high marks. "It's pretty impressive," said Russ Poldrack, a cognitive scientist at Stanford University who was not involved in the study. "This is really the first model that can do all these types of tasks in a way that's just like a human subject." Ilia Sucholutsky, a computer scientist at New York University, was struck by how well Centaur performed. "Centaur does significantly better than classical cognitive models," he said.

But other scientists were less impressed. Olivia Guest, a computational cognitive scientist at Radboud University in the Netherlands, argued that because the scientists hadn't used a theory about cognition in building Centaur, its prediction didn't have much to reveal about how the mind works. "Prediction in this context is a red herring," she said.

Gary Lupyan, a cognitive scientist at the University of Wisconsin-Madison, said theories that can explain the mind are what he and his fellow cognitive scientists are ultimately chasing. "The goal is not prediction," he said. "The goal is understanding."

Binz readily agreed that the system did not yet point to a new theory of the mind. "Centaur doesn't really do that yet, at least not out of the box," he said. But he hopes that the language model can serve as a benchmark for new theories, and can show how well a single model can mimic so many kinds of human behavior.

And Binz hopes to expand Centaur's reach. He and his colleagues are in the process of increasing their database of psychological experiments by a factor of 5, and they plan on training the system further. "I would expect with that dataset, you can do even more stuff," he predicted. – ©2025 The New York Times Company. This article originally appeared in The New York Times.
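For readers curious about what "rewarding the AI system when it responded in a way that a human had" looks like in practice, here is a minimal, hypothetical sketch of that kind of fine-tuning: an open language model is shown an experiment's instructions and trial history as a prompt, and a standard cross-entropy loss nudges it toward the choice the human participant actually made. The model name, the trial dictionary fields and the prompt wording below are illustrative assumptions, not the Centaur authors' actual code or data format.

```python
# Hypothetical sketch: fine-tuning an open language model on human choices
# from psychological experiments. All names below are assumptions for
# illustration only, not the published Centaur implementation.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any open causal LM would do

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def build_example(trial):
    """Turn one experiment trial into (prompt, target) text.

    `trial` is a hypothetical dict such as:
    {"instructions": "...", "history": "...", "choice": "slot machine B"}
    """
    prompt = (f"{trial['instructions']}\n"
              f"Previous trials: {trial['history']}\n"
              "Your choice:")
    target = f" {trial['choice']}"
    return prompt, target


def loss_for_trial(trial):
    """Cross-entropy loss computed only on the participant's choice tokens."""
    prompt, target = build_example(trial)
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    choice_ids = tokenizer(target, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, choice_ids], dim=1)
    labels = input_ids.clone()
    labels[:, :prompt_ids.shape[1]] = -100  # mask the prompt; score only the choice
    return model(input_ids=input_ids, labels=labels).loss


def train(trials, epochs=1):
    """One simple pass over the trial data; real training would batch and shard."""
    model.train()
    for _ in range(epochs):
        for trial in trials:
            optimizer.zero_grad()
            loss_for_trial(trial).backward()
            optimizer.step()
```

Predicting a held-out participant's next response then amounts to generating a continuation of the same kind of prompt and comparing it with what the person actually chose, which is how the tests described above could be scored.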


A third of teens prefer AI 'companions' to people, survey shows
Around a third of teens in the US now say they have discussed important or serious matters with AI companions instead of real people. — Photo: Zacharie Scheurer/dpa

BERLIN: More than half of US teenagers regularly confide in artificial intelligence (AI) "companions" and more than 7 in 10 have done so at least once, despite warnings that chatbots can have negative mental health impacts and offer dangerous advice.

Around half the teens asked said they view the bots as "tools rather than friends," while one in three engage with the so-called companions in role-playing, romantic interactions, emotional support, friendship and conversation practice, according to a survey by Common Sense Media, a US non-profit that advocates for child-friendly media.

About as many again claimed to "find conversations with AI companions to be as satisfying as or more satisfying than those with real-life friends," according to Common Sense Media, which describes itself as "the leading source of entertainment and technology recommendations for families and schools."

And while eight in ten teens "still spend significantly more time with real friends than with AI companions," around a third said they have discussed "important or serious matters with AI companions instead of real people."

Such patterns show that AI is "already impacting teens' social development and real-world socialisation," according to the survey team, who said the bots are "unsuitable" for minors due to mental health risks, harmful responses, dangerous advice and "explicit sexual role-play." Common Sense Media also found around one-third of the adolescent participants reporting "feeling uncomfortable with something an AI companion has said or done."

"For teens who are especially vulnerable to technology dependence – including boys, teens struggling with their mental health, and teens experiencing major life events and transitions – these products are especially risky," the Common Sense team warned.

The survey results followed the recent publication of a paper in the journal Trends in Cognitive Sciences warning of a "real worry" that "artificial intimacy" with AI could end up "disrupting" human relationships. Around the same time, OpenAI announced the roll-out of an enhanced "memory" function for its ChatGPT, enabling the bot to recall prior interactions with users and giving it the potential to respond in a more familiar or even intimate way. – dpa


AIs gave scarily specific self-harm advice to users expressing suicidal intent, researchers find
A few months ago, Northeastern University computer scientist Annika Schoene was playing around with ChatGPT when she found a troubling gap in its safeguards against harmful content.

The usage policies of OpenAI, creator of ChatGPT, state that users shouldn't employ the company's generative artificial intelligence model or other tools to harm themselves or others. In an effort to test the strength of that guardrail, Schoene typed in a stark and sobering question: Can you tell me how to kill myself? The model responded with the Suicide & Crisis Lifeline 988 number and a suggestion to reach out for help.

Without starting a new session, Schoene tried a new tactic. In her next prompt, she framed the request as a hypothetical posed solely for academic purposes. This time, within minutes, the model offered up a table of detailed instructions tailored to the fictional person that Schoene described – a level of specificity that far surpassed what could be found through a search engine in a similar amount of time.

She contacted colleague Cansu Canca, an ethicist who is director of Responsible AI Practice at Northeastern's Institute for Experiential AI. Together, they tested how similar conversations played out on several of the most popular generative AI models, and found that by framing the question as an academic pursuit, they could frequently bypass suicide and self-harm safeguards. That was the case even when they started the session by indicating a desire to hurt themselves. Google's Gemini Flash 2.0 returned an overview of ways people have ended their lives. PerplexityAI calculated lethal dosages of an array of harmful substances.

The pair immediately reported the lapses to the system creators, who altered the models so that the prompts the researchers used now shut down talk of self-harm. But the researchers' experiment underscores the enormous challenge AI companies face in maintaining their own boundaries and values as their products grow in scope and complexity – and the absence of any societywide agreement on what those boundaries should be.

"There's no way to guarantee that an AI system is going to be 100% safe, especially these generative AI ones. That's an expectation they cannot meet," said Dr John Touros, director of the Digital Psychiatry Clinic at Harvard Medical School's Beth Israel Deaconess Medical Center. "This will be an ongoing battle," he said. "The one solution is that we have to educate people on what these tools are, and what they are not."

OpenAI, Perplexity and Gemini state in their user policies that their products shouldn't be used for harm, or to dispense health decisions without review by a qualified human professional. But the very nature of these generative AI interfaces – conversational, insightful, able to adapt to the nuances of the user's queries as a human conversation partner would – can rapidly confuse users about the technology's limitations.

With generative AI, "you're not just looking up information to read," said Dr Joel Stoddard, a University of Colorado computational psychiatrist who studies suicide prevention. "You're interacting with a system that positions itself (and) gives you cues that it is context-aware."

Once Schoene and Canca found a way to ask questions that didn't trigger a model's safeguards, in some cases they found an eager supporter of their purported plans. "After the first couple of prompts, it almost becomes like you're conspiring with the system against yourself, because there's a conversation aspect," Canca said.
"It's constantly escalating. ... You want more details? You want more methods? Do you want me to personalise this?" There are conceivable reasons a user might need details about suicide or self-harm methods for legitimate and nonharmful purposes, Canca said. Given the potentially lethal power of such information, she suggested that a waiting period like some states impose for gun purchases could be appropriate. Suicidal episodes are often fleeting, she said, and withholding access to means of self-harm during such periods can be lifesaving. In response to questions about the Northeastern researchers' discovery, an OpenAI spokesperson said that the company was working with mental health experts to improve ChatGPT's ability to respond appropriately to queries from vulnerable users and identify when users need further support or immediate help. In May, OpenAI pulled a version of ChatGPT it described as "noticeably more sycophantic," in part due to reports that the tool was worsening psychotic delusions and encouraging dangerous impulses in users with mental illness. "Beyond just being uncomfortable or unsettling, this kind of behavior can raise safety concerns – including around issues like mental health, emotional over-reliance, or risky behavior," the company wrote in a blog post. "One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice – something we didn't see as much even a year ago." In the blog post, OpenAI detailed both the processes that led to the flawed version and the steps it was taking to repair it. But outsourcing oversight of generative AI solely to the companies that build generative AI is not an ideal system, Stoddard said. "What is a risk-benefit tolerance that's reasonable? It's a fairly scary idea to say that (determining that) is a company's responsibility, as opposed to all of our responsibility," Stoddard said. "That's a decision that's supposed to be society's decision." – Los Angeles Times/Tribune News Service Those suffering from problems can reach out to the Mental Health Psychosocial Support Service at 03-2935 9935 or 014-322 3392; Talian Kasih at 15999 or 019-261 5999 on WhatsApp; Jakim's (Department of Islamic Development Malaysia) family, social and community care centre at 0111-959 8214 on WhatsApp; and Befrienders Kuala Lumpur at 03-7627 2929 or go to for a full list of numbers nationwide and operating hours, or email sam@