Latest news with #JessicaReif


CNET
29-05-2025
- Business
Use AI at Work? Your Coworkers May Be Judging You
Bosses everywhere are saying generative AI is the future. The signals emanating from the C-suites of corporations big and small are clear: If artificial intelligence doesn't take your job, it will at least change it significantly. The catch: If you use AI at work, your coworkers and maybe even your managers may think you're lazy. That is, if you can get hired in the first place.

This is the finding of a new study by researchers at Duke University, published this month in the journal PNAS. Across four studies, the researchers examined whether people who used AI at work worried others would see them as lazy or incompetent, and whether those fears were valid. "We found there was this universal social evaluation penalty where people described as using AI are evaluated as being less competent, less diligent, lazier than people who are described as receiving help from all sorts of other sources," Jessica Reif, a Ph.D. candidate at the Duke University Fuqua School of Business and lead author of the study, told me.

The study highlights the difference between the hype over AI at work and the reality on the ground. Although business leaders and AI companies can't stop themselves from envisioning a utopian AI future in which autonomous agents do most of the work and humans focus on truly creative tasks, workers are skeptical. That skepticism — only 23% of American adults said they expect AI will improve how people do their jobs, according to a recent Pew survey — affects how people view coworkers who use these tools.

People worry they are judged for using AI

The Duke University team first looked at whether employees would hesitate to admit they use an AI tool relative to a non-AI tool. The first of four studies found the 500 online participants were more likely to believe a manager or colleague would judge them as lazy, replaceable or less competent if they said they used a generative AI tool rather than a non-AI tool. The second test confirmed it.
The 1,215 participants read a paragraph about an employee and rated how lazy, competent, diligent, ambitious, independent, self-assured or dominant they perceived the person to be. The people being rated were described as either receiving help from generative AI (like a lawyer using a tool to summarize information), receiving help from non-AI sources (like a paralegal), or belonging to a control group with no statement about help. People who received AI help were seen as lazier, less competent, less diligent, less independent and less self-assured than either the control group or those receiving non-AI help.

The case of a lawyer getting help from AI versus a paralegal is just one example. The researchers used 384 different scenarios, with different jobs and types of help. "What we found is that this was pretty consistent across all the occupations we queried," Reif said.

In their third study, the researchers had 1,718 participants serve as "managers" hiring someone for a task. Some of the "candidates" were described as using AI regularly, and some as never using it. The managers were also asked about their own AI use. Managers who use AI regularly were more likely to see candidates who use AI as a good fit, while those who don't generally preferred candidates who don't.

The third study left it unclear whether AI would actually be helpful for the task, so in the final study, participants were asked to imagine they were hiring a gig worker. They then evaluated workers who used either AI tools or non-AI tools and rated how they would perceive them for manual tasks or digital tasks. The results found that while people who used AI were seen as lazier, that perception is reduced if the evaluator uses AI or if AI is clearly useful for the task.
But just because there isn't a penalty doesn't mean there's a perception advantage for AI users in that last study, according to Richard Larrick, one of the authors and a professor of management at Duke University. "The people themselves who are heavy AI users don't actually kind of give any particular benefit or reward, in terms of their perceptions, to the AI user," Larrick said. "So it isn't like there's some boost in perceptions when high AI users think about another AI user. It's just that you wipe out for them the laziness perception."

Your CEO may think AI is the future

Ever since large language models like ChatGPT burst onto the scene in 2022, management consultants and corporate executives have been touting generative AI as the next big thing in the workplace. Workplace apps from companies like Google and Microsoft seem more packed each day with new AI functions and prompts. As the technology has matured a bit and more useful applications have arisen, that perception has only gotten stronger for many companies. Shopify and Duolingo, for instance, both recently announced they would prioritize AI-driven work and try to see whether an AI can do a job before hiring a new employee or contractor.

A commandment from a CEO to be AI-first is one thing. Actually changing the culture in your workplace, and among the people you work around, is entirely different. "I think there are cases where, when the rubber meets the road implementing tools like generative AI, there are challenges," Reif said. "What we're showing is just one such challenge of many." She speculated that as more employers, especially tech-savvy ones, prioritize AI use and skills, the social costs will eventually drop. "I think it's going to take a while for this penalty to really go away," she said.

Larrick said that even if general perceptions around AI users change, the social penalty may disappear only for certain tasks. For some work, using generative AI will be more acceptable. For others, it won't.
How to avoid judgment from coworkers

One way not to be judged at work is not to use AI on the job. And that may be what people are doing already, based on the simple fact that people will judge you, as the researchers found in their first study. "As long as my choice of adopting AI is based on my theory of what others will think, even as what other people think changes, if my theory doesn't change fast enough, I still might be reluctant to use it and to reveal it," Larrick said.

Another way to deal with the perception of laziness is to point out whether AI is saving you time and whether the time you save is being used well, Reif said.

Perceived laziness isn't the only problem with using generative AI at work. There are concerns about whether the work you ask it to do is accurate or competent. So be sure you're checking your work, and show that you are, in fact, using skills that can't be easily replaced, said Jack Soll, one of the authors and a professor of management at Duke University. "The more that employees can make their peers and their bosses understand that it takes skill and knowledge in order to use it appropriately, I think others can then appreciate their AI use," he said.


Express Tribune
13-05-2025
- Business
Using AI tools like ChatGPT at work may harm your reputation, study finds
A new study from Duke University reveals a hidden downside to using AI at work: it might quietly hurt your professional reputation. Published in the Proceedings of the National Academy of Sciences (PNAS), the study shows that employees who rely on AI tools like ChatGPT, Claude, or Gemini are often seen by their peers and managers as lazy, less competent, and less independent — even when the AI boosts productivity.

Researchers Jessica Reif, Richard Larrick, and Jack Soll from Duke's Fuqua School of Business conducted four large-scale experiments involving over 4,400 participants. The results showed a consistent 'social evaluation penalty' tied to AI use in the workplace. 'Although AI can enhance productivity, its use carries social costs,' the authors wrote.

In one experiment, participants who imagined using AI at work believed they would be judged more harshly than those using traditional tools. In another, participants evaluating employee profiles viewed AI users as less hireable and more replaceable — especially if the evaluator didn't use AI themselves. That bias, the study found, wasn't limited by age, gender, or job title. Negative judgments cut across all demographics. And this perception had real consequences: managers unfamiliar with AI were less likely to hire candidates who used it regularly.

However, there was one major exception — when AI use was clearly tied to the job's needs, the reputational hit softened. In other words, context matters. Interestingly, people who used AI tools often were less likely to judge others harshly, suggesting familiarity may reduce bias.

But the fear of stigma runs deep. Many employees hide their use of AI tools from their bosses, earning them the nickname 'secret cyborgs,' a term coined by Wharton professor Ethan Mollick. The study underscores a tricky balancing act. While AI promises productivity gains, employees may be quietly penalized for embracing it.
And as AI adoption accelerates, this reputational dilemma may become a central issue in workplace dynamics. In other words: AI might help you work smarter — but it could still make you look worse.


Time of India
12-05-2025
- Business
Workers using AI tools seen as less competent: Study
Employees who rely on artificial intelligence (AI) tools such as ChatGPT, Gemini or Copilot are often perceived as less intelligent, less hardworking and even lazier than their peers, according to a new study by Duke University. The research highlights a potential social bias that could slow the broader acceptance of AI in the workplace, despite its proven benefits in boosting productivity.

The study, published in the Proceedings of the National Academy of Sciences, was conducted by researchers Jessica Reif, Richard Larrick and Jack Soll. It involved four online experiments with 4,400 participants to examine how workers who use AI are perceived by others.

In the first experiment, participants were asked to imagine themselves using an AI tool to complete a task, then assess how they believed their colleagues would judge them. Most expected to be seen as lazy, incompetent or easily replaceable.

The second experiment asked respondents to evaluate co-workers who used AI to complete assignments. The perceptions remained largely negative, with such workers viewed as less competent, less confident and lacking independence.

A third experiment placed participants in the position of hiring managers reviewing job applicants. Candidates who admitted to using AI for work were rated less favourably. However, the bias diminished when the hiring managers themselves had experience using AI tools.

In the final experiment, the researchers explored how perceptions changed when AI use was both appropriate for the task and clearly improved productivity. Under these conditions, the negative judgments were significantly reduced.
Across all experiments, one trend stood out: participants with direct experience using AI were consistently more accepting of both their own and others' AI use.

The findings suggest that social perceptions may act as a barrier to the adoption of AI in professional settings. Even when the tools deliver measurable improvements in efficiency, hesitation around their use may persist due to workplace culture.

The study comes at a time when the role of AI in the future of work is under intense scrutiny. While AI is being adopted to automate routine functions, concerns remain over its impact on human jobs. Last month, the United Nations Conference on Trade and Development warned that AI could affect up to 40% of jobs globally.