How AI Adoption Is Sitting With Workers

There's a danger to focusing primarily on CEO statements about AI adoption in the workplace, warns Brian Merchant, a journalist-in-residence at the AI Now Institute, an AI policy and research institute.
'There's a wide gulf between the prognostications of tech company CEOs and what's actually happening on the ground,' he says. In 2023, Merchant published Blood in the Machine, a book about how the historical Luddites resisted automation during the Industrial Revolution. In his Substack newsletter of the same name, Merchant has written about how AI implementation is now reshaping work.
To better understand workers' perspectives on how AI is changing jobs, we spoke with Merchant. Here are excerpts from our conversation, edited for length and clarity:
There have been a lot of headlines recently about how AI adoption has led to headcount reductions. How do you define the AI jobs crisis?
There is a real crisis in work right now, and AI poses a distinct kind of threat. But that threat to me, based on my understanding of technological trends in history, is less that we're looking at a widespread, mass-automation, job-wipe-out event and more that generative AI gives management and employers a particular set of logics.
There are jobs that are uniquely vulnerable. They might not be immense in number, but they're jobs that people think are pretty important—writing and artistic creation and that kind of thing. So you do have those jobs being threatened, but then we also have this crisis where AI supplies managers and bosses with this imperative where, whether or not the AI can replace somebody, it's still being pushed as a justification for doing so. We saw this a lot with DOGE and the hollowing out of the public workforce and the AI-first strategies that were touted over there.
More often than facilitating outright job replacement, automation is used by bosses to break down tasks, deskill labor, or serve as leverage against workers. This was true in the Luddites' time, and it's true right now. A lot of the companies that say they're 'AI-first' are merely taking the opportunity to reduce salaried headcount and replace it with cheaper, more precarious contract labor. This is what happened with Klarna, the fintech company that has famously been one of the most vocal advocates of AI anywhere.
[Editor's note: In May, Klarna CEO Sebastian Siemiatkowski told Bloomberg that the company was reversing its well-publicized move to replace 700 human call-center workers with AI and instead hiring humans again. 'As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality,' Siemiatkowski said.]
After all, firms still need people to ensure the AI output is up to par, edit it, or to 'duct tape it' to make sure it works well enough with existing systems—bosses just figure they can take the opportunity to call that 'unskilled' work and pay the people who are doing it less.
Your project, 'AI Killed My Job,' is an ongoing, multi-part series that dives deeper into how the AI jobs crisis is impacting workers day-to-day. What themes or patterns are emerging from those stories?
I invited workers who have been impacted by AI to reach out and share their stories. The project has just begun, and I've already gotten hundreds of responses. I expected to see AI being used as a tool by management to try to extract more labor and more value from people, to get people to work harder, and to have it deteriorate conditions rather than replace work outright. That's been borne out, and that's what I've seen.
The first installment that I ran was around tech workers. Some people assume the tech industry is fairly homogeneous in its enthusiasm for AI, but that's really not the case. A lot of the workers who have to deal with these tools are not happy with the way AI is being used in their companies and the impact it's having on their work.
There are a few people [included in the first installment] who have lost their jobs as part of layoffs initiated by a company with an AI-first strategy, including at CrowdStrike and Dropbox, and I'm hearing from many people who haven't quite lost their jobs yet but are increasingly concerned that they will. But, by and large, what you're seeing now is managers using AI to justify speeding up work, trying to get employees to use it to be more productive at the expense of quality or the things that people used to enjoy about their jobs.
There are people who are frustrated to see management encouraging more AI use at the expense of security or product quality. There's a story from a Google worker who watched colleagues feed AI-generated code into key infrastructure, which was pretty unsettling to many. That such an important and powerful company, one that runs such crucial web infrastructure, would allow AI-generated code to be used in its systems with relatively few safeguards was really surprising. [Editor's note: A Google spokesperson said that the company actively encourages AI use internally, with roughly 30% of the company's code now being AI generated. They cited CEO Sundar Pichai's estimate that AI has increased engineering velocity by 10% but said that engineers have rigorous code review, security, and maintenance standards.] We're also seeing AI being used to displace accountability, with managers using it to deflect blame should something go wrong: 'It's not my fault; it's the AI's fault.'
Your book, Blood in the Machine, tells the story of the historical Luddites' uprising against automation during the Industrial Revolution. What can we learn from that era that's still relevant today?
One lesson we can learn from the Luddites is that we should be seeking ways to involve more people and stakeholders in the process of developing and deploying technology. The Luddites were not anti-technology. They rose up and they smashed the machines because they had no other choice. The deck was stacked against them, and a lot of them were quite literally starving. Collective bargaining was illegal for them. And, just like today, conditions were increasingly difficult as the democratic levers that people could pull to demand a seat at the table were vanishingly few. (I mean, Silicon Valley just teamed up with the GOP to try to get an outright 10-year ban passed on states' ability to regulate AI.) That leads to strife, it leads to anger, it leads to feeling like you don't have a say or any options.
Now, we're looking at artists and writers and content creators and coders and you name it, watching their livelihoods become more precarious amid worsening conditions, if not erased outright. As you squeeze more and more populations of people, it's not unthinkable that you would see what happened then happen again in some capacity. You're already seeing the roots of that with people vandalizing Waymo cars, which they see as the agents of big tech and automation. That's a reason employers might want to consider the human element rather than putting the pedal to the metal on AI automation, because there's a lot of fear, anxiety, and anger at the way all of this has taken shape and is playing out.
What should employers do instead?
When it comes to employers, at the end of the day, if you're shelling out for a bunch of AI, then you're either hoping that your employees will use it to be more productive for you and work harder for you, or you're hoping to get rid of employees. Ideally, the employer would say it's the former. It would trust its employees to know how best to use it to generate more value and become more productive. In reality, even if a company goes that far, it can still turn around and trim labor costs elsewhere, mandate that workers use AI to pick up laid-off colleagues' workloads, and ratchet up productivity. So what you really need is a union contract or something codified in law saying that you can't just fire people and replace them with AI.
You see some union contracts that include language about the ways that AI or automation can be implemented, when it can't, and what the worker has a say over. Right now, that is the best means of giving people power over a technology that's going to affect their working life. The problem is that union density in the United States is so low that the benefit is limited to those who are formally organized. There are also attempts at legislation that put checks on what automation can and can't touch, when AI can be used in the hiring process, or what kinds of data it can collect. Overall, there has to be a serious check on the power of Silicon Valley before we can hope to get workers' voices heard in terms of how the technology is affecting them.