
Is AI causing mass layoffs at tech companies? Kind of, experts say
The reality is more complicated, with companies trying to signal that they're making themselves more efficient as they prepare for broader changes wrought by AI.
Tech job postings in July were down 36 per cent from their early 2020 levels, according to a new report from the job listing site Indeed. But that isn't only because companies want to replace workers with artificial intelligence (AI) — the report found AI is just one factor stalling a rebound.
ChatGPT's debut in 2022, for example, also corresponded with the end of a pandemic-era hiring binge.
'We're kind of in this period where the tech job market is weak, but other areas of the job market have also cooled at a similar pace,' said Brendon Bernard, an economist at the Indeed Hiring Lab.
'Tech job postings have actually evolved pretty similarly to the rest of the economy, including relative to job postings where there really isn't that much exposure to AI.'
That nuance is not always clear from the last six months of tech layoff emails, which often include a nod to AI in addition to expressions of sympathy.
Workday CEO Carl Eschenbach, for example, said in an email announcing mass layoffs earlier this year that "companies everywhere are reimagining how work gets done," citing the "increasing demand" for AI at his company as the reason behind the layoffs.
The same rhetoric is being used internationally, for example by India's tech giant Tata Consultancy, which justified the 12,000 cuts to its organisation by saying that it is getting ready to deploy "AI at scale for our clients and ourselves".
AI spending is a more common factor
What's more common than AI replacing jobs, though, is the need for more dollars to implement AI throughout the company, experts said.
Tech companies are trying to justify huge amounts of spending to pay for data centres, chips, and the energy needed to build AI systems.
Bryan Hayes, a strategist at Zacks Investment Research, said there is a "double-edged sword" of restructuring in the AI age. Companies are trying to "find the right balance between maintaining an appropriate headcount but also allowing artificial intelligence to come to the forefront".
Hayes said broader tech layoffs have helped improve profit margins, but what it means for the employment prospects of these workers is hard to gauge.
'Will AI replace some of these jobs? Absolutely,' said Hayes. 'But it's also going to create a lot of jobs.'
Tech employees who are able to show that they can "leverage artificial intelligence and help the companies innovate and create new products and services, are going to be the ones that are in high demand," he added.
Hayes pointed to Meta Platforms, the parent company of Facebook and Instagram, which is on a spree of offering lucrative packages to recruit elite AI scientists from competitors such as OpenAI.
The Indeed report shows that AI specialists are faring better than software engineers, but postings for those jobs have also gone down.
Bernard said that is because of the "cyclical ups and downs of the sector".
Entry-level jobs most affected
The Indeed report found that AI is having the deepest impact on entry-level jobs across sectors, including marketing, administrative assistance, and human resources.
That's because those jobs have tasks that overlap with generative AI tools.
Postings for workers with at least five years of experience fared better, the report found.
'The plunge in tech hiring started before the new AI age, but the shifting experience requirements is something that happened a bit more recently,' Bernard said.
On the other end of the spectrum, some types of jobs appeared more immune to AI changes. That included health workers who draw blood, followed by nursing assistants, workers who remove hazardous materials, painters, and embalmers.

Related Articles


Euronews, a day ago
EU AI Act doesn't do enough to protect artists' copyright, groups say
As the European Artificial Intelligence Act (AI Act) comes into force, groups representing artists say there are still many loopholes that need to be fixed for them to thrive in a creative world increasingly dominated by AI.

The AI Act, celebrated for being the first comprehensive legislation to regulate AI globally, is riddled with problems, these organisations say. Groups like the European Composer and Songwriter Alliance (ECSA) and the European Grouping of Societies of Authors and Composers (GESAC) argue that it fails to protect creators whose works are used to train generative AI models. Without a clear way to opt out or get paid when tech companies use their music, books, movies, and other art to train their AI models, experts say that their work is continually at risk.

'The work of our members should not be used without transparency, consent, and remuneration, and we see that the implementation of the AI Act does not give us,' Marc du Moulin, ECSA's secretary general, told Euronews Next.

'Putting the cart before the horse'

The purpose of the AI Act is to make sure AI stays 'safe, transparent, traceable, non-discriminatory and environmentally friendly,' the European Commission, the European Union's executive body, says in an explainer on the law. The law rates AI systems based on four levels of risk: minimal, limited, high, or unacceptable. Those in the unacceptable range are already banned, for example AIs that are manipulative or that conduct social scoring, where they rank individuals based on behaviour or economic status.

Most generative AI falls into the minimal risk category, the Commission says. The owners of those technologies still have some requirements, like publishing summaries of the copyrighted data used to train their AIs. Under the EU's copyright laws, companies are allowed to use copyrighted materials for text and data mining, as they do in AI training, unless a creator has 'reserved their rights,' du Moulin said.
Du Moulin said it's unclear how an artist can go about opting out of their work being shared with AI companies. 'This whole conversation is putting the cart before the horse. You don't know how to opt out, but your work is already being used,' he said.

The EU's AI Code of Practice on General-Purpose AI (GPAI), a voluntary agreement for AI companies, asks providers to commit to a copyright policy, put in place safeguards to avoid any infringements of rights, and designate a place to receive and process complaints. Signatories so far include major tech and AI companies such as Amazon, Google, Microsoft, and OpenAI.

AI providers have to respect copyright laws, the Commission says

The AI Act's transparency requirements only apply going forward, du Moulin added, making it difficult to claim any payment for work that's already been scraped to train AI models. 'Even if the AI Act has some good legal implications, it only works for the future – it will not be retroactive,' du Moulin said. 'So everything which has been scraped already … it's a free lunch for generative AI providers who did not pay anything'.

Adriana Moscono, GESAC's general manager, said some of her members tried opting out by sending letters and emails to individual AI companies to get a licence for their content, but were not successful. 'There was no answer,' Moscono told Euronews Next. 'There was absolute denial of the recognition of … the need to respect copyright and to get a license. So please, European Commission, encourage licensing'.

Thomas Regnier, a Commission spokesperson, said in a statement to Euronews Next that AI providers have to respect rights holders when they carry out text and data mining, and that if there have been infringements, the parties can settle the matter privately. The AI Act 'in no way affects existing EU copyright laws,' Regnier continued.
Mandate licence negotiations, groups ask

Du Moulin and Moscono are asking the Commission to urgently clarify the rules around opting out and copyright protection in the law. 'The code of practice, the template and the guidelines, they don't provide us any capacity to improve our situation,' Moscono said. 'They're not guaranteeing … a proper application of the AI Act'.

The advocates said the Commission could also mandate that AI companies negotiate blanket or collective licences with the respective artist groups. Germany's Society for Musical Performing and Mechanical Reproduction Rights (GEMA) has filed two copyright lawsuits against AI companies: OpenAI, the maker of ChatGPT, and Suno AI, an AI music generation app. While not directly related to the AI Act, du Moulin says the verdicts could determine to what extent AI companies are bound by copyright laws.

The Commission and the European Court of Justice, the EU's high court, have also signalled that they will review the text and data mining exemption in the copyright legislation issued in 2019, du Moulin said. New AI companies have to make sure they are compliant with the AI Act's regulations by 2026. That deadline extends to 2027 for companies already operating in the EU.


Sustainability Times, 2 days ago
'AI Will Change Everything About Nuclear' as US Lab Partners With Amazon Cloud to Build the First Smart Reactors in American History
IN A NUTSHELL
🔧 Idaho National Laboratory partners with Amazon Web Services to develop AI-powered digital twins for nuclear reactors.
💡 The collaboration aims to modernize the U.S. nuclear sector, making reactors autonomous and efficient.
🌐 The initiative is part of a national push to integrate artificial intelligence into energy infrastructure.
🔍 Focus on safety, cost reduction, and sustainability in nuclear energy development.

The United States is taking a bold step in nuclear energy innovation, leveraging the power of artificial intelligence (AI) to transform how nuclear reactors are designed and operated. The Idaho National Laboratory (INL) has partnered with Amazon Web Services (AWS) to develop autonomous nuclear systems. This collaboration aims to create digital twins of nuclear reactors using AWS's advanced cloud technology. The move is part of a larger effort to modernize the nuclear energy sector, which has historically faced challenges such as high costs and regulatory hurdles.

Digital Twins: A Groundbreaking Approach

The concept of digital twins is at the heart of this initiative. Digital twins are virtual replicas of physical systems that enable detailed modeling and simulation. By utilizing AWS's cloud infrastructure, INL aims to create digital twins of small modular reactors (SMRs). These reactors, with capacities ranging from 20 to 300 megawatts, are poised to benefit from AI-driven efficiencies. John Wagner, director of INL, highlighted the significance of the collaboration: 'Our collaboration with Amazon Web Services marks a significant leap forward in integrating advanced AI technologies into our nuclear energy research and development initiatives.'
The partnership underscores the critical role of linking the nation's nuclear energy laboratory with AWS to accelerate nuclear energy deployment. By using real-time data, these digital twins will enhance modeling capabilities, facilitate simulations, and eventually allow for safe autonomous operations. This initiative is expected to revolutionize how nuclear plants are built and operated, offering potential cost reductions and improved safety.

Harnessing the Power of Machine Learning

As part of INL's broader vision, the integration of machine learning with nuclear technology aims to create an AI-nuclear ecosystem. This ecosystem will connect Department of Energy (DOE) labs, tech companies, and energy developers. The ultimate goal is to develop nuclear reactors that are not only faster to construct but also safer and more intelligent in operation. The INL-AWS partnership follows a similar collaboration between Westinghouse and Google Cloud, highlighting the growing importance of AI in the nuclear sector. By combining AI platforms with proprietary nuclear data, these partnerships aim to accelerate the development of advanced nuclear technologies. In May 2025, President Donald Trump signed executive orders to streamline reactor permitting and expand domestic nuclear fuel production. These efforts are part of a broader strategy to modernize the U.S. nuclear energy infrastructure and support increasing AI-driven power demands.

A National Push for AI-Driven Nuclear Power

The U.S. government has recognized the critical role of nuclear energy in maintaining technological competitiveness and supporting future data center growth.
The release of a national AI Action Plan in July 2025 identified reliable, dispatchable energy, including nuclear power, as an essential component of this strategy. The partnership between INL and AWS is a reflection of this national push. By leveraging customized chips like Inferentia and Trainium, along with tools such as Amazon SageMaker, the collaboration aims to drive the adoption of AI in nuclear applications. Chris Ritter, division director of Scientific Computing and AI at INL, emphasized the importance of this partnership in accessing AI models and specialized cloud services. This initiative is not just about technological advancements; it is also about redefining the future of energy production. By embracing AI, the U.S. is positioning itself at the forefront of global nuclear innovation.

The Road Ahead for AI and Nuclear Energy

While the potential benefits of AI-driven nuclear energy are immense, the path forward is not without challenges. The integration of AI into nuclear systems requires careful consideration of safety protocols, regulatory compliance, and public acceptance. However, the collaboration between INL and AWS is a promising step toward overcoming these hurdles. As the U.S. continues to invest in AI-driven nuclear technologies, the focus will be on creating a sustainable and secure energy future. The development of autonomous reactors and digital twins represents a significant shift in how nuclear energy is perceived and utilized. The question remains: how will this transformation in nuclear energy impact global energy dynamics, and what role will AI play in shaping the future of sustainable power?


Euronews, 2 days ago
Hyper-realistic AI-generated news anchors fool the internet
"In a stunning move, Canada has declared war on the US," says a blonde American news anchor in a video which has spread across social media from TikTok to X. Looking straight into the camera, the anchor continues, "Let's go to Joe Braxton, who's live at the border." But those who make it to the seven-second mark of the video stand the best chance of getting closer to the truth. "I am currently at the border, but there is no war," says the reporter, before revealing, "Mum, Dad, I know this may look real, but it's all AI."

Although the anchors in these clips appear to display the same enthusiasm, energy and diction as many authentic newsreaders, they are generated by artificial intelligence (AI). Many of these videos are created with Veo 3, Google's AI video generation software, which allows users to create advanced eight-second videos, syncing audio and video seamlessly. Through this technology, users are prompting the software to make fake news anchors say outlandish things.

How can you spot that these videos are fake?

A number of pointers can help online users decipher whether a video with a legitimate-looking TV anchor is real or not. One tell-tale clue is that many of the "reporters" who appear to be out in the field are holding the same mic, which bears the generic term "NEWS". In reality, although many TV channels have the term "news" somewhere in their name (for instance, BBC News, Fox News or Euronews), no major channel is just called "News". In other cases, the logos displayed on presenters' mics, notebooks and clothing, as well as in the background and on screen, are gibberish. AI is not able to distinguish what makes a series of letters legible because it primarily focuses on visual patterns rather than on the semantic meaning of text. In turn, it frequently generates illegible text.
This is because AI works on a prompt basis: if a prompt does not specifically state which words should appear in the generated video, the model will invent its own text.

Deepfake news anchors used by states

An increasing number of authentic TV channels have been experimenting with AI newsreaders in recent years, either through fully AI-generated presenters or by asking real people to sign off on the use of their image or voice. In October, a Polish radio station sparked controversy after dismissing its journalists and relaunching with AI "presenters". However, state actors have also been using AI anchors to peddle propaganda. For instance, in a report published in 2023, AI analytical firm Graphika revealed that a fictitious news outlet named "Wolf News" had been promoting the Chinese Communist Party's interests through videos spread across social media, presented by AI-generated anchors.

When AI anchors bypass repressive censorship in dictatorships

Although AI anchors can increase the spread of fake news and disinformation, in some instances they can free journalists who live under repressive regimes from the dangers of public exposure. In July 2024, Venezuelan President Nicolas Maduro was re-elected in a harshly contested election, which was marred by electoral fraud, according to rights groups. Following his re-election, Maduro, who has been in power since 2013, further cracked down on the press, endangering journalists and media workers. To fight back, journalists launched Operación Retuit (Operation Retweet) in August 2024. In a series of 15 punchy social media-style videos, a female and a male AI-generated anchor called "Bestie" and "Buddy" report on the political situation in Venezuela, sharing factual evidence.