
Ex-Google exec's shocking warning: AI will create 15 years of 'hell' — starting sooner than we think
Mo Gawdat, who left Google X as its chief business officer in 2018 and has become a popular author and public speaker, painted a grim picture of widespread job losses, economic inequality and social chaos from the AI revolution.
'The next 15 years will be hell before we get to heaven,' Gawdat told British entrepreneur Steven Bartlett on his 'Diary of a CEO' podcast on Monday.
Mo Gawdat, a former Google executive, warns that AI could trigger over a decade of upheaval, wiping out white-collar jobs and fueling social unrest.
YouTube / The Diary Of A CEO
Gawdat, 58, pointed to his own startup, Emma.love, which builds emotional and relationship-focused artificial intelligence. It is run by three people.
'That startup would have been 350 developers in the past,' he told Bartlett in the interview, first reported by Business Insider.
'As a matter of fact, podcaster is going to be replaced.'
Gawdat specifically warned that 'the end of white-collar work' will begin by the late 2020s, representing a fundamental shift in how society operates.
Unlike previous technological revolutions that primarily affected manual labor, he argues this wave of automation will target educated professionals and middle-class workers who form the backbone of modern economies.
The Egyptian-born tech whiz, who was a millionaire by age 29, believes this massive displacement will create dangerous levels of economic inequality.
Without proper government oversight, AI technology will channel unprecedented wealth and influence to those who own or control these systems, while leaving millions of workers struggling to find their place in the new economy, according to Gawdat.
Beyond economic concerns, Gawdat anticipates serious social consequences from this rapid transformation.
Gawdat says rapid advances in AI technology will soon threaten even highly skilled professions once thought immune from automation.
Nina Lawrenson/peopleimages.com – stock.adobe.com
Gawdat said AI will trigger significant 'social unrest' as people grapple with losing their livelihoods and sense of purpose — resulting in rising rates of mental health problems, increased loneliness and deepening social divisions.
'Unless you're in the top 0.1%, you're a peasant,' Gawdat said. 'There is no middle class.'
Despite his gloomy predictions, Gawdat said the period of 'hell' will be followed by a 'utopian' era beginning after 2040, when workers will be freed from repetitive and mundane tasks.
The rapid advancements in AI have been demonstrated in products such as OpenAI's ChatGPT.
Ascannio – stock.adobe.com
Instead of being 'focused on consumerism and greed,' humanity could instead be guided by 'love, community, and spiritual development,' according to Gawdat.
Gawdat said it is incumbent on governments, individuals and businesses to take proactive measures, such as adopting universal basic income, to help people navigate the transition.
'We are headed into a short-term dystopia, but we can still decide what comes after that,' Gawdat told the podcast, emphasizing that the future remains malleable based on choices society makes today.
He argued that outcomes will depend heavily on decisions regarding regulation, equitable access to technology, and what he calls the 'moral programming' of AI algorithms.
'Our last hurrah as a species could be how we adapt, re-imagine, and humanize this new world,' Gawdat said.
Gawdat's predictions about mass AI-driven disruption are increasingly backed by mainstream economic data and analysis.
Dario Amodei, CEO of Anthropic, has warned of a 'white-collar bloodbath.'
AP
Anthropic CEO Dario Amodei has warned of a 'white-collar bloodbath,' predicting that up to half of all entry-level office jobs could vanish within five years.
The World Economic Forum says 40% of global employers expect to reduce staff due to AI, and Harvard researchers estimate that 35% of white-collar tasks are now automatable.
Meanwhile, Challenger, Gray & Christmas reports that over 27,000 job cuts since 2023 have been directly attributed to AI, with tens of thousands more expected.
Goldman Sachs and McKinsey project a multi-trillion-dollar boost to global GDP from AI, but the IMF cautions that these gains may worsen inequality without targeted policy responses.
Analysts from MIT and PwC echo Gawdat's fears of wage collapse, wealth concentration, and social unrest — unless governments act swiftly to manage the transition.