
Human-level AI is not inevitable. We have the power to change course
OpenAI CEO Sam Altman captures a Silicon Valley mantra: technology marches forward inexorably.
Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.
For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.
Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).
Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply is that AGI is inevitable: it's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning inside AI labs goes, if we don't build it, someone else will – less responsibly, of course.
A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before.
No technology is inevitable, not even something as tempting as AGI.
Some AI worriers like to point out the times humanity resisted and restrained valuable technologies.
Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s.
No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts.
Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.
And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'
It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' missile defense system dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements.
These choices weren't made in a vacuum. Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'
There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts.
In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production.
The Sierra Club's Beyond Coal campaign was lesser-known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to this shift away from coal, US per capita carbon emissions are now lower than they were in 1913.
In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round.
Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.
It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet).
But governments are not profit maximizers. Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration, and, occasionally, democracy.
It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.
Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want.
At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.
Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile.
But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?
We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing.
But right now, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.) There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources.
Governments, on the other hand, aren't subject to the same financial pressures.
An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to only affect players who can afford to spend more than $100m on a single training run.
Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand.
And while the world may feel fractious, rival nations have cooperated to surprising degrees.
The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.
In the 1960s and 70s, many analysts feared that every country that could build nukes, would. But most of the world's roughly three-dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.
The draw of AGI is strong. But the risks involved are potentially civilization-ending, and it will take a civilization-scale effort to compel the powers that be to resist that draw.
Technology happens because people make it happen. We can choose otherwise.
Garrison Lovely is a freelance journalist
