Human-level AI is not inevitable. We have the power to change course

The Guardian · 6 days ago
'Technology happens because it is possible,' OpenAI CEO Sam Altman told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb.
Altman captures a Silicon Valley mantra: technology marches forward inexorably.
Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity.
For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had. A true AGI would amount to creating a new species, which might quickly outsmart or outnumber us. It could see humanity as a minor obstacle, like an anthill in the way of a planned hydroelectric dam, or a resource to exploit, like the billions of animals confined in factory farms.
Altman, along with the heads of the other top AI labs, believes that AI-driven extinction is a real possibility (joining hundreds of leading AI researchers and prominent figures).
Given all this, it's natural to ask: should we really try to build a technology that may kill us all if it goes wrong?
Perhaps the most common reply says: AGI is inevitable. It's just too useful not to build. After all, AGI would be the ultimate technology – what a colleague of Alan Turing called 'the last invention that man need ever make'. Besides, the reasoning goes within AI labs, if we don't, someone else will do it – less responsibly, of course.
A new ideology out of Silicon Valley, effective accelerationism (e/acc), claims that AGI's inevitability is a consequence of the second law of thermodynamics and that its engine is 'technocapital'. The e/acc manifesto asserts: 'This engine cannot be stopped. The ratchet of progress only ever turns in one direction. Going back is not an option.'
For Altman and e/accs, technology takes on a mystical quality – the march of invention is treated as a fact of nature. But it's not. Technology is the product of deliberate human choices, motivated by myriad powerful forces. We have the agency to shape those forces, and history shows that we've done it before.
No technology is inevitable, not even something as tempting as AGI.
Some AI worriers like to point out the times humanity resisted and restrained valuable technologies.
Fearing novel risks, biologists initially banned and then successfully regulated experiments on recombinant DNA in the 1970s.
No human has been reproduced via cloning, even though it's been technically possible for over a decade, and the only scientist to genetically engineer humans was imprisoned for his efforts.
Nuclear power can provide consistent, carbon-free energy, but vivid fears of catastrophe have motivated stifling regulations and outright bans.
And if Altman were more familiar with the history of the Manhattan Project, he might realize that the creation of nuclear weapons in 1945 was actually a highly contingent and unlikely outcome, motivated by a mistaken belief that the Germans were ahead in a 'race' for the bomb. Philip Zelikow, the historian who led the 9/11 Commission, said: 'I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly even if an atomic bomb ever is built.'
It's now hard to imagine a world without nuclear weapons. But in a little-known episode, then president Ronald Reagan and Soviet leader Mikhail Gorbachev nearly agreed to ditch all their bombs (a misunderstanding over the 'Star Wars' satellite defense system dashed these hopes). Even though the dream of full disarmament remains just that, nuke counts are less than 20% of their 1986 peak, thanks largely to international agreements.
These choices weren't made in a vacuum. Reagan was a staunch opponent of disarmament before the millions-strong Nuclear Freeze movement got to him. In 1983, he commented to his secretary of state: 'If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.'
There are extremely strong economic incentives to keep burning fossil fuels, but climate advocacy has pried open the Overton window and significantly accelerated our decarbonization efforts.
In April 2019, the young climate group Extinction Rebellion (XR) brought London to a halt, demanding the UK target net-zero carbon emissions by 2025. Their controversial civil disobedience prompted parliament to declare a climate emergency and the Labour party to adopt a 2030 target to decarbonize the UK's electricity production.
The Sierra Club's Beyond Coal campaign was lesser-known but wildly effective. In just its first five years, the campaign helped shutter more than one-third of US coal plants. Thanks primarily to its move from coal, US per capita carbon emissions are now lower than they were in 1913.
In many ways, the challenge of regulating efforts to build AGI is much smaller than that of decarbonizing. Eighty-two percent of global energy production comes from fossil fuels. Energy is what makes civilization work, but we're not dependent on a hypothetical AGI to make the world go round.
Further, slowing and guiding the development of future systems doesn't mean we'd need to stop using existing systems or developing specialist AIs to tackle important problems in medicine, climate and elsewhere.
It's obvious why so many capitalists are AI enthusiasts: they foresee a technology that can achieve their long-time dream of cutting workers out of the loop (and the balance sheet).
But governments are not profit maximizers. Sure, they care about economic growth, but they also care about things like employment, social stability, market concentration, and, occasionally, democracy.
It's far less clear how AGI would affect these domains overall. Governments aren't prepared for a world where most people are technologically unemployed.
Capitalists often get what they want, particularly in recent decades, and the boundless pursuit of profit may undermine any regulatory effort to slow the speed of AI development. But capitalists don't always get what they want.
At a bar in San Francisco in February, a longtime OpenAI safety researcher pronounced to a group that the e/accs shouldn't be worried about the 'extreme' AI safety people, because they'll never have power. The boosters should actually be afraid of AOC and Senator Josh Hawley because they 'can really fuck things up for you'.
Assuming humans stick around for many millennia, there's no way to know we won't eventually build AGI. But this isn't really what the inevitabilists are saying. Instead, the message tends to be: AGI is imminent. Resistance is futile.
But whether we build AGI in five, 20 or 100 years really matters. And the timeline is far more in our control than the boosters will admit. Deep down, I suspect many of them realize this, which is why they spend so much effort trying to convince others that there's no point in trying. Besides, if you think AGI is inevitable, why bother convincing anybody?
We had the computing power required to train GPT-2 more than a decade before OpenAI actually did it, but people didn't know whether it was worth doing.
But right now, the top AI labs are locked in such a fierce race that they aren't implementing all the precautions that even their own safety teams want. (One OpenAI employee announced recently that he quit 'due to losing confidence that it would behave responsibly around the time of AGI'.) There's a 'safety tax' that labs can't afford to pay if they hope to stay competitive; testing slows product releases and consumes company resources.
Governments, on the other hand, aren't subject to the same financial pressures.
An inevitabilist tech entrepreneur recently said regulating AI development is impossible 'unless you control every line of written code'. That might be true if anyone could spin up an AGI on their laptop. But it turns out that building advanced, general AI models requires enormous arrays of supercomputers, with chips produced by an absurdly monopolistic industry. Because of this, many AI safety advocates see 'compute governance' as a promising approach. Governments could compel cloud computing providers to halt next generation training runs that don't comply with established guardrails. Far from locking out upstarts or requiring Orwellian levels of surveillance, thresholds could be chosen to only affect players who can afford to spend more than $100m on a single training run.
Governments do have to worry about international competition and the risk of unilateral disarmament, so to speak. But international treaties can be negotiated to widely share the benefits from cutting-edge AI systems while ensuring that labs aren't blindly scaling up systems they don't understand.
And while the world may feel fractious, rival nations have cooperated to surprising degrees.
The Montreal Protocol fixed the ozone layer by banning chlorofluorocarbons. Most of the world has agreed to ethically motivated bans on militarily useful weapons, such as biological and chemical weapons, blinding laser weapons, and 'weather warfare'.
In the 1960s and 70s, many analysts feared that every country that could build nukes, would. But most of the world's roughly three-dozen nuclear programs were abandoned. This wasn't the result of happenstance, but rather the creation of a global nonproliferation norm through deliberate statecraft, like the 1968 Non-Proliferation Treaty.
On the few occasions when Americans were asked if they wanted superhuman AI, large majorities said 'no'. Opposition to AI has grown as the technology has become more prevalent. When people argue that AGI is inevitable, what they're really saying is that the popular will shouldn't matter. The boosters see the masses as provincial neo-Luddites who don't know what's good for them. That's why inevitability holds such rhetorical allure for them; it lets them avoid making their real argument, which they know is a loser in the court of public opinion.
The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the necessary powers to resist it.
Technology happens because people make it happen. We can choose otherwise.
Garrison Lovely is a freelance journalist