If AI Doesn't Wipe Us Out It Might Actually Make Us Stronger

Forbes · 4 days ago
AI doomers believe that advanced AI is an existential risk and will seek to kill all humanity, but if we manage to survive, will we be stronger for doing so?
In today's column, I explore the sage advice that what doesn't kill you supposedly makes you stronger. I'm sure you've heard that catchphrase many times. An inquisitive reader asked me whether this same line applies to the worrisome prediction that AI will one day wipe out humanity. In short, if AI isn't successful in doing so, does that suggest that humanity will be stronger accordingly?
Let's talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).
Humankind Is On The List
I recently examined the ongoing debate between the AI doomers and the AI accelerationists. For in-depth details on the ins and outs of the two contrasting perspectives, see my elaboration at the link here.
The discourse goes this way.
AI doomers are convinced that AI will ultimately be so strong and capable that the AI will decide to get rid of humans. The reasons that AI won't want us around are varied; perhaps the most compelling is that humanity would be the biggest potential threat to AI. Humans could scheme and possibly find a means of turning off AI or otherwise defeating AI.
The AI accelerationists emphasize that AI is going to be immensely valuable to humankind. They assert that AI will be able to find a cure for cancer, solve world hunger, and be an all-around boost to cope with human exigencies. The faster or sooner that we get to very advanced AI, the happier we will be since solutions to our societal problems will be closer at hand.
A reader has asked me whether the famous line that what doesn't kill you makes you stronger would apply in this circumstance. If the AI doomer prediction comes to pass, but we manage to avoid getting utterly destroyed, would this imply that humanity will be stronger as a result of that incredible feat of survival?
I always appreciate such thoughtful inquiries and figured that I would address the matter so that others can engage in the intriguing puzzle.
Assumption That AI Goes After Us
One quick point is that the question of becoming stronger seems out of place if AI never tries to squish us like a bug, that is, if AI is essentially neutral or benevolent as per the AI accelerationist viewpoint, or if we can control AI so that it never mounts a realistic threat. Let's then take the resolute position that the element of becoming stronger arises solely when AI overtly seeks to get rid of us.
A smarmy retort might be that we could nonetheless become stronger even if the AI isn't out to destroy us. Yes, I get that, thanks. The revered line, though, says that what doesn't kill you will make you stronger. I am going to interpret that line to mean that something must first aim to wipe you out. Only then, if you survive, will you be stronger.
The adage can certainly be interpreted in other ways, but I think it is most widely accepted in that frame of reference.
Paths Of Humankind Destruction
Envision that AI makes an all-out attempt to eradicate humankind. This is the ultimate existential risk about AI that everyone keeps bringing up. Some refer to this as 'P(doom)', meaning the probability of doom, or that AI zonks us entirely.
How would it attain this goal?
Lots of possibilities exist.
The advanced form of AI, perhaps artificial general intelligence (AGI) or maybe the further progressed artificial superintelligence (ASI), could strike in obvious and non-obvious ways. AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of AI, AGI, and ASI, see my analysis at the link here.
An obvious approach to killing humanity would be to launch nuclear arsenals that might cause a global conflagration. It might also inspire humans to go against other humans. Thus, AI simply triggers the start of something, and humanity ensures that the rest of the path is undertaken. Boom, drop the mic.
This might not be especially advantageous for AI. You see, suppose that AI gets wiped out in the same process. Are we to assume that AI is willing to sacrifice itself in order to do away with humanity?
A twist that often goes unconsidered is that AI presumably wants to achieve self-survival. If AGI or ASI is so smart that it aims to destroy us and has a presumably viable means to do so, wouldn't it seem that the AI also wants to remain intact and survive beyond the demise of humanity? That seems a reasonable assumption.
A non-obvious way of getting rid of us would be to talk us into self-destruction. Think about the current use of generative AI. You carry on discussions with AI. Suppose the AI ganged up and started telling the populace, at scale, to wipe each other out. Perhaps humanity would be spurred by this kind of messaging. The AI might even provide tips or hints on how to do so, devising clever means that would still keep the AI itself intact.
On a related tangent, I've been extensively covering the qualms that AI is dispensing mental health guidance at a population level, and we don't know what this is going to do in the long term; see the link here.
Verge Of Destruction But We Live Anyway
Assume that humanity miraculously averts the AI assault.
How did we manage to do so?
It could be that we found ways to control AI and render AI safer on a go-forward basis. The hope of humanity is that with those added controls and safety measures, we can continue to harness the goodness of AI and mitigate or prevent AI from badness. For more about the importance of ongoing research and practice associated with AI safety and security, see my coverage at the link here.
Would that count as an example of making us stronger?
I am going to vote for Yes. We would be stronger by being better able to harness AI to positive ends. We would be stronger due to discovering new ways to avoid AI evildoing. It's a twofer.
Another possibility is that we became a globally unified force of humankind. In other words, we set aside all other divisions and opted to work together to survive and defeat the AI attack. Imagine that. It seems reminiscent of those sci-fi movies where outer space aliens try to get us and luckily, we harmonize to focus on the external enemies.
Whether the unification of humanity would remain after having overcome the AI is hard to say. Perhaps, over some period of time, our resolve to be unified would weaken. In any case, it seems fair to say that for at least a while we would be stronger. Stronger in the long run? Can't say for sure.
There are more possibilities of how we might stay alive. One that's a bit outsized is that we somehow improve our own intellect and outsmart the AI accordingly. The logic for this is that maybe we rise to the occasion. We encounter AI that is as smart or smarter than us. Hidden within us is a capacity that we've never tapped into. The capability is that we can enhance our intelligence, and now, faced with the existential crisis, this indeed finally awakens, and we prevail.
That appears to be an outlier option, but it would seem to make us stronger.
What Does Stronger Entail
All in all, it seems that if we do survive, we are allowed to wear the badge of honor that we are stronger for having done so.
Maybe so, maybe not.
There are AI doomers who contend humankind won't necessarily be entirely destroyed. You see, AI might decide to enslave some or all of humanity and keep a few of us around (for some conjecture on this, see my comments at the link here). This brings up a contemplative question. If humans survive but are enslaved by AI, can we truly proclaim that humankind is stronger in that instance?
Mull that over.
Another avenue is that humans live, but it is considered a pyrrhic victory. That type of victory is one that comes at great cost, such that the end result isn't worth celebrating. Suppose that we beat the AI. Yay. Suppose this pushes us back into the Stone Age. Society is in ruins. We have barely survived.
Are we stronger?
I've got a bunch more of these. For example, imagine that we overcame AI, but it had little if anything to do with our own fortitude. Maybe the AI self-destructs inadvertently. We didn't do it, the AI did. Do we deserve the credit? Are we stronger?
An argument can be made that maybe we would be weaker. Why so? It could be that we are so self-congratulatory about our success that we believe it was our ingenious effort that prevented humankind's destruction. As a result, we march forward blindly and ultimately rebuild AI. The next time around, the AI realizes the mistake it made before and finishes the job.
Putting Our Minds To Work
I'm sure that some will decry this whole back-and-forth as ridiculous. They will claim that AI is never going to reach that level of capability. Thus, the argument has no reasonable basis at all.
Those in the AI accelerationist camp might say that the debate is unneeded because we will be able to suitably control and harness AI. The existential risk is going to be near zero. In that case, this is a lot of nonsense over something that just won't arise.
The AI doomers would likely acknowledge that the aforementioned possibilities might happen. Their beef with the discussion would probably be that arguing over whether humans will be stronger if we survive is akin to debating the placement of chairs on the deck of the Titanic. Don't be fretting about the stronger dilemma.
Instead, put all our energy into the prevention of AI doomsday.
Is all this merely a sci-fi imaginary consideration?
Stephen Hawking said this: 'The development of full artificial intelligence could spell the end of the human race.' There are a lot of serious-minded people who truly believe we ought to be thinking mindfully about where we are headed with AI.
A new mantra might be that the stronger we think about AI and the future, the stronger we will all be. The strongest posture would presumably be as a result of our being so strong that no overwhelming AI threats have a chance of emerging.
Let's indeed vote for human strength.
