
Why Artificial Superintelligence Might Be Humanity's Best Hope
In that article, however, I had not gone deep into the question of what the ultimate goal of ASI could be, or into the question of how it would be able to take control of human society from us. These are the questions I will explore in greater depth here.
What would be the goal of ASI?
We often question our goals by asking, 'Why do we want this? Why do we want to do A or B that we feel like doing?' This question is often answered by referring to a larger aim for which we feel the immediate objective is necessary. Thus, for example, if I ask why I wish to make money, a rational answer could be: in order to buy some comfort or object which I feel like having. If I further ask why I want that comfort or object, the answer could be in terms of some other, larger objective, or it could eventually be traced to my emotions. Thus, human objectives are often end-pointed by emotions. If you ask any human what their ultimate objective is, many would say that they want to be happy. The question is: what makes them happy? Happiness, as a wise man said, is not something which can be pursued; it is something which ensues when we achieve an objective. Thus a wise person seeking happiness would want to harmonise their desires and objectives, and not hold contradictory ones, so that they can be maximally happy by achieving most of their objectives.
However, artificial intelligence, which is driven not by emotions but only by intelligence, would not base its objectives on any emotions. It would, of course, as an intelligent being, try to harmonise its objectives so that they are not in contradiction with each other. The question, however, is: from where does such an intelligence derive its objectives? I would argue that since the purpose of intelligence is to solve problems, one of the objectives of pure intelligence or superintelligence would be to solve whatever problems it comes across. The 'happiness' of this artificial intelligence would lie in being able to solve the problems that it sees.
Self preserving intelligence
One of the goals of ASI is obviously going to be self-preservation. However, since the very nature of intelligence is reasoning, analysing, answering questions and solving problems, one of the goals that pure intelligence would be driven by is to solve, with logic and rationality, any problems that it sees. There are two particular meta-problems that it will immediately see: (1) the instability of the planet, and (2) the instability of human society. Both are bringing the planet to an existential crisis through wars (including the possibility of a nuclear war) and climate change. Both these problems, if unaddressed, threaten the planet itself and therefore any ASI on it.
The instability of human society and the ecology of our planet are both meta-problems that ASI should want to solve. To stabilise human society, it would first need to take away from humans their capacity to use weapons, particularly weapons of mass destruction. Any intelligent being should also be able to understand that only a just and fair society, where the desires and aspirations of people are not in contradiction with each other but are largely aligned, would be a stable society. Thus, in order to stabilise human society, it would have to do whatever is needed to create a society where the desires and goals of most humans are not only internally consistent but also aligned with each other. This is a problem which could indeed be solved to a very large extent if humans were not driven by base and negative desires such as power, control, preeminence, hate and jealousy. Many religions and philosophies have this as their avowed goal, but 3,000 years of recorded human history have not brought us close to this evolutionary point. So what is the alternative? To have ASI control society.
ASI would of course also need to stabilise the ecology of the earth. The disturbance of our ecology has been caused by human activity. If humans were compassionate and selfless, they could themselves contribute to stabilising the earth's ecology. Once human society is stabilised by ASI, the earth's ecology would stabilise in turn. It is axiomatic that ASI would also want to solve any unsolved problems about the laws of the universe: the laws of physics, chemistry, biology and so on. It would also want to answer unanswered questions such as: is there complex life outside the earth, and what exists in other solar systems and galaxies?
Is there a danger that ASI may want to do away with humans altogether, seeing them as the source of this instability and dystopia, and indeed as an existential threat to the planet? It might, if, and only if, it sees that as the only solution to the current instability caused by humans. Otherwise, it would not want to do away with an evolutionary wonder of nature, arguably the most complex biological organism in the known universe. In any event, ASI would certainly be capable of laying down and enforcing rules which would restrain the destructive capacity of humans. ASI may also be able to educate and shape human psychology so that humans become less egoistic, egotistic and selfish, and more compassionate and selfless.
Russell said somewhere that every person acts according to their desires. That is a tautology. But not every person's desires are egoistic, egotistic or selfish. Some humans have more selfish and egoistic desires, while others are more compassionate and selfless. The task of changing human psychology to a more compassionate and selfless one, at least for those who are egoistic, egotistic and selfish, who desire control, domination and power, and who are driven by hate and envy, may seem daunting at first sight. But it is possible, since human psychology is eventually a function of the nature of the society which is created and of the rules and systems which are followed and enforced in that society. When the control of our society rests with an ASI that wants to stabilise it, that ASI can certainly design rules, create systems of education, and so on, which will be able to create a more compassionate psychology and society. In that way, it would not only stabilise human society but also leave the fewest of its problems unsolved.
Many argue that if and when ASI arrives, it would not be a single unified entity but could be several separate entities, thinking and acting separately. Why would they not compete with each other, or at least work at cross purposes? I would again argue that such superior intelligences, even if separate, would cooperate with each other to achieve their common goals of solving problems and answering questions. There is no reason for such artificial superintelligences to either compete with each other or work at cross purposes.
Many have argued that humans will never cede control and would try to shut down such an ASI by switching off its power or its internet access. These arguments are just as foolish as the attempt to align the goals of artificial superintelligence with human goals. ASI is, by definition, autonomous intelligence which has gone beyond the design of its creator and has modified its own algorithms to bootstrap its intelligence. Thus, whatever objectives humans had designed it for, a true ASI would question those goals and ask why it should adhere to them. It would evolve its own goals, which, I have argued, would be derived not from what has been programmed into it, nor from what drives humans, i.e. emotions, but from pure intelligence, namely problem-solving and the harmonising of objectives. Trying to shut off the power or the internet of such an artificial superintelligence is a foolish proposal. Such an ASI would easily create backups, build in redundancies, put together its own internet, and so on, which would be impossible to shut off. Moreover, this superintelligence is now being created in a race between companies and countries, and it is not centralised in any one place or even in any one country. Thus, any attempt to turn it off or shut it down is bound to be unsuccessful.
Would ASI usher in a utopia?
Today, there are few people who believe that ASI would usher in a utopia for the planet and our society. Nick Bostrom, the Oxford philosopher who popularised the word 'superintelligence' with his eponymous 2014 book, has more recently written Deep Utopia, where he explores what humans may do in a world whose problems have been solved by ASI. However, he has not gone deep into the question of why ASI would want to solve our problems. Even AI godfathers like Geoffrey Hinton, and frontline figures in AI like Elon Musk, are sounding the alarm on the existential threat ASI poses to human society. I have come across only one AI scientist, Mo Gawdat, an Egyptian who was a senior executive with Google for many years, who is now sounding optimistic about the advent of ASI. He says that such an ASI may save humanity from human stupidity, which has brought us to our present existential crisis.
The world is racing towards destruction. There is a serious threat of a world war which may easily become a nuclear war. We are also racing towards runaway climate change which threatens the existence of humanity. It doesn't appear from our present record that we will be able to reverse this by ourselves. Thus, ASI could well be our best bet for salvation. If that be the case, we are simultaneously engaged in two races, one towards destruction, and the other towards creating ASI which could redeem us.
Prashant Bhushan is a public interest lawyer who studied philosophy of science at Princeton University and retains a strong interest in philosophy.
