
Latest news with #existentialThreat

How Did Iraq Survive ‘Existential Threat More Dangerous than ISIS'?

Asharq Al-Awsat

6 days ago

  • Politics


Diplomatic sources in Baghdad revealed to Asharq Al-Awsat that Iraqi authorities were deeply concerned about sliding into the Israeli-Iranian war, which they considered 'an existential threat to Iraq even more dangerous than that posed by ISIS when it overran a third of the country's territory.' The sources explained that 'ISIS was a foreign body that inevitably had to be expelled by the Iraqi entity, especially given the international and regional support Baghdad enjoyed in confronting it... but the war (with Israel) threatened Iraq's unity.' They described this 'existential threat' as follows:

- When the war broke out, Baghdad received messages from Israel, conveyed via Azerbaijan and other channels, stating that Israel would carry out 'harsh and painful' strikes in response to any attacks launched against it from Iraqi territory. The messages held the Iraqi authorities responsible for any such attacks originating from their soil.
- Washington shifted from the language of prior advice to direct warnings, highlighting the grave consequences that could result from any attacks carried out by Iran-aligned factions.
- Iraqi authorities feared what they described as a 'disaster scenario': that Iraqi factions would launch attacks on Israel, prompting Israel to retaliate with a wave of assassinations similar to those it conducted against Hezbollah leaders in Lebanon or Iranian generals and scientists at the start of the war.
- The sources noted that delivering painful blows to these factions would inevitably inflame the Shiite street, potentially pushing the religious authority to take a strong stance. At that point, the crisis could take on the character of a Shiite confrontation with Israel.
- This scenario raised fears that other Iraqi components would then blame the Shiite component for dragging Iraq into a war that could have been avoided. In such circumstances, the divergence in choices between the Shiite and Sunni communities could resurface, reviving the threat to Iraq's unity.
- Another risk was the possibility that the Kurds would declare that the Iraqi government was acting as if it represented only one component, and that the country was exhausted by wars, prompting the Kurdish region to prefer distancing itself from Baghdad to avoid being drawn into unwanted conflicts.
- Mohammed Shia Al Sudani's government acted with a mix of firmness and prudence. It informed the factions it would not tolerate any attempt to drag the country into a conflict threatening its unity, while keeping its channels open with regional and international powers, especially the US.
- Iraqi authorities also benefited from the position of the Iranian authorities, who did not encourage the factions to engage in the war but instead urged them to remain calm. Some observers believed that Iran did not want to risk its relations with Iraq after losing Syria.
- Another significant factor was the factions' realization that the war exceeded their capabilities, especially in light of what Hezbollah faced in Lebanon and the Israeli penetrations inside Iran itself, which demonstrated that Israel possessed precise intelligence on hostile organizations and was able to reach its targets thanks to its technological superiority and these infiltrations.
- The sources indicated that despite all the pressure and efforts, 'rogue groups' tried to prepare three attacks, but the authorities succeeded in thwarting them before they were carried out.
The sources estimated that Iran suffered a deep wound because Israel moved the battle onto Iranian soil and encouraged the US to target its nuclear facilities. They did not rule out another round of fighting 'if Iran does not make the necessary concessions on the nuclear issue.'

How Should We Regulate AI? The Same Way We Do Airlines

Bloomberg

07-07-2025

  • Business


Although there's no shortage of AI hype in Silicon Valley, it's impossible to miss how many of the technology's biggest proponents are also concerned about its possible dangers. The idea that artificial intelligence could become an existential threat is on the table: OpenAI CEO Sam Altman has speculated about AI's potential for hacking into essential computer systems or designing a biological weapon, and Tesla Inc. CEO Elon Musk has described it as 'potentially more dangerous than nukes.' Even if it never comes to that, improperly designed AI systems could still be very dangerous in critical applications, ranging from self-driving cars to healthcare. This makes regulating AI seem like a no-brainer. Yet there is no federal law or agency that broadly does so, and, as evidenced by the failed attempt to include a ban on state regulation in the Trump administration's spending bill, there are some in government and industry who would like to roll back the state-level efforts that do exist.

From Existential Threat To Hope. A Philosopher's Guide To AI

Forbes

06-07-2025



AI was never just a tool to make us more productive, or to help us do 'good'. It was always also an expression of who we are and what we are becoming.

The dark side of AI continues to reveal new faces. A few weeks ago, Geoffrey Hinton, Nobel laureate and former AI chief at Google, highlighted two ways in which AI poses an existential threat to humanity: by people misusing AI, and by AI becoming smarter than us. And this week OpenAI admitted that they don't know how to prevent ChatGPT from pushing people towards mania, psychosis and death. At the same time, AI optimists keep stressing that it is only a matter of years before AI will solve scientific, environmental, health and social problems that humanity has been struggling with for ages. And when the United Nations kicks off its global summit on AI for Good next week, it's to gather AI experts from across the world to 'identify innovative AI applications to solve global challenges.' But what if the discussion of AI's risks and opportunities, dark and bright sides, and bad and good ways to use technology is part of the existential threat we are facing?

Why AI For Good May Be A Bad Idea

When German philosopher Friedrich Nietzsche urged us to think Beyond Good and Evil (in his 1885 book of that title), he suggested that it is not what we identify, define, and decide to be 'good' that determines whether we succeed as humans. It is whether we manage to rise above our unquestioned ideas of what good looks like. Labeling some AI products as human-centric or responsible might sound like a step in the right direction towards identifying and designing innovative AI applications to solve global challenges. But it also reinforces the idea that our future depends on how AI is designed, built and regulated rather than on how we live, learn and relate to technology. And by focusing on AI when thinking and talking about our future rather than focusing on ourselves and how we exist and evolve as humans, we are not rising above our unquestioned ideas of what good looks like. Rather, we submit to the idea that permeates all technology: that good equals innovative, fast, and efficient. To rise above our unquestioned ideas about the nature and impact of AI, we need to follow Nietzsche's lead. So, here it is: A Philosopher's Guide to AI.

1. Stop Thinking Of AI As A Tool

The first step towards shifting the focus from the development of AI to our evolution as humans is to question the widespread and constantly repeated idea that AI, like any other technology, is just a tool that can be used for good as well as evil. Inspired by Nietzsche and others who set the tradition of existential philosophy in motion, German philosopher Martin Heidegger put it like this: 'Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly like to pay homage, makes us utterly blind to the essence of technology.'
In The Question Concerning Technology from 1954, Heidegger argued that the essence of technology is to give man the illusion of being in control. When we think of technology as a tool that can be used for good as well as evil, we also think that we are in control of why, when, and for what it is used. But according to Heidegger this is only partly the case. We may make a decision to buy a car to drive ourselves to work. And thus we may think of the car as a means to achieve our goal of getting to work as fast as possible. But we never made the decision that fast is better than slow. It's an idea that comes with the car. So is the idea that it should be easy and convenient for us to get to work. And that fast, easy and convenient is more important than anything else. Like all other technologies, the car comes with a promise that we can achieve more by doing less. And like all other technologies, it makes us think that this is what life is and should be about.

But to rise above our unquestioned ideas, we must not only ask the questions we are encouraged to ask when faced with a new technology – like 'how does it work?', 'when can I use it?', and 'how much easier will it be to do X?' We must also ask the questions that the essence of technology discourages us from asking – like 'do I even need technology for this?', 'what does this technology prevent me from doing?', and 'what will my life be like if I trust technology to make everything easy?'

2. Take The History Of Technology Seriously

Heidegger made it clear that although different generations of technology have different ways of influencing human beings and behaviors, our fundamental purpose for using technology remains the same: to deal with the fact that we are limited creatures, thrown into this world without knowing why and for how long. Put differently, the question concerning technology is and always was existential. It's about who we are and what we become when we try to overcome our limitations. Ever since our early ancestors began using rocks and branches as tools and weapons, our relationship with technology has been at the heart of how we live, learn and evolve as humans. And more than anything else, it has shaped our understanding of ourselves and our relationship with our surroundings.

Living in the early days of the digital revolution, Heidegger didn't know that AI would have the impact it has today. Nor did he know that AI experts would talk about their inventions as posing an existential threat to humanity. But he distinguished between different generations of technology. And he suggested that humanity was moving toward a technological era of great existential significance.

Illustration: the difference in how humans relate to technology throughout three technological eras.

Having used pre-modern tools to survive and modern technology to thrive, the idea that digital technology can help transcend the limitations set by nature doesn't seem far-fetched (see illustration). However, by not realizing that our relationship with technology is existential, AI experts seem to have missed that AI was never just a tool to make us more productive, or to help us do 'good'. It was always also an expression of who we are and what we are becoming. And by building technology that distances itself from the limitations of nature, we also began to distance ourselves from our human nature. According to Heidegger, this distancing has been going on for centuries without any of us noticing it.
The widespread debate about AI as an existential threat is a sign that this is changing. And that AI may be the starting point for us humans to finally develop a more reflective and healthy relationship with technology.

3. Make Existential Hope A Joint Venture

Heidegger concludes The Question Concerning Technology by writing: 'The closer we come to the danger, the brighter the ways into the saving power begin to shine and the more questioning we become. For questioning is the piety of thought.' While AI experts are calling for regulation, for AI development to be paused, and even for new philosophers to help them deal with the threat they see AI posing, hope shines from a completely different place than tech companies and regulators.

'Where?' you may ask. And that's just it. We are asking more existential questions about who we are, why we are here, and where we want to go as humanity than ever before. And with 'we', I don't mean philosophers, tech experts, and decision makers. I mean all of us in all sorts of contexts in all parts of the world. There is something about AI that, unlike previous generations of technology, makes us ask the questions that the essence of technology has previously discouraged us from asking. Unlike with modern technologies like cars and digital technologies like computers, we actually have a widespread debate about what AI is preventing us from doing and what our lives will be like if we trust AI to make everything easy. And this instills hope. Existential hope that we still know and are willing to do what it takes to stay human. Even when it doesn't equal innovative, fast, and efficient.

Richard Fisher, senior journalist with BBC Global News, defines existential hope as 'the opposite of existential catastrophe: It's the idea that there could be radical turns for the better, so long as we commit to bringing them to reality. Existential hope is not about escapism, utopias or pipe dreams, but about preparing the ground: making sure that opportunities for a better world don't pass us by.' With A Philosopher's Guide to AI, the questions we ask about AI offer a once-in-many-lifetimes opportunity for a better world. Let's make sure it doesn't pass us by!

U.S. Spy Agencies Assess Iran Remains Undecided on Building a Bomb

New York Times

19-06-2025

  • Politics


U.S. intelligence agencies continue to believe that Iran has yet to decide whether to make a nuclear bomb even though it has developed a large stockpile of the enriched uranium necessary for it to do so, according to intelligence and other American officials. That assessment has not changed since the intelligence agencies last addressed the question of Iran's intentions in March, the officials said, even as Israel has attacked Iranian nuclear facilities. Senior U.S. intelligence officials said that Iranian leaders were likely to shift toward producing a bomb if the American military attacked the Iranian uranium enrichment site Fordo or if Israel killed Iran's supreme leader.

The question of whether Iran has decided to complete the work of building a bomb is irrelevant in the eyes of many Iran hawks in the United States and Israel, who say Tehran is close enough to represent an existential danger to Israel. But it has long been a flashpoint in the debate over policy toward Iran and has flared again as President Trump weighs whether to bomb Fordo. White House officials held an intelligence briefing on Thursday and announced that Mr. Trump would make his decision within the next two weeks. At the White House meeting, John Ratcliffe, the C.I.A. director, told officials that Iran was very close to having a nuclear weapon.
