
Latest news with #MaxTegmark

Humanity 3.0: AI Makes Us Wiser — Just Not The Way We Think

Forbes

3 days ago

  • Forbes

Humanity 3.0: AI Makes Us Wiser — Just Not The Way We Think

Photo: The Shadow Robot company's dextrous hand robot, with a bank of 40 air muscles capable of 24 movements, holds an apple at the Streetwise Robots event at the Science Museum's Dana Centre in London, May 6, 2008.

In his 2017 New York Times bestseller, Life 3.0: Being Human in the Age of Artificial Intelligence, MIT professor Max Tegmark argued that AI has the potential to transform our future more than any other technology. Five years before ChatGPT made the risks and opportunities of AI the overriding topic of conversation in companies and society, Tegmark asked the same questions everyone is asking today: What career advice should we give our kids? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? How can we make AI systems more robust? Should we fear an arms race in lethal autonomous weapons? Will AI help life flourish like never before or give us more power than we can handle?

In the book's press materials, readers are asked "What kind of future do you want?" with the promise that "This book empowers you to join what may be the most important conversation of our time." But was it Tegmark's book that empowered us to join a conversation about the implications of AI? Or was it the AI technology itself?

The Question Concerning Technology

I have previously referred to the German philosopher Martin Heidegger in my articles here at Forbes. In 'This Existential Threat Calls For Philosophers, Not AI Experts', I shared his 1954 prediction that unless we get a better grip on what he called the essence of technology, we will lose touch with reality and ourselves. And in my latest piece, 'From Existential Threat to Hope: A Philosopher's Guide to AI', I introduced his view that the essence of technology is to give man the illusion of being in control.

Heidegger has been accused of having an overly pessimistic view of the development of technology in the 20th century. But his distinction between pre-modern and modern technology, and how he saw the latter evolving into what would soon become digital technology, also suggests an optimistic angle that is useful when discussing the risks and opportunities of the AI we see today.

According to Heidegger, our relationship with technology is so crucial to who we are and why we do the things we do that it is almost impossible for us to question it. And yet it is only by questioning our relationship with technology that we can remain and develop as humans. Throughout history, it has become increasingly difficult for us to question the influence technology has on how we think, act, and interact. Meanwhile, we have increasingly surrendered to the idea of speed, efficiency, and productivity. But, he said as early as 1954, the advent of digital technology suggested something else was coming: something that would make it easier for humanity to ask the questions we have neglected to ask for far too long.

AI Reconnects Us With Our Questioning Nature

While the implications of AI for science, education and our personal and professional lives are widely debated, few ask why and how we came to debate these things. What is it about AI that makes us question our relationship with technology? Why are tech companies spending time – and money – researching how AI affects critical thinking? How does AI differ from previous technologies, which only philosophers and far-sighted tech people questioned in the way that everyone questions AI today?

In his analysis of the essence of technology, Heidegger stated that 'the closer we come to the danger, the brighter the ways into the saving power begin to shine and the more questioning we become.' This suggests that our questioning response to AI not only heralds danger – the existential threat that some AI experts speak of – it also heralds existential hope for a reconnection with our human nature.

AI Reminds Us Of Our Natural Limitations

The fact that we are asking questions we have neglected to ask for millennia not only tells us something important about AI. It also tells us something important about ourselves. From stone axes to engines to social media, the essence of technology has made us think of our surroundings as something we can design and decide how to be. But AI is different. AI doesn't make us think we're in control. On the contrary, AI is the first technology in human history that makes it clear that we are not in control.

AI reminds us that it's not just nature outside us that has limitations. It's also nature inside us. It reminds us of our limitations in time and space, and that our natural limitations are not just physical, but also cognitive and social. AI brings us face to face with our ignorance and challenges us to ask who we are and what we want to do when we are not in control: Do we insist on innovating or regulating ourselves back into control? Or do we finally recognize that we never were and never will be in control? Because we are part of nature, not above or beyond it.

Photo: Ai Weiwei's 'Circle of Animals: Zodiac Heads' Snake sculpture outside the Adler Planetarium in Chicago. When faced with our own ignorance, we humans start asking questions; and questioning, according to the German philosopher Martin Heidegger, is the piety of thought.

AI Makes Us Wiser By Reminding Us Who We Are

Like the serpent in the Garden of Eden, AI presents us with a choice: Either we ignore our ignorance and pretend we still know everything there is to know. Or we live with the awareness that we may never find what we are looking for. All we can do is keep asking.

For centuries we have convinced ourselves that we can use technology to speed up natural processes. Now the tables are turned, and herein lies the real benefit of AI: not how we can use AI to gain more knowledge, or whether or not AI makes our critical thinking rot, but that it faces us with our own ignorance and our own limitations as humans. Like the serpent in the Garden of Eden, AI makes us ask questions about ourselves and our relationships with our surroundings. Confronting us with our own ignorance, it makes us seek a deeper understanding of ourselves and of how we relate to technology and the nature in and around us: philosophical questions about who we are, why we are here, and what is the right thing for us to do.

What makes us human is not what we know, or how our cognition, intelligence, and mind work. It's that we know that we don't know, and our ability to live with that not-knowing and develop strategies for dealing with it.

Why the AI Future Is Unfolding Faster Than Anyone Expected

Bloomberg

20-05-2025

  • Science
  • Bloomberg

Why the AI Future Is Unfolding Faster Than Anyone Expected

AI is improving more quickly than we realize. The economic and societal impact could be massive.

By Brad Stone, May 20, 2025 at 8:30 AM EDT

When OpenAI introduced ChatGPT in 2022, people could instantly see that the field of artificial intelligence had dramatically advanced. We all speak a language, after all, and could appreciate how the chatbot answered questions in a fluid, close-to-human style. AI has made immense strides since then, but many of us are—and let me put this delicately—too unsophisticated to notice.

Max Tegmark, a professor of physics at the Massachusetts Institute of Technology, says our limited ability to gather specialized knowledge makes it much harder for us to recognize the disconcerting pace of improvements in technology. Most people aren't high-level mathematicians and may not know that, just in the past few years, AI's mastery has progressed from high-school-level algebra to ninja-level calculus. Similarly, there are relatively few musical virtuosos in the world, but AI has recently become adept at reading sheet music, understanding musical theory, even creating new music in major genres. 'What a lot of people are underestimating is just how much has happened in a very short amount of time,' Tegmark says. 'Things are going very fast now.'

In San Francisco, still for now the center of the AI action, one can track these advances in the waves of new computer learning methods, chatbot features and podcast-propagated buzzwords. In February, OpenAI unveiled a tool called Deep Research that functions like a resourceful colleague, responding to in-depth queries by digging up facts on the web, synthesizing information and generating chart-filled reports. In another major development, both OpenAI and Anthropic—co-founded by Chief Executive Officer Dario Amodei and a breakaway group of former OpenAI engineers—developed tools that let users control whether a chatbot engages in 'reasoning': They can direct it to deliberate over a query for an extended period to arrive at more accurate or thorough answers. Another fashionable trend is called agentic AI—autonomous programs that can (theoretically) perform tasks for a user without supervision, such as sending emails or booking restaurant reservations. Techies are also buzzing about 'vibe coding'—not a new West Coast meditation practice but the art of positing general ideas and letting popular coding assistants like Microsoft Corp.'s GitHub Copilot or Cursor, made by the startup Anysphere Inc., take it from there.

As developers blissfully vibe code, there's also been an unmistakable vibe shift in Silicon Valley. Just a year ago, breakthroughs in AI were usually accompanied by furrowed brows and wringing hands, as tech and political leaders fretted about the safety implications. That changed sometime around February, when US Vice President JD Vance, speaking at a global summit in Paris focused on mitigating harms from AI, inveighed against any regulation that might impede progress. 'I'm not here this morning to talk about AI safety,' he said. 'I'm here to talk about AI opportunity.' When Vance and President Donald Trump took office, they dashed any hope of new government rules that might slow the AI juggernauts. On his third day in office, Trump rescinded an executive order from his predecessor, Joe Biden, that set AI safety standards and asked tech companies to submit safety reports for new products. At the same time, AI startups have softened their calls for regulation.

In 2023, OpenAI CEO Sam Altman told Congress that the possibility AI could run amok and hurt humans was among his 'areas of greatest concern' and that companies should have to get licenses from the government to operate new models. At the TED Conference in Vancouver this April, he said he no longer favored that approach, because he'd 'learned more about how the government works.'

It's not unusual in Silicon Valley to see tech companies and their leaders contort their ideologies to fit the shifting political winds. Still, the intensity over the past few months has been startling to watch. Many tech companies have stopped highlighting existential AI safety concerns, shed employees focused on the issue (along with diversity, sustainability and other Biden-era priorities) and become less apologetic about doing business with militaries at home and abroad, bypassing concerns from staff about placing deadly weapons in the hands of AI. Rob Reich, a professor of political science and senior fellow at the Institute for Human-Centered AI at Stanford University, says 'there's a shift to explicitly talking about American advantage. AI security and sovereignty are the watchwords of the day, and the geopolitical implications of building powerful AI systems are stronger than ever.'

If Trump's policies are one reason for the change, another is the emergence of DeepSeek and its talented, enigmatic CEO, Liang Wenfeng. When the Chinese AI startup released its R1 model in the US in January, analysts marveled at the quality of a product from a company that had raised far less capital than its US rivals and was supposedly using data centers with less powerful Nvidia Corp. chips. DeepSeek's chatbot shot to the top of the charts on app stores, and US tech stocks promptly cratered on the possibility that the upstart had figured out a more efficient way to reap AI's gains. The uproar has quieted since then, but Trump has further restricted the sale of powerful American AI chips to China, and Silicon Valley now watches DeepSeek and its Chinese peers with a sense of urgency. 'Everyone has to think very carefully about what is at stake if we cede leadership,' says Alex Kotran, CEO of the AI Education Project.

Losing to China isn't the only potential downside, though. AI-generated content is becoming so pervasive online that it could soon sap the web of any practical utility, and the Pentagon is using machine learning to hasten humanity's possible contact with alien life. Let's hope they like us.

Nor has this geopolitical footrace calmed the widespread fear of economic damage and job losses. Take just one field: computer programming. Sundar Pichai, CEO of Alphabet Inc., said on an earnings call in April that AI now generates 'well over 30%' of all new code for the company's products. Garry Tan, CEO of the startup accelerator Y Combinator, said on a podcast that for a quarter of the startups in his winter program, 95% of their lines of code were AI-generated.

MIT's Tegmark, who's also president of an AI safety advocacy organization called the Future of Life Institute, finds solace in his belief that a human instinct for self-preservation will ultimately kick in: Pro-AI business leaders and politicians 'don't want someone to build an AI that will overthrow the government any more than they want plutonium to be legalized.'

He remains concerned, though, that the inexorable acceleration of AI development is occurring just outside the visible spectrum of most people on Earth, and that it could have economic and societal consequences beyond our current imagination. 'It sounds like sci-fi,' Tegmark says, 'but I remind you that ChatGPT also sounded like sci-fi as recently as a few years ago.'

More from the AI Issue:
  • DeepSeek's 'Tech Madman' Founder Is Threatening US Dominance in AI Race – The company's sudden emergence illustrates how China's industry is thriving despite Washington's efforts to slow it down.
  • Microsoft's CEO on How AI Will Remake Every Company, Including His – Nervous customers and a volatile partnership with OpenAI are complicating things for Satya Nadella and the world's most valuable company.
  • America's Leading Alien Hunters Depend on AI to Speed Their Search – Harvard's Galileo Project has brought high-end academic research to a once-fringe pursuit, and the Pentagon is watching.
  • How AI Has Already Changed My Job – Workers from different industries talk about the ways they're adapting.
  • Maybe AI Slop Is Killing the Internet, After All – The assertion that bots are choking off human life online has never seemed more true.
  • Anthropic Is Trying to Win the AI Race Without Losing Its Soul – Dario Amodei has transformed himself from an academic into the CEO of a $61 billion startup.
  • Why Apple Still Hasn't Cracked AI – Insiders say continued failure to get artificial intelligence right threatens everything from the iPhone's dominance to plans for robots and other futuristic products.
  • 10 People to Watch in Tech: From AI Startups to Venture Capital – A guide to the people you'll be hearing more about in the near future.

Calculating The Risk Of ASI Starts With Human Minds

Forbes

12-05-2025

  • Science
  • Forbes

Calculating The Risk Of ASI Starts With Human Minds

Photo: Counts-per-minute scale for radiation contamination and microsievert-per-hour scale for radiation dose rate on the dial of a radiation survey meter.

Wishful thinking is not enough, especially when it comes to artificial intelligence. On 10 May 2025, MIT physicist Max Tegmark told The Guardian that AI labs should emulate Oppenheimer's Trinity-test calculus before releasing Artificial Super-Intelligence. 'My assessment is that the "Compton constant", the probability that a race to AGI culminates in loss of control of Earth, is >90%.' In a thread announcing the accompanying paper, his team writes: 'In our new paper, we develop scaling laws for scalable oversight: oversight and deception ability predictably scale as a function of LLM intelligence!' The resulting conclusion is (or should be) straightforward: optimism is not a policy; quantified risk is.

Tegmark is not a lone voice in the wilderness. In 2023, more than 1,000 researchers and CEOs — including Sam Altman, Demis Hassabis and Geoffrey Hinton — signed the one-sentence Safe AI declaration stating that 'mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.' Over the past two years, the question of artificial super-intelligence has migrated from science fiction to the board agenda. Ironically, many of those who called for the moratorium followed the approach of 'wash me, but don't get me wet': they publicly claimed the need to delay further development of AI while, at the same time, pouring billions into exactly that. One might be excused for perceiving a misalignment of words and works.

Turning dread into numbers is possible. Philosopher-analyst Joe Carlsmith decomposes the danger into six testable premises in his report Is Power-Seeking AI an Existential Risk? Feed your own probabilities into the model and it delivers a live risk register; Carlsmith's own guess is 'roughly ten per cent' that misaligned systems cause civilizational collapse before 2070. That's in 45 years…

Corporate labs are starting to internalize such arithmetic. OpenAI's updated Preparedness Framework defines capability thresholds in biology, cybersecurity and self-improvement; in theory, no model that breaches a 'High-Risk' line ships until counter-measures push the residual hazard below a documented ceiling.

Numbers matter because AI capabilities are already outrunning human gut feel. A peer-reviewed study covered by TIME shows today's best language models outperforming PhD virologists at troubleshooting wet-lab protocols, amplifying both the promise of rapid vaccine discovery and the peril of DIY bioweapons.

Risk, however, is only half the ledger. A December 2024 Nature editorial argues that achieving Artificial General Intelligence safely will require joint academic-industry oversight, not paralysis. The upside — decarbonisation breakthroughs, personalised education, drug pipelines measured in days rather than decades — is too vast to abandon. Research into how to reap that upside without Russian-roulette odds is accelerating:
  • Constitutional AI. Anthropic's paper Constitutional AI: Harmlessness from AI Feedback shows how large models can self-criticise against a transparent rule-set, reducing toxic outputs without heavy human labelling. Yet at the same time, Anthropic's own research shows that its model, Claude, is capable of actively deceiving users.
  • Cooperative AI. The Cooperative AI Foundation now funds benchmarks that reward agents for collaboration by default, shifting incentives from zero-sum to win-win.

The challenge is that these approaches are exceptional. Overall, the majority of models mirror the competitive standard that rules human society. Still, these strands of research converge on a radical design target: ProSocial ASI — systems whose organising principle is altruistic value creation.

Here lies the interesting insight: even a super-intelligence will mirror the mindset of its makers. Aspirations shape algorithms. Build under a paradigm of competition and short-term profit, and you risk spawning a digital Machiavelli. Build under a paradigm of cooperation and long-term stewardship, and the same transformer stack can become a planetary ally. Individual aspirations are, therefore, the analogue counterpart of machine intentions. The most important 'AI hardware' remains the synaptic network inside every developer's skull.

Risk assessment must flow seamlessly into risk reduction and into value alignment. Think of the journey as a set of integrated moves, more narrative than a technological checklist, and notice how each move binds the digital to the analogue: governance paperwork without culture change is theatre; culture change without quantitative checkpoints is wishful thinking. Three moves — align, scrutinize, incentivize — distill intuition into insight, and panic into preparation.

Align. Alignment is literally the 'A' in Artificial Super-Intelligence: without an explicit moral compass, raw capability magnifies whatever incentives it finds. What it looks like in practice: draft a concise, public constitution that states the prosocial goals and red lines of the system, and bake it into training objectives and evals.

Scrutinize. Transparency lets outsiders audit whether the 'S' (super-intelligence) remains safe, turning trust into verifiable science. What it looks like in practice: measure what matters—capability thresholds, residual risk, cooperation scores—and publish the numbers with every release.

Incentivize. Proper incentives ensure the 'I' (intelligence) scales collective flourishing rather than zero-sum dominance. What it looks like in practice: reward collaboration and teach humility inside the dev team; tie bonuses, citations, and promotions to cooperative benchmarks, not just raw performance.

This full ASI contingency workflow fits onto a single coffee mug. It may flip ASI from an existential dice-roll into a cooperative engine and remind us that the intelligence that people and planet need now more than ever is, at its core, no-tech and analogue: clear purpose, shared evidence, and ethical culture. Silicon merely amplifies the human mindset we embed in it.

The Compton constant turns existential anxiety into a number on a whiteboard. But numbers alone will not save us. Whether ASI learns to cure disease or cultivate disinformation depends less on its gradients than on our goals. Design for narrow advantage and we may well get the dystopias we fear. Design for shared flourishing — guided by transparent equations and an analogue conscience — and super-intelligence can become our partner on a journey that takes us to a space where people and planet flourish.

In the end, the future of AI is not about machines outgrowing humanity; it is about humanity growing into the values we want machines to scale. Measured rigorously, aligned early and governed by the best in us, ASI can help humans thrive. The blueprint is already in our hands — and, more importantly, in our minds and hearts.
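Carlsmith's framework lends itself to a back-of-the-envelope exercise: assign a probability to each premise, multiply them together, and you have your own risk register. The Python sketch below illustrates only that mechanic; the premise wording is loosely paraphrased and the numbers are placeholders chosen for illustration, not Carlsmith's published estimates.

```python
# Sketch of a Carlsmith-style risk decomposition: the headline figure is the
# product of the probabilities assigned to a chain of premises. Both the
# premise wording and the numbers below are illustrative placeholders.

premises = {
    "advanced, agentic AI systems become feasible to build": 0.8,
    "there are strong incentives to build and deploy them": 0.8,
    "alignment proves hard, so misaligned systems are deployed": 0.5,
    "some deployed systems seek power in high-impact ways": 0.5,
    "power-seeking scales to the disempowerment of humanity": 0.5,
    "that disempowerment amounts to existential catastrophe": 0.9,
}

def overall_risk(probabilities: dict[str, float]) -> float:
    """Multiply the premise probabilities to get the implied overall risk."""
    risk = 1.0
    for p in probabilities.values():
        risk *= p
    return risk

if __name__ == "__main__":
    for claim, p in premises.items():
        print(f"{p:>4.0%}  {claim}")
    print(f"\nImplied overall probability: {overall_risk(premises):.1%}")
```

Swapping in your own numbers makes the point of the exercise visible: the headline risk is only as defensible as the weakest estimate in the chain.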

AI firms urged to calculate existential threat amid fears it could escape human control

The Guardian

10-05-2025

  • Science
  • The Guardian

AI firms urged to calculate existential threat amid fears it could escape human control

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer's first nuclear test before they release all-powerful systems.

Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945, after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity.

In a paper published by Tegmark and three of his students at the Massachusetts Institute of Technology (MIT), they recommend calculating the 'Compton constant' – defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be 'slightly less' than one in three million.

Tegmark said that AI firms should take responsibility for rigorously calculating whether Artificial Super Intelligence (ASI) – a term for a theoretical system that is superior to human intelligence in all aspects – will evade human control.

'The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it,' he said. 'It's not enough to say 'we feel good about it'. They have to calculate the percentage.'

Tegmark said a Compton constant consensus calculated by multiple companies would create the 'political will' to agree global safety regimes for AIs.

Tegmark, a professor of physics and AI researcher at MIT, is also a co-founder of the Future of Life Institute, a non-profit that supports safe development of AI and published an open letter in 2023 calling for a pause in building powerful AIs. The letter was signed by more than 33,000 people including Elon Musk – an early supporter of the institute – and Steve Wozniak, the co-founder of Apple.

The letter, produced months after the release of ChatGPT launched a new era of AI development, warned that AI labs were locked in an 'out-of-control race' to deploy 'ever more powerful digital minds' that no one can 'understand, predict, or reliably control'.

Tegmark spoke to the Guardian as a group of AI experts including tech industry professionals, representatives of state-backed safety bodies and academics drew up a new approach for developing AI safely. The Singapore Consensus on Global AI Safety Research Priorities report was produced by Tegmark, the world-leading computer scientist Yoshua Bengio and employees at leading AI companies such as OpenAI and Google DeepMind. It set out three broad areas to prioritise in AI safety research: developing methods to measure the impact of current and future AI systems; specifying how an AI should behave and designing a system to achieve that; and managing and controlling a system's behaviour.

Referring to the report, Tegmark said the argument for safe development in AI had recovered its footing after the most recent governmental AI summit in Paris, when the US vice-president, JD Vance, said the AI future was 'not going to be won by hand-wringing about safety'.

Tegmark said: 'It really feels the gloom from Paris has gone and international collaboration has come roaring back.'
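Tegmark's proposed 'Compton constant consensus' implies pooling estimates from several companies. The article does not specify how such a consensus would be computed, so the Python sketch below is purely illustrative: the lab names and figures are made up, and the simple range-plus-average summary stands in for whatever aggregation method the industry might actually agree on.

```python
# Illustrative pooling of "Compton constant" estimates: each lab reports its
# own probability that a super-intelligent system escapes human control.
# Lab names, numbers and the averaging method are assumptions for this sketch.

from statistics import mean

lab_estimates = {
    "Lab A": 0.20,
    "Lab B": 0.35,
    "Lab C": 0.90,  # a pessimistic assessment in the spirit of Tegmark's >90%
}

def consensus(estimates: dict[str, float]) -> tuple[float, float, float]:
    """Return the lowest, highest and mean reported Compton constant."""
    values = list(estimates.values())
    return min(values), max(values), mean(values)

if __name__ == "__main__":
    low, high, avg = consensus(lab_estimates)
    print(f"Reported range: {low:.0%} to {high:.0%}")
    print(f"Simple average: {avg:.0%}")
```

Even a crude summary like this would make disagreements between labs visible, which is arguably the point of Tegmark's call for published figures rather than reassurances.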
