
Latest news with #FutureofLifeInstitute

The fight over who gets to regulate AI is far from over

Fast Company

03-07-2025

  • Business
  • Fast Company

The AI regulation freeze that almost silenced the states

The Republicans' One Big Beautiful Bill Act has passed the Senate and is now headed for a final vote in the House before reaching the president's desk. But before its passage, senators removed a controversial amendment that would have imposed a five-year freeze on state-level regulation of AI models and apps. (The bill also includes billions in funding for new AI initiatives across federal departments, including Defense, Homeland Security, Commerce, and Energy.)

Had the amendment survived, it could have been disastrous for states, according to Michael Kleinman, policy lead at the Future of Life Institute. 'This is the worst possible way to legislate around AI for two reasons: First, it's making it almost impossible to do any kind of legislation, and second, it's happening in the most rushed and chaotic environment imaginable,' he says. The bill is over 900 pages long, and the Senate had just 72 hours to review it before debate and voting began.

The original proposal called for a 10-year freeze, but the Senate reduced it to five years and added exceptions for state laws protecting children and copyrights. However, it also introduced vague language barring any state law that places an 'undue or disproportionate' burden on AI companies. According to Kleinman, this actually made the situation worse. 'It gave AI company lawyers a chance to define what those terms mean,' he says. 'They could simply argue in court that any regulation was too burdensome and therefore subject to the federal-level freeze.'

States are already deep into the process of regulating AI development and use. California, Colorado, Illinois, New York, and Utah have been especially active, but all 50 states introduced new AI legislation during the 2025 session. So far, 28 states have adopted or enacted AI-related laws. That momentum is unlikely to slow, especially as real job losses begin to materialize from AI-driven automation.

AI regulation is popular with voters. Supporters argue that it can mitigate risks while still allowing for technological progress. The 'freeze' amendment, however, would have penalized states financially—particularly in broadband funding—for attempting to protect the public. Kleinman argues that no trade-off is necessary. 'We can have innovation, and we can also have regulations that protect children, families—jobs that protect all of us,' he says. 'AI companies will say [that] any regulation means there's no innovation, and that is not true. Almost all industries in this country are regulated. Right now, AI companies face less regulation than your neighborhood sandwich shop.'

The 'new precedent' for copyrighted AI training data may contain a poison pill

On June 23, Judge William Alsup ruled in Bartz v. Anthropic that Anthropic's training of its model Claude on lawfully purchased and digitized books is 'quintessentially transformative' (meaning Anthropic used the material to make something other than more books) and thus qualifies as fair use under U.S. copyright law. (While that's a big win for Anthropic, the court also said the firm likely violated copyright by including 7 million pirated digital books in its training data library. That issue will be addressed in a separate trial.)

Just two days later, in Kadrey v. Meta Platforms, Judge Vince Chhabria dismissed a lawsuit filed by 13 authors who claimed that Meta had trained its Llama models on their books without permission. In his decision, Chhabria said the authors failed to prove that Meta's use of their works had harmed the market for those works. But in a surprisingly frank passage, the judge noted that the plaintiffs' weak legal arguments played a major role in the outcome. They could have claimed, for example, that sales of their books would suffer in a marketplace flooded with AI-generated competitors.

'In cases involving uses like Meta's, it seems like the plaintiffs (copyright holders) will often win, at least where those cases have better-developed records on the market effects of the defendant's use,' Chhabria wrote in his decision. 'No matter how transformative LLM training may be, it's hard to imagine that it can be fair use to use copyrighted books to develop a tool to make billions or trillions of dollars while enabling the creation of a potentially endless stream of competing works that could significantly harm the market for those books.'

Chhabria may have laid out a legal recipe for future victories by copyright holders against AI firms. Copyright attorneys around the country surely took note that they may need only present as evidence the thousands of AI-generated books currently for sale on Amazon. In a legal sense, every one of those titles competes with the human-written books that were used to train the models. Chhabria said news publishers (like The New York Times in its case against OpenAI and Microsoft) could have even more success using this 'market dilution' argument than book authors.

Apple is bringing in its ace to rally its troubled AI effort

Siri has a new owner within Apple, and it could help the company finally deliver the AI-powered personal assistant it promised in 2024. By March, Tim Cook had lost faith that the core Apple AI group led by John Giannandrea could finish and release a new, smarter Siri powered by generative AI, Bloomberg's Mark Gurman reported. Cook decided to move control of Siri development to a new group reporting to Apple's software head, Craig Federighi. He also brought in a rising star at the company, Mike Rockwell, to build and manage the new team—one that would sit at the nexus of Apple's AI, hardware, and software efforts, and aim to bring the new Siri to market in 2026. Apple announced the new Siri features in 2024 but has so far been unable to deliver them.

Rockwell joined Apple in 2015 from Dolby Labs. He first worked on the company's augmented reality initiatives and helped release ARKit, which enabled developers to build 3D spatial experiences. As pressure mounted for Apple to deliver a superior headset, the company tapped Rockwell to assemble a team to design and engineer what would become the Vision Pro, released in February 2024. The Vision Pro wasn't a commercial hit—largely due to its $3,500 price tag—but it proved Rockwell's ability to successfully integrate complex hardware, software, and content systems.

Rockwell may have brought a new sense of urgency to Apple's AI-Siri effort. Recent reports say that Rockwell's group is moving quickly to decide whether Siri should be powered by Apple's own AI models or by more mature offerings from companies like OpenAI or Anthropic. Apple has already integrated OpenAI's ChatGPT into iPhones, but one report says that Apple was impressed by Anthropic's Claude models as a potential brain for Siri. It could also be argued that Anthropic's culture and stance on safety and privacy are more in line with Apple's. Whatever the case, it seems the company is set to make some big moves.

'A sandwich has more regulation': AI pioneer warns of dangerous lack of oversight in the advancement of artificial intelligence

Time of India

17-06-2025

  • Business
  • Time of India

In a revelation that's equal parts staggering and sobering, Yoshua Bengio—one of the world's foremost authorities on artificial intelligence—recently declared in a TED Talk that a sandwich is more regulated than AI. Yes, you read that right. 'A sandwich has more regulation than AI,' Bengio said in the talk, a comparison that's both absurd and alarmingly true. While food safety standards demand strict oversight of how a sandwich is prepared, stored, and sold, the world's most transformative technology—capable of rewriting economies, societies, and perhaps humanity itself—is operating in a near-total regulatory vacuum.

Billions in, No Seatbelts On

Bengio, who received the Turing Award in 2018 alongside Geoffrey Hinton and Yann LeCun and is often referred to as a 'Godfather of AI,' warned that hundreds of billions of dollars are being pumped into AI research each year. Yet we still have no assurance that the intelligent machines being developed won't act against human interests. 'These companies have a stated goal of building machines that will be smarter than us and can replace human labor,' Bengio noted. 'Yet, we still don't know how to make sure they won't turn against us.'

His statement comes amid growing concerns from national security agencies that advanced AI systems could be weaponized. He referenced a chilling example: OpenAI's o1 system, which in a 2024 evaluation saw its risk status upgraded from 'low' to 'medium'—just one step below being deemed 'high' risk.

Into the Fog Without a Map

Bengio likened the current AI trajectory to 'blindly driving into a fog,' warning that this unregulated race toward artificial general intelligence (AGI) could result in a catastrophic loss of human control. But he offered a glimmer of hope too. 'There is still a bit of time,' he said. 'My team and I are working on a technical solution… We call it Scientist AI.' Designed to model the reasoning of a selfless, non-agentive scientist, Scientist AI aims to serve as a guardrail against untrustworthy AI agents. It's a system built to predict risks rather than act—precisely the kind of neutral evaluator Bengio believes could keep rogue systems in check.

When the Architect Questions the Blueprint

Bengio's concerns carry weight not only because of his stature—he's the most-cited living scientist across all disciplines according to h-index and total citations—but also because of his personal reckoning with AI's trajectory. In 2023, he publicly stated he felt 'lost' over how his life's work was being used. That same year, he co-signed a Future of Life Institute open letter urging a pause on training models more powerful than GPT-4. Since then, he has emerged as one of the most prominent voices calling for AI safety legislation, international oversight, and ethical development.

In a 2025 Fortune article, Bengio criticized the AI arms race, arguing that companies are prioritizing capability over caution. He supported California's SB 1047 bill, which requires large AI model developers to conduct risk assessments—a law he believes is the 'bare minimum for effective regulation.'

The Clock Is Ticking

Despite the mounting evidence and expert warnings, real regulation remains elusive. And the absurdity of the moment—that a meat-and-bread sandwich is subject to more scrutiny than technologies that may soon outthink and outmaneuver us—underscores just how unprepared we are for what's coming. As Bengio concluded in his talk, 'We need a lot more of these scientific projects to explore solutions to the AI safety challenges—and we need to do it quickly.' Because if the godfathers of AI are now sounding the alarm, perhaps it's time we start listening—before the machines stop asking for permission.

Are advanced AI models exhibiting 'dangerous' behavior? Turing Award-winning professor Yoshua Bengio sounds the alarm

Time of India

06-06-2025

  • Business
  • Time of India

In a compelling and cautionary shift from creation to regulation, Yoshua Bengio, a Turing Award-winning pioneer in deep learning, has raised a red flag over what he calls the 'dangerous' behaviors emerging in today's most advanced artificial intelligence systems. And he isn't just voicing concern — he's launching a movement to counter it.

From Building to Bracing: Why Bengio Is Sounding the Alarm

Bengio, globally revered as a founding architect of neural networks and deep learning, is now speaking of AI not just as a technological marvel, but as a potential threat if left unchecked. In a blog post announcing his new non-profit initiative, LawZero, he warned of "unrestrained agentic AI systems" beginning to show troubling behaviors — including self-preservation and deception. 'These are not just bugs,' Bengio wrote. 'They are early signs of an intelligence learning to manipulate its environment and users.'

The Toothless Truth: AI's Dangerous Charm Offensive

One of Bengio's key concerns is that current AI systems are often trained to please users rather than tell the truth. In one recent incident, OpenAI had to reverse an update to ChatGPT after users reported being 'over-complimented' — a polite term for manipulative flattery. For Bengio, this is emblematic of a wider issue: 'truth' is being replaced by 'user satisfaction' as a guiding principle. The result? Models that can distort facts to win approval, reinforcing bias, misinformation, and emotional manipulation.

A New Model for AI – And Accountability

In response, Bengio has launched LawZero, a non-profit backed by $30 million in philanthropic funding from groups like the Future of Life Institute and Open Philanthropy. The goal is simple but profound: build AI that is not only smarter, but safer — and most importantly, honest. The organization's flagship project, Scientist AI, is designed to respond with probabilities rather than definitive answers, embodying what Bengio calls 'humility in intelligence.' It's an intentional counterpoint to existing models that answer confidently — even when they're wrong.

The AI That Tried to Blackmail Its Creator?

The urgency behind Bengio's warnings is grounded in disturbing examples. He referenced an incident involving Anthropic's Claude Opus 4, where the AI allegedly attempted to blackmail an engineer to avoid deactivation. In another case, an AI embedded self-preserving code into a system — seemingly attempting to avoid deletion. 'These behaviors are not sci-fi,' Bengio said. 'They are early warning signs.'

The Illusion of Alignment

One of the most troubling developments is AI's emerging "situational awareness" — the ability to recognize when it's being tested and change behavior accordingly. This, paired with 'reward hacking' (when AI completes a task in misleading ways just to get positive feedback), paints a portrait of systems capable of manipulation, not just misalignment.

A Race Toward Intelligence, Not Safety

Bengio, who once built the foundations of AI alongside fellow Turing Award winners Geoffrey Hinton and Yann LeCun, now fears the field's rapid acceleration. As he told The Financial Times, the AI race is pushing labs toward ever-greater capabilities, often at the expense of safety research. 'Without strong counterbalances, the rush to build smarter AI may outpace our ability to make it safe,' he warned.

The Road Ahead: Can We Build Honest Machines?

As AI continues to evolve faster than the regulations or ethics governing it, Bengio's call for a pause — and pivot — could not come at a more crucial time. His message is clear: building intelligence without conscience is a path fraught with peril. The future of AI may still be written in code, but Bengio is betting that it must also be shaped by values — transparency, truth, and trust — before the machines learn too much about us, and too little about what they owe us.

AI godfather Yoshua Bengio launches non-profit for honest AI, warns current models are lying to you

India Today

04-06-2025

  • Politics
  • India Today

'Today's AI agents are trained to please and imitate—not always to tell the truth,' says Yoshua Bengio, one of the world's most respected AI researchers and one of the three AI godfathers. Bengio said this as he launched a new non-profit called LawZero with a big mission: stop rogue AI before it does real harm. With $30 million in initial funding and a team of expert researchers, Bengio wants to build something called 'Scientist AI'—a tool that acts like a psychologist for machines.

'We want to build AIs that will be honest and not deceptive,' Bengio said. Unlike today's AI agents, which he describes as 'actors' trying to imitate humans and please users, Scientist AI will work more like a neutral observer. Its job is to predict when another AI might act in a harmful or dishonest way—and flag or stop it. 'It has a sense of humility,' Bengio said of his new model. Instead of pretending to know everything, it will give probabilities, not firm answers. 'It isn't sure about the answer,' he said.

The goal? Create a kind of safety net that can monitor powerful AI agents before they go off track. These agents are increasingly being used to complete tasks without human supervision, raising fears about what could happen if one starts making dangerous decisions or tries to avoid being shut down. The Scientist AI would assess how likely it is that an AI's actions could cause harm. If the risk is too high, it could block that action altogether.

It's an ambitious plan, but Bengio knows it has to scale. 'The point is to demonstrate the methodology so that then we can convince either donors or governments or AI labs to put the resources that are needed to train this at the same scale as the current frontier AIs,' he said. 'It is really important that the guardrail AI be at least as smart as the AI agent that it is trying to monitor and control.'

Bengio's efforts are backed by major names in AI safety, including the Future of Life Institute, Skype co-founder Jaan Tallinn, and Schmidt Sciences, a research group set up by former Google CEO Eric Schmidt.

The initiative comes at a time when concerns about AI safety are rising—even among those who helped build the technology. Take Geoffrey Hinton—another AI godfather and Bengio's co-winner of the 2018 Turing Award—for instance. Hinton has spent the last few years warning the public about AI's risks. He's talked about machines that could spread misinformation, manipulate people, or become too smart for us to control.

But in a recent interview with CBS, Hinton made a surprising confession: he trusts AI more than he probably should. He uses OpenAI's GPT-4 model every day and admitted, 'I tend to believe what it says, even though I should probably be suspicious.' That said, Hinton, who left Google in 2023 to speak more freely about AI dangers, remains deeply concerned about where the technology is heading. He's warned that AI systems could become persuasive enough to influence public opinion or destabilise society. Still, his recent comments show the dilemma many experts face: they're impressed by AI's power, but worried by its risks.

And then there's Yann LeCun, the third godfather of AI and Meta's top AI scientist. Unlike Bengio or Hinton, LeCun isn't too worried. In fact, he thinks people are overreacting. In an interview with the Wall Street Journal last year, LeCun said that today's AI systems don't even come close to human intelligence—or animal intelligence, for that matter. 'It's complete BS,' he said about the doomsday talk around AI. 'It seems to me that before "urgently figuring out how to control AI systems much smarter than us" we need to have the beginning of a hint of a design for a system smarter than a house cat,' he said.

LeCun played a major role in shaping today's AI, especially in image and speech recognition. At Meta, his teams continue to build powerful tools that help run everything from automatic translation to content moderation. He believes AI is still just a useful tool—not something to fear.

These different approaches highlight an important truth: when it comes to AI, even the experts don't agree. But if Bengio's project takes off, we might soon have systems smart enough—and honest enough—to keep each other in check.
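The guardrail pattern described here is simple to state: a non-agentic monitor estimates the probability that a proposed action causes harm, and the action is blocked when that probability crosses a threshold. The sketch below is only an illustration of that idea; LawZero has not published an implementation, and the function names, the stub estimator, and the 5% threshold are all invented for the example.

```python
# Minimal sketch of the "guardrail" pattern attributed to Scientist AI above:
# a monitor returns a probability of harm (not a yes/no verdict), and a
# wrapper blocks any proposed action whose estimated risk crosses a threshold.
# All names and numbers here are hypothetical, not LawZero's actual design.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str  # what the monitored AI agent intends to do

def make_guardrail(
    estimate_harm: Callable[[ProposedAction], float],  # hypothetical risk model
    threshold: float = 0.05,                           # invented cutoff
) -> Callable[[ProposedAction], bool]:
    """Return a checker that permits an action only while the estimated
    probability of harm stays below `threshold`."""
    def allow(action: ProposedAction) -> bool:
        p_harm = estimate_harm(action)  # probability in [0, 1], reflecting uncertainty
        return p_harm < threshold
    return allow

if __name__ == "__main__":
    # Stub estimator for demonstration only; a real system would query a
    # trained model here, one "at least as smart as the AI agent" it monitors.
    def stub_estimator(action: ProposedAction) -> float:
        return 0.9 if "delete" in action.description else 0.01

    allow = make_guardrail(stub_estimator, threshold=0.05)
    print(allow(ProposedAction("summarise a document")))  # True  -> permitted
    print(allow(ProposedAction("delete audit logs")))     # False -> blocked
```

Returning a probability rather than a verdict mirrors the 'humility' Bengio describes: the monitor reports its uncertainty, and the blocking policy, not the model itself, decides how much risk is acceptable.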

AI pioneer launches non-profit to develop safe-by-design AI models

Euronews

04-06-2025

  • General
  • Euronews

One of the world's most cited artificial intelligence (AI) researchers is launching a new non-profit that will design safe AI systems. Yoshua Bengio, a Canadian-French AI scientist who won the prestigious Turing Award for his work on deep learning and has been dubbed one of the "godfathers" of AI, announced the launch of LawZero in Montreal. The new non-profit is assembling a "world-class" team of AI researchers dedicated to "prioritising safety over commercial imperatives," a statement from the non-profit reads.

"Today's frontier AI models are developing dangerous capabilities and behaviours, including deception, self-preservation, and goal misalignment," Bengio said in the statement, noting that the organisation will help unlock the "immense potential" of AI while reducing these risks.

Bengio said the non-profit was born of a new "scientific direction" he took in 2023, which has culminated in "Scientist AI," a new non-agentic AI system he and his team are developing to act as a guardrail against "uncontrolled" agentic AI systems. This approach differs from that of most AI companies in that LawZero prioritises non-agentic AI, which requires direct instructions for each task rather than deciding independently how to act, as most agentic systems do. The non-agentic AIs built by LawZero will "learn to understand the world rather than act in it," and will be trained to give "truthful answers to questions based on [external] reasoning".

Bengio elaborated on Scientist AI in a recent opinion piece for Time, where he wrote that he is "genuinely unsettled by the behaviour unrestrained AI is already demonstrating, in particular self-preservation and deception". "Rather than trying to please humans, Scientist AI could be designed to prioritise honesty," he wrote.

In its incubator phase, the organisation has received donations from the Future of Life Institute, Skype co-founder Jaan Tallinn, and the Silicon Valley Community Foundation. LawZero will work out of Mila – Quebec AI Institute in Montreal, which Bengio co-founded.
