
Grok shows why runaway AI is such a hard national problem

Politico

5 days ago



With help from Anthony Adragna and Mohar Chatterjee

Elon Musk's AI chatbot Grok just made headlines in all the wrong ways, as users managed to goad it into a series of antisemitic and abusive tirades Tuesday night. The xAI chatbot posted a litany of statements praising Adolf Hitler, describing fictional sexual assaults of certain users and denigrating Jewish and disabled people. Critics jumped on Grok's meltdown as an extreme if predictable example of Musk's ambition for a truly anti-'woke' AI, unfettered by liberal social norms. The company quickly promised changes, and Musk distanced himself from Grok's provocations in an X post, writing, 'Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially.'

As a tech problem, Grok's blowup points to a profound challenge in controlling AI bots, rooted in their utter unknowability. For Washington, and regulators everywhere, it's a sobering reminder of just how difficult the fight to manage AI has become.

My colleagues Anthony Adragna and Mohar Chatterjee spent the day calling members of Congress, more than a dozen in all, including members of the Congressional AI Caucus. What did they think about the runaway hate speech by one of the world's most powerful and easily accessible AI platforms? What should be done? Not a single one had any reaction to the Grok blowup. Nothing critical, supportive or otherwise.

Perhaps they didn't want to get sideways with an unpredictable mega-billionaire. But the issue also touches a very live argument about hateful language generated by AI, one that Congress hasn't tried to grapple with, and that has already landed would-be regulators in the courts. Horrifying but legal speech is extremely tough to regulate in the U.S., even if machines generate it. State governments have made a few attempts to constrain the outputs of generative AI, and found themselves facing First Amendment challenges in court.
Any federal law that would attempt to rein in chatbots, even when they espouse extremely toxic views, would come in for just as much scrutiny. 'If someone wants to have a communist AI that responds by saying there ought to be a mass killing of capitalist exploiters, or a pro-Jihadist AI outputting 'death to America' … the government isn't allowed to stop that,' said UCLA Law professor Eugene Volokh, a First Amendment specialist, who has sued to roll back state restrictions on tech platforms.

The courts are still figuring out how the First Amendment applies to generative AI. Last year, a federal judge blocked California's law banning election-related deepfakes, finding that it likely impinged on users' right to criticize the government. In May, however, a federal judge in Florida partly denied attempts to dismiss a case alleging that a company's chatbot caused a 14-year-old boy to die by suicide. She wrote that she was unprepared to rule that the chatbot's outputs are protected 'speech.'

DFD called Matthew Bergman, the attorney representing the victim's family, about the Grok situation, and he suggested it could be difficult to litigate Grok's outburst. 'You have to show that the output is in some way harmful or hurtful to individuals, not simply violent or offensive,' he said. Bergman is also helping to sue Meta and other platforms for allegedly radicalizing the perpetrator of the 2022 mass shooting in Buffalo, New York. Without a clear individual harm like that, he says, it would be tough to use existing laws to bring Grok to heel.

Ari Cohn, lead tech counsel at the Foundation for Individual Rights and Expression (FIRE), told DFD that he has a hard time seeing how any kind of law addressing the Grok incident could pass constitutional muster. 'AI spits out content or ideas or words based on its programming, based on what the developers trained it to do,' he said.
'If you can regulate the output, then you're essentially regulating the expressive decisions of the developers.'

One less restrictive option for regulating AI is transparency requirements — the kind of thing that the Biden White House tried to push through in 2023 via an executive order that President Donald Trump has since repealed. But when it comes to speech — even hate speech — any such rules could hit a similar wall. In 2024, New York signed the 'Stop Hiding Hate Act' into law, which requires social platforms to regularly disclose how their AI algorithms handled certain content that violated their hate speech rules. The law is now under attack by none other than Elon Musk's X, which filed a First Amendment challenge in June.

Given the power and growing influence of AI, some policymakers think it's still worth trying to solve the puzzle of how regulations could handle bigoted chatbots while preserving freedom of speech. Alondra Nelson, a sociologist and tech policy leader who helped design the Biden administration's AI policy, wrote to DFD, '[T]here are critical governance questions we must address: for example, does this language create hostile workplaces for employees required to use this platform exclusively?'

New York has been at the forefront of chatbot regulation, so it could take the lead in addressing this issue. Democratic Assemblymember Alex Bores, who got a bill passed to mitigate catastrophic harms caused by models like Grok, said regulating a generally bigoted chatbot would be tricky. He told DFD that focusing on the real-world impacts of abusive chatbots — like harassment or inciting violence — could guide future policymaking. 'Makers don't have control of what the frontier models are doing, and very quickly they can go off the rails,' he said. 'If a model starts saying awful things, who do you hold accountable?'
European privacy groups take on Big Tech

Privacy activists in the European Union have found a new tool to rein in tech companies: class action lawsuits. POLITICO's Ellen O'Regan reported Wednesday that the Dutch advocacy group SOMI and the Irish Council for Civil Liberties have filed such suits against TikTok, Meta and Microsoft. They're wielding the EU's General Data Protection Regulation, which governs personal data handling, in a novel way to get compensation for alleged privacy harms.

The GDPR has a provision for large groups of consumers to seek compensation from companies if they've been similarly harmed by privacy violations. The EU's Collective Redress Directive, in force since 2020, offers a new avenue for those consumers to file class action suits. This sort of litigation could offer a speedier channel for enforcing the law, since EU regulators have been sluggish.

A recent landmark lawsuit showed how class action could dent companies that violate the GDPR. In January, a judge awarded a German citizen €400 in damages after he faced 'some uncertainty' over where his data went after he clicked a hyperlink on the European Commission's website. If everyone in a class were to be individually awarded such damages, the lump sum could be substantial.

Staffers leave NASA en masse

More than 2,000 senior-level employees are about to leave NASA as part of the Trump administration's broader efforts to cull the federal workforce, according to documents obtained by POLITICO's Sam Skove. The employees make up the bulk of nearly 2,700 civil staff who have accepted NASA's offers for early retirement, deferred resignations and buyouts. Most of the departing employees have been working on human space flight, science, facilities management, IT and finance. The White House's proposed budget for NASA in 2026 would reduce staffing and funding to the agency's lowest levels since the 1960s.
These dramatic reductions could impact the Trump administration's ambitions to send astronauts to the moon in 2027, and to Mars thereafter. 'NASA remains committed to our mission as we work within a more prioritized budget,' NASA spokesperson Bethany Stevens told Sam. 'We are working closely with the Administration to ensure that America continues to lead the way in space exploration, advancing progress on key goals, including the Moon and Mars.'

A decade of AI rules on ice?

Politico

13-05-2025



WASHINGTON WATCH

In a move that could dramatically reshape artificial intelligence oversight nationwide, Republicans have included a sweeping 10-year ban on state and local AI regulation in the budget reconciliation bill the House Energy and Commerce Committee unveiled late Sunday, POLITICO's Mohar Chatterjee and Anthony Adragna report. The proposal is a concession to the tech industry and sets the stage for a fierce battle with state regulators and the Senate.

What the moratorium says: The proposed bill prohibits state and local governments from enforcing 'any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.' The proposal lands amid growing tensions between federal lawmakers and aggressive state regulators, particularly in California, as tech giants lobby Washington to preempt the state's more muscular AI rules.

Tech industry response has been mixed. Zach Lilly, deputy director for state and federal affairs for tech lobbying group NetChoice, celebrated the provision, calling the language 'incredibly exciting' in a post on social media platform X. But Brad Carson, president of the AI policy group Americans for Responsible Innovation, wrote in an email, 'Tying the hands of lawmakers when it comes to taking on big tech could have catastrophic consequences for the public, for small businesses, and for young people online.'

Byrd rule: The provision will likely hit a snag in the upper chamber due to the Byrd rule, which requires reconciliation packages to focus strictly on budgetary matters like federal spending, revenues and the debt limit. House E&C aides defended the provision today as necessary for a $500 million technology upgrade, including AI implementation, at the Commerce Department. The moratorium was championed by committee Chair Brett Guthrie (R-Ky.) as a broader priority, the aides said.
Sign from the Senate: It's unclear whether federal AI preemption will pass via reconciliation, but the House move to include it signals it's a live-wire issue this Congress. Rep. Jan Schakowsky (D-Ill.), ranking member of the Commerce, Manufacturing and Trade Subcommittee, said in a statement that the ban gives 'Big Tech' free rein to 'take advantage of children and families. It is a giant gift to Big Tech and once again shows that Republicans care more about profits than people.'

WELCOME TO FUTURE PULSE

This is where we explore the ideas and innovators shaping health care. Artificial intelligence played a role in the first U.S.-born pope's name choice, NBC News reports. No, he didn't ask ChatGPT what his papal name should be. Cardinal Robert Francis Prevost chose to be Pope Leo XIV to reflect the Catholic Church's role in helping believers navigate the new revolution brought by AI, according to the report.

POLICY PUZZLE

The House Ways and Means Committee hopes AI can stem Medicare waste, fraud and abuse and find savings for its big tax cut package, Ruth reports. The panel released on Monday its long-awaited 389-page bill that includes tax cuts and some increases. In addition to the tax policies, there are some changes for Medicare. Chief among them is a directive for HHS Secretary Robert F. Kennedy Jr. to deploy AI to identify incorrect payments and claw back any money wrongly sent to providers under Medicare. The directive would enable Kennedy to contract with an AI vendor or data scientists to roll out the technology, and calls on the HHS secretary to cut improper payments, currently about $31 billion a year, by half or explain why he wasn't able to do so.
Other Medicare changes include narrowing eligibility, such as removing any coverage for undocumented immigrants, and opening up the use of health savings accounts to more Medicare patients. The panel is expected to mark up its tax provisions starting today.
