09-07-2025
Grok shows why runaway AI is such a hard national problem
With help from Anthony Adragna and Mohar Chatterjee
Elon Musk's AI chatbot Grok just made headlines in all the wrong ways, as users managed to goad it into a series of antisemitic and abusive tirades Tuesday night. The xAI chatbot posted a litany of statements praising Adolf Hitler, describing fictional sexual assaults of certain users and denigrating Jewish and disabled people.
Critics jumped on Grok's meltdown as an extreme if predictable result of Musk's ambition for a truly anti-'woke' AI, unfettered by liberal social norms. The company quickly promised changes, and Musk distanced himself from Grok's provocations in an X post, writing, 'Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially.'
As a tech problem, Grok's blowup points to a profound challenge in controlling AI bots, rooted in their utter unknowability.
For Washington, and regulators everywhere, it's a sobering reminder of just how difficult the fight to manage AI has become.
My colleagues Anthony Adragna and Mohar Chatterjee spent the day calling more than a dozen members of Congress, including some who sit on the Congressional AI Caucus. What did they think about the runaway hate speech by one of the world's most powerful and easily accessible AI platforms? What should be done?
Not a single one had any reaction to the Grok blowup. Nothing critical, supportive or otherwise.
Perhaps they didn't want to get sideways with an unpredictable mega-billionaire. But the issue also steers into a very live argument about hateful language generated by AI — one that Congress hasn't tried to grapple with, and has already landed would-be regulators in the courts.
Horrifying but legal speech is extremely tough to regulate in the U.S., even if machines generate it. State governments have made a few attempts to constrain the outputs of generative AI — and found themselves facing First Amendment challenges in court.
Any federal law that would attempt to rein in chatbots, even when they espouse extremely toxic views, would come in for just as much scrutiny.
'If someone wants to have a communist AI that responds by saying there ought to be a mass killing of capitalist exploiters, or a pro-Jihadist AI outputting 'death to America' … the government isn't allowed to stop that,' said UCLA Law professor Eugene Volokh, a First Amendment specialist, who has sued to roll back state restrictions on tech platforms.
The courts are still figuring out how the First Amendment applies to generative AI. Last year, a federal judge blocked California's law banning election-related deepfakes, finding that it likely impinged on users' right to criticize the government.
In May, however, a federal judge in Florida partly denied attempts to dismiss a case against the chatbot company Character.AI, which alleges that its chatbot drove a 14-year-old boy to suicide. She wrote that she was unprepared to rule that the chatbot's outputs are protected 'speech.'
DFD called Matthew Bergman, the attorney representing the victim's family, about the Grok situation — and he suggested it could be difficult to litigate Grok's outburst.
'You have to show that the output is in some way harmful or hurtful to individuals, not simply violent or offensive,' he said. Bergman is also helping to sue Meta and other platforms for allegedly radicalizing the perpetrator of the 2022 mass shooting in Buffalo, New York. Without a clear individual harm like that, he says, it would be tough to use existing laws to bring Grok to heel.
Ari Cohn, lead tech counsel at the Foundation for Individual Rights and Expression (FIRE), told DFD that he has a hard time seeing how any kind of law addressing the Grok incident could pass constitutional muster. 'AI spits out content or ideas or words based on its programming, based on what the developers trained it to do,' he said. 'If you can regulate the output, then you're essentially regulating the expressive decisions of the developers.'
One less restrictive option for regulating AI is transparency requirements — the kind of thing the Biden White House imposed in 2023 via an executive order that President Donald Trump has since repealed.
But when it comes to speech — even hate speech — any such rules could hit a similar wall. In 2024, New York enacted the 'Stop Hiding Hate Act,' which requires social platforms to regularly disclose how their AI algorithms handled certain content that violated their hate speech rules. The law is now under attack by none other than Elon Musk's X, which filed a First Amendment challenge in June.
Given the power and growing influence of AI, some policymakers think it's still worth trying to solve the puzzle of how regulations could handle bigoted chatbots while preserving freedom of speech.
Alondra Nelson, a sociologist and tech policy leader who helped design the Biden administration's AI policy, wrote to DFD, '[T]here are critical governance questions we must address: for example, does this language create hostile workplaces for employees required to use this platform exclusively?'
New York has been at the forefront of chatbot regulation, so it could take the lead in addressing this issue. Democratic Assemblymember Alex Bores, who got a bill passed to mitigate catastrophic harms caused by models like Grok, said regulating a generally bigoted chatbot would be tricky. He told DFD that focusing on the real-world impacts of abusive chatbots, like harassment or inciting violence, could guide future policymaking.
'Makers don't have control of what the frontier models are doing, and very quickly they can go off the rails,' he said. 'If a model starts saying awful things, who do you hold accountable?'
European privacy groups take on Big Tech
Privacy activists in the European Union have found a new tool to rein in tech companies: class-action lawsuits.
POLITICO's Ellen O'Regan reported Wednesday that the Dutch advocacy group SOMI and the Irish Council for Civil Liberties have filed such suits against TikTok, Meta and Microsoft. They're wielding the EU's General Data Protection Regulation, which governs personal data handling, in a novel way to get compensation for alleged privacy harms.
The GDPR has a provision for large groups of consumers to seek compensation from companies if they've been similarly harmed by privacy violations. The EU's Collective Redress Directive, in force since 2020, offers a new avenue for those consumers to file class-action suits. This sort of litigation could offer a speedier channel for enforcing the law, since EU regulators have been sluggish.
A recent landmark lawsuit showed how class actions could dent companies that violate the GDPR. In January, a judge awarded a German citizen €400 in damages after he faced 'some uncertainty' over where his data went after he clicked a hyperlink on the European Commission's website. If everyone in a class were individually awarded such damages, the lump sum could be substantial.
Staffers leave NASA en masse
More than 2,000 senior-level employees are about to leave NASA as part of the Trump administration's broader efforts to cull the federal workforce, according to documents obtained by POLITICO's Sam Skove.
The employees make up the bulk of the nearly 2,700 civil servants who have accepted NASA's offers for early retirement, deferred resignations and buyouts. Most of the departing employees have been working on human spaceflight, science, facilities management, IT and finance.
The White House's proposed budget for NASA in 2026 would reduce staffing and funding to the agency's lowest levels since the 1960s. These dramatic reductions could impact the Trump administration's ambitions to send astronauts to the moon in 2027, and to Mars thereafter.
'NASA remains committed to our mission as we work within a more prioritized budget,' NASA spokesperson Bethany Stevens told Sam. 'We are working closely with the Administration to ensure that America continues to lead the way in space exploration, advancing progress on key goals, including the Moon and Mars.'
Stay in touch with the whole team: Aaron Mak (amak@), Mohar Chatterjee (mchatterjee@), Steve Heuser (sheuser@), Nate Robson (nrobson@) and Daniella Cheslow (dcheslow@).