
Latest news with Anthony Adragna

Trump is all-in on AI sandboxes. Do they work?

Politico

5 hours ago



With help from Anthony Adragna

Among the many, many ambitious ideas baked into his AI Action Plan, President Donald Trump wants to inculcate a 'try-first' culture for innovation in the U.S. The plan, released Wednesday, includes a strong push for industries to hurry up with adopting AI — arguing that 'many of America's most critical sectors,' notably health care, have been slow to integrate it into their operations. And when it comes to the nitty-gritty of how this is supposed to happen, the White House suggests a classic tech-world mechanism: regulatory sandboxes.

'Sandbox' is a term that gets thrown around a lot in tech, usually referring to a closed environment to test software. When talking about policy, a sandbox is a little different: It essentially gives companies a temporary hall pass on pre-existing regulations — like those for medical devices or data privacy — to pilot new technologies to the public. The idea is that, based on data collected during the trial period, companies can adjust their product designs, and governments can tailor their technology regulations.

Federal agencies have previously implemented sandboxes for emerging tech, most notably in financial services. The action plan calls on the Food and Drug Administration and other agencies to establish similar programs for AI — though it's unclear where the funding will come from.

Sandboxes for AI systems have already popped up in various states. Last year, Utah enacted a law that has allowed companies to run some tryouts: The ElizaChat platform got approval for a 12-month trial of mental health chatbots, and a firm called Dentacor was allowed to test AI-enabled radiograph diagnostic tools. Texas signed similar legislation into law in June, as did Delaware on Wednesday.

To many free-marketers, sandboxes can achieve multiple aims — developing technology, and also putting pressure on regulations to adapt. 'Just because we've been regulating one way for a long, long time doesn't mean we always have to,' Adam Thierer, senior fellow at the R Street Institute, told DFD. 'We can try to find ways to innovate within boundaries – that's what sandboxes are.'

Others are skeptical of the whole argument that somehow industry is slow-rolling AI, and needs a government-sanctioned space to try things out. 'Companies aren't held back by lack of permission to test AI,' said Lexi Reese, a former Google VP who ran for Congress in California on a tech-centric platform in 2023. 'They're already deploying it without oversight.'

Sandboxes have also been used in the fintech sector for at least a decade, allowing institutions like banks to test out digital systems with real customers. Trump's action plan notably pushed forward on this as well, urging the Securities and Exchange Commission to develop more programs to allow financial firms to test out AI.

Hilary Allen, a law professor at American University who specializes in sandboxes, considers herself a skeptic: She said the fintech experience indicates that the downsides of sandboxes outweigh the potential benefits for AI innovations, since they often lead to regulatory capture. 'Sandboxes have been a disappointment, and a lot of regulatory agencies are moving away from them in the fintech space,' she told DFD. She said there's little evidence that these regimes, which are expensive to implement, lead to more sophisticated policies.
Instead, what ends up happening is that these sandbox periods stretch on indefinitely, since pilot businesses may grow to a point where agencies are hesitant to shut them down by reinstating regulations. Effectively, the experiment becomes permanent, even without guardrails. (She suggests that clear rules for sunsetting the experiments could help to mitigate the problem.) 'Regulators are in an awkward position,' she said, 'because they become sort of a cheerleader for the firm they have selected, and that leads to natural capture dynamics.' Plus, AI companies often skirt regulations in the first place. What the sandboxes really do, said Allen, is attract investors, who get interested because it indicates that an AI company is getting favorable legal treatment.

Victoria LaCivita, a spokesperson for the White House's Office of Science and Technology Policy, did not respond directly to Allen's concerns when asked by DFD. But she said in a statement that the administration was fostering 'a pro innovation environment that will foster positive, transformative uses of AI.'

Despite the skeptics, sandboxes are still a go-to tool for governments around the world. Even the tech policy hardliners in the European Union have included a sandbox provision in the otherwise strict AI Act. 'The fact that the U.S. is following suit is a wonderful sign,' said Kevin Cochrane, CMO of the global cloud company Vultr. 'Every national government needs to accelerate up policy around AI.'

COTTON'S PUSH ON CHINESE ENGINEERS AT DOD

The Republican chair of Senate Intelligence is demanding answers from the Defense Department after Microsoft was found to be using China-based engineers to support DOD cloud computing, as ProPublica reported. In a letter obtained Thursday by POLITICO, Sen. Tom Cotton (R-Ark.) told Defense Secretary Pete Hegseth, 'we must put in place the protocols and processes to adopt innovative technology quickly, effectively and safely.' Specifically, Cotton asked Hegseth for details on a two-week review of the Defense Department's current cloud contracts; all security classification guides given to Microsoft or other subcontractors; and plans for an agency-wide review of contracting practices to ensure against 'leveraging loopholes' that place systems at risk. Microsoft declined to comment on Cotton's letter. The Defense Department did not immediately respond to POLITICO's inquiry.

TikTok won't get more extensions

Commerce Secretary Howard Lutnick says that TikTok will have to go offline in the U.S. if China doesn't accept the administration's deal for the app's sale. 'You can't have Chinese control and have something on 100 million American phones,' Lutnick told CNBC Thursday. He added that the proposed deal has been sent to Chinese officials, and that TikTok is an 'unofficial' part of current trade negotiations with Beijing. TikTok did not immediately respond to DFD's inquiry.

The app has been a perpetual headache for Trump since he took office. He thrice extended a deadline that Congress set in 2024 for buyers from a nonadversarial country to take majority ownership of TikTok. The administration has been negotiating for months with China and ByteDance, TikTok's Beijing-based parent company. They almost reached a deal in April, but Chinese officials walked away when Trump announced a slew of new tariffs on the country. Trump has reportedly gathered a consortium of U.S. buyers like Oracle for the deal, though Blackstone withdrew its involvement in July.

Is ‘Crypto Week' what crypto's inventors had in mind?

Politico

July 17, 2025



With help from Anthony Adragna and Daniella Cheslow

'Crypto Week' on Capitol Hill, as Republicans are calling it, was supposed to help usher in 'the golden age of digital assets' — finally creating a legal foundation for the online markets of tomorrow, a dream of the tech world and very online investors. It hasn't been smooth, though. The marquee GENIUS Act to create a regulatory framework for stablecoins – cryptocurrencies pegged to an asset like the dollar – finally did end up passing on Thursday afternoon, but not before the crypto debate triggered enough GOP infighting to snarl the whole House's agenda for the week.

For anyone following the long arc of bitcoin, however, the bigger question isn't about the Hill logistics, but whether this has anything to do with the original goals of cryptocurrency. Crypto was born as a revolution — a way to put value into the hands of the people who use it, rather than have it be controlled by banks, or tycoons, or (ahem) Congress. What's perhaps most ironic about Crypto Week is that the digital currency was designed so that it wouldn't need the imprimatur of politicians and regulators in the first place.

Crypto's roots lie in the 2007-8 financial crisis, which sowed a profound distrust in the institutions that were supposed to have kept the global system stable. 'This idea that you could be your own bank resonated with a lot of people,' said Oxford University's Vili Lehdonvirta, one of the first socio-economists to study cryptocurrency.

A key feature of cryptocurrencies is that they're decentralized, allowing people to exchange funds without intermediaries like banks. The now-hallowed 2008 white paper that first sketched out a blueprint for bitcoin begins by describing 'A purely peer-to-peer version of electronic cash' that would work 'without going through a financial institution.'

The paper's mysterious author, who went by the pseudonym Satoshi Nakamoto and has been subject to fervent speculation regarding his identity, was skeptical of state power as well as bank power. In emails to a mailing list around 2008, Nakamoto said that his invention was 'very attractive to the libertarian viewpoint' and warned that 'governments are good at cutting off the heads of a centrally controlled networks [sic].' Cryptocurrency would free users from government-controlled fiat currencies. 'The root problem with conventional currency is … the central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust,' wrote Nakamoto.

The current state of the cryptocurrency landscape, though, seems to be a far cry from Nakamoto's original vision. The powerful institutions he eschewed are lining up to be major players in the sector. So is the federal government: President Donald Trump — whose family runs a cryptocurrency firm — issued a March executive order establishing a Strategic Bitcoin Reserve funded by the Treasury. Bank of America and Citibank said this week that they're working on launching stablecoins.

What's more, powerful intermediary institutions have sprouted from the cryptocurrency craze. So individual users, rather than managing their own interactions with a transparent blockchain ledger, rely on major exchanges like Binance as an entry point into the market, and store their tokens with popular digital wallet services. 'Bitcoin became everything that it was trying to make obsolete,' said Lehdonvirta.

Lehdonvirta first learned about bitcoin in 2009, from an early developer who was working for his brother.
(He gained some fame in the space when the New Yorker suggested he could be Nakamoto, which he denied to the New Yorker in 2011, and to DFD on Thursday.) Initially, he thought that bitcoin's design was 'extremely clever' and a 'huge innovation,' but eventually that enthusiasm waned. 'It was trying to get rid of opaque middlemen who rigged markets at the expense of the little person,' Lehdonvirta told DFD. 'Step by step, it recreated those very same structures and the institutions that it was originally intended to circumvent.' He added that egalitarian ideals were thwarted from the early days by moneyed cryptocurrency miners, who use the power of expensive GPUs to amass more bitcoin than a small-time user could.

'This technology has been largely co-opted by all kinds of actors and certain incumbents,' said David Chaum, a cryptographer who's commonly known as the 'godfather of crypto.' He added that 'pressure from the powers that be isn't what we had envisioned.'

Yet even for idealists, some departures from the original promise of cryptocurrency might not be all that bad. 'The ideals are very much, for many people, still the same,' said Dan Elitzer, a cryptocurrency venture capitalist. In 2014, he co-led a Massachusetts Institute of Technology group that gave $100 worth of bitcoin to more than 3,000 students. Elitzer wasn't drawn to bitcoin by a radical libertarian bent, but rather saw it as a mechanism to augment access to financial systems, especially in other countries with less stable monetary policies. 'The bet was that it was going to introduce the ability to access digital financial services to billions of people who don't have access otherwise,' he told DFD.

Although Elitzer said the 'Crypto Week' frenzy might not have been what early developers envisioned — its focus on stablecoins, for instance, ties crypto closely to the fiat currencies it was designed to replace — he argued that it would also make the U.S. dollar more widely available on a global scale. 'The majority of the world would love to have access to the ability to save and spend in dollars,' he said. Chaum also mentioned that AI might help consumers interact more directly with cryptocurrency trading, without the need for intermediaries. He told DFD, 'One should be optimistic.'

House wants to ban TikTok ads in D.C.

The House Appropriations Committee is trying to prevent a redux of TikTok's March PR blitz in Washington, when it flooded metro stations with ads to forestall a ban. POLITICO's Anthony Adragna reports that the committee passed an amendment to the pending appropriations bill on Thursday that would prohibit ByteDance, TikTok's Beijing-based parent company, from advertising in public transportation and airports in the D.C. area. Rep. Steve Womack (R-Ark.), who sponsored the amendment, said at a hearing that it 'cracks down on the exploitation of advertising space by Chinese adversaries in transit systems and airports in our Capitol region.'

Congress passed a law in 2024 forcing ByteDance to either shut down TikTok's operations in the U.S. or sell the app to buyers not controlled by foreign adversaries, like China. President Donald Trump has delayed the deadline for the ban three times, and said two weeks ago that he was resuming talks with China to keep the app online. TikTok has been fighting for its life in the meantime. Beyond bombarding the metro with ads about how important it is to the economy, it's also run a Super Bowl commercial, rented out billboards and bought an ad in POLITICO's print newspaper.
Call for more global tech cops

Lawmakers from the House Foreign Affairs Committee introduced a bill Thursday to nearly double the ranks of export control officers at the Bureau of Industry and Security, which they said would help catch diversion of advanced technologies like semiconductors. The bipartisan bill would increase the number of officers from 11 to 20. It was sponsored by HFAC ranking member Gregory Meeks (D-N.Y.), the chair of HFAC's subcommittee on South and Central Asia, Bill Huizenga (R-Mich.), subcommittee ranking member Sydney Kamlager-Dove (D-Calif.), and Rep. Jefferson Shreve (R-Ind.).

The global semiconductor trade is a vast industry worth more than $600 billion, according to one estimate. But the bill noted that in fiscal year 2024, fewer than a dozen overseas officers conducted more than 1,400 'end-use checks' that verify whether U.S. tech is being legally used overseas. 'Without strong enforcement, our export controls are toothless,' Meeks said in a statement.

Grok shows why runaway AI is such a hard national problem

Politico

July 9, 2025



With help from Anthony Adragna and Mohar Chatterjee

Elon Musk's AI chatbot Grok just made headlines in all the wrong ways, as users managed to goad it into a series of antisemitic and abusive tirades Tuesday night. The xAI chatbot posted a litany of statements praising Adolf Hitler, describing fictional sexual assaults of certain users and denigrating Jewish and disabled people. Critics jumped on Grok's meltdown as an extreme if predictable example of Musk's ambition for a truly anti-'woke' AI, unfettered by liberal social norms. The company quickly promised changes, and Musk distanced himself from Grok's provocations in an X post, writing, 'Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially.'

As a tech problem, Grok's blowup points to a profound challenge in controlling AI bots, rooted in their utter unknowability. For Washington, and regulators everywhere, it's a sobering reminder of just how difficult the fight to manage AI has become.

My colleagues Anthony Adragna and Mohar Chatterjee spent the day calling members of Congress, more than a dozen in all, including some of those appointed to the Congressional AI Caucus. What did they think about the runaway hate speech by one of the world's most powerful and easily accessible AI platforms? What should be done? Not a single one had any reaction to the Grok blowup. Nothing critical, supportive or otherwise. Perhaps they didn't want to get sideways with an unpredictable mega-billionaire. But the issue also steers into a very live argument about hateful language generated by AI — one that Congress hasn't tried to grapple with, and that has already landed would-be regulators in the courts.

Horrifying but legal speech is extremely tough to regulate in the U.S., even if machines generate it. State governments have made a few attempts to constrain the outputs of generative AI — and found themselves facing First Amendment challenges in court. Any federal law that would attempt to rein in chatbots, even when they espouse extremely toxic views, would come in for just as much scrutiny. 'If someone wants to have a communist AI that responds by saying there ought to be a mass killing of capitalist exploiters, or a pro-Jihadist AI outputting 'death to America' … the government isn't allowed to stop that,' said UCLA Law professor Eugene Volokh, a First Amendment specialist, who has sued to roll back state restrictions on tech platforms.

The courts are still figuring out how the First Amendment applies to generative AI. Last year, a federal judge blocked California's law banning election-related deepfakes, finding that it likely impinged on users' right to criticize the government. In May, however, a federal judge in Florida partly denied attempts to dismiss a case alleging that a company's chatbot caused a 14-year-old boy to commit suicide. She wrote that she was unprepared to rule that the chatbot's outputs are protected 'speech.'

DFD called Matthew Bergman, the attorney representing the victim's family, about the Grok situation — and he suggested it could be difficult to litigate Grok's outburst. 'You have to show that the output is in some way harmful or hurtful to individuals, not simply violent or offensive,' he said. Bergman is also helping to sue Meta and other platforms for allegedly radicalizing the perpetrator of the 2022 mass shooting in Buffalo, New York. Without a clear individual harm like that, he says, it would be tough to use existing laws to bring Grok to heel.
Ari Cohn, lead tech counsel at the Foundation for Individual Rights and Expression (FIRE), told DFD that he has a hard time seeing how any kind of law addressing the Grok incident could pass constitutional muster. 'AI spits out content or ideas or words based on its programming, based on what the developers trained it to do,' he said. 'If you can regulate the output, then you're essentially regulating the expressive decisions of the developers.'

One less restrictive option for regulating AI is transparency requirements — the kind of thing that the Joe Biden White House tried to push through in 2023 via an executive order that President Donald Trump has since repealed. But when it comes to speech — even hate speech — any such rules could hit a similar wall. In 2024, New York enacted the 'Stop Hiding Hate Act,' which requires social platforms to regularly disclose how their AI algorithms handled certain content that violated their hate speech rules. The law is now under attack by none other than Elon Musk's X, which filed a First Amendment challenge in June.

Given the power and growing influence of AI, some policymakers think it's still worth trying to solve the puzzle of how regulations could handle bigoted chatbots while preserving freedom of speech. Alondra Nelson, a sociologist and tech policy leader who helped design the Biden administration's AI policy, wrote to DFD, '[T]here are critical governance questions we must address: for example, does this language create hostile workplaces for employees required to use this platform exclusively?'

New York has been at the forefront of chatbot regulation, so it could take the lead in addressing this issue. Democratic Assemblymember Alex Bores, who got a bill passed to mitigate catastrophic harms caused by models like Grok, said regulating a generally bigoted chatbot would be tricky. He told DFD that focusing on the real-world impacts of abusive chatbots – like harassment or inciting violence – could guide future policymaking. 'Makers don't have control of what the frontier models are doing, and very quickly they can go off the rails,' he said. 'If a model starts saying awful things, who do you hold accountable?'

European privacy groups take on Big Tech

Privacy activists in the European Union have found a new tool to rein in tech companies: class action lawsuits. POLITICO's Ellen O'Regan reported Wednesday that the Dutch advocacy group SOMI and the Irish Council for Civil Liberties have filed such suits against TikTok, Meta and Microsoft. They're wielding the EU's General Data Protection Regulation, which governs personal data handling, in a novel way to get compensation for alleged privacy harms.

The GDPR has a provision for large groups of consumers to seek compensation from companies if they've been similarly harmed by privacy violations. The EU's Collective Redress Directive, in force since 2020, offers a new avenue for those consumers to file class-action suits. This sort of litigation could offer a speedier channel for enforcing the law, since EU regulators have been sluggish.

A recent landmark lawsuit showed how class action could dent companies that violate the GDPR. In January, a judge awarded a German citizen €400 in damages after he faced 'some uncertainty' over where his data went after he clicked a hyperlink on the European Commission's website. If everyone in a class were to be individually awarded such damages, the lump sum could be substantial.
Staffers leave NASA en masse

More than 2,000 senior-level employees are about to leave NASA as part of the Trump administration's broader efforts to cull the federal workforce, according to documents obtained by POLITICO's Sam Skove. The employees make up the bulk of nearly 2,700 civil staff who have accepted NASA's offers for early retirement, deferred resignations and buyouts. Most of the departing employees have been working on human space flight, science, facilities management, IT and finance.

The White House's proposed budget for NASA in 2026 would reduce staffing and funding to the agency's lowest levels since the 1960s. These dramatic reductions could impact the Trump administration's ambitions to send astronauts to the moon in 2027, and to Mars thereafter. 'NASA remains committed to our mission as we work within a more prioritized budget,' NASA spokesperson Bethany Stevens told Sam. 'We are working closely with the Administration to ensure that America continues to lead the way in space exploration, advancing progress on key goals, including the Moon and Mars.'

5 questions for Sree Ramaswamy

Politico

June 6, 2025



With help from Anthony Adragna and Aaron Mak

Hello, and welcome to this week's installment of the Future in Five Questions. This week we interviewed Sree Ramaswamy, a former senior policy adviser to the Biden administration's Commerce Department, whose work included facilitating the CHIPS and Science Act. Ramaswamy is now the chief innovation officer for NobleReach, a recently launched nonprofit that works to set up private-public partnerships through programs focused on talent and innovation, including at universities. He spoke about the changes under the new administration as well as the importance of securing supply chains against adversarial rivals, especially for critical technologies. An edited and condensed version of the conversation follows:

What's one underrated big idea?

I'm going to come at this from a national security standpoint. One of the things we have struggled with as a country is how to deal with the presence of adversarial inputs in our technology. That manifests in different ways. It manifests in people concerned about their chips coming from China. It manifests in people concerned about the fact that your printed circuit boards and the software that's flashed on them are done in Vietnam or in Malaysia by some third-party contractor, and we're like: Is there a back door here? Is somebody putting in a Trojan horse? We worry about the capability of the stack as it becomes larger and larger. We worry about the fact that we may have blind spots, both in terms of where adversaries can gain capabilities but also where they can insert vulnerabilities.

What's a technology that you think is overhyped?

The last few years, we've seen various aspects of the government come up with a list of critical technologies. Before we had the CHIPS Act, there was this thing called the Endless Frontiers Act, which had a list of critical technologies. I would say almost every single one of those technologies you could argue is overhyped. Take a look at those lists and ask yourself what technology is not on this list, and there's no answer to that question. Every single technology you can think of is on our list of the most critical technologies. It's sort of like saying I have 100 priorities — then you don't have any priorities. What I would like to see is a shift of attention away from the technologies themselves, and to the problems that the technologies can solve.

What could the government be doing regarding technology that it isn't?

What the government has traditionally done well is focus on the supply side of tech. It creates incentives, it builds infrastructure — the labs, test beds, it builds all of that stuff. It creates incentives that we've done with tax credits, subsidies and grant programs. What it is struggling to do is figure out how it can help on the demand side. It can tell you it needs warships, like right now. It needed them like a week ago, it needs them over the next year, or six months. It's also good at telling you in 15 years, this is how we think warfare is going to change. What it struggles to tell you is the in-between, because the in-between is where the tech stuff comes in. So when you say that you are trying to prioritize technology, what you're doing is you're prioritizing stuff that is in laboratories today. They're in university labs, they're in federal labs. They're going through proof of concept. They're going through early-stage validation. What that cohort needs to develop is what problem do you need to solve in like six years, seven years.
It takes somewhere between five to eight years on average for some of these hard technologies to come to market. What you need is a demand signal sitting there saying, 'I don't need this warship now, but in seven years, I need my warships to have this capability.' And that's the missing piece. If we could get our government to start articulating that sort of demand, that could go a long way in helping develop technologies, de-risking them, and you'll be signaling that there's a customer for these things, which means that a bunch of VC guys will start crowding, because that's what VCs care about. They care about, do you have a path to get a customer?

What book most shaped your conception of the future?

[Laughs] I've forgotten how to read — my attention span is now three-minute-long YouTube videos. (Note: He later said the book that shaped his concept of the future was 'The Long Game' by Rush Doshi.)

What has surprised you the most this year?

I think what has surprised me the most this year is how easily and quickly things that we thought could not be changed are changing. And you know, you can take that both in a positive spirit and a negative spirit. When I was in the private sector, there were certain things that you feel are sort of off limits, both good and bad. There's a certain way of doing things, and if you stray beyond that, it's either illegal or it's immoral, or you're gonna get jeered by your peers. I definitely felt that in the government as well. There are certain things — even with something like CHIPS, these big investment programs — there were still spoken and unspoken things that you could do, things that you could not do, and I ran up against many of them. What I find surprising is how quickly many of those things are falling by the wayside. Changing the way federal agencies work, changing the way our allied relationships work, changing the way the trade regime works. In a broad sense, it's good, because it tells us that this country is capable of moving quickly. It does show you that if we need to, we can move. What I'm looking forward to, now that we've shown that you can move in big ways, is that we, including companies, can now add an end state to it and say, OK, we really need to be able to move in a big way to solve this problem: completely diversify our supply chains away from adversaries, completely have a clean AI tech stack in the next three years. I left government thinking about our inability to move quickly. So I'm glad to see it — I'm not happy with all of it — but I'm glad to see we can.

Tech's heavy emissions footprint

Carbon emissions from the operations of the world's leading tech companies surged 150 percent between 2020 and 2023, according to a report from the United Nations' digital agency. Compared to a 2020 baseline, Amazon's operational emissions grew 182 percent in 2023, Microsoft's grew 155 percent, Meta's increased 145 percent, and Alphabet's grew 138 percent. This was all for 2023, the last year for which complete data is available. Demand for energy-intensive artificial intelligence and data centers has only surged since then.

Just 10 tech companies accounted for half of the industry's electricity demand in 2023, according to the report. Those are China Mobile, Amazon, Samsung Electronics, China Telecom, Alphabet, Microsoft, TSMC, China Unicom, SK Hynix and Meta. Overall, however, the tech sector is a relatively small player in global emissions.
The 166 companies covered in the report accounted for 0.8 percent of all global energy-related emissions in 2023, it concluded.

Anthropic opposes AI moratorium

Anthropic CEO Dario Amodei took what looked like a bold, independent stance on federal AI laws yesterday — but was it really so bold? In a New York Times op-ed, Amodei came out against the 10-year moratorium on state AI laws that Congress is proposing. He argued the moratorium is 'far too blunt an instrument,' and instead recommended that Congress first pass a federal transparency law. A tech CEO calling for federal regulation of his own industry? It's almost like 2023 again.

But several critics have pointed out that this wasn't quite such a disinterested stance. The federal law he's looking for would — in his proposal — pre-empt all those inconvenient state laws. 'If a federal transparency standard is adopted,' Amodei wrote, 'it could then supersede state laws, creating a unified national framework.' Former OpenAI researcher Steven Adler critiqued the idea in an X post: 'Anthropic's CEO only says he wants regulation so he seems responsible. He knows there's no risk he'll actually get regulated.'

And there's an argument that the law wouldn't change much. As Amodei himself notes, major AI companies like Google and OpenAI already have self-imposed transparency requirements. So does Anthropic – the company recently disclosed that its model tried to blackmail a user in a test run.

DFD asked Anthropic about the criticisms. The company responded by clarifying that the transparency standard would mainly supersede state laws mitigating catastrophic AI risks, like cyberattacks. Amodei cautions that companies may abandon their transparency measures as their models get more complex, so the federal law might be necessary.

Even so, current state AI laws have more teeth and specificity than the federal transparency standard that Amodei is proposing. South Dakota imposes civil and criminal liabilities on election deepfakes. Tennessee law prevents AI from impersonating musicians. New Hampshire prohibits state agencies from using AI to surveil the public. Alondra Nelson, a key architect of federal AI policy under President Joe Biden, wrote to DFD: '[A] federal requirement for industry to provide more information is a good foundation for states' laws to build upon, but it cannot replace them.'

Amodei frames his proposal as a compromise between the goals of states and the federal government. In such a bargain, the big winner could be an industry that is already used to sliding through those gaps.

Cyber cuts are freaking out China watchers

Politico

June 5, 2025



With help from Anthony Adragna and Aaron Mak

More than 1,000 cybersecurity professionals have either left or are set to walk off their jobs in the federal government in the coming months, as the Department of Government Efficiency initiative drives layoffs and buyouts across agencies.

The timing could not be worse: Staff numbers are plummeting just as China is ramping up its cyberattacks — and these efforts have soared in recent years. These operations include hacking group Volt Typhoon, found to have burrowed widely into critical infrastructure since at least 2022, with experts warning U.S. water systems and transportation networks have been compromised. And they also include Salt Typhoon, discovered to be in U.S. telecom networks last year.

Together, these ramped-up hacks from government-backed Chinese groups amount to advance work for sophisticated war, said retired Rear Admiral Mark Montgomery, current senior director at the Foundation for Defense of Democracies. 'As a military planner, this is what I called operational preparation of the battlefield,' Montgomery said. 'China has continued to accelerate their efforts to gain access into U.S. and allied critical infrastructures and we are still playing a defensive game of trying to identify and remove [them].'

The cuts affect a cross-section of the federal cyber army. The Cybersecurity and Infrastructure Security Agency, a part of the Department of Homeland Security, is expecting to lose about 1,000 employees, amounting to about a third of its personnel, as well as its top leadership and programs around election security. The agency has been in President Donald Trump's crosshairs since the cyber chief he appointed, Chris Krebs, said the 2020 election was secure. Trump fired Krebs as a result. The State Department's cyber bureau is set to be split up in a reorganization of the office. The Office of the National Cyber Director at the White House and U.S. Cyber Command are without Senate-confirmed leaders. The Defense Information Systems Agency, which secures the Pentagon's IT and telecommunications infrastructure, is also set to lose about 10 percent of its workforce, as part of Defense Secretary Pete Hegseth's drive to reduce the DOD's civilian workforce by between 5 and 8 percent.

Lawmakers from both parties are sounding the alarm. Sen. Josh Hawley (R-Mo.) said during a Senate hearing Thursday that Salt Typhoon hackers still 'have unlimited access to our voice messages, to our telephone calls,' describing it as 'astounding.' A group of House Democrats led by Rep. Ritchie Torres (D-N.Y.) sent a letter Thursday to both Director of National Intelligence Tulsi Gabbard and Homeland Security Secretary Kristi Noem asking what has been done to respond to Salt Typhoon. The lawmakers wrote that agency personnel cuts showed that 'instead of rising to meet the moment, the Trump administration seems intent on dismantling the core institutions responsible for cyber defense.' The ODNI and DHS did not immediately respond to requests for comment. Noem told cyber experts at the RSA Conference in San Francisco in April to 'just wait until you see what we are able to do' on cyber, noting that 'there are reforms going on' around the topic.

Last year provided a case study for the threat when the Chinese government hacking group Salt Typhoon was discovered to have penetrated U.S. telecommunications systems, including devices belonging to then-candidate Trump and his running mate JD Vance.
The breach was so vast that Senate Intelligence Committee Vice Chair Mark Warner (D-Va.), a former telecoms executive, estimated earlier this year that it would take '50,000 people and a complete shutdown of the network for 12 hours' to fully weed out Chinese hackers from U.S. telecommunications systems. Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, told POLITICO in a recent interview that 'China is just increasing the pace of what they're doing,' noting that the nation is 'just the biggest, broadest threat out there.'

Relief seems a long way away. The Senate Homeland Security Committee held a nomination hearing Thursday for Sean Cairncross as the next national cyber director at the White House. Cairncross has virtually no experience in cyber. He previously led the Millennium Challenge Corporation and worked in various leadership roles at the Republican National Committee. The nomination of Sean Plankey to lead CISA is still pending. Plankey is a former cyber official at the Energy Department and on the National Security Council. Sen. Ron Wyden (D-Ore.) has blocked Plankey's confirmation vote in the full Senate until CISA publicly releases a 2022 report on telecom vulnerabilities.

Jim Lewis, distinguished fellow at the Center for European Policy Analysis and a Washington cyber expert, said that it was understandable that the new administration would take time to establish its cyber policies, and anticipated that agencies might stabilize when new funding becomes available after the fiscal year ends in September. But he said the gap until then leaves a dangerous opening. 'Will the Chinese figure out that they have an opportunity and do they need to take it? I think right now the answer is no,' Lewis said of the delay. 'But that's three months of open season.'

An Apple appeals setback

A federal appeals court rejected Apple's emergency request to halt court-ordered changes to the company's app store — primarily an order that it can't charge commissions for certain payments. Wednesday's order from the 9th U.S. Circuit Court of Appeals said it considered a host of factors in denying Apple's request for a stay, including whether Apple was likely to succeed in its appeal, whether it would be irreparably harmed absent court action and whether a stay of the lower court's order would be in the public interest. Briefs in the appeal are due this summer. 'After reviewing the relevant factors, we are not persuaded that a stay is appropriate,' the court wrote.

U.S. District Judge Yvonne Gonzalez Rogers of the Northern District of California previously ruled Apple could no longer charge a commission when a link took users to a third-party payment app. The judge said in late April that Apple violated a prior injunction and that a company executive 'outright lied' under oath. 'We are disappointed with the decision not to stay the district court's order, and we'll continue to argue our case during the appeals process,' an Apple spokesperson said. 'As we've said before, we strongly disagree with the district court's opinion. Our goal is to ensure the App Store remains an incredible opportunity for developers and a safe and trusted experience for our users.'

State AI rules threaten national security

When House Speaker Mike Johnson defended the controversial 10-year moratorium on enforcement of state AI laws in the spending bill, he invoked national security as the reason.
'We have to be careful not to have 50 different states regulating AI, because it has national security implications, right?' Johnson told POLITICO's Meredith Lee Hill and Anthony Adragna on Wednesday. The speaker's office declined to elaborate when DFD followed up. Republicans have generally justified the moratorium — and potentially preempting state laws — as crucial for business development. So why does this now matter to national security?

Johnson's national security argument has been emerging on the edges of the current reconciliation debate. The House's Bipartisan Artificial Intelligence Task Force floated a moratorium in a report last year, suggesting that states do not have the expertise to evaluate the national security ramifications of their AI legislation. Daniel Castro, vice president of the Information Technology and Innovation Foundation, wrote last week that the patchwork of state laws disrupts the supply chains enabling the Department of Defense to implement AI. James Czerniawski, senior policy analyst for Americans for Prosperity, also endorsed Johnson's national security framing on Wednesday, citing the tight race with China for AI leadership.

Is it a real concern, or just expediency? National security has been a reliable argument for lawmakers struggling to get a provision over the line, from the TikTok ban to the CHIPS Act. Whatever the rationale, whether the moratorium survives the Senate parliamentarian is the real question now.
