
Latest news with #AISI

Republicans Are Trying to Block My State From Regulating AI

Yahoo

04-06-2025

  • Business
  • Yahoo


I helped write, pass, and protect the nation's first law to regulate artificial intelligence. As part of Donald Trump's tax bill, Republicans in Washington are now trying to overturn our law in Colorado and preempt any similar efforts around the country, even as the need to regulate AI only grows.

The Biden administration formed the U.S. Artificial Intelligence Safety Institute (AISI) in 2023 to identify potential major risks AI could cause. These federal guidelines nudged tech companies toward embracing regulation. Anticipating Congress's pattern of inaction, Colorado collaborated in 2024 with lawmakers in more than 30 other states to attempt a new, uniform AI regulation. In the final days of Colorado's 2024 legislative session, I was scrambling to whip votes for what would become the first artificial intelligence regulation in the nation, SB24-205, led by state Senator Robert Rodriguez.

We recognized that the growing use of artificial intelligence, often unseen by the consumer, in important aspects of daily life, like health care, finance, and criminal justice, should be a significant concern because of its inherent potential for bias. AI systems learn from vast datasets, and if those datasets reflect existing societal prejudices, whether in gender stereotypes, historical lending practices, or medical research, the AI will not only replicate but often magnify those biases, potentially leading to discriminatory outcomes. This can result in unfair loan denials, misdiagnoses, wrongful arrests, or limited opportunities for just about anybody.

Republicans' 'One Big Beautiful Bill,' their effort to extend and expand Trump's 2017 tax cuts for the wealthy, contains a dangerous provision to prevent states from enacting AI regulation of any kind for 10 years. The measure would preempt our state law in Colorado and leave regulating AI solely to the federal government, which won't do it.
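The bias dynamic described here can be made concrete with a toy sketch (purely illustrative, not from the article and not any real lending system): a trivial "model" that learns per-group approval rates from biased historical loan records will reproduce the historical disparity, and the act of thresholding even hardens a rate gap into an always/never decision.

```python
# Hypothetical illustration: a toy "model" trained on biased historical
# loan records. The group names and numbers are invented for the example.

def train_approval_model(history):
    """Learn per-group approval rates from (group, approved) records."""
    totals, approvals = {}, {}
    for group, approved in history:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    # The "model" approves a group only if its historical rate exceeds 50%:
    # a 50-point gap in past treatment becomes a categorical rule.
    return lambda group: rates[group] > 0.5

# Biased history: equally qualified applicants, unequal past treatment.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

model = train_approval_model(history)
print(model("A"))  # True  -- group A keeps getting approved
print(model("B"))  # False -- group B keeps getting denied
```

Nothing in the training data says group B is less creditworthy; the model simply inherits, and sharpens, the pattern it was shown.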
That should have you very concerned, because the enormous AI growth we've witnessed in just the last couple of years will grow exponentially greater over the next decade. As the Chair of the Joint Technology Committee in Colorado's General Assembly, I know what will happen if Republicans enact such a provision into law: big corporations and tech bros will get uber rich, and we'll become victims of their AI experiments.

It's not easy convincing colleagues to take interest in a wonky, nuanced topic like AI, especially with the legislative clock running out, but with carefully negotiated amendments and the support of the Colorado AFL-CIO, we mustered enough votes to become the first and only state to pass the new law. State lawmakers knew very well that we had to take action, because the feds are notorious for inaction. Colorado's bill, and other legislative efforts happening in tandem, were meant to set up a uniform policy any state could adopt, avoiding the dreaded 'patchwork' of state laws in lieu of federal policy. After our bill passed in 2024, we created a task force, prepared a report, held stakeholder meetings, and drafted legislation, specifically bill SB25-318, to improve compliance, accountability, and processes. We were intent on making a great law other states could model.

When the second Trump administration gutted the Biden-initiated AISI guidelines, the tech industry was no longer interested in collaborating. The 'safety first' approach was lost, and a free-for-all attitude was embraced. On the third-to-last day of the 2025 session, we tried to push forward a bill to improve the AI law, but it became apparent that the tech industry was intent on derailing our effort. Led by venture capitalists, or VCs, and their lobbyists, the industry stirred a panic in sectors like health care, education, and small business, prodding them to call on us to 'do something about AI.'
By 'doing something,' they meant extending the implementation of the current law by an additional year, so they would have more time for compliance. As a seasoned legislator, I immediately saw this as an obvious lobbying tactic to buy time to scheme the law out of ever being enacted. With my legislative colleagues shaken and swayed by the onslaught of lobbying, Senator Robert Rodriguez was unable to fend off the implementation-deadline change, and he was forced to kill the bill.

Unsatisfied, the tech industry was still searching for a way out of any regulation. The next day, a seemingly minor bill, SB25-322, was on the calendar for debate. It was basically a simple provision the Attorney General needed for a lawsuit. I got called into several meetings with legislative leaders about how to quell the manufactured VC panic. They proposed I become the 'hero' by running an amendment to SB25-322 to again attempt to push out implementation of our original AI law. I am no hero to big corporations. I fight for the underdog, the worker, and everyday citizens who don't have billions of dollars to manipulate the legislative process. I said no to their proposal and offered a two-month extension to give us time in 2026 to try another bill. That wasn't acceptable, so I went to war: I was not going to let anyone attach an amendment, and if they did, I'd kill the bill.

The unwritten rule in the state House and Senate is the midnight deadline. Every day stops at midnight. Nobody knows what actually happens if we work a minute past 12 a.m., and nobody has ever attempted to find out. The penultimate day of the session embodies the expression, 'If it weren't for the last minute, nothing would get done.' We finally got to SB25-322 at 10:40 p.m. They called for a vote to limit debate to one hour, which passed. A representative quickly attached the amendment I had sworn to fight.
Even if I filibustered for the hour, I would still need to fill the remaining 20 minutes to kill the bill. I blathered for the whole hour, and now it was 11:40 p.m. When time was up, they succeeded in getting their amendment on the bill, and the bill passed. Most people don't know that the work of our legislative debate is not complete until we adopt the Committee of the Whole Report, which we endearingly call the COW. The COW is intended for fixing mistakes, though it has often been used nefariously. I don't take this lightly, but it was my final strategy for success: using amendments I had run during my filibuster, I would argue the bill hadn't actually passed.

To stop me, the Majority Leader moved for Rule 16, which calls 'the question.' This means that everything about to go down would be done without any debate; it would allow them to take quick votes on all of it, and 15 minutes was sufficient time. I felt a moment of despair. My efforts would all be for naught if Rule 16 passed. Then we voted, and it failed. I was now able to bring my amendments, run the clock to midnight, and kill the bill. Just seconds before the clock hit midnight, I was interrupted by the Majority Leader's call to adjourn the day's work. The bill was dead. I had saved the country's only AI law from certain demise. The Speaker was angry. There was a buzz of puzzlement and excitement. In the days and weeks that followed, those who paid attention recalled the events as legislative heroism.

Do I believe that Congress will ever pass meaningful AI regulation? No. There is a lack of courage to stand up for what's right, especially when big money gets involved. But it can be done. I know, because I did it. Unfortunately, not all elected officials have the intestinal fortitude to filibuster their own party to do what's right. So next time there's an election, do your homework so you can distinguish the true public servants from the self-serving politicians.

The Wiretap: Trump Says Goodbye To The AI Safety Institute

Forbes

03-06-2025

  • Politics
  • Forbes


The Wiretap is your weekly digest of cybersecurity, internet privacy and surveillance news. To get it in your inbox, subscribe here.

The Trump administration has announced plans to reorganize the U.S. AI Safety Institute (AISI) into the new Center for AI Standards and Innovation (CAISI). Set up by the Biden administration in 2023, AISI operated within the National Institute of Standards and Technology (NIST) to research risks in widely used AI systems like OpenAI's ChatGPT or Anthropic's Claude. The move to dismantle the body had been expected for some time. In February, as JD Vance headed to France for a major AI summit, his delegation did not include anyone from the AI Safety Institute, Reuters reported at the time. The agency's inaugural director, Elizabeth Kelly, had stepped down earlier in the month.

The Commerce Department's announcement marking the change is thin on details about the reorganization, but the aim appears to be favoring innovation over red tape. 'For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards. CAISI will evaluate and enhance U.S. innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards,' said Secretary of Commerce Howard Lutnick. What can be gleaned from Lutnick's paradoxical phrasing (national security-focused standards are limiting, but America needs national security-focused standards) is that it's very difficult to tell just how much the new body will differ from the old one. The announcement goes on to state that CAISI will 'assist industry to develop voluntary standards' in AI, which sums up much of what the old body did.
Similarly, just as the AI Safety Institute was tasked with assessing risks in artificial intelligence, CAISI will 'lead unclassified evaluations of AI capabilities that may pose risks to national security.' CAISI will also still be part of NIST. And, despite Lutnick's apparent disdain for standards, the Commerce press release concludes that CAISI will 'ensure U.S. dominance of international AI standards.' That there's little obvious change between the Institute and CAISI might alleviate any immediate concern that the U.S. is abandoning commitments to keep AI safe. Earlier this year, a coalition of companies, nonprofits and academics called on Congress to codify the Institute's existence before the year was up. That included major players like OpenAI and Anthropic, both of which had agreements to work with the agency on research projects. What happens to those is now up in the air. The Commerce Department hadn't responded to a series of questions at the time of publication, and NIST declined to comment.

Got a tip on surveillance or cybercrime? Get me on Signal at +1 929-512-7964.

Unknown individuals have impersonated President Trump's chief of staff Susie Wiles in calls and texts to Republican lawmakers and business executives. Investigators suspect the perpetrators used artificial intelligence to clone Wiles' voice. One lawmaker was asked by the impersonator to assemble a list of individuals for potential presidential pardons, according to the Wall Street Journal. It's unclear what motives lay behind the impersonation, or how the perpetrators pulled the stunt off. Wiles had told confidantes that some of the contacts from her personal phone had been stolen by a hacker.

A Texas police officer searched Flock Safety's AI-powered surveillance camera network to track down a woman who had carried out a self-administered abortion, 404 Media reports.
Because the search was conducted across different states, experts raised concerns about police using Flock to track down individuals who get abortions in states where the procedure is legal before returning home to a state where it is illegal. The police said they were simply worried about the woman's safety.

Nathan Vilas Laatsch, a 28-year-old IT specialist at the Defense Intelligence Agency, has been arrested and charged with leaking state secrets after becoming upset with the Trump administration. The DOJ did not specify which country Laatsch allegedly tried to pass secrets to, but sources told the Washington Post it was Germany. He was caught by undercover agents posing as interested parties, according to the DOJ.

Europol announced it had identified more than 2,000 links 'pointing to jihadist and right-wing violent extremist and terrorist propaganda targeting minors.' The agency warned that it had seen terrorists using AI to generate content like short videos and memes 'designed to resonate with younger audiences.'

A 63-year-old British man, John Miller, was charged alongside a Chinese national by the Department of Justice with conspiring to ship missiles, air defense radar, drones and unspecified 'cryptographic devices' to China. They are also charged with attempting to stalk and harass an individual who was planning protests against Chinese President Xi Jinping.

'What if Superintelligent AI Goes Rogue?' Why We Need a New Approach to AI Safety

Newsweek

19-05-2025

  • Newsweek


You will hear about "superintelligence" at an increasing rate over the coming months. Though it would be the most advanced AI technology ever created, its definition is simple: superintelligence is the point at which AI passes human intelligence in general cognitive and analytic functions. As the world competes to create a true superintelligence, the United States government has begun removing previously implemented guardrails and regulations. The National Institute of Standards and Technology sent updated orders to the U.S. Artificial Intelligence Safety Institute (AISI) directing it to remove any mention of the phrases "AI safety," "responsible AI," and "AI fairness." In the wake of this change, Google's Gemini 2.5 Flash AI model became more likely to generate text that violates its safety guidelines in the areas of "text-to-text safety" and "image-to-text safety."

If Superintelligence Goes Rogue

We are nearing the Turing horizon, where machines can think and surpass human intelligence. Think about that for a moment: machines outsmarting and being cleverer than humans. We must consider all worst-case scenarios so we can plan and prepare to prevent them from ever occurring. If we leave superintelligence to its own devices, Stephen Hawking's prediction that it would be the final invention of man could come true.

Imagine if any AI or superintelligence were coded and deployed with no moral guidelines. It would then act only in the interest of its end goal, no matter the damage it could do. Without these morals set and input by human engineers, the AI would act with unmitigated biases. If such an AI were deployed with the purpose of maximizing profit on flights from London to New York, what would be the unintended consequences? Not selling tickets to anyone in a wheelchair? Only selling tickets to the people who weigh the least? Not selling to anyone who has food allergies or anxiety disorders?
It would maximize profits without taking into account any factors other than who can pay the most, take up the least time in boarding and deplaning, and cause the least fuel use. Secondly, what if we allowed an AI superintelligence to be placed in charge of all government spending, to maximize savings and cut expenses? Would it look to take spending away from people or entities that don't supply tax revenue? That could mean removing spending from public school meal programs for impoverished children, removing access to health care for people with developmental disabilities, or cutting Social Security payments to even out the deficit. Guardrails and guidelines must be written and encoded by people to ensure no potential harm is done by AI.

A Modern Approach Is Needed for Modern Technology

The law is lagging behind technology globally. The European Union (EU) has ploughed ahead with the EU AI Act, which at a surface glance appears positive, but 90 percent of this iceberg lurks beneath the surface, potentially rife with danger. Its onerous regulations put every single EU company at a disadvantage against technological competitors globally. It offers little in the way of protections for marginalized groups and presents a lack of transparency in the fields of policing and immigration. Europe cannot continue on this path and expect to stay ahead of countries that are willing to win at any cost.

What needs to happen? AI needs to regulate AI. The inspection body cannot be humans. Using payment card industry (PCI) compliance as a model, there needs to be a global board of AI compliance that meets on a regular basis to discuss the most effective and safe ways AI is used and deployed. Those guidelines then become the basis for any company to have its software deemed AI Compliant (AIC). The guidelines are written by humans, but enforced by AI itself.
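The certify-or-resubmit loop the author proposes can be sketched in a few lines. This is a hypothetical illustration only: the guideline names, the rules, and the simple string checks stand in for whatever a real AIC board and its automated auditor would actually specify.

```python
# Hypothetical sketch of the proposed "AI Compliant" (AIC) workflow:
# humans write the guideline parameters; an automated checker examines
# every line of submitted code and either certifies it or reports
# vulnerabilities back for resubmission. All names here are invented.

GUIDELINES = {  # human-written configuration parameters
    "forbidden_calls": ["eval(", "exec("],
    "required_markers": ["# safety-reviewed"],
}

def audit(source_lines, guidelines):
    """Examine every line; return a list of findings (empty means pass)."""
    findings = []
    for lineno, line in enumerate(source_lines, 1):
        for call in guidelines["forbidden_calls"]:
            if call in line:
                findings.append(f"line {lineno}: forbidden call {call!r}")
    for marker in guidelines["required_markers"]:
        if not any(marker in line for line in source_lines):
            findings.append(f"missing required marker {marker!r}")
    return findings

def certify(source_lines, guidelines):
    """Certify as AIC, or report vulnerabilities and await resubmission."""
    findings = audit(source_lines, guidelines)
    return ("AIC", []) if not findings else ("RESUBMIT", findings)

status, _ = certify(["# safety-reviewed", "x = input()"], GUIDELINES)
print(status)  # AIC
status, findings = certify(["y = eval(input())"], GUIDELINES)
print(status)  # RESUBMIT
```

The point of the sketch is the shape of the loop, not the rules themselves: every line is examined, and a submission either passes in full or comes back with a concrete list of what to fix.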
Humans need to write the configuration parameters for the AI program, and the AI program itself needs to certify that the technology meets all guidelines, or report back vulnerabilities and wait for a resubmission. Once all guidelines are met, a technology is passed as AIC. This technology cannot be spot-checked like container ships coming to port; every single line of code must be examined. Humans cannot do this; AI must.

We are on the precipice of two equally possible futures. One is a world where bad actors globally are left to use AI as a rogue agent to destabilize the global economy and rig the world to their advantage. The other is one where commonsense compliance is demanded of any company wanting to sell technology, by a global body of humans using AI as the tool to monitor and inspect all tech. This levels the field globally and ensures that those who win are those who are smartest, most ethical, and most deserving of getting ahead.

Chetan Dube is an AI pioneer and founder and CEO of Quant. The views expressed in this article are the writer's own.

Understanding shift from AI Safety to Security, and India's opportunities

Indian Express

08-05-2025

  • Business
  • Indian Express


Written by Balaraman Ravindran, Vibhav Mithal and Omir Kumar

In February 2025, the UK announced that its AI Safety Institute would become the AI Security Institute. This triggered several debates about what the change means for AI safety. As India prepares to host the AI Summit, a key question will be how to approach AI safety.

The What and How of AI Safety

In November 2023, more than 20 countries, including the US, UK, India, China, and Japan, attended the inaugural AI Safety Summit at Bletchley Park in the UK. The Summit took place against the backdrop of the increasing capabilities of AI systems and their integration into multiple domains of life, including employment, healthcare, education, and transportation. Countries acknowledged that while AI is a transformative technology with potential for socio-economic benefit, it also poses significant risks through both deliberate and unintentional misuse. A consensus emerged among the participating countries on the importance of ensuring that AI systems are safe and that their design, development, deployment, or use does not harm society, leading to the Bletchley Declaration. The Declaration further advocated for developing risk-based policies across nations, taking into account national contexts and legal frameworks, while promoting collaboration, transparency from private actors, robust safety evaluation metrics, and enhanced public sector capability and scientific research. It was instrumental in bringing AI safety to the forefront and laid the foundation for global cooperation. Following the Summit, the UK established the AI Safety Institute (AISI), with similar institutes set up in the US, Japan, Singapore, Canada, and the EU. Key functions of AISIs include advancing AI safety research, setting standards, and fostering international cooperation.
India has also announced the establishment of its AISI, which will operate on a hub-and-spoke model involving research institutions, academic partners, and private sector entities under the Safe and Trusted pillar of the IndiaAI Mission.

UK's Shift from Safety to Security

The establishment of AISIs in various countries reflected a global consensus on AI safety. However, the discourse took a turn in February 2025, when the UK rebranded its Safety Institute as the Security Institute. The press release noted that the new name reflects a focus on risks with security implications, such as the use of AI in developing chemical and biological weapons, cybercrimes, and child sexual abuse. It clarified that the Institute would not prioritise issues like bias or free speech but would focus on the most serious risks, helping policymakers ensure national safety. The UK government also announced a partnership with Anthropic to deploy AI systems for public services, assess AI security risks, and drive economic growth.

India's Understanding of Safety

Given the UK's recent developments, it is important to explore what AI safety means for India. Firstly, when we refer to AI safety, i.e., making AI systems safe, we usually mean mitigating harms such as bias, inaccuracy, and misinformation. While these are pressing concerns, AI safety should also encompass broader societal impacts, such as effects on labour markets, cultural norms, and knowledge systems. One of the Responsible AI (RAI) principles laid down by NITI Aayog in 2021 hinted at this broader view: 'AI should promote positive human values and not disturb in any way social harmony in community relationships.' The RAI principles also address equality, reliability, non-discrimination, privacy protection, and security, all of which are relevant to AI safety. Thus, adherence to RAI principles could be one way of operationalising AI safety. Secondly, safety and security should not be seen as mutually exclusive.
We cannot focus on security without first ensuring safety. For example, in a country like India, bias in AI systems could pose national security risks by inciting unrest. As we aim to deploy 'AI for All' in sectors such as healthcare and education, it is essential that these systems are not only secure but also safe and responsible. A narrow focus on security alone is insufficient.

Lastly, AI safety must align with AI governance and be viewed through a risk mitigation lens, addressing risks throughout the AI system lifecycle. This includes safety considerations from the conception of the AI model/system, through data collection, processing, and use, to design, development, testing, deployment, and post-deployment monitoring and maintenance. India is already taking steps in this direction. The Draft Report on AI Governance by IndiaAI emphasises the need to apply existing laws to AI-related challenges while also considering new laws to address legal gaps. In parallel, other regulatory approaches, such as self-regulation, are also being explored.

Given the global shift from safety to security, the upcoming AI Summit presents India with an important opportunity to articulate its unique perspective on AI safety, both in the national context and as part of a broader global dialogue.

Ravindran is Head, Wadhwani School of Data Science and AI & CeRAI; Mithal is Associate Research Fellow, CeRAI (& Associate Partner, Anand and Anand); and Kumar is Policy Analyst, CeRAI. CeRAI is the Centre for Responsible AI, IIT Madras.

EU to impose counter-tariffs on US goods as Trump's metal tariffs begin

BBC News

12-03-2025

  • Business
  • BBC News


Tariffs imposed by US President Donald Trump on imports of steel and aluminium have taken effect, in a move likely to increase tensions with some of America's largest trading partners. They sparked an immediate response from the European Union, which said it would impose counter-tariffs on billions of euros' worth of US goods. Trump hopes the tariffs will boost US steel and aluminium production, but critics say they will raise prices for US consumers and dent economic growth; US markets sank on Monday and Tuesday in response to fears of a recession. On Tuesday, Trump made a U-turn on his decision to double the tariffs on Canada specifically, in response to a surcharge Ontario had placed on electricity. The tariffs mean US businesses wanting to bring steel and aluminium into the country will need to pay a 25% tax on them.

The EU announced retaliatory tariffs on Wednesday in response, on goods worth €26bn (£22bn). They will be partially introduced on 1 April and fully in place on 13 April. European Commission President Ursula von der Leyen said she "deeply regrets this measure", adding that tariffs are "bad for business and worse for consumers". "They are disrupting supply chains. They bring uncertainty for the economy. Jobs are at stake, prices up; nobody needs that, on both sides, neither in the EU nor the US." She said the EU's response was "strong but proportionate" and that the EU remains "open to negotiations". However, the American Iron and Steel Institute (AISI), a group representing US steelmakers, welcomed the tariffs, saying they would create jobs and boost domestic steel manufacturing. The group's president, Kevin Dempsey, said the move closed a system of exemptions, exclusions and quotas that had allowed foreign producers to avoid tariffs. "AISI applauds the president's actions to restore the integrity of the tariffs on steel and implement a robust and reinvigorated program to address unfair trade practices," Mr Dempsey added.

The US is a major importer of aluminium and steel, and Canada, Mexico and Brazil are among its largest suppliers of the metals.

'No exceptions'

Other countries also responded immediately to the move. Trade Secretary Jonathan Reynolds said he was disappointed and that "all options are on the table" to respond in the national interest. Australia's Prime Minister, Anthony Albanese, said the Trump administration's decision to go ahead with the new tariffs was "entirely unjustified". Albanese, who had been trying to secure an exemption from the tariffs, said Australia would not impose retaliatory duties because such a move would only drive up prices for Australian consumers. Meanwhile, Canada's Energy Minister, Jonathan Wilkinson, told CNN his country would retaliate, but added that Canada was not looking to escalate tensions. Canada is one of America's closest trade partners and the largest exporter of steel and aluminium to the US. In 2018, during his first term as president, Trump imposed import tariffs of 25% on steel and 10% on aluminium, but carve-outs were later negotiated for many countries. This time the Trump administration has signalled that there will be no exemptions.

British steel

Gareth Stace, director general of industry body UK Steel, said the US move was "hugely disappointing". Some steel companies' contracts had already been cancelled or put on hold, he said, adding that customers in the US would need to pay £100m per year extra in the tax. He said he shared Trump's concerns about cheap steel flooding the market, but urged him to work with the UK rather than against it. "Surely President Trump realises that we are his friend, not his enemy, and our valued customers in the US are our partners; they are not our enemies," he said. Tariffs will "hit us hard" at a time when imports of steel into the UK are rising and the industry is "struggling" with energy prices.

He called on the UK government to "rapidly boost and bolster our trade defences", as the EU has done, "to make sure the steel that no longer goes to the US" does not flood the UK market, and to negotiate an exemption from US tariffs.

Recession fears

Michael DiMarino runs Linda Tool, a Brooklyn company that makes parts for the aerospace industry. Everything he makes involves some kind of steel, much of which comes from American mills. "If I get higher prices, I pass them on to my customers. They get higher prices, they pass it on to the consumer," Mr DiMarino said, adding that he supports the call for increased manufacturing in the US but warning that the president's moves could backfire. The American Automotive Policy Council, a group representing car giants Ford, General Motors and Stellantis, echoed such worries. The organisation's president, Matt Blunt, said they "are concerned that specifically revoking the exemptions for Canada and Mexico will add significant costs" to car makers' suppliers.

Some economists warn that the tariffs could help the US steel and aluminium industries but hurt the wider economy. "It protects [the steel and aluminium] industries but hurts downstream users of their products by making them more expensive," said Bill Reinsch, a former Commerce Department official now at the Center for Strategic and International Studies. Fears over the economic cost of Trump's trade tariffs have sparked a selloff in US and global stock markets, which accelerated this week after the US president refused to rule out the prospect of an economic recession. Meanwhile, research firm Oxford Economics said in a report that it had lowered its US growth forecast for the year from 2.4% to 2%, and made even steeper adjustments to its outlook for Canada and Mexico. "Despite the downgrade, we still expect the US economy to outperform the other major advanced economies over the next couple of years," the report added. Additional reporting by Michelle Fleury in New York.
