
Latest news with #FeiFeiLi


California lawmaker behind SB 1047 reignites push for mandated AI safety reports

TechCrunch

09-07-2025

  • Business
  • TechCrunch

California lawmaker behind SB 1047 reignites push for mandated AI safety reports

California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world's largest AI companies to publish safety and security protocols and issue reports when safety incidents occur. If signed into law, California would be the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California's governor then called for a group of AI leaders — including the leading Stanford researcher and co-founder of World Labs, Fei-Fei Li — to form a policy group and set goals for the state's AI safety efforts.

California's AI policy group recently published its final recommendations, citing a need for 'requirements on industry to publish information about their systems' in order to establish a 'robust and transparent evidence environment.' Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.

'The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,' Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve — ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California's AI industry.

'These are concerns that my organization and others have been talking about for a while,' said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group Encode, in an interview with TechCrunch. 'Having companies explain to the public and government what measures they're taking to address these risks feels like a bare minimum, reasonable step to take.'

The bill also creates whistleblower protections for employees of AI labs who believe their company's technology poses a 'critical risk' to society — defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage. Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

With the new amendments, SB 53 is now headed to the California State Assembly's Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to pass through several other legislative bodies before reaching Governor Newsom's desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation — an attempt to limit a 'patchwork' of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

'Ensuring AI is developed safely should not be controversial — it should be foundational,' said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. 'Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership.'

Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they've been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model ever released, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model; a third-party study later suggested it may be less aligned than previous AI models.

SB 53 represents a toned-down version of previous AI safety bills, but it still could force AI companies to publish more information than they do today. For now, they'll be watching closely as Senator Wiener once again tests those boundaries.

California is trying to regulate its AI giants — again

The Verge

17-06-2025

  • Business
  • The Verge

California is trying to regulate its AI giants — again

Last September, all eyes were on Senate Bill 1047 as it made its way to California Governor Gavin Newsom's desk — and died there as he vetoed the buzzy piece of legislation. SB 1047 would have required makers of all large AI models, particularly those that cost $100 million or more to train, to test them for specific dangers. AI industry whistleblowers weren't happy about the veto, and most large tech companies were. But the story didn't end there. Newsom, who had felt the legislation was too stringent and one-size-fits-all, tasked a group of leading AI researchers with helping to propose an alternative plan — one that would support the development and the governance of generative AI in California, along with guardrails for its risks.

On Tuesday, that report was published. The authors of the 52-page 'California Report on Frontier Policy' said that AI capabilities — including models' chain-of-thought 'reasoning' abilities — have 'rapidly improved' since Newsom's decision to veto SB 1047. Using historical case studies, empirical research, modeling, and simulations, they suggested a new framework that would require more transparency and independent scrutiny of AI models. Their report is appearing against the backdrop of a possible 10-year moratorium on states regulating AI, backed by a Republican Congress and companies like OpenAI.

The report — co-led by Fei-Fei Li, Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence; Mariano-Florentino Cuéllar, President of the Carnegie Endowment for International Peace; and Jennifer Tour Chayes, Dean of the UC Berkeley College of Computing, Data Science, and Society — concluded that frontier AI breakthroughs in California could heavily impact agriculture, biotechnology, clean tech, education, finance, medicine and transportation. Its authors agreed it's important not to stifle innovation and to 'ensure regulatory burdens are such that organizations have the resources to comply.'

But reducing risks is still paramount, they wrote: 'Without proper safeguards… powerful AI could induce severe and, in some cases, potentially irreversible harms.'

The group published a draft version of their report in March for public comment. But even since then, they wrote in the final version, evidence that these models contribute to 'chemical, biological, radiological, and nuclear (CBRN) weapons risks… has grown.' Leading companies, they added, have self-reported concerning spikes in their models' capabilities in those areas.

The authors have made several changes to the draft report. They now note that California's new AI policy will need to navigate quickly changing 'geopolitical realities.' They added more context about the risks that large AI models pose, and they took a harder line on categorizing companies for regulation, saying a focus purely on how much compute their training required was not the best approach. AI's training needs are changing all the time, the authors wrote, and a compute-based definition ignores how these models are adopted in real-world use cases. Compute can be used as an 'initial filter to cheaply screen for entities that may warrant greater scrutiny,' but factors like initial risk evaluations and downstream impact assessment are key.

That's especially important because the AI industry is still the Wild West when it comes to transparency, with little agreement on best practices and 'systemic opacity in key areas' like how data is acquired, safety and security processes, pre-release testing, and potential downstream impact, the authors wrote. The report calls for whistleblower protections, third-party evaluations with safe harbor for researchers conducting those evaluations, and sharing information directly with the public, to enable transparency that goes beyond what current leading AI companies choose to disclose.

One of the report's lead writers, Scott Singer, told The Verge that AI policy conversations have 'completely shifted on the federal level' since the draft report. He argued that California, however, could help lead a 'harmonization effort' among states for 'commonsense policies that many people across the country support.' That's a contrast to the jumbled patchwork that AI moratorium supporters claim state laws will create.

In an op-ed earlier this month, Anthropic CEO Dario Amodei called for a federal transparency standard, requiring leading AI companies 'to publicly disclose on their company websites … how they plan to test for and mitigate national security and other catastrophic risks.'

But even steps like that aren't enough, the authors of Tuesday's report wrote, because 'for a nascent and complex technology being developed and adopted at a remarkably swift pace, developers alone are simply inadequate at fully understanding the technology and, especially, its risks and harms.'

That's why one of the key tenets of Tuesday's report is the need for third-party risk assessment. The authors concluded that risk assessments would incentivize companies like OpenAI, Anthropic, Google, Microsoft and others to amp up model safety, while helping paint a clearer picture of their models' risks. Currently, leading AI companies typically do their own evaluations or hire second-party contractors to do so. But third-party evaluation is vital, the authors say. Not only are 'thousands of individuals… willing to engage in risk evaluation, dwarfing the scale of internal or contracted teams,' but groups of third-party evaluators also have 'unmatched diversity, especially when developers primarily reflect certain demographics and geographies that are often very different from those most adversely impacted by AI.'

But if you're allowing third-party evaluators to test the risks and blind spots of your powerful AI models, you have to give them access — for meaningful assessments, a lot of access. And that's something companies are hesitant to do. It's not even easy for second-party evaluators to get that level of access. Metr, a company OpenAI partners with for safety tests of its own models, wrote in a blog post that the firm wasn't given as much time to test OpenAI's o3 model as it had been with past models, and that OpenAI didn't give it enough access to data or the models' internal reasoning. Those limitations, Metr wrote, 'prevent us from making robust capability assessments.' OpenAI later said it was exploring ways to share more data with firms like Metr.
Even an API or disclosures of a model's weights may not let third-party evaluators effectively test for risks, the report noted, and companies could use 'suppressive' terms of service to ban or threaten legal action against independent researchers that uncover safety flaws. Last March, more than 350 AI industry researchers and others signed an open letter calling for a 'safe harbor' for independent AI safety testing, similar to existing protections for third-party cybersecurity testers in other fields. Tuesday's report cites that letter and calls for big changes, as well as reporting options for people harmed by AI systems. 'Even perfectly designed safety policies cannot prevent 100% of substantial, adverse outcomes,' the authors wrote. 'As foundation models are widely adopted, understanding harms that arise in practice is increasingly important.'

There is a vast hidden workforce behind AI

Mint

09-06-2025

  • Business
  • Mint

There is a vast hidden workforce behind AI

When DeepSeek, a hotshot Chinese firm, released its cheap large language model late last year, it overturned long-standing assumptions about what it will take to build the next generation of artificial intelligence (AI). This will matter to whoever comes out on top in the epic global battle for AI supremacy. Developers are now reconsidering how much hardware, energy and data are needed. Yet another, less discussed, input in machine intelligence is in flux too: the workforce.

To the layman, AI is all robots, machines and models. It is a technology that kills jobs. In fact, there are millions of workers involved in producing AI models. Much of their work has involved tasks like tagging objects in images of roads in order to train self-driving cars and labelling words in the audio recordings used to train speech-recognition systems. Technically, annotators give data the contextual information computers need to work out the statistical associations between components of a dataset and their meaning to human beings. Indeed, anyone who has completed a CAPTCHA test, selecting photos containing zebra crossings, may have inadvertently helped train an AI.

This is the 'unsexy' part of the industry, as Alex Wang, the boss of Scale AI, a data firm, puts it. Although Scale AI says most of its contributor work happens in America and Europe, across the industry much of the labour is outsourced to poor parts of the world, where lots of educated people are looking for work. The Chinese government has teamed up with tech companies, such as Alibaba, to bring annotation jobs to far-flung parts of the country. In India the IT industry body, Nasscom, reckons annotation revenues could reach $7bn a year and employ 1m people there by 2030. That is significant, since India's entire IT industry is worth $254bn a year (including hardware) and employs 5.5m people.

Annotators have long been compared to parents, teaching models and helping them make sense of the world. But the latest models don't need their guidance in the same way. As the technology grows up, are its teachers becoming redundant?

Data annotation is not new. Fei-Fei Li, an American computer scientist known as 'the godmother of AI', is credited with firing the industry's starting gun in the mid-2000s when she created ImageNet, the largest image dataset at the time. Ms Li realised that if she paid college students to categorise the images, which was then how most researchers did things, the task would take 90 years. Instead, she hired workers around the world using Mechanical Turk, an online gig-work platform run by Amazon. She got some 3.2m images organised into a dataset in two and a half years. Soon other AI labs were outsourcing annotation work this way, too.

Over time developers got fed up with the low-quality annotation done by untrained workers on gig-work sites. AI-data firms, such as Sama and iMerit, emerged. They hired workers across the poor world. Informal annotation work continued, but specialist platforms emerged for AI work, like those run by Scale AI, which tests and trains workers. The World Bank reckons that between 4.4% and 12.4% of the global workforce is involved in gig work, including annotation for AI.

Krystal Kauffman, a Michigan resident who has been doing data work online for a decade, reckons that tech companies have an interest in keeping this workforce hidden. 'They are selling magic — this idea that all these things happen by themselves,' Ms Kauffman says. 'Without the magic part of it, AI is just another product.'
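To make concrete what this kind of labelling produces, here is a minimal, purely illustrative sketch of what a single annotation record for a road image might look like; the field names, file name and worker ID are hypothetical assumptions for illustration, not the schema of any particular platform mentioned above.

```python
# Illustrative only: a hypothetical annotation record of the kind described above,
# in which a human labeller attaches meaning (object classes and bounding boxes)
# to raw pixels so a model can learn the association.
from dataclasses import dataclass, field


@dataclass
class BoundingBox:
    label: str     # what the annotator says the object is
    x: int         # top-left corner of the box, in pixels
    y: int
    width: int
    height: int


@dataclass
class AnnotationRecord:
    image_path: str                               # the raw, unlabelled input
    annotator_id: str                             # who supplied the contextual information
    boxes: list[BoundingBox] = field(default_factory=list)


# One labelled example, of the sort once gathered at scale via gig-work
# platforms for datasets such as ImageNet. Values are made up.
record = AnnotationRecord(
    image_path="road_scene_0001.jpg",
    annotator_id="worker-42",
    boxes=[
        BoundingBox(label="pedestrian_crossing", x=120, y=340, width=260, height=80),
        BoundingBox(label="traffic_light", x=610, y=95, width=40, height=110),
    ],
)

print(f"{record.image_path}: {len(record.boxes)} labelled objects")
```

Whatever the exact schema a given platform uses, the principle the article describes is the same: a human attaches machine-readable meaning to otherwise uninterpreted data, and models learn from millions of such records.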
A debate in the industry has been about the treatment of the workers behind AI. Firms are reluctant to share information on wages. But American annotators generally consider $10-20 per hour to be decent pay on online platforms. Those in poor countries often get $4-8 per hour. Many must use monitoring tools that track their computer activity and are penalised for being slow. Scale AI has been hit with several lawsuits over its employment practices. The firm denies wrongdoing and says: 'We plan to defend ourselves vigorously.'

The bigger issue, though, is that basic annotation work is drying up. In part, this was inevitable. If AI was once a toddler who needed a parent to point things out and to help it make sense of the world around it, the technology has grown into an adolescent who needs occasional specialist guidance and advice. AI labs increasingly use pre-labelled data from other AI labs, which use algorithms to apply labels to datasets.

Take the example of self-driving tractors developed by Blue River Technology, a subsidiary of John Deere, an agricultural-equipment giant. Three years ago the group's engineers in America would upload pictures of farmland into the cloud and provide iMerit staff in Hubli, India, with careful instructions on what to label: tractors, buildings, irrigation equipment. Now the developers use pre-labelled data. They still need iMerit staff to check that labelling and to deal with 'edge cases', for example where a dust cloud obscures part of the landscape or a tree throws shade over crops, confusing the model. A process that took months now takes weeks.

From baby steps

The most recent wave of AI models has changed data work more dramatically. Since 2022, when OpenAI first let the public play with its ChatGPT chatbot, there has been a rush of interest in large language models. Data from Pitchbook, a research firm, suggest that global venture-capital funding for AI startups jumped by more than 50% in 2024, to $131.5bn, even as funding for other startups fell. Much of it is going into newer techniques for developing AI, which do not need data annotated in the same way. Iva Gumnishka at Humans in the Loop, a social enterprise, says firms doing low-skilled annotation for older computer-vision and natural-language-processing clients are being 'left behind'.

There is still demand for annotators, but their work has changed. As businesses start to deploy AI, they are building smaller specialised models and looking for highly educated annotators to help. It has become fairly common for adverts for annotation jobs to require a PhD or skills in coding and science. Now that researchers are trying to make AI more multilingual, demand for annotators who speak languages other than English is growing, too. Sushovan Das, a dentist working on medical-AI projects at iMerit, reckons that annotation work will never disappear. 'This world is constantly evolving,' he says. 'So the AI needs to be improved time and again.'

New roles for humans in training AI are emerging. Epoch AI, a research firm, reckons the stock of high-quality text available for training may be exhausted by 2026. Some AI labs are hiring people to write chunks of text and lines of code that models can be trained on. Others are buying synthetic data, created using computer algorithms, and hiring humans to verify it. 'Synthetic data still needs to be good data,' says Wendy Gonzalez, the boss of Sama, which has operations in east Africa.
The other role for workers is in evaluating the output from models and helping to hammer it into shape. That is what got ChatGPT to perform better than previous chatbots. Xiaote Zhu at Scale AI provides an example of the sort of open-ended tasks being done on the firm's Outlier platform, which was launched in 2023 to facilitate the training of AI by experts. Workers are presented with two responses from a chatbot recommending an itinerary for a holiday to the Maldives. They need to select which response they prefer, rate it, explain why the answer is good or bad and then rewrite the response to improve it.

Ms Zhu's example is a fairly anodyne one. Yet human feedback is also crucial to making sure AI is safe and ethical. In a document published after the launch of ChatGPT in 2022, OpenAI said it had hired experts to 'qualitatively probe, adversarially test and generally provide feedback' on its models. At the end of that process the model refused to respond to certain prompts, such as requests to write social-media content aimed at persuading people to join al-Qaeda, a terrorist group.

Flying the nest

If AI developers had their way they would not need this sort of human input at all. Studies suggest that as much as 80% of the time that goes into the development of AI is spent on data work. Naveen Rao at Databricks, an AI firm, says he would like models to teach themselves, just as he would like his own children to do. 'I want to build self-efficacious humans,' he says. 'I want them to have their own curiosity and figure out how to solve problems. I don't want to spoon-feed them every step of the way.'

There is a lot of excitement about unsupervised learning, which involves feeding models unlabelled data, and reinforcement learning, which uses trial and error to improve decision-making. AI firms, including Google DeepMind, have trained machines to win at games like Go and chess by playing millions of contests against themselves and tracking which strategies work, without any human input at all. But that self-taught approach doesn't work outside the realms of maths and science, at least for the moment. Tech nerds everywhere have been blown away by how cheap and efficient DeepSeek's model is. But they are less impressed by DeepSeek's attempt to train AI using feedback generated by computers rather than humans. The model struggled to answer open-ended questions, producing gobbledygook in a mixture of languages.

'The difference is that with Go and chess the desired outcome is crystal clear: win the game,' says Phelim Bradley, co-founder of Prolific, another AI-data firm. 'Large language models are more complex and far-reaching, so humans are going to remain in the loop for a long time.' Mr Bradley, like many techies, reckons that more people will need to get involved in training AI, not fewer.

Diversity in the workforce matters. When ChatGPT was released a few years ago, people noticed that it overused the word 'delve'. The word came to be seen as 'AI-ese', a telltale sign that the text was written by a bot. In fact, annotators in Africa had been hired to train the model, and the word 'delve' is more commonly used in African English than it is in American or British English. In the same way as workers' skills and knowledge are transferred to models, their vocabulary is, too. As it turns out, it takes more than just a village to raise a child.

Clarification: This article has been amended to reflect Scale AI's claim that most of its labour is based in America and Europe.
