
California lawmaker behind SB 1047 reignites push for mandated AI safety reports

California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, which would require the world’s largest AI companies to publish safety and security protocols and to issue reports when safety incidents occur.

If signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. Silicon Valley, however, fought fiercely against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California’s governor then called for a group of AI leaders, including leading Stanford researcher and World Labs co-founder Fei-Fei Li, to form a policy group and set goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing a need for “requirements for industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.

“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike the balance Governor Newsom claimed SB 1047 failed to achieve: ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.


“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, vice president of state affairs for the nonprofit AI safety group Encode, in an interview with WAN. “Having companies explain to the public and the government what measures they’re taking to address these risks feels like an absolute minimum, reasonable step to take.”

The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damages.

In addition, the bill would create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener’s new bill does not make AI model developers liable for harms caused by their AI models. SB 53 was also designed so as not to burden startups and researchers that fine-tune AI models from leading AI developers, or that use open source models.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. If it passes there, the bill will also need to clear several other legislative bodies before reaching Governor Newsom’s desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.


The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to limit the “patchwork” of AI laws that companies would have to navigate. That proposal, however, failed in a 99-1 Senate vote in July.

“Ensuring AI is developed safely should not be controversial. It should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to WAN. “Congress should lead, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”

To this point, lawmakers have not succeeded in getting AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency from AI companies, and has even expressed modest optimism about the recommendations of California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, declined to publish a safety report for its most advanced AI model released to date, Gemini 2.5 Pro, until months after it was made available. OpenAI also declined to publish a safety report for its GPT-4.1 model; a third-party study later emerged suggesting the model may be less aligned than previous AI models.


SB 53 represents a more moderate version of previous AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those boundaries.
