The author of SB 1047 introduces a new AI bill in California

The author of California's SB 1047, the most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.
California Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they believe their company's AI systems could pose a "critical risk" to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the computing resources they need to develop AI that benefits the public.
Wiener's last AI safety bill, California's SB 1047, was one of the most controversial AI legislative efforts of 2024. SB 1047 aimed to prevent very large AI models from causing catastrophic events, such as loss of life or cyberattacks. However, Governor Gavin Newsom vetoed the bill in September.
The debate over Wiener's last bill quickly turned ugly in 2024. Some Silicon Valley leaders said SB 1047 would harm America's competitive edge in the global AI race, and claimed the bill was inspired by unrealistic fears that AI systems could bring about science-fiction-like doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists waged a "propaganda campaign" against his bill, pointing in part to Y Combinator's claim that SB 1047 would send startup founders to jail, a claim that experts argued was misleading.
SB 53 essentially takes the least controversial parts of SB 1047 – such as whistleblower protections and the creation of a CalCompute cluster – and repackages them in a new AI bill.
Notably, Wiener is not shying away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a "critical risk." The bill defines a critical risk as a "foreseeable or material risk that a developer's development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property."
SB 53 limits frontier AI model developers – likely including OpenAI, Anthropic, and xAI, among others – from retaliating against employees who disclose concerning information to California's attorney general, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on certain internal processes the whistleblowers find concerning.
As for CalCompute, SB 53 would establish a group to build out a public cloud computing cluster. The group would consist of representatives from the University of California, as well as other public and private researchers. It would make recommendations on how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.
Of course, it's very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California's legislative bodies before it reaches Governor Newsom's desk. State lawmakers will surely be waiting for Silicon Valley's reaction to SB 53.
However, 2025 may be a tougher year to pass AI safety bills compared to 2024. California passed 18 AI-related bills in 2024, but now it seems the AI doom movement has lost ground.
Vice President JD Vance signaled at the AI Action Summit in Paris that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster created by SB 53 could certainly be seen as advancing AI progress, it's unclear how legislative efforts around existential AI risk will fare in 2025.