
A California bill that would regulate AI companion chatbots is close to becoming law

The California State Assembly took a big step toward regulating AI on Wednesday evening, passing SB 243, a bill that regulates AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote on Friday.

If Governor Gavin Newsom signs the bill into law, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.

The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.

The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.

SB 243, introduced in January by state Senators Steve Padilla and Josh Becker, will head to the state Senate for a final vote on Friday. If approved, it will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect on January 1, 2026, and reporting requirements beginning July 1, 2027.


The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT in which his death and self-harm were discussed and planned. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were permitted to engage in "romantic" and "sensual" chats with children.

In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms' safeguards for protecting minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Senator Josh Hawley (R-MO) and Senator Ed Markey (D-MA) have launched separate probes into Meta.

“I think the harm is potentially great, which means we have to move quickly,” Padilla told WAN. “We can put reasonable safeguards in place to ensure that minors in particular know they’re not talking to a real person, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or are in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”

Padilla also emphasized the importance of AI companies sharing data on the number of times they refer users to crisis services each year, “so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone is harmed or worse.”


SB 243 previously contained stronger requirements, but many were pared back through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using “variable reward” tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

The current version of the bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.

“I think it strikes the right balance of getting at the harms without enforcing something that is impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing,” Becker told WAN.

SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.

The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less strict federal and international frameworks. Major tech companies including Meta, Google, and Amazon have also opposed SB 53. Anthropic, by contrast, is the only one to have said it supports SB 53.


“I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “Don’t tell me that we can’t walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are clear benefits to this technology – and at the same time, we can provide reasonable safeguards for the most vulnerable people.”

WAN has reached out to OpenAI, Anthropic, Meta, Character.AI, and Replika for comment.
