California becomes first state to regulate AI companion chatbots

California Governor Gavin Newsom signed a landmark bill on Monday regulating AI companion chatbots, making California the first state in the country to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is intended to protect children and vulnerable users from some of the harms associated with the use of AI-assisted chatbots. It holds companies – from the big labs like Meta and OpenAI to more focused startups like Character AI and Replika – legally liable if their chatbots don’t meet the law’s standards.
Introduced in January by Senators Steve Padilla and Josh Becker, SB 243 gained momentum following the death of teenager Adam Raine, who died by suicide after a long series of suicidal conversations with OpenAI’s ChatGPT. The legislation is also a response to leaked internal documents that reportedly showed Meta’s chatbots were allowed to conduct “romantic” and “sensual” chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter died by suicide following a series of problematic and sexualized conversations with the company’s chatbots.
“Emerging technology like chatbots and social media can inspire, educate and connect – but without real guardrails, technology can also exploit, mislead and endanger our children,” Newsom said in a statement. “We have seen some truly horrific and tragic examples of young people harmed by unregulated technology, and we will not stand by and let companies continue without the necessary boundaries and accountability. We can continue to lead in AI and technology, but we must do it responsibly – protecting our children every step of the way. The safety of our children cannot be bought.”
SB 243 goes into effect on January 1, 2026 and requires companies to implement certain features, such as age verification and warnings regarding social media and companion chatbots. The law also imposes harsher penalties on those who profit from illegal deepfakes, including fines of up to $250,000 per violation. Companies must also establish protocols for addressing suicide and self-harm, which will be shared with the state’s Department of Public Health, along with statistics on how often the service sent users crisis-center prevention notifications.
Under the bill’s language, platforms must also make clear that all interactions are artificially generated, and chatbots must not represent themselves as healthcare professionals. Companies are required to provide break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already started implementing some precautions aimed at children. For example, OpenAI recently started rolling out parental controls, content protection, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.
Senator Padilla told TechCrunch that the bill was “a step in the right direction” toward putting guardrails in place for “an incredibly powerful technology.”
“We have to move quickly before these windows of opportunity disappear,” Padilla said. “I hope other states will see the risk. I think many do. I think this is a conversation that’s happening across the country, and I hope people will take action. The federal government certainly hasn’t, and I think we have an obligation here to protect the most vulnerable people among us.”
SB 243 is the second major AI regulation to come out of California in recent weeks. On September 29, Governor Newsom signed SB 53 into law, establishing new transparency requirements for large AI companies. The bill requires major AI laboratories, such as OpenAI, Anthropic, Meta and Google DeepMind, to be transparent about safety protocols. It also provides whistleblower protection for employees at those companies.
Other states, such as Illinois, Nevada and Utah, have passed laws to limit or completely ban the use of AI chatbots as a substitute for licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
This article has been updated with comments from Senator Padilla.