Google and Character.AI negotiate first major settlements in teen chatbot death cases

In what could be the tech industry’s first major legal settlement over AI-related harm, Google and the startup Character.AI are negotiating terms with families whose teens died by suicide or harmed themselves after interacting with Character.AI’s chatbot companions. The parties have reached a settlement in principle; now comes the harder work of finalizing the details.
These are among the first settlements in lawsuits accusing AI companies of harming users, a legal frontier that OpenAI and Meta must nervously watch from the wings as they defend themselves against similar lawsuits.
Founded in 2021 by ex-Google engineers who returned to their former employer in a $2.7 billion deal in 2024, Character.AI invites users to chat with AI personas. The most terrifying case involves Sewell Setzer III, who at age 14 had sexualized conversations with a “Daenerys Targaryen” bot before killing himself. His mother, Megan Garcia, has told the Senate that companies “should be legally liable if they knowingly design harmful AI technologies that kill children.”
Another lawsuit describes a 17-year-old whose chatbot encouraged self-harm and suggested that killing his parents was a reasonable response to their attempts to limit his screen time. Character.AI told TechCrunch it banned minors from its platform last October. The settlements are likely to include monetary damages, although no liability was admitted in the filings made available Wednesday.
Character.AI declined to comment and referred TechCrunch to the documents. Google did not respond to a request for comment.
