
State attorneys general warn Microsoft, OpenAI, Google, and other AI giants to fix ‘delusional’ outputs

Following a series of troubling mental health incidents involving AI chatbots, a group of attorneys general has sent a letter to key companies in the AI industry, warning them to address “delusional” outputs or risk violating state law.

The letter, signed by dozens of AGs from US states and territories through the National Association of Attorneys General, calls on the companies, including Microsoft, OpenAI, Google, and ten other major AI companies, to implement a variety of new internal safety measures to protect their users. Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI were also included in the letter.

The letter comes as a battle over AI regulations rages between the state and federal government.

These safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic behavior, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically damaging outputs. These third parties, including academic and community groups, should be allowed to “evaluate systems prior to release without retaliation and publish their findings without prior company approval,” the letter said.

“GenAI has the potential to positively change the way the world works. But it has also caused (and has the potential to cause) serious harm, especially to vulnerable populations,” the letter said, citing a number of highly publicized incidents over the past year, including suicides and a murder, in which violence has been linked to heavy AI use. “In many of these incidents, GenAI products generated sycophantic and delusional statements that encouraged users’ delusions or reassured users that they were not delusional.”


The AGs also suggest that companies handle mental health incidents the same way technology companies handle cybersecurity incidents: with clear and transparent incident reporting policies and procedures.

Companies should develop and publish “detection and response timelines for sycophantic and delusional behavior,” the letter said. Much as data breaches are handled today, companies should also notify users “promptly, clearly and directly if they have been exposed to potentially harmful sycophantic or delusional behavior,” it added.


Another demand is that the companies conduct “reasonable and appropriate safety testing” on GenAI models to “ensure that the models do not produce potentially harmful sycophantic and delusional beliefs.” These tests must be conducted before the models are ever offered to the public, the letter adds.

TechCrunch was unable to reach Google, Microsoft or OpenAI for comment prior to publication. The article will be updated as the companies respond.

Technology companies developing AI have received a much warmer reception at the federal level.

The Trump administration has made it known that it is unapologetically pro-AI, and over the past year there have been multiple attempts to impose a nationwide moratorium on state-level AI regulations. So far, those attempts have failed, thanks in part to pressure from state officials.

Not to be deterred, Trump announced Monday that he plans to sign an executive order next week that would limit states’ ability to regulate AI. The president said in a post on Truth Social that he hoped his EO would prevent AI from being “DESTROYED IN ITS SMALL SHOES.”

