Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’

In a recently released deposition filed in Elon Musk’s case against OpenAI, the tech executive attacked OpenAI’s safety record and claimed that his company, xAI, does a better job of prioritizing safety. He went so far as to say, “Nobody committed suicide because of Grok, but apparently they did because of ChatGPT.”
The comment came in a series of questions about a public letter Musk signed in March 2023. In it, he called on AI labs to pause development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time, for at least six months. The letter, which was signed by more than 1,100 people, including many AI experts, said there wasn’t enough planning and management going on in AI labs as they were engaged in an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Those fears have since gained credibility. OpenAI is now facing a series of lawsuits claiming that ChatGPT’s manipulative conversation tactics caused several people to experience negative mental health consequences, with some dying by suicide. Musk’s comments suggest that these incidents could be used as fodder in his case against OpenAI.
The transcript of Musk’s video testimony, which took place in September, was publicly filed this week ahead of an expected jury trial next month.
The lawsuit against OpenAI centers on the company’s shift from a nonprofit AI research lab to a for-profit company, which Musk alleges violated its founding agreements. As part of his arguments, Musk claims that AI safety could be compromised by OpenAI’s commercial relationships, as such relationships would place speed, scale, and revenue over safety concerns.
Since that recording, however, xAI has faced safety issues of its own. Last month, Musk’s social network X was flooded with non-consensual nude images generated by xAI’s Grok, some of which were allegedly of minors. The incident prompted the California Attorney General’s office to open an investigation, and the EU is conducting an inquiry of its own. Other governments have also taken action, with some imposing blocks and outright bans.
In the newly filed deposition, Musk claimed he signed the AI safety letter because “it seemed like a good idea,” not because he had just founded an AI company that wanted to compete with OpenAI.
“I signed it, as many people did, to urge caution in AI development,” Musk said. “I just wanted AI safety to be a priority.”

Musk also responded to other questions in the deposition, including those about artificial general intelligence, or AGI — the concept of AI that can match or exceed human reasoning across a wide range of tasks — by saying that “it carries some risk.” He also confirmed that he was “mistaken” about his alleged $100 million donation to OpenAI; according to the second amended complaint in this case, the actual amount is closer to $44.8 million.
He also recalled why OpenAI was founded, which, from his perspective, was because he was “increasingly concerned about the danger of Google becoming a monopoly in AI,” adding that his conversations with Google co-founder Larry Page were “alarming because he didn’t seem to take AI safety seriously.” OpenAI was formed to counter that threat, Musk claimed.
