Elon Musk’s only AI expert witness at the OpenAI trial fears an AGI arms race

When will we take AI doomers seriously?

That’s a major subtext of Elon Musk’s attempt to shut down OpenAI’s profitable AI business. His lawyers claim the organization was set up as a charity focused on AI safety and has lost its way in the pursuit of profit. To prove this, they cite old emails and statements from the organization’s founders about the need for a public-facing counterbalance to Google DeepMind.

Today they called their only expert witness to speak directly to the AI technology itself: Stuart Russell, a computer science professor at the University of California, Berkeley, who has studied AI for decades. His job was to provide background on AI and to establish that the technology is dangerous enough to warrant concern.

Russell co-signed an open letter in March 2023 that called for a six-month pause in AI research. In a sign of the contradictions at play, Musk signed the same letter even as he launched xAI, his own for-profit AI lab.

Russell told jurors and Judge Yvonne Gonzalez Rogers that a variety of risks accompany the development of AI, ranging from cybersecurity threats to misalignment issues and the winner-takes-all nature of artificial general intelligence (AGI) development. Ultimately, he said, there is a tension between the pursuit of AGI and safety.

Russell’s larger concerns about the existential threats of unrestricted AI were not raised in open court after objections from OpenAI’s lawyers led the judge to limit Russell’s testimony. But Russell has long been a critic of the arms race dynamic created by frontier labs around the world competing to get to AGI first, and called on governments to regulate the field more tightly.

OpenAI’s attorneys established during cross-examination that Russell had not directly evaluated the organization’s corporate structure or its specific security policies.

But this reporter (along with the judge and jurors) will have to weigh how much value to place on the relationship between corporate greed and AI safety concerns. Virtually all of OpenAI’s founders warned loudly about the risks of AI while also emphasizing its benefits, racing to build it as quickly as possible – and hatching plans for AI-focused for-profit companies that they would control.

From the outside, an obvious problem here is the realization that grew within OpenAI after its founding: the organization simply needed to spend far more on computing to succeed, and that money could only come from for-profit investors. The founding team’s fear that AGI would fall into the hands of a single organization drove them to seek the capital that ultimately tore the team apart, creating the arms race we see today – and bringing us to this lawsuit.

The same dynamic is already playing out at the national level: Senator Bernie Sanders’ push for a law imposing a moratorium on data center construction cites AI fears expressed by Musk, Sam Altman, Geoffrey Hinton and others. Hodan Omaar, who works at the trade group Center for Data Innovation, took issue with Sanders citing their fears while ignoring their hopes. Omaar told TechCrunch that “it is unclear why the public should disregard everything tech billionaires say, except when their words can be recruited to fill gaps in a precarious argument.”

Now both sides of the case are asking the court to do just that: take some of Altman and Musk’s arguments seriously, but disregard the parts that are less helpful to their legal argument.

Correction: This article was updated to correct the name of Stuart Russell, professor of computer science at the University of California, Berkeley.
