
Silicon Valley spooks the AI safety advocates

Silicon Valley leaders, including White House AI & Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they claimed that certain AI safety advocates are not as virtuous as they appear and are acting either in their own interests or on behalf of billionaire puppeteers behind the scenes.

AI safety groups that spoke to TechCrunch say the accusations from Sacks and OpenAI are Silicon Valley’s latest attempt to intimidate critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety law, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor as one of many “misrepresentations” about the bill, but Governor Gavin Newsom ultimately vetoed it.

Whether or not Sacks and OpenAI intended to intimidate critics, their actions have rattled several AI safety advocates. Many nonprofit leaders contacted by TechCrunch over the past week asked to speak on the condition of anonymity to protect their groups from retaliation.

The controversy underlines the growing tension in Silicon Valley between building AI responsibly and developing it into a giant consumer product – a theme that my colleagues Kirsten Korosec, Anthony Ha, and I highlight on this week’s Equity podcast. We also dive into a new AI safety law passed in California to regulate chatbots, and OpenAI’s approach to erotica in ChatGPT.

On Tuesday, Sacks wrote a post on X claiming that Anthropic – which has raised concerns about AI’s potential to contribute to unemployment, cyberattacks, and catastrophic harm to society – is simply fear-mongering to pass laws that will benefit itself and drown smaller startups in paperwork. Anthropic was the only major AI lab to endorse California Senate Bill 53 (SB 53), a bill imposing safety reporting requirements on major AI companies, which was signed into law last month.


Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears regarding AI. Clark delivered the essay weeks earlier as a speech at the Curve AI safety conference in Berkeley. From where I sat in the audience, it certainly felt like a sincere account of a technologist’s concerns about his products, but Sacks didn’t see it that way.

Sacks said Anthropic is running a “sophisticated regulatory capture strategy,” though it’s worth noting that a truly sophisticated strategy probably wouldn’t involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic “has consistently positioned itself as an enemy of the Trump administration.”


Also this week, OpenAI Chief Strategy Officer Jason Kwon wrote a post on X explaining why the company sent subpoenas to AI safety nonprofits such as Encode, a nonprofit that advocates for responsible AI policies. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI — over concerns that the ChatGPT maker has deviated from its nonprofit mission — OpenAI found it suspicious that several organizations had also expressed opposition to its restructuring. Encode filed an amicus brief in support of Musk’s lawsuit, and other nonprofits publicly spoke out against OpenAI’s restructuring.

“This raised questions about transparency about who funded them and whether there was coordination,” Kwon said.


NBC News reported this week that OpenAI issued broad subpoenas to Encode and six other nonprofits that criticized the company, asking for their communications related to two of OpenAI’s biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support for SB 53.

A prominent AI safety leader told TechCrunch that there is a growing rift between OpenAI’s government affairs team and its research organization. While OpenAI’s safety researchers regularly publish reports revealing the risks of AI systems, OpenAI’s policy unit lobbied against SB 53, saying it would prefer uniform rules at the federal level.

OpenAI’s head of mission alignment, Joshua Achiam, addressed his company’s decision to send subpoenas to nonprofits in a post on X this week.

“In what is potentially a risk to my entire career, I will say, this doesn’t seem great,” Achiam said.

Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI appears convinced its critics are part of a Musk-led conspiracy. He argues that this is not the case, and that much of the AI safety community is in fact quite critical of xAI’s safety practices, or lack thereof.

“On OpenAI’s part, this is intended to silence critics, intimidate them, and deter other nonprofits from doing the same,” Steinhauser said. “As for Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”

Sriram Krishnan, White House senior policy advisor for AI and former general partner at a16z, joined the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to “people in the real world who use, sell and adopt AI at home and in organizations.”


A recent Pew survey found that about half of Americans are more worried than excited about AI, but it is unclear what exactly worries them. Another recent survey went into more detail and found that American voters care more about job losses and deepfakes than about catastrophic risks from AI, which are largely the focus of the AI safety movement.

Addressing these safety concerns could come at the expense of the AI industry’s rapid growth — a tradeoff that worries many in Silicon Valley. With AI investments underpinning much of the U.S. economy, fears of overregulation are understandable.

But after years of unregulated progress in AI, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley’s efforts to fight back against safety-focused groups may be a sign that those efforts are working.
