Silicon Valley leaders, including White House AI and Crypto Czar David Sacks and OpenAI Chief Strategy Officer Jason Kwon, stirred controversy online this week with their comments about groups promoting AI safety. In separate instances, they alleged that certain AI safety advocates are not as virtuous as they appear, and are acting in their own interests or those of billionaire puppet masters behind the scenes.
AI safety groups who spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that a California AI safety bill, SB 1047, would send startup founders to jail. The Brookings Institution labeled the rumor one of many "misrepresentations" about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently spooked several AI safety advocates. Many nonprofit leaders TechCrunch contacted last week asked to speak on condition of anonymity to spare their groups from retaliation.
The controversy underscores the growing tension in Silicon Valley between building AI responsibly and building it to be a mass consumer product – a theme my colleagues Kirsten Korosec, Anthony Ha, and I explore on this week's Equity podcast. We also dig into a new AI safety law passed in California to regulate chatbots, and OpenAI's approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X claiming that Anthropic – which has raised concerns about AI's potential to contribute to unemployment, cyberattacks, and catastrophic harm to society – is simply fearmongering to get laws passed that would benefit itself and bury smaller startups in paperwork. Anthropic was the only major AI lab to endorse California's Senate Bill 53 (SB 53), a bill establishing safety reporting requirements for large AI companies, which was signed into law last month.
Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears surrounding AI. Clark delivered the essay as a keynote address at the Curve AI safety conference in Berkeley weeks earlier. From where I sat in the audience, it certainly felt like a genuine account of a technologist's reservations about his products, but Sacks didn't see it that way.
Sacks said Anthropic is running a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has "consistently positioned itself as an enemy of the Trump administration."
Also this week, OpenAI Chief Strategy Officer Jason Kwon wrote a post on X explaining why the company was sending subpoenas to AI safety nonprofits such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Kwon said that after Elon Musk sued OpenAI – over concerns that the ChatGPT maker has strayed from its nonprofit mission – OpenAI found it suspicious that several organizations also raised objections to its restructuring. Encode filed an amicus brief in support of Musk's lawsuit, and other nonprofits have spoken out publicly against OpenAI's restructuring.
"This raised transparency questions about who was funding them and whether there was any coordination," Kwon said.
NBC News reported this week that OpenAI sent broad subpoenas to Encode and six other nonprofits that criticized the company, requesting their communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications regarding its support of SB 53.
A prominent AI safety leader told TechCrunch that there's a growing divide between OpenAI's government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports highlighting the risks of AI systems, OpenAI's policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.
OpenAI's head of mission alignment, Joshua Achiam, spoke out about his company sending subpoenas to nonprofits in a post on X this week.
"At what is possibly a risk to my whole career I will say: this doesn't seem great," Achiam said.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a Musk-led conspiracy. However, he argues this isn't the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is meant to silence critics, to intimidate them, and to deter other nonprofits from doing the same," Steinhauser said. "For Sacks, I think he's concerned that [the AI safety] movement is growing and people want to hold these companies accountable."
Sriram Krishnan, the White House's senior policy advisor for AI and a former general partner at a16z, chimed in this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to "people in the real world who use, sell, and adopt AI in their homes and organizations."
A recent Pew study found that about half of Americans are more worried than excited about AI, but it's unclear what exactly concerns them. Another recent study went into more detail and found that American voters care more about job losses and deepfakes than the catastrophic risks posed by AI, which the AI safety movement is largely focused on.
Addressing these safety concerns could come at the expense of the AI industry's rapid growth – a trade-off that worries many in Silicon Valley. With AI investment propping up much of the American economy, the fear of overregulation is understandable.
But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to fight back against safety-focused groups may be a sign that it's working.
