Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test
![](https://skyscrapers.today/wp-content/uploads/2025/01/53202070940_ea57312b1a_k-780x470.jpg)
Anthropic CEO Dario Amodei is concerned about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his worries could be more serious than the typical ones raised about DeepSeek sending user data back to China.
In an interview on Jordan Schneider's ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.
DeepSeek's performance was "the worst of any model we had ever tested," Amodei claimed. "It had absolutely no blocks against generating this information."
Amodei said this was part of the evaluations Anthropic runs on various AI models to assess their potential national security risks. His team looks at whether models can generate bioweapons-related information that cannot easily be found on Google or in textbooks. Anthropic positions itself as the AI foundation model provider that takes safety seriously.
Amodei said he did not think DeepSeek's models are "literally dangerous" today in providing rare and dangerous information, but that they might be in the near future. Although he praised DeepSeek's team as "talented engineers," he advised the company to "take these AI safety considerations seriously."
Amodei has also supported strong export controls on chips to China, citing concerns that they could give China's military an edge.
Amodei did not clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give further technical details about these tests. Anthropic did not immediately respond to a TechCrunch request for comment. Neither did DeepSeek.
DeepSeek's rise has raised safety concerns elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block harmful prompts in its safety tests, achieving a 100% jailbreak success rate.
Cisco did not mention bioweapons, but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It is worth mentioning, however, that Meta's Llama-3.1-405B and OpenAI's GPT-4o also had high failure rates of 96% and 86%, respectively.
It remains to be seen whether such safety problems will make a serious dent in DeepSeek's rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms – ironically, given that Amazon is Anthropic's biggest investor.
On the other hand, a growing list of countries, companies, and especially government organizations, such as the U.S. Navy and the Pentagon, have started banning DeepSeek.
Time will tell whether these efforts stick, or whether DeepSeek's worldwide rise will continue. Either way, Amodei says he regards DeepSeek as a new competitor on the level of the top U.S. AI companies.
"The new fact here is that there's a new competitor," he said on ChinaTalk. "Among the large companies that can train AI – Anthropic, OpenAI, Google, perhaps Meta and xAI – DeepSeek may now be added to that category."