
After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too

After Sam Altman mocked Anthropic for limiting access to its cybersecurity tool Mythos by releasing it only to select users, he confirmed that OpenAI would do the same with its rival tool, Cyber.

In a post on X on Thursday, Altman announced that GPT-5.5 Cyber will roll out “to critical cyber defenders” in the coming days. OpenAI has an application on its website where prospective users submit their credentials and planned usage to request access.

According to the application, this version of Cyber can perform tasks such as penetration testing, vulnerability identification (and exploitation), and reverse engineering of malware. It is intended as a toolkit to help a company find security holes and test its defenses. The fear is that the same kit could be misused by bad actors.

When Anthropic similarly restricted access to Mythos, Altman dismissed the tactic as fear-based marketing. Some critics agreed, saying Anthropic’s rhetoric was exaggerated. Ironically, an unauthorized group reportedly managed to gain access to Mythos anyway.

OpenAI says it’s working to make Cyber more widely available by consulting with the US government and identifying more users with legitimate cybersecurity credentials.

A spokesperson tells TechCrunch that the company’s system for verifying people with legitimate cybersecurity credentials, which it calls Trusted Access for Cyber (TAC), has scaled “to thousands of verified defenders and hundreds of teams responsible for protecting critical software.” Those users can apply the latest model, GPT-5.5, to “cyber security” tasks with less “friction” from its safety systems.

The TAC program is tiered, the spokesperson said: “Critical defenders with legitimate defensive use cases can access special, more cyber-tolerant models such as GPT 5.4-Cyber and the upcoming GPT 5.5-Cyber through the program.”


Note: This story has been updated with a statement from OpenAI.

