OpenAI tightens the screws on security to keep away prying eyes

OpenAI has reportedly overhauled its security operations to protect against corporate espionage. According to the Financial Times, the company accelerated an existing security clampdown after Chinese startup DeepSeek released a competing model in January, with OpenAI alleging that DeepSeek improperly copied its models using "distillation" techniques.
The beefed-up security includes "information tenting" policies that limit staff access to sensitive algorithms and new products. For example, during the development of OpenAI's o1 model, only verified team members who had been read into the project could discuss it in shared office spaces, according to the FT.
And there's more. OpenAI now isolates proprietary technology in offline computer systems, implements biometric access controls for office areas (it scans employees' fingerprints), and maintains a "deny-by-default" internet policy requiring explicit approval for external connections, according to the report, which adds that the company has also expanded its cybersecurity staff.
The changes are said to reflect broader concerns about foreign adversaries attempting to steal OpenAI's intellectual property, though given the ongoing poaching wars among American AI companies and increasingly frequent leaks of CEO Sam Altman's comments, OpenAI may also be trying to address internal security problems.
We've contacted OpenAI for comment.
