According to the Financial Times, the company sped up these changes after the Chinese startup DeepSeek released a competing model in January, which OpenAI claims was built by copying its models using “distillation” techniques.
The tighter security includes “information tenting” rules that restrict which staff can access sensitive algorithms and new products.
For example, during the development of OpenAI’s o1 model, only approved team members who had been read into the project could discuss it in shared office spaces, the FT reported.
The report added that OpenAI now keeps important technology on offline computers, uses fingerprint scans to control office access, and has a “deny-by-default” internet policy that requires permission for outside connections.
It also said the company has increased security at data centres and hired more cybersecurity staff.