Irregular raises $80 million to secure frontier AI models

On Wednesday, AI security company Irregular announced $80 million in new funding in a round led by Sequoia Capital and Redpoint Ventures, with participation from Wiz CEO Assaf Rappaport. A source close to the deal said the round valued Irregular at $450 million.

“Our view is that a lot of economic activity will soon come from human-on-AI and AI-on-AI interaction,” co-founder Dan Lahav told WAN, “and that will break the security stack at multiple points.”

Previously known as Pattern Labs, Irregular is already a significant player in AI evaluations. The company's work is cited in security evaluations for Claude 3.7 Sonnet as well as OpenAI's o3 and o4-mini models. More broadly, the company's framework for scoring a model's vulnerability detection (dubbed SOLVE) is widely used across the industry.

Although Irregular has done significant work on the existing risks of models, the company raised the funding with an eye toward something even more ambitious: spotting emergent risks and behaviors before they surface in the wild. The company has built an elaborate system of simulated environments that allows for intensive testing of a model before it is released.

“We have complex network simulations where we take on the role of both attacker and defender,” says co-founder Omer Nevo. “So when a new model comes out, we can see where the defenses hold up and where they don't.”

Security has become a point of intense focus for the AI industry as the potential risks posed by frontier models have grown. OpenAI overhauled its internal security measures this summer, partly over concerns about potential corporate espionage.

At the same time, AI models are increasingly adept at finding software vulnerabilities, a capability with serious implications for both attackers and defenders.

For Irregular's founders, this is just the first of many security headaches that the growing capabilities of large language models will cause.

“If the goal of the frontier labs is to create ever more advanced and capable models, our goal is to secure those models,” says Lahav. “But it's a moving target, so inherently there's much more work to do in the future.”
