UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic

The British government wants to make a hard pivot into stimulating its economy and industry with AI, and as part of that, it is overhauling an institution it founded just over a year ago for a very different purpose. Today, the Department for Science, Innovation and Technology announced that it would rename the AI Safety Institute the “AI Security Institute.” (Same initials, same URL.) The body will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically “strengthening protections against the risks AI poses to national security and crime.”

Alongside this, the government also announced a new partnership with Anthropic. No firm services have been announced, but the MOU indicates the two will “explore” using Anthropic’s AI assistant Claude in public services; Anthropic will also aim to contribute to work in scientific research and economic modeling. And at the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.

“AI has the potential to transform how governments serve their citizens,” said Anthropic co-founder and CEO Dario Amodei in a statement. “We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents.”

Anthropic is the only company being announced today, coinciding with a busy week of AI activity in Munich and Paris, but it is not the only one working with the government. A series of new tools unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the Secretary of State for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)

The government’s switch from AI Safety to AI Security, for an institute it launched just over a year ago with considerable fanfare, should not come as too much of a surprise.

When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words “safety,” “harm,” “existential,” and “threat” did not appear in the document at all.

That was not an oversight. The government’s plan is to kickstart investment in a more modernized economy, using technology and specifically AI to do so. It wants to work more closely with Big Tech, and it also wants to build its own homegrown big techs.

To that end, the main messages it has been promoting are development, AI, and more development. Civil servants will get their own AI assistant called “Humphrey,” and they are being encouraged to share data and use AI in other areas to speed up how they work. Consumers will get digital wallets for their government documents, and chatbots.

So have AI safety problems been solved? Not exactly, but the message seems to be that they cannot be weighed at the expense of progress.

The government claimed that, despite the name change, the song will remain the same.

“The changes I’m announcing today represent the logical next step in how we approach responsible AI development, helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle said in a statement. “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens, and those of our allies, are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

“The institute’s focus has been on security from the start, and we have built a team of scientists focused on evaluating serious risks to the public,” added Ian Hogarth, who remains the institute’s chair. “Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

Further afield, priorities do appear to have shifted around the importance of “AI safety.” The biggest risk the AI Safety Institute in the US is currently contemplating is that it will be dismantled. US Vice President JD Vance telegraphed as much earlier this week during his speech in Paris.
