UK’s AI Safety Institute drops ‘safety’

Will now be known as ‘AI Security Institute’

Image: 'Rest easy, citizen' – the new AISI will look to crime and security

The Labour government continues its plan to make AI the engine of the UK economy, now unburdened by those pesky ethics.

The government has been open about its commitment to AI this year, beginning in January with a multi-billion pound investment and continuing this month with a plan to reinvigorate the UK's deindustrialised heartlands.

This week, however, the Labour government declined to sign a pact on open, inclusive and ethical AI at the AI Action Summit in Paris, saying it lacks “practical clarity.”

That seems to hint that, like the USA, the UK will rely on unfettered innovation, rather than regulation, to power its growth: a conclusion backed up by the latest announcement from the Department for Science, Innovation and Technology (DSIT).

The Department says the AI Safety Institute, which is barely a year old, will now be known as the AI Security Institute.

That means the AISI will drop its focus on bias, free speech and the risks of AI to focus instead on cybersecurity and crime.

Technology secretary Peter Kyle, speaking ahead of the Munich Security Conference, said the changes “represent the next logical step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our plan for change.”

Kyle said the Institute’s work “won’t change” – crime and security concerns already formed part of the AISI’s remit – but DSIT says the rebranded agency “will not focus on bias or freedom of speech.”

Government “exploring” opportunities with Anthropic

At the same time, Mr Kyle announced a new partnership between the UK and Anthropic, developer of the Claude chatbot.

Anthropic CEO Dario Amodei said the company is looking forward to seeing how Claude can help government agencies “enhance public services,” although there were no firm announcements – this was more of a memorandum of understanding.

Kyle has previously talked about the government’s plan to work with multiple foundational AI companies, of which Anthropic is one. The government began a chatbot trial with OpenAI last year.

Computing says:

The message seems to be that AI safety issues cannot be considered at the expense of progress, but Labour is in a difficult spot.

On one hand, we have an economy that is barely growing, with even 0.1% celebrated as a win. Clearly, something needs to change.

On the other hand, the UK has about 1.6 million people out of work, and AI – which, as well as impacting creative roles, is being abused in recruitment – is not making the job hunt any easier.

There is a very fine line to walk here.