UK government taking fresh look at AI regulation

One year after vowing to follow 'light-touch' approach

Sources say the government is considering a more heavy-handed approach to regulating AI.

The UK government is planning new rules to regulate artificial intelligence, just a few months after Prime Minister Rishi Sunak counselled against being "alarmist."

Late last year, Tory grandee Viscount Camrose said there would be no new AI laws in the "short term," for fear of stifling innovation.

However, sources recently told the Financial Times that the Department for Science, Innovation and Technology is "developing its thinking" on AI legislation, and is exploring regulation of "the most powerful AI models."

That is likely a reference to large language models (LLMs), the technology underpinning modern generative AI systems such as ChatGPT and Gemini.

Two people briefed on the plans told the FT that nothing would be introduced imminently, but any new laws would likely require companies developing sophisticated models to share their algorithms with the government and provide proof of safety testing.

Sarah Cardell, chief executive of the Competition and Markets Authority (CMA), said last week that she had "real concerns" that a small number of tech companies could "shape markets in their own interest" using AI models.

The CMA pointed to Google, Apple, Microsoft, Meta, Amazon and Nvidia as the giants sitting at the centre of a web of partnerships, giving them influence well beyond their own operations.

Raises questions about UK's light-touch approach

Last year, the UK committed to an agile, light-touch approach to regulating AI, delegating responsibility to existing regulators rather than setting up a single overseer.

In October, Sunak pointedly asked, "How can we write laws that make sense for something that we don't yet fully understand?"

That stands in contrast to the EU's approach. The bloc approved the world's first comprehensive AI legislation last month, in the form of the AI Act, which takes a risk-based approach to regulation. Low-risk systems will face minimal oversight, while high-risk applications, such as those used in defence and law enforcement, will be subject to stricter obligations.

However, even some EU countries disagree with the plans, favouring a so-called pro-business approach like the UK's.

We say so-called because even the UK's light-touch model has drawn criticism from some companies. Nick Clegg, former deputy prime minister and now president of global affairs at Meta, said:

"At the moment, the regulators are working on a rather crude rule of thumb that, if future models are of a particular size or surpass a particular size . . . that therefore there should be greater disclosure.

"I don't think anyone thinks that over time that is the most rational way of going about things, because in the future you will have smaller fine-tuned models aimed at particular purposes that could arguably be more worthy of greater scrutiny than very large, hulking, great big all-purpose models that might be less worrisome."