UK aims for 'agile' AI regulation

But rules out a single regulator

The UK government has published a new white paper on AI regulation, with the aim of driving "responsible" innovation and maintaining public trust in the technology.

However, it has ruled out establishing a new central regulator for the technology, instead preferring to split responsibility among existing bodies.

The white paper, 'A pro-innovation approach to AI regulation', sets out the business case for artificial intelligence. It notes that the UK's AI industry is already well developed, employing more than 50,000 people and contributing £3.7 billion to the economy last year.

However, there are also questions around privacy, safety and fairness. Concerns about AI bias are widespread.

The government says the proposals in the AI Regulation White Paper "will help create the right environment for artificial intelligence to flourish safely in the UK."

It says a "patchwork of legal regimes" is holding firms back from using AI to its full potential, causing confusion and creating both financial and administrative burdens.

Instead of a "heavy-handed" approach, the government says it will pursue light-touch regulation. That includes handing responsibility to existing regulators rather than establishing a new body.

Those existing regulators include the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority, each of which will be asked to develop an approach suited to the way its sector uses artificial intelligence.

These regulators are advised to follow five principles, set out in the white paper, to help companies use AI safely:

- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress

Science, Innovation and Technology Secretary Michelle Donelan said, "The pace of AI development is staggering, so we need to have rules to make sure it is developed safely. Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow."

Business leaders also welcomed the news.

Grazia Vittadini, CTO at Rolls-Royce, said, "Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers."

Meanwhile Sridhar Iyengar, MD of Zoho Europe, said, "It is fantastic to see the government supporting the use of AI and understanding the potential it can bring to UK businesses and the economy.

"Part of increased AI adoption requires the demystification of its use cases and education on how businesses and staff can utilise this extremely useful business tool would help to achieve this."

However, not everyone is convinced.

Jacob Gatley, a solicitor at law firm BDB Pitmans, said, "While the UK Government seeks to foster a trailblazing AI sector, it is difficult to look past the fact that there is a fundamental regulatory lacuna: specifically, how UK regulators will monitor AI development, how the existing statutory framework applies to data stored and utilised by AI programmes, and whether regulatory bodies such as the HSE and ICO require legislative and financial support so they are properly equipped to guide and police AI development.

"There is also the concern that the UK could in fact be an anomaly rather than a market leader, if the USA, China and the EU are already putting in place AI specific laws, and we are not discussing how people's personal data can be protected on an international level."

Dr Andrew Rogoyski, from the University of Surrey's Institute for People-Centred AI, said the pro-innovation approach is "laudable," but disagreed with the decision to split regulation between existing regulators.

"We need a central regulator for AI technology; partially because the individual regulators don't currently have the individual skills, but mainly because AI regulation needs to be joined up across sectors, especially since many AI providers operate across different sectors and don't want to find themselves operating the same technology under different regimes in different sectors.

"The pace and scale of change in AI development is extraordinary, and everyone is struggling to keep up. I have real concerns that whatever is put forward will be made irrelevant within weeks or months."

Computing says:

The commitment to light-touch regulation is very much in keeping with the Conservatives' core philosophy, but may be the wrong approach for this specific sector.

It also makes the UK something of an outlier on the world stage.

Our closest neighbour, the European Union, has proposed perhaps the world's broadest AI regulation in the Artificial Intelligence Act, which regulates uses of AI in proportion to their risk. For example, a low-risk use such as an email spam filter might only require companies to state that they're using AI, whereas medical diagnosis would be more heavily regulated - and some uses, like social scoring by governments, would be banned altogether.

China also requires that companies tell customers when they are using AI, and the USA's proposed Algorithmic Accountability Act would mandate that companies assess the impacts of AI.

The concern in the UK under the new regulatory framework must be the lack of detailed, unified regulation. While the framework might be more agile, it will also add to the confusion for companies that fall under the auspices of more than one regulator.