Government pledges more than £100m of AI funding
But the topline figures obscure a fragmented regulatory approach
The government wants the UK to provide an agile, adaptive environment for AI developers. Is the existing regulatory framework up to it?
Following on from yesterday's coverage in Computing of Labour Party proposals to manage and regulate AI technology development, the government yesterday set out its approach in response to the consultation it published last March.
Last year's white paper focused on an agile, pro-innovation approach to AI regulation. The government made clear that it wanted to regulate with a light touch. A crucial aspect of this light-touch regulation was giving responsibility to existing, industry-specific regulators such as the CMA, the Health and Safety Executive and the increasingly busy Ofcom. There are, in fact, 90 separate regulators in the UK - something that is also under review.
Headlines laud more than £100 million of government AI funding, but the way the figures break down gives rise to some concerns. The vast majority of that sum - £90m - will be allocated to establishing AI research hubs nationwide, which will focus on mathematics and computational research, and on science, engineering and real-world data.
Approximately £19 million will be directed towards 21 projects dedicated to developing safe and trustworthy AI tools. These will be delivered by more industry-specific groups and will look at areas like the responsible use of AI in policing and in the creative industries, which are already struggling to work through issues such as LLMs being trained on copyrighted material.
£10 million will go towards upskilling the 90 regulatory bodies. It sounds like a lot, but divide £10 million by 90 and you arrive at around £111,000 each. That's unlikely to go very far in a field which is fiendishly complex in every aspect and has profound implications for the future of humanity.
Regulators have until the end of April to publish their plans for responding to AI risks and opportunities.
Is the regulatory framework agile or just fragmented?
Posting on LinkedIn, AI governance and ethics specialist Sue Turner OBE cast some doubt on whether the £10 million pledged would be enough for regulators to meet the demands likely to be placed on them.
She also said:
"The government response doesn't provide a detailed strategy on how it will ensure the framework remains adaptable to future technological developments, whilst balancing innovation and risk mitigation.
"I'm all for a pro-innovation approach, but the absence of detailed regulation doesn't let the Government off the hook - it just provides a vacuum that could be filled by bad practice."
In her ministerial foreword to the paper, Secretary of State for Science, Innovation and Technology Michelle Donelan reiterated the government's commitment to leverage the existing regulatory framework rather than establish a central overarching authority, arguing that this more agile approach will benefit the UK. She wrote:
"By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely."
In an interview with Channel 4 News yesterday evening, the Minister for AI and Intellectual Property Viscount Camrose confirmed that the regulators would have no new powers initially, and that the government is "laying the foundations so that as and when we need to regulate or create new powers in a targeted way we can do so."
Some AI companies greeted this approach positively.
Dr. Maya Dillon, Head of AI at Cambridge Consultants, said:
"As Head of AI at Cambridge Consultants, I would like to express my optimism regarding the government's recent announcement on its response to the AI Regulation White Paper consultation.
"This comprehensive investment strategy, alongside the existing £100 million invested in the world's first AI Safety Institute, demonstrates a robust and forward-thinking approach. It will advance the UK's position in AI research and development and ensure that this progress is balanced with careful consideration of the ethical, societal, and regulatory implications of AI.
"However, moving forward with this ambitious agenda, we must remain vigilant and adaptable. The development and deployment of AI technologies must be continuously monitored and guided by a framework that prioritises ethical considerations, public trust, and the protection of individual rights. Collaboration across sectors, disciplines, and borders will be vital to achieving these goals."
Others, such as Arun Kumar, Regional Director of IT solutions provider ManageEngine, were optimistic overall but sounded a note of concern that failing to get ahead in regulating AI posed risks further down the line. Kumar said:
"Ultimately, AI will benefit society, but only a close alliance between technology, government, and industry regulators will enable the safe and responsible use of AI. Today's investment is a positive step towards building a regulatory framework for AI, facilitating a safer cyber environment for all.
"It's clear that tethering AI will not be easy. A failure to identify and anticipate AI's next wave of innovation, risks dangerous outcomes. We need a dual defence, with regulators and businesses joining forces to build the legislation and security practices necessary to keep pace with the level of attack. A sector-driven approach will encourage collaboration between the government bodies, industry regulators and companies, helping organisations to stay one step ahead of the threats."
Computing says:
It is clear that the present government thinks that the UK has a window to attract AI developers here to build their models, and it considers a minimally regulated environment the best way to do that.
Other countries and groups of countries - the US, China and the EU - are setting legislative and regulatory standards and benchmarks, and the commercial reality of selling into those markets suggests that AI developers will choose to comply with those standards, no matter where they choose to develop their technology.
The UK's approach might be moderately successful in the short term, but it's hard to envision such a fragmented regulatory regime remaining sufficiently adaptive in the long term, or addressing the depth and breadth of risk that generative AI could pose to businesses and individuals.