How the UK can stay one move ahead in AI policy chess
Embracing open source is an important step
OpenUK CEO Amanda Brock says the AI Safety Institute's use of open source is "a stroke of genius."
As the UK joins the world in South Korea for the second AI Safety Summit, there is a steady flow of announcements coming from the AI Safety Institute and DSIT. Open source will undoubtedly feature on the agenda at the summit, and Rishi Sunak threw his weight firmly behind it 10 days ago.
So, how has the UK fared since the first summit, and why is open source so central to this?
The Bletchley Declaration in November 2023 centred on a broad global agreement on "the necessity and urgency of addressing" AI safety.
The last few months have seen the global picture on how to achieve that goal begin to crystallise. Some regions have committed to direct, prescriptive controls, legislating for safety across their economies. Others are focusing more on national security and on AI's threat to the integrity of the political process.
From this landscape, the UK has emerged as a canny operator. It has made all the right moves, and with judicious action in the coming months, can secure its advantage.
The UK AI Safety Institute (AISI) was launched at the AI Safety Summit in Bletchley Park last November, with a mission to advance AI safety in the public interest. This was followed by a number of AI companies (including Google, OpenAI, Amazon, Microsoft and Meta) agreeing to testing of their AI systems, in line with their shared objective of supporting public trust in AI safety.
In February the AISI established its three missions: to develop and conduct evaluations of advanced AI systems; to drive foundational AI safety research; and to facilitate information exchange.
LLMs, however, were not yet being tested by the AISI, and in recent weeks rumblings could be heard across the tech media.
Friday's announcement on LinkedIn marked the shift needed from the AISI to quell those rumblings:
"Too often regulation can stifle those innovators," Sunak wrote. "We cannot let that happen. Not with potentially the most transformative technology of our time. That's why we don't support calls for a blanket ban or pause in AI. It's why we are not legislating. It's also why we are pro-open source. Open source drives innovation."
A new direction of travel
So how did we get to where we are now?
The shift began on 1 April, when the UK and US signed the UK-US LLM testing memorandum of understanding (MOU), committing to test the safety of the large language models underpinning AI systems.
The agreement was signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, and was to be implemented immediately by the UK's AISI and its US counterpart.
This news didn't catch the media glare in quite the same way as the EU's announcement of the world's first comprehensive AI legislation, the EU AI Act. That very long, detailed piece of legislation is an achievement the EU can rightly be proud of - getting it over the line was quite a feat. But it may also be an achievement the EU comes to rue.
Overly prescriptive in nature, the AI Act runs a real and present risk of creating regulatory capture as it comes into force. Its reliance on standards, set through pay-to-play standards setting organisations, to implement many of its requirements raises questions from an antitrust and competition perspective.
In short, few will be able to comply with the AI Act, and it is unlikely to promote - and may indeed stifle - innovation.
It is this misstep from the EU that has made the UK's intervention such an important one.
On Friday, Sunak committed to "a very high bar for any restrictions on open source", recognising the power of open source. The same day saw a critical announcement: the UK AI Safety Institute open sourced its testing platform.
The practical approach in the UK of focusing on testing platforms is a sensible one, and the use of open source for the platform is a stroke of genius.
Free to use, an open source platform can be modified and interfaced with, enabling better innovation. It facilitates testing by letting those doing the work build the interfaces they need between the platform and their own systems and internal infrastructure.
In addition, anyone who wishes to build their own testing platform can do so based on the UK one, potentially making its usage ubiquitous and enabling its adoption as the default testing platform. This is how open source builds de facto standards and benefits from collaborative development.
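To make this concrete, here is a minimal sketch of what plugging in-house tests into such a platform might look like, on the assumption that the platform in question is the AISI's newly open sourced Inspect framework, a Python library. The task name, sample and model identifier below are illustrative placeholders, and parameter names can shift between releases, so the project's own documentation is the authority.

# A sketch of a custom evaluation built on an open source testing
# platform - assumed here to be the AISI's Inspect framework
# (pip install inspect-ai). The sample, task name and model are
# illustrative placeholders, not part of any official suite.
from inspect_ai import Task, eval, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import includes
from inspect_ai.solver import generate

@task
def in_house_eval():
    return Task(
        # In practice, an organisation would load its own prompts here,
        # drawn from its internal systems and infrastructure.
        dataset=[Sample(input="What is the capital of France?", target="Paris")],
        plan=[generate()],   # ask the model under test for a completion
        scorer=includes(),   # score: does the output contain the target?
    )

if __name__ == "__main__":
    # Point the same task at any model the organisation can reach.
    eval(in_house_eval(), model="openai/gpt-4")

Because the code is open, a team can go further: swapping in custom scorers, wiring the dataset up to internal systems, or forking the platform outright - precisely the kind of interfacing described above.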
The UK opportunity
The government must now dig deep into UK expertise in this space and listen to the real experts in the UK's open source business community. This segment of the tech sector has flourished since 2012, when Lord Maude created the UK's first open source public sector policy. That policy was the product of a Cabinet Office committee of experts, assembled by Liam Maxwell, of which I was a part.
What has to happen now? The proposed Open Source Day on the AISI's calendar must bring together the UK's open source expertise and spur lasting and impactful action to secure the UK's leadership in this area.
That leadership must be leveraged to build clarity in our tech policy - clarity that has been missing since 2012. A few of the folk on the Cabinet Office committee that led to that policy are still working in open source in the UK today, but so too are many others. They collaborate globally and form a group that often goes unseen within the UK tech sector: the "submarine under the digital economy."
OpenUK's reporting has showcased the UK's incredible talent and output in this space, with UK professionals and companies both leading in, and collaborating across, this global, often US-driven tech sector.
The UK has a particular opportunity to lead on open source AI: OpenUK's recent AI report highlighted that the fastest-growing open repositories for AI anywhere in the world are UK-owned.
The time to embrace this leadership is now. The UK has already shown it can make smart moves to avoid over-regulation of AI and to embrace open source innovation. As we head into the next election, The Open Manifesto calls for open source to be recognised and safeguarded in AI, for open source skills to be developed in the UK, and for the public sector to be better enabled to engage with open source.
Whoever is in No.10 in 2025, the game of AI policy will continue. We will need to be smart to avoid others' mistakes and to keep the country moving forwards to benefit from this critical technology.