UK cyber intelligence leads international standard on safe AI software development
Agreement represents a 'truly global effort' to ensure security by design
Britain's GCHQ cyber intelligence agency has led an international effort, in collaboration with the US, to agree standards for building AI systems that are safe from hackers.
Of the 21 countries that helped formulate the AI security guidelines the UK published yesterday, 18 signed them. Signatories include the familiar roster of G7 and like-minded nations: the US and Canada, Australia and New Zealand, eight European countries including Germany, France, Norway and Poland, and, in Asia, Japan, Singapore and South Korea. Nigeria was the sole African signatory and Chile the only one from South America.
Notable absences among leading tech nations included Ireland, Sweden, Spain and the Netherlands in Europe, as well as South Africa, Egypt, Brazil, India and Saudi Arabia. Russia, China and Iran also did not sign.
The UK National Cyber Security Centre (NCSC) said in a statement that the outcome, which follows the Bletchley Park Declaration on AI Safety, was "testament to the UK's leadership in AI safety" and constituted the first global agreement of its kind. It did not say which countries contributed to the guidelines but withheld their signatures. Twenty-eight countries signed the Bletchley declaration in London on 1 November.
The agreement was a "truly global effort" that showed the UK was "an international standard bearer on the safe use of AI", Michelle Donelan, Secretary of State for Science, Innovation and Technology, said in the UK government statement announcing the guidelines yesterday.
Lindy Cameron, CEO of the UK NCSC, said in the same statement that the 'secure by design' guidelines were intended "to ensure that security is not a postscript to development but a core requirement throughout."
"AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," she said.
The guidelines set out ways software developers should address cyber security when designing, developing, deploying, operating and maintaining AI systems.
Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said in the same statement that the agreement "underscores the global dedication to fostering transparency, accountability, and secure practices" and "could not come at a more important time in our shared technology revolution".
US secretary of homeland security Alejandro Mayorkas said the guidelines were "an historic agreement".
"We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time," he said.