Countries and companies agree AI safety initiatives in Seoul
China is notably absent
Ten nations and the EU have signed the "Seoul Statement of Intent toward International Cooperation on AI Safety Science," agreeing to establish the first international network of AI safety institutes.
The collaborative effort aims to accelerate the development of safe and trustworthy AI.
Last year the UK launched the world's first AI Safety Institute, aiming to advance global understanding of AI safety.
Now the UK has been joined by the USA, South Korea, Australia, France, Germany, Italy, Canada, Japan, Singapore and the EU at the ongoing AI Seoul Summit.
These nations will work together to build a common understanding of AI safety and align their research, standards and testing practices.
The new initiative goes beyond mere information sharing. Member institutes will openly discuss their AI models, including limitations, capabilities and potential risks.
They will also focus on complementarity and interoperability – ensuring their safety approaches and technical work are compatible – and will monitor and share resources on real-world "AI harms and safety incidents."
This open exchange aims to build a strong global understanding of AI safety principles.
The agreement signifies a broader commitment to "human-centric, trustworthy, and responsible" AI development.
Prime Minister Rishi Sunak hailed the agreement as a major step towards "international progress" on AI safety.
"AI is a hugely exciting technology – and the UK has led global efforts to deal with its potential, hosting the world's first AI Safety Summit last year," Mr Sunak said.
"But to get the upside we must ensure it's safe. That's why I'm delighted we have got agreement today for a network of AI Safety Institutes."
China, a key player in the AI development race, was notably absent from the Seoul summit. Reports indicate that the Chinese government did not join the virtual meeting co-hosted by UK Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.
However, the UK Department for Science, Innovation and Technology (DSIT) clarified that China is participating in the wider summit, and that a Chinese firm signed a new safety agreement alongside other tech companies earlier on Tuesday.
Big Tech joins forces on AI safety
The Seoul agreement comes alongside the "Frontier AI Safety Commitments" signed by 16 global AI technology companies on Tuesday.
This agreement outlines voluntary commitments from companies like Amazon, Google, Meta, Microsoft and even Elon Musk's xAI to ensure the responsible development and deployment of AI technology.
The core of the agreement lies in risk mitigation. Companies have pledged to develop frameworks for assessing the potential risks associated with their "frontier" AI models – the most advanced and powerful systems currently under development.
Crucially, the companies commit to not deploying models where severe risks cannot be adequately addressed. This clause signifies a proactive approach, prioritising responsible development over deployment at any cost.
Rishi Sunak welcomed the commitment by tech firms, highlighting its potential to set a global standard for responsible AI development.
"It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology," he said.
Lee Sang-min, South Korea's Minister of the Interior and Safety, said: "Ensuring AI safety is crucial for sustaining recent remarkable advancements in AI technology, including generative AI, and for maximising AI opportunities and benefits, but this cannot be achieved by the efforts of a single country or company alone.
"We are confident that the 'Frontier AI Safety Commitments' will establish itself as a best practice in the global AI industry ecosystem, and we hope that companies will continue dialogues with governments, academia, and civil society, and build cooperative networks with the 'AI Safety Institute' in the future."