UK to bring AI developers together in San Francisco

Building on safety commitments made in Seoul

The UK will host an invite-only AI developer conference in San Francisco this November to discuss how to implement the commitments made at a summit in Seoul.

In May, 10 countries, plus the European Union, signed an agreement in South Korea to accelerate the development of safe and trustworthy artificial intelligence.

At the same time, 16 companies - from the USA, China, South Korea, the EU and more - agreed to publish their latest AI safety frameworks ahead of the next AI Action Summit, which will be held in France in February 2025.

Those frameworks should detail how the companies plan to address important AI risks, like bias and misuse by threat actors.

As part of the commitments, the companies also agreed not to develop or deploy models whose risks they cannot adequately address.

The conference, to be held on 21st and 22nd November, "will help build a deeper understanding of how the Frontier AI Safety Commitments [from Seoul] are being put into practice," according to the Department for Science, Innovation and Technology.

The conference is being held in the USA because DSIT plans to piggyback off the first meeting of the International Network of AI Safety Institutes, on 20th and 21st November, also in San Francisco.

Countries around the world have been establishing AI safety institutes since the first AI Safety Summit at Bletchley Park last year.

The UK's AI Safety Institute, which is organising the conference with the Centre for the Governance of AI, has called for submissions from attendees on potential areas for discussion.

Technology secretary Peter Kyle said the conference is "a clear sign of the UK's ambition to further the shared global mission to design practical and effective approaches to AI safety."