AI Action Summit: Why it’s the Notre Dame Method, not St Paul’s
The world needs to move fast and not break anything
The AI Action Summit in Paris hosted grand ideas and high ideals, but not everybody is on board, writes OpenUK’s Amanda Brock.
President Emmanuel Macron ended day one of the AI Action Summit in Paris promising to use the “Notre Dame de Paris Strategy,” referring to the speedy reconstruction of Notre Dame Cathedral just five years after the 2019 blaze that devastated it.
Macron said “France will show the rest of the world” it can commit to a clear timeline and deliver a clear strategy, with someone in charge ensuring that delivery. Frankly, it was all very impressive.

The summit led to major announcements, in particular the Current AI Foundation, launched with $400m of initial funding, a target of $2.5bn for public-interest AI, and a focus on data.
Alongside this, ‘ROOST’ (Robust Open Online Safety Tools) will see open source tooling for AI safety made available. This means using tools, not rules, to manage governance in a way engineers understand, without burdening them with excessive compliance requirements. Open sourcing the software tools makes them collaborative and accessible to all.
While France wants to lead this global revolution from Europe - and the razzmatazz of the Action Summit is the right way to catalyse this - the event itself is nothing more than the starting line. The path ahead is long and will only work with the right degree of collaboration.
The importance of collaboration was rightly recognised across the Summit's outputs, something the UK has so far failed to do, despite having started the Summit process at Bletchley in 2023.

The difference between the two Summits is stark. One was a small, closed group in a rural, if historic, location, with a declaration on AI safety. The other was a huge event in the heart of a city, set to inspire the world with a series of major announcements, with the backdrop of the vast Grand Palais.
Open sourcing Inspect – the UK AI Safety Institute’s LLM evaluation platform - back in May 2024 was a stroke of genius. It’s something Rishi Sunak got behind, going so far as to write that: “We are pro-open source. Open source drives innovation. It creates startups. It creates communities. There must be a very high bar for any restrictions on open source. That’s why the AI Safety Institute is today open sourcing what it has built.”
The code for the Inspect project – a framework for building AI safety evaluations – is now available to anyone to use.
The UK’s AI Safety Institute had also been expected to announce plans for an Open Source Open Day, which would bring together experts to explore open source tooling for safety. This would have made it easier for the UK to lead on collaboration across Big Tech and SMEs alike. However, that open day never happened, given the changes caused by a general election and a new party taking over government.
Fast forward 10 months and we watch President Macron’s team build what is needed, leveraging their own and the UK’s ideas. That’s the joy of innovation and openness. In open source, we say of commercialisation that your own innovation enables your competitors.
The same is true in AI and its future. Being first isn’t relevant, but building something of this scale is.
Travelling back from Paris, exhausted by the relentless unmissable fringe events, as well as the main summit, I flicked open my Eurostar Tatler magazine. In there, an article called “Hot Dame” explores the work of a British carpenter on Notre Dame’s restoration. That work was “hailed as a triumph of French craftsmanship. But sometimes only the best of British will do.”
We see this too often with our leaders in technology. London’s Demis Hassabis and Laura Gilbert, and Scotland’s Toran Bruce Richards, are great examples of the British AI geniuses who sit at the heart of the global AI revolution, but their critical roles and expertise are easy to miss. It is not simply a few individuals we are referring to here, but the position of the UK within the global open source and AI communities.

At the end of the Summit, the UK and US declined to sign the Summit Declaration, whilst 60 other countries did. The Declaration requires that the signatories ensure “AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all” and that it makes “AI sustainable for people and the planet.”
While the Declaration may, as the UK statement says, not go far enough on safety, the breadth of its support shows what is at stake if we do not attempt to follow that path.
In any case, even though the UK did not sign the Declaration, as Europe’s leader in open source software it is in the UK’s nature, if not our DNA, and in our best interest, to join the collaboration effort.
The UK and global open source communities will work with Martin Tisné, Camille François and their teams as they build ‘Current AI’ and ‘ROOST’ to ensure our AI futures are open and in the public interest.
Macron said that France must “Plug, baby, plug,” as part of the European goal around AI. To support the success of opening up AI for the public good, we must now “collaborate, baby, collaborate.”
Amanda Brock is CEO of OpenUK, which has just released its AI Openness Report.