IT Essentials: Kicking the can
AI Bill is “properly in the background”
Last week we had news that the government has kicked the can of AI safety and ethics down the road again.
I don’t know about you, but I’m old enough to remember last year’s general election, when technology formed a critical plank of Labour’s programme for change. Labour seemed to realise that AI safety was a concern for many. The following extract is from page 38 of their manifesto (I checked so you didn’t have to).
“Labour will ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes.”
Well, we’ve had plenty of announcements since, mainly involving AI being used to try to increase the efficiency and productivity of the public sector, and the government’s push to allow more datacentres to be built. But safety? Not so much.
Ministers had intended to bring a short AI Bill to Parliament last autumn which would require companies developing frontier models to share those models with the newly renamed UK AI Security Institute. You’ll have noted the name change and the shift in emphasis from ‘safety’ to ‘security’.
The Bill was delayed, and we found out this week that it has been delayed again, until the summer. You wouldn’t have to be particularly risk averse to avoid betting on it making an appearance this side of Christmas. According to a report in The Guardian quoting a source inside Labour, the Bill is “properly in the background.”
At the Paris AI Summit in February, safety took a backseat to innovation and, inevitably, securing further investment. The US and UK both refused to sign the closing declaration which outlined a vision for “open, inclusive and ethical AI.”
At the same event, US VP JD Vance lectured delegates about Europe’s planned regulation of AI, a warmup for his speech at the Munich Security Conference, where he warned that pronouns are more dangerous than Vladimir Putin. I’m paraphrasing, but Ukrainians would likely beg to differ.
Journalists have also noticed just how many meetings technology secretary Peter Kyle has had with Big Tech since Labour won the election. According to a report in The Times, Kyle met executives from Amazon, Google, Meta and Microsoft during his first three months in the job. He has also met representatives from Apple, TikTok, X, Anthropic and Nvidia.
Why is that a problem? Because Big Tech is now inseparable from Trump, from MAGA, from DOGE, and from what looks increasingly to the outside world like a US regime, as opposed to an administration. Among the blizzard of executive orders issuing from the White House was one revoking the Biden administration’s order focusing on safety and trust in AI development.
The likes of Sam Altman and Mark Zuckerberg never believed they had any responsibility to regulate their AI development (or social media platforms), and now they won’t have to – in the US, at any rate. The prospect of regulation in Europe and the UK worried them, so they invested in a regime that would do their bidding. You must admit, it’s paid off handsomely.
Sometimes, things are exactly what they look like. And what this looks like is the UK bowing to pressure from the US not to regulate the activities of the companies that have bound themselves with billions of dollars to Trump 2.0.
It looks weak.