IT Essentials: AI risk and regulation
The EU grasps the AI nettle. Will it get stung?
Who'd be a tech regulator? You spend years consulting, attending lengthy conferences, poring over precedent, refining a set of rules - and the buggers move the goalposts.
Tech is moving faster all the time, while making legislation takes as long as it always has. With AI, that disparity seems certain to widen.
The sudden rise of AI in the popular consciousness, complete with outlandish promises and dire warnings, has left authorities around the world floundering to come up with a response.
In a matter of months, the UK has zigzagged from downplaying questions of safety governance in favour of the cut and thrust of competitive capitalism, to saying that Britain should lead the world in AI regulation.
Similarly, the US has been bouncing from pillar to post, seemingly unable to land on a coherent position. The country has issued a hodgepodge of executive orders and a high-level blueprint for an AI Bill of Rights, but so far there has been little clarity or agreement, every development confronted with "But China".
But China is moving forward in this area. Despite its size, and largely because it's a one-party state, the country can move surprisingly quickly when it comes to enacting new laws. What the results will look like is not clear, but one China watcher describes the government's moves to regulate generative AI as encompassing many of the principles that AI critics in the West are advocating for. At the same time, of course, its approach includes mechanisms of control, such as the social credit system, "that other countries would likely balk at."
So it was interesting this week to see the EU, a body rarely accused of acting with haste, make the first real move in the democratic world towards creating a comprehensive set of regulations specifically for AI.
The EU's approach, greenlighted this week, is to group AI systems according to the potential risk they pose, with four categories: minimal, limited, high and unacceptable. The "unacceptable" category covers systems considered a threat to individuals or groups, including the likes of China's social credit system and many applications of remote biometric identification, such as facial recognition.
Will it work? Certainly, it has its critics, not least among the companies that face hefty fines for transgression. IBM said it would like to see the high-risk category narrowed "so that only truly high-risk use cases are captured." Microsoft and Google said they look forward to "further refinement." You can almost hear the massed ranks of lobbyists being readied for action.
Time (and not much of it) will tell whether the bloc has built enough flexibility into the Act's definitions and categorisations; it will not become law for a couple of years. It will doubtless be found insufficient in many areas and in need of updating and rebalancing as the situation evolves. Will it be sufficiently agile? Perhaps the risk-based approach will prove unworkable after all. We will see.
But the thing is, it's a start, and it can be built upon. AI is going to be in everything, everywhere, all the time, and the alternative to agreed rules is regulatory capture by the world's most powerful companies. We really don't want that to happen.
Weekend Reading
Tech entrepreneur Ewen Kirk shares his thoughts on the UK's "incoherent" attitude to immigration. "We get the brightest and best, who pay £30k a year, often to be taught in a language that is not their first. After we've taught them to be brilliant, we tell them to go home," he tells Penny Horwood. "It's just crazy".
There's also the unfolding tale of the Clop ransomware gang's attempted extortion of an ever-widening pool of victims.
And those telecoms companies just can't keep their hands off each other, the latest couple dying to tie the knot being Vodafone and Three.
Have a great weekend.