UK needs system to record AI misuse and mistakes, says thinktank

Current system is piecemeal and lacks an effective reporting framework

A report by the Centre for Long-Term Resilience (CLTR) says the UK needs a system for recording the misuse and malfunctions of artificial intelligence (AI), lest ministers remain unaware of issues that could prove dangerous in the long term.

The CLTR recommends that the new government create a system for logging AI failures in public services and consider building a hub where all AI-related issues can be collated. It says such a system is vital if the technology is to be used successfully.

The report cites 10,000 AI "safety incidents" recorded by news outlets since 2014, which are currently listed in a database compiled by the Organisation for Economic Co-operation and Development (OECD).

Recent examples logged in the database include approximately 200,000 people being incorrectly flagged by a government algorithm as potentially guilty of benefit fraud, a deepfaked video of Keir Starmer purportedly being abusive towards staffers, and Network Rail's monitoring of passengers via facial recognition software at stations.

The report argues that the UK's present regulation is piecemeal and lacks an effective incident reporting framework. If this continues, the Department for Science, Innovation and Technology (DSIT) will lack visibility of, and be unable to act quickly enough on, incidents in foundation models such as biased outcomes, incidents where the government's own use of AI fails (as in the benefit fraud case), the misuse of AI to create and distribute disinformation, and incidents where AI is misused to encourage harm to individuals.

The CLTR sets out three reasons why a central reporting mechanism is the best way forward in such a complex patchwork of risks. First, safety in a real-world context should be monitored so that the regulation and deployment of AI can be corrected as necessary. Second, a central hub would speed up responses to, and investigations into, major incidents. Finally, such a hub could serve as an early warning system for larger-scale harms that may arise in the future.

Speaking with The Guardian, Tommy Shaffer Shane, a policy manager at CLTR and the report's author, said:

"Incident reporting has played a transformative role in mitigating and managing risks in safety-critical industries such as aviation and medicine. But it's largely missing from the regulatory landscape being developed for AI. This is leaving the UK government blind to the incidents that are emerging from AI's use, inhibiting its ability to respond."

Ekaterina Almasque, general partner at venture capital firm OpenOcean, said: "Investors crave certainty and long-term stability. The Centre for Long-Term Resilience's proposal for an AI incident reporting system makes sense. To attract private investment in AI, the UK needs solid guidance for startups and a stable regulatory environment.

"However, while safety-critical industries like aviation and medicine do benefit from incident reporting, we can't compare apples to oranges. AI will integrate into every sector, making it harder to track each and every incident. We must be realistic and avoid drowning all parties involved in red tape. Policies need to balance oversight with flexibility, allowing AI to develop dynamically without unnecessary constraints."