Public sector lacks transparency in the use of AI, warns official report

Implementations of AI in the public sector risk undermining the Nolan Principles, warns Lord Evans, chair of the Committee on Standards in Public Life

The use of artificial intelligence (AI) in government risks undermining transparency, obscuring accountability, reducing responsibility for key decisions by public officials, and making it more difficult for government to provide "meaningful" explanations for decisions reached with the assistance of AI.

Those are just some of the warnings contained in the Review of Artificial Intelligence and Public Standards [PDF] report released today by the Committee on Standards in Public Life.

The Review upheld the importance of the Nolan Principles, claiming that they remain a valid guide for the implementation of AI in the public sector.

"If correctly implemented, AI offers the possibility of improved public standards in some areas. However, AI poses a challenge to three Nolan Principles in particular: openness, accountability, and objectivity," Lord Evans of Weardale KCB DL Chair, Committee on Standards in Public Life. "Our concerns here overlap with key themes from the field of AI ethics."

The risk, Evans added, is that AI will undermine those three principles.

"This review found that the government is failing on openness. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government," the Review concluded, adding that it was still too early to form a judgement in terms of accountability.

"Fears over 'black box' AI, however, may be overstated, and the Committee believes that explainable AI is a realistic goal for the public sector. On objectivity, data bias is an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias," it added.

The government, therefore, needs to put in place effective governance procedures around the adoption and use of AI and machine learning in the public sector. "Government needs to identify and embed authoritative ethical principles and issue accessible guidance on AI governance to those using it in the public sector. Government and regulators must also establish a coherent regulatory framework that sets clear legal boundaries on how AI should be used in the public sector."

However, it continued, there has been a significant amount of activity in this direction already.

The Department for Culture, Media and Sport (DCMS), the Centre for Data Ethics and Innovation (CDEI) and the Office for AI have all published ethical principles for data-driven technology, AI and machine learning, while the Office for AI, the Government Digital Service, and the Alan Turing Institute have jointly issued A Guide to Using Artificial Intelligence in the Public Sector and draft guidelines on AI procurement.

Nevertheless, the Review found that the governance and regulatory framework for AI in the public sector is still "a work in progress" and one with significant deficiencies, partly because multiple sets of ethical principles have been issued and the guidance is not yet widely used or understood - especially as many public officials lack expertise in, or even understanding of, AI and machine learning.

While a new regulator isn't required, the twin issues of transparency and data bias "are in need of urgent attention", the Review concluded.