AI bias uncovered in UK welfare system

Certain groups more likely to be flagged for potential fraud

The Department for Work and Pensions (DWP) is facing criticism after revelations that an AI system it uses to detect benefits fraud disproportionately targets individuals based on age, disability, marital status, and nationality.

As reported by The Guardian, internal documents [PDF] obtained through a Freedom of Information request reveal significant flaws in the AI system designed to streamline verification of universal credit claims.

A "fairness analysis" conducted by the DWP uncovered "statistically significant outcome disparities," indicating that certain groups were more likely to be flagged for potential fraud, even if their claims were legitimate.

Despite DWP assurances that human oversight mitigates risks, critics argue that the algorithm's inherent bias could lead to wrongful investigations and significant financial hardship for vulnerable individuals.

Key details from the fairness analysis have been withheld, leaving unanswered questions about which age groups are most at risk and how disabled claimants may be disproportionately impacted. The DWP has defended the redactions, claiming they are necessary to prevent fraudsters from manipulating the system.

The DWP's system is not an isolated case. Reports suggest at least 55 AI systems are currently in use by public authorities across the UK, influencing decisions on housing, welfare, healthcare and policing. However, the government's official AI register lists just nine systems, exposing a major gap in oversight and transparency.

Last month, a Guardian investigation revealed that no Whitehall department had complied with a policy mandating the disclosure of AI systems.

Peter Kyle, the Secretary of State for Science, Innovation and Technology, has previously admitted the government's shortcomings, stating that the public sector "hasn't taken seriously enough the need to be transparent in the way that the government uses algorithms".

The DWP defends its use of AI, saying the technology is essential in combating fraud and ensuring the integrity of the welfare system.

"Our AI tool does not replace human judgment, and a caseworker will always look at all available information to make a decision," A DWP spokesperson said.

"We are taking bold and decisive action to tackle benefit fraud – our fraud and error bill will enable more efficient and effective investigations to identify criminals exploiting the benefits system faster."

Campaigners are calling for urgent reforms, demanding the system be halted until comprehensive fairness analyses have been conducted and greater transparency is established.

Caroline Selman of the Public Law Project, the organisation that obtained the documents, condemned the lack of accountability.

"It is clear that in a vast majority of cases the DWP did not assess whether their automated processes risked unfairly targeting marginalised groups," she said.

Selman criticised the DWP's approach, whereby people are subjected to intrusive scrutiny before any evidence of wrongdoing is uncovered.

"DWP must put an end to this 'hurt first, fix later' approach and stop rolling out tools when it is not able to properly understand the risk of harm they represent," she added.

In September, Common Sense Media published research finding that Black teenagers in the US are nearly twice as likely as their peers to have their schoolwork mistakenly flagged as AI-generated.

The study, based on surveys of 1,045 American students aged 13 to 18 and their parents, found that nearly 20% of Black teenagers had their work incorrectly flagged as AI-generated, compared with 7% of white teens and 10% of Latino teens.