Police use of facial recognition not ethical or legal
All facial recognition deployments the researchers examined fell short of minimum ethical and legal standards.
UK police should not be allowed to use live facial recognition (LFR) technology in any public areas because its use violates ethical standards and human rights laws, according to a new study by the Minderoo Centre for Technology and Democracy at the University of Cambridge.
Debate over police use of LFR in the UK has been ongoing for several years. While police forces often advocate for the technology as a means of reducing crime, demands for stricter regulation and greater accountability are common.
LFR technology involves connecting cameras to databases of facial images, such as watchlists. Faces captured by the cameras are then compared against those databases to flag possible matches.
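In outline, the matching step works like this: a face captured on camera is converted to a numerical "embedding" and compared against embeddings of faces on a watchlist, with anything above a similarity threshold flagged as a possible match. The minimal Python sketch below illustrates the idea; the names, embeddings and threshold are hypothetical and not taken from any police system.

```python
# Illustrative sketch only: how an LFR system might compare a face captured
# on camera against a watchlist. Embeddings, identities and the threshold
# are hypothetical; real systems use proprietary face-embedding models.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe: np.ndarray,
                            watchlist: dict,
                            threshold: float = 0.8) -> list:
    """Return watchlist entries whose similarity to the probe exceeds the threshold.

    Note: results are probabilistic alerts, not confirmed identifications.
    """
    hits = []
    for identity, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score >= threshold:
            hits.append((identity, score))
    return sorted(hits, key=lambda pair: pair[1], reverse=True)

# Toy usage with random vectors standing in for real face embeddings.
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(match_against_watchlist(probe, watchlist))
```

Because matches are threshold-based rather than exact, the reliability of such systems depends heavily on how that threshold is set and how alerts are reviewed by human operators.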
The Minderoo Centre study [pdf] advises against the deployment of LFR in public places like airports and streets - which is where police feel it would be most useful.
The researchers developed a new audit tool to determine whether police use of LFR conforms to the law and national guidance on issues such as privacy, equality, and freedom of speech and assembly.
They examined three LFR technology deployments by police forces in the UK: one by the Metropolitan Police and two by South Wales Police.
The researchers found that the deployments fell short of 'minimum ethical and legal standards' in all three cases.
They believe crucial details about how police use facial recognition technology are 'kept from view' - for example, minimal demographic data is published on arrests or other outcomes - making it difficult to determine whether the tools 'perpetuate racial profiling.'
The report's lead author Evani Radiya-Dixit, a visiting fellow at the Minderoo Centre, said, "There is a lack of robust redress mechanisms for individuals and communities harmed by police deployments of the technology.
"To protect human rights and improve accountability in how technology is used, we must ask what values we want to embed in technology."
Along with a lack of transparency, the researchers discovered a lack of accountability, with no clear recourse for anyone harmed by police use - or misuse - of the technology.
Radiya-Dixit said police forces are not always held accountable or culpable for harms caused by the use of LFR.
The report found that some uses of facial recognition systems lacked regular oversight from an independent ethics committee or the general public, and did not take adequate precautions to ensure there was a reliable "human in the loop."
The Cambridge research comes months after the UK Government rejected the Lords Justice and Home Affairs Committee's (JHAC) proposals on police facial recognition use.
The Lords JHAC inquiry concluded that using technologies like facial recognition and AI is likely to do more harm than good, and called for changes to how law enforcement uses such tools.
The Government acknowledged the issues brought up by the inquiry's findings in its formal response in July. However, it disagreed with the assertion that new technologies would inevitably overturn societal standards or allow machines to make decisions about what is necessary and appropriate.
The Government said it remained firm in its belief that people, not machines, should decide crucial issues such as whether to make an arrest, file charges, begin a criminal investigation, or convict someone.