Seven people wrongfully apprehended by Met Police during Oxford Circus facial recognition deployment
Big Brother Watch claims 86 per cent of alerts flag innocent members of the public
The Metropolitan Police's facial recognition deployment at Oxford Circus on Thursday led to seven innocent members of the public being wrongfully apprehended after the technology incorrectly identified them.
Despite this, their facial images will be added to the police database, which already contains more than one million mugshots.
On top of that, a further five passers-by were stopped, questioned and asked to produce ID. In total, 8,600 people were scanned by the facial recognition technology without their consent.
That's according to Big Brother Watch, which claims that the Met Police's facial recognition systems are so poor that 86 per cent of the alerts wrongly flagged innocent members of the public as ‘wanted', while 71 per cent of those misidentifications resulted in the police stopping and demanding ID from innocent people.
"This blows apart the Met's defence that facial recognition surveillance is in any way proportionate or that the staggering inaccuracy is mitigated by human checks. This is a disaster for human rights, a breach of our most basic liberties & an embarrassment for our capital city," the organisation tweeted.
The details emerged after it was revealed that the Met Police had used Clearview AI, the controversial US company that compiled a database of three billion images of people without their consent, to help populate its own imaging database.
That use was only disclosed after a data breach at Clearview AI, which had left an unsecured database online in the cloud, revealing its full customer list.
The company had claimed that its services were only used by law enforcement agencies in North America. But the breach also showed that the Met Police had made more than 170 searches of Clearview AI's database since December, in the run-up to going live on the latest stage of its facial recognition roll-out.
Furthermore, the data breach indicates that the Met Police misled journalists when it flatly denied using Clearview AI's services in response to a Freedom of Information request. The Met Police, meanwhile, claims that Clearview AI was not used in conjunction with the live facial recognition tool it has started deploying across London.
The Oxford Circus deployment followed on from the first deployment in Stratford in mid-February. Facial recognition technology has also been rolled out on private estates across London.