Britain's Information Commissioner issues warning over facial recognition
Past investigations into the application of live facial recognition tech have found problems in all cases.
Elizabeth Denham, the head of the UK's Information Commissioner's Office (ICO), says she is "deeply concerned" about the inappropriate and reckless use of live facial recognition (LFR) technologies in public spaces.
Denham has noted that while facial recognition technology could help make our lives more secure and efficient, the risks to privacy increase when it is used in real time and in more public contexts.
She added that there could be "significant" consequences from using LFR technology if people's personal data is collected on a mass scale without their knowledge, choice or control.
"We should be able to take our children to a leisure complex, visit a shopping centre or tour a city to see the sights without having our biometric data collected and analysed with every step we take," she warned.
Since its inception, facial recognition technology has faced intense criticism from lawmakers and privacy advocates in different countries. Critics cite multiple studies that have found the technology can suffer from race-, age- and ethnicity-related biases, and could lead to human rights abuses. They also argue that the technology has the potential to become an invasive form of surveillance.
The ICO has undertaken several investigations into planned applications of LFR technology, and found problems in all of them. None of the organisations involved in those probes was able to fully justify the processing of people's data, and none of the systems deployed was found to be fully compliant with the data protection regulations in the UK.
Denham also raised concerns about the impact of combining LFR with social media or other large datasets.
Last week, the ICO published an Opinion on the use of LFR in public places by public organisations and private firms - an update to a similar publication from 2019. The new document includes guidance for organisations planning to implement LFR, and advises controllers to avoid using the technology just to save money, improve efficiency or because it 'is part of a particular business model or proffered service'.
The Opinion explains how the law sets a high bar to justify the use of LFR in public places.
The use of biometric technologies to identify individuals has sparked major human rights concerns around privacy and the risk of discrimination in recent years.
In June 2020, tech giant IBM announced that it was quitting the facial recognition software market over concerns that the technology could be used to promote racial injustice and discrimination. The firm said it would no longer sell general-purpose facial recognition software and would also oppose the use of such technology for racial profiling, mass surveillance, violations of basic human rights or any purpose 'which is not consistent with our values and principles of trust and transparency'.
In April this year, EU lawmakers presented the bloc's first-ever legal framework for regulating high-risk applications of AI technology. Members of the European Commission stated their aim to achieve 'proportionate and flexible rules' to address risks, and to strengthen Europe's position in setting the highest standard for regulating AI.
Just last month, privacy campaigners filed a series of legal complaints with five European regulators against the US tech firm Clearview AI. They allege the company scraped facial images of 3 billion people from the web without their knowledge or permission, in contravention of the GDPR and other regulations.