ICO fines Clearview AI £7.5m over image collection
The UK's data protection watchdog has fined facial recognition company Clearview AI £7.5m for illegally collecting people's images.
The Information Commissioner's Office (ICO) has issued an enforcement notice ordering the US-based firm to cease collecting and using the personal data of UK residents, and to delete any such data that it may have stored on its systems.
Clearview has gathered more than 20 billion images of people's faces, together with other data from the internet and social media sites.
"People were not informed that their images were being collected or used in this way," the ICO said.
John Edwards, the UK's information commissioner, said Clearview's methods of image collection enable identification of the people in the photos, as well as monitoring their behaviour.
The firm "offers [image collection] as a commercial services," which is "unacceptable," said Edwards.
"That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice," he noted.
The ICO found that Clearview had breached UK data protection rules in a number of ways, including:
- failing to use the personal information of people living in the UK in a manner that is fair and transparent;
- failing to have a legitimate reason for collecting people's information;
- failing to have a process in place to stop the data being retained indefinitely;
- failing to meet the more stringent data protection requirements that are necessary for biometric data;
- asking for additional personal information, including photographs, from members of the public who asked whether they were included in Clearview's database.
The ICO's announcement on Clearview follows a joint investigation with the Office of the Australian Information Commissioner (OAIC).
In November, the OAIC ordered Clearview to delete data after concluding that it violated Australian data protection laws.
The same month, the ICO said it intended to fine Clearview AI £17m. However, the regulator said this week that it had reduced the fine after taking into account a number of factors, including representations from the company itself.
Clearview's AI tool enables customers to run facial recognition searches and identify persons of interest. Customers submit a person's photo, and the system uses facial recognition to try to locate that person in its database.
If a match is found, it returns details such as the individual's name and social media handles.
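The mechanics behind this kind of search are broadly the same across face-search systems: a submitted photo is converted into a numeric embedding and compared against embeddings stored in an index. The sketch below is purely illustrative and is not Clearview's actual implementation; the embedding size, the similarity threshold and the example records are all hypothetical.

```python
# Illustrative sketch of a face-search lookup, NOT Clearview's actual system.
# Assumes a face photo has already been converted into a fixed-length embedding
# by some face-recognition model (hypothetical here); the records and the
# 0.6 similarity threshold are invented for the example.
import numpy as np

# Toy "database": each entry pairs a face embedding with scraped profile details.
database = [
    {"name": "Example Person A", "handle": "@person_a", "embedding": np.random.rand(128)},
    {"name": "Example Person B", "handle": "@person_b", "embedding": np.random.rand(128)},
]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe_embedding: np.ndarray, threshold: float = 0.6):
    """Return the best-matching record's details if it clears the threshold, else None."""
    best = max(database, key=lambda rec: cosine_similarity(probe_embedding, rec["embedding"]))
    if cosine_similarity(probe_embedding, best["embedding"]) >= threshold:
        return {"name": best["name"], "handle": best["handle"]}
    return None

# A customer submits a probe photo; a random embedding stands in for it here.
print(search(np.random.rand(128)))
```

In a real deployment the linear scan would be replaced by an approximate nearest-neighbour index, which is what makes searching a database of billions of images practical.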
Clearview CEO Hoan Ton-That said in October that a larger database of images means its customers, often law enforcement agencies, are more likely to find people of interest.
Clearview is facing scrutiny in many other countries over the privacy implications of its software. Canada's data privacy commissioner concluded last year that the firm had "violated federal and provincial privacy laws" by collecting images of Canadians.
Commenting on the ICO's fine, Simon Randall, CEO of video privacy and security company Pimloc, said:
"Artificial intelligence can change the world for the better, but we've got to make sure that we protect people's rights rather than rushing forwards with AI as fast as we can - it's reckless.
"People's faces are among their most important personal identifiable information - we shouldn't let companies run amuck with free unfettered access to this at industrial scale."
Toby Lewis, global head of threat analysis at Darktrace, said: "Facial recognition technology has always been marred by controversy and this is likely to trigger further calls to ban the new technology - but that is absolutely not the answer.
"We need to find a way of managing the associated risks (privacy and security) that come with embracing new technology. Organisations tasked with securing this data will need AI to monitor the systems that manage the data to protect them against breaches or cyber-attacks. This will ensure the strengths of the latest technologies are not turned into security weaknesses."