Meta restarts facial recognition tests to combat fraud

50,000 public figures will be enrolled in trial

Image: Facial recognition trials will begin in December

Meta has announced that it will begin trialling the use of facial recognition technology again in an effort to crack down on “celebrity-bait” scam ads.

Trials will begin in December with a select pool of 50,000 celebrities or public figures worldwide on an opt-out basis. The trial will involve automatically comparing celebrity Facebook and Instagram profile photos with images used in suspected scam ads. If the images match and Meta believes the ads are scams, it will block them.

Monika Bickert, Meta’s VP of content policy, wrote in a blog post yesterday:

“Scammers often try to use images of public figures, such as content creators or celebrities, to bait people into engaging with ads that lead to scam websites where they are asked to share personal information or send money. This scheme, commonly called ‘celeb-bait,’ violates our policies and is bad for people that use our products.”

“Of course, celebrities are featured in many legitimate ads. But because celeb-bait ads are often designed to look real, they’re not always easy to detect.”

David Agranovich, director of global threat disruption at Meta, said yesterday:

“This process is done in real time and is faster and much more accurate than manual human reviews, so it allows us to apply our enforcement policies more quickly and to protect people on our apps from scams and celebrities.”

Meta had previously shut down facial recognition trials amid pushback from privacy groups and regulators.

The tech giant claimed that the feature is not being used for any purpose other than fighting scam ads. “We immediately delete any facial data generated from ads for this one-time comparison regardless of whether our system finds a match, and we don’t use it for any other purpose,” Bickert said.

Meta said that early tests with a small group have shown promising results, improving the speed and efficacy of detecting and enforcing against this type of scam.

Meta said it will start displaying in-app notifications to a larger group of public figures to advise them that they are being enrolled in the system and to give them the chance to opt out.

Meta also told TechCrunch that it believes facial recognition technology could prove effective at detecting deepfake scam ads, in which generative AI is used to manipulate the images of public figures.

Meta is also testing the use of facial recognition to spot imposter accounts, where scammers impersonate public figures for fraudulent purposes.

Notably, no testing is taking place in either the UK or the EU due to their more restrictive data protection laws. In the specific case of biometrics used for identity verification, the EU’s data protection framework demands explicit consent from the individuals concerned. That doesn’t rule out testing; it simply means Meta must first obtain explicit consent.

“We are engaging with the U.K. regulator, policymakers, and other experts while testing moves forward,” Meta spokesman Andrew Devoy told TechCrunch. “We’ll continue to seek feedback from experts and make adjustments as the features evolve.”