Regulators block Meta from training AI on user data

UK and EU authorities have told Meta to pause plans to train LLMs on Facebook and Instagram data

Regulators in the UK and European Union have forced Meta to pause its plans to train AI on user data.

Both the Irish Data Protection Commission (DPC) and UK's Information Commissioner's Office (ICO) have pushed back against Meta's decision to train large language models (LLMs) using public content users have shared on Facebook and Instagram.

The DPC, acting on behalf of several European data protection authorities (DPAs), asked Meta to delay the training, while the ICO asked the company to pause and review its plans.

Meta says it is "disappointed" by the requests, calling them "a step backwards for European innovation."

Both regulators welcomed the decision and said they would continue to engage with Meta, and other generative AI developers, to make sure users' rights are respected.

Leaning on legitimate interest

The regulators' requests stemmed from Meta's announcement last month that it would start training LLMs on user data.

In response to the announcement, the user privacy rights group noyb ("none of your business") filed 11 separate complaints with DPAs across the EU to put pressure on Ireland's DPC.

Noyb argued that Meta's plans went against "at least ten" GDPR articles, including one related to the need to opt in to (rather than out of) personal data processing.

In this particular case, users couldn't even technically opt out; all they could do was fill out an objection form (hidden deep in Instagram's settings), leaving it entirely to Meta's discretion whether to honour the request.

Meta claimed that "legitimate interest" overrode data protection laws: an argument it has repeatedly invoked to justify its data processing practices, and which European authorities have generally rebuffed.

On a webpage about the policy change, Meta says it believes the legitimate interest argument "is the most appropriate balance for processing public data at the scale necessary to train AI models, while respecting people's rights."

Reading between the lines, this means Meta doesn't believe an opt-in system would have generated data at the scale it needed to train its AI models.

Noyb has welcomed the regulators' pushback. However, according to chair Max Schrems, the group will continue to monitor the case closely.

"So far, there has been no official change to the Meta privacy policy that would make this commitment legally binding. The cases we have filed are ongoing and will require an official decision."