Meta's AI plans are against EU law, says Max Schrems
'This is clearly the opposite of GDPR compliance'
Meta updated its privacy policy in May, notifying millions of European users of changes that allow the company to use their personal data for unspecified "AI technology" purposes.
This data includes years of personal posts, images, chats with organisations (private chats are exempt) and online tracking information. Data from dormant accounts will also be hoovered up to fuel Meta's AI ambitions.
The changes, which cover Facebook and Instagram but not WhatsApp, are due to take effect on 26 June. Meta says they are designed to make it easier to understand how customers' information is used. It claims that its "legitimate interest" in processing users' data overrides their data protection rights.
Meta has repeatedly invoked the legitimate interest argument to justify its data processing practices, and has generally been rebuffed by the European authorities.
In response to the latest changes, noyb, a non-profit privacy organisation headed by Austrian lawyer Max Schrems, has filed complaints in 11 European countries, requesting an "urgency procedure" to halt the changes immediately.
In a blog post, noyb said this action was necessary because of the imminent deadline: "Given that Meta's processing for undisclosed 'artificial intelligence technology' is already set to take effect on 26 June 2024, and Meta claims that there is no option to opt out at a later point to have your data removed (as foreseen under Article 17 GDPR and the "right to be forgotten"), noyb has requested an 'urgency procedure' under Article 66 GDPR."
Users can opt out of the data collection before the changed terms come into effect, but Meta has been criticised for making the process unduly complex and difficult. Moreover, this approach shifts the burden of privacy protection onto users, which runs counter to the principles of the GDPR. Meta contends that its legitimate interest overrides these protections.
According to Schrems: "Meta is basically saying that it can use 'any data from any source for any purpose and make it available to anyone in the world', as long as it's done via 'AI technology'. This is clearly the opposite of GDPR compliance."
Schrems has notched up several legal successes against Meta in the past decade, including one that resulted in a €1.2 billion fine over the illegal transfer of EU users' data to the US.
He said noyb has counted violations of "at least ten" articles of the GDPR in Meta's upcoming privacy policy, noting that it sets no limits or specific details on how user data may be used or with whom it might be shared.
Schrems accused the Irish Data Protection Commission (DPC), Meta's data protection regulator in Europe, of complicity in allowing the breach of the GDPR.
He has repeatedly criticised the Irish regulator for allegedly looking out for the interests of US big tech companies headquartered in the country over those of EU citizens.
In 2021, the European Parliament passed a resolution expressing "great concern" over the slow progress of cases referred to the Irish DPC against companies like Facebook, Microsoft and Apple. The DPC claims a lack of resources prevents it from processing the high volume of complaints it receives.
The Norwegian regulator has said it takes noyb's complaint seriously and will give it high priority. "We will work closely with our European colleagues in the further handling of the case," said Line Coll, director of the Norwegian Data Protection Authority, in a post on its website.
The EU AI Act is due to come into force this summer, after which affected companies will have between 12 and 36 months to comply, depending on the use case. In the meantime, we can expect the GDPR to be invoked more frequently by complainants and regulators in cases involving training data.