Facebook and Instagram to hide AI-edited image labels
Claims new approach will better reflect extent to which AI has been used
Meta Platforms is once again adjusting its labelling policy for content on Facebook and Instagram that has been modified or edited using generative AI tools.
Starting this week, the company will no longer prominently display the "AI Info" tag for content that has been "only modified or edited by AI tools." Instead, the label will be hidden within the menu that appears when clicking the three dots above the post.
The move comes after criticism of Meta's previous "Made with AI" label, which incorrectly tagged real photos taken by users.
"Our intent has always been to help people know when they see content that was made with AI, and we've continued to work with companies across the industry to improve our labeling process so that labels on our platforms are more in line with people's expectations," Meta stated in a blog post.
The company says the new approach will better reflect the extent to which AI has been used in a piece of content.
Some experts, however, worry that hiding the label will make it more difficult for users to identify images that have been edited with AI, increasing confusion and the spread of misinformation. Doctored images are a common vehicle for false information, particularly during election seasons, and a label tucked behind a menu is far easier to miss.
Meta has clarified that content detected as fully AI-generated will retain the "AI Info" label in its current, prominent position.
Additionally, the company will disclose whether the label is based on industry-shared signals or self-disclosure.
"We will still display the 'AI info' label for content we detect was generated by an AI tool and share whether the content is labeled because of industry-shared signals or because someone self-disclosed," the company added.
While Meta does not disclose which systems it uses to detect AI-edited content, it points to industry-shared signals such as Content Credentials metadata, built on the Adobe-backed C2PA standard, and Google's SynthID digital watermarks.
Meta to proceed with AI training using UK users' posts
Meta Platforms is now moving forward with its controversial plans to use public Facebook and Instagram posts from UK users to train its AI models.
The company has announced that it will resume using publicly shared content to train its AI models, excluding private messages and data from users under 18.
"We do not use people's private messages with friends and family to train for AI at Meta, and we do not use information from accounts of people in the UK under the age of 18. We'll use public information – such as public posts and comments, or public photos and captions – from accounts of adult users on Instagram and Facebook to improve generative AI models for our AI at Meta features and experiences, including for people in the UK," it said.
Meta argues that this approach will allow its AI to better reflect British culture and history, benefiting UK companies and institutions.
This announcement comes three months after Meta paused its AI training plans in the UK due to regulatory concerns raised by the Information Commissioner's Office (ICO).
The ICO questioned Meta's approach to using UK user data for AI training and its methods for obtaining consent.
The company says it has since been working closely with the ICO to address its concerns.
The Irish Data Protection Commission, Meta's lead privacy regulator in the European Union, has also objected to Meta's plans.
Meta says Facebook and Instagram users in the UK will begin receiving in-app notifications explaining how their data will be used and offering an option to opt out of having it used for AI training.
Privacy campaigners like the Open Rights Group and None of Your Business (NOYB) have expressed concerns over the plans, accusing Meta of turning users into involuntary test subjects. They have urged the ICO and the European Union to block the initiative.
The ICO said it has not granted explicit regulatory approval and will continue to monitor the rollout now that Meta has agreed to change its approach.