Faced with the rise of artificial intelligence (AI) in content generation, Meta is making significant changes to its rules, playing the transparency card. The move is a response to criticism from its Oversight Board and will result in a method for tracing this kind of content.
Meta, the social media giant formerly known as Facebook, plans to expand its labeling of AI content, including deepfakes, starting next month. The goal? To give users a clearer view of the content they are exposed to, especially when it could be misleading and influence their opinion on crucial issues. This is particularly relevant in 2024, a year that will see several major elections across the globe.
However, for deepfakes, Meta will only apply labels if the content carries industry-standard “AI image flags” or if the creator has disclosed that it is AI-generated. Outside of these cases, AI-generated content will likely go unlabeled.
These changes to Meta’s policy reflect an approach geared more toward “transparency and additional context” than toward removing manipulated media. In short, more labels and fewer removals will likely be the future of managing AI-generated content and manipulated media on Meta’s platforms, including Facebook and Instagram.
In July, Meta plans to stop removing content solely because it has been manipulated, giving users time to get used to the labeling process.
This reconfiguration could help Meta comply with growing legal requirements around content moderation, including those set out in the European Union’s Digital Services Act (DSA), which requires Meta to strike a balance between removing illegal content, reducing systemic risks, and protecting freedom of expression.
Meta’s Oversight Board, funded by the company but operating independently, reviews a small share of content moderation decisions and can issue policy recommendations. While Meta is not bound to accept this advice, it has chosen this time to change its approach to AI-generated content and manipulated media.
Monika Bickert, Meta’s vice president of content policy, admitted in a recently published blog post that the company had been taking too narrow an approach by considering only videos created or altered by AI to make it appear that a person said something they never said. The Oversight Board had criticized this stance last February, after an incident involving a doctored video of President Biden.
Anticipating a future filled with synthetic content, Meta is cooperating with other industry players to create common technical standards for identifying AI content. It will do this by adding AI labels to AI-generated video, audio, and images, based either on the detection of industry-standard AI signals or on users disclosing that they are uploading AI-generated content.
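To make the signal-detection side of this concrete, here is a minimal sketch of what checking one such industry-standard marker could look like. It assumes the marker is the IPTC “trainedAlgorithmicMedia” digital source type embedded in an image’s XMP metadata, and it uses Pillow’s getxmp() helper; this is a hypothetical illustration, not Meta’s actual detection pipeline.

```python
# Minimal sketch: check an image's XMP metadata for the IPTC
# "trainedAlgorithmicMedia" digital source type, one industry-standard
# signal that an image was AI-generated. Hypothetical example only;
# Meta's real detection pipeline is not public.
from PIL import Image  # requires Pillow >= 8.2 and defusedxml for getxmp()

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def contains_value(node, target):
    """Recursively search the nested dict/list returned by getxmp()
    for a string containing the target value."""
    if isinstance(node, dict):
        return any(contains_value(v, target) for v in node.values())
    if isinstance(node, list):
        return any(contains_value(v, target) for v in node)
    return isinstance(node, str) and target in node

def looks_ai_generated(path):
    """Return True if the image's XMP metadata carries the AI marker."""
    with Image.open(path) as img:
        xmp = img.getxmp()  # returns {} when no XMP metadata is present
    return contains_value(xmp, AI_SOURCE_TYPE)

if __name__ == "__main__":
    print(looks_ai_generated("example.jpg"))
```

In practice, platforms also rely on invisible watermarks and C2PA provenance data, which a simple metadata check like this would not capture, and a creator can strip XMP metadata entirely, which is why self-disclosure is the other half of the labeling scheme.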
Meta also intends to broaden the policy’s scope by labeling a much wider variety of content. If a digitally altered image, video, or audio clip creates a particularly high risk of misleading the public on an important matter, Meta reserves the right to add a more prominent label.
Meta’s efforts on this front also include working with nearly 100 independent fact-checkers to help identify the risks associated with manipulated content. These external partners will continue to review false and misleading AI-generated content, and if they rate content as “fake or altered,” Meta will respond by limiting its reach, meaning it will be less visible in users’ feeds.
It is clear that as synthetic content proliferates with the rise of generative AI tools, the workload on the shoulders of third-party fact-checkers is only going to grow.