Meta, the parent company of Facebook, on Friday announced significant revisions to its policies on digitally created and altered media, as the upcoming U.S. elections loom as a crucial test of its ability to police deceptive content generated by new artificial intelligence (AI) technologies.
In a blog post, Monika Bickert, Vice President of Content Policy at Meta, said the company will begin applying “Made with AI” labels in May to AI-generated videos, images, and audio shared across its platforms, expanding a previous policy that covered only a narrow slice of manipulated content.
Meta will also apply separate, more prominent labels to digitally altered media that poses a “particularly high risk of materially deceiving the public on a matter of importance,” regardless of whether it was created with AI or other tools.
The new approach shifts Meta's handling of manipulated content from removing a limited set of posts to keeping the content up while giving viewers information about how it was made.
Meta had previously announced a plan to detect images produced with other companies' generative AI tools using invisible markers embedded in the files, though it did not specify a rollout date at the time.
A Meta spokesperson confirmed to Reuters that the updated labeling measures would extend to content shared on Meta’s Facebook, Instagram, and Threads platforms. However, different rules will govern its other services such as WhatsApp and Quest virtual reality headsets.
Meta will begin applying the more prominent “high-risk” labels immediately, the spokesperson added.
The policy changes come months ahead of the U.S. presidential election in November, a contest that tech researchers warn could be influenced by new generative AI technologies. Political campaigns have already begun deploying AI tools in places like Indonesia, pushing the boundaries of guidelines set by platforms like Meta and leading generative AI provider OpenAI.