Meta announces measures to label AI-generated content to combat deepfakes
Meta, the parent company of Facebook, revealed significant policy changes regarding digitally manipulated media on Friday, ahead of upcoming US elections that will test its ability to tackle deceptive content produced by new artificial intelligence (AI) technologies.
According to Vice President of Content Policy Monika Bickert, Meta will introduce "Made with AI" labels starting in May for AI-generated videos, images, and audio shared on its platforms. This initiative expands on a previous policy that addressed only a narrow subset of altered videos, Reuters reported.
Furthermore, Meta plans to implement distinct and more prominent labels for digitally manipulated media that presents a "particularly high risk of materially deceiving the public on a matter of importance," regardless of whether AI or other tools were used in its creation.
This revised approach marks a shift in Meta's handling of manipulated content: rather than removing such material, the company will keep it accessible while giving viewers information about how it was created.
Meta had previously announced the development of a system to detect images generated using third-party generative AI tools by embedding invisible markers in the files, although no specific start date was provided at the time.
A company spokesperson confirmed that the new labeling measures would apply to content shared on Meta's Facebook, Instagram, and Threads platforms, while noting that different rules govern its other services such as WhatsApp and Quest virtual reality headsets.
The implementation of the "high-risk" labels will commence immediately, according to the spokesperson.
These changes come ahead of the US presidential election scheduled for November, with tech researchers warning of the potential impact of new generative AI technologies on political campaigns. AI tools are already being used in campaigns in countries such as Indonesia, testing the limits of guidelines set by platforms like Meta and leading AI provider OpenAI.
In February, Meta's oversight board criticized the company's existing rules on manipulated media, deeming them "incoherent." The review was prompted by a video posted on Facebook last year featuring altered footage of US President Joe Biden. Despite its misleading content, the video remained on the platform.
The oversight board recommended extending the policy to cover non-AI content, emphasizing that such content can be equally misleading. It also suggested applying the policy to audio-only content and to videos that depict people doing things they never actually did.