Meta has announced plans to identify and label AI-generated content across its platforms as the company tries to combat misinformation and safeguard the integrity of upcoming elections.
In a recent blog post, Nick Clegg, Meta’s President of Global Affairs, said the company would begin labeling AI-generated content created with other companies’ tools, in addition to the photorealistic images already produced with Meta’s own AI imaging tool.
Clegg emphasized the importance of transparency in distinguishing human-generated from synthetic content: “As the difference between human and synthetic content gets blurred, people want to know where the boundary lies.” He added that users value clarity about this evolving technology and that Meta is committed to providing it.
Meta has integrated metadata and invisible watermarks into its AI-generated content to improve its detection capabilities. Furthermore, the company is developing tools to identify similar markers used by other companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock in their AI image generators.
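Meta has not published the technical details of its watermarking scheme, but the general idea behind an invisible watermark can be illustrated with a minimal least-significant-bit (LSB) sketch: a short marker is hidden in the lowest bits of raw pixel data, where it is imperceptible to viewers but recoverable by a detector. The function names and the marker are purely hypothetical; this is not Meta’s actual method.

```python
# Illustrative LSB watermark sketch -- a toy stand-in for the invisible
# watermarks described in the article, not Meta's real scheme.

def embed_watermark(pixels: bytes, marker: bytes) -> bytes:
    """Hide `marker` in the least-significant bits of `pixels`."""
    # Expand the marker into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in marker for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the marker")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, marker_len: int) -> bytes:
    """Read `marker_len` bytes back out of the pixel LSBs."""
    out = bytearray()
    for i in range(marker_len):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    image = bytes(range(256))            # stand-in for raw pixel bytes
    tagged = embed_watermark(image, b"AI")
    print(extract_watermark(tagged, 2))  # prints b'AI'
```

A scheme this simple is easy to strip (re-encoding the image destroys the low bits), which is exactly why the article notes that Meta is also researching detection that works when markers are absent or removed.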
Clegg affirmed Meta’s dedication to this endeavor, stating that labels indicating AI-generated content would be implemented across all languages in the coming months. “We’re taking this approach through the next year, during which several important elections are taking place worldwide,” he said.
While Meta’s focus currently centers on images, Clegg acknowledged that comparable markers do not yet exist for AI-generated audio and video. In the meantime, the company will let people disclose and add labels to such content when they post it. Additionally, Meta plans to apply more prominent labels to digitally created or altered media that “creates a particularly high risk of materially deceiving the public on a matter of importance.”
Moreover, Meta is exploring technological advancements to detect AI-generated content automatically, even when traditional markers are absent or removed. “This work is especially important as this is likely to become an increasingly adversarial space in the years ahead,” Clegg said. He further emphasized the collective responsibility of the industry and society to continually innovate and adapt to stay ahead of the threats posed by AI-generated content.