Meta, the parent company of Facebook, Instagram, and Threads, has announced plans for technology that can detect and label images produced by rival artificial intelligence (AI) systems. The move is intended to curb the spread of AI-generated fakes on its platforms and to push the wider industry to tackle the problem.
Meta already labels AI-generated images created with its own systems, and it presents the new technology as a significant step beyond that. In a statement written by senior executive Sir Nick Clegg, the company said it intends to expand its labelling of AI-generated content in the coming months.
AI experts, however, are skeptical that such measures will work. Professor Soheil Feizi of the University of Maryland's Reliable AI Lab warns that detectors of this kind can be easily evaded through minor alterations to an image, and that tuning them to be sensitive enough to catch such modified images would produce a high rate of false positives, flagging genuine images as AI-generated.
The technology will not, however, extend to audio and video, two of the media most associated with AI-generated manipulation. Instead, Meta will ask users to label their own audio and video posts, with possible penalties for those who fail to do so.
Sir Nick Clegg also conceded that reliably identifying text generated by tools such as ChatGPT is not feasible. The admission follows criticism from Meta's Oversight Board, which found the company's policy on manipulated media incoherent and insufficiently focused on the evolving landscape of synthetic content.
In response to the Oversight Board's critique, Meta has pledged to update the policy, particularly as it applies to digitally altered political advertisements, and says it remains committed to transparency and integrity in online content as AI-driven manipulation grows.