A German consumer advocacy group has failed in its attempt to block Meta Platforms from using public posts on Facebook and Instagram to train its artificial intelligence models. The Cologne Regional Court rejected the injunction request filed by Verbraucherzentrale NRW, a state-backed consumer protection agency, dealing a blow to critics of Meta’s data practices in the European Union.
The consumer group had argued that Meta’s plan to collect and use publicly visible content from its social media platforms, without obtaining users’ explicit prior consent, posed significant privacy risks. It sought an emergency injunction to stop Meta from proceeding with the initiative, citing concerns over data protection and user rights under European regulations.
However, the court ruled in Meta’s favor, allowing the tech giant to proceed with its plan. The court did not publish a detailed explanation of its reasoning, but the decision suggests that the judges found Meta’s current framework, which notifies users and offers an opt-out mechanism, sufficient under EU legal standards.
Meta announced last month that it would begin training its AI systems in the EU using public posts made by adult users on Facebook and Instagram, along with their interactions with AI tools. According to the company, this approach will help improve the responsiveness and accuracy of its artificial intelligence products, including chatbots and content generation tools.
In an effort to address privacy concerns, Meta emphasized that EU users would be notified of the data usage plan and given the right to opt out. The company maintains that only public content will be used and that private messages and non-public posts are excluded from the AI training process. This opt-out approach, Meta argues, aligns with the General Data Protection Regulation (GDPR), the EU’s comprehensive privacy law.
Despite the court's ruling, the issue is far from settled. Verbraucherzentrale NRW and other privacy advocates are expected to continue pushing for stricter oversight and legal clarification around the use of personal data for AI development. As artificial intelligence technology evolves rapidly, tensions between innovation and privacy rights are likely to intensify, particularly in regions like Europe that maintain strong regulatory frameworks.