A series of significant events has recently unfolded at Meta. Facing strong opposition from data protection advocates and activists, Meta delayed the launch of its new AI software in Europe just days before its scheduled release. The Irish Data Protection Authority demanded that Meta halt the training of its AI models on publicly accessible posts and images from Facebook and Instagram. Previously, Meta had informed users via email that they needed to actively opt out if they did not want their posts included in AI training. This practice drew intense criticism and ultimately led to the postponement of the rollout. Data protection experts from noyb and the North Rhine-Westphalia Consumer Advice Center also criticized the “misleading and complicated” opt-out procedure.
Legal Actions and Consumer Advocacy
Earlier this year, a coalition of various consumer organizations filed a lawsuit against Meta. The European Consumer Organisation (BEUC) condemned Meta’s “mass” and “unlawful” data processing practices. Since November 2023, Meta has required Facebook and Instagram users to either consent to data processing for advertising purposes or pay to avoid advertisements. BEUC criticized this approach, highlighting misleading information and high fees for ad-free services. Additionally, the organization filed a second complaint against the extensive processing of personal data, arguing that Meta failed to comply with GDPR principles such as data minimization and purpose limitation. BEUC also accused Meta of exploiting its monopoly position to force consent, noting the lack of competition with similarly large user bases.
Meta's Response and AI Ambitions
Regulatory Developments in California
As Meta grapples with European data protection issues, California is preparing new regulations for AI technologies. A bill passed by the state Senate aims to prevent the development of AI models with “dangerous capabilities.” The legislation mandates monthly safety reports and introduces a “kill switch” mechanism to allow the immediate shutdown of AI models in critical situations. A humorous job listing for a “kill switch engineer” at OpenAI previously circulated on Reddit, underscoring the serious implications of these new requirements.
However, the motivation behind this initiative is serious: open-source AI models, which democratize AI innovation, are particularly affected. Developers could be held liable if their freely accessible source code were manipulated for malicious purposes. Following criticism from the tech industry, a compromise was proposed: open-source developers would not be liable for heavily modified models, and the “kill switch” mechanism would not apply to them for now. Additionally, the law would only apply to models whose development costs exceed $100 million.
The Global Importance of AI Transparency and Ethics
These developments underscore the importance of transparency and ethical responsibility in AI development. While Meta navigates European data protection standards and the pending regional implementation of the AI Act, California is forging its own path with stringent safety requirements that may unintentionally hinder the open-source community.
Both cases illustrate the growing global significance of AI regulation. Companies must address these challenges to build and maintain user trust. As a European AI company, we recognize that we cannot match the infrastructure and resources of the large American platforms. Our strength, however, lies in our European values, our adherence to the GDPR, and our commitment to keeping users’ data private — including on-premise solutions that do not rely on the cloud.
For further insights, we recommend reading Svenja Bergt’s commentary on the Meta case in TAZ (DE).