July 18, 2025

The European Union has made history with its General-Purpose AI (GPAI) Code of Practice, published on July 10, 2025—the world’s first comprehensive compliance framework for AI models under the EU AI Act. While Meta Platforms has publicly declined to sign this voluntary agreement, we should applaud the EU’s courageous attempt to establish global AI governance standards.

This isn’t just regulatory theater—it’s a necessary step toward responsible AI development that prioritizes societal benefit over corporate convenience.

A Framework Built on Real Concerns

The EU’s Code of Practice addresses genuine, urgent concerns that the tech industry has largely ignored or deflected. Built on three core pillars—transparency, copyright compliance, and safety—the framework tackles issues that affect millions of users and creators worldwide.

The transparency requirements mandate detailed documentation of AI models’ capabilities and limitations, giving users and researchers actual insight into systems that increasingly shape our digital lives. Copyright compliance rules protect the intellectual property of creators, writers, and artists whose work has been scraped without permission to train AI models. And safety provisions require additional risk assessments for the most capable AI models that could have systemic impacts.

These aren’t abstract regulatory concerns—they’re fundamental questions about how AI systems should operate in democratic societies.

[Image: An introduction to the Code of Practice for the AI Act, presented ahead of a plenary session.]

Why Industry Pushback Was Predictable

Meta’s refusal to sign represents just the tip of a much larger iceberg of corporate resistance. The company’s stance aligns with a coordinated industry effort to delay and weaken the EU’s AI regulations.

A group of over 45 European companies—including industrial giants like Airbus, ASML, Lufthansa, Mercedes-Benz, Siemens Energy, and notably AI company Mistral—has called for a two-year delay on the AI Act’s most stringent requirements. Their concerns mirror Meta’s arguments: implementation uncertainty, regulatory overload, and fears about European competitiveness.

Meanwhile, other tech giants, including Google’s parent Alphabet and potentially Microsoft, have expressed similar concerns about the AI Act and its Code of Practice. The Computer & Communications Industry Association (CCIA) Europe, representing Alphabet, Meta, and Apple, has urged the EU to pause implementation entirely.

This coordinated resistance centers on three main arguments:

Implementation Uncertainty: Companies claim the lack of developed standards makes compliance difficult, though this ignores their own responsibility to help develop workable standards.

Regulatory Complexity: The perceived overlap with other regulations is presented as insurmountable, rather than a challenge requiring coordination and investment.

Competitiveness Fears: Perhaps most tellingly, companies worry that accountability measures could hinder Europe’s global AI standing—essentially arguing that regulation inherently weakens innovation.

But these arguments reveal a troubling priority: protecting corporate interests over public accountability, wrapped in the language of European competitiveness.

Why the EU Deserves Our Applause

The EU’s bold approach represents exactly the kind of leadership democratic societies need in the AI age. Here’s why this framework deserves support:

Putting People First: Unlike voluntary industry self-regulation, the Code of Practice places the public interest ahead of corporate convenience. It recognizes that AI systems aren’t just products—they’re infrastructure that shapes how we work, learn, and communicate.

Protecting Creators: The copyright provisions acknowledge that AI companies have built trillion-dollar valuations on the backs of creators whose work was used without permission or compensation. This framework begins to address that fundamental injustice.

Demanding Accountability: The transparency requirements aren’t about stealing trade secrets—they’re about ensuring that systems affecting millions of lives operate with appropriate oversight and understanding.

Leading by Example: While other jurisdictions debate and delay, the EU has actually created implementable standards that other democracies can adapt and improve.

The Real Cost of Industry Resistance

The scale of corporate resistance reveals something significant: Meta isn’t acting alone, but as part of a broader effort to maintain the regulatory status quo. When 45+ major European companies, multiple tech giants, and influential lobbying groups all push for delays, it’s not coincidence—it’s strategy.

This coordinated resistance carries several troubling implications:

Weaponizing European Competitiveness: The most insidious aspect of this pushback is how it frames regulatory accountability as inherently anti-European. Companies like Airbus and Siemens Energy—which should understand the value of safety standards—are essentially arguing that Europe can only compete by lowering its standards.

The Perpetual “Not Ready” Excuse: The call for two-year delays based on “implementation uncertainty” ignores a fundamental truth: standards develop through implementation, not endless preparation. Companies that have spent years developing AI systems suddenly claim they need more time to understand how to make them accountable.

Selective Compliance: It’s telling that while companies like OpenAI and Mistral have committed to signing the Code of Practice, tech giants with the most resources to implement compliance measures are the ones pushing hardest for delays.

The European Commission’s rejection of blanket delays, while acknowledging that targeted delays might be considered, represents exactly the right approach—serious engagement with legitimate implementation challenges without surrendering to corporate pressure campaigns.

A Necessary Reality Check

The EU’s framework isn’t perfect—no pioneering regulation ever is. But it represents a crucial first step toward ensuring that AI development serves democratic values rather than just maximizing corporate profits.

The voluntary nature of the current Code provides space for refinement based on practical experience. Companies that engage constructively with the framework will help shape more effective future regulations. Those that simply refuse to participate are abandoning their opportunity to influence the process.

Meta’s stance is particularly disappointing given the company’s stated commitments to responsible AI development. But it’s also revealing when viewed alongside the broader industry resistance. The fact that OpenAI and Mistral have committed to signing the Code of Practice while Meta refuses suggests this isn’t about technical feasibility—it’s about corporate philosophy.

The divide between companies willing to engage constructively with the framework and those demanding indefinite delays reveals the true stakes of this debate. Those engaging recognize that sustainable AI development requires public trust and democratic oversight. Those resisting seem to believe that any accountability measures threaten their business models.

The Stakes Are Too High for Business as Usual

We’re at a critical juncture in AI development. The systems being built today will shape society for decades to come. The EU recognizes this moment requires active governance, not passive hope that market forces will somehow align with public interest.

The Code of Practice tackles three fundamental questions that can’t be left to corporate discretion:

How much should the public know about AI systems that affect their lives? The EU says: enough to make informed decisions and maintain democratic oversight.

Should AI companies be able to use copyrighted material without permission? The EU says: no, creators deserve protection and compensation.

Who should assess the risks of powerful AI systems? The EU says: not just the companies building them.

These are the right questions, and the EU deserves credit for providing concrete answers rather than endless consultations and voluntary guidelines.

The Path Forward

The EU’s AI Code of Practice isn’t the end of the conversation—it’s the beginning. The framework will undoubtedly need refinement based on implementation experience. But that refinement should come through constructive engagement, not wholesale rejection.

Companies like Meta have a choice: they can work within this framework to help shape effective AI governance, or they can cling to a regulatory vacuum that increasingly serves no one’s interests—including their own.

The EU has shown the world that comprehensive AI governance is both necessary and possible. Other jurisdictions should follow this lead, adapting the framework to their own contexts while maintaining its core commitment to public accountability.

A Model for Democratic AI Governance

The broader significance of the EU’s Code of Practice extends beyond its specific provisions. It demonstrates that democratic societies can assert control over emerging technologies rather than simply accepting whatever Silicon Valley produces.

This framework represents a fundamental shift from the “move fast and break things” mentality that has dominated tech development. Instead, it insists that powerful technologies should be developed with appropriate safeguards and accountability from the start.

The EU’s approach isn’t anti-innovation—it’s pro-responsibility. It recognizes that the most important innovations are those that serve society’s needs, not just shareholders’ interests.

Why This Matters Now

As AI systems become more powerful and pervasive, the window for effective governance is narrowing. The EU’s willingness to act decisively, despite industry resistance, provides a model for other democracies grappling with similar challenges.

Meta’s pushback and similar resistance from other tech companies shouldn’t discourage regulators—it should reinforce the necessity of this work. When companies that profit from the status quo resist accountability measures, it’s usually a sign that those measures are needed.

The EU’s AI Code of Practice deserves our applause not because it’s perfect, but because it’s a serious attempt to ensure that AI development serves democratic values. In an era where corporate power increasingly rivals government authority, this kind of regulatory leadership is exactly what citizens should demand.

The author is a technology analyst covering AI governance and regulation.

