Digital Policy

EU AI Regulation Faces Deregulatory Shift Amid Big Tech Influence

The European Commission’s recent Digital Omnibus Regulatory Package reveals a deregulatory turn in EU AI policy driven in part by Big Tech lobbying and pervasive AI hype. Proposed amendments to the GDPR and AI Act would ease data protection rules and delay mandatory standards, potentially undermining regulatory safeguards amid growing industry influence.

Big Tech’s Growing Influence in EU AI Regulation

Lobbying by major technology companies has intensified in the EU, extending beyond traditional advocacy to shaping the production of regulatory knowledge itself. Large tech firms possess extensive research capabilities in computer science, ethics, and legal aspects of AI, giving them a knowledge advantage over academic institutions and smaller companies. This phenomenon, described as “epistemic capture,” allows Big Tech to dominate AI policy debates, promoting a deregulatory “innovation narrative” that frames AI as inherently positive and regulation as an obstacle.

The knowledge asymmetry creates dependency for policymakers, who increasingly rely on industry expertise to envision the future of AI. Although some corporate research appears to engage critically with AI’s societal implications, cases such as the departure of Google researcher Timnit Gebru illustrate the tension between profit motives and the public interest, contributing to regulatory approaches that favor conservative or deregulatory stances.

Key Changes in the Omnibus Proposal

Among the most notable provisions in the November 2025 Omnibus proposal is a significant relaxation of GDPR restrictions on processing sensitive personal data for AI training. The proposed Article 9(2)(k) introduces an exception allowing the use of special data categories without prior proportionality assessments, placing the responsibility on data controllers to implement undefined “appropriate” safeguards. This ambiguity is expected to disadvantage small and medium-sized enterprises, while enabling dominant tech firms to process sensitive data at scale with less oversight.

The proposal’s vague definitions, such as the broad term “training an AI system,” further complicate enforcement and risk privileging AI technologies over others without a clear risk-based approach. Compounding this is a delay in applying mandatory compliance standards for high-risk AI systems until 2027 and 2028, a consequence of private standardization bodies failing to produce the required standards by the Commission’s deadline.

Regulatory Challenges and Democratic Concerns

This postponement affords the AI industry additional time to adapt while relieving it from immediate compliance obligations, raising concerns about the democratic legitimacy of the regulatory process. The reliance on private standardization organizations to establish safeguards has effectively transferred significant regulatory influence to industry actors, undermining legal certainty and delaying public oversight mechanisms.

Why it matters

The deregulatory shift in EU AI policy coincides with increased political urgency and geopolitical competition in AI development, but risks weakening data protection and reducing accountability for powerful tech companies. The trend complicates efforts to align AI advancement with societal values such as privacy and fundamental rights. With regulatory enforcement delayed and safeguards diluted, the EU faces challenges in maintaining a balanced, principled approach to AI governance—one that upholds legal certainty and public trust while fostering innovation.


Sources

This article is based on reporting and publicly available information from the following source:

About the author

Giorgio Kajaia

Giorgio Kajaia is a writer at Goka World News covering world news, U.S. news, politics, business, climate, science, technology, health, security, and public-interest stories. He focuses on clear, factual, and reader-first reporting based on credible reporting, official statements, publicly available information, and relevant source material.
