Europe’s AI Retreat: Cutting “Red Tape” or Cutting Corners?

May 7, 2026

On Thursday, EU governments and Parliament negotiators struck a provisional deal to water down key parts of the bloc’s landmark AI Act, delaying enforcement for some of the riskiest systems and carving out new exemptions in the name of competitiveness. The agreement has been welcomed by business lobbies and some governments—but it should worry anyone who believed Europe meant what it said about fundamental rights in the digital age.

The new compromise is a response to months of pressure from industry and a broader Commission “Digital Omnibus” push to simplify digital rules after complaints of overlapping obligations and regulatory “red tape.” The question is not whether implementation needed fine‑tuning; it did. The question is whether the EU is now quietly dismantling the very safeguards it advertised as a global model.

The headline change is timing. Under the original schedule, stringent obligations for “high‑risk” AI systems—such as biometric identification, critical infrastructure and law‑enforcement tools—were due to bite from August this year. Thursday’s deal pushes that deadline back to December 2, 2027, a reprieve of more than a year for some of the most consequential uses of AI.

Negotiators also agreed to exclude “machinery” from the AI Act’s scope on the grounds that it is already covered by sectoral safety rules, a longstanding demand from industrial lobbies. At the same time, they added a ban on AI practices that generate non‑consensual sexually explicit images, a response to a wave of intimate deepfakes produced by systems like xAI’s Grok.

On paper, this mix of delay, deregulation and new prohibitions looks like a balanced trade: less burden for “over‑regulated” sectors, stronger action on some abuses. But the balance tilts heavily towards cutting obligations, not building capacity to enforce the ones that remain.

One cannot understand this deal without acknowledging the force of the campaign waged by major European companies. In recent weeks, the chief executives of ASML, Airbus, Ericsson, Mistral AI, Nokia, SAP and Siemens published a joint op‑ed across Europe’s business press warning that Europe is still “writing rules” while the United States and China are busy “building products.” Their core demand: radically simpler AI rules and a pause on burdens that, they argue, threaten Europe’s competitiveness.

That intervention followed earlier letters from more than 45 European companies, including giants like Mercedes‑Benz and Siemens Energy, urging the Commission to “stop the clock” on key AI Act provisions for high‑risk and general‑purpose AI systems. They complained of “unclear, overlapping and increasingly complex” requirements, invoking fears of lost investment and an exodus of AI talent to more permissive jurisdictions.

The Commission’s Digital Omnibus initiative—of which Thursday’s compromise is a part—explicitly echoes this narrative, promising to “cut red tape,” close the innovation gap and help EU firms keep pace with U.S. and Asian rivals. Industry did not merely lobby; it largely framed the terms of the debate.

Yet not all lawmakers are celebrating. The trilogue process has already seen tense standoffs, with some MEPs warning that blanket exemptions for sectors covered by other safety rules would punch holes in the AI Act’s risk‑based logic. Even Thursday’s watered‑down deal is described by critics as “Europe caving to Big Tech” rather than a careful recalibration of enforcement.

Civil society groups are blunter. Amnesty International has warned that the “simplification” drive is in reality a deregulatory push that will roll back hard‑won protections in both the AI Act and the GDPR, under the banner of competitiveness. It highlights how omnibus amendments, rushed through with limited consultation, are weakening transparency obligations—such as requirements for companies to publish risk assessments of high‑risk AI systems—just as those obligations were about to become enforceable.

Legal scholars have raised a different concern: predictability. Redrafting core provisions months before they apply, via complex omnibus texts that modify dozens of articles at once, “undermines stability and legitimate expectations” and increases legal uncertainty for everyone—from startups to regulators to courts. Europe promised a clear, harmonized framework for AI governance; instead, it is nudging businesses and citizens into a fog.

There is a legitimate problem here: the EU has layered a dense thicket of digital rules—on AI, data, platforms and cybersecurity—onto companies that often lack the resources to navigate them. Implementation timelines and overlaps do need rationalization. But if the answer is simply to delay high‑risk rules and carve out powerful sectors, Europe is not cutting red tape; it is cutting corners.

A smarter approach would focus on three things. First, targeted simplification: streamline documentation and reporting where rules genuinely duplicate each other; do not gut core safeguards like risk assessments and transparency registers that make AI deployments visible and contestable. Second, enforcement capacity: fund data protection and market surveillance authorities so they can provide guidance, coordinate across borders and penalize bad actors, rather than relying on self‑assessment in the dark. Third, industrial policy that complements, not replaces, regulation: invest in compute, skills and open infrastructure so European firms can innovate within rights‑respecting guardrails, instead of insisting the guardrails are the problem.

Europe’s “Brussels effect”—its ability to set global standards by regulating its own market—was never just about strictness; it was about credibility. If the AI Act becomes a moving target, rewritten under pressure before it is even fully in force, that credibility erodes. Businesses will face uncertainty instead of clarity; citizens will face systems that grow more powerful while the rules designed to protect them recede into the future.

The EU cannot out‑Silicon‑Valley Silicon Valley, nor should it try. Its comparative advantage lies in embedding technology in a social contract that takes rights and safety seriously. Thursday’s deal risks sending the opposite signal: that when industry shouts loudly enough, Europe blinks. Lawmakers who are dissatisfied with this direction are right to push back—and they should use the remaining legislative steps to restore ambition, not just extend deadlines.

Elias Badeaux

Elias is a student of International Development Studies at the University of Clermont Auvergne (UCA) in France. His interests are Global Affairs and Sustainable Development, with a focus on European Affairs.