In a major shift, the European Commission has proposed delaying tougher regulations on so‑called “high-risk” artificial intelligence (AI) systems. Originally, those rules were set to take effect in August 2026 — but under the new plan, they may not fully apply until December 2027.
What Exactly Is Being Delayed?
These postponed provisions are part of the EU’s broader Digital Omnibus package, a sweeping reform that aims to simplify several key technology-related regulations. Among the laws being re-examined are:
- The AI Act, which regulates AI based on risk levels.
- The General Data Protection Regulation (GDPR), Europe's landmark privacy law.
- The e‑Privacy Directive, governing electronic communications.
- The Data Act, which covers data-sharing rules.
Under the proposed changes, AI systems used in especially sensitive areas — like biometric identification, healthcare, credit scoring, law enforcement, job applications, and even road traffic — would face delayed enforcement.
Why the Delay?
According to EU officials, the postponement isn’t about weakening protections. Rather, it’s about giving businesses and governments more time to prepare.
Key technical tools — such as standards, guidelines, and specifications — are not yet fully ready. The Commission argues that enforcing the high-risk rules without these support systems in place would be premature and burdensome.
Henna Virkkunen, the EU's Executive Vice President for Digital Policy, stressed that delaying the rules is about capacity building, not backpedaling on regulation.
What Exactly Will Change or Be Eased?
Here are some of the major changes the Digital Omnibus proposes:
- Grace period for penalties.
- Registration exemptions: AI systems used for narrow or procedural tasks (for example, simple internal workflows) might be exempted from registering in a dedicated EU database for high-risk systems (Reuters). This would reduce the administrative burden on companies whose AI tools don't carry heavy societal risk.
- Labeling of AI-generated content: the requirement for providers to clearly mark their output as AI-generated (a safeguard against deepfakes and misinformation) would be phased in gradually rather than enforced immediately (Reuters).
- Easier data use for AI training: proposed GDPR changes could allow major tech companies, such as Google, Meta, and OpenAI, to more freely use anonymized European user data to train their AI models. This is a significant shift, because personal data use is currently heavily restricted under GDPR.
- Simplified cookie consent: the Omnibus could also streamline how websites obtain cookie consent from users. Rather than multiple pop-ups and repeated permission requests, a one-click consent mechanism (valid for six months) is on the table.
Support, But Also Strong Criticism
The proposal has stirred a heated debate across Europe:
- Supporters, including business groups and some EU member states such as Germany and France, argue the delay will boost innovation and prevent overregulation that hurts competitiveness.
- They also stress that simplification doesn't mean deregulation: according to EU officials, the reforms are meant to make compliance more manageable, not to weaken protections.
- The proposed changes could save companies billions in compliance costs and potentially help European startups scale faster.
On the flip side, civil society and privacy advocates are deeply concerned:
- Over 50 organizations, including Access Now and the Centre for Democracy and Technology Europe, have warned that the delay could undermine accountability mechanisms built into the AI Act.
- Groups such as European Digital Rights (EDRi) argue the changes amount to a rollback of digital protections.
- Critics fear that easing GDPR restrictions could lead to unchecked use of personal data, eroding citizens' privacy rights.
- There is also concern that the "simplification" narrative is cover for concessions to big tech, giving dominant companies more room at the expense of smaller players and individual rights.
Political Stakes & Power Dynamics
The timing and framing of the Digital Omnibus suggest that Brussels is trying to strike a delicate balance:
- Innovation vs. regulation: the EU wants to stay internationally competitive on AI, especially against the U.S. and China, while upholding its values of safety, transparency, and fundamental rights.
- Big Tech lobbying: companies like Google, Meta, and OpenAI have pushed hard for more flexibility, and their voices are clearly being heard.
- Transatlantic pressure: the U.S. government has reportedly raised concerns that the AI Act could disadvantage American tech firms, and some believe this political pressure has helped shape the Omnibus proposals.
- Internal EU resistance: not all EU policymakers are on board. Some MEPs and civil rights groups warn that reopening recently adopted rules erodes trust and legal certainty.
What Are the Risks?
While delaying stricter rules may ease burdens on companies, it also carries real risks:
- Weakened safeguards: without full enforcement, powerful AI models could operate in sensitive areas (such as credit scoring or healthcare) without adequate oversight for a longer period.
- Privacy concerns: by loosening GDPR constraints, user data could be used more broadly for AI development, potentially infringing on fundamental rights.
- Regulatory uncertainty: companies and regulators alike may struggle with the shifting timeline, making long-term planning difficult.
- Undermining trust: citizens may view the move as a compromise of their protections, damaging trust in both AI technology and the EU's ability to regulate it effectively.
Why the EU Is Betting on This
Here’s what the EU hopes to gain by pushing back the AI rules:
- Time to build infrastructure: by delaying, Brussels gives itself and member states more time to develop technical standards, certification bodies, and other enforcement mechanisms.
- Supporting innovation: a more gradual rollout may reduce early resistance and encourage tech companies to scale responsibly within Europe.
- Reducing compliance costs: smaller companies, in particular, may benefit from lower administrative burdens and more predictable regulatory demands.
- Political appeasement: the move helps defuse tension with both big tech lobbyists and transatlantic partners, while still signaling that the EU takes AI risk seriously.
What’s Next
- The Digital Omnibus proposal must still win approval from EU member states and the European Parliament.
- If adopted, new timelines and transitional phases would replace some of the original AI Act deadlines.
- Meanwhile, civil society and privacy advocates are mobilizing to press lawmakers to maintain strong protections.
- EU institutions will also work on finalizing the technical standards and guidelines needed to make the AI Act workable.
Why This Matters
This delay is more than a mere procedural shift — it highlights a turning point in Europe’s approach to AI. On one hand, the EU is signaling that it wants to be a global leader in AI innovation. On the other, it’s trying to preserve its core regulatory values around privacy, safety, and human rights.
How regulators navigate this tension — between speed and caution, between competition and protection — will likely shape not only Europe’s AI ecosystem, but global norms for the technology as well.
