The European Union has decided to slow down, at least in part. With a package of amendments dubbed the Digital Omnibus, the Commission has proposed to postpone until December 2027 some of the strictest rules of the AI Act, those covering uses classified as high-risk: personnel selection, credit assessment, healthcare, biometrics, and critical infrastructure. In parallel, aspects of the GDPR and the ePrivacy rules are being eased to facilitate the use of data in model training.
What has been postponed
In the text presented in Brussels, the Commission proposes to delay by more than a year the entry into force of the toughest rules for the use of AI in contexts such as biometric recognition, traffic management, utilities, exams and hiring, healthcare services, credit checks, and law enforcement. These areas were originally due to face stringent obligations as early as 2026; the deadline now moves to the end of 2027.
The official rationale is to reduce bureaucracy, avoid stifling innovation, and increase European competitiveness. The Commission insists that simplification does not mean deregulation, but the political signal is clear: after months of pressure from big tech and some capitals, the line has softened.
More leeway on data use for training
Among the proposed changes are also adjustments to the GDPR and cookie legislation. In particular, some of the wording opens up the possibility for companies like Google, Meta, OpenAI, and other major operators to use European personal data more broadly for training models, albeit within a revised regulatory framework.
The logic is to prevent Europe from falling behind in the race for advanced models due to a lack of data, but digital rights associations are already calling it a gift to big tech and a retreat on the privacy front.
Impact for those developing and using AI in Europe
For European startups, agencies, and companies working with AI, the postponement is a double-edged sword. On one hand, it grants more time to adapt processes, documentation, risk assessments, and auditing logic. On the other hand, it pushes regulatory certainty further into the future: for a few more years we will live in a gray area, with evolving rules and interpretive margins that do not help those who want to invest seriously.
Those building systems for sensitive areas cannot afford to wait until 2027: they must already behave today as if the strict AI Act were fully operational. This means dataset traceability, model explainability where possible, bias control, procedures for human intervention, and clear complaint mechanisms.
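The checklist above can be made concrete in code. As a minimal sketch (all class names and fields here are hypothetical illustrations, not structures mandated by the AI Act), a system could pair each training dataset with a provenance record and log every automated decision with a human-override hook:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DatasetRecord:
    """Provenance entry for a training dataset: where it came from,
    what legal basis covers it, and when it was last reviewed."""
    name: str
    source: str
    legal_basis: str            # e.g. "consent", "legitimate interest"
    contains_personal_data: bool
    last_reviewed: str

@dataclass
class DecisionLog:
    """Audit entry for an automated decision, with the kind of
    human-intervention hook the AI Act expects for high-risk uses."""
    model_version: str
    input_summary: str
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_reviewed: bool = False
    override_outcome: Optional[str] = None

    def apply_human_override(self, new_outcome: str) -> None:
        # A reviewer can replace the automated outcome;
        # both the original and the override are kept for auditing.
        self.human_reviewed = True
        self.override_outcome = new_outcome

# Example: a hiring-screening model, a high-risk use under the AI Act.
dataset = DatasetRecord(
    name="cv_corpus_2025",
    source="internal ATS exports",
    legal_basis="consent",
    contains_personal_data=True,
    last_reviewed="2025-11-01",
)

decision = DecisionLog(
    model_version="screening-v3",
    input_summary="candidate profile, role: backend engineer",
    outcome="reject",
)
decision.apply_human_override("advance to interview")

print(decision.human_reviewed)    # True
print(decision.override_outcome)  # advance to interview
```

The point of the sketch is the shape, not the fields: every prediction keeps its model version and timestamp, and a human decision never silently overwrites the automated one.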
How digital professionals should proceed
For entities designing platforms, integrations, and digital products, even with AI components, the healthiest strategy is simple: aim for standards higher than the minimum legal requirement. Do not chase the latest regulatory concession; build systems that would be acceptable even in a stricter scenario.
This means carefully choosing where to use AI, what data feeds the models, how logs are managed, and what consents are requested from users. The postponement of the high-risk rules is a window of time, and those who use it to prepare will reach 2027 calmly. Those who use it to pretend nothing is happening will not.
Do you need to apply this strategy?
Run the contact protocol to start a project with us.
> INIZIA_PROGETTO