Open Letter on the AI Omnibus negotiations: Do not lower the level of protection for physical AI

Ahead of the final negotiations on the AI Omnibus, organizations from the fields of safety, testing and AI governance are warning against a weakening of the European AI regulatory framework: protection against AI-specific risks in machinery, medical devices and toys must not be scaled back.


Berlin, 5 May 2026 - Lawmakers in the European Union are on the verge of a very important decision. On Wednesday, the European Parliament, the Council and the Commission are set to decide jointly on the final form of the AI Omnibus.

AI may be the most powerful tool humanity has developed in centuries, perhaps in its entire history. The rules governing this potent technology are being set right now, as reflected in the enormous investments that companies, especially in the US, are pouring into the development and integration of AI. Precisely against the backdrop of these investments, we need clear, harmonised rules and legal certainty as soon as possible. Thinking and regulating in sectoral silos will not help the AI market in Europe.

The co-legislators must decide whether to abandon the horizontal approach to AI regulation in Europe, thereby encouraging legal fragmentation, creating new legal uncertainty and weakening protection against AI-specific risks in physical products (Physical AI).

Powerful forces are still seeking, with ever-increasing intensity, to exclude key product categories from the scope of the AI Regulation ('sector exit'), thereby effectively deregulating AI in products such as machinery, medical devices and toys for at least several years to come. The vague announcement of forthcoming sectoral regulation is nothing more than a fig leaf: it cannot hide the fact that it is currently entirely unclear whether, when and in what form safeguards for Physical AI will be introduced.

If entire product categories such as machinery, medical devices and toys are excluded from the AI Regulation, protection against AI-specific risks will be scaled back precisely in those areas where people come into particularly close contact with AI, with real physical consequences. This affects everyone, but especially vulnerable groups: children using AI-based toys, patients relying on AI-based medical devices, and workers handling AI-based machinery in industrial manufacturing. In the coming years, we will all increasingly encounter (humanoid) robotics, i.e. machinery, which is simply inconceivable without AI. If we leave AI risks in these areas unregulated, we risk one of the most powerful technologies in human history encroaching, without clear safeguards, into areas where it could directly endanger people's lives and livelihoods. Societal acceptance and the trust of affected parties are fundamental to a speedy uptake of these potent AI applications, and trust is unthinkable without clear rules and boundaries.

The AI-specific risks that the AI Act seeks to avert or mitigate through its targeted provisions are currently not regulated in any of the product areas in question – even though the opposite is frequently claimed. They would therefore still need to be incorporated into all these sector-specific legislative acts.

The AI Omnibus was introduced to incorporate specific simplifications and clarifications into the AI Act and to extend the deadlines for the application of its requirements. A legislative process designed to run from initiation to conclusion in just a few months cannot achieve more than this.

The AI Omnibus was never, and is not now, designed to undermine the AI Act or to fundamentally change the entire framework of AI regulation in Europe. Such fundamental decisions about the structure and level of protection set by European regulation must not be rushed through without proper consultation of a broad range of stakeholders. At present, however, political decisions are being taken hastily, leaving no room for the detailed work such a complex regulation requires. That is not the path we should take when it comes to safely integrating groundbreaking technological developments into the daily lives of 500 million people.

We therefore appeal once again to all policy-makers: do not give in to the increasing pressure to simply discard the AI regulatory framework that has been built up over several years, to leave Physical AI to its own devices, and, moreover, to create problems that cannot yet be fully anticipated within the short timeframe of such a process. Leave the horizontal approach of the AI Regulation for all sectoral products untouched and do not agree to alternative circumvention strategies.

Signatories

Download

Open Letter on the AI Omnibus negotiations "Do not lower the level of protection for physical AI – maintain a horizontal approach – reject circumvention strategies"