Improving EU regulation: High-risk artificial intelligence should undergo fundamental safety inspections

The TÜV Association is calling for improvements to the EU Commission's draft regulation on artificial intelligence: a clear definition and derivation of the risks of AI applications is still missing. A new white paper from the TÜV AI Lab summarises the association's proposals for risk assessment.


3 September 2021 – In a recent statement, the TÜV Association has called for concrete improvements to the EU's draft regulation on artificial intelligence. "The European Commission is doing pioneering work with its regulatory proposal, but falls short of its own ambition to create an 'ecosystem of trust' for artificial intelligence in Europe," says Dr Joachim Bühler, CEO of the TÜV Association. "When regulating artificial intelligence, people's health and the protection of their fundamental rights must come first." The right approach is to regulate applications and products that use artificial intelligence according to the risk they pose. For example, lower security requirements should apply to an intelligent spam filter than to AI systems that evaluate X-ray images, decide on the granting of loans or control vehicles. "Improvements are needed in the assignment of AI systems to the four envisaged risk classes, from minimal to unacceptable, and in the associated requirements for security and its verification," says Bühler. In particular, the TÜV Association demands that mandatory independent third-party audits be required for all high-risk AI applications. Certain minimum requirements should also apply to other AI applications.

The EU Commission had presented its draft regulation in April, which is now being discussed among the member states and in the European Parliament. The aim is to create the world's first legal framework for artificial intelligence. In the view of the TÜV Association, the following improvements are necessary:

  • Deriving risk classes in a comprehensible way: The EU Commission's proposal lacks a clear definition and derivation of the risk classes. In particular, there are no comprehensible criteria for determining which AI systems pose a particularly high risk. In the view of the TÜV Association, AI systems always pose a high risk if they can endanger people's lives or fundamental rights such as privacy or equality.
  • Providing independent third-party testing for all high-risk AI systems: All AI products and applications classified as particularly high-risk for humans should be subject to mandatory testing by an independent body. The risk-based approach is a central cornerstone of European product regulation and should also be consistently implemented for AI systems.
  • Introducing risk-adequate classification for high-risk AI systems: The draft regulation requires independent testing of high-risk AI systems almost exclusively for products that are already subject to third-party inspections, for example medical devices or lifts. However, integrating AI into all kinds of products and applications can significantly increase their risks. The legislator should therefore make the third-party testing obligation depend solely on the risks posed by the respective AI systems. Consequently, already regulated product lines must also undergo a risk-specific AI reassessment.
  • Supplementing the list of high-risk AI systems: The EU Commission's option to subsequently expand the list of high-risk AI systems (Annex III) should not be formally limited to certain application areas. The sole criterion should be whether the AI system poses significant risks to important legal interests protected by fundamental rights. If people's lives or, for example, their privacy are at risk, the AI system must be classified as high-risk.
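The risk-based approach described above can be sketched in code. This is a purely illustrative, hypothetical mapping: the tier names follow the four classes in the draft regulation (minimal to unacceptable), but the example use cases and their assignments are taken loosely from the article and do not reproduce the legal criteria of Annex III.

```python
# Illustrative sketch of a risk-based classification, NOT the legal
# criteria of the draft EU AI Act. Tier order runs from lowest to
# highest risk, as in the four envisaged risk classes.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

# Hypothetical mapping of example use cases to risk tiers, based on
# the examples mentioned in the article (spam filters vs. X-ray
# evaluation, credit decisions, vehicle control).
USE_CASE_RISK = {
    "spam_filter": "minimal",
    "xray_image_evaluation": "high",
    "credit_scoring": "high",
    "vehicle_control": "high",
}

def requires_third_party_audit(use_case: str) -> bool:
    """Return True if, under the TÜV Association's proposal, the use
    case would need a mandatory independent third-party audit, i.e.
    it is classified as high risk or above. Unknown use cases default
    to 'minimal' here purely for illustration."""
    tier = USE_CASE_RISK.get(use_case, "minimal")
    return RISK_TIERS.index(tier) >= RISK_TIERS.index("high")
```

Under this toy model, `requires_third_party_audit("xray_image_evaluation")` is `True`, while an intelligent spam filter would face only minimal requirements.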


Furthermore, the TÜV AI Lab has presented a white paper with a "Proposal for the risk classification of AI systems". "In addition to risks to life, products and applications with artificial intelligence also concern the protection of people's fundamental rights and adherence to ethical principles that are considered socially desirable," says Dr Dirk Schlesinger, head of the TÜV AI Lab. The white paper therefore discusses systematic approaches for adequately considering fundamental rights and ethical principles in the risk assessment of AI systems. In the TÜV AI Lab, AI experts from the TÜV companies jointly develop new testing methods for AI systems and accompany the legal regulation of the technology with practical proposals.