Effective enforcement of the AI Act requires a sufficient number of competent conformity assessment bodies. Existing designations under sectoral product legislation should therefore serve as the basis and be extended in a targeted way to cover AI-related competences. Additional bureaucracy, parallel testing structures and technology-specific designation codes should be avoided, as they would undermine the application- and risk-based approach of the AI Act.
Targeted relief for small and medium-sized enterprises and small mid-cap companies can help reduce administrative burdens, but must not result in a loss of transparency or verifiability. Simplifications should be limited to formal aspects and must ensure that notified bodies remain able to fully assess the conformity of AI systems. The safety of AI products must not depend on the size of the company.
Highly capable general-purpose AI (GPAI) models also need to be integrated into the existing European conformity assessment framework in a legally sound manner. Rather than creating new parallel testing systems, legislators should draw on the established quality infrastructure of independent notified bodies. AI regulatory sandboxes can foster innovation and regulatory learning, but they are no substitute for formal conformity assessment procedures.
Transparency and competence remain key prerequisites for trustworthy AI. TÜV-Verband therefore supports maintaining the registration obligation for so-called opt-out AI systems, i.e. systems whose providers invoke the exemption from the high-risk classification under Article 6(3) of the AI Act, and calls for a systematic strengthening of AI skills and competences in companies, public authorities and organisations.