As a key technology, Artificial Intelligence (AI), especially in the form of deep neural networks, is already omnipresent in many digitization applications, including security- and safety-relevant applications in domains such as biometrics, healthcare and automotive. Despite its undisputed benefits, the use of AI also entails qualitatively and quantitatively new risks and vulnerabilities. Together with its increasing dissemination, this calls for audit methods that make it possible to give guarantees concerning trustworthiness and to operationalize emerging AI standards and regulatory efforts such as the European AI Act. Auditing AI systems is a complex endeavour, since multiple aspects along the AI lifecycle have to be considered, which requires multi-disciplinary approaches. In many cases, AI audit methods and tools are still the subject of research and not yet practically applicable.
To allow for a comprehensive inventory of the auditability of AI systems across different use cases, and to enable tracking its progress over time, BSI, Fraunhofer HHI and TÜV-Verband recommend the newly developed “Certification Readiness Matrix” (CRM) and present its initial concept. Using the CRM concept as a frame to summarize the results of a one-day workshop on auditing AI systems, with talks covering basic research, applied AI auditing efforts and standardisation activities, it is demonstrated that audit methods for some aspects are already well developed, while other aspects still require further research into, and development of, new audit technologies and tools.