AI Act Insight - Compliance journey for High-Risk Requirements on the Use Case level

EU AI Act compliance journey

The EU AI Act is set to regulate the development, offering, and use of AI Systems and General Purpose AI Systems in the EU. Operators, including providers and deployers, must demonstrate compliance with the applicable requirements before placing such systems on the market or putting them into service.

The AI Act adopts a risk-based approach: the higher the risk posed by an AI System, the more stringent the requirements, which fall primarily on providers.

While these requirements are intended to prevent unreasonable harm to health, safety and fundamental rights, they also demand attention and expertise, as they are challenging from both a technical and a legal point of view.

Because the AI Act is brand new, companies and authorities have close to zero experience in operationalising these requirements; the learning curve is not only steep, but also uncertain and potentially iterative.

The appliedAI Institute for Europe intends to support individuals across the AI Ecosystem in reaching compliance with minimal investment of time and resources, so that attention and expertise can be spent on driving trustworthy AI Innovation.

This new content resource is a step in that direction.

The first step is to foster a solid understanding of the requirements themselves, to link them to design choices, and to consider how they might apply to the Use Case at hand.

With this in mind, we focus on Articles 10-15 and provide for each requirement:

  • A breakdown of the requirement in the form of a mind map for easy reading

  • An indication of how the requirement might link to design choices

  • Prompts for implementation in a given Use Case


Why did we do this?

The appliedAI Institute for Europe aims to ease this transition by supporting individuals across the AI Ecosystem in achieving compliance efficiently, allowing them to focus on driving trustworthy AI innovation.

This analysis is part of our deep dive into the whole EU AI Act compliance journey. Operationalising the requirements on the Use Case level (e.g. Data Governance, Transparency, Accuracy) is challenging from a technical perspective. We want to support you in understanding these requirements for their technical implementation.

The analysis was created using the international standard ISO/IEC 22989:2022(E), first edition (2022-07), as a source.

Contact us:

Dr. Till Klein
Head of Trustworthy AI