This image was created with the AI Midjourney by the appliedAI Institute for Europe.

European AI Regulation (AI Act)

The regulatory change

The European AI Act is on its way, and it will fundamentally shape the entire AI Ecosystem in the EU. The Regulation aims to foster the wide adoption of AI by requiring the implementation of Trustworthy AI, meaning that AI Systems shall be lawful, robust, and ethical.

Effects on the ecosystem

The AI Act affects the development, marketing, and use of AI in Europe, and the additional requirements are expected to further increase the cost and complexity of “doing AI”. All stakeholders involved face new technical, legal, and organizational challenges, and the fines for non-compliance are substantial: up to 6% of global annual turnover or €30M, whichever is higher (according to one of the proposals).

Our goal

The appliedAI Institute for Europe gGmbH aims to enable Trustworthy AI at scale by openly collaborating with all actors of the AI Ecosystem to co-create methods, tools, guidelines, trainings, and other resources. Specifically, our goals are:

  • Lowering the cost of compliance
  • Accelerating time to innovation
  • Developing Trustworthy AI at scale

Our risk classification model

Whether and to what extent you face additional obligations under the AI Act depends significantly on the risk class of your AI Use Case (or the “AI System”, as it is called in the AI Act). The risk class depends on the intended purpose of the AI System, its area of application, and the potential harm associated with its use. There are four classes:

  • Prohibited (see Article 5)
  • High-Risk (see Article 6)
  • Limited Risk (see Article 52)
  • Low-Risk (see Article 69)

With this in mind, we are working on a Method for Risk Classification, which you can openly access here (see below). It walks through five questions:

  • Step #1: Is the System an "AI System"?
  • Step #2: Is the AI System in the scope of the AI Act?
  • Step #3: Is the AI System prohibited in the EU?
  • Step #4: Does the AI System involve human interaction?
  • Step #5: Is the AI System a High-Risk System?

Like the AI Act itself, the method is a work in progress, and we look forward to continuously adjusting and improving it. Try it out and tell us what you think. Join in and help co-create this method: from the AI Ecosystem, for the AI Ecosystem!
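The five screening questions can be sketched as a simple decision flow. The sketch below is illustrative only, not a legal tool: the function and class names are our own, each yes/no answer is assumed to come from a human legal assessment, and the precedence between classes (e.g. a high-risk system that also interacts with humans) is deliberately simplified.

```python
from enum import Enum


class RiskClass(Enum):
    OUT_OF_SCOPE = "out of scope"
    PROHIBITED = "prohibited (Article 5)"
    HIGH_RISK = "high-risk (Article 6)"
    LIMITED_RISK = "limited risk (Article 52)"
    LOW_RISK = "low-risk (Article 69)"


def classify(is_ai_system: bool,
             in_scope: bool,
             is_prohibited: bool,
             has_human_interaction: bool,
             is_high_risk: bool) -> RiskClass:
    """Walk the screening questions in order and return the highest
    applicable risk class. Each boolean answer must come from a prior
    (human) legal assessment; this code only encodes the ordering."""
    if not is_ai_system or not in_scope:     # Steps 1-2: outside the AI Act
        return RiskClass.OUT_OF_SCOPE
    if is_prohibited:                        # Step 3: banned practices
        return RiskClass.PROHIBITED
    if is_high_risk:                         # Step 5: Annex II/III systems
        return RiskClass.HIGH_RISK
    if has_human_interaction:                # Step 4: transparency duties only
        return RiskClass.LIMITED_RISK
    return RiskClass.LOW_RISK
```

For example, a system that is in scope, not prohibited, and listed as high-risk would yield `RiskClass.HIGH_RISK` regardless of whether it also interacts with humans — in the actual Regulation, the transparency obligations would apply in addition, which a flat enum cannot express.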

Our risk classification database

Our goal is to develop an open reference library of cases for anyone who wants to apply AI in a functional area of an organization. Important: this is not legal advice. We take no responsibility for the correctness of the information provided, so please use it as a reference only!

What data can I find here?

This table contains a total of 106 AI Systems (i.e. Use Cases) from the enterprise context, each with a Risk Classification according to the European AI Act, with reference to the proposals of

  • the EU Commission (April 2021)
  • the EU Parliament Amendments (early/mid 2022)
  • the EU Council (December 2022)

For each AI System, the table contains the following fields (note: the English text was generated using Google Translate):

  • ID
  • Title (EN only)
  • Description (EN & DE)
  • Enterprise Function (e.g. Marketing, Production, HR, Legal, … )
  • Risk Class (High-Risk, Low-Risk, Prohibited)
  • Classification Details
  • Applicable Annex in the AI Act (II or III)
  • For Annex III: The applicable sub-item (1-8)
  • If high-risk or unclear classification: comment with a rationale
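The fields above map naturally onto a simple record type. The sketch below is purely illustrative: the field names and types are our own guesses at a schema, not taken from the database itself.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AISystemEntry:
    """One row of the classification database (hypothetical schema)."""
    id: int
    title_en: str                          # Title (EN only)
    description_en: str                    # Description (EN)
    description_de: str                    # Description (DE)
    enterprise_function: str               # e.g. "Marketing", "Production", "HR"
    risk_class: str                        # "High-Risk", "Low-Risk", or "Prohibited"
    classification_details: str
    annex: Optional[str] = None            # "II" or "III", if applicable
    annex_iii_item: Optional[int] = None   # sub-item 1-8, only for Annex III
    comment: Optional[str] = None          # rationale for high-risk / unclear cases
```

A record type like this makes the optional fields explicit: the annex and sub-item only apply to some systems, and the rationale comment only to high-risk or unclear cases.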

What are the limitations of the data and the analysis?

  • Focus on AI in the enterprise

    The AI systems studied are all taken from the enterprise context; AI in other application areas, such as specific industries (e.g. medicine, aerospace, automotive) or sectors (e.g. education, public administration, healthcare), is not included. The results are therefore not representative of the totality of all AI applications, but they give a good and broad overview of AI in the functional areas of companies.

  • Significance of the AI systems studied for companies in Europe

    The AI systems considered are currently in use, but not only in Europe. Therefore, we cannot say whether or to what extent the selection of AI systems is representative of AI in European companies.

  • Limited information about AI systems

    The descriptions of the AI systems were limited and in some cases the lack of details was a reason for unclear classification. With more information, the proportion of unclear cases would possibly decrease. This observation shows that comprehensive details about the AI system need to be known for an unambiguous classification.

  • Changes to the AI Regulation

    This study was prepared during the ongoing negotiations of the AI Regulation. We have endeavored to apply the same standard to all AI systems, to pick up divergent rules from recent drafts, and to indicate them as such. Future changes may result in different classifications.

  • Potential errors

    The AI Regulation is a comprehensive and complex set of rules, and AI is a complex, multi-faceted technology. Both are continually evolving. The authors have dealt extensively with both, and the study went through several review cycles with lawyers as well as experts from the EU institutions. Nevertheless, it cannot be ruled out that errors crept in during the drafting process.

How can I contribute or ask for help?

Contribute your use case to this collection, so we can learn from each other's cases. Complete the form linked below to have your AI System added to this public database.

For support with classifying your AI System, ask appliedAI by sending an email.