European AI Regulation (AI Act)

What does the AI Act mean for us? What are the implications for the ecosystem? Find out more about the AI Act in our position paper.

What is the EU AI Act?

On Dec 9th, 2023, EU negotiators announced the conclusion of the trilogue negotiations on the EU AI Act, the first horizontal AI Regulation worldwide. It will fundamentally shape the entire AI Ecosystem in the EU. The Regulation aims to foster the wide adoption of AI by requiring the implementation of Trustworthy AI, meaning that AI Systems shall be lawful, robust and ethical.

What are the effects on the ecosystem?

While the regulatory framework sets the groundwork, the real work of operationalisation lies ahead. Whether the AI Act spurs AI innovation made in Europe or impedes it through bureaucratic procedures will be decided in the coming months. We at the appliedAI Institute are committed to accelerating Trustworthy AI at scale for the diverse actors in the EU AI Ecosystem.

What is the timeline now for implementation?

The AI Act re-wires the entire AI Value Chain, and it also affects various certification and oversight bodies. Hence, a plurality of actors, both private and public, need to build expertise and resources to deliver on the new requirements. The transition periods per risk class are:

  • 6 months for prohibited AI Systems

  • 12 months for Foundation Models and General Purpose AI Systems (GPAIS)

  • 24 months for High Risk and Low Risk AI Systems.
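As a minimal sketch, the transition periods above can be captured as a simple lookup table; the keys are our own shorthand, not terms from the Act itself.

```python
# Transition periods (in months) per risk class, taken from the list
# above. Illustrative sketch only; the key names are our own shorthand.
TRANSITION_MONTHS = {
    "prohibited": 6,   # prohibited AI Systems
    "gpais": 12,       # Foundation Models / General Purpose AI Systems
    "high_risk": 24,   # High Risk AI Systems
    "low_risk": 24,    # Low Risk AI Systems
}

def transition_months(risk_class: str) -> int:
    """Look up the transition period for a given risk class."""
    return TRANSITION_MONTHS[risk_class]
```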

What is our goal?

The appliedAI Institute for Europe gGmbH aims to enable Trustworthy AI at scale by openly collaborating with all actors of the AI Ecosystem to co-create methods, tools, guidelines, trainings and other resources. Specifically, our goals are:

  • Education: Upskilling professionals in the AI Ecosystem on the AI Act to foster “Legislative Literacy”, taking into account the needs of various stakeholders.

  • Communities: Relationships of excellence and trust are key to enabling compliance, be it along the value chain or between companies and authorities. We create a space for exchange and collaboration.

  • Insights: If we cannot measure it, we cannot control it. We will continue to publish empirical studies and investigative papers from a practical perspective to enable evidence-based policies.

  • Tools: Many requirements, e.g. for High-Risk AI Systems, are deeply technical, and we support compliance by evaluating, tweaking and developing open source tools for this purpose.

Our position on the EU AI Act

Europe is about to make history by adopting the EU AI Act!

Learn more about our position on the EU AI Act

27th of January 2024 - In the final stages of the talks, the EU AI Act could still fail due to the abstention of some countries. However, given the current momentum and the uncertainty that would result, having no AI Act would be more damaging than its adoption. Europe's largest AI initiative appliedAI takes a stand on the current developments and supports the adoption of the EU AI Act.

13th of December 2023 - In our Position Paper we provide insights on why the time to get “AI Act ready” is now and what it means for us to be the open-access Accelerator for Trustworthy AI.

Our risk classification model

Whether and to what extent you face additional obligations under the AI Act depends significantly on the risk class of your AI Use Case (or “AI System”, as it is called in the AI Act). The Risk Class depends on the intended purpose of the AI System, its area of application and the potential harm associated with its usage. There are four classes:

  • Prohibited (see Article 5)
  • High-Risk (see Article 6)
  • Limited Risk (see Article 52)
  • Low-Risk (see Article 69)

Like the AI Act itself, this method is a work in progress, and we look forward to continuously adjusting and improving it. Join in and help co-create this method: from the AI Ecosystem, for the AI Ecosystem!

  • Step #1: Is the System an "AI System"?
  • Step #2: Is the AI System in the scope of the AI Act?
  • Step #3: Is the AI System prohibited in the EU?
  • Step #4: Does the AI System have Human Interaction?
  • Step #5: Is the AI System a High-Risk System?
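As a rough illustration, the five steps above can be sketched as a decision flow. The predicates (`is_ai_system`, `in_scope`, `prohibited_practice`, `human_interaction`, `high_risk_use_case`) are placeholder assumptions of ours, not terms defined in the AI Act; answering them for a real system requires legal analysis.

```python
from enum import Enum
from typing import Optional

class RiskClass(Enum):
    """The four risk classes, with the articles cited above."""
    PROHIBITED = "Prohibited (Article 5)"
    HIGH_RISK = "High-Risk (Article 6)"
    LIMITED_RISK = "Limited Risk (Article 52)"
    LOW_RISK = "Low-Risk (Article 69)"

def classify(answers: dict) -> Optional[RiskClass]:
    """Walk the five screening steps; keys are illustrative placeholders."""
    # Step 1: is the system an "AI System" at all?
    if not answers.get("is_ai_system"):
        return None  # the AI Act does not apply
    # Step 2: is the AI System in the scope of the AI Act?
    if not answers.get("in_scope"):
        return None
    # Step 3: is the AI System prohibited in the EU?
    if answers.get("prohibited_practice"):
        return RiskClass.PROHIBITED
    # Step 4: human interaction triggers transparency duties (Limited Risk)
    interacts = bool(answers.get("human_interaction"))
    # Step 5: a High-Risk use case (Annex II/III) dominates Limited Risk
    if answers.get("high_risk_use_case"):
        return RiskClass.HIGH_RISK
    return RiskClass.LIMITED_RISK if interacts else RiskClass.LOW_RISK
```

Note that in this simplified sketch a high-risk outcome takes precedence at step 5, even though transparency duties from step 4 can apply in addition.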

Our risk classification database

Our goal is to develop an open library of cases as a reference for anyone who wants to apply AI in a functional area of an organization. Important: this is not legal advice. We take no responsibility for the correctness of the information provided, so please use it as a reference only!

What data can I find here?

This table contains a total of 106 AI Systems (a.k.a. Use Cases) used in the enterprise, along with a Risk Classification according to the European AI Act, with reference to the proposals of

  • the EU Commission (April 2021)
  • the EU Parliament Amendments (early/mid 2022)
  • the EU Council (December 2022)

For each AI System, the table contains (note: the English text was generated using Google Translate):

  • ID
  • Title (EN only)
  • Description (EN & DE)
  • Enterprise Function (e.g. Marketing, Production, HR, Legal, … )
  • Risk Class (High-Risk, Low-Risk, Prohibited)
  • Classification Details
  • Applicable Annex in the AI Act (II or III)
  • For Annex III: The applicable sub-item (1-8)
  • If high-risk or unclear classification: a comment with a rationale
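For illustration, one row of the database could be modeled as a record like the following; the field names are our own assumptions based on the column list above, not the database's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UseCaseRecord:
    """One row of the risk classification database (field names assumed)."""
    id: int
    title_en: str
    description_en: str
    description_de: str
    enterprise_function: str              # e.g. "Marketing", "Production", "HR"
    risk_class: str                       # "High-Risk", "Low-Risk" or "Prohibited"
    classification_details: str
    annex: Optional[str] = None           # "II" or "III", where applicable
    annex_iii_item: Optional[int] = None  # sub-item 1-8, for Annex III only
    comment: Optional[str] = None         # rationale for high-risk/unclear cases
```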

What are the limitations of the data and the analysis?

  • Focus on AI in the enterprise

    The AI systems studied are all taken from the enterprise context; AI in other application areas, such as specific industries (e.g. medicine, aerospace, automotive) or sectors (e.g. education, public administration, healthcare), is not included. Thus, the results are not representative of the totality of all AI applications, but they give a good and broad overview of AI in the functional areas of companies.

  • Significance of the AI systems studied for companies in Europe

    The AI systems considered are currently in use, but not only in Europe. Therefore, we cannot say whether and to what extent the selection of AI systems is representative of AI in European companies.

  • Limited information about AI systems

    The descriptions of the AI systems were limited, and in some cases the lack of detail was a reason for an unclear classification. With more information, the proportion of unclear cases would likely decrease. This observation shows that comprehensive details about an AI system must be known for an unambiguous classification.

  • Changes to the AI Regulation

    This study was prepared during the ongoing negotiations of the AI Regulation. We have endeavored to apply the same standard to all AI systems and to pick up divergent rules from recent drafts, indicating them as such. Future changes may result in different classifications.

  • Potential errors

    The AI Regulation is a comprehensive and complex set of rules, and AI is a complex and multi-faceted technology. Both are continually evolving. The authors have dealt extensively with both, and the study has gone through several review cycles with lawyers as well as experts from the EU institutions. Nevertheless, it cannot be ruled out that errors crept in during the drafting process.

How can I contribute or ask for help?

Contribute your use case to this collection, so we can learn from each other's cases. Complete the form linked below to have your AI System added to this public database.

Ask appliedAI for support with classifying your AI System by sending us an email.