AI Act for starters
An adaptive learning course to help you understand and navigate the AI Act
Get a solid, practical understanding of the AI Act — what it regulates, how it classifies AI systems by risk, and what it means for your organisation today.
This course in a nutshell
| Time: | Approx. 30 minutes–4 hours (adaptive; varies with the learner's prior knowledge) |
|---|---|
| Target group: | Professionals working in or with organisations that develop, deploy, or use AI systems in the EU. Relevant for any role involved in AI governance, compliance, procurement, product development, or operations. No legal or technical background required. |
| Prior knowledge: | No prior knowledge of the AI Act or AI regulation is required. Familiarity with basic AI concepts is helpful but not essential. |
| Format: | Adaptive learning course on the Area9 Rhapsode platform, combining slideshows, fill-in-the-blank exercises, matching activities, scenario-based activities, and categorisation tasks. |
After completing the course, you will be able to:
- Identify the potential risks AI can pose and explain why regulatory oversight is needed
- Describe the fundamental differences between AI and traditional software
- Define the term "AI system" according to the AI Act and determine whether a given system falls within its scope
- Explain the AI Act's purpose, structure, and risk-based approach
- Classify AI systems according to the four risk categories and identify the obligations associated with each
- Name the eight AI practices prohibited under the AI Act
- Identify the characteristics of high-risk AI systems under Annex I and Annex III
- Match AI system use cases to their transparency obligations under Article 50
- Distinguish between the roles of providers and deployers in the AI value chain and the obligations each carries
- Recall the key differences between general-purpose AI (GPAI) models and GPAI systems, including downstream provider responsibilities
- Identify how the AI Act functions as a horizontal regulation alongside other EU digital legislation
- Apply the AI Act's compliance timeline to your organisation
- Explain the AI literacy requirements under Article 4 and what they mean for organisations
Syllabus
01 Why does the EU regulate AI? Introduction to the risks and characteristics that make AI different from traditional software and why those differences call for dedicated regulation. Learners explore AI-specific challenges such as bias, opacity, and adversarial vulnerability, and connect these to the regulatory logic of the AI Act. Includes the Act's official definition of an AI system and hands-on practice determining whether real-world systems fall within its scope.
02 How does the EU regulate AI? A structured walkthrough of the AI Act's regulatory framework. Learners explore the risk-based classification approach, the four risk categories (unacceptable risk, high risk, transparency-related risk, and low risk), and the specific obligations attached to each. Coverage includes the eight prohibited AI practices, high-risk system classifications under Annex I and Annex III, transparency obligations under Article 50, Codes of Conduct under Article 95, and the roles and responsibilities of providers, deployers, and downstream providers in the AI value chain. Also covers general-purpose AI (GPAI) models and GPAI systems and how the Act distinguishes between them.
03 What should organisations consider and do today? Practical guidance on what the AI Act means for organisations right now. Learners see how the Act fits within the broader EU digital regulatory landscape as a horizontal regulation. They gain an understanding of the phases of the compliance timeline and which deadlines apply to which risk categories. Learners also explore AI literacy obligations under Article 4, including what providers and deployers must do to ensure their staff have sufficient competence to work with AI systems responsibly.
Contact us
For more information or to be added to the waiting list, contact us at learn@appliedai-institute.de.