The hidden first step of AI Act compliance: getting system boundaries and risk classification right

Introduction: which system and who is accountable?

The first practical step for any operator looking to comply with the EU AI Act is risk classification: mapping each of its AI systems to one of the legislation’s four “risk classes.”

Under the AI Act, this classification depends exclusively on the “intended purpose” of the AI system. Yet companies often confront two practical questions:

  • First, the system boundary: where does one AI system end and another begin?

  • Second, the responsibility boundary: who is responsible for risk classification and the resulting obligations?

The risks of drawing the line wrong

Ultimately, the risk class of an AI system and an organisation's role determine the obligations that apply. Getting this right is essential: misclassification can create unnecessary costs or leave legal obligations unmet.

If the system boundary is drawn too broadly, organisations might over-comply by applying high-risk requirements to more components than necessary. They may also misallocate responsibility by assuming provider-level duties for tools they merely deploy. On the other hand, if it’s drawn too narrowly, or if the responsible party is misidentified, they risk under-compliance by overlooking high-risk elements and failing to fulfil the appropriate obligations.

What the AI Act says about systems and roles

To understand why this is a challenge, it helps to first look at how the EU AI Act defines an AI system and the actors responsible for it. The Act combines a functional definition of what counts as an AI system with a role-based framework that determines who bears which obligations.

The EU AI Act’s definition of an ‘AI system’ (Article 3(1)) is functional: it focuses on what the software does rather than how it is built. It covers machine-based systems that operate with varying levels of autonomy, infer from their inputs how to generate outputs that can influence physical or virtual environments, and do so for explicit or implicit objectives.

This can make risk classification challenging in practice, because companies must decide whether an AI-enabled feature in a product or tool is its own system or part of a larger one. Such decisions are often further complicated by modular software architectures, frequent updates, and the fact that many AI components rely on third-party models or data services.

Compounding this challenge is the way the AI Act divides responsibilities between the party that develops a system and places it on the market (the provider) and the party that uses it under its own authority (the deployer). In practice, this distinction is not always clear-cut: AI deployments often combine vendor models, external APIs, and in-house tools, blurring the line between those who develop or market a system and those who merely use it.

These challenges become clearer in practice when we look at a concrete example.

The challenge in practice: defining boundaries in multi-vendor, multi-risk systems

Consider a large organisation that uses an AI-powered human-resources platform made up of several interconnected applications. Each performs a different task, but together they form one integrated workflow (a short, illustrative inventory of these components follows the list):

  • An AI-based résumé screening tool to filter job applications, provided by a SaaS vendor. (high-risk under Annex III(4)(a) of the AI Act)

  • An AI-based video-interview analysis module to evaluate candidates, offered by a third-party provider but accessed through the SaaS vendor’s platform via API calls. (high-risk under Annex III(4)(a) of the AI Act)

  • An LLM-powered recruiter assistant for drafting interview notes, developed internally and connected to the same platform through a plug-in, using data from the first two tools. (transparency obligations under Article 50(2) of the AI Act)

  • An AI-driven sentiment-analysis dashboard provided by another vendor and embedded within the same SaaS platform, which summarises overall candidate experience from post-interview feedback. (minimal risk under the AI Act)
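
To make the boundary and classification questions easier to track, the hypothetical inventory below records each component of this HR platform together with who supplies it, the organisation’s likely role, and the risk classification suggested above. It is a minimal, illustrative sketch: the `Component` structure and its field names are our own assumptions rather than terminology from the Act, and the entries reflect the example, not a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One AI component of the example HR platform (illustrative fields only)."""
    name: str
    supplied_by: str          # who builds the component and places it on the market
    org_role: str             # the organisation's likely role: "deployer" or "provider"
    risk_classification: str  # classification suggested in the example above

hr_platform = [
    Component("Résumé screening tool", "SaaS vendor", "deployer",
              "high-risk (Annex III(4)(a))"),
    Component("Video-interview analysis module",
              "third-party provider via the SaaS vendor's API", "deployer",
              "high-risk (Annex III(4)(a))"),
    Component("LLM recruiter assistant plug-in", "developed in-house", "provider",
              "transparency obligations (Article 50(2))"),
    Component("Sentiment-analysis dashboard",
              "another vendor, embedded in the SaaS platform", "deployer",
              "minimal risk"),
]

# Even within one integrated recruitment workflow, the components carry
# different risk classifications and involve different suppliers and roles.
for c in hr_platform:
    print(f"{c.name}: {c.risk_classification} ({c.org_role})")
```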

This example illustrates how both challenges materialise in practice.

  • The functional boundary question arises because the platform is intended to perform a single end-to-end purpose – recruitment – which is likely high-risk under the AI Act, yet it is composed of multiple AI features that, taken independently, might have different risk classifications.

  • At the same time, the responsibility boundary is blurred: the SaaS vendor provides the core platform, third parties supply specialised modules, and the organisation itself adds an internally built LLM plug-in.

Applying the AI Act in practice: what operators can do today

In practice, operators can manage this by carefully documenting and justifying how system boundaries are drawn, and by re-examining contractual arrangements to ensure that provider and deployer responsibilities are clearly allocated along the value chain.
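
One possible way to operationalise this documentation is a simple register that captures, for each component, the rationale for the boundary that was drawn, the organisation’s role, and where the corresponding responsibilities are allocated contractually. The sketch below illustrates a single entry for the internally built plug-in from the example; the field names and the review rule are assumptions for illustration, not a methodology prescribed by the Act.

```python
# Illustrative register entry; field names are assumptions, not terms from the AI Act.
llm_assistant_record = {
    "component": "LLM recruiter assistant plug-in",
    "boundary_rationale": (
        "Developed in-house with its own intended purpose (drafting interview notes); "
        "documented as a separate AI system from the recruitment platform it plugs into"
    ),
    "org_role": "provider",  # built and put into service by the organisation itself
    "risk_classification": "transparency obligations (Article 50(2))",
    "responsibility_allocation": "internal AI governance policy; plug-in agreement with the SaaS vendor",
}

def needs_priority_review(record: dict) -> bool:
    """Flag entries with the heaviest obligations: high-risk components or provider-role entries."""
    return ("high-risk" in record["risk_classification"]
            or record["org_role"] == "provider")

print(needs_priority_review(llm_assistant_record))  # True: the organisation acts as provider here
```

Keeping such a register close to the technical architecture also makes it easier to revisit boundary decisions when components are added, swapped, or updated.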

To help companies assess the legal definition of an AI system and compare it with the technical design of their enterprise software, among other topics, the free and open training offerings of the Bavarian AI Act Accelerator, funded by the Bavarian Ministry for Digital Affairs and implemented by the appliedAI Institute for Europe, are a good way to get started!

Over autumn and winter 2025, two learner paths will get you up to speed with the AI Act.

See here for all training sessions and details about the project!