EU AI ACT · RISK CLASSIFICATION

From qualification to obligations, mapped to your role.

A risk classification module built on the structure of the European Commission AI Office's compliance checker (March 2026), customised for the enterprise. Five steps. One outcome: the obligations that apply, by article and by operator role.

This module is for informational and operational purposes. It does not constitute legal advice. Always consult qualified AI Act compliance, legal or regulatory professionals for advice on your specific circumstances.

Most enterprises today are working through the AI Act with a small team and a long spreadsheet. The questions are clear, but the volume isn’t manageable manually: every AI system needs to be qualified, every operator classified, every system checked against prohibited practices, high-risk categories, transparency obligations, and the enterprise’s own business-critical list.

Kosmoy turns the spreadsheet into an operational module. The same five-step structure as the EC AI Office’s compliance checker, with the same definitions and exception handling, customised for the enterprise context — multi-operator scenarios, business-critical categories, and obligations matrices generated per role.


The five steps.

1

Qualification

Is this a GPAI model? With or without systemic risk? Is it an AI system under Article 3? Is it placed on the market or used in the EU? Out-of-scope checks.

Output · AI system in scope · GPAI model (with or without systemic risk) · Out of scope

2

System info

General info, data used, technical info, ML type (rule-based, deep learning, generative, NLP, recommendation, predictive, computer vision, other), autonomy level, output type, governance.

Output · A system metadata record.

3

Operator role

Provider · Deployer · Importer · Distributor · Authorised representative · Product manufacturer. Plus checks for the downstream-modification scenarios (Article 25) that promote a Deployer, Importer or Distributor to Provider.

Output · A role assignment per system.

4

Risk classification

Prohibited (Article 5) · High-risk (Article 6 + Annex III) · Transparency (Article 50) · Business-critical (organisation-defined). Exception handling per article, including the Article 6(3) derogations from Annex III classification.

Output · A risk tier per system.

5

Obligations

Mapped to the role and tier. Articles 4, 9, 10, 11, 12, 13, 14, 15, 17, 22, 23, 24, 26, 27, 43, 47, 48, 49, 50, 53, 54, 55, 72.

Output · An obligations checklist with status, evidence, owner.
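The step sequence above can be sketched as a small decision pipeline. This is a minimal illustration, not Kosmoy's actual schema: every field and function name is an assumption, and the first-match tiering is a simplification (under the Act itself, Article 50 transparency duties can apply on top of a high-risk classification rather than instead of it).

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "Prohibited (Art. 5)"
    HIGH_RISK = "High-risk (Art. 6 + Annex III)"
    TRANSPARENCY = "Transparency (Art. 50)"
    BUSINESS_CRITICAL = "Business-critical (organisation-defined)"
    MINIMAL = "Minimal"

@dataclass
class SystemRecord:
    name: str
    in_scope: bool = True              # step 1: qualification
    role: str = "Deployer"             # step 3: declared operator role
    rebranded: bool = False            # step 3: downstream-modifier checks
    substantially_modified: bool = False
    prohibited_practice: bool = False  # step 4 inputs
    annex_iii_match: bool = False
    art6_3_derogation: bool = False    # Annex III match, but derogation applies
    transparency_trigger: bool = False # e.g. chatbot, synthetic content
    business_critical: bool = False    # organisation-defined category

def effective_role(s: SystemRecord) -> str:
    """A Deployer, Importer or Distributor who rebrands or substantially
    modifies the system is treated as a Provider (cf. Article 25(1))."""
    if s.role in {"Deployer", "Importer", "Distributor"} and (
        s.rebranded or s.substantially_modified
    ):
        return "Provider"
    return s.role

def classify(s: SystemRecord) -> Optional[RiskTier]:
    """Step 4: tiers checked in order of severity; first match wins
    (a simplification -- Art. 50 duties can stack on high-risk)."""
    if not s.in_scope:
        return None
    if s.prohibited_practice:
        return RiskTier.PROHIBITED
    if s.annex_iii_match and not s.art6_3_derogation:
        return RiskTier.HIGH_RISK
    if s.transparency_trigger:
        return RiskTier.TRANSPARENCY
    if s.business_critical:
        return RiskTier.BUSINESS_CRITICAL
    return RiskTier.MINIMAL
```

The severity ordering mirrors the module's tier list: the prohibited check always runs first, and the business-critical tier only applies when no statutory tier does.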


Obligations matrix

Which Articles apply to which role.

Representative subset for high-risk systems. Full matrix available inside the platform.

Obligation · Provider · Deployer · Importer · Distributor · Auth. rep. · Manufacturer
AI Literacy (Art. 4)
Risk management system (Art. 9)
Data governance (Art. 10)
Technical documentation (Art. 11)
Record keeping (Art. 12)
Human oversight (Art. 14)
Quality management system (Art. 17)
FRIA / impact assessment (Art. 27)
Conformity assessment (Art. 43)
EU declaration of conformity (Art. 47)
CE marking (Art. 48)
Registration (Art. 49)
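As a sketch of how such a matrix drives step 5, the mapping below pairs roles with an illustrative subset of high-risk article numbers (provider duties under Articles 9-17 and 43-49, importer duties under Article 23, distributor duties under Article 24, deployer duties under Articles 26-27, with AI literacy under Article 4 shared by providers and deployers). Treat it as an assumption-laden sample, not the platform's full matrix.

```python
# Illustrative subset only; the full matrix lives inside the platform.
OBLIGATIONS_BY_ROLE = {
    "Provider": {4, 9, 10, 11, 12, 13, 14, 15, 17, 43, 47, 48, 49},
    "Deployer": {4, 26, 27},
    "Importer": {23},
    "Distributor": {24},
    "Authorised representative": {22},
}

def checklist(role: str, risk_tier: str) -> list:
    """Step 5: expand a role + tier into a checklist with
    status/evidence/owner slots (hypothetical record shape)."""
    if risk_tier != "High-risk":
        return []
    return [
        {"article": art, "status": "open", "evidence": [], "owner": None}
        for art in sorted(OBLIGATIONS_BY_ROLE.get(role, set()))
    ]
```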

What the module does not do.

  • Not legal advice. Produces an operational classification and an obligations list. A qualified person inside the organisation owns the decision.
  • Not auto-classification without input. A human owner answers the questions. The module structures them, applies the definitions consistently, and computes the consequences.
  • Not a single interpretation. Where the AI Act is ambiguous, the module surfaces the ambiguity rather than hiding it. Your AI Act lead decides; the module records the decision.

How evidence flows.

Once a system is classified, the obligations attach to its AI system record. As the system operates — through the Gateway, in a Capsule — the runtime evidence (logs, conversation records, guardrail events, oversight actions, QA sessions) connects back to the obligations. By the time an audit asks for Article 12 record keeping, it’s already there.
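The evidence flow can be pictured as a routing table from runtime event types to the articles they substantiate. The event names and record shape below are hypothetical, not the Gateway's actual log schema; the point is the direction of the link, from runtime event back to obligation.

```python
from datetime import datetime, timezone

# Hypothetical mapping from runtime evidence types to the
# articles they help substantiate (names are illustrative).
EVIDENCE_TO_ARTICLES = {
    "gateway_log": [12],         # record keeping
    "guardrail_event": [9, 15],  # risk management, robustness
    "oversight_action": [14],    # human oversight
    "qa_session": [17],          # quality management
}

def attach_evidence(obligations: dict, event_type: str, payload: dict) -> None:
    """Route one runtime event to every obligation it substantiates,
    so the Article 12 record is built as the system operates."""
    record = {
        "type": event_type,
        "at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    }
    for article in EVIDENCE_TO_ARTICLES.get(event_type, []):
        obligations.setdefault(article, []).append(record)
```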


Module questions, answered straight.

Is this a substitute for legal counsel?

No. It produces an operational classification and an obligations list. Legal review is owned by qualified people inside the organisation.

Where does the EC AI Office compliance checker fit in?

The structure of the module — qualification, system info, operator role, risk classification, obligations — directly mirrors the structure of the EC AI Office's checker (March 2026 version). The customisations are for multi-operator scenarios, business-critical categories, and the integration with Kosmoy's AI Systems Registry.

Can I add custom risk categories?

Yes. The Business-Critical category is intentionally configurable. Add the categories your organisation considers critical to its core business, even when not high-risk under the AI Act.
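A Business-Critical configuration might look like the sketch below. The categories, field names, and cadences are invented for illustration; the only assumption taken from the module is that the list is organisation-defined and sits alongside the statutory tiers.

```python
# Hypothetical organisation-defined categories (illustrative values).
BUSINESS_CRITICAL_CATEGORIES = [
    {
        "id": "pricing-engine",
        "label": "Dynamic pricing and revenue models",
        "rationale": "Direct revenue impact",
        "review_cadence_days": 90,
    },
    {
        "id": "customer-comms",
        "label": "Automated customer communications",
        "rationale": "Brand and contractual exposure",
        "review_cadence_days": 180,
    },
]

def is_business_critical(category_ids: set) -> bool:
    """True when the system touches any configured critical category."""
    known = {c["id"] for c in BUSINESS_CRITICAL_CATEGORIES}
    return bool(category_ids & known)
```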

What about ISO/IEC 42001?

The same registries and evidence support an ISO/IEC 42001 management system. The module's output (risk classifications, obligations, evidence) maps cleanly to 42001 controls.

Are GPAI obligations covered?

Yes. A separate obligations sheet covers GPAI model providers (Articles 53, 54) and providers of GPAI models with systemic risk (Article 55).

See the AI Act module in action.

Walk through the five-step classification flow on a real use case.