EU AI ACT · RISK CLASSIFICATION
A risk classification module built on the structure of the European Commission AI Office's compliance checker (March 2026), customised for the enterprise. Five steps. One outcome: the obligations that apply, by article and by operator role.
Most enterprises today are working through the AI Act with a small team and a long spreadsheet. The questions are clear, but the volume isn’t manageable manually: every AI system needs to be qualified, every operator classified, every system checked against prohibited practices, high-risk categories, transparency obligations, and the enterprise’s own business-critical list.
Kosmoy turns the spreadsheet into an operational module. The same five-step structure as the EC AI Office’s compliance checker, with the same definitions and exception handling, customised for the enterprise context — multi-operator scenarios, business-critical categories, and obligations matrices generated per role.
Step 1 · Qualification
Is this a GPAI model? With or without systemic risk? Is it an AI system under Article 3? Is it placed on the market or used in the EU? Out-of-scope checks.
Output · AI system in scope · GPAI model with systemic risk · Out of scope
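The qualification step reduces to a short decision function. The sketch below is illustrative only — the field names and the ordering of checks are assumptions, not Kosmoy's implementation:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    is_ai_system: bool               # meets the Article 3 definition
    is_gpai_model: bool              # general-purpose AI model
    systemic_risk: bool              # GPAI model with systemic risk
    on_eu_market_or_used_in_eu: bool # placed on the market or used in the EU

def qualify(c: Candidate) -> str:
    """Return one of the qualification outcomes listed above."""
    if not c.on_eu_market_or_used_in_eu:
        return "Out of scope"
    if c.is_gpai_model:
        return ("GPAI model with systemic risk" if c.systemic_risk
                else "GPAI model")
    if c.is_ai_system:
        return "AI system in scope"
    return "Out of scope"
```

The out-of-scope check runs first, so a system never reaches the GPAI or Article 3 branches unless it touches the EU market.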
Step 2 · System information
General info, data used, technical info, ML type (rule-based, deep learning, generative, NLP, recommendation, predictive, computer vision, other), autonomy level, output type, governance.
Output · A system metadata record.
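The metadata record collected in this step can be sketched as a plain data structure. Field names and example values here are assumptions for illustration, not the module's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    name: str
    ml_type: str           # e.g. "generative", "predictive", "computer vision"
    autonomy_level: str    # e.g. "human-in-the-loop", "fully autonomous"
    output_type: str       # e.g. "content", "score", "recommendation"
    data_used: list[str] = field(default_factory=list)
    governance_owner: str = ""
```

Every downstream step — role assignment, risk tiering, obligations — hangs off this record.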
Step 3 · Operator role
Provider · Deployer · Importer · Distributor · Authorised representative · Product manufacturer. Plus checks for downstream modifier scenarios that promote a Deployer/Importer/Distributor to Provider.
Output · A role assignment per system.
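The downstream-modifier check can be sketched as a promotion rule: a Deployer, Importer, or Distributor who rebrands, substantially modifies, or repurposes a system takes on Provider obligations (the scenarios set out in Article 25). The flag names below are illustrative assumptions:

```python
def effective_role(declared_role: str,
                   rebranded: bool = False,
                   substantially_modified: bool = False,
                   repurposed_as_high_risk: bool = False) -> str:
    """Promote a downstream operator to Provider when a modifier scenario applies."""
    downstream = {"Deployer", "Importer", "Distributor"}
    if declared_role in downstream and (
            rebranded or substantially_modified or repurposed_as_high_risk):
        return "Provider"
    return declared_role
```

The obligations matrix is then generated against the effective role, not the declared one.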
Step 4 · Risk classification
Prohibited (Article 5) · High-risk (Article 6 + Annex III) · Transparency (Article 50) · Business-critical (organisation-defined). Exception handling per Article.
Output · A risk tier per system.
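Since a system can match more than one category (a high-risk system may also carry Article 50 transparency duties), producing a single tier implies a precedence order. The sketch below assumes most-severe-wins; that ordering is an assumption for illustration, not a statement of how the module resolves overlaps:

```python
# Assumed precedence, most severe first.
TIER_ORDER = [
    "Prohibited (Art. 5)",
    "High-risk (Art. 6 + Annex III)",
    "Transparency (Art. 50)",
    "Business-critical",
]

def risk_tier(matched: set[str]) -> str:
    """Return the most severe tier the system matched, or 'Minimal'."""
    for tier in TIER_ORDER:
        if tier in matched:
            return tier
    return "Minimal"
```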
Step 5 · Obligations
Mapped to the role and tier. Articles 4, 9, 10, 11, 12, 13, 14, 15, 17, 22, 23, 24, 26, 27, 43, 47, 48, 49, 50, 53, 54, 55, 72.
Output · An obligations checklist with status, evidence, owner.
Obligations matrix
Representative subset for high-risk systems. Full matrix available inside the platform.
| Obligation | Provider | Deployer | Importer | Distributor | Auth. rep. | Manufacturer |
|---|---|---|---|---|---|---|
| AI Literacy (Art. 4) | ● | ● | ● | ● | ● | ● |
| Risk management system (Art. 9) | ● | — | — | — | — | ● |
| Data governance (Art. 10) | ● | — | — | — | — | ● |
| Technical documentation (Art. 11) | ● | — | ● | — | ● | ● |
| Record keeping (Art. 12) | ● | ● | — | — | — | — |
| Human oversight (Art. 14) | ● | ● | — | — | — | — |
| Quality management system (Art. 17) | ● | — | — | — | — | — |
| FRIA / impact assessment (Art. 27) | — | ● | — | — | — | — |
| Conformity assessment (Art. 43) | ● | — | — | — | ● | — |
| EU declaration of conformity (Art. 47) | ● | — | ● | ● | ● | — |
| CE marking (Art. 48) | ● | — | — | — | — | — |
| Registration (Art. 49) | ● | — | — | — | — | — |
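The matrix above is, in effect, a mapping from article to the roles it binds, queried per system. A minimal sketch over a subset of the rows (the encoding is illustrative, not the platform's internal format):

```python
# Subset of the obligations matrix: {article: roles it binds for high-risk systems}.
MATRIX = {
    "Art. 9 Risk management system":     {"Provider", "Manufacturer"},
    "Art. 12 Record keeping":            {"Provider", "Deployer"},
    "Art. 14 Human oversight":           {"Provider", "Deployer"},
    "Art. 27 FRIA":                      {"Deployer"},
    "Art. 43 Conformity assessment":     {"Provider", "Auth. rep."},
    "Art. 47 Declaration of conformity": {"Provider", "Importer", "Distributor", "Auth. rep."},
    "Art. 48 CE marking":                {"Provider"},
}

def obligations_for(role: str) -> list[str]:
    """Return the checklist of articles binding the given operator role."""
    return [article for article, roles in MATRIX.items() if role in roles]
```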
Once a system is classified, the obligations attach to its AI system record. As the system operates — through the Gateway, in a Capsule — the runtime evidence (logs, conversation records, guardrail events, oversight actions, QA sessions) connects back to the obligations. By the time an audit asks for Article 12 record keeping, it’s already there.
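The evidence linkage described above can be pictured as an append-only list on each obligation entry. Field names, statuses, and the `attach` helper are hypothetical, sketched for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class ObligationEntry:
    article: str                 # e.g. "Art. 12 Record keeping"
    owner: str
    status: str = "open"         # assumed states: open | evidenced
    evidence: list[str] = field(default_factory=list)  # log refs, guardrail events, QA sessions

    def attach(self, ref: str) -> None:
        """Link a runtime evidence reference and mark the entry evidenced."""
        self.evidence.append(ref)
        self.status = "evidenced"
```

In this picture, an audit query for Article 12 is a read of `evidence`, not a retroactive collection exercise.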
FAQ
Does the module provide legal advice?
No. It produces an operational classification and an obligations list. Legal review is owned by qualified people inside the organisation.
How closely does it follow the EC AI Office's checker?
The structure of the module (qualification, system info, operator role, risk classification, obligations) directly mirrors the structure of the EC AI Office's checker (March 2026 version). The customisations are for multi-operator scenarios, business-critical categories, and the integration with Kosmoy's AI Systems Registry.
Can we define our own risk categories?
Yes. The Business-Critical category is intentionally configurable. Add the categories your organisation considers critical to its core business, even when not high-risk under the AI Act.
How does this relate to ISO/IEC 42001?
The same registries and evidence support an ISO/IEC 42001 management system. The module's output (risk classifications, obligations, evidence) maps cleanly to 42001 controls.
Does it cover GPAI models?
Yes. A separate obligations sheet covers GPAI model providers (Articles 53, 54) and providers of GPAI models with systemic risk (Article 55).
Walk through the five-step classification flow on a real use case.