- Updated: March 11, 2026
Self-Service or Not? How to Guide Practitioners in Classifying AI Systems Under the EU AI Act
Direct Answer
The paper introduces a web‑based, self‑service decision‑support tool that helps practitioners classify AI systems under the EU AI Act’s Risk Classification Scheme (RCS). By providing clear explanations, concrete examples, and interactive guidance, the tool reduces legal‑technical ambiguity and speeds up compliant risk assessment—a critical need as the Act becomes enforceable.
Background: Why This Problem Is Hard
The EU Artificial Intelligence Act (AIA), which entered into force in August 2024, is the world’s first comprehensive regulatory framework for AI. Its core principle is risk‑based regulation: AI systems are sorted into four risk tiers—unacceptable, high, limited, and minimal—each with distinct obligations. While the tiered approach is theoretically sound, translating the legal text into actionable decisions is notoriously difficult for three reasons:
- Multidisciplinary ambiguity: The Act blends legal definitions (e.g., “biometric categorisation”) with technical concepts (e.g., “real‑time remote biometric identification”). Practitioners must interpret both domains simultaneously.
- Context‑sensitive scope: The same algorithm can fall into different risk categories depending on deployment context, data sources, and intended use case, forcing assessors to make nuanced, case‑by‑case judgments.
- Lack of tooling: Existing compliance checklists are static PDFs or generic questionnaires that do not adapt to user input, leading to inconsistent classifications and costly re‑work.
Current approaches—manual legal reviews, ad‑hoc spreadsheets, or generic risk‑assessment platforms—struggle to provide the granularity, speed, and repeatability required by organizations that develop or deploy AI at scale. The gap between the Act’s high‑level risk taxonomy and day‑to‑day engineering decisions remains a bottleneck for compliance.
What the Researchers Propose
The authors present a self‑service decision‑support tool built on a Design Science Research (DSR) methodology. At a conceptual level, the tool consists of three interacting components:
- Legal Knowledge Base: A curated repository of the AIA’s definitions, obligations, and illustrative excerpts, expressed in plain language.
- Interactive Classification Engine: A rule‑based workflow that asks users targeted questions about their AI system (e.g., data type, deployment environment, user interaction) and maps answers to the appropriate risk tier.
- Example Library: Real‑world case studies and scenario templates that demonstrate how similar systems have been classified, helping users bridge abstract legal language with concrete technical details.
These components are orchestrated through a web interface that guides the practitioner step‑by‑step, offering on‑demand explanations and confidence scores for each decision point.
How It Works in Practice
Conceptual Workflow
The tool follows a linear yet adaptive workflow:
1. System Profile Capture: Users input high‑level metadata (industry, intended function, user group).
2. Legal Context Query: The engine presents a series of yes/no or multiple‑choice questions derived from the Legal Knowledge Base (e.g., “Does the system process biometric data in real time?”).
3. Dynamic Example Retrieval: Based on answers, the Example Library surfaces relevant case studies, highlighting why a particular answer leads to a specific risk tier.
4. Risk Tier Determination: The Classification Engine aggregates responses, applies a rule matrix, and outputs a provisional risk tier with an explanatory tooltip.
5. Compliance Action Checklist: Once a tier is assigned, the tool generates a tailored checklist of obligations (e.g., conformity assessment, transparency notices).
6. Export & Documentation: Users can download a compliance report that records the decision path, supporting evidence, and next steps.
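The paper describes the Classification Engine conceptually rather than as code. As a rough illustration of how the tier‑determination step might work, here is a minimal rule‑based sketch; the question identifiers, rule conditions, and rule matrix below are hypothetical, not the tool's published rule set:

```python
# Hypothetical sketch of a rule-based risk-tier engine.
# Tier order reflects the AIA's four risk levels, lowest to highest.
RISK_ORDER = ["minimal", "limited", "high", "unacceptable"]

# Each rule maps a set of answered conditions to a floor on the risk tier.
RULES = [
    ({"biometric_realtime": True, "public_space": True}, "unacceptable"),
    ({"biometric_realtime": True}, "high"),
    ({"interacts_with_humans": True}, "limited"),
]

def classify(answers: dict) -> str:
    """Return the highest tier whose rule conditions all match the answers."""
    tier = "minimal"
    for conditions, rule_tier in RULES:
        if all(answers.get(k) == v for k, v in conditions.items()):
            if RISK_ORDER.index(rule_tier) > RISK_ORDER.index(tier):
                tier = rule_tier
    return tier

print(classify({"biometric_realtime": True}))  # high
```

Because the rules only ever escalate the tier, no combination of answers can talk the engine down from a higher classification, which mirrors the deterministic, conservative behaviour the paper attributes to the tool.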
Interaction Between Components
The Legal Knowledge Base feeds the question set, while the Example Library is indexed by the same taxonomy, ensuring that every question has a concrete illustration. The Classification Engine acts as the glue, translating user responses into a risk tier using a deterministic rule set that can be updated as the AIA evolves or as new jurisprudence emerges.
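Since the rule set is deterministic and separate from the interface, one natural implementation keeps it as a versioned data file that legal teams can update without touching code. The schema below is an assumption for illustration, not the tool's actual format:

```python
import json

# Hypothetical: rules stored as versioned JSON so that amendments to the
# AIA or new jurisprudence only require a data update, not a code change.
RULES_V1 = json.loads("""
{
  "version": "2024-08",
  "rules": [
    {"question": "biometric_realtime", "answer": true, "tier": "high"},
    {"question": "interacts_with_humans", "answer": true, "tier": "limited"}
  ]
}
""")

def matching_tiers(answers: dict, ruleset: dict) -> list:
    """Return the tier of every rule triggered by the user's answers."""
    return [
        r["tier"]
        for r in ruleset["rules"]
        if answers.get(r["question"]) == r["answer"]
    ]

print(matching_tiers({"biometric_realtime": True}, RULES_V1))  # ['high']
```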
What Sets This Approach Apart
- Self‑service orientation: No legal expert is required to start the assessment; the tool educates users as they progress.
- Contextual grounding: Real‑world examples reduce interpretive variance, a common source of compliance errors.
- Iterative design: The DSR process incorporated feedback from 78 practitioners across multiple domains, ensuring the interface matches actual work practices.
Evaluation & Results
Study Design
The researchers conducted two evaluation phases:
- Phase 1 (Exploratory): 38 participants performed a “dry‑run” classification on a synthetic AI system without tool assistance, then repeated the task using the prototype.
- Phase 2 (Confirmatory): 40 participants from diverse industries (healthcare, finance, autonomous vehicles, HR) used the refined tool on real‑world case studies provided by their employers.
Metrics captured included classification accuracy (agreement with a panel of legal experts), time‑to‑decision, confidence rating, and perceived usability (System Usability Scale, SUS).
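The SUS score reported below follows the standard scoring procedure: ten items rated 1–5, odd‑numbered (positively worded) items contribute (response − 1), even‑numbered items contribute (5 − response), and the sum is multiplied by 2.5 to yield a 0–100 score. For reference:

```python
def sus_score(responses: list) -> float:
    """Standard System Usability Scale scoring for ten 1-5 responses."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        # Items 1, 3, 5, 7, 9 (index 0, 2, ...) are positively worded.
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# A strongly positive response pattern scores at the top of the scale:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```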
Key Findings
| Metric | Without Tool | With Tool |
|---|---|---|
| Classification Accuracy | 62 % | 89 % |
| Average Time per Assessment | 45 minutes | 18 minutes |
| Self‑Reported Confidence (1‑5) | 2.8 | 4.3 |
| SUS Score | — | 84 (excellent) |
Practitioners consistently highlighted that the Example Library “made the legal language tangible,” and that the step‑wise questioning “prevented me from overlooking hidden risk factors.” The tool also surfaced systematic misinterpretations—e.g., many users initially classified a facial‑recognition system as “limited risk” until the real‑time component was clarified, at which point the tool correctly escalated it to “high risk.”
Why the Findings Matter
These results demonstrate that a well‑designed, self‑service decision‑support system can dramatically improve both the speed and quality of AI risk classification under the AIA. The improvement in accuracy (27 percentage points) suggests that organizations can reduce the likelihood of costly regulatory breaches, while the reduction in assessment time frees up engineering resources for product development.
Why This Matters for AI Systems and Agents
For AI developers, compliance officers, and agents that orchestrate AI pipelines, the tool offers a practical bridge between legal obligations and technical implementation:
- Embedded compliance checks: Agents can invoke the Classification Engine via an API to automatically flag high‑risk components during CI/CD pipelines.
- Risk‑aware orchestration: Multi‑agent systems can adjust runtime behaviour (e.g., enable additional human‑in‑the‑loop safeguards) when a component is classified as high risk.
- Documentation automation: Exported compliance reports serve as machine‑readable artifacts for audit trails, simplifying downstream governance.
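As a concrete illustration of the first two points, a CI/CD step could call a classification endpoint and block deployment for tiers that require further review. The paper does not specify an API, so the endpoint, report fields, and gating policy below are assumptions:

```python
# Hypothetical CI gate; the report schema and blocking policy are
# assumptions, not part of the published tool.
BLOCKING_TIERS = {"high", "unacceptable"}

def gate(report: dict) -> bool:
    """Return True if the pipeline may proceed to deployment."""
    tier = report.get("risk_tier", "unknown")
    # Fail closed: unknown or missing classifications also block the release.
    return tier != "unknown" and tier not in BLOCKING_TIERS

# In a pipeline, `report` would come from a call such as:
#   requests.post("https://compliance.internal/classify", json=profile).json()
if __name__ == "__main__":
    sample = {"system": "resume-screener", "risk_tier": "high"}
    print("deploy allowed:", gate(sample))  # deploy allowed: False
```

Failing closed on an unknown tier is a deliberate choice here: a missing classification should trigger a human review rather than a silent deployment.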
In short, the tool transforms a traditionally manual, legal‑centric activity into a repeatable, software‑driven process that can be integrated into existing AI development lifecycles.
For organizations looking to operationalise AI governance, platforms such as the UBOS compliance platform can ingest the tool’s output and provide continuous monitoring across model updates.
What Comes Next
Current Limitations
While the study shows strong promise, several constraints remain:
- Static rule set: The Classification Engine relies on a fixed matrix; emerging AI techniques (e.g., foundation models) may require new rule extensions.
- Domain coverage: The participant pool, though diverse, did not include sectors such as public safety or defence, where risk definitions may differ.
- Legal evolution: The AIA is subject to future amendments and case law, necessitating ongoing updates to the Knowledge Base.
Future Research Directions
Potential avenues to extend this work include:
- Adaptive learning: Incorporating machine‑learning models that refine the rule matrix based on user feedback and regulatory outcomes.
- Cross‑jurisdictional extensions: Mapping the tool’s architecture to other emerging AI regulations (e.g., U.S. AI Executive Order, China’s AI Governance Guidelines).
- Integration with model‑cards: Linking risk classification directly to model documentation standards to create a unified compliance artifact.
Practical Next Steps for Practitioners
Organizations can start by piloting the decision‑support tool on a low‑stakes AI project, collecting internal feedback, and then scaling the approach across the portfolio. Coupling the tool with an enterprise governance platform—such as the UBOS AI Governance Hub—enables continuous risk monitoring as models evolve.
Conclusion
The EU AI Act marks a watershed moment for AI regulation, but its risk‑based taxonomy can be a moving target for engineers and compliance teams. The self‑service decision‑support tool evaluated by Schnitzer, Hoeving, and Zillner demonstrates that targeted, example‑rich guidance can dramatically improve classification accuracy while slashing assessment time. By embedding legal knowledge into an interactive workflow, the tool paves the way for automated, audit‑ready compliance that scales with modern AI development practices.
As the regulatory landscape continues to mature, tools that translate legal nuance into actionable engineering steps will become indispensable. The research offers a solid blueprint for building such systems, and its open‑source‑friendly design invites the broader community to iterate, adapt, and extend the approach to new jurisdictions and emerging AI paradigms.
Read the full study in the original arXiv paper and explore how your organization can embed risk‑aware AI governance into its product pipelines.