Is Your AI System High-Risk?
An AI system is classified as high-risk if it is used in contexts where errors or biases could significantly harm individuals' rights, safety, or livelihoods. The AI Act defines two routes into that classification.
Approach 1: Safety Component
AI systems that are safety components of products covered by existing EU product safety legislation, where that legislation requires the product to undergo third-party conformity assessment.
Approach 2: Listed Use Cases (Annex III)
| Category | Examples |
|---|---|
| Biometrics | Remote biometric identification, emotion recognition |
| Critical Infrastructure | AI managing electricity, water, gas, traffic |
| Education | AI determining access, exam proctoring, student assessment |
| Employment | CV screening, interview assessment, hiring decisions |
| Essential Services | Credit scoring, insurance pricing, emergency dispatch |
| Law Enforcement | Risk assessment tools, evidence analysis |
| Migration/Border | Visa assessment, asylum processing |
| Justice | Systems assisting judicial authorities |
The Quick Classification Test
| If Your AI Does This... | Classification |
|---|---|
| Screens job applications | High-risk (Employment) |
| Assesses creditworthiness | High-risk (Essential Services) |
| Recommends products | Not high-risk |
| Generates marketing copy | Not high-risk |
| Analyzes contracts | Usually not high-risk |
| Detects fraud in payments | Potentially high-risk |
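As a rough first-pass screen (not legal advice), the Annex III check can be captured as a simple lookup keyed to the categories above. The Python sketch below is illustrative only: the category keys and keyword mapping are simplified assumptions, not an authoritative reading of the Act.

```python
# Illustrative sketch only: a minimal Annex III screening helper.
# The categories and keyword mapping are simplified assumptions.

ANNEX_III_CATEGORIES = {
    "biometrics": ["remote biometric identification", "emotion recognition"],
    "critical_infrastructure": ["electricity", "water", "gas", "traffic"],
    "education": ["exam proctoring", "student assessment", "admissions"],
    "employment": ["cv screening", "interview assessment", "hiring"],
    "essential_services": ["credit scoring", "insurance pricing", "emergency dispatch"],
    "law_enforcement": ["risk assessment", "evidence analysis"],
    "migration_border": ["visa assessment", "asylum processing"],
    "justice": ["judicial decision support"],
}

def screen_use_case(description: str) -> list[str]:
    """Return the Annex III categories a use-case description may fall under."""
    text = description.lower()
    return [
        category
        for category, keywords in ANNEX_III_CATEGORIES.items()
        if any(keyword in text for keyword in keywords)
    ]

# Example: flags the Employment category for a CV screening tool.
print(screen_use_case("Automated CV screening for job applicants"))
# -> ['employment']
```

A keyword screen like this can only flag candidates for proper legal review; it cannot clear a system as not high-risk.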
The Seven Requirements for High-Risk AI
Requirement 1: Risk Management System (Article 9)
A continuous process to identify, analyze, and mitigate risks throughout your AI system's lifecycle. Document all identified risks, implement measures, test under foreseeable misuse conditions, and maintain ongoing assessment.
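One lightweight way to make this continuous process concrete is a living risk register per system. The sketch below shows one possible record structure; the field names and the example entry are illustrative assumptions, not a format the Act prescribes.

```python
# Sketch of a risk register entry supporting an Article 9-style
# risk management process. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str                      # e.g. "model underperforms for one applicant group"
    affected_rights: list[str]            # e.g. ["non-discrimination"]
    likelihood: str                       # "low" | "medium" | "high"
    severity: str                         # "low" | "medium" | "high"
    mitigations: list[str] = field(default_factory=list)
    foreseeable_misuse: bool = False      # identified during misuse testing?
    last_reviewed: date = field(default_factory=date.today)  # review is ongoing

register: list[RiskEntry] = []
register.append(RiskEntry(
    risk_id="R-001",
    description="Screening model scores older applicants lower on average",
    affected_rights=["non-discrimination", "access to employment"],
    likelihood="medium",
    severity="high",
    mitigations=["bias audit per release", "human review of rejections"],
))
```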
Requirement 2: Data Governance (Article 10)
| Element | What You Need |
|---|---|
| Data sources | Where did training data come from? |
| Data collection | How was it collected? |
| Data preparation | What preprocessing was applied? |
| Bias assessment | How did you check for and address bias? |
| Data gaps | What limitations exist in your dataset? |
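A practical way to keep these answers auditable is a machine-readable datasheet stored next to each training dataset. The structure below is a sketch under that assumption; the keys and example values are invented for illustration.

```python
# Illustrative dataset datasheet answering the Article 10 questions.
# Keys and values are hypothetical examples, not a mandated schema.
dataset_record = {
    "dataset_id": "applicants-2025-q3",
    "sources": ["internal ATS exports", "licensed job-board data"],
    "collection_method": "batch export, candidate consent recorded",
    "preprocessing": ["deduplication", "PII redaction", "tokenisation"],
    "bias_assessment": {
        "protected_attributes_checked": ["age", "gender"],
        "method": "disparate impact ratio per attribute",
        "findings": "one age band below threshold; reweighting applied",
    },
    "known_gaps": ["under-representation of part-time applicants"],
}
```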
Requirement 3: Technical Documentation (Article 18)
Comprehensive records covering design, development, and operation, retained for 10 years after the system is placed on the market or put into service. Documentation for a system launched in 2026 must remain accessible until 2036.
Requirement 4: Automatic Logging (Article 19)
Logging capabilities appropriate to the system's intended purpose. At a minimum: the period of each use, the reference database checked, the input data, and the identification of the persons verifying results. Logs must be retained for at least six months.
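As a rough sketch of what capturing those minimum fields could look like at inference time, using Python's standard logging module (the field names and JSON layout are assumptions):

```python
# Sketch: structured log record covering the minimum fields named above.
# The JSON layout and field names are assumptions for illustration.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("hr_ai.audit")
logging.basicConfig(level=logging.INFO)

def log_use(input_ref: str, reference_db: str, verifier: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "reference_database": reference_db,                   # database checked against
        "input_data_ref": input_ref,                          # pointer, not raw PII
        "verified_by": verifier,                              # person verifying the output
    }
    logger.info(json.dumps(record))

log_use(input_ref="application/38412", reference_db="candidates-2025", verifier="hr.analyst.17")
```

The six-month retention itself would be enforced at the log storage layer, not in application code.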
Requirement 5: Transparency (Article 13)
Clear documentation for deployers covering the provider's identity, the system's characteristics and intended purpose, reasonably foreseeable misuse, human oversight instructions, and expected performance metrics.
Requirement 6: Human Oversight (Article 14)
Humans must be able to understand the system's outputs, decide not to use it, and intervene in or interrupt its operation. The key word is "effectively": oversight has to work in practice, not just on paper.
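In practice, "effectively" tends to mean oversight is wired into the decision path rather than bolted on afterwards. The snippet below sketches one way to route low-confidence or adverse outputs to a human before they take effect; the threshold and helper names are assumptions for illustration.

```python
# Sketch: a human-in-the-loop gate before an automated decision is applied.
# The 0.85 threshold and helper names are illustrative assumptions.
REVIEW_THRESHOLD = 0.85

def queue_for_human_review(candidate_id: str, proposed: str) -> str:
    # Placeholder: a real system would create a review task and block
    # the automated outcome until a human accepts, overrides, or halts it.
    return "pending_human_review"

def decide(candidate_id: str, model_score: float, model_decision: str) -> str:
    if model_score < REVIEW_THRESHOLD or model_decision == "reject":
        # Route to a reviewer instead of applying the output automatically.
        return queue_for_human_review(candidate_id, model_decision)
    return model_decision

print(decide("application/38412", model_score=0.62, model_decision="advance"))
# -> 'pending_human_review'
```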
Requirement 7: Accuracy, Robustness, Cybersecurity (Article 15)
Appropriate levels of accuracy, resilience to errors, faults, and inconsistencies, and protection against attempts to exploit system vulnerabilities.
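One way to make "appropriate levels of accuracy" operational is to declare target metrics in your technical documentation and alert when live performance drifts below them. A minimal sketch, assuming declared metrics and thresholds of your own choosing:

```python
# Sketch: compare live performance against metrics declared in the
# technical documentation. The thresholds are illustrative assumptions.
DECLARED_METRICS = {"accuracy": 0.90, "false_positive_rate": 0.05}

def check_drift(observed: dict) -> list[str]:
    alerts = []
    if observed.get("accuracy", 1.0) < DECLARED_METRICS["accuracy"]:
        alerts.append("accuracy below declared level")
    if observed.get("false_positive_rate", 0.0) > DECLARED_METRICS["false_positive_rate"]:
        alerts.append("false positive rate above declared level")
    return alerts

print(check_drift({"accuracy": 0.87, "false_positive_rate": 0.04}))
# -> ['accuracy below declared level']
```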
The Conformity Assessment
| Pathway | When It Applies | What It Involves |
|---|---|---|
| Self-Assessment | Most high-risk Annex III systems | Internal procedures, documentation review |
| Third-Party | Remote biometric ID and certain sectoral systems | Notified Body evaluation |
The Real Timeline: 8–14 Months
Phase 1: Assessment (4–8 weeks)
AI inventory, risk classification, gap analysis, resource planning.
Phase 2: Technical Implementation (12–20 weeks)
Risk management system, data governance framework, logging infrastructure, human oversight, cybersecurity.
Phase 3: Documentation (8–12 weeks)
Technical documentation, instructions for use, risk management records, training records.
Phase 4: Conformity Assessment (8–16 weeks)
Quality management setup, internal assessment, Notified Body (if required), registration.
If you have not started, you are already behind. Work backward from August 2, 2026: with an 8–14 month programme, you should have started by Q2 2025.
Deployer Obligations
Not building AI? If you use a third-party high-risk AI system, you still have obligations:
| Obligation | What It Means |
|---|---|
| Technical measures | Implement according to provider instructions |
| Human oversight | Assign qualified persons to monitor |
| Data quality | Ensure input data is relevant and sufficiently representative (sketch below) |
| Monitoring | Watch for risks, report to provider |
| Staff training | Ensure users understand AI limitations |
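For the data-quality row in particular, a deployer can add a lightweight check that rejects inputs falling outside the profile the provider documented. The sketch below assumes a hypothetical input schema and ranges:

```python
# Sketch: a deployer-side input check against the provider's documented
# input profile. Required fields and ranges are illustrative assumptions.
REQUIRED_FIELDS = {"years_experience", "role_applied_for", "skills"}

def validate_input(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the input looks usable."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    years = record.get("years_experience")
    if isinstance(years, (int, float)) and not 0 <= years <= 60:
        problems.append("years_experience outside documented range 0-60")
    return problems

issues = validate_input({"role_applied_for": "data engineer", "skills": ["python"]})
# -> ['missing field: years_experience']
```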
Startup Support Provisions
Regulatory Sandboxes (Article 57)
Controlled testing environments with guidance from regulators, reduced fees, and priority processing.
SME-Specific Provisions (Article 62)
Reduced conformity assessment fees, faster processing, tailored guidance, and dedicated information campaigns.
Penalties
| Violation | Maximum Penalty |
|---|---|
| Non-compliance with high-risk requirements | €15M or 3% of global turnover |
| Providing incorrect information | €7.5M or 1% of global turnover |
The Bottom Line
The startups that treat compliance as a competitive advantage, not just a legal burden, will be the ones customers trust. Classify your systems. Assess your gaps. Plan your timeline. Start now.
Related: Product Liability Directive 2026: Software and AI now liable | GDPR reforms and AI training



