EU AI Act — Annex III
High-risk AI enforcement starts 2 August 2026. €15M–€35M penalty exposure.
Independent void assessment available for credit scoring, EdTech, HR tech, and healthcare AI.
The AI Act applies in stages. Some obligations are already in force.
The Annex III high-risk window closes 2 August 2026.
In force
2 February 2025
Prohibited practices
Social scoring, real-time biometric ID, emotion recognition in workplaces and schools, subliminal manipulation. No transition period — violations already enforceable.
In force
2 August 2025
GPAI model rules
General-purpose AI: transparency obligations, technical documentation, copyright compliance. These obligations fall on providers of foundation models.
⚠ Enforcement begins
2 August 2026
Annex III high-risk AI
Conformity assessments, risk management, data governance, human oversight, and transparency obligations for all Annex III categories. This is the window.
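The staged dates above can be encoded as a simple lookup. A minimal sketch, assuming the three milestones listed in the timeline (the dictionary keys and function name are illustrative, not terms from the Act):

```python
from datetime import date

# Key application dates from the EU AI Act timeline above.
AI_ACT_MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),    # Art. 5 bans
    "gpai_rules": date(2025, 8, 2),              # general-purpose AI obligations
    "annex_iii_high_risk": date(2026, 8, 2),     # Annex III conformity duties
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones whose application date has already passed."""
    return [name for name, start in AI_ACT_MILESTONES.items() if today >= start]
```

For example, `obligations_in_force(date(2025, 9, 1))` returns `["prohibited_practices", "gpai_rules"]`: the Art. 5 bans and GPAI rules apply, while the Annex III window is still open.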
€35M
or 7% global annual turnover
Prohibited practices (Art. 5). Social scoring, biometric ID, subliminal manipulation. Already in force.
€15M
or 3% global annual turnover
High-risk AI non-compliance (Annex III). Insufficient risk management, missing conformity assessment, inadequate human oversight.
€7.5M
or 1.5% global annual turnover
Incorrect, incomplete, or misleading information to notified bodies or national authorities.
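The three tiers above share one formula: the fine cap is whichever is higher, the fixed amount or the percentage of global annual turnover (different rules apply to SMEs). A minimal sketch, with amounts taken from the tiers above (the tier names and helper function are illustrative):

```python
# Penalty tiers from above: (fixed cap in EUR, share of global annual turnover).
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),      # Art. 5 violations
    "high_risk_noncompliance": (15_000_000, 0.03),   # Annex III obligations
    "misleading_information": (7_500_000, 0.015),    # false info to authorities
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of the fixed cap and the turnover share."""
    fixed, share = PENALTY_TIERS[tier]
    return max(fixed, share * global_turnover_eur)
```

For a provider with €2B global turnover, `max_fine("prohibited_practices", 2_000_000_000)` is €140M, since 7% of turnover exceeds the €35M fixed cap.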
The profiling exception (Art. 6(3)) means that an Annex III system performing profiling of natural persons is always classified as high-risk, regardless of other mitigations. There is no derogation escape for credit scoring, employment screening, or criminal justice.
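The classification rule above can be sketched as a decision function. This is a simplified model of the Art. 6(3) logic, not legal advice; the function and parameter names are illustrative:

```python
def is_high_risk(in_annex_iii_category: bool,
                 performs_profiling: bool,
                 derogation_applies: bool) -> bool:
    """Simplified Art. 6(3) logic: profiling blocks the derogation outright."""
    if not in_annex_iii_category:
        return False               # outside Annex III scope entirely
    if performs_profiling:
        return True                # profiling exception: always high-risk
    return not derogation_applies  # otherwise the derogation can remove high-risk status
```

A credit-scoring system that profiles applicants stays high-risk even if a derogation is claimed: `is_high_risk(True, True, True)` returns `True`.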
§3
Education & EdTech
AI in admissions, learning outcome assessment, student monitoring, proctoring. Emotion recognition in schools already prohibited (Feb 2025).
§4
Employment & HR
CV screening, recruitment ranking, performance monitoring, termination decisions. Profiling exception applies — always high-risk, no derogation.
Paper 21B
Q1 2026
§5
Credit & Insurance
Credit scoring, insurance pricing, benefit eligibility. Profiling exception applies — always high-risk, no derogation.
§5
Healthcare
Clinical decision support, diagnostic AI, triage systems, treatment recommendations. Highest opacity risk due to medical complexity.
Paper 22
Wave 3
§6
Law Enforcement
Recidivism prediction, crime analytics, offender profiling. Profiling exception applies — always high-risk, no derogation.
Paper 29
Wave 3
§2
Critical Infrastructure
Digital infrastructure, road traffic, utilities management. Opacity in infrastructure AI presents systemic risk across dependent systems.
Paper 19
Wave 2