EU AI Act Compliance Guide
Most obligations apply from 2 August 2026. Understand the risk tiers, key obligations, and how to prepare your AI systems for compliance with the world's first comprehensive AI regulation.
What is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the European Union's landmark legislation establishing a harmonised legal framework for artificial intelligence. It applies to providers, deployers, importers, distributors, and product manufacturers placing or putting AI systems into service in the EU market — regardless of where those companies are established.
The regulation takes a risk-based approach, imposing requirements proportional to the potential harm an AI system could cause. Non-compliance can result in fines of up to €35 million or 7% of global annual turnover.
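For concreteness, the headline penalty cap works out to the greater of the two figures. A minimal sketch (not legal advice; `max_fine_eur` is a hypothetical helper name, and lower caps apply to less serious infringements):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Headline penalty cap under the EU AI Act: the *greater* of
    EUR 35 million or 7% of worldwide annual turnover.
    Illustrative sketch only, not legal advice."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion turnover faces a cap of EUR 70 million:
print(max_fine_eur(1_000_000_000))  # → 70000000.0
```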
The Four Risk Tiers
The EU AI Act classifies all AI systems into one of four risk categories, each with a different compliance burden.
Unacceptable Risk
BANNED: AI systems deemed a clear threat to fundamental rights are prohibited outright.
- Subliminal manipulation or deceptive techniques
- Exploitation of vulnerabilities of specific groups
- Social scoring by public or private actors
- Real-time remote biometric ID in public spaces (with narrow exceptions)
- Emotion recognition in workplaces and schools
High Risk
STRICT OBLIGATIONS: Systems posing significant risk to health, safety, or fundamental rights. Permitted but subject to extensive requirements before market placement.
- Critical infrastructure management (water, electricity, transport)
- Education and vocational training (admissions, grading)
- Employment: CV screening, interview analysis, task allocation
- Essential private/public services: credit scoring, benefits assessment
- Law enforcement and border control
- Administration of justice and democratic processes
Limited Risk
TRANSPARENCY OBLIGATIONS: Systems that interact with people must disclose that the user is interacting with an AI, or that content is AI-generated.
- Chatbots and conversational AI
- Deepfake generation systems
- AI-generated text published for public information
Minimal Risk
FEW OBLIGATIONS: The vast majority of AI systems fall here — spam filters, AI-enabled video games, inventory management tools. No mandatory requirements, though voluntary codes of conduct are encouraged.
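As a rough mental model, the four-tier triage above can be sketched in code. The use-case labels and the `classify` helper below are illustrative inventions of this guide; only Article 5 (prohibited practices) and Annex III (high-risk use cases) are authoritative.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (Article 5)"
    HIGH = "strict obligations before market placement (Annex III)"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory requirements"

# Hypothetical use-case labels mapped to tiers, for illustration only.
TIER_BY_USE_CASE = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Defaulting to MINIMAL is for illustration only: a real
    # assessment requires legal review against Article 5 and Annex III.
    return TIER_BY_USE_CASE.get(use_case, RiskTier.MINIMAL)
```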
Key Obligations for High-Risk Systems
Providers of high-risk AI systems must satisfy all of the following before placing a system on the EU market.
Risk Management System
Establish, implement, document and maintain a continuous risk management process throughout the entire lifecycle of the AI system.
Data Governance
Training, validation and testing data must meet quality criteria: relevance, representativeness, freedom from errors, and completeness.
Technical Documentation
Detailed technical documentation must be drawn up before the system is placed on the market and kept up to date throughout its lifecycle.
Human Oversight
Systems must be designed to allow effective oversight by natural persons during the period of use, including the ability to intervene or halt.
Accuracy & Cybersecurity
Systems must achieve an appropriate level of accuracy, robustness, and cybersecurity, and must perform consistently throughout their lifecycle.
Conformity Assessment
Before market placement, providers must undergo a conformity assessment — either self-assessment or third-party audit, depending on the use case.
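The six obligations above can be tracked as a simple pre-market checklist. This is a sketch under our own naming, not statutory terminology, and ticking every box is an internal tracking signal, not a legal determination of conformity.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Pre-market obligations for a high-risk AI system, tracked as
    booleans. Field names are shorthand, not statutory terms."""
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    human_oversight: bool = False
    accuracy_and_cybersecurity: bool = False
    conformity_assessment: bool = False

    def ready_for_market(self) -> bool:
        # All six obligations must be satisfied before EU market placement.
        return all(vars(self).values())
```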
Enforcement Timeline
August 2024 — Regulation enters into force
The EU AI Act officially entered into force on 1 August 2024, 20 days after publication in the Official Journal.
February 2025 — Prohibited practices banned
Unacceptable-risk AI systems (Article 5) are prohibited as of 2 February 2025. Governance bodies and the AI Office are established.
August 2025 — GPAI model obligations
Rules for general-purpose AI (GPAI) models apply from 2 August 2025, including transparency obligations and codes of practice.
August 2026 — High-risk system obligations
Full obligations for high-risk AI systems (Annex III) apply from 2 August 2026. Notified bodies must be designated.
August 2027 — Remaining high-risk systems
From 2 August 2027, obligations extend to high-risk AI systems that are safety components of products already regulated by EU product safety law.
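The staged timeline can be captured as a small lookup that answers "which milestones already apply today?". The dates reflect the published application schedule of Regulation (EU) 2024/1689; verify against the Official Journal before relying on them.

```python
from datetime import date

# Key application dates of Regulation (EU) 2024/1689 (our summary of
# the published timeline; verify against the Official Journal).
MILESTONES = {
    date(2024, 8, 1): "Regulation enters into force",
    date(2025, 2, 2): "Prohibited practices (Article 5) banned",
    date(2025, 8, 2): "GPAI model obligations apply",
    date(2026, 8, 2): "High-risk (Annex III) obligations apply",
    date(2027, 8, 2): "Obligations for high-risk safety components of regulated products",
}

def milestones_in_force(today: date) -> list[str]:
    """Return, in chronological order, the milestones already applicable."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]
```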
Official Sources & Further Reading
Primary regulatory texts and official guidance referenced in this guide.
EU AI Act — Full Regulation Text (EUR-Lex)
Regulation (EU) 2024/1689 — complete legal text as published in the EU Official Journal

European Commission — AI Act Hub (European Commission)
Official EC page on the AI Act regulatory framework, implementation status, and guidance documents

EU AI Office (European Commission)
The EU body responsible for overseeing GPAI models and coordinating with national market surveillance authorities

Article-by-Article Navigator (Future of Life Institute)
Plain-language breakdown of every article in the EU AI Act with explanatory notes and cross-references

Article 5 — Prohibited AI Practices (AI Act Navigator)
Complete list of AI applications unconditionally banned in the EU, in force since February 2025

Annex III — High-Risk AI Categories (AI Act Navigator)
The definitive list of use cases classified as high-risk under Annex III, requiring full compliance obligations

Automate Your EU AI Act Compliance
ComplyLayer maps your AI systems to the correct risk tier, generates required technical documentation, and tracks your compliance status in real time.
14-day free trial · No credit card required