The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, establishing the world’s first comprehensive legal framework for artificial intelligence. It applies to any organization that places AI systems on the EU market or deploys AI systems affecting people in the EU, regardless of where the organization is based.
Risk-Based Classification
The AI Act categorizes AI systems into four risk levels, with obligations increasing with the level of risk.
Unacceptable Risk (Prohibited)
Banned outright as of February 2, 2025:
- Social scoring by public or private actors
- Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Biometric categorization based on sensitive characteristics like race, political opinions, or sexual orientation
- Predictive policing based solely on profiling
- Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
- Subliminal or manipulative techniques that materially distort behavior, and exploitation of vulnerabilities related to age, disability, or socio-economic situation
High Risk
Subject to the most extensive compliance obligations, effective August 2, 2026:
| Category | Examples |
|---|---|
| Biometric identification | Remote biometric identification (non-real-time), biometric categorization |
| Critical infrastructure | AI managing electricity, gas, water, or transport safety systems |
| Education | AI determining access to education, evaluating learning outcomes, proctoring |
| Employment | AI for recruitment, screening, hiring decisions, performance evaluation, termination |
| Essential services | AI for credit scoring, insurance pricing, emergency service dispatch |
| Law enforcement | AI for risk assessment, polygraphs, evidence analysis, crime prediction |
| Migration and border control | AI for visa processing, asylum applications, border surveillance |
| Justice and democracy | AI assisting judicial decision-making, AI intended to influence elections or referendums |
Limited Risk
Subject to transparency obligations only. Chatbots must disclose that users are interacting with an AI system. Deepfakes and other synthetic content must be labeled as AI-generated. Emotion recognition and biometric categorization systems must inform the people exposed to them.
Minimal Risk
No specific obligations. This includes AI-powered spam filters, AI in video games, and inventory management systems.
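For teams building an internal register, the four tiers map naturally onto a small data type. Below is a minimal sketch; the enum values and example mappings are illustrative choices, not drawn from the Act's text:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative encoding)."""
    UNACCEPTABLE = "prohibited"  # banned since Feb 2, 2025
    HIGH = "high"                # full obligations from Aug 2, 2026
    LIMITED = "limited"          # transparency obligations only
    MINIMAL = "minimal"          # no specific obligations

# Example classifications based on the categories described above
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "resume screening tool": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
```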
Compliance Obligations for High-Risk AI
Providers of high-risk AI systems must implement the following; deployers carry lighter, use-focused duties such as following the instructions for use and ensuring human oversight:
| Requirement | Description |
|---|---|
| Risk management system | Continuous identification, analysis, and mitigation of risks throughout the AI system lifecycle |
| Data governance | Training data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete; examination for possible biases required |
| Technical documentation | Detailed documentation of system design, development, and intended use |
| Record-keeping | Automatic logging of AI system operations for traceability |
| Transparency | Clear instructions for use, including limitations and intended purpose |
| Human oversight | Systems must be designed to allow effective human oversight and intervention |
| Accuracy and robustness | Appropriate levels of accuracy, robustness, and cybersecurity |
| Conformity assessment | Self-assessment or third-party assessment depending on the category |
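Of these, record-keeping is the most directly implementable in software. The Act requires automatic logging but does not prescribe a log format, so the schema and field names below are assumptions; the sketch shows one way a deployer might emit an append-only, structured entry per automated decision for later traceability:

```python
import json
import logging
from datetime import datetime, timezone

# Append-only JSON-lines log; schema is illustrative, not mandated by the Act.
logging.basicConfig(filename="ai_decisions.jsonl", level=logging.INFO,
                    format="%(message)s")

def log_decision(system_id: str, input_ref: str, output: str,
                 model_version: str, operator: str) -> None:
    """Record one AI system decision for traceability."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which high-risk system acted
        "model_version": model_version,  # exact version, for reproducibility
        "input_ref": input_ref,          # reference to the input, not the data itself
        "output": output,                # the decision or score produced
        "operator": operator,            # human responsible for oversight
    }
    logging.info(json.dumps(entry))

log_decision("credit-scoring-v2", "application:48213", "score=640/reject",
             model_version="2.3.1", operator="analyst-17")
```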
General-Purpose AI (GPAI) Models
Foundation models and general-purpose AI have specific obligations effective August 2, 2025.
All GPAI models require technical documentation, copyright compliance, and transparency about training data. GPAI with systemic risk (models trained with more than 10^25 FLOPs) have additional obligations including model evaluation, adversarial testing, incident reporting, and cybersecurity measures.
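The 10^25 FLOP threshold can be sanity-checked with the widely used 6 × parameters × training-tokens approximation for dense transformer training compute. That heuristic comes from the research community, not from the Act, and the model figures below are hypothetical:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Estimate training compute with the common 6*N*D approximation."""
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
estimate = training_flops(70e9, 15e12)  # ~6.3e24 FLOPs, below the threshold
print(f"{estimate:.2e} FLOPs -> systemic risk: {estimate > SYSTEMIC_RISK_THRESHOLD}")
```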
OpenAI, Google, Meta, Anthropic, and Mistral are among the providers likely to be classified as systemic risk GPAI providers.
Enforcement Timeline
| Date | Milestone |
|---|---|
| Aug 1, 2024 | AI Act enters into force |
| Feb 2, 2025 | Prohibited AI practices ban takes effect |
| May 2, 2025 | Codes of practice for GPAI due |
| Aug 2, 2025 | GPAI model obligations take effect |
| Aug 2, 2026 | High-risk AI system obligations take effect |
| Aug 2, 2027 | Obligations for high-risk AI embedded in regulated products (medical devices, vehicles, aviation) |
Penalties
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | 35 million euros or 7% of global annual turnover |
| High-risk non-compliance | 15 million euros or 3% of global annual turnover |
| Incorrect information to authorities | 7.5 million euros or 1% of global annual turnover |
For most organizations, the applicable maximum is whichever of the two amounts is higher. For SMEs and startups, fines are capped at whichever is lower.
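In code, the cap reduces to a two-branch rule (a sketch, using the amounts from the table above and hypothetical turnover figures):

```python
def max_fine(fixed_eur: float, pct: float, turnover_eur: float,
             is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of the two caps for most
    undertakings, the lower of the two for SMEs and startups."""
    pct_amount = pct * turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited-practice violation (35M EUR / 7% tier):
print(max_fine(35e6, 0.07, 600e6))              # 42,000,000 -- 7% exceeds the fixed amount
print(max_fine(35e6, 0.07, 20e6, is_sme=True))  # 1,400,000 -- SMEs pay the lower cap
```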
Practical Steps
- Inventory all AI systems your organization develops, deploys, or procures, and classify each by risk level; a machine-readable register like the sketch below can anchor this step.
- Verify that no current or planned AI systems fall under the prohibited categories, since those rules are already in effect.
- If you develop or fine-tune foundation models, ensure technical documentation and copyright compliance by August 2025.
- For high-risk systems, begin building risk management, data governance, and human oversight frameworks now, ahead of the August 2026 deadline.
- Designate accountability for AI Act compliance and integrate it with existing data protection and cybersecurity governance.
- Monitor the guidelines, codes of practice, and standards published by the EU AI Office, which will continue to clarify obligations.
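As a starting point for the first two steps, here is a hedged sketch of such a register; the field names, deadline mapping, and example entries are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in an organization's AI-system inventory (illustrative schema)."""
    name: str
    purpose: str
    risk_tier: str                    # "prohibited" | "high" | "limited" | "minimal"
    role: str                         # "provider" | "deployer"
    compliance_deadline: Optional[date]

DEADLINES = {
    "prohibited": date(2025, 2, 2),   # ban already in force
    "high": date(2026, 8, 2),
    "limited": date(2026, 8, 2),      # transparency obligations
    "minimal": None,
}

inventory = [
    AISystemRecord("resume-screener", "candidate shortlisting", "high",
                   "deployer", DEADLINES["high"]),
    AISystemRecord("support-chatbot", "customer service", "limited",
                   "deployer", DEADLINES["limited"]),
]

# Anything in the prohibited tier must be decommissioned immediately.
banned = [s.name for s in inventory if s.risk_tier == "prohibited"]
assert not banned, f"Decommission immediately: {banned}"
```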