The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, establishing the world’s first comprehensive legal framework for artificial intelligence. It applies to any organization that places AI systems on the EU market or deploys AI systems affecting people in the EU, regardless of where the organization is based.

Risk-Based Classification

The AI Act categorizes AI systems into four risk levels, with obligations increasing by risk.

Unacceptable Risk (Prohibited)

Banned outright as of February 2, 2025:

  • Social scoring by public or private actors
  • Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions)
  • Emotion recognition in workplaces and educational institutions
  • Biometric categorization based on sensitive characteristics like race, political opinions, or sexual orientation
  • Predictive policing based solely on profiling
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases
  • Manipulation techniques that exploit vulnerabilities of specific groups

High Risk

Subject to the most extensive compliance obligations, effective August 2, 2026:

| Category | Examples |
| --- | --- |
| Biometric identification | Remote biometric identification (non-real-time), biometric categorization |
| Critical infrastructure | AI managing electricity, gas, water, or transport safety systems |
| Education | AI determining access to education, evaluating learning outcomes, proctoring |
| Employment | AI for recruitment, screening, hiring decisions, performance evaluation, termination |
| Essential services | AI for credit scoring, insurance pricing, emergency service dispatch |
| Law enforcement | AI for risk assessment, polygraphs, evidence analysis, crime prediction |
| Migration and border control | AI for visa processing, asylum applications, border surveillance |
| Justice and democracy | AI assisting judicial decisions, AI influencing elections or referendums |

Limited Risk

Subject to transparency obligations only. Chatbots must disclose to users that they are interacting with an AI system. Deepfakes and other synthetic content must be labeled as AI-generated. Emotion recognition systems must inform the people exposed to them that such a system is in use.

Minimal Risk

No specific obligations. This includes AI-powered spam filters, AI in video games, and inventory management systems.
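For compliance tooling, the four tiers map naturally onto a small taxonomy. Below is a minimal Python sketch; the enum and the example mapping are illustrative assumptions, not classifications made by the Act itself:

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "extensive compliance obligations"
    LIMITED = "transparency obligations only"
    MINIMAL = "no specific obligations"

# Illustrative mapping of example use cases to tiers (assumed, not exhaustive).
EXAMPLE_CLASSIFICATIONS = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "recruitment screening": RiskLevel.HIGH,
    "customer-facing chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for use_case, level in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {level.name} ({level.value})")
```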

Compliance Obligations for High-Risk AI

Most of these obligations fall on providers of high-risk AI systems; deployers have lighter, complementary duties, such as using systems according to their instructions and ensuring human oversight. Providers must implement:

| Requirement | Description |
| --- | --- |
| Risk management system | Continuous identification, analysis, and mitigation of risks throughout the AI system lifecycle |
| Data governance | Training data must be relevant, representative, and free of errors to the best extent possible; bias testing required |
| Technical documentation | Detailed documentation of system design, development, and intended use |
| Record-keeping | Automatic logging of AI system operations for traceability |
| Transparency | Clear instructions for use, including limitations and intended purpose |
| Human oversight | Systems must be designed to allow effective human oversight and intervention |
| Accuracy and robustness | Appropriate levels of accuracy, robustness, and cybersecurity |
| Conformity assessment | Self-assessment or third-party assessment, depending on the category |
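The record-keeping requirement calls for automatic, traceable logs of system operations, but the Act specifies the goal rather than a schema. A minimal Python sketch of what such an audit record might capture; the field names are illustrative assumptions, not regulatory requirements:

```python
import json
from datetime import datetime, timezone

def log_ai_event(system_id: str, action: str, model_version: str,
                 human_reviewer: str | None = None) -> str:
    """Append one traceability record as a JSON line (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which high-risk system acted
        "action": action,                  # what the system did
        "model_version": model_version,    # helps reproduce the decision later
        "human_reviewer": human_reviewer,  # evidence of human oversight, if any
    }
    line = json.dumps(record)
    with open("ai_audit.log", "a") as f:   # assumed append-only log file
        f.write(line + "\n")
    return line

log_ai_event("credit-scoring-v2", "loan_application_scored",
             model_version="2.3.1", human_reviewer="analyst_042")
```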

General-Purpose AI (GPAI) Models

Foundation models and general-purpose AI have specific obligations effective August 2, 2025.

All GPAI models require technical documentation, copyright compliance, and transparency about training data. GPAI models with systemic risk, presumed when cumulative training compute exceeds 10^25 FLOPs, carry additional obligations including model evaluation, adversarial testing, incident reporting, and cybersecurity measures.
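Whether a model crosses the 10^25 FLOPs presumption can be roughed out with the 6ND approximation from the ML literature (training compute ≈ 6 × parameters × training tokens). That heuristic is not part of the Act; a minimal sketch under that assumption:

```python
# Rough systemic-risk check using the 6ND heuristic:
# training FLOPs ~ 6 * parameters * training tokens.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act's presumption

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Illustrative figures: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.2e} FLOPs")  # ~6.30e+24
print("presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD
      else "below the 10^25 FLOPs presumption")
```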

OpenAI, Google, Meta, Anthropic, and Mistral are among the providers likely to be classified as systemic risk GPAI providers.

Enforcement Timeline

| Date | Milestone |
| --- | --- |
| Aug 1, 2024 | AI Act enters into force |
| Feb 2, 2025 | Prohibited AI practices ban takes effect |
| Aug 2, 2025 | GPAI model obligations take effect; Codes of Practice due |
| Aug 2, 2026 | High-risk AI system obligations take effect |
| Aug 2, 2027 | Obligations for high-risk AI embedded in regulated products (medical devices, vehicles, aviation) |
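A compliance tracker can reduce this timeline to a simple lookup of which obligations are already in force on a given date. A sketch using the dates above:

```python
from datetime import date

# Milestones from the enforcement timeline above.
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibited AI practices ban"),
    (date(2025, 8, 2), "GPAI model obligations"),
    (date(2026, 8, 2), "High-risk AI system obligations"),
    (date(2027, 8, 2), "High-risk AI in regulated products"),
]

def obligations_in_force(as_of: date) -> list[str]:
    """Return every milestone whose effective date has passed."""
    return [label for effective, label in MILESTONES if effective <= as_of]

print(obligations_in_force(date(2025, 9, 1)))
# ['AI Act enters into force', 'Prohibited AI practices ban', 'GPAI model obligations']
```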

Penalties

| Violation | Maximum Fine |
| --- | --- |
| Prohibited AI practices | 35 million euros or 7% of global annual turnover |
| High-risk non-compliance | 15 million euros or 3% of global annual turnover |
| Incorrect information to authorities | 7.5 million euros or 1% of global annual turnover |

For most companies, the applicable maximum is the higher of the fixed amount and the turnover percentage. For SMEs and startups, fines are capped at the lower of the two.
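The two-part fine formula is easy to get wrong in the SME case. A sketch of the arithmetic, using the prohibited-practices tier and hypothetical turnover figures:

```python
def max_fine(fixed_eur: float, turnover_pct: float,
             annual_turnover_eur: float, is_sme: bool) -> float:
    """Maximum AI Act fine: higher of the two amounts for large companies,
    lower of the two for SMEs and startups."""
    pct_amount = turnover_pct * annual_turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited-practices tier (35M euros / 7%) for two hypothetical firms:
print(max_fine(35e6, 0.07, 2e9, is_sme=False))  # 140000000.0 -> 7% of 2B turnover
print(max_fine(35e6, 0.07, 20e6, is_sme=True))  # 1400000.0 -> 7% of 20M turnover
```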

Practical Steps

Start with the following steps:

  • Inventory all AI systems your organization develops, deploys, or procures, and classify each by risk level (see the register sketch after this list).
  • Verify that no current or planned AI systems fall under prohibited categories, since those rules are already in effect.
  • If you develop or fine-tune foundation models, ensure technical documentation and copyright compliance by August 2025.
  • For high-risk systems, begin building risk management, data governance, and human oversight frameworks now, ahead of the August 2026 deadline.
  • Designate accountability for AI Act compliance and integrate it with existing data protection and cybersecurity governance.
  • Monitor the EU AI Office's guidelines, codes of practice, and standards, which will continue to clarify obligations.
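The inventory step translates directly into a structured register. A minimal sketch; the fields are assumptions about what a useful record would track, not a mandated format:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an organization's AI inventory (illustrative fields)."""
    name: str
    role: str              # "provider", "deployer", or "procurer"
    use_case: str
    risk_level: str        # "unacceptable", "high", "limited", "minimal"
    compliance_owner: str  # who is accountable for AI Act compliance
    notes: str = ""

inventory = [
    AISystemRecord("resume-screener", "deployer", "recruitment screening",
                   "high", "hr-compliance@example.com",
                   "high-risk obligations apply from Aug 2, 2026"),
    AISystemRecord("support-chatbot", "deployer", "customer support",
                   "limited", "it-governance@example.com",
                   "must disclose AI interaction to users"),
]

for record in inventory:
    print(f"{record.name}: {record.risk_level} risk, owner {record.compliance_owner}")
```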