The European Union’s landmark Artificial Intelligence Act reaches a critical milestone on February 2, 2026, marking one year since prohibited AI practices became enforceable across all 27 member states. This anniversary triggers the European Commission’s mandated review under Article 112, potentially leading to expanded prohibitions.
## Enforcement timeline
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices enforceable |
| February 2, 2026 | Commission Article 112 review triggered |
| August 2, 2026 | General-purpose AI (GPAI) transparency rules enforceable |
| August 2, 2027 | High-risk AI system rules fully enforceable; full application of the Act |
## Prohibited AI practices
Since February 2, 2025, the following AI applications have been illegal across the EU:
### Social scoring systems
AI systems that evaluate or classify individuals based on social behavior or personal characteristics, where the resulting score leads to detrimental treatment in contexts unrelated to the original data, or treatment disproportionate to the behavior.
### Manipulative AI
Systems that deploy subliminal, purposefully manipulative, or deceptive techniques to materially distort behavior in ways that cause or are likely to cause significant harm.
### Vulnerability exploitation
AI that exploits vulnerabilities related to age, disability, or socioeconomic situation to materially distort behavior in ways that cause significant harm.
### Real-time biometric identification
Remote, real-time biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions (each subject to prior authorization) for:
- Targeted searches for missing persons or victims of abduction and trafficking
- Prevention of a specific, imminent threat to life or of a terrorist attack
- Locating or identifying suspects of serious crimes listed in the Act
### Emotion recognition
AI systems that infer emotions in workplaces and educational institutions (with medical/safety exceptions).
### Predictive policing
AI predicting crime risk based solely on profiling, personality traits, or personal characteristics.
### Facial recognition database creation
Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
## Penalty structure
| Violation Type | Maximum Penalty |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover, whichever is higher |
| Other AI Act violations | €15 million or 3% of global annual turnover, whichever is higher |
| Supplying incorrect information to authorities | €7.5 million or 1% of global annual turnover, whichever is higher |
| SMEs and startups | Capped at whichever of the two amounts is lower |
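Because each cap is the higher of a fixed amount and a share of turnover (for SMEs, the lower), the effective maximum scales with company size. A minimal sketch of the arithmetic in Python, using a hypothetical provider with €2 billion in worldwide annual turnover:

```python
def max_fine(turnover_eur: float, pct: float, fixed_cap_eur: float,
             is_sme: bool = False) -> float:
    """Maximum administrative fine: the given percentage of worldwide
    annual turnover or the fixed amount, whichever is higher (for SMEs,
    the AI Act caps the fine at whichever of the two is lower)."""
    pct_amount = turnover_eur * pct / 100
    if is_sme:
        return min(pct_amount, fixed_cap_eur)
    return max(pct_amount, fixed_cap_eur)

# Hypothetical provider with €2 billion in worldwide annual turnover:
turnover = 2_000_000_000
print(max_fine(turnover, 7, 35_000_000))  # prohibited practices: €140M (7% > €35M cap)
print(max_fine(turnover, 3, 15_000_000))  # other violations: €60M
print(max_fine(turnover, 1, 7_500_000))   # incorrect information: €20M
```

At this size the turnover percentage dominates every fixed cap; for smaller firms the fixed amounts bind instead.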
## Article 112 Commission review
The February 2, 2026 anniversary triggers the Commission’s mandated review of Article 5 prohibitions.
### Review scope
| Area | Assessment |
|---|---|
| Prohibition effectiveness | Are current bans achieving goals? |
| Enforcement gaps | Are prohibited systems still deployed? |
| Technological evolution | Do new AI capabilities require new prohibitions? |
| International developments | How do other jurisdictions compare? |
### Potential outcomes
Following the review, the Commission may propose amendments. The Annex III high-risk list can be updated by delegated act, but any change to the Article 5 prohibitions requires a legislative proposal. Possible expansions:
| Potential New Prohibition | Outlook |
|---|---|
| Expanded biometric restrictions | Moderate |
| Deepfake generation limits | Under discussion |
| Autonomous weapons applications | Separate regulatory track |
Any expansion of the prohibitions faces European Parliament and Council scrutiny before taking effect, meaning the earliest enforcement of new prohibitions would be 2027.
## Enforcement status
As of early 2026, enforcement actions for prohibited practices remain limited due to:
| Factor | Impact |
|---|---|
| Regulatory infrastructure | Still being established in most member states |
| Detection complexity | Difficult to identify prohibited AI in practice |
| Proactive compliance | Companies discontinuing or redesigning systems |
### Active investigations
Several high-profile investigations are reportedly underway involving:
- Workplace emotion recognition systems in multinational corporations
- Predictive policing algorithms used by several EU law enforcement agencies
- Social scoring elements in employee management platforms
## Workplace emotion recognition ban
The prohibition on emotion recognition in workplaces deserves special attention:
| Scenario | Status under AI Act |
|---|---|
| Webcam-based “engagement” detection | Prohibited |
| AI assessing if employees are “happy” | Prohibited |
| Emotion inference in hiring interviews | Prohibited |
| Biometric stress detection at work | Prohibited |
| Medical/safety exceptions | Permitted with safeguards |
Using AI to detect if employees are “happy” or “engaged” via webcam monitoring is now explicitly illegal. Companies that implemented such systems before February 2025 were required to discontinue them.
### Enforcement challenges
| Challenge | Impact |
|---|---|
| Defining “emotion recognition” | Boundary cases require interpretation |
| Cross-border enforcement | Multinational companies face complexity |
| Technical detection | Identifying prohibited AI in deployed systems |
| Whistleblower reliance | Many violations surface through complaints |
## Cybersecurity industry implications
The AI Act has direct implications for security vendors operating in the EU.
### Systems requiring review
| Category | Concern |
|---|---|
| Behavioral analytics | Must not cross into emotion recognition or social scoring |
| Insider threat detection | Employee profiling scrutiny |
| Biometric authentication | Physical access control implications |
| Threat intelligence | Facial recognition or public data scraping |
### ENISA guidance
The European Union Agency for Cybersecurity (ENISA) has published guidance clarifying that most cybersecurity AI applications fall under lower risk categories:
| Application | Status under Article 5 |
|---|---|
| Automated threat detection | Not prohibited |
| Malware analysis | Not prohibited |
| Vulnerability scanning | Not prohibited |
| Network anomaly detection | Not prohibited |
These applications are not affected by the February 2025 prohibitions.
## Coming deadlines
### August 2026: GPAI transparency
General-purpose AI models, including foundation models, face new transparency obligations:
| Requirement | Scope |
|---|---|
| Technical documentation | Model capabilities and limitations |
| Training data summary | General description of training data |
| Copyright compliance | Respect for EU copyright law |
| Systemic risk assessment | For high-capability models |
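The Act prescribes what must be documented rather than a file format. As an illustration only, a provider could track the four obligations in a structured record like the sketch below; the class and field names are hypothetical, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class GPAIDocumentation:
    """Illustrative checklist for the August 2026 GPAI transparency items.
    Field names are hypothetical; the AI Act mandates the content, not
    this structure."""
    model_name: str
    capabilities_and_limitations: str  # technical documentation
    training_data_summary: str         # general description of training data
    copyright_policy: str              # how EU copyright law is respected
    systemic_risk: bool = False        # set for high-capability models
    risk_assessment: str = ""          # required when systemic_risk is True

    def missing_items(self) -> list[str]:
        """Return the transparency items that are still blank."""
        required = ["capabilities_and_limitations", "training_data_summary",
                    "copyright_policy"]
        if self.systemic_risk:
            required.append("risk_assessment")
        return [name for name in required if not getattr(self, name)]
```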
### August 2027: High-risk AI
Full enforcement of rules for high-risk AI systems used in:
- Critical infrastructure
- Education and vocational training
- Employment and worker management
- Essential services access
- Law enforcement
- Border management
- Justice administration
## Industry response
### Technology company compliance
| Company | Status |
|---|---|
| Microsoft | Confirmed EU compliance |
|  | Confirmed EU compliance |
| Others | Restructuring AI features in EU products |
### Trade association positions
| Organization | Position |
|---|---|
| DigitalEurope | Welcomes clarity; warns of national fragmentation |
| BSA | Urges additional technical guidance |
## Compliance recommendations
### For organizations using AI in the EU
| Action | Timeline |
|---|---|
| AI system inventory | Immediate |
| Risk classification | Before August 2027 |
| Prohibited use audit | Already required |
| High-risk preparation | 18-month runway |
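For the inventory and prohibited-use audit, a coarse first-pass triage can flag systems for closer legal review, as in the sketch below. The tag sets paraphrase the Act's categories and are illustrative only, not legal advice:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5: illegal since February 2025
    HIGH_RISK = "high-risk"    # Annex III domains: conformity work needed
    REVIEW = "needs review"    # classify before the high-risk deadline

# Coarse triage tags, paraphrased from the Act -- illustrative only.
PROHIBITED_USES = {"social scoring", "workplace emotion recognition",
                   "untargeted facial scraping", "profiling-only predictive policing"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "education", "employment",
                     "essential services", "law enforcement",
                     "border management", "justice administration"}

def triage(use_case: str, domain: str) -> RiskTier:
    """First-pass classification for one AI inventory entry."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    return RiskTier.REVIEW

print(triage("workplace emotion recognition", "employment"))  # RiskTier.PROHIBITED
print(triage("candidate screening", "employment"))            # RiskTier.HIGH_RISK
```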
### Self-assessment resources
The European Commission has released a self-assessment tool to help organizations determine which AI systems may be affected by the Act.
## Global influence
The EU AI Act represents the world’s most comprehensive AI regulatory framework and is expected to influence similar legislation globally:
| Jurisdiction | Status |
|---|---|
| United Kingdom | AI Security Institute (formerly AI Safety Institute), lighter-touch regulation |
| United States | Executive order, sector-specific approach |
| Canada | AIDA (Bill C-27) lapsed in 2025 |
| Brazil | AI framework under development |
| China | AI regulations in effect |
The “Brussels Effect”—where EU regulations become de facto global standards—may apply to AI governance as companies find it easier to comply globally than maintain separate systems.
## Context
The AI Act’s first year of prohibited practice enforcement has been characterized by preparation rather than prosecution. Companies have largely anticipated the rules and adjusted accordingly, with the most obvious prohibited applications discontinued before enforcement began.
The more significant compliance challenge lies ahead with August 2027’s high-risk AI rules, which will require conformity assessments, technical documentation, and ongoing monitoring for a much broader range of systems.
For cybersecurity vendors, the Act creates both compliance obligations and market opportunity. AI-powered security tools must be designed and documented appropriately, but the regulatory clarity may advantage European-compliant vendors in a market increasingly concerned about AI governance.
## Penalty comparison
The EU AI Act's penalty caps exceed even the GDPR's:
| Regulation | Maximum Penalty |
|---|---|
| EU AI Act (prohibited AI) | 7% of global turnover or €35M, whichever is higher |
| GDPR (data protection) | 4% of global turnover or €20M, whichever is higher |
| EU AI Act (other violations) | 3% of global turnover or €15M, whichever is higher |
| EU AI Act (incorrect information) | 1% of global turnover or €7.5M, whichever is higher |
The significantly higher penalties for prohibited AI practices signal the EU’s prioritization of these issues.
## Enforcement architecture
| Level | Authority | Powers |
|---|---|---|
| EU | European AI Office | GPAI model evaluation, documentation requests, source code access |
| National | Designated competent authorities | Investigation, audits, penalties |
| Market | Surveillance authorities | Product withdrawals, compliance orders |
Each EU member state must designate at least one national competent authority with full investigatory powers. The European AI Office has special authority over general-purpose AI models, including the ability to demand source code access.