The European Union’s landmark Artificial Intelligence Act reaches a critical milestone on February 2, 2026, marking one year since prohibited AI practices became enforceable across all 27 member states. This anniversary triggers the European Commission’s mandated review under Article 112, potentially leading to expanded prohibitions.

Enforcement timeline

| Date | Milestone |
| --- | --- |
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices enforceable |
| February 2, 2026 | Commission Article 112 review triggered |
| August 2, 2026 | General-purpose AI (GPAI) transparency rules |
| August 2, 2027 | High-risk AI system rules fully enforceable; full application of the Act |

Prohibited AI practices

Since February 2, 2025, the following AI applications have been illegal across the EU:

Social scoring systems

AI systems that evaluate or classify individuals based on social behavior or personal characteristics, where the resulting score leads to detrimental treatment in social contexts unrelated to where the data was originally generated, or to treatment that is unjustified or disproportionate to the behavior.

Manipulative AI

Systems that deploy subliminal techniques, exploitative methods, or deceptive practices to materially distort behavior in ways that cause significant harm.

Vulnerability exploitation

AI that exploits vulnerabilities of specific groups based on age, disability, or socioeconomic situation to distort behavior and cause harm.

Real-time biometric identification

Remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions for:

  • Targeted searches for missing children
  • Prevention of imminent terrorist threats
  • Locating suspects of serious crimes

Emotion recognition

AI systems that infer emotions in workplaces and educational institutions (with medical/safety exceptions).

Predictive policing

AI predicting crime risk based solely on profiling, personality traits, or personal characteristics.

Facial recognition database creation

Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.

Penalty structure

| Violation Type | Maximum Penalty |
| --- | --- |
| Prohibited AI practices | 7% of global annual revenue or €35 million, whichever is higher |
| Other AI Act violations | 3% of global annual revenue or €15 million, whichever is higher |
| Supplying incorrect information to authorities | 1% of global annual revenue or €7.5 million, whichever is higher |
| SME penalties | Proportionally reduced caps |
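
To make the tiered structure concrete, the sketch below computes the statutory ceiling as the higher of the turnover percentage and the fixed amount. It is illustrative only: the tier keys are invented for this example, the figures are those in the table above, and the reduced SME caps are not modeled.

```python
# Illustrative only: the Act's three penalty tiers as (percentage, fixed cap).
# Actual fines are set case by case by regulators.
PENALTY_TIERS = {
    "prohibited_practice": (0.07, 35_000_000),      # 7% or EUR 35 million
    "other_violation": (0.03, 15_000_000),          # 3% or EUR 15 million
    "incorrect_information": (0.01, 7_500_000),     # 1% or EUR 7.5 million
}

def max_fine_eur(violation: str, worldwide_turnover_eur: float) -> float:
    """Statutory ceiling: the higher of the turnover percentage and the
    fixed amount for the given violation tier."""
    pct, cap = PENALTY_TIERS[violation]
    return max(pct * worldwide_turnover_eur, cap)

# A firm with EUR 1 billion in annual turnover faces a ceiling of
# max(0.07 * 1e9, 35e6) = EUR 70 million for a prohibited practice.
print(max_fine_eur("prohibited_practice", 1_000_000_000))  # 70000000.0
```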

Article 112 Commission review

The February 2, 2026 anniversary triggers the Commission’s mandated review of Article 5 prohibitions.

Review scope

| Area | Assessment |
| --- | --- |
| Prohibition effectiveness | Are the current bans achieving their goals? |
| Enforcement gaps | Are prohibited systems still being deployed? |
| Technological evolution | Do new AI capabilities require new prohibitions? |
| International developments | How do other jurisdictions compare? |

Potential outcomes

Following the review, the Commission has 12 months to propose legislative amendments. Possible expansions:

| Potential New Prohibition | Likelihood |
| --- | --- |
| Expanded biometric restrictions | Moderate |
| Deepfake generation limits | Under discussion |
| Autonomous weapons applications | Separate regulatory track |

Any expansion of the prohibitions would face Parliamentary and Council scrutiny before implementation, meaning the earliest enforcement of new prohibitions would be in 2027.

Enforcement status

As of early 2026, enforcement actions for prohibited practices remain limited due to:

| Factor | Impact |
| --- | --- |
| Regulatory infrastructure | Still being established in most member states |
| Detection complexity | Prohibited AI is difficult to identify in practice |
| Proactive compliance | Companies are discontinuing or redesigning systems |

Active investigations

Several high-profile investigations are reportedly underway involving:

  • Workplace emotion recognition systems in multinational corporations
  • Predictive policing algorithms used by several EU law enforcement agencies
  • Social scoring elements in employee management platforms

Workplace emotion recognition ban

The prohibition on emotion recognition in workplaces deserves special attention:

| Scenario | Status under AI Act |
| --- | --- |
| Webcam-based “engagement” detection | Prohibited |
| AI assessing if employees are “happy” | Prohibited |
| Emotion inference in hiring interviews | Prohibited |
| Biometric stress detection at work | Prohibited |
| Medical/safety exceptions | Permitted with safeguards |

Using AI to detect if employees are “happy” or “engaged” via webcam monitoring is now explicitly illegal. Companies that implemented such systems before February 2025 were required to discontinue them.
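
As a rough illustration of how the table's distinctions could be encoded in an internal compliance checklist, here is a hypothetical Python triage helper. It is not legal advice, and every name in it is invented for this sketch.

```python
# Hypothetical first-pass triage encoding the scenarios in the table above.
# Boundary cases still require legal interpretation.
from dataclasses import dataclass

@dataclass
class MonitoringFeature:
    name: str
    infers_emotions: bool            # infers individuals' emotional states
    context: str                     # "workplace", "education", or other
    medical_or_safety_purpose: bool  # the exception noted in the table

def triage(feature: MonitoringFeature) -> str:
    """Map a monitoring feature onto the table's prohibited/permitted split."""
    if feature.infers_emotions and feature.context in ("workplace", "education"):
        if feature.medical_or_safety_purpose:
            return "permitted with safeguards (medical/safety exception)"
        return "prohibited"
    return "outside this prohibition; assess under other risk categories"

webcam = MonitoringFeature("webcam engagement detection", True, "workplace", False)
print(triage(webcam))  # prohibited
```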

Enforcement challenges

| Challenge | Impact |
| --- | --- |
| Defining “emotion recognition” | Boundary cases require interpretation |
| Cross-border enforcement | Multinational companies face complexity |
| Technical detection | Identifying prohibited AI in deployed systems |
| Whistleblower reliance | Many violations surface through complaints |

Cybersecurity industry implications

The AI Act has direct implications for security vendors operating in the EU.

Systems requiring review

| Category | Concern |
| --- | --- |
| Behavioral analytics | Must not cross into emotion recognition or social scoring |
| Insider threat detection | Employee profiling scrutiny |
| Biometric authentication | Physical access control implications |
| Threat intelligence | Facial recognition or public data scraping |

ENISA guidance

The European Union Agency for Cybersecurity (ENISA) has published guidance clarifying that most cybersecurity AI applications fall under lower risk categories:

| Application | Risk Level |
| --- | --- |
| Automated threat detection | Not prohibited |
| Malware analysis | Not prohibited |
| Vulnerability scanning | Not prohibited |
| Network anomaly detection | Not prohibited |

These applications are not affected by the February 2025 prohibitions.

Coming deadlines

August 2026: GPAI transparency

General-purpose AI models, including foundation models, face new transparency obligations:

| Requirement | Scope |
| --- | --- |
| Technical documentation | Model capabilities and limitations |
| Training data summary | General description of training data |
| Copyright compliance | Respect for EU copyright law |
| Systemic risk assessment | For high-capability models |
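
A minimal sketch of how a provider might track these four items internally follows; the field names are assumptions for illustration, not a schema defined by the Act.

```python
# Illustrative documentation record mirroring the four items in the table above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GPAIDocumentation:
    model_name: str
    capabilities_and_limitations: str     # technical documentation
    training_data_summary: str            # general description of training data
    copyright_compliance_policy: str      # how EU copyright law is respected
    systemic_risk_assessment: Optional[str] = None  # high-capability models only

    def is_complete(self, high_capability_model: bool) -> bool:
        """Check that every field required for this model class is populated."""
        required = [
            self.capabilities_and_limitations,
            self.training_data_summary,
            self.copyright_compliance_policy,
        ]
        if high_capability_model:
            required.append(self.systemic_risk_assessment or "")
        return all(required)
```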

August 2027: High-risk AI

Full enforcement of rules for high-risk AI systems used in:

  • Critical infrastructure
  • Education and vocational training
  • Employment and worker management
  • Essential services access
  • Law enforcement
  • Border management
  • Justice administration

Industry response

Technology company compliance

| Company | Status |
| --- | --- |
| Microsoft | Confirmed EU compliance |
| Google | Confirmed EU compliance |
| Others | Restructuring AI features in EU products |

Trade association positions

| Organization | Position |
| --- | --- |
| DigitalEurope | Welcomes clarity; warns of national fragmentation |
| BSA | Urges additional technical guidance |

Compliance recommendations

For organizations using AI in the EU

| Action | Timeline |
| --- | --- |
| AI system inventory | Immediate |
| Risk classification | Before August 2027 |
| Prohibited use audit | Already required |
| High-risk preparation | 18-month runway |
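
As a starting point for the inventory and classification actions above, here is an illustrative Python record. The tier labels follow the Act's broad risk categories, but assigning a tier is a legal judgment; this sketch only records the outcome.

```python
# Illustrative inventory record for an AI system audit.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5: must already be discontinued
    HIGH = "high"               # conformity assessment needed by August 2027
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    deployed_in_eu: bool
    risk_tier: RiskTier
    accountable_owner: str      # team or person responsible for compliance

def needs_action(record: AISystemRecord) -> bool:
    """Flag systems needing immediate or scheduled compliance work."""
    return record.deployed_in_eu and record.risk_tier in (
        RiskTier.PROHIBITED,
        RiskTier.HIGH,
    )
```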

Self-assessment resources

The European Commission has released a self-assessment tool to help organizations determine which AI systems may be affected by the Act.

Global influence

The EU AI Act represents the world’s most comprehensive AI regulatory framework and is expected to influence similar legislation globally:

| Jurisdiction | Status |
| --- | --- |
| United Kingdom | AI Safety Institute approach, lighter regulation |
| United States | Executive order, sector-specific approach |
| Canada | AIDA proposed |
| Brazil | AI framework under development |
| China | AI regulations in effect |

The “Brussels Effect”, where EU regulations become de facto global standards, may apply to AI governance as companies find it easier to comply globally than to maintain separate regional systems.

Context

The AI Act’s first year of prohibited practice enforcement has been characterized by preparation rather than prosecution. Companies have largely anticipated the rules and adjusted accordingly, with the most obvious prohibited applications discontinued before enforcement began.

The more significant compliance challenge lies ahead with August 2027’s high-risk AI rules, which will require conformity assessments, technical documentation, and ongoing monitoring for a much broader range of systems.

For cybersecurity vendors, the Act creates both compliance obligations and market opportunity. AI-powered security tools must be designed and documented appropriately, but the regulatory clarity may advantage European-compliant vendors in a market increasingly concerned about AI governance.

Penalty comparison

The EU AI Act’s penalty structure exceeds even the GDPR’s:

| Regulation | Maximum Penalty |
| --- | --- |
| EU AI Act (prohibited AI) | 7% global revenue or €35M |
| GDPR (data protection) | 4% global revenue or €20M |
| EU AI Act (other violations) | 3% global revenue or €15M |
| EU AI Act (incorrect information) | 1% global revenue or €7.5M |

The significantly higher penalties for prohibited AI practices signal the EU’s prioritization of these issues.

Enforcement architecture

| Level | Authority | Powers |
| --- | --- | --- |
| EU | European AI Office | GPAI model evaluation, documentation requests, source code access |
| National | Designated competent authorities | Investigation, audits, penalties |
| Market | Market surveillance authorities | Product withdrawals, compliance orders |

Each EU member state must designate at least one national competent authority with full investigatory powers. The European AI Office has special authority over general-purpose AI models, including the ability to demand source code access.