Russia’s APT28 (Fancy Bear) has deployed malware that queries large language models (LLMs) to dynamically generate attack commands, marking the first documented use of AI-powered malware in live cyber operations. The malware families—LAMEHUG and PROMPTSTEAL—use Hugging Face’s API to generate reconnaissance and data theft commands, representing a significant evolution in offensive cyber capabilities.

Campaign overview

| Attribute | Details |
|---|---|
| Threat actor | APT28 / Fancy Bear / UAC-0001 |
| Attribution | Russia’s GRU (Unit 26165) |
| Malware families | LAMEHUG, PROMPTSTEAL |
| AI model used | Qwen2.5-Coder-32B-Instruct |
| API platform | Hugging Face |
| Primary target | Ukraine (defense sector) |
| Discovery | CERT-UA (July 2025) |
| Significance | First LLM-powered malware in live operations |

Timeline

| Date | Event |
|---|---|
| July 10, 2025 | CERT-UA discovers LAMEHUG |
| July 2025 | CERT-UA attributes to UAC-0001 (APT28) |
| Late 2025 | Google confirms PROMPTSTEAL variant targeting Ukraine |
| January 2026 | Multiple security firms publish analysis |
| February 2026 | Broader AI malware family documentation |

How LAMEHUG works

Attack chain

| Phase | Action |
|---|---|
| 1 | Malware deployed via spearphishing |
| 2 | Malware contains base64-encoded attack objectives |
| 3 | Objectives sent as prompts to Hugging Face API |
| 4 | LLM (Qwen2.5-Coder-32B-Instruct) generates commands |
| 5 | Malware executes AI-generated commands |
| 6 | Data collected and exfiltrated |

Technical mechanism

| Component | Details |
|---|---|
| Prompt delivery | Base64-encoded text descriptions of objectives |
| API authentication | ~270 tokens for Hugging Face access |
| Model queried | Qwen2.5-Coder-32B-Instruct |
| Response format | Executable command sequences |
| Execution | Immediate on target system |
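The prompt-delivery mechanism above can be sketched as follows. This is a defender-oriented illustration, not APT28’s actual code: the objective text and API token are hypothetical placeholders, and the LLM query is stubbed out (the request that *would* be sent is built but never transmitted).

```python
import base64
import json

# Hypothetical tasking string as it might be embedded in the binary,
# base64-encoded so the plain-text objective evades trivial string scans.
ENCODED_OBJECTIVE = base64.b64encode(
    b"List basic system and domain information and save it to a text file."
).decode()

# Model reportedly queried via the Hugging Face Inference API.
MODEL_ID = "Qwen/Qwen2.5-Coder-32B-Instruct"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"

def build_request(encoded_objective: str, api_token: str) -> dict:
    """Decode the embedded objective and shape it into an API request.

    Stubbed: returns the request that would be sent, rather than sending it.
    """
    prompt = base64.b64decode(encoded_objective).decode()
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {api_token}"},
        "body": json.dumps({"inputs": prompt}),
    }

request = build_request(ENCODED_OBJECTIVE, api_token="hf_xxx")  # placeholder token
print(request["url"])
print(json.loads(request["body"])["inputs"])
```

Because the objective travels as an innocuous text prompt and the response arrives as freshly generated commands, neither the request nor the reply need contain any fixed byte pattern a signature engine could anchor on.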

Why LLM-generated commands matter

| Traditional malware | LAMEHUG approach |
|---|---|
| Hardcoded commands | Dynamically generated |
| Static detection signatures | Polymorphic output |
| Fixed capabilities | Adaptive commands |
| Built-in APIs | AI-created scripts |
| Predictable behavior | Variable execution |

By using an LLM to generate commands, the malware can evade signature-based detection because the actual commands are created at runtime rather than embedded in the malware.

PROMPTSTEAL variant

Capabilities

| Feature | Description |
|---|---|
| Masquerade | Poses as image generation tool |
| Background execution | Runs reconnaissance silently |
| Dynamic scripts | New code generated per execution |
| Persistence | Avoids static code patterns |

Google’s assessment

Google Threat Intelligence confirmed:

“Those attacks were the first time [Google] had seen malware querying an LLM in the wild.”

The PROMPTSTEAL variant specifically uses Hugging Face to generate Windows commands for information collection, enabling attackers to maintain access without triggering defenses looking for specific code patterns.

Broader AI malware ecosystem

Google and other researchers have identified five new AI-powered malware families:

| Malware | AI capability |
|---|---|
| LAMEHUG | LLM command generation |
| PROMPTSTEAL | LLM-based reconnaissance |
| PROMPTFLUX | Uses Google Gemini for code regeneration |
| PROMPTLOCK | AI-assisted evasion |
| QUIETVAULT | Dynamic script generation |

PROMPTFLUX characteristics

| Feature | Description |
|---|---|
| AI model | Google Gemini |
| Capability | Self-regenerating code |
| Purpose | Detection evasion |
| Impact | Polymorphic at scale |

Attribution

CERT-UA assessment

| Attribute | Details |
|---|---|
| Tracking name | UAC-0001 |
| Western name | APT28 / Fancy Bear |
| Government affiliation | GRU (Russian military intelligence) |
| Specific unit | Unit 26165 / 85th GTsSS |
| Confidence | High |

Supporting evidence

| Evidence type | Details |
|---|---|
| Infrastructure overlap | Matches known APT28 C2 |
| Target selection | Consistent with Russian interests |
| TTP alignment | Matches documented APT28 operations |
| Timing | Coincides with Ukraine military operations |

Why Ukraine as testing ground

| Factor | Significance |
|---|---|
| Active conflict | Operational pressure to innovate |
| Target-rich environment | Government, military, defense |
| Lower risk | Retaliation constraints |
| Proof of concept | Validate before wider deployment |
| Historical pattern | Previous Russian cyber innovation tested in Ukraine |

Researchers note:

“Ukraine has historically served as a testing ground for Russian cyber capabilities, making it an ideal location for PoC deployments.”

Defensive challenges

Detection difficulties

| Challenge | Impact |
|---|---|
| Dynamic commands | No static signatures |
| Legitimate API use | Hugging Face traffic appears benign |
| Variable output | Different commands each execution |
| In-memory execution | Limited disk artifacts |

Behavioral indicators

| Indicator | Detection opportunity |
|---|---|
| Hugging Face API calls | Network monitoring |
| Unusual process spawning | EDR behavioral analysis |
| Reconnaissance patterns | UEBA anomaly detection |
| Token usage patterns | API monitoring |

Recommendations

For security teams

| Priority | Action |
|---|---|
| Critical | Monitor for Hugging Face API traffic from endpoints |
| High | Implement behavioral detection for reconnaissance |
| High | Block unnecessary AI platform access |
| Medium | Review LLM API usage policies |
| Ongoing | Update threat models for AI-powered attacks |

For organizations in targeted sectors

| Priority | Action |
|---|---|
| Critical | Assume targeting if operating in Ukraine/defense |
| High | Deploy advanced EDR with behavioral analysis |
| High | Segment networks to limit lateral movement |
| Medium | Train SOC on AI malware indicators |
| Ongoing | Share threat intelligence via ISACs |

For AI platform operators

| Priority | Action |
|---|---|
| High | Implement abuse detection for API usage |
| High | Monitor for automated/scripted access patterns |
| Medium | Consider authentication strengthening |
| Ongoing | Collaborate with threat researchers |

Detection opportunities

Network indicators

| Indicator | Detection |
|---|---|
| Hugging Face API calls | Proxy/firewall logging |
| Unusual model queries | API traffic analysis |
| Base64 in requests | Content inspection |
| High-frequency API access | Rate limiting alerts |
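As a minimal illustration of the network-side detections above, the following sketch scans proxy log lines for Hugging Face inference endpoints combined with base64-like request bodies. The log format, sample entries, and the 24-character threshold are assumptions for illustration; a real deployment would express this as a rule in existing proxy/SIEM tooling.

```python
import re

# Assumed, simplified proxy log format: "<client_ip> <method> <url> <body>"
SAMPLE_LOGS = [
    "10.0.0.5 GET https://www.example.com/ -",
    "10.0.0.7 POST https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct TGlzdCBiYXNpYyBzeXN0ZW0gaW5mb3JtYXRpb24=",
]

HF_API = re.compile(r"https://api-inference\.huggingface\.co/", re.I)
# A long unbroken base64-alphabet run in a request body is a weak but useful signal.
B64_RUN = re.compile(r"^[A-Za-z0-9+/]{24,}={0,2}$")

def suspicious(line: str) -> bool:
    """Flag lines that hit a Hugging Face API endpoint with a base64-like body."""
    parts = line.split()
    if len(parts) < 4:
        return False
    _, _, url, body = parts[:4]
    return bool(HF_API.search(url)) and bool(B64_RUN.match(body))

hits = [line for line in SAMPLE_LOGS if suspicious(line)]
for line in hits:
    print("ALERT:", line.split()[0], "-> Hugging Face API call with base64-like body")
```

In practice either signal alone is noisy (many developers legitimately call Hugging Face); the value lies in correlating the two, and in asking whether the source endpoint has any business reason to query an LLM at all.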

Endpoint indicators

| Indicator | Detection |
|---|---|
| Process spawning patterns | EDR monitoring |
| Reconnaissance command execution | Command-line logging |
| Data staging behavior | File system monitoring |
| Unusual PowerShell/cmd activity | Script block logging |
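A similarly minimal sketch for the endpoint side: score command lines by how many common reconnaissance utilities they chain together, which EDR behavioral rules encode in far richer form. The process events, the parent name `AI_generator.exe` (echoing PROMPTSTEAL’s image-tool masquerade), the pattern list, and the threshold of two are all illustrative assumptions.

```python
# Common Windows reconnaissance commands; an illustrative subset,
# not a complete detection rule.
RECON_PATTERNS = ("whoami", "systeminfo", "net user", "nltest", "tasklist", "ipconfig /all")

# Assumed (parent process, spawned command line) pairs from endpoint telemetry.
PROCESS_EVENTS = [
    ("explorer.exe", "notepad.exe report.txt"),
    ("AI_generator.exe", "cmd.exe /c systeminfo & tasklist & net user"),
]

def recon_score(cmdline: str) -> int:
    """Count distinct reconnaissance patterns present in one command line."""
    lowered = cmdline.lower()
    return sum(p in lowered for p in RECON_PATTERNS)

# Several recon commands chained in a single spawn is a strong behavioral signal,
# regardless of whether the commands were hardcoded or LLM-generated.
alerts = [(parent, cmd) for parent, cmd in PROCESS_EVENTS if recon_score(cmd) >= 2]
for parent, cmd in alerts:
    print(f"ALERT: {parent} spawned recon chain: {cmd}")
```

This is the key defensive insight against LLM-generated payloads: the individual command strings vary per execution, but the *behavior* (reconnaissance, staging, exfiltration) does not, so behavioral scoring survives where signatures fail.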

Context

LAMEHUG and PROMPTSTEAL represent a paradigm shift in malware development. By outsourcing command generation to large language models, attackers gain:

  1. Polymorphism at scale: Every execution can produce different commands
  2. Reduced development burden: No need to hardcode complex functionality
  3. Evasion advantage: Signature-based detection becomes ineffective
  4. Adaptive capabilities: Malware can respond to different environments

This is the realization of long-predicted AI-augmented cyber attacks. While security researchers have theorized about AI malware for years, APT28’s deployment of LAMEHUG in live operations against Ukraine marks the transition from theory to reality.

The use of publicly accessible AI platforms (Hugging Face) as attack infrastructure also creates attribution and takedown challenges—the same platforms used by millions of legitimate developers are now being weaponized by nation-state actors.

Organizations should expect AI-powered malware techniques to proliferate rapidly as other threat actors adopt similar approaches. Traditional signature-based defenses are insufficient; behavioral analysis and anomaly detection become essential for detecting this new class of threats.