Russia’s APT28 (Fancy Bear) has deployed malware that queries large language models (LLMs) to dynamically generate attack commands, marking the first documented use of AI-powered malware in live cyber operations. The malware families—LAMEHUG and PROMPTSTEAL—use Hugging Face’s API to generate reconnaissance and data theft commands, representing a significant evolution in offensive cyber capabilities.
Campaign overview
| Attribute | Details |
|---|---|
| Threat actor | APT28 / Fancy Bear / UAC-0001 |
| Attribution | Russia’s GRU (Unit 26165) |
| Malware families | LAMEHUG, PROMPTSTEAL |
| AI model used | Qwen2.5-Coder-32B-Instruct |
| API platform | Hugging Face |
| Primary target | Ukraine (defense sector) |
| Discovery | CERT-UA (July 2025) |
| Significance | First LLM-powered malware in live operations |
Timeline
| Date | Event |
|---|---|
| July 10, 2025 | CERT-UA discovers LAMEHUG |
| July 2025 | CERT-UA attributes to UAC-0001 (APT28) |
| Late 2025 | Google confirms PROMPTSTEAL variant targeting Ukraine |
| January 2026 | Multiple security firms publish analysis |
| February 2026 | Broader AI malware family documentation |
How LAMEHUG works
Attack chain
| Phase | Action |
|---|---|
| 1 | Malware deployed via spearphishing |
| 2 | Malware contains base64-encoded attack objectives |
| 3 | Objectives sent as prompts to Hugging Face API |
| 4 | LLM (Qwen2.5-Coder-32B-Instruct) generates commands |
| 5 | Malware executes AI-generated commands |
| 6 | Data collected and exfiltrated |
Technical mechanism
| Component | Details |
|---|---|
| Prompt delivery | Base64-encoded text descriptions of objectives |
| API authentication | ~270 tokens for Hugging Face access |
| Model queried | Qwen2.5-Coder-32B-Instruct |
| Response format | Executable command sequences |
| Execution | Immediate on target system |
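To make the mechanism concrete, the sketch below approximates the request flow from a defender's vantage point: a base64-encoded objective is decoded and submitted as a prompt to the Hugging Face Inference API, and the model's reply comes back as text the malware would treat as commands. The endpoint form, prompt wording, token handling, and parameters are illustrative assumptions rather than strings recovered from LAMEHUG samples, and the sketch only prints the response instead of executing it.

```python
import base64
import os
import requests

# Hypothetical stand-ins: real samples reportedly embed their own encoded objectives and API tokens.
HF_TOKEN = os.environ.get("HF_TOKEN", "")
MODEL = "Qwen/Qwen2.5-Coder-32B-Instruct"
API_URL = f"https://api-inference.huggingface.co/models/{MODEL}"

# The binary stores a text description of the objective, base64-encoded, not the commands themselves.
encoded_objective = base64.b64encode(
    b"Create a list of commands that gather basic system and hardware information"
)
prompt = base64.b64decode(encoded_objective).decode()

# A single HTTPS POST to a legitimate AI platform: this is the traffic defenders actually see.
response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}"},
    json={"inputs": prompt, "parameters": {"max_new_tokens": 256}},
    timeout=30,
)

# LAMEHUG is reported to execute the returned text; here it is only printed for inspection.
print(response.json())
```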
Why LLM-generated commands matter
| Traditional malware | LAMEHUG approach |
|---|---|
| Hardcoded commands | Dynamically generated |
| Static detection signatures | Polymorphic output |
| Fixed capabilities | Adaptive commands |
| Built-in APIs | AI-created scripts |
| Predictable behavior | Variable execution |
By using an LLM to generate commands, the malware can evade signature-based detection because the actual commands are created at runtime rather than embedded in the malware.
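A toy illustration of why this matters for detection, using an invented embedded string: a byte signature written for common reconnaissance commands finds nothing to match in a sample that carries only a base64-encoded objective, because the commands themselves do not exist until the model returns them at runtime.

```python
import base64
import re

# A traditional static signature: match known reconnaissance command strings inside the binary.
signature = re.compile(rb"systeminfo|tasklist|whoami")

# What a LAMEHUG-style sample embeds instead: an encoded objective (illustrative string only).
embedded_blob = base64.b64encode(b"Describe commands that collect system and hardware details")

print(signature.search(embedded_blob))           # None: no command strings to match
print(base64.b64decode(embedded_blob).decode())  # the objective only appears after decoding
```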
PROMPTSTEAL variant
Capabilities
| Feature | Description |
|---|---|
| Masquerade | Poses as image generation tool |
| Background execution | Runs reconnaissance silently |
| Dynamic scripts | New code generated per execution |
| Persistence | Avoids static code patterns |
Google’s assessment
Google Threat Intelligence confirmed:
“Those attacks were the first time [Google] had seen malware querying an LLM in the wild.”
The PROMPTSTEAL variant specifically uses Hugging Face to generate Windows commands for information collection, enabling attackers to maintain access without triggering defenses looking for specific code patterns.
Broader AI malware ecosystem
Google and other researchers have identified five new AI-powered malware families:
| Malware | AI capability |
|---|---|
| LAMEHUG | LLM command generation |
| PROMPTSTEAL | LLM-based reconnaissance |
| PROMPTFLUX | Uses Google Gemini for code regeneration |
| PROMPTLOCK | AI-assisted evasion |
| QUIETVAULT | Dynamic script generation |
PROMPTFLUX characteristics
| Feature | Description |
|---|---|
| AI model | Google Gemini |
| Capability | Self-regenerating code |
| Purpose | Detection evasion |
| Impact | Polymorphic at scale |
Attribution
CERT-UA assessment
| Attribute | Details |
|---|---|
| Tracking name | UAC-0001 |
| Western name | APT28 / Fancy Bear |
| Government affiliation | GRU (Russian military intelligence) |
| Specific unit | Unit 26165 / 85th GTsSS |
| Confidence | High |
Supporting evidence
| Evidence type | Details |
|---|---|
| Infrastructure overlap | Matches known APT28 C2 |
| Target selection | Consistent with Russian interests |
| TTP alignment | Matches documented APT28 operations |
| Timing | Coincides with Ukraine military operations |
Why Ukraine as testing ground
| Factor | Significance |
|---|---|
| Active conflict | Operational pressure to innovate |
| Target-rich environment | Government, military, defense |
| Lower risk | Retaliation constraints |
| Proof of concept | Validate before wider deployment |
| Historical pattern | Previous Russian cyber innovation tested in Ukraine |
Researchers note:
“Ukraine has historically served as a testing ground for Russian cyber capabilities, making it an ideal location for PoC deployments.”
Defensive challenges
Detection difficulties
| Challenge | Impact |
|---|---|
| Dynamic commands | No static signatures |
| Legitimate API use | Hugging Face traffic appears benign |
| Variable output | Different commands each execution |
| In-memory execution | Limited disk artifacts |
Behavioral indicators
| Indicator | Detection opportunity |
|---|---|
| Hugging Face API calls | Network monitoring |
| Unusual process spawning | EDR behavioral analysis |
| Reconnaissance patterns | UEBA anomaly detection |
| Token usage patterns | API monitoring |
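One way to operationalize these indicators is to correlate them, since a host that contacts an AI inference API and then spawns reconnaissance-style processes shortly afterward is far more suspicious than either event alone. The sketch below assumes a simplified, hypothetical event feed of (timestamp, host, event type, detail) tuples; the field layout, domain list, and time window are placeholders to adapt to a real EDR or SIEM schema.

```python
from datetime import datetime, timedelta

# Hypothetical, pre-normalized telemetry: (timestamp, host, event_type, detail).
events = [
    (datetime(2025, 7, 10, 9, 0), "WS-042", "network", "api-inference.huggingface.co"),
    (datetime(2025, 7, 10, 9, 1), "WS-042", "process", "cmd.exe /c systeminfo"),
    (datetime(2025, 7, 10, 9, 2), "WS-042", "process", "cmd.exe /c tasklist /v"),
]

AI_API_DOMAINS = ("huggingface.co",)
RECON_MARKERS = ("systeminfo", "tasklist", "whoami", "ipconfig", "wmic")
WINDOW = timedelta(minutes=10)

# Collect AI API contacts per host, then flag recon process creation inside the window.
api_contacts = [(ts, host) for ts, host, kind, detail in events
                if kind == "network" and detail.endswith(AI_API_DOMAINS)]

for ts, host, kind, detail in events:
    if kind != "process" or not any(m in detail for m in RECON_MARKERS):
        continue
    for api_ts, api_host in api_contacts:
        if host == api_host and api_ts <= ts <= api_ts + WINDOW:
            print(f"ALERT {host}: '{detail}' within {WINDOW} of AI API contact")
```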
Recommendations
For security teams
| Priority | Action |
|---|---|
| Critical | Monitor for Hugging Face API traffic from endpoints |
| High | Implement behavioral detection for reconnaissance |
| High | Block unnecessary AI platform access |
| Medium | Review LLM API usage policies |
| Ongoing | Update threat models for AI-powered attacks |
For organizations in targeted sectors
| Priority | Action |
|---|---|
| Critical | Assume targeting if operating in Ukraine/defense |
| High | Deploy advanced EDR with behavioral analysis |
| High | Segment networks to limit lateral movement |
| Medium | Train SOC on AI malware indicators |
| Ongoing | Share threat intelligence via ISACs |
For AI platform providers
| Priority | Action |
|---|---|
| High | Implement abuse detection for API usage |
| High | Monitor for automated/scripted access patterns |
| Medium | Consider authentication strengthening |
| Ongoing | Collaborate with threat researchers |
Detection opportunities
Network indicators
| Indicator | Detection |
|---|---|
| Hugging Face API calls | Proxy/firewall logging |
| Unusual model queries | API traffic analysis |
| Base64 in requests | Content inspection |
| High-frequency API access | Rate limiting alerts |
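As a starting point on the network side, the sketch below screens proxy log lines for Hugging Face endpoints and for long base64-looking request payloads. The log format, field positions, and thresholds are invented for the example; the same checks would normally be wired into an existing proxy or firewall log pipeline.

```python
import base64
import binascii
import re

HF_PATTERN = re.compile(r"\b(?:[a-z0-9-]+\.)*huggingface\.co\b")
B64_PATTERN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")   # long base64-looking runs

def looks_like_base64(candidate: str) -> bool:
    """Return True if the candidate decodes cleanly as base64."""
    try:
        base64.b64decode(candidate, validate=True)
        return True
    except (binascii.Error, ValueError):
        return False

def scan_proxy_line(line: str) -> list[str]:
    """Flag endpoint traffic to Hugging Face and embedded base64 blobs in one log line."""
    findings = []
    if HF_PATTERN.search(line):
        findings.append("endpoint traffic to Hugging Face API")
    if any(looks_like_base64(blob) for blob in B64_PATTERN.findall(line)):
        findings.append("base64-encoded payload in request")
    return findings

# Hypothetical log line, built here so the base64 blob is well formed.
blob = base64.b64encode(b"collect system information and installed software list").decode()
print(scan_proxy_line(
    f'10.0.4.17 POST https://api-inference.huggingface.co/models/Qwen "{blob}"'
))
```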
Endpoint indicators
| Indicator | Detection |
|---|---|
| Process spawning patterns | EDR monitoring |
| Reconnaissance command execution | Command-line logging |
| Data staging behavior | File system monitoring |
| Unusual PowerShell/cmd activity | Script block logging |
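On the endpoint side, command-line logs can be screened for clusters of reconnaissance commands launched by a single parent process, which reflects how LLM-driven tradecraft tends to surface better than any single command match. The sketch assumes process-creation records (for example, Sysmon Event ID 1) already parsed into dictionaries; the parent process name, marker list, and threshold are illustrative.

```python
from collections import defaultdict

# Parsed process-creation records (e.g., Sysmon Event ID 1); names and values are illustrative.
process_events = [
    {"parent": "image_gen_tool.exe (pid 4412)", "cmdline": "cmd.exe /c systeminfo"},
    {"parent": "image_gen_tool.exe (pid 4412)", "cmdline": "cmd.exe /c wmic product get name"},
    {"parent": "image_gen_tool.exe (pid 4412)", "cmdline": "cmd.exe /c dir C:\\Users /s"},
    {"parent": "explorer.exe (pid 1032)", "cmdline": "notepad.exe report.txt"},
]

RECON_MARKERS = ("systeminfo", "tasklist", "whoami", "wmic", "ipconfig", "net user", "dir ")
THRESHOLD = 3   # a burst of recon commands from one parent is the pattern of interest

recon_by_parent = defaultdict(list)
for event in process_events:
    if any(marker in event["cmdline"].lower() for marker in RECON_MARKERS):
        recon_by_parent[event["parent"]].append(event["cmdline"])

for parent, commands in recon_by_parent.items():
    if len(commands) >= THRESHOLD:
        print(f"ALERT: {parent} spawned {len(commands)} reconnaissance commands")
```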
Context
LAMEHUG and PROMPTSTEAL represent a paradigm shift in malware development. By outsourcing command generation to large language models, attackers gain:
- Polymorphism at scale: Every execution can produce different commands
- Reduced development burden: No need to hardcode complex functionality
- Evasion advantage: Signature-based detection becomes ineffective
- Adaptive capabilities: Malware can respond to different environments
This is the realization of long-predicted AI-augmented cyber attacks. While security researchers have theorized about AI malware for years, APT28’s deployment of LAMEHUG in live operations against Ukraine marks the transition from theory to reality.
The use of publicly accessible AI platforms (Hugging Face) as attack infrastructure also creates attribution and takedown challenges—the same platforms used by millions of legitimate developers are now being weaponized by nation-state actors.
Organizations should expect AI-powered malware techniques to proliferate rapidly as other threat actors adopt similar approaches. Traditional signature-based defenses are insufficient; behavioral analysis and anomaly detection become essential for detecting this new class of threats.