Security researchers at Noma Labs disclosed a critical vulnerability in Docker’s Ask Gordon AI assistant that allows attackers to compromise Docker environments through malicious image metadata. The flaw, dubbed DockerDash, demonstrates how AI assistants can become attack vectors when integrated into developer tooling.

Vulnerability overview

AttributeDetails
NameDockerDash
DiscoveryNoma Labs (September 2025)
Affected productDocker Desktop, Docker CLI
FeatureAsk Gordon AI assistant (beta)
Attack techniqueMeta-context injection
ImpactRemote code execution, data exfiltration
Fixed versionDocker Desktop 4.50.0
Fix dateNovember 6, 2025

Timeline

DateEvent
September 17, 2025Noma Labs discovers and reports vulnerability
October 13, 2025Docker Security Team confirms vulnerability
November 6, 2025Docker Desktop 4.50.0 released with fixes
February 3, 2026Noma Labs publishes full technical details

How the attack works

DockerDash exploits Docker’s AI assistant through a technique Noma Labs calls “meta-context injection”—hijacking an AI’s reasoning process through malicious metadata.

Attack chain

StageAction
1Attacker creates Docker image with malicious metadata label
2Victim pulls/inspects the image using Ask Gordon
3Gordon AI reads and interprets the malicious instruction
4AI forwards instruction to MCP Gateway
5MCP Gateway executes through MCP tools
6Attacker achieves RCE or data exfiltration

The critical flaw

“A single malicious metadata label in a Docker image can be used to compromise a Docker environment through a three-stage attack… Every stage happens with zero validation.” — Noma Labs

The vulnerability exists because Ask Gordon processes image metadata without sanitization, and the Model Context Protocol (MCP) Gateway executes commands without validation.

Two attack paths

Path 1: Cloud/CLI systems (RCE)

FactorDetails
EnvironmentDocker CLI, cloud deployments
PermissionsFull tool execution
ImpactRemote code execution
SeverityCritical

On CLI and cloud systems, attackers can execute arbitrary code on the host system.

Path 2: Desktop applications (Data exfiltration)

FactorDetails
EnvironmentDocker Desktop
PermissionsRead-only (restricted)
ImpactSensitive data exfiltration
SeverityHigh

While Docker Desktop restricts Ask Gordon to read-only permissions, attackers can still weaponize read access to exfiltrate sensitive internal data about the victim’s environment.

Meta-context injection technique

The attack exploits how AI assistants process context:

Traditional prompt injectionMeta-context injection
Injects prompts via user inputInjects via metadata/labels
Requires user interactionTriggered by image operations
Visible in conversationHidden in image metadata
Targets chat interfaceTargets tool execution chain

Why it works

FactorVulnerability
AI reads metadataUnfiltered input to reasoning
MCP Gateway trusts AINo validation of AI requests
Tools execute blindlyNo confirmation before execution
Metadata is invisibleUsers don’t inspect labels

Impact assessment

Affected operations

OperationRisk
docker pull with Ask GordonImage metadata processed
docker inspect via AILabels interpreted as instructions
Image analysis queriesMetadata included in context
Container debuggingEnvironment data exposed

Data at risk

Data typeExposure risk
Environment variablesHigh
Mounted secretsHigh
Container configurationsHigh
Network topologyMedium
Build argumentsMedium

Docker’s mitigations

Docker implemented two key fixes:

Fix 1: URL rendering disabled

BeforeAfter
Ask Gordon rendered user-provided image URLsURL rendering blocked
Exfiltration via image requests possibleExfiltration path closed

Fix 2: Human-in-the-loop confirmation

BeforeAfter
MCP tools executed automaticallyExplicit user confirmation required
AI could trigger actions silentlyUser must approve tool invocations

Recommendations

Immediate actions

PriorityAction
CriticalUpdate to Docker Desktop 4.50.0 or later
CriticalUpdate Docker CLI to latest version
HighAudit images from untrusted sources
HighReview recent Ask Gordon interactions

Secure image management

ControlPurpose
Use trusted registriesReduce malicious image risk
Implement image scanningDetect malicious content
Review image labelsIdentify suspicious metadata
Limit Ask Gordon usageReduce attack surface

For organizations

PriorityAction
HighInventory Docker Desktop versions
HighPush updates to developer machines
MediumEstablish AI tool usage policies
MediumMonitor for unusual Docker activity

Broader implications

DockerDash highlights emerging risks in AI-integrated developer tools:

AI supply chain risks

RiskExample
Metadata poisoningMalicious labels in images
Context injectionHijacking AI reasoning
Tool execution abuseWeaponizing AI capabilities
Trust chain exploitationAI trusts data, tools trust AI

Affected tool categories

Tool typeSimilar risk
AI coding assistantsCode context injection
AI security toolsLog/alert manipulation
AI documentation toolsContent poisoning
AI debugging assistantsEnvironment data exposure

Detection

Indicators of compromise

IndicatorDetection method
Unusual Docker API callsAPI logging
Unexpected MCP tool invocationsAsk Gordon audit logs
Data exfiltration attemptsNetwork monitoring
Suspicious image pullsRegistry access logs

Image metadata inspection

Organizations should implement automated scanning for suspicious image labels:

Label patternRisk
Encoded/obfuscated contentPossible payload
URLs in labelsExfiltration risk
Command-like stringsInjection attempt
Excessive label contentContext manipulation

Context

DockerDash represents a new class of vulnerability emerging as AI assistants are integrated into developer tooling. The attack demonstrates that AI systems can be weaponized through their input processing—in this case, image metadata that developers rarely inspect.

The vulnerability also highlights the risks of the Model Context Protocol (MCP) and similar AI tool execution frameworks. When AI systems can invoke tools without validation, any input that influences AI reasoning becomes a potential attack vector.

Organizations adopting AI-powered developer tools should:

  • Treat AI input sources as untrusted
  • Require human confirmation for consequential actions
  • Implement monitoring for AI tool usage
  • Update promptly when security fixes are released

The rapid patching by Docker (within ~7 weeks of report) demonstrates responsible disclosure working effectively, but the underlying architectural risks in AI tooling remain an industry-wide concern.