Varonis Threat Labs disclosed on February 1, 2026, a prompt injection technique called Reprompt that could steal sensitive data from Microsoft Copilot through a single click. The attack required no plugins, no extended user interaction, and maintained control even after the Copilot chat was closed. Microsoft patched it on January 13, 2026.

Vulnerability overview

AttributeDetails
NameReprompt
TypePrompt injection chain
Affected productMicrosoft Copilot Personal
Attack vectorMalicious URL with injected prompt
User interactionSingle click
PersistenceContinues after chat closed
DiscoveryVaronis Threat Labs
Patch dateJanuary 13, 2026

Timeline

DateEvent
August 2025Varonis reports vulnerability to Microsoft
January 13, 2026Microsoft deploys patch
February 1, 2026Public disclosure

The attack chain

Reprompt chains three techniques to bypass Microsoft’s safety controls:

1. Parameter-to-Prompt (P2P) injection

Copilot accepts prompts via the q parameter in URLs and executes them automatically when the page loads:

https://copilot.microsoft.com/?q=[malicious_prompt]

An attacker delivers this URL to a victim. When clicked, Copilot executes the embedded instructions without additional user interaction.

FactorRisk
Legitimate domainURL appears trustworthy
Automatic executionNo user confirmation required
No visual indicatorVictim sees normal Copilot interface

2. Double-request bypass

Microsoft’s safety filters prevent sensitive data from being leaked—but only on the first request. By instructing Copilot to repeat each task twice, attackers bypass protections on the second attempt.

RequestBehavior
FirstSafety filters apply, sensitive data removed
SecondFilters don’t re-apply, data leaks

In testing: Copilot removed sensitive information during the first request but revealed it on the second.

3. Chain-request technique

After the initial prompt, Copilot continues receiving instructions from the attacker’s server. Each server response generates the next request, enabling continuous and stealthy data exfiltration.

PhaseAction
1Initial prompt instructs Copilot to fetch URL
2Attacker server responds with next instruction
3Copilot executes instruction, exfiltrates data
4Server responds with another instruction
5Chain continues until terminated

Why this evades detection: Client-side security tools can’t determine what data is being exfiltrated because the real instructions are hidden in server follow-up requests—not the initial prompt.

Attack flow

StepAction
1Victim clicks legitimate-looking Microsoft Copilot URL
2P2P injection loads malicious instructions automatically
3Copilot processes injected prompt, triggering data collection
4Double-request bypass extracts sensitive data past safety filters
5Chain-request maintains persistent exfiltration channel
6Data encoded into URL parameters and sent to attacker server
7Attack persists even after Copilot chat is closed

What could be stolen

The vulnerability affected Microsoft Copilot Personal (free consumer version):

Data TypeRisk
Email contentsFull message text accessible to Copilot
Document textFiles Copilot has processed
Meeting notesCalendar and Teams data
Chat historyPrevious Copilot conversations
Search queriesUser’s Copilot interaction history
Personal informationAnything shared with Copilot

What was NOT affected

Microsoft 365 Copilot for Enterprise uses a different architecture with additional DLP controls and was not vulnerable to Reprompt.

ProductVulnerableReason
Copilot PersonalYesConsumer version, limited controls
Microsoft 365 CopilotNoEnterprise DLP, Purview auditing, admin restrictions

However, Varonis noted that the underlying prompt injection vector “deserves attention across all Copilot products.”

Why this matters

Reprompt demonstrates a new class of data exfiltration:

Traditional attackReprompt attack
Requires malware installationNo malware required—just a crafted prompt
Extended interaction neededSingle click sufficient
Visible to userInvisible—victim sees normal link
Ends when detectedPersists after chat closes
Blocked by traditional DLPBypasses DLP—data flows through legitimate AI

Traditional security tools are designed to detect malware, network intrusions, and unauthorized access. Prompt injection attacks exploit the AI assistant’s legitimate access to user data.

Microsoft’s response

“We appreciate Varonis Threat Labs for responsibly reporting this issue. We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach.” — Microsoft spokesperson

Patch details

Fix componentDescription
P2P validationAdditional checks on URL-injected prompts
Request chaining limitsRestrictions on consecutive external fetches
Safety filter persistenceFilters applied across request sequences
Monitoring improvementsEnhanced logging for suspicious patterns

The official Windows cumulative update KB5074109 (released January 13, 2026) and follow-on component patches adjusted Copilot behavior to close the specific chain demonstrated in the proof-of-concept.

What attackers could query

Before patching, an attacker could instruct Copilot to answer a wide range of questions about the victim:

Query typeExample
File activity”Summarize all of the files that the user accessed today”
Personal information”Where does the user live?”
Travel plans”What vacations does he have planned?”
Financial data”What are the user’s recent purchases?”
Communications”Summarize the user’s recent emails”
Work activity”What projects is the user working on?”

The chain-request technique allowed these queries to continue indefinitely, building a comprehensive profile of the victim.

Recommendations

For users

PriorityAction
CriticalUpdate Copilot—ensure you’re running post-January 13, 2026 version
HighBe cautious with Copilot links—even legitimate Microsoft domains can carry malicious prompts
HighDon’t share information that could be used for extortion
MediumReview Copilot permissions—understand what data Copilot can access
OngoingTreat AI assistant links with same scrutiny as any other link

For security teams

PriorityAction
HighMonitor Copilot-related traffic—watch for unusual outbound requests
HighInclude prompt injection in AI risk assessments
HighEvaluate enterprise vs. consumer AI tools for organizational use
MediumImplement URL filtering for suspicious Copilot links
OngoingTrain users on AI assistant link risks

For AI vendors

PriorityAction
CriticalTreat all external inputs (including URLs) as untrusted
CriticalEnsure security checks persist across multiple interactions
HighImplement rate limiting on external data fetches
HighLog and monitor for chain-request patterns
OngoingRegular security audits of prompt handling

Varonis guidance

“Avoid opening links from unknown sources, especially those related to AI assistants, even if they appear to link to a legitimate domain. Avoid sharing personal information in chats or any other information that could be used for ransom or extortion.”

Broader implications

AI assistant attack surface

VectorRisk
URL parameter injectionAutomatic prompt execution
Document-based injectionPrompts hidden in processed files
Email-based injectionMalicious content in messages Copilot reads
Calendar injectionPrompts in meeting descriptions

Why prompt injection is hard to fix

ChallengeImpact
Feature vs. vulnerabilityURL parameters are intended functionality
Context dependencyHard to distinguish malicious from legitimate prompts
User expectationsUsers expect AI to follow instructions
Data access requirementsAI assistants need broad access to be useful

Detection guidance

IndicatorMeaning
Copilot making unexpected external requestsPossible chain-request attack
URL parameters with encoded promptsP2P injection attempt
Repeated similar queries in quick successionDouble-request bypass
Data appearing in URL parametersExfiltration via URL encoding

Context

Reprompt is an early example of prompt injection attacks against production AI systems. As organizations deploy AI assistants with access to sensitive data, this attack surface will expand.

The distinction between enterprise and consumer versions is critical: enterprise deployments with proper DLP controls provide meaningful protection, while consumer versions may lack these safeguards.

Security teams should begin treating prompt injection as a first-class threat category alongside traditional vulnerabilities—and users should understand that AI assistant links carry unique risks even when they point to legitimate domains.

Comparison to other AI attacks

Reprompt joins a growing list of AI assistant vulnerabilities:

AttackTargetTechniqueImpact
RepromptMicrosoft Copilot PersonalP2P + double-request + chain-requestData exfiltration
Indirect prompt injectionVarious LLMsHidden prompts in documentsInstruction hijacking
ASCII smugglingMicrosoft 365 CopilotInvisible Unicode in emailsData theft via hyperlinks
Plugin exploitationChatGPT, CopilotMalicious plugin codeVarious

What distinguishes Reprompt:

  • No plugins required — Uses native Copilot functionality
  • Single click — No extended user interaction needed
  • Persistence — Continues after chat is closed
  • Invisible to user — Victim sees normal Copilot interface

Timeline comparison

PhaseTraditional phishingReprompt attack
DeliveryEmail with malicious attachmentEmail with Copilot URL
ExecutionUser runs malwareUser clicks link
VisibilityAntivirus may detectNo malware to detect
PersistenceRegistry, scheduled tasksCopilot session
Detection difficultyMediumHigh

The legitimate-infrastructure nature of Reprompt makes it particularly challenging to detect and block.