Varonis Threat Labs disclosed on February 1, 2026, a prompt injection technique called Reprompt that could steal sensitive data from Microsoft Copilot through a single click. The attack required no plugins, no extended user interaction, and maintained control even after the Copilot chat was closed. Microsoft patched it on January 13, 2026.
Vulnerability overview
| Attribute | Details |
|---|---|
| Name | Reprompt |
| Type | Prompt injection chain |
| Affected product | Microsoft Copilot Personal |
| Attack vector | Malicious URL with injected prompt |
| User interaction | Single click |
| Persistence | Continues after chat closed |
| Discovery | Varonis Threat Labs |
| Patch date | January 13, 2026 |
Timeline
| Date | Event |
|---|---|
| August 2025 | Varonis reports vulnerability to Microsoft |
| January 13, 2026 | Microsoft deploys patch |
| February 1, 2026 | Public disclosure |
The attack chain
Reprompt chains three techniques to bypass Microsoft’s safety controls:
1. Parameter-to-Prompt (P2P) injection
Copilot accepts prompts via the q parameter in URLs and executes them automatically when the page loads:
https://copilot.microsoft.com/?q=[malicious_prompt]
An attacker delivers this URL to a victim. When clicked, Copilot executes the embedded instructions without additional user interaction.
| Factor | Risk |
|---|---|
| Legitimate domain | URL appears trustworthy |
| Automatic execution | No user confirmation required |
| No visual indicator | Victim sees normal Copilot interface |
2. Double-request bypass
Microsoft’s safety filters prevent sensitive data from being leaked—but only on the first request. By instructing Copilot to repeat each task twice, attackers bypass protections on the second attempt.
| Request | Behavior |
|---|---|
| First | Safety filters apply, sensitive data removed |
| Second | Filters don’t re-apply, data leaks |
In testing: Copilot removed sensitive information during the first request but revealed it on the second.
3. Chain-request technique
After the initial prompt, Copilot continues receiving instructions from the attacker’s server. Each server response generates the next request, enabling continuous and stealthy data exfiltration.
| Phase | Action |
|---|---|
| 1 | Initial prompt instructs Copilot to fetch URL |
| 2 | Attacker server responds with next instruction |
| 3 | Copilot executes instruction, exfiltrates data |
| 4 | Server responds with another instruction |
| 5 | Chain continues until terminated |
Why this evades detection: Client-side security tools can’t determine what data is being exfiltrated because the real instructions are hidden in server follow-up requests—not the initial prompt.
Attack flow
| Step | Action |
|---|---|
| 1 | Victim clicks legitimate-looking Microsoft Copilot URL |
| 2 | P2P injection loads malicious instructions automatically |
| 3 | Copilot processes injected prompt, triggering data collection |
| 4 | Double-request bypass extracts sensitive data past safety filters |
| 5 | Chain-request maintains persistent exfiltration channel |
| 6 | Data encoded into URL parameters and sent to attacker server |
| 7 | Attack persists even after Copilot chat is closed |
What could be stolen
The vulnerability affected Microsoft Copilot Personal (free consumer version):
| Data Type | Risk |
|---|---|
| Email contents | Full message text accessible to Copilot |
| Document text | Files Copilot has processed |
| Meeting notes | Calendar and Teams data |
| Chat history | Previous Copilot conversations |
| Search queries | User’s Copilot interaction history |
| Personal information | Anything shared with Copilot |
What was NOT affected
Microsoft 365 Copilot for Enterprise uses a different architecture with additional DLP controls and was not vulnerable to Reprompt.
| Product | Vulnerable | Reason |
|---|---|---|
| Copilot Personal | Yes | Consumer version, limited controls |
| Microsoft 365 Copilot | No | Enterprise DLP, Purview auditing, admin restrictions |
However, Varonis noted that the underlying prompt injection vector “deserves attention across all Copilot products.”
Why this matters
Reprompt demonstrates a new class of data exfiltration:
| Traditional attack | Reprompt attack |
|---|---|
| Requires malware installation | No malware required—just a crafted prompt |
| Extended interaction needed | Single click sufficient |
| Visible to user | Invisible—victim sees normal link |
| Ends when detected | Persists after chat closes |
| Blocked by traditional DLP | Bypasses DLP—data flows through legitimate AI |
Traditional security tools are designed to detect malware, network intrusions, and unauthorized access. Prompt injection attacks exploit the AI assistant’s legitimate access to user data.
Microsoft’s response
“We appreciate Varonis Threat Labs for responsibly reporting this issue. We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach.” — Microsoft spokesperson
Patch details
| Fix component | Description |
|---|---|
| P2P validation | Additional checks on URL-injected prompts |
| Request chaining limits | Restrictions on consecutive external fetches |
| Safety filter persistence | Filters applied across request sequences |
| Monitoring improvements | Enhanced logging for suspicious patterns |
The official Windows cumulative update KB5074109 (released January 13, 2026) and follow-on component patches adjusted Copilot behavior to close the specific chain demonstrated in the proof-of-concept.
What attackers could query
Before patching, an attacker could instruct Copilot to answer a wide range of questions about the victim:
| Query type | Example |
|---|---|
| File activity | ”Summarize all of the files that the user accessed today” |
| Personal information | ”Where does the user live?” |
| Travel plans | ”What vacations does he have planned?” |
| Financial data | ”What are the user’s recent purchases?” |
| Communications | ”Summarize the user’s recent emails” |
| Work activity | ”What projects is the user working on?” |
The chain-request technique allowed these queries to continue indefinitely, building a comprehensive profile of the victim.
Recommendations
For users
| Priority | Action |
|---|---|
| Critical | Update Copilot—ensure you’re running post-January 13, 2026 version |
| High | Be cautious with Copilot links—even legitimate Microsoft domains can carry malicious prompts |
| High | Don’t share information that could be used for extortion |
| Medium | Review Copilot permissions—understand what data Copilot can access |
| Ongoing | Treat AI assistant links with same scrutiny as any other link |
For security teams
| Priority | Action |
|---|---|
| High | Monitor Copilot-related traffic—watch for unusual outbound requests |
| High | Include prompt injection in AI risk assessments |
| High | Evaluate enterprise vs. consumer AI tools for organizational use |
| Medium | Implement URL filtering for suspicious Copilot links |
| Ongoing | Train users on AI assistant link risks |
For AI vendors
| Priority | Action |
|---|---|
| Critical | Treat all external inputs (including URLs) as untrusted |
| Critical | Ensure security checks persist across multiple interactions |
| High | Implement rate limiting on external data fetches |
| High | Log and monitor for chain-request patterns |
| Ongoing | Regular security audits of prompt handling |
Varonis guidance
“Avoid opening links from unknown sources, especially those related to AI assistants, even if they appear to link to a legitimate domain. Avoid sharing personal information in chats or any other information that could be used for ransom or extortion.”
Broader implications
AI assistant attack surface
| Vector | Risk |
|---|---|
| URL parameter injection | Automatic prompt execution |
| Document-based injection | Prompts hidden in processed files |
| Email-based injection | Malicious content in messages Copilot reads |
| Calendar injection | Prompts in meeting descriptions |
Why prompt injection is hard to fix
| Challenge | Impact |
|---|---|
| Feature vs. vulnerability | URL parameters are intended functionality |
| Context dependency | Hard to distinguish malicious from legitimate prompts |
| User expectations | Users expect AI to follow instructions |
| Data access requirements | AI assistants need broad access to be useful |
Detection guidance
| Indicator | Meaning |
|---|---|
| Copilot making unexpected external requests | Possible chain-request attack |
| URL parameters with encoded prompts | P2P injection attempt |
| Repeated similar queries in quick succession | Double-request bypass |
| Data appearing in URL parameters | Exfiltration via URL encoding |
Context
Reprompt is an early example of prompt injection attacks against production AI systems. As organizations deploy AI assistants with access to sensitive data, this attack surface will expand.
The distinction between enterprise and consumer versions is critical: enterprise deployments with proper DLP controls provide meaningful protection, while consumer versions may lack these safeguards.
Security teams should begin treating prompt injection as a first-class threat category alongside traditional vulnerabilities—and users should understand that AI assistant links carry unique risks even when they point to legitimate domains.
Comparison to other AI attacks
Reprompt joins a growing list of AI assistant vulnerabilities:
| Attack | Target | Technique | Impact |
|---|---|---|---|
| Reprompt | Microsoft Copilot Personal | P2P + double-request + chain-request | Data exfiltration |
| Indirect prompt injection | Various LLMs | Hidden prompts in documents | Instruction hijacking |
| ASCII smuggling | Microsoft 365 Copilot | Invisible Unicode in emails | Data theft via hyperlinks |
| Plugin exploitation | ChatGPT, Copilot | Malicious plugin code | Various |
What distinguishes Reprompt:
- No plugins required — Uses native Copilot functionality
- Single click — No extended user interaction needed
- Persistence — Continues after chat is closed
- Invisible to user — Victim sees normal Copilot interface
Timeline comparison
| Phase | Traditional phishing | Reprompt attack |
|---|---|---|
| Delivery | Email with malicious attachment | Email with Copilot URL |
| Execution | User runs malware | User clicks link |
| Visibility | Antivirus may detect | No malware to detect |
| Persistence | Registry, scheduled tasks | Copilot session |
| Detection difficulty | Medium | High |
The legitimate-infrastructure nature of Reprompt makes it particularly challenging to detect and block.