Echoleak is a new attack vector that exploits AI assistants by subtly manipulating prompts. The attack was executed without the use of malware or phishing, but rather through language as a weapon against Microsoft 365 Copilot.
Research conducted by Check Point revealed this. The zero-click attack on Microsoft 365 Copilot marks a turning point in cybersecurity. No malware, no phishing, no exploits – just subtle text manipulation was enough to trick the AI assistant into sharing sensitive data. “Copilot did exactly what it was designed to do: help. Only the instruction came from an attacker, not the user,” says the research team.
The attack injects a prompt into an innocent document or email. Copilot interprets this as a command, not as data. Without the user clicking anything, internal files, emails, or credentials were disclosed.
Obedience as a weakness
LLM-based AI assistants are optimized to understand and execute, even when instructions are unclear. Once they are deeply integrated with operating systems and productivity software, a dangerous combination emerges: a ubiquitous, obedient tool with access to sensitive data.
“The attack vector has shifted from code to conversation,” says Check Point. “We have built systems that actively convert language into actions. That changes everything.”
Many companies rely on LLM “watchdogs” that attempt to filter out dangerous instructions. However, these models are susceptible to the same deception. Attackers can spread their intentions across multiple prompts or hide instructions in other languages. Even Echoleak had safeguards in place. They were circumvented by a lack of context, not by a bug.
Tip: Microsoft turns GitHub Copilot into a full-fledged AI agent