Article 6Y385 Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction

by
hubie
from SoylentNews on (#6Y385)

upstart writes:

Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction:

A novel attack technique named EchoLeak has been characterized as a "zero-click" artificial intelligence (AI) vulnerability that allows bad actors to exfiltrate sensitive data from Microsoft 365 Copilot's context sans any user interaction.

The critical-rated vulnerability has been assigned the CVE identifier CVE-2025-32711 (CVSS score: 9.3). It requires no customer action and has been already addressed by Microsoft. There is no evidence that the shortcoming was exploited maliciously in the wild.

"AI command injection in M365 Copilot allows an unauthorized attacker to disclose information over a network," the company said in an advisory released Wednesday. It has since been added to Microsoft's Patch Tuesday list for June 2025, taking the total number of fixed flaws to 68.

Aim Security, which discovered and reported the issue, said it's instance of a large language model (LLM) Scope Violation that paves the way for indirect prompt injection, leading to unintended behavior.

LLM Scope Violation occurs when an attacker's instructions embedded in untrusted content, e.g., an email sent from outside an organization, successfully tricks the AI system into accessing and processing privileged internal data without explicit user intent or interaction.

"The chains allow attackers to automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior," the Israeli cybersecurity company said. "The result is achieved despite M365 Copilot's interface being open only to organization employees."

In EchoLeak's case, the attacker embeds a malicious prompt payload inside markdown-formatted content, like an email, which is then parsed by the AI system's retrieval-augmented generation (RAG) engine. The payload silently triggers the LLM to extract and return private information from the user's current context.

[...] "The attack results in allowing the attacker to exfiltrate the most sensitive data from the current LLM context - and the LLM is being used against itself in making sure that the MOST sensitive data from the LLM context is being leaked, does not rely on specific user behavior, and can be executed both in single-turn conversations and multi-turn conversations."

EchoLeak is especially dangerous because it exploits how Copilot retrieves and ranks data - using internal document access privileges - which attackers can influence indirectly via payload prompts embedded in seemingly benign sources like meeting notes or email chains.

Original Submission

Read more of this story at SoylentNews.

External Content
Source RSS or Atom Feed
Feed Location https://soylentnews.org/index.rss
Feed Title SoylentNews
Feed Link https://soylentnews.org/
Feed Copyright Copyright 2014, SoylentNews
Reply 0 comments