Microsoft Confirms Copilot Chat Incorrectly Accessed Confidential Emails in Some Users’ Accounts

Microsoft has admitted to a technical error that caused its Microsoft 365 Copilot Chat assistant to mistakenly access and potentially disclose confidential emails for certain enterprise users.

The company explained that the issue arose from a configuration glitch, which led the Copilot Chat tool, integrated within applications such as Outlook and Teams, to surface content from users' Drafts and Sent Items folders, including emails marked as confidential. This behavior was unintended and did not align with Microsoft's data protection standards.

A Microsoft spokesperson told BBC News, “We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential authored by a user and stored within their Draft and Sent Items in Outlook desktop.” The spokesperson emphasized that while access controls and data protection policies remained in place, the incident was not in line with the expected secure experience.

The company has since deployed a configuration update globally for enterprise clients to prevent further occurrences.

Root Cause and Discovery

The issue was first brought to light by technology news outlet Bleeping Computer, which reported a service alert indicating that confidential emails were being improperly processed by Copilot Chat. Microsoft reportedly became aware of the problem in January, after the tool began summarizing messages in users' Drafts and Sent Items folders despite existing sensitivity labels and data loss prevention policies.

An NHS IT support dashboard in England also displayed the alert, attributing the root cause to a “code issue.” The NHS clarified that no patient information was exposed and that the processed emails remained accessible only to the users who had authored them.

Expert Perspectives on AI Risks

Industry analysts said the lapse highlights the risks inherent in the rapid rollout of enterprise AI tools. Gartner analyst Nader Henein remarked that mistakes like this are “unavoidable” given the fast pace of AI feature deployment and the push for widespread adoption.

Cybersecurity expert Professor Alan Woodward of the University of Surrey emphasized the importance of privacy-by-default settings and opt-in controls for workplace AI tools. He warned that data leaks—whether accidental or not—are likely to occur as organizations integrate AI more deeply into their workflows.
