Microsoft uncovers ‘Whisper Leak’ flaw that may let attackers infer ChatGPT and Gemini chat topics
Microsoft has disclosed a new vulnerability in most AI chatbots that could allow attackers to determine what topics users discuss with them.
The flaw, called Whisper Leak, affects server-based large language model (LLM) platforms such as ChatGPT and Gemini.
According to Microsoft researchers, the vulnerability can be exploited through a side-channel attack that observes network traffic patterns. Even when the data is encrypted, an attacker can still infer conversation topics from the traffic's metadata.
Microsoft said it has collaborated with multiple vendors to fix the issue.
In a blog post and a study published on arXiv, Microsoft explained how attackers could analyse the size and timing of data packets exchanged between users and chatbots. By training an AI system on this metadata, attackers could identify conversation themes — without decrypting the content itself.
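To make the mechanism concrete, here is a minimal sketch, not Microsoft's actual tooling, of the kind of metadata classifier the attack relies on. It derives summary features from synthetic per-packet sizes and inter-arrival gaps and trains an off-the-shelf model to flag a "sensitive topic" conversation. The feature set, the model choice, and all of the data are assumptions for illustration only.

# Illustrative sketch: classify conversation topic from encrypted-traffic
# metadata alone. All traces are synthetic; nothing here is decrypted.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_trace(sensitive: bool, n_packets: int = 40) -> np.ndarray:
    # Premise of the attack: token-by-token streaming makes packet sizes and
    # timings correlate with the generated text, so traces for a given topic
    # share a statistical signature. The biases below are invented.
    sizes = rng.normal(180 if sensitive else 120, 30, n_packets)
    gaps = rng.exponential(0.05 if sensitive else 0.03, n_packets)
    return np.array([sizes.mean(), sizes.std(), gaps.mean(), gaps.std(),
                     np.percentile(sizes, 90), sizes.sum()])

X = np.array([fake_trace(s) for s in [True] * 500 + [False] * 500])
y = np.array([1] * 500 + [0] * 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"accuracy on held-out synthetic traces: {clf.score(X_te, y_te):.2f}")

In the real attack, the traces would come from passively recorded, still-encrypted TLS traffic; only sizes and timings matter, never the plaintext.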
“Importantly, this is not a cryptographic vulnerability in TLS itself, but rather exploitation of metadata that TLS inherently reveals about encrypted traffic structure and timing,” the study highlighted.
The researchers said the flaw affected 98 percent of the 28 AI models tested, including standalone chatbots and those integrated into search engines or apps. Transport Layer Security (TLS), which typically secures such communications, was not broken by the exploit; instead, it was sidestepped by analysing how messages travel through the network.
Microsoft warned that organisations with access to network data — such as Internet service providers (ISPs) or government agencies — could use this method to identify users discussing sensitive topics, including political dissent or money laundering.
After confirming its findings, Microsoft said it engaged in responsible disclosure with the affected vendors and reported successful collaboration on mitigations. OpenAI, Mistral, Microsoft, and xAI had deployed protections at the time of writing, a response the company said demonstrated an industry-wide commitment to user privacy across the AI ecosystem.
The company noted that OpenAI and Microsoft Azure added a random text sequence — an “obfuscation” field — to chatbot responses to hide token length, which significantly reduces the attack’s effectiveness.
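A rough sketch of how such padding could work is below. The streaming-chunk format shown is hypothetical; the "obfuscation" field name follows the article, but everything else is invented for illustration.

# Minimal sketch of the described mitigation: append a random-length padding
# ("obfuscation") field to each streamed chunk so that the wire size of a
# packet no longer tracks the length of the tokens inside it.
import json
import secrets
import string

def pad_chunk(token_text: str, max_pad: int = 64) -> bytes:
    # Draw an unpredictable padding length for every chunk.
    pad_len = secrets.randbelow(max_pad + 1)
    junk = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"content": token_text, "obfuscation": junk}).encode()

# Two chunks carrying identically sized tokens now typically differ on the wire:
print(len(pad_chunk("hello")), len(pad_chunk("hello")))

Because the padding length is drawn fresh for every chunk, responses carrying identically sized tokens no longer produce identically sized packets, degrading the size-based signal the attack depends on.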
For users, Microsoft advised caution when using chatbots on untrusted networks. It suggested using VPNs, choosing on-device LLMs, avoiding highly sensitive topics, and preferring services that have adopted the new security measures.