AI chat privacy at risk: Microsoft details Whisper Leak side-channel attack – Security Affairs

Microsoft revealed a new side-channel attack called Whisper Leak, which lets attackers who can monitor network traffic infer what users discuss with remote language models, even when the data is encrypted. The company warned that this flaw could expose sensitive details from user or enterprise conversations with streaming AI systems, creating serious privacy risks.
AI chatbots now play key roles in daily life and sensitive fields like healthcare and law. Protecting user data with strong anonymization, encryption, and retention policies is vital to maintain trust and privacy.
AI chatbots use HTTPS (TLS) to encrypt communications, ensuring secure, authenticated connections. Language models generate text token by token, streaming outputs for faster feedback. TLS uses asymmetric cryptography to exchange symmetric keys for ciphers like AES or ChaCha20, which keep ciphertext size near plaintext size. Recent studies reveal side-channel risks in AI models: attackers can infer token length, timing, or cache patterns to guess prompt topics. Microsoft’s Whisper Leak expands on these, showing how encrypted traffic patterns alone can reveal conversation themes.
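The core leak can be illustrated with a toy sketch (not Microsoft's attack code): because stream ciphers like ChaCha20 are length-preserving, a TLS record carrying one streamed token has a ciphertext size of roughly the token length plus a constant overhead, so an on-path observer recovers the token-length sequence without decrypting anything. The overhead value below is an assumption for illustration.

```python
RECORD_OVERHEAD = 22  # assumed fixed per-record TLS overhead (header + auth tag)

def observed_record_sizes(tokens):
    """Sizes an eavesdropper sees if each streamed token rides in its own record."""
    return [len(tok.encode("utf-8")) + RECORD_OVERHEAD for tok in tokens]

def inferred_token_lengths(record_sizes):
    """What the attacker recovers: plaintext token lengths, no decryption needed."""
    return [size - RECORD_OVERHEAD for size in record_sizes]

tokens = ["Money", " laundering", " is", " a", " process"]
sizes = observed_record_sizes(tokens)
print(inferred_token_lengths(sizes))  # matches the true token lengths
```

The recovered length sequence, combined with inter-packet timing, is exactly the side channel the classifiers described below are trained on.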
Microsoft researchers trained a binary classifier to detect when a chat with a language model involved a specific topic, “legality of money laundering”, versus general traffic. They generated 100 topic-related prompts and over 11,000 unrelated ones, capturing response times and packet sizes via tcpdump while randomizing sample order to avoid caching bias. Using LightGBM, Bi-LSTM, and BERT models, they tested timing-only, size-only, and combined features. Many models achieved over 98% accuracy (AUPRC), proving that topic-specific network patterns leave identifiable digital “fingerprints.”
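As a deliberately tiny stand-in for the LightGBM/Bi-LSTM/BERT classifiers used in the study, the sketch below summarizes each captured conversation by two simple traffic features (mean packet size and mean inter-arrival gap) and labels new traces by nearest-centroid distance. All traces and thresholds here are synthetic illustrations, not Microsoft's data or methodology.

```python
from statistics import mean

def features(trace):
    """trace: list of (timestamp, packet_size) pairs, as tcpdump would yield."""
    times = [t for t, _ in trace]
    sizes = [s for _, s in trace]
    gaps = [b - a for a, b in zip(times, times[1:])]
    return (mean(sizes), mean(gaps) if gaps else 0.0)

def centroid(feature_vectors):
    """Per-dimension mean of a list of feature tuples."""
    return tuple(mean(col) for col in zip(*feature_vectors))

def classify(trace, topic_centroid, other_centroid):
    """Label a trace by whichever class centroid its features sit closer to."""
    f = features(trace)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "topic" if dist(topic_centroid) < dist(other_centroid) else "other"
```

In the real attack the models consume full size/timing sequences rather than two summary statistics, which is why they reach the reported 98%+ AUPRC; this sketch only shows why distinct topics produce separable traffic "fingerprints" at all.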
“We evaluated the performance using Area Under the Precision-Recall Curve (AUPRC), which is a measurement of a cyberattack’s success for imbalanced datasets (many negative samples, fewer positive samples).” reads the report published by Microsoft. “A quick look at the “Best Overall” column shows that for many models, the cyberattack achieved scores above 98%. This tells us that the unique digital “fingerprints” left by conversations on a specific topic are distinct enough for our AI-powered eavesdropper to reliably pick them out in a controlled test.”
A simulation of a realistic surveillance scenario found that even when monitoring 10,000 random conversations containing only one about a sensitive topic, attackers could still identify targets with alarming precision. For many of the tested AI models the attack achieved 100% precision (every flagged conversation correctly matched the topic) while detecting 5–50% of all target conversations. This means attackers or agencies could reliably spot users discussing sensitive issues despite encryption. Though projections are limited by the test data, the results indicate a real, growing risk as attackers gather more data and refine their models.
“In extended tests with one tested model, we observed continued improvement in attack accuracy as dataset size increased.” continues the report. “Combined with more sophisticated attack models and the richer patterns available in multi-turn conversations or multiple conversations from the same user, this means a cyberattacker with patience and resources could achieve higher success rates than our initial results suggest.”
Microsoft shared its findings with OpenAI, Mistral, xAI, and its own Azure team, which implemented mitigations to reduce the identified risk.
OpenAI, and later Microsoft Azure, added an obfuscation field to streaming responses, inserting random text to mask token lengths and sharply reduce attack effectiveness; testing confirms Azure’s fix lowers the risk to non-practical levels. Mistral introduced a similar mitigation via a new “p” parameter.
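The padding-style mitigation can be sketched as follows; the field names and size bounds are illustrative assumptions, not OpenAI's or Azure's actual wire format. The idea is simply to attach junk text of unpredictable length to each streamed chunk so the ciphertext size no longer tracks the token length.

```python
import secrets
import string

def obfuscate(chunk: str, min_pad: int = 8, max_pad: int = 32) -> dict:
    """Wrap a streamed token with random padding of unpredictable length."""
    pad_len = min_pad + secrets.randbelow(max_pad - min_pad + 1)
    pad = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    # The client renders "content" and silently discards "obfuscation".
    return {"content": chunk, "obfuscation": pad}

msg = obfuscate(" laundering")
# The on-wire record size now varies independently of the token length.
```

Because the padding length is drawn fresh per chunk, an observer's recovered size sequence becomes noise-dominated, which is why testing showed the fix lowers the attack to non-practical levels.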
Though mainly an AI provider issue, users can enhance privacy by avoiding sensitive topics on untrusted networks, using VPNs, choosing providers with mitigations, opting for non-streaming models, and staying informed about security practices.
Pierluigi Paganini
(SecurityAffairs – hacking, Whisper Leak)

Hacking / November 09, 2025

