Daily Cyber Briefing

Hacking Encrypted Chats: The Whisper Leak & The CMMC Compliance Clock

Mike Housch Season 1 Episode 47

Today we expose the 'Whisper Leak' LLM attack that infers sensitive conversation topics from encrypted metadata. Plus, we break down the start of CMMC enforcement and why supply chain risks are soaring, according to the new OWASP Top 10 list.

Welcome back to Cyber Scoops & Digital Shenanigans. I’m your host, Mike Housch, diving into the latest exploits, vulnerabilities, and the regulatory deadlines keeping CISOs up at night. We have a packed show today, covering everything from brand new AI side-channel attacks that can compromise seemingly encrypted conversations, to critical compliance deadlines hitting defense contractors, and the persistent problems plaguing application security.

Let’s kick things off with a chilling discovery from Microsoft researchers: an AI side-channel attack they’ve dubbed “Whisper Leak”. Now, you might be using a chatbot and assume your communication is secure because it’s end-to-end encrypted, likely using HTTP over TLS, or HTTPS. Well, think again. The Whisper Leak attack allows adversaries monitoring network traffic to infer the topic of your conversation with a remote language model.

How you ask? Well it exploits metadata patterns. While TLS encryption secures the content, it preserves the size relationship between the plaintext and the ciphertext. Large Language Models, or LLMs, generate responses by predicting tokens, words or sub-words, in a step-by-step, streaming approach. This process influences the timing and size of the data chunks the LLM sends back to the client. Essentially, the size of the transmitted data chunks is leaked. When combined with timing information, these leaked patterns form the basis of the Whisper Leak attack.

This vulnerability is serious; the researchers say it impacts all LLMs. This poses a significant risk to entities under surveillance, whether from ISPs, governments, or cyber actors, potentially exposing highly sensitive conversations, think legal advice, medical consultations, or other private topics. Microsoft researchers specifically noted that this poses real-world risks to users facing oppressive governments who might be targeting topics like protesting, banned material, election processes, or journalism.

The testing showed frightening accuracy. Researchers trained a binary classifier to distinguish between the topic of "legality of money laundering" and background traffic, observing the encrypted traffic alone. They found that 17 out of 28 tested models achieved over 98% accuracy in distinguishing the target topic, with some exceeding 99.9% accuracy. That means an attacker could identify 1 in 10,000 target conversations with nearly zero false positives.

So, what are the mitigations? Researchers suggest random padding, token batching, and packet injection. Good news: OpenAI and Microsoft Azure have already implemented an additional field in streaming responses that adds a random sequence of text of variable length to mask the token length. Mistral also added a similar parameter. For users, the advice is clear: avoid discussing sensitive topics with AI chatbots when on untrustworthy networks, use VPN services, and stay informed on whether your provider has implemented these mitigations. You can also try using non-streaming models, if available.

Now let’s move into the trenches of application security with the latest release from the Open Worldwide Application Security Project, or OWASP. They just published their updated Top 10 categories of application risks for 2025.

The list, which helps organizations prioritize, confirmed what many of us suspected: Broken access control remains the top issue. It impacts 3.73% of applications tested and includes errors like bypassing access control through URL tampering or APIs missing controls. The top prevention tip? "Except for public resources, deny by default".

Coming in at number two is Security misconfiguration. This category has risen due to an engineering trend that bases security increasingly on configuration methods.

In third place, we have Software supply chain failures, a new category replacing "vulnerable and outdated components". Though it has relatively fewer occurrences, supply chain issues carry the highest average exploit and impact scores from CVEs.

And speaking of supply chain risks, we have a very recent example of a targeted attack leveraging a common software dependency platform. Cybersecurity researchers discovered a malicious npm package named "@acitons/artifact". This package is a classic typosquatting attempt, mimicking the legitimate "@actions/artifact" package. The intent was highly targeted: to execute during a build of a GitHub-owned repository, exfiltrate the tokens available to the build environment, and then use those tokens to publish new malicious artifacts as GitHub. The malware specifically checked for the presence of certain github variables set in a GitHub Actions workflow.

It’s worth noting that for applications leveraging Large Language Models, OWASP has a separate project ranking risks for LLM and Gen AI applications. Topping that specific list is prompt injection, where model responses are manipulated via prompt input to bypass security checks.

Also new to the main OWASP Top 10 list is the category for "mishandling of exceptional conditions," which covers code that doesn't respond correctly to race conditions or revealing sensitive information in error messages.

Let’s hit a few fast updates crucial before we end and kick this day off.

First, enterprise software maker SAP released its November 2025 security patches, addressing 18 new and one updated security note. Crucially, they patched CVE-2025-42890 in SQL Anywhere Monitor, described as an insecure key and secret management vulnerability with a CVSS score of 10/10. This bug, which could allow attackers to execute arbitrary code due to hardcoded credentials, was severe enough that SAP resolved the issue by removing SQL Anywhere Monitor entirely. They also fixed a critical code injection defect in Solution Manager (CVSS 9.9).

Next, a quick win for digital privacy: Mozilla announced improved browser fingerprinting protections in Firefox 145. Fingerprinting is a hidden tracking technique where websites collect details like time zone or graphics hardware information to create a unique ID, trackable across sessions, even if cookies are blocked. Mozilla’s research shows these improvements have cut the percentage of users seen as unique by almost half. These protections work in Firefox's private mode or Enhanced Tracking Protection strict mode, without needing extensions.

Finally, we’re watching Europe, where leaked plans by the European Commission to overhaul digital privacy legislation are being heavily criticized by privacy activists. Noyb warned that the proposed legislative changes could poke so many holes in existing rules that they would “make GDPR overall unusable for most cases". The proposals, framed partly as relief for small businesses, are accused of favoring Big Tech interests. For example, a change to GDPR might allow data controllers freer rein to use pseudonomized personal data for commercial benefit without strict data protection rules applying. Also, controversial reforms to the AI Act might give AI systems special exemptions to process data that would otherwise require a legitimate legal basis under GDPR, potentially privileging a risky technology over traditional processing methods.

That’s a wrap on this edition of Cyber Scoops & Digital Shenanigans. We’ve seen that even encrypted chats aren't safe from sophisticated side-channel attacks like Whisper Leak, that critical compliance deadlines like CMMC are live, and that the basics—like broken access control—still dominate the risk landscape.

Stay vigilant, stay compliant, and stay safe out there. I'm Mike Housch, and we'll catch you next time.