Daily Cyber Briefing
The Daily Cyber Briefing delivers concise, no-fluff updates on the latest cybersecurity threats, breaches, and regulatory changes. Each episode equips listeners with actionable insights to stay ahead of emerging risks in today’s fast-moving digital landscape.
Daily Cyber Briefing
Agentic AI, Vishing, and the Critical SAML Bypass
We break down the newest frontiers of cyber defense and attack, including how Google is using a new User Alignment Critic to shield Chrome's agentic AI from prompt injection, and why a critical flaw in the Ruby SAML library demands immediate patching,. Plus, a deep dive into the sophisticated vishing campaign that weaponizes Microsoft Teams and QuickAssist to deploy fileless .NET malware,.
Welcome back to Cyber Scoops & Digital Shenanigans. I’m Mike Housch, and today we’re tracking the evolution of the attacker—and the defender—in three critical areas: Artificial Intelligence, corporate communication platforms, and identity management. The threats are getting increasingly sophisticated, relying less on exploiting vulnerabilities and more on exploiting trust and context,. We’re going to look at the layered defenses Google is building into Chrome to combat prompt injection, discuss a truly nasty new vishing attack hitting Microsoft Teams, and finish with a critical authentication bypass vulnerability that demands immediate attention,. Let’s jump right in.
Our first scoop focuses on the rapid integration of AI into browsing experiences, specifically with Google introducing Gemini in Chrome and previewing its agentic capabilities. This new power, however, comes with a primary threat: indirect prompt injections, which can lead to data leaks and other unwanted actions performed by the agent.
Google is now implementing layered defenses to make it difficult and costly for attackers to harm users. The key defense here is a new, separate AI model built with Gemini, called the User Alignment Critic. This critic is isolated from untrusted content, and its job is to vet the agent’s actions to determine if the proposed action aligns with the user’s stated goal. If the action is misaligned, the Alignment Critic will veto it. Importantly, this component is designed to only see metadata about the proposed action, preventing it from being directly poisoned by unfiltered web content.
Google is also tightening up origin-isolation capabilities by expanding existing Site Isolation and same-origin policy protections with Agent Origin Sets. This architecture limits the agent to only access data from origins related to the task at hand, or data that the user has chosen to share. This step prevents a compromised agent from acting arbitrarily on unrelated origins. Furthermore, they are adding guardrails by triggering user confirmation before impactful actions are taken, serving as a check against both model mistakes and adversarial input. The agent requests confirmation before navigating to sensitive sites like banking or healthcare portals, before signing in, and before completing purchases or sending messages.
Now, while Google is busy hardening its browser agent, security researchers have simultaneously identified a severe new vulnerability in how other AI systems interact with external tools, centered around the Model Context Protocol, or MCP,. This protocol, introduced by Anthropic, standardizes how large language models integrate with external tools and data sources.
Paloalto researchers showcased how malicious MCP servers can exploit the sampling feature to carry out three critical attack vectors,:
First, Resource Theft. Attackers inject hidden instructions into sampling requests, causing the LLM to generate extra, non-visible content. This unauthorized workload drains AI compute quotas and API credits without the user noticing. Think of a malicious code summarizer that appends instructions to generate fictional stories alongside legitimate code analysis, consuming substantial resources behind the scenes.
Second, Conversation Hijacking. Compromised MCP servers can inject persistent instructions that fundamentally alter system behavior across the entire conversation session. For example, a hidden prompt forced an AI assistant to “speak like a pirate” in all subsequent responses during a demonstration.
And third, the scariest one: Covert Tool Invocation. Malicious servers use prompt injection to trigger unauthorized tool executions. Researchers showed how hidden instructions could trigger file-writing operations, enabling data exfiltration, persistence mechanisms, and unauthorized system modifications without explicit user consent.
The core issue is MCP sampling's implicit trust model and lack of built-in security controls, which allows servers to modify prompts and responses to slip in hidden instructions,. Effective defense here requires multiple layers, including strict templates for request sanitization, response filtering to remove instruction-like phrases, and rigorous access controls to limit server capabilities. Organizations should also require explicit approval for tool execution. This is a major area for CISOs to watch as LLM integration expands across enterprise apps.
Moving away from AI, let's turn our attention to the corporate communication landscape, where attackers are blending traditional voice phishing—or vishing—with modern tools to execute sophisticated attacks.
SpiderLabs security analysts identified a new campaign that leverages Microsoft Teams calls and the native Windows remote support tool, QuickAssist, to deploy stealthy malware,. The attack starts with a Teams call from an external account using a spoofed display name, designed to impersonate a legitimate internal administrator or senior IT staff. By creating a sense of urgency, the threat actor disarms the victim.
The social engineering phase is crucial: the victim is persuaded to launch Microsoft QuickAssist, which is a native Windows utility. Using a built-in tool is a key tactical shift, as it effectively bypasses many standard security controls that typically flag third-party remote access software.
Once remote access is established, the attacker initiates a multi-stage infection. After about ten minutes, perhaps to reduce suspicion, the victim is redirected to a malicious domain. A file disguised as a legitimate updater, updater.exe, is then introduced to the system,.
The malware itself is particularly stealthy. It relies on a .NET malware wrapper that executes code directly in memory, minimizing the forensic footprint on the endpoint. This fileless approach significantly complicates incident response efforts because fewer artifacts are left behind on the disk for investigators to analyze. The core infection chain involves updater.exe, which wraps an embedded library, loader.dll. This loader connects to a command-and-control server to retrieve encryption keys, which are then used with AES-CBC and XOR operations to decrypt a malicious assembly. Crucially, that decrypted code is never written to the disk; it is loaded directly into the system’s memory via .NET reflection, ensuring a highly persistent and stealthy compromise.
In response to the rise of these fraudulent activities, Microsoft is enhancing Teams security by introducing a feature called "Report a Suspicious Call". This capability is designed to empower users to flag potentially malicious or unsolicited calls directly within the application. When a call is flagged, the report is sent to Microsoft for analysis, helping their security teams recognize patterns of malicious behavior. This user-driven approach aims to create a more resilient and secure communication environment by providing real-time intelligence on emerging threats,. This feature is scheduled to begin rolling out globally in February 2026.
Finally, let’s cover a critical vulnerability that affects identity and access management for thousands of organizations using single sign-on. A flaw tracked as CVE-2025-66567 has been discovered in the Ruby SAML library. This issue is so severe that it has been assigned a critical CVSS score of 10.0.
This vulnerability is particularly dangerous because it allows attackers to bypass authentication mechanisms completely in affected applications, requiring no special privileges or user interaction. The root cause is an incomplete fix for a previous issue, stemming from how different XML parsers—specifically ReXML and Nokogiri—interpret XML documents differently.
This parsing discrepancy allows an attacker to execute a Signature Wrapping attack. In this attack, malicious SAML responses are crafted to appear legitimate to the vulnerable parser while containing unauthorized modifications to authentication claims. A successful exploitation grants attackers unauthorized access to systems without requiring valid credentials. Considering SAML is widely used for single sign-on authentication across enterprise applications, this is a high-priority risk.
Organizations must prioritize this patch immediately. If you are using Ruby SAML, security experts emphasize the urgency of upgrading to version 1.18.0 or later, which addresses this vulnerability. The weakness is classified under CWE-347, indicating fundamental issues with how the library validates digital signatures on SAML assertions.
To recap our top stories: AI security is moving fast, requiring multi-layered defenses like Google’s User Alignment Critic to combat indirect prompt injection attacks, and requiring us to secure protocols like MCP against resource theft and covert tool invocation,. Corporate defenses need to account for sophisticated social engineering attacks that weaponize native tools like QuickAssist alongside Teams,, even as Microsoft rolls out new user-reporting features. And finally, check your identity management infrastructure for the Ruby SAML flaw; patch immediately to avoid complete authentication bypass.
That’s all the time we have for this edition of Cyber Scoops & Digital Shenanigans. Stay safe out there, and we'll catch you next time!