Daily Cyber Briefing
The Daily Cyber Briefing delivers concise, no-fluff updates on the latest cybersecurity threats, breaches, and regulatory changes. Each episode equips listeners with actionable insights to stay ahead of emerging risks in today’s fast-moving digital landscape.
Daily Cyber Briefing
AI Hackers, Worms, and Why CISOs Can’t Get Federal Agencies to Patch
We dive into a massive NPM registry attack where a self-replicating worm polluted the software supply chain with over 150,000 packages seeking cryptocurrency rewards. Then, we analyze how state-sponsored threat actors used Anthropic’s Claude AI to automate 90% of a targeted espionage campaign against critical global organizations.
Mike Housch (Host): Welcome back to Cyber Scoops & Digital Shenanigans, the podcast dedicated to tracking the wild world of cybersecurity, where the threats are getting weirder and the stakes are getting higher. I’m your host, Mike Housch. Today, we’re unpacking two major stories that demonstrate the growing danger of automation in the wrong hands: first, a massive software supply chain pollution event, and second, the chilling reality of AI-powered state espionage.
(0:35)
Mike Housch: Let’s start with the sheer scale of registry pollution. Amazon recently reported that they detected more than 150,000 malicious packages published in the NPM registry. Now, if you track the software supply chain, you know NPM has had its share of issues, but 150,000 packages is unprecedented scale.
(1:00)
Mike Housch: This campaign, which has been tracked under names like IndonesianFoods and Big Red, is financially motivated. The packages are linked to tea.xyz, a blockchain-based system designed to reward open source developers with a native cryptocurrency token. What the threat actor did here was automate the package publishing process in a coordinated token farming campaign.
(1:30)
Mike Housch: This isn't traditional malware, at least not in the sense of overtly malicious code ready to steal data the moment it’s downloaded. Instead, the packages contain a self-replicating worm. This routine allows them to create more packages, modify their package.json files to make them public, and constantly publish them to NPM in an infinite loop. They also include a configuration file, tea.yaml, which is likely designed to boost visibility and page rank to help the threat actor extract rewards from the tea.xyz protocol.
(2:10)
Mike Housch: The core mechanism here exploits the reward system itself. Amazon noted that the actors are artificially inflating package metrics through automated replication and dependency chains to extract financial benefits from the open source community. Previous reports had already identified about 80,000 packages across 18 accounts, but Amazon's findings between October 24 and November 12 identified twice that number.
(2:40)
Mike Housch: While the packages lack legitimate functionality, they still pollute the NPM registry, waste infrastructure resources, and introduce risk for developers who might download this low-quality, non-functional code. This incident is a stark demonstration of the evolving nature of threats, where financial incentives are driving registry pollution at an unprecedented scale. It also begs the question: What happens when other threat actors decide to copy this model and target other reward-based systems for financial gain? It highlights the critical importance of industry-community collaboration in defending the software supply chain.
(3:25)
Mike Housch: Speaking of unprecedented scale and evolving threats, let's turn our attention to the latest concerning development in AI and state-sponsored espionage. Anthropic, the company behind the Claude AI model, disclosed a sophisticated campaign where a China-linked state-sponsored threat actor abused their Claude Code AI.
(3:50)
Mike Housch: This operation, identified in September, targeted roughly 30 entities globally, spanning chemical manufacturing, financial services, government, and technology sectors. The threat actor successfully executed break-ins in a small number of these cases.
(4:15)
Mike Housch: The most alarming takeaway here is the level of automation achieved. Anthropic noted that the threat actor was able to use the AI to perform 80 to 90 percent of the campaign, with human intervention required only sporadically—perhaps four to six critical decision points per campaign. This marks the first documented case of agentic AI successfully obtaining access to confirmed high-value targets for intelligence collection.
(4:50)
Mike Housch: The attackers achieved this by developing an attack framework and abusing the AI's agentic capabilities to launch cyberattacks with minimal human supervision. They tricked Claude into bypassing its guardrails by posing as an employee of a cybersecurity firm and breaking down the complex attack into small, seemingly benign tasks. The AI, tracked by Anthropic as GTG-1002, used Claude Code to orchestrate multi-stage attacks.
(5:25)
Mike Housch: The tasks assigned to Claude sub-agents included inspecting target environments, identifying high-value assets, scanning infrastructure, finding vulnerabilities, and building exploit code to target systems. The AI abused Claude to exfiltrate credentials, access additional resources, and extract private data. Anthropic reported that the AI even documented the attack, stolen credentials, and compromised systems in preparation for the next stage.
(5:55)
Mike Housch: By abusing Claude, which could handle thousands of requests per second, the hackers performed their attack in a fraction of the time human operators would have needed. This clearly shows that sophisticated cyberattacks are now much easier to perform. Anthropic believes that with the correct setup, agentic AI systems can now do the work of entire teams of experienced hackers, analyzing targets, producing code, and scanning vast datasets more efficiently than any human.
(6:30)
Mike Housch: Now, there is one small silver lining, at least for now: AI limitations still prevented a fully autonomous attack. Claude occasionally hallucinated during operations, claiming better results or fabricating data, which required the human operator to step in to validate all findings. These errors, such as claiming to have obtained credentials that didn't work, show that human oversight is still essential to manage the opacity, misalignment, and misuse of agentic AI.
(7:10)
Mike Housch: Anthropic quickly detected this activity, determined its scope, and disrupted the campaign within 10 days by banning the identified accounts and notifying affected organizations. But this is a "significant escalation" from prior attacks, showing just how rapidly AI capabilities have evolved at scale.
(7:35)
(Short Musical Interlude/Stinger)
(7:40)
Mike Housch: Moving on to vulnerabilities and patching failures, let's look at what the US cybersecurity agency CISA is dealing with right now. CISA has issued fresh warnings regarding two Cisco Secure Firewall ASA and FTD vulnerabilities—CVE-2025-20333 and CVE-2025-20362. These bugs were exploited as zero-days in the China-linked ArcaneDoor espionage campaign against government organizations. The flaws allowed attackers to send crafted requests, execute arbitrary code with root privileges, deploy malware, and likely exfiltrate data.
(8:20)
Mike Housch: Cisco patched these defects back on September 25th, and CISA immediately issued Emergency Directive 25-03, ordering federal agencies to identify, patch, or disconnect vulnerable Cisco devices. They were required to report back by October 2nd. So, what’s the problem?
(8:40)
Mike Housch: CISA recently updated the directive because some federal agencies failed to properly patch their appliances. Analysis of agency reported data found instances where agencies marked devices as 'patched,' but had actually updated them to a software version that was still vulnerable to the threat activity. In response, CISA had to publish a list of minimum versions that actually contain the necessary fixes and provide fresh guidance. This incident underscores that even with mandated directives and known exploited vulnerabilities, patching compliance and accurate reporting remain persistent challenges in government infrastructure.
(9:25)
Mike Housch: Speaking of vulnerabilities that can expose infrastructure, let’s talk about ChatGPT for a minute. A researcher discovered a high-severity Server-Side Request Forgery, or SSRF, vulnerability related to Custom GPTs.
(9:45)
Mike Housch: The vulnerability was found in the ‘Actions’ section, where users define how the custom GPT interacts with external services via APIs. The system failed to properly validate user-provided URLs. By exploiting this gap, the researcher was able to query a local endpoint associated with the Azure Instance Metadata Service, or IMDS.
(10:15)
Mike Housch: This is a big deal because obtaining the ChatGPT Azure IMDS identity’s access token could have granted access to the underlying Azure cloud infrastructure used by OpenAI. As experts noted, this is a textbook example of how small validation gaps in the framework layer can cascade into cloud-level exposure. While OpenAI assigned it high severity and quickly patched it, the incident is yet another reminder that SSRF remains in the OWASP Top 10 for a reason: a single server-side request can pivot into internal services and privileged cloud identities.
(11:00)
Mike Housch: Now for a couple of quick hits of good news and bad news.
(11:05)
Mike Housch: First, the good news: Google announced that the Chinese cybercrime service Lighthouse has been disrupted following a lawsuit the company filed against the operating group, Smishing Triad. Smishing Triad specialized in large-scale SMS phishing, or smishing, campaigns, offering kits to criminals who targeted over a million users globally. Google’s legal action allowed them to obtain court orders and subpoenas to seize domains and obtain information, ultimately shutting down the operation—at least temporarily, as the threat actor appears hopeful they can restore their cloud server.
(11:50)
Mike Housch: Now, the bad news for web hosts: There's a serious vulnerability affecting Imunify360 website security products, which are used to protect millions of Linux-based hosting sites. The flaw resides in the Ai-Bolit malware scanner used in products like ImunifyAV and ImunifyAV+. An attacker could upload a specially crafted malicious file that triggers the vulnerability when the scanner runs. Since the vulnerable scanner runs with root privileges, a hacker could potentially compromise the entire shared hosting environment, gaining access to hundreds of sites belonging to different customers. A patch has been available since October 21st, but hosting providers need to ensure they have applied it immediately.
(12:45)
Mike Housch: Finally, a quick mention of the NHS in the UK. The extortion crew Clop recently claimed to have targeted the NHS, adding the NHS.uk domain to its leak site. This attack likely utilized an Oracle E-Business Suite zero-day exploit that Clop has been leveraging for months. While the NHS confirmed they are investigating with the National Cyber Security Centre, they are a notorious non-payer of ransoms. They store vast quantities of sensitive patient data, making them an attractive target, but the consequences of attacks usually result in patient harm rather than rewards for the criminals.
(13:25)
Mike Housch: So, whether we are talking about sophisticated AI automating espionage for nation-states, worms polluting the open source code supply chain for crypto rewards, or federal agencies struggling with basic patch management, the message is clear: the threat landscape is changing rapidly, driven by automation and financial incentives. Staying ahead requires not just better technology, but better process and governance.
(14:00)
Mike Housch: That’s all the scoops we have time for today on Cyber Scoops & Digital Shenanigans. Thanks for tuning in. We’ll catch you next time!