
InfoSec.Watch Podcast — Episode 124: Edge Devices Under Fire

Infosec.Watch Season 2 Episode 124


Edges are where attackers thrive—and where many teams see the least. We dive into how identity-adjacent features, single sign-on, and device management planes have become high-impact targets, and why routers, VPNs, and firewalls now sit at the center of modern intrusion campaigns. From unsupported hardware to multi-terabit DDoS events, we break down what matters most and the steps that actually change your risk.

We walk through CISA’s directive to remove end-of-life edge devices and translate it into a practical playbook: inventory every public IP, map models and firmware to vendor support, and set non-negotiable retirement deadlines. Then we stress-test DDoS readiness at today’s scale, with concrete checks for always-on scrubbing, runbooks, and confirmed capacity with your CDN, WAF, and upstream providers. On the software side, we examine fresh NPM and PyPI compromises and outline a developer-first defense: dependency pinning, integrity checks, SBOM usage, mirrored registries, and CI/CD policies that block unknown maintainers by default.

Urgency ramps up with active exploits added to CISA’s Known Exploited Vulnerabilities list. We prioritize SmarterMail, SolarWinds Web Help Desk, and GitLab SSRF with rapid patching, strict segmentation, emergency hardening, token rotation, and egress controls. We also spotlight a trend to watch: adversary-in-the-middle frameworks targeting routers and edge devices to hijack traffic. The counter is clear—treat the edge as a tier-one detection surface with telemetry for config drift, new admins, DNS and NTP anomalies, and require phishing-resistant MFA like FIDO2 or passkeys for all admin access.

To help teams move faster, we highlight the KEV catalog’s machine-readable feed and show how to wire it into vulnerability management to auto-open tickets and enforce tight SLAs based on real-world exploitation. We close with an actionable one-week project: enumerate public edges, flag end-of-support gear, and either replace it, shield it behind managed services, or lock its management plane behind VPN with strict allow lists. Subscribe, share with your team, and leave a review with the one control you’ll implement first—what’s your next move to harden the edge?


Thanks for listening to InfoSec.Watch!

Subscribe to our newsletter for in-depth analysis: https://infosec.watch
Follow us for daily updates:
- X (Twitter)
- LinkedIn
- Facebook

Stay secure out there!


SPEAKER_01

Welcome back to the InfoSec Watch Podcast, your weekly dose of cybersecurity news.

SPEAKER_00

This week it feels like we're getting a masterclass in why identity-adjacent features and device management planes are such prime targets. It's a huge theme right now.

CISA Order On Unsupported Edge Devices

SPEAKER_01

Totally. When your SSO becomes just another path to exploit and your MDM is the gatekeeper to your entire mobile fleet, patching is just step one; validation and active hunting have to be step two. So let's get into it, starting with our top stories.

SPEAKER_00

First up, CISA made a pretty big move. They've directed federal agencies to identify and remove all unsupported edge devices. You know, we're talking about those internet-facing routers and firewalls that just can't receive security updates anymore.

SPEAKER_01

Yeah, the end-of-life infrastructure. It's a classic problem. Attackers have scripts running 24-7 just looking for this stuff. It's the lowest of the low-hanging fruit.

SPEAKER_00

And the takeaway really applies to everyone, not just the feds. You have to build an inventory of your internet-facing edge gear. Know what you have.

Multi‑Terabit DDoS Reality Check

SPEAKER_01

Right. And then map each model, each OS, to the vendor's support status. And be ruthless. Set an internal deadline to replace or retire anything that's out of support, no exceptions.
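That inventory-to-support-status mapping can be scripted. Here's a minimal Python sketch; the device models, IPs, and end-of-support dates are made up, and in practice they'd come from your asset database and the vendor's published lifecycle matrix:

```python
from datetime import date

# Hypothetical inventory and end-of-support dates -- in practice these come
# from your asset database and the vendor's published lifecycle matrix.
INVENTORY = [
    {"ip": "203.0.113.10", "model": "EdgeRouter-X", "firmware": "2.0.9"},
    {"ip": "203.0.113.22", "model": "FW-1000", "firmware": "5.4.1"},
]
END_OF_SUPPORT = {
    "EdgeRouter-X": date(2023, 1, 31),  # past end of support
    "FW-1000": date(2027, 6, 30),       # still supported
}

def retirement_candidates(inventory, eol_dates, today=None):
    """Flag devices whose model is past, or missing from, vendor support."""
    today = today or date.today()
    flagged = []
    for dev in inventory:
        eol = eol_dates.get(dev["model"])
        # No known EOL date is treated as unsupported: unknown means risk.
        if eol is None or eol <= today:
            flagged.append({**dev, "eol": eol})
    return flagged

for dev in retirement_candidates(INVENTORY, END_OF_SUPPORT):
    print(f"RETIRE: {dev['ip']} {dev['model']} (EOL {dev['eol']})")
```

Treating a missing EOL date as unsupported is the deliberately strict choice here: if you can't prove a device is supported, it goes on the retirement list.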

SPEAKER_00

Speaking of overwhelming things, researchers reported a massive DDoS event this week. It's being attributed to the Aisuru, or Kimwolf, botnet. The sheer scale of it is kind of mind-boggling.

SPEAKER_01

It really is. It just underscores how quickly volumetric capacity is scaling up. We're talking multi-terabit per second events now. Your, quote, traditional upstream protection can get completely steamrolled by an attack like this.

SPEAKER_00

Right. So the action item is to validate that you have an always-on DDoS plan, not just a document that sits on a shelf. I mean, do you have the emergency contact paths, the runbooks? Is scrubbing enabled or on standby?

SPEAKER_01

Exactly. And you need to have that conversation with your CDN, your WAF, and your upstream provider to confirm they can actually handle these multi-terabit per second events for your most critical domains. You don't want to find out during the attack.

SPEAKER_00

Okay, and sticking with the theme of things you don't control, there was a nasty package compromise targeting both NPM and PyPI.

SPEAKER_01

The supply chain again. What was this one dropping?

SPEAKER_00

The usual suspects: credential and wallet stealing functionality, plus some remote access malware. It's just another perfect example of the blast radius when dependency trust fails. One bad package gets in and it's suddenly everywhere.

SPEAKER_01

It really reinforces the basics. You have to enforce dependency pinning and integrity checks. Use your lock files, check your hashes, use an SBOM.

Active Exploits: SmarterMail, SolarWinds, GitLab

SPEAKER_00

And in your CI/CD pipeline, you should be blocking unknown package installs by default. Only allow things from approved, allow-listed registries and maintainers.
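As a rough illustration of that kind of CI gate, here's a minimal Python check for pip-style requirements files; the exact policy, and your package ecosystem, will differ:

```python
import re

# Minimal sketch of a CI gate for pip-style requirements files: flag any
# dependency not pinned to an exact version with an integrity hash.
PIN_RE = re.compile(r"^[A-Za-z0-9._-]+==\S+")

def unpinned_lines(requirements_text):
    """Return logical requirement lines lacking an exact pin or a --hash."""
    # Join backslash continuations so each requirement is one logical line.
    text = requirements_text.replace("\\\n", " ")
    bad = []
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if not PIN_RE.match(line) or "--hash=" not in line:
            bad.append(line)
    return bad

sample = "requests==2.32.3 --hash=sha256:abc123\nflask>=2.0\n"
print(unpinned_lines(sample))  # flags the un-pinned flask line
# In CI: raise SystemExit(1) when unpinned_lines(...) is non-empty.
```

Pip's own hash-checking mode (`pip install --require-hashes`) enforces this at install time; a check like the above just fails the build earlier, before anything gets installed.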

SPEAKER_01

Alright, that's a good transition to our vulnerabilities spotlight. These are the bugs that are being actively exploited in the wild. So listen up. Sloan, what's first?

SPEAKER_00

First, CISA added a SmarterMail issue to its Known Exploited Vulnerabilities catalog, the KEV. That's the big signal that this isn't theoretical anymore. Attackers are using it. So the patch urgency for any exposed deployments is extremely high.

SPEAKER_01

Definitely. So for that one, you need to immediately identify all your SmarterMail instances. And that includes any shadow IT setups. Lock down the admin interfaces to VPN or IP allow lists and get it patched in an emergency change window.

SPEAKER_00

Next on the KEV list is a SolarWinds Web Help Desk vulnerability. Same story. Being added to the KEV elevates the priority for any organization running it in a reachable network zone.

SPEAKER_01

Right, so you have to treat Web Help Desk as a tier one patch target. Patch it fast. But also think about segmentation. Move it away from domain controllers and other admin tooling, and then monitor really closely for any suspicious process creation coming from that application host.

SPEAKER_00

And one more from the KEV catalog this week: a GitLab SSRF, or server-side request forgery, issue. This one highlights that even older but still reachable DevOps platforms are high-value targets if they're left unpatched.

Adversary‑In‑The‑Middle On Routers

SPEAKER_01

Yeah, SSRF can be nasty. So the playbook is audit your external GitLab exposure, patch to a fixed version, and critically rotate any credentials or tokens that could have been reachable through those SSRF paths. And if you can, enable egress controls to limit what your GitLab servers can even call out to in the first place.

SPEAKER_00

Good advice. Now let's shift gears to our trend to watch. This one is, well, it's a bit of an evolution in tactics.

SPEAKER_01

Okay, I'm listening.

SPEAKER_00

Reporting this week is highlighting adversary-in-the-middle frameworks and implants, like one called DNife, that are specifically targeting routers and edge devices. The goal is to hijack traffic to facilitate credential theft or malware delivery.

SPEAKER_01

So they're turning the network infrastructure itself into the attack platform. That's clever and dangerous. Instead of attacking a host behind the firewall, they become the firewall.

SPEAKER_00

Exactly. So the defense has to evolve too. The key takeaway is to add router and edge device telemetry to your detection program. You need alerts for config changes, new admin users, or unexpected DNS and NTP changes on those devices.
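Those telemetry alerts could start as something as simple as this Python sketch. The device name, the baseline values, and however you actually pull configs and user lists from your gear are all placeholders:

```python
import hashlib

# Minimal drift-detection sketch. The device name and baseline values are
# placeholders; in practice the baseline would live in your CMDB, and the
# current config and user list would be pulled from the device itself.
BASELINE = {
    "edge-fw-01": {
        # Placeholder hash: any real pulled config will differ and alert.
        "config_sha256": "0" * 64,
        "admins": {"netops"},
        "dns_servers": {"192.0.2.53"},
    }
}

def drift_alerts(device, config_text, admins, dns_servers):
    """Compare a device's current state to its baseline; return alert strings."""
    base = BASELINE[device]
    alerts = []
    digest = hashlib.sha256(config_text.encode()).hexdigest()
    if digest != base["config_sha256"]:
        alerts.append(f"{device}: config changed (sha256 {digest[:12]}...)")
    for user in sorted(set(admins) - base["admins"]):
        alerts.append(f"{device}: new admin user '{user}'")
    for srv in sorted(set(dns_servers) - base["dns_servers"]):
        alerts.append(f"{device}: unexpected DNS server {srv}")
    return alerts

for alert in drift_alerts("edge-fw-01", "hostname edge-fw-01",
                          {"netops", "tempadmin"}, {"192.0.2.53"}):
    print(alert)
```

The alert strings would feed whatever your SIEM or paging pipeline expects; the point is that config drift, new admins, and DNS changes on edge devices each become first-class detections.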

SPEAKER_01

And for access control, this makes the case for requiring phishing-resistant MFA, like FIDO2 or passkeys, for any kind of admin access to that infrastructure. A simple password just won't cut it.

Tool Of The Week: KEV Feed Automation

SPEAKER_00

For sure. Okay, time for our tool of the week. And this one is something we've already mentioned a few times.

SPEAKER_01

Let me guess, the KEV catalog.

SPEAKER_00

You got it. Specifically, the KEV catalog's machine-readable feed. It's such a practical way to automate your prioritization of what absolutely must get fixed now, because it's based on confirmed real-world exploitation.

SPEAKER_01

It's a game changer for vuln management. The takeaway is to wire KEV ingestion directly into your vuln management pipeline. Auto-open tickets when a CVE from the KEV catalog matches an asset you own.

Actionable Move: Audit Public Edge Gear

SPEAKER_00

Right, and then enforce a strict SLA on it. Something like 7 to 14 days to patch based on the exposure of the asset. It cuts through all the noise of CVSS scores and focuses on what's actually being used to break in.
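A rough Python sketch of that KEV-to-ticket wiring. The asset records and SLA policy here are invented, and while the feed URL below is CISA's documented location at time of writing, verify it before relying on it:

```python
# CISA publishes the KEV catalog as machine-readable JSON; this URL is the
# documented feed location at time of writing -- verify before relying on it.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

def kev_matches(kev_entries, assets):
    """Match KEV-listed CVEs to owned assets and assign an SLA by exposure."""
    kev_cves = {entry["cveID"] for entry in kev_entries}
    tickets = []
    for asset in assets:
        for cve in asset["cves"]:
            if cve in kev_cves:
                # Tighter SLA for internet-facing assets, per the 7-14 day idea.
                sla_days = 7 if asset["internet_facing"] else 14
                tickets.append({"asset": asset["name"], "cve": cve,
                                "sla_days": sla_days})
    return tickets

# Live usage (requires network access):
# import json
# from urllib.request import urlopen
# kev = json.load(urlopen(KEV_URL))["vulnerabilities"]
# for ticket in kev_matches(kev, my_asset_records):
#     open_ticket(ticket)  # hypothetical ticketing hook
```

The `my_asset_records` list and `open_ticket` hook are stand-ins for your own asset inventory and ticketing integration; the KEV JSON's `vulnerabilities` array and `cveID` field are what the real feed exposes.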

SPEAKER_01

Great points. That brings us to our actionable defense move of the week. This is one thing you can do in the next week to materially reduce your risk.

SPEAKER_00

I'm ready. Let's hear it.

SPEAKER_01

This week, the focus is on the equipment that sits directly on the internet. The stuff that so often escapes normal patch SLAs. Create a list of all public IPs you own. Map them to the device model and firmware version. Flag anything that is end of support.

SPEAKER_00

Okay, so you have your problem children. Then what?

SPEAKER_01

Then you have three choices. A, replace it. B, move it behind a managed security service so it's not directly exposed. Or C, at the very least, restrict its management plane to VPN with strict IP allow lists. Just get it off the public internet.

Final Takeaways And Wrap

SPEAKER_00

That's a fantastic high-impact project. Alright, let's wrap it up with a final word.

SPEAKER_01

I think the takeaway for me this week is that exploitation is really being shaped by two big realities. First, that attackers love the edges, places where our visibility and our patching discipline are weakest.

SPEAKER_00

Right. And second, that supply chain trust failures just keep expanding the set of possible initial access paths. It feels like we have less and less control.

SPEAKER_01

Exactly. So the teams that are going to win are the ones that automate their prioritization using things like the Kev feed and the ones who can remove that legacy exposure faster than attackers can operationalize it. It's a race.

SPEAKER_00

A race we have to win. Well, that's all the time we have for this week.

SPEAKER_01

Thanks for tuning in. For more updates throughout the week, be sure to follow us on X, Facebook, and LinkedIn. And don't forget to subscribe to our newsletter for all this and more delivered right to your inbox at InfoSec.watch.

SPEAKER_00

Stay safe out there, and we'll see you next week.