Technology Tap: CompTIA Study Guide
This podcast will help you pass your CompTIA exams. We also sprinkle in other technology topics.
Top 10 Hacks in 2025 Part 2
In this episode of Technology Tap: CompTIA Study Guide, we explore a groundbreaking shift in cybersecurity threats focused on operational availability instead of data theft. Using five headline patterns from 2025, including a case where hospital scheduling systems were compromised, we highlight critical lessons for IT skills development and tech exam prep. Learn how these attacks challenge traditional security thinking and why ensuring system availability is vital for technology education and anyone preparing for CompTIA exams.
From there, we dig into poisoned updates and the uneasy truth that digital signatures prove origin, not intent. By compromising a vendor’s build pipeline, adversaries delivered “trusted” software that waited, watched, and embedded itself as infrastructure. Antivirus didn’t catch it; analysts comparing subtle anomalies did. We unpack practical defenses: behavior monitoring for signed code, attestation, SBOM use, and staged rollouts that verify after trust, not just before.
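To make "verify after trust" concrete, here is a minimal Python sketch of a staged-rollout gate. The hash value, the allowlisted hosts, and the function names are illustrative assumptions, not details from the incident or any vendor's pipeline.

```python
# Minimal sketch: a signed update still sits in a canary ring until its hash
# matches a value published out-of-band (e.g., alongside an SBOM) and its
# observed behavior stays inside an expected allowlist. All names here are
# illustrative assumptions.
import hashlib

EXPECTED_SHA256 = "hash-published-out-of-band-goes-here"
ALLOWED_OUTBOUND_HOSTS = {"updates.vendor.example", "telemetry.vendor.example"}

def file_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def promote_from_canary(update_path: str, observed_hosts: set[str]) -> bool:
    # A valid signature got the update into the canary ring; behavior decides
    # whether it reaches the rest of the fleet.
    if file_sha256(update_path) != EXPECTED_SHA256:
        return False
    return observed_hosts <= ALLOWED_OUTBOUND_HOSTS
```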
Next, the social engineering target shifts to the help desk at 24/7 casinos, where urgency is the culture. With real names, roles, and believable pressure, attackers turned resets into keys. The logs showed everything as legitimate because the system allowed it. We share fixes that work under fire: just-in-time privilege, second-operator verification for high-risk requests, audited callback flows, and playbooks that slow down when stakes go up.
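As a rough illustration of those fixes, here is a small Python sketch of a reset gate: a callback to the number already on file, plus a second operator for high-risk roles. The role names and request fields are hypothetical, not from any real ticketing system.

```python
# Hypothetical help desk reset gate: never act on the inbound caller alone.
# Role names and fields are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

HIGH_RISK_ROLES = {"finance", "it_admin", "surveillance"}

@dataclass
class ResetRequest:
    username: str
    role: str
    callback_verified: bool         # we called the number already on file
    second_operator: Optional[str]  # a different agent who reviewed the request

def may_reset(req: ResetRequest) -> bool:
    if not req.callback_verified:
        return False  # caller ID and a convincing story aren't enough
    if req.role in HIGH_RISK_ROLES:
        return req.second_operator is not None  # slow down when stakes go up
    return True
```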
Then the cloud nightmare: a leaked admin token, logging disabled, and entire environments—plus backups—deleted. No exotic exploit, just excessive privilege and shared control planes. We break down guardrails that change outcomes: least privilege everywhere, break-glass elevation with time limits, immutable backups in isolated accounts, and monitoring that attackers can’t silence.
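One way to picture "monitoring that attackers can't silence": a watcher that runs in a separate security account and alerts the moment the audit trail stops logging. This sketch assumes an AWS-style environment and uses boto3; the trail name and alert hook are assumptions for illustration only.

```python
# Minimal sketch, assuming an AWS-style setup: run this from an isolated
# security account so the admin who disables logging in production cannot
# also disable the watcher. The trail name is a hypothetical example.
import boto3

TRAIL_NAME = "org-audit-trail"

def audit_logging_enabled(session: boto3.Session) -> bool:
    cloudtrail = session.client("cloudtrail")
    status = cloudtrail.get_trail_status(Name=TRAIL_NAME)
    return bool(status.get("IsLogging"))

def check_and_alert(session: boto3.Session) -> None:
    if not audit_logging_enabled(session):
        # In the story below, the lights went off first; this is the alarm
        # that should fire when that happens.
        print("ALERT: audit trail is not logging")
```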
All roads lead to the same insight: humans aren’t the weakest link; they’re the most overused control. Real resilience comes from systems that assume trust will be abused and still contain damage—observed trust, independent logging, and workflows that don’t require perfection from people under pressure. If you’re building or defending, this is your blueprint for 2026: reduce blast radius, verify behavior, and never make a human your final barrier.
If this hit a nerve or sparked an idea, follow, share with a teammate, and leave a quick review. Tell us: where does your organization rely on trust without verification?
Art By Sarah/Desmond
Music by Joakim Karud
Little chacha Productions
Juan Rodriguez can be reached at
TikTok @ProfessorJrod
ProfessorJRod@gmail.com
@Prof_JRod
Instagram ProfessorJRod
And welcome to Technology Tap. I'm Professor JRod. In this episode: part two of the top ten hacks of 2025. Let's tap in. As I said in the previous episode, I'm not naming the companies. This actually could be a teachable moment for students and teachers out there. You can listen and do a little research to find out which company this affected. And some of these hit more than one organization, like the learning management system hack from the last episode. There are actually a couple of schools; I know one local to me that that exact thing happened to. So it may not be just one, it may be more. Anyway, for those of you who don't know me, my name is Professor JRod. I'm a professor of cybersecurity, and I love helping students pass their A Plus, Network Plus, and Security Plus exams. But every now and then I do a little twist on the topics. I like to talk about things like the history of technology, and now I'm doing the top 10 hacks of 2025, part two. So let's kick it off with number five.

After attackers realized they could extract value without breaking anything, they started asking: what if we don't steal data at all? What if we steal something more valuable? Time. That's hack number five. Healthcare, and the moment attackers realized availability hurts more than breaches. Alright, let me start this one with something that usually makes the room go quiet when I say it out loud. Hospitals can survive data loss. They cannot survive confusion. And in 2025, attackers finally understood the difference. For years, healthcare breaches followed a familiar script. Patient records stolen, databases encrypted, HIPAA violations, press release. Everyone knew the playbook. So security teams hardened electronic health records, billing systems, and patient databases. And they did a decent job, which is why attackers stopped going after those systems. Instead, they asked a different question. Not how do we steal data, but how do we stop the hospital from functioning? That's a very different mindset, and the answer led them somewhere unexpected. Scheduling.

Let me slow this down, because this is subtle. When people think hospital systems, they think patient charts, lab results, imaging. But none of that matters if you don't know who's coming in, when procedures happen, where staff need to be. Scheduling is the nervous system of healthcare, and nervous systems don't like being disrupted. This didn't start with ransomware splashed across the screen. It started quietly. A low-privileged account, a phishing email that looked boring, someone clicking during a long shift. From there, attackers moved laterally, not fast but carefully, until they reached the scheduling systems. And here's what made this attack so efficient. Scheduling systems integrate with everything, affect everyone, are time sensitive, and are often less protected. Encrypt a file and you can restore it. Encrypt coordination and chaos spreads immediately. Appointments disappear. Surgeries overlap. Staff don't know where to report. And suddenly leadership wasn't asking about cybersecurity. They were asking, can we operate today? Remember the API attack? How nothing broke but value drained out? This is the opposite. Nothing was stolen, but everything stopped. Different tactic, same principle. Let me say this the way I say it in class. Availability is not secondary, it is operational survival. We teach confidentiality first, integrity second, and availability last. Attackers flipped that script.
This attack didn't trigger immediate panic, for one reason. No patient data was leaked, which meant slower legal response, quieter media coverage, and delayed executive escalation. But internally, everything was on fire. Patients waited, doctors improvised, and nurses scrambled. And every minute increased the pressure. The ransom note didn't threaten exposure, it didn't mention data. It simply said restoration requires payment. That was it. No countdown timer, no threats. Just an inevitability. Let me ask you this: if backups exist, why didn't they just restore? Because restoring schedules isn't just restoring files, it's restoring trust and time. Which appointment is right, which version is accurate, which schedule is correct or current? You can't easily fix that. If this shows up on the exam, the question wouldn't ask what malware was used. It would ask which security principle was most directly impacted. And the answer wouldn't be confidentiality, it would be availability. You can lose data and recover. You lose time and people get hurt. This is where cybersecurity stops being abstract. And once attackers realized they could shut down operations without ever touching patient records, they asked another question.

Alright, before I tell the story, let me ask you something simple. When was the last time you hesitated before installing an update? Not a sketchy pop-up, not a random download, a real update from a trusted vendor. Signed. Right? Trusted. Exactly. That hesitation, or the lack of it, is where this hack lives. For years, security training told us: keep systems patched, install updates promptly, unpatched systems get breached. And that advice was, and still is, correct. But in 2025, attackers realized something subtle. If defenders are trained to trust updates, then updates become the delivery mechanism. No deception is needed, just patience. The notification popped up like it always did. Same vendor, same branding, same digital signature. IT teams pushed it out automatically. Why wouldn't they? This was best practice. No one called it suspicious, no one delayed it for review, and that's the moment trust became a liability.

Let me slow this down. Security teams spend enormous effort validating users, but how often do they validate software behavior after installation? Most environments check: is it signed, and is it from a known vendor? And then they stop asking questions. Attackers counted on that silence. This wasn't about tricking customers. This attack happened upstream. The attackers didn't breach thousands of companies, they breached one: the vendor. Specifically, the build environment, the update pipeline, and the system that signs code. Once malicious code was added before signing, every downstream system accepted it willingly. No alarms, no warnings, because authentication was never the problem. Remember hack number 10, the voice call? That worked because people trusted authority. Remember hack number nine, the building system? That worked because the organization trusted vendors. This is the same pattern, just scaled. Trust doesn't disappear, it compounds. Here's what surprised people. The update didn't encrypt files, didn't crash the system, didn't announce itself. It waited, it observed, it established quiet persistence, because loud malware gets removed. Quiet software becomes infrastructure. From the endpoint's perspective, this was allowed behavior. The software had permission to run, it had permission to communicate, and nothing it did immediately violated policy.
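Since nothing the update did immediately violated policy, detection had to come from comparing observed behavior against a baseline rather than from a signature check. Here is a small, hypothetical Python sketch of that idea; the process name, baseline contents, and observed hosts are invented for illustration.

```python
# Hypothetical baseline comparison: even signed, "trusted" software gets
# flagged when it starts talking to hosts the baseline has never seen.
BASELINE = {
    "vendor_agent": {"updates.vendor.example"},  # hosts seen during normal operation
}

def unexpected_connections(process: str, observed_hosts: set[str]) -> set[str]:
    expected = BASELINE.get(process, set())
    return observed_hosts - expected

# Example: the trusted agent suddenly reaches out to an address no one expected.
suspicious = unexpected_connections("vendor_agent",
                                    {"updates.vendor.example", "198.51.100.23"})
if suspicious:
    print(f"Signed software, unexpected behavior: {sorted(suspicious)}")
```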
Which brings us to an uncomfortable realization. Most security tools are designed to stop unknown software, not trusted software behaving badly. Let me say this plainly, because this is where students usually pause. Digital signatures prove who sent the code. They don't prove what the code will do. We confuse authentication with safety, and attackers exploit that confusion. The attack wasn't discovered by antivirus, it was discovered by analysts comparing notes. Different organizations noticed similar strange behavior, subtle things: unexpected outbound connections, processes that shouldn't exist but did. Only later did someone ask the right question. What if the update itself is the problem? That question took weeks to ask, and by then the damage was widespread. If this shows up on the exam, the question wouldn't be how did the malware get installed. It would be which security risk arises when trusted third-party software is compromised upstream. And the answer wouldn't involve end users at all. It would be supply chain risk. A signed update proves origin, not intent. Say that again. Because once you internalize it, you stop treating updates as unquestionable. And once the attackers realized they could spread silently through trusted software, they asked another question. What if we didn't attack the systems at all? What if we attacked the people who fix the systems? That brings us to hack number three: casinos, the help desk, stay-on-the-line urgency, and the front door no one thought was the front door.

Alright, before I tell you what happened here, let me ask you something I ask every semester. Where is the front door in your organization? Most people point to the firewall, the VPN, the login page. Almost no one says the help desk, and that's exactly why attackers did. Casinos don't work like normal businesses. They operate 24/7. Downtime costs money immediately. Customer experience is everything, which means one thing for IT: speed matters more than caution. Help desk staff are trained to fix problems quietly, keep operations moving, and avoid disrupting guests. Attackers studied that culture carefully. They didn't start with malware, they started with phone calls. Polite, urgent, frustrated. The caller sounded like an employee who couldn't log in, had guests waiting, needed access now. And here's the important part. They sounded believable, because the attackers had already done their homework.

Let me stop here and underline something. Social engineering works best when it sounds boring. No drama, no threats, just inconvenience. Before the calls ever happen, attackers gather information. Not hacking tools, information. Employee names from LinkedIn, job roles from public postings, shift schedules from casual conversation, vendor names mentioned in forums. So when the help desk answered, the attacker didn't say, hi, I need help. They said, hey, this is Mike from Table Services. I'm locked out again, same as last week. I'm on the floor and I've got guests waiting. That sentence does something powerful. It creates pressure, and pressure collapses process. The help desk followed procedure. That's the uncomfortable truth. They asked a few questions, they verified what they could, they reset credentials. Because from their perspective, the caller knew internal language, the request made sense, and denying access would cause immediate problems. And this is where security theory meets reality. Policies don't survive urgency well. Remember hack number 10, the AI voice call? Same weakness.
Authority plus urgency equals compliance. Different tool, same human response. Once attackers had one account, things moved quickly, because casinos, like many organizations, rely heavily on centralized identity systems, role-based access, and trust between departments. One reset became another, another call, another employee needed help. Within hours, access systems were confused, credentials overlapped, trust chains broke. Room keys stopped working, staff accounts conflicted, operations slowed, and suddenly leadership wasn't asking about cybersecurity. They were asking, why can't guests get into their rooms? Let me say this plainly. They didn't bypass multi-factor authentication. They convinced someone to undo security on their behalf. And that's a very different kind of failure. Security logs didn't show brute force attempts, suspicious malware, or abnormal traffic. They showed help desk activity, credential resets, account changes, all legitimate. Which means from the system's perspective, nothing was wrong. If this showed up on an exam, the question wouldn't be what exploit was used. It would ask which role represents a high-risk attack surface due to social engineering. And the answer wouldn't be administrator, it would be help desk. The help desk isn't a support function, it is an access control system staffed by humans. Once you see it that way, you stop treating help desk security as optional. The incident didn't end with data theft, it ended with downtime, confusion, embarrassment, and a painful realization. The attackers didn't need to break in, they just needed someone to help. And once attackers proved they could talk their way past the fences, they asked an even bigger question. What happens if we don't need people at all? What happens if we get one account that controls everything? That's hack number two: the cloud, admin access, and the morning entire environments disappeared.

Alright, before we talk about what happened here, I need you to clear up one misconception. The cloud is not safer by default. It is faster by default, more convenient by default, more powerful by default, and power without restraint is dangerous. Just ask Peter Parker, right? The incident didn't start with alerts, it started with confusion. An engineer logged into the cloud dashboard and froze. No instances, no storage buckets, no databases. At first, the assumption was simple. Dashboard glitch. Refresh, same thing. Another engineer logged in, same emptiness. And then the realization hit, slowly, painfully. This wasn't an outage. This was deletion.

Let me stop here and ask the question I always ask. Who has admin access in your environment? Really? Not who should, who actually does. Because this hack lives in the gap between the two answers. There was no malware, no phishing email that morning, no brute force attack. The attacker already had what they needed: an admin token. And once they have that, in the cloud you don't break rules, you use them. This part is uncomfortable because it's mundane. The token didn't come from some elite hack. It came from a repository that was accidentally made public, a CI/CD pipeline log, a configuration file that wasn't supposed to be exposed. In other words, normal mistakes. And attackers are very good at watching for normal mistakes. They didn't start deleting things. That would have been loud. First they checked permissions, then they checked logging, and then they did something very smart. They turned the lights off.
Audit logs disabled, monitoring reduced. Because if no one is watching, everything that follows looks like silence. Remember hack number six, the APIs? Remember hack number seven, the sessions? Same pattern here. The attacker didn't need to exploit anything. They stepped into a role that already had authority. Only after everything was quiet did the real damage begin. Resources deleted, not encrypted, deleted. Databases, compute, storage, and then backups. Because in many environments, backups are protected by the same admin account, which means the safety net fell with everything else. Let me say this plainly. When backups live under the same identity as production, which version is clean? Which account can even restore anything now? Recovery took weeks. Some data never came back. If this shows up on the exam, the question wouldn't be what malware caused the breach. It would ask which misconfiguration allowed a single compromised credential to cause total loss. And the answer would be excessive privilege, a lack of least privilege. Say that again. The cloud doesn't forget mistakes, it amplifies them. Because once you understand that, you stop treating admin access casually. After this incident, organizations started asking new questions. Do we really need permanent admin accounts? Are backups isolated from production identities? Can logging be disabled by the same people it monitors? Those questions came too late for this victim, but they defined the future.

And now we arrive at the final hack. Not because it's the most technical, not because it's the most expensive, but because it's the one every single other hack depends on. Hack number one isn't malware, it isn't phishing. It's the human layer, and the year everyone finally admitted that humans were never meant to be security controls. Alright, before I say anything, let me say this plainly. Hack number one isn't a single incident. This wasn't one breach, this wasn't one company, this wasn't one headline. Hack number one was a pattern. And once you see it, you can't unsee it. Think back through everything we've talked about so far. The phone call that sounded normal, the building system no one monitored, the library no one thought to secure, the LMS sessions that never expired, the APIs that answered too freely, the hospital schedules that collapsed, the trusted updates that betrayed everyone, the help desk that just wanted to help, the cloud admin account that erased everything. Different industries, different systems, different tools. Same root cause. Someone trusted something they didn't verify.

Let me slow this down, because this is where people usually push back. They'll say, well, users need better training, or people are the weakest link. I want to challenge that idea. Humans aren't the weakest link, they're the most overused control. We ask people to do impossible things. Detect fake voices that sound real, identify malicious requests that look normal, question authority under pressure, follow perfect procedures during chaos. And then we act surprised when they don't. In 2025, attackers didn't exploit ignorance, they exploited expectations. They behaved exactly the way the systems and the people expected them to. This year, defenders stopped saying, if only users followed the rules, and started asking, why did the system allow this to depend on a person at all? That's a big shift, and it's an uncomfortable one. Because it means the problem isn't users, it's design. Remember the line from the start of the first episode?
Attackers in 2025 didn't break security, they operated inside it. This is what that means. Every hack worked because the system said, yes, that's allowed. Let me give you the framing that ties this whole episode together. Cybersecurity used to be about keeping the bad people out. In 2025, cybersecurity became about containing damage when trust is abused. Because trust will always be abused. That's not pessimism, that's realism. After these incidents, organizations started shifting how they think. Not overnight, not perfectly, but noticeably. They started asking different questions. Why does one person have so much authority? Why does one login grant so much access? Why do we trust the system without watching it? And those questions are healthier than any tool. Security doesn't fail when people make mistakes. It fails when the system requires people to be perfect. That's hack number one.

If you're a student listening to this, you're not training to memorize tools. You're training to recognize patterns. If you're already working in IT, your job isn't just fixing things, it's asking uncomfortable questions about trust. And if you're designing systems, never ask a human to be your last line of defense. So when people look back at 2025 and ask, why were there so many breaches? The answer won't be hackers got smarter. It'll be this. We finally realized that trust, unchecked, unmonitored, unquestioned, was the most dangerous vulnerability we had. And once you understand that, you start building systems differently.

Alright, that's gonna do it for this episode of Technology Tap. If this one made you uncomfortable, good. That means you're paying attention. Because the goal isn't fear, it's awareness. And the moment you start questioning trust, that's when security actually begins. I'm Professor JRod, and happy new year. I'm looking forward to what 2026 brings us all. This has been a presentation of Little Cha Cha Productions, art by Sarah, music by Joakim Karud. We're now part of the PodMatch network. You can follow me on TikTok at ProfessorJRod, that's J R O D, or you can email me at ProfessorJRod@gmail.com.