The Hacker's Cache
The show that decrypts the secrets of offensive cybersecurity, one byte at a time. Every week I invite you into the world of ethical hacking by interviewing leading offensive security practitioners. If you are a penetration tester, bug bounty hunter, red teamer, or blue teamer who wants to better understand the modern hacker mindset, whether you are new or experienced, this show is for you.
#71 Metasploit Creator: Why CVEs Won’t Save You in 2025 ft. HD Moore
In this episode of The Hacker’s Cache, Kyser Clark sits down with HD Moore, the legendary creator of Metasploit and CEO of RunZero, to discuss why relying on CVEs is putting organizations at risk in 2025. They unpack the truth about vulnerabilities that never get CVEs, the hidden dangers of SSH exposures, and why attackers are outpacing defenders through innovation. HD also shares bold takes on AI’s role in cybersecurity, the overreliance on tools and certifications, and why exposing version numbers might actually make systems safer. This episode is packed with insights every ethical hacker, pentester, and cybersecurity professional needs to hear.
Connect with HD Moore: https://hdm.io/
Connect
---------------------------------------------------
https://www.KyserClark.com
https://www.KyserClark.com/Newsletter
https://youtube.com/KyserClark
https://www.linkedin.com/in/KyserClark
https://www.twitter.com/KyserClark
https://www.instagram.com/KyserClark
https://facebook.com/CyberKyser
https://twitch.tv/KyserClark_Cybersecurity
https://www.tiktok.com/@kyserclark
https://discord.gg/ZPQYdBV9YY
Music by Karl Casey @ White Bat Audio
Attention Listeners: This content is strictly for educational purposes, emphasizing ETHICAL and LEGAL hacking only. I do not, and will NEVER, condone the act of illegally hacking into computer systems and networks for any reason. My goal is to foster cybersecurity awareness and responsible digital behavior. Please behave responsibly and adhere to legal and ethical standards in your use of this information.
Opinions are my own and may not represent the positions of my employer.
[Kyser Clark]
Welcome to the Hacker's Cache, a show that decrypts the secrets of cybersecurity one byte at a time. And today, my guest is a legend in cybersecurity, HD Moore. He's best known as the creator of the Metasploit Framework, one of the most influential tools ever built for ethical hacking and security research.
Over the years, HD has shaped how the industry approaches vulnerability discovery, open source collaboration, and exposure management. He's now the founder and CEO of RunZero, a company redefining asset discovery and visibility across complex networks. From pioneering offensive security frameworks to dissecting modern challenges like SSH exposures and unauthenticated scanning gaps, HD's work continues to push the field forward.
So, HD, thank you so much for hopping on the show. Go ahead and unpack some of your experience and introduce yourself to the audience.
[HD Moore]
Sure. Yeah, thanks, Kyser. My background is basically being an internet hoodlum hacker in the 90s, when you didn't really have a job for doing this type of work yet.
There wasn't really any kind of training you could take to do it. So all the folks you know that may have, you know, 20-plus years of experience in security, they typically didn't start off as professionals. They started off kind of on IRC, on forums, just messing around on the internet, basically.
So, I've always loved the idea that both telephone lines and with internet, that you can just kind of pick a random number and reach out, and there might be something there. And who knows what you'll find, right? It could be a dam control valve.
It could be some random XP machine. It could be almost anything. So, I love the idea of like exploring stuff, mapping networks.
And from there, I spent a lot of time just kind of building on that to then work on things like Metasploit. They're much more exploit focused. But since then, I've actually stepped backwards.
And now I've been very focused on network discovery and fun kind of asset inventory tricks.
[Kyser Clark]
Yeah. And one thing you touched on there was like back in the day, there wasn't a lot of training. So, like looking at, I mean, you've been in the field for a long time.
And now you're looking at it now, like there's kind of like an influx of training. Do you think that's a problem at all? And like, I know a lot of people like, you know, there's some certification haters out there.
I like certifications, obviously. But yeah, what's your take on like the modern training in cybersecurity, good, bad, indifferent?
[HD Moore]
Mostly indifferent. I mean, certs are great if your customers require them. But, you know, end of the day, it's really knowing something, not having a piece of paper saying you know something.
So it depends on where you're working. If you work for a consulting firm or like a large corporation, they typically, you know, pay more for folks with certs. If you're in a small company like a startup, I mean, you don't even have to graduate high school half the time.
As long as you can do the work, no one really cares. So I think certs and the training around them are fantastic for letting folks learn a particular subset of the field really well. But I'm not particularly, you know, in love with certs or hate them by any means.
It's very much just another way to learn the trade.
[Kyser Clark]
And so looking back at Metasploit, what was the original problem that you were trying to solve? And how different was the early hacking landscape compared to today's ecosystem of frameworks and automation?
[HD Moore]
Sure. I mean, going back to kind of when Metasploit was created, there were no kind of multi-exploit toolkits out there besides some stuff from hacking groups like THC or TESO or LSD-PL. But typically, every exploit was its own unique program with its own dependencies, its own way to compile it, hard-coded shellcode, hard-coded targets.
If you wanted to change how an exploit worked, either like adding a target or changing a payload or making it connect back separately, you'd effectively have to like re-implement the entire thing. It was painful. I mean, I had to like learn C not because I had any desire to learn C, but because I was either modifying exploits or learning about vulnerabilities that were based in C language.
And I love writing the much easier programming languages like, you know, Visual Basic, Perl, Ruby, and these days Go. So kind of at the time, every exploit was kind of unique. You had to kind of create your own collection of them.
That was the idea behind Metasploit. Let's put it all in one place. Let's make it safe.
Let's make sure that this really cool security research and innovation that's happening still works 20 years later. Otherwise, you know, you pull a piece of code from a presentation at, like, DEF CON 10. Good luck getting it to compile these days, right?
So the idea behind Metasploit is let's take all that awesome work the security community is doing and kind of put it into a functioning archive, where you can still go back in time and use techniques from 15, 20 years ago, and they still work just great today. And to that end, I mean, Metasploit's fantastic. I really love what Rapid7's been doing with it.
Like there's thousands of exploits. I think when I left, we were, you know, maybe at one or two thousand. Now there's even more.
So I really love kind of how the project has come together and how the community still supports it.
[Kyser Clark]
Yeah. And like I told you on the recording, I used Metasploit earlier today and I use it very frequently. So appreciate your contributions to the cybersecurity field and ethical hacking.
And we're definitely spoiled, you know, because I've been a pentester for about a year and a half now. I've been in the field for seven years. So, you know, Metasploit has always been around since I've been in the field.
And, you know, I feel like I'm a little bit spoiled, you know, with all the tools that kind of do the work for you nowadays.
[HD Moore]
There is the kind of danger of that, right? I don't want to name which Slack it was, but there's a really popular security Slack, and I popped into one of the channels and someone's like, hey, which ChatGPT do I use to learn this tool? And like every circuit of my brain just screamed at once, like, what?
No, like, go learn the tool. Like, why are you trying to ask a particular model to do it for you? It was amazing.
So on one hand, you have folks who go really, really deep and they learn how to build their own tools and they really contribute to everything. Then you have other folks who expect literally, like, ChatGPT to do it for them now. So I think we're always going to have that kind of split of desire and expertise in the field.
But I think one thing that's better these days is it's become much easier to build your own security tools using, you know, existing libraries. Like the Python ecosystem around Impacket for SMB relays, NTLM cracking, Kerberos ticket stuff is amazing. Like there's way more tools and exploits these days because of the contributions by, you know, the Impacket folks at Core Security, or other libraries like Metasploit, because it makes the framing and the testing and building of exploits themselves much easier.
[Kyser Clark]
I'm so glad you mentioned ChatGPT because I want to talk about AI with you and get your take on it. So obviously the AI wave is transforming security tooling at every layer. How do you see AI shaping the future of offensive security and defensive security?
And do you think it's overhyped or do you think it's dangerous in security research? What's your take on AI overall?
[HD Moore]
I mean, the biggest danger for AI is that you trick yourself. I mean, flat out, people will convince themselves that AI did something that it didn't, and it's really hard to avoid that. Like, you know, I'll give you an example from this weekend.
I was just playing around with a new IPMI implementation to try out some of the new ciphers. And, you know, I was like, let's try Claude. I haven't tried Claude before.
You know, so I asked it to basically write an IPMI RAKP/RMCP+ hashing implementation, like that old protocol layer. And it claimed to have done it. And it gave me all these hashes.
It looked fantastic. And I went and tried to do it on a real system. And it's like, oh, your password's wrong.
I was like, no, that password's right. And it would argue with me for hours about how, no, I was wrong. And the password was incorrect.
So I finally, like, dug deep enough in the code to realize every single thing around the session handling was completely wrong. It just skipped over the entire open-session part. And it had just been, like, gaslighting me for an hour while I was paying however many cents per minute to use it.
So I think that's really the challenge: a lot of folks will say, look at this cool thing I found, but then the question is, is it real? Did I just delude myself? And there's so much about how these things are built that it's easy for people to get caught up in, like, the buzz and the almost casino aspect of it.
Like every time you ask, you know, Claude to write something for you, you feel like you're pulling an arm of the one-armed bandit once more and seeing for 17 cents, do I get an exploit out of it or not, right? So it's one of those things where I feel like people definitely over-hype it. They definitely over-rely on it.
I think we're seeing the first wave of AI hype finally crash and kind of recede. I feel like we peaked around August, September, at least for security. And now we're starting to see like the useful stuff, not just the hype around it.
So, you know, there's obviously way too many AI companies and they're not all going to be around in a couple of years. So the question is like, what are the long-term cases? One specific area where I think it does a fantastic job is with fuzzing because you don't have to confirm that it's correct.
The program crashes or it doesn't. So anything where you can automatically verify the results of the LLM generated data is a really good use case, but anything that requires a human to then, one, understand the context and figure out whether it's right and whether it's slightly wrong or not, that's much harder to operationalize.
[Kyser Clark]
Yeah. I'm glad you mentioned the gaslighting. So I don't use Claude, but from what I hear, they've either just released or they're coming out with, like, a mode that doesn't always hype you up, because you know how AI will be like, you're absolutely right.
You're never wrong. So there's, like, a setting that reverses that, and it really criticizes you, which is definitely needed for LLMs, in my opinion.
[HD Moore]
Yeah, for sure. For things like research papers and talks, I've used LLMs to critique what I was working on. It's like, hey, pick out the holes in my argument.
Where is it inconsistent? I think for that, it's pretty good. Like I definitely wouldn't want to take any of the output and put it into what my writing is.
But I was very, very sad recently. I spent probably two months working on a keynote for Sector. And it's a lot of, you know, writing presentations is the bane of my existence.
I loathe every second of it, but it's important to do it. I just, I'm not particularly good at it. And I just have to spend so many hours to get something functional.
So I spent all this time doing the prep, doing the rehearsals, doing this stuff. I give the presentation, it goes off pretty well. And I was like, let me just like try Gemini and see what it would do.
So I gave it basically my script from the talk. And it came back with a super polished video that had like better delivery than what I did myself. I'm like, oh, I think I'm obsolete already.
[Kyser Clark]
Speaking of obsolete, that's something that I think about a lot. And I talk about on this podcast a lot is how AI has essentially replaced like lower level tech and cybersecurity roles and internships. And we're seeing like a lot of tech companies laying off, not like tech workers, but like white collar workers.
Like Amazon just laid off, I think like 30,000 people or something like that. So, I mean, it seems like AI is definitely destroying more jobs than it's creating. And I think it's still a problem, but I mean, what's your take on that?
Do you think it's a problem and do you see tech and cybersecurity work getting replaced with AI anytime soon?
[HD Moore]
I mean, folks are definitely over-investing for sure, right? I mean, there's no chance of us getting the economic output that we're putting into it anytime soon. There's going to be a lot of losers when the bubble fully pops.
If you go back to like the dot-com crash, we overbuilt telecommunications like fiber and things like that. So you end up having tons of dark fiber that never got used for 10 more years. We may be in the same spot with data centers.
Like all it takes is an algorithmic change to make everything more efficient. You don't need 10 more data centers. You can get by with half of one or one, right?
So I think we're definitely over-building, over-investing. I think folks who are trimming their workforce and saying it's related to AI are not really being truthful. I feel like that's their excuse for doing basically silent layoffs.
Salesforce is a good example of that. They said, hey, we're not going to hire any new developers. We're going to get more with AI.
I think that was a quote from a couple years ago. And so they may have done that, but effectively they're just laying it off. They're just letting people attrition and they're not back-filling them, trying to get more value out of the team they have.
And while it's always good to shoot for efficiency, blaming AI seems kind of cliche at this point. Just tell people we're trying to do more with less or we're trying to be more efficient, but don't pretend that AI is actually going to change those jobs because it will not. Some folks may say, well, for the spreadsheet monkey job where someone's really just taking these reports and doing these things, spitting it back out again, I can obviously have AI to do that for me.
And the answer is yes, if you review it, but are you going to review it? Are you actually going to make sure it's correct? Otherwise, you're just going to, again, gaslight yourself.
You're going to rely on data that's now not true, or there are mistakes in it that you're not going to find unless you have somebody actually doing it. So I think we're still far away from being able to rely on AI to replace jobs. I think there's definitely areas where it's able to improve your job, make things easier, or help explain something, but we're pretty far away from it replacing us as far as quality goes.
And folks who are leaning too hard into that will find out soon enough that that was a mistake.
[Kyser Clark]
Yeah, I appreciate you unpacking that. My take is I think there is a possibility AI is going to replace tech and cybersecurity jobs in the long run. And I'm just upskilling and learning as much as I possibly can to future-proof my career.
And if people start getting replaced, hopefully I'm the last on the chopping block. And if people aren't getting replaced, well, I got better and I upskilled myself. So it's a win-win for me.
And I'm just hoping for the best and preparing for the worst. It's kind of my mentality with it.
[HD Moore]
That's fair. I mean, one thing to kind of keep in mind too is that LLMs are all trained on existing data. So it's not going to be able to do something that it doesn't know about yet.
And it may not be able to do something that didn't exist six months ago, based on the training cutoff. So if you're working on your own tools, if you're finding new vulnerabilities, if you're finding new ways to mess with protocols, or new web app bugs, AI is not going to replace that because literally it doesn't know about it. There's no way it's going to find the same thing the same way.
So I think it's more important than ever to have humans in the loop doing the research, doing the pen testing, doing assessments, because you're going to find things that an AI is not going to be able to help you find in the near term.
[Kyser Clark]
Yeah. You know, what's funny is, that's so funny because I was saying I was using Metasploit today, and I had asked ChatGPT, I was like, how do I add a custom Metasploit module? And I just wanted to see if it would give it to me.
And it told me to use Metasploit 3, and we're on 6 now. And I was like, man, this thing's kind of outdated. It was like, launch Metasploit 3.
[HD Moore]
I was like, man, what's going on here? Yeah, it's about, what, nine years old? Something like that.
It's pretty far out.
[Kyser Clark]
Yeah. All right, HD, let's move on to the Security Mad Libs. Are you ready for Security Mad Libs?

[HD Moore]
I'll try to be, yeah.

[Kyser Clark]
So for those who are new to the show, HD will have 40 seconds to answer five fill-in-the-blank questions. If he answers all five questions in 40 seconds or less, he'll get a bonus sixth Mad Lib that's unrelated to cybersecurity. Kind of like Family Feud. His time will start as soon as I ask the first question.
All right, here we go, HD. If I had a hacker catchphrase, it would be...

[HD Moore]
Cash me outside.

[Kyser Clark]
My most expensive mistake in tech was...

[HD Moore]
AI.

[Kyser Clark]
A time I got blamed for something I didn't do was when...

[HD Moore]
Your boss sold exploits to Russia.

[Kyser Clark]
My favorite aha moment in hacking came when...

[HD Moore]
You realize that the machine you're hacking has been your local machine the entire time.

[Kyser Clark]
If I had a cyberpunk movie made about me, it'd be called...

[HD Moore]
Kyserpunk.

[Kyser Clark]
32 seconds.
Great timing. And let's just go ahead and do the bonus. You earned the right to the bonus.
So here it is. It can be related to security if you want to relate it to security, but it doesn't have to be. You can even dodge the question entirely if you don't think it's a good question.
So here it is. A modern tragedy is running out of...

[HD Moore]
Compute credits.

[Kyser Clark]
And you can unpack that as much as you want. This part isn't timed, so if you want to explain it, you can. If not, we can just leave it at that.

[HD Moore]
I'll just pick on Claude again. You know, if you're not paying the $200 a month fee, then typically you'll run out of credits in the middle of some problem.
And I'll say, try again at 6 p.m., try again at 10 p.m. And, you know, it's kind of like it stops all your work right where you are. So, you know, there's definitely kind of a casino mentality when it comes to using these LLMs where the biggest concern is running out of credits halfway through.
[Kyser Clark]
Yeah, that is a modern tragedy. That's a great answer. So, man, let's unpack the most interesting responses you had on Security Mad Libs.
You had two interesting ones: "a time I got blamed for something I didn't do was when," and "my most expensive mistake in tech." So do you want to explain one of those?
[HD Moore]
Sure. If you've been following all the drama around L3 Trenchant, effectively somebody who went by the name Jay Gibson in an article said he was unfairly blamed for exploits leaking to Russia when he worked at this company. And his boss said, no, no, someone hacked your phone and they took your stuff and you're responsible for leaking your exploits.
He's like, I didn't even have access to those. I couldn't have done it. And then, so that was kind of in the news for a little bit.
And then shortly after, about a week later, his boss was indicted for selling the same exploits to Russia, to a Russian cyber broker. And effectively, it makes it really clear that his boss basically framed him for the leak of exploits to take the folks off his trail. So it's been a pretty awful story.
A lot of folks I know have worked with these folks and were just kind of blown away by the lack of trust, by someone betraying not just their friends and their company, but their country and all that fun stuff. So it's been pretty rough. A lot of folks in the industry are shaken up a little bit by the Trenchant news, especially with somebody framing their employee for a leak.
[Kyser Clark]
And when did that happen?
[HD Moore]
I want to say the actual leaks were ongoing for about three years, but the FBI made it all public about a week ago.

[Kyser Clark]
Interesting. I need to do more research on that. I don't know a lot about it.

[HD Moore]
It's a wild story. It's definitely interesting to see the first article with the Jay Gibson comment about being fired for the leak. And then shortly after, it comes out that his boss was selling them all to Russia. And since then, his boss has pled guilty to the charges. So it's not even alleged at this point.
[Kyser Clark]
Wow. Wow. Yeah, that's definitely something I need to unpack after recording here.
But moving on here. So in one of your recent talks, you talked about SSH exposures and vulnerabilities. You described SSH as effectively "the other secure transport."
What makes SSH such a fascinating and still underappreciated attack surface in 2025?
[HD Moore]
Well, I mean, we use things like TLS for HTTPS websites all day long, right? We all know how it works. There's a certificate, and it's signed by something like a CA, and you click through and you get the thing.
SSH has something very similar, but there's no real CA system. It's simply every host has its own key. And then the protocol itself is pretty wild.
There's not a single way to authenticate. Some of the most common ways you authenticate to SSH are effectively just keyboard-interactive, which means it asks you a question and you respond. It's pretty ill-defined for a protocol that's that widely used.
But kind of the reason I mentioned "the other secure transport" is it's one of the most commonly exposed admin services on the internet. It is just behind HTTP. It's more common than RDP.
It's everywhere. Every type of device, every OS you can imagine has an SSH server at some point. And we use SSH for all kinds of things from accessing cloud environments to deploying stuff to even doing like ZFS file system syncs in our NAS environment.
Like all kinds of stuff goes through SSH. So we did some research starting almost two years ago now, where we looked at all the other weird stuff out there. So not OpenSSH, not the common ones, but what about everything else out there, and which of those completely messed up the protocol?
So we look at the negotiation and handshake and say, how about we just skip authentication and ask for a shell anyway, and just start jamming shell requests in every other request in the flow. And we built a tool called Shamble, S-H-A-M-B-L-E, that automatically tries all this stuff. So if you have a network full of oddball SSH daemons, you can just run Shamble locally and it'll randomly drop shells.
We reported about 15 or so vulnerabilities from it, and it's still finding new stuff all the time. So it's just a really fascinating chunk of exposure that I think a lot of folks aren't aware of. There's also some really cool tricks you can do in SSH.
If you have someone's public SSH key, let's say from GitHub, you can then figure out which computers on the internet that key has access to, with just their public key. And so when the xz-utils backdoor came out, we went hunting. We took the public key for the Jia Tan user from GitHub, and we tried to authenticate with that public key to every single SSH server on the internet, trying to figure out which other servers that user would have had access to.
And in the meantime, in doing so, we found a ton of other vulnerabilities, memory corruption stuff, all kinds of other bugs, but we actually found zero real hits for Jia Tan. So we were trying to dox the Jia Tan backdoor and find the person involved, and in effect, we ended up finding a bunch of zero-days instead.
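For readers curious how that public-key trick works at the protocol level: per RFC 4252, an SSH client can offer a public key with the signature flag set to FALSE, and the server answers whether the key would be accepted before any signature is ever checked, so no private key is needed to probe. A minimal sketch of the wire message (function names are mine; in practice this payload would be sent inside an established, key-exchanged SSH transport):

```python
import struct

MSG_USERAUTH_REQUEST = 50  # SSH_MSG_USERAUTH_REQUEST, RFC 4252

def _string(b: bytes) -> bytes:
    """SSH wire 'string': uint32 length prefix followed by the payload."""
    return struct.pack(">I", len(b)) + b

def pubkey_probe(username: str, key_algo: str, key_blob: bytes) -> bytes:
    """Build an SSH_MSG_USERAUTH_REQUEST offering a public key WITHOUT a
    signature (the FALSE boolean byte). A server that would accept the key
    replies SSH_MSG_USERAUTH_PK_OK; otherwise it replies USERAUTH_FAILURE.
    Either way, the prober learns whether the key is authorized."""
    return (
        bytes([MSG_USERAUTH_REQUEST])
        + _string(username.encode())
        + _string(b"ssh-connection")   # service name
        + _string(b"publickey")        # auth method
        + b"\x00"                      # FALSE: "would this key work?"
        + _string(key_algo.encode())   # e.g. ssh-ed25519
        + _string(key_blob)            # the public key blob only
    )
```

Repeating that probe against many hosts with a key scraped from GitHub is, in effect, the hunt HD describes.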
[Kyser Clark]
Nice. Yeah, that's really interesting. And that's something I need to look into more.
And is that top public?
[HD Moore]
Oh, yeah, it's all public. It's shamble.com, which will redirect to a GitHub repo.
It's github.com/runZeroInc/shamble. And it's all in Go. It's all open source.
We had to fork the SSH library and even the TLS library to support all the oddball stuff we do.
[Kyser Clark]
And then another one from your presentations. So you have a talk, "Your Next Incident Won't Have a CVE," and you pointed out how many real-world breaches come from exposures that never get CVEs assigned.
How do you think organizations should start reframing their vulnerability management mindset?
[HD Moore]
Sure. There's really two reasons for it. Typically, for a lot of emerging vulnerabilities, there's no CVE yet.
The vulnerability just became public that second. What we've seen the last couple of years is that attackers are often exploiting a vulnerability before the vendor knows about it, before a CVE is assigned. And so you'll have a breach before you even know what the vulnerability is.
There's literally no CVE by the time you're compromised. Then you have another category of vulnerabilities, which are not typical CVEs. These could be misconfigurations, default passwords, other weaknesses, default settings that are bad.
And those get used to break into machines all day long, but you're never going to get a CVE issued for it, because it's not something the vendor's going to patch. It's going to be inherent behavior in the protocol. So a good example there would be IPMI, your HP iLOs, your Dell iDRACs, all those.
They speak a protocol called RAKP. And the entire protocol itself is vulnerable. It will effectively give you the password hash remotely if you ask for it nicely.
And then you can crack the hash when you get it. So almost by definition, the protocol is vulnerable, but there's no fix for the protocol because that's just how the protocol works. Effectively, every machine that supports IPMI has to expose this dodgy authentication system that shares your password hash.
And so that's a case where it's really simple to go dump the hashes using Metasploit and then crack the hashes, then go take over the machine through its iLO or iDRAC card, whatever. It doesn't matter how secure your server is. If you can hijack the BMC, you control it: you can boot the thing to a rescue disk, backdoor the hard drive, and so on.
So there's a lot of interesting vulnerabilities and real-world exploits that either are happening through vulnerabilities that don't have a CVE assigned yet, or happening through things that will never get a CVE assigned because they're configuration-based.
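To make the IPMI point concrete: in the RAKP exchange, the BMC returns an HMAC that is keyed with the user's password (computed over session IDs, nonces, the BMC GUID, the role byte, and the username per the IPMI 2.0 spec), so anyone who captures it can brute-force the password offline. A minimal sketch, with function names of my own choosing and the session data treated as an opaque blob:

```python
import hashlib
import hmac
from typing import Optional

def rakp_hmac(password: bytes, session_data: bytes) -> bytes:
    """RAKP message 2 carries HMAC-SHA1(password, session_data). Because
    the HMAC key IS the user's password, the exchange leaks a crackable
    hash to anyone who asks."""
    return hmac.new(password, session_data, hashlib.sha1).digest()

def crack(captured_hmac: bytes, session_data: bytes, wordlist) -> Optional[str]:
    """Offline dictionary attack against a captured RAKP hash: recompute
    the HMAC for each candidate password and compare."""
    for candidate in wordlist:
        if hmac.compare_digest(
            rakp_hmac(candidate.encode(), session_data), captured_hmac
        ):
            return candidate
    return None
```

This is the same idea behind Metasploit's IPMI hash-dumping workflow HD mentions: dump the hashes, then run a loop like `crack()` against a wordlist.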
[Kyser Clark]
Yeah, I mean, that's a great point you make there, because I think a lot of people are like, oh, there's no CVE, so it can't be that bad, right? So what would you say to people who might be exposed to this stuff where there's no CVE assigned? Like, how would you explain it to someone who was of the opinion that, hey, there's no CVE, so it's not a problem?
[HD Moore]
Yeah, I mean, that's a good question. It's like, find your pentester friends and ask them what they use to break in, and half the time there won't be a CVE for it, right? It's going to be, oh, I stole a credential here, or I did this, or there's this bug here, or I did a relay or whatnot.
And those are all the fun bugs. So the challenge right now is a lot of the newer security tools are CVE-only. Some of the new EDR-based vuln management tools, where your antivirus or anti-malware includes a vuln scanner by default, that vuln scanner only reports things where there's a CVE associated.
So by definition, you're going to miss most of it. You're missing a lot of exposure. And if you don't know any better, if you didn't know that the tool was only reporting one type of vuln, not everything, then you're going to assume, hey, this machine's patched.
It's great. I'm good. And instead it's got a default credential.
It has a default password hash leak, like the IPMI issue, or something like that. So you really have to be more careful what tools you use, and use something that actually has a lot of unauthenticated detection mechanisms. Tenable is a good example of a company that does a lot of really good unauthenticated detection. And then there's a lot of other tools that are just really bad at unauthenticated scanning and just don't look for these kinds of non-CVE exposures.
[Kyser Clark]
Yeah. So you've spoken about the gaps between compliance and actual security. If you could rewrite one rule or principle that the industry tends to get wrong, what would it be?
[HD Moore]
There's this concept that if your server or service exposes a version number to the internet or to the network, that's a bad thing. And it should be a vulnerability. And that's something I'm like working with a bunch of folks to help change.
We feel like, you know, every service should be advertising its entire software bill of materials. Like you should be able to go to any web server and say, give me a list of everything on this machine, package-wise, build-wise, library-wise, for any application. So, like, I would recommend that we have a new .well-known endpoint
that's effectively an SBOM dump for whatever the application is. A lot of folks will say, but no, the hackers will basically use this stuff to target machines. And my take is, like, they don't care.
They're going to throw the exploit at it, whether they know the version or not. So you're not slowing anybody down on the attacker side by hiding your version number. You're just making it much more difficult for the defenders to know what to go patch.
So if I had one recommendation, and it's kind of counter to existing guidance, it's that if you're a software developer, you should do everything you can to expose your software version, your build date, and as much of your SBOM as possible directly to the network, to allow security tools to scan for it. Otherwise your users will let these things rot and not know that they're falling really far behind.
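The .well-known SBOM endpoint HD describes does not exist as a standard yet; as a rough sketch of the idea, a service could publish its component list as JSON at such a path. Everything below is hypothetical: the endpoint path, the field names, and the component versions are made up for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// sbomEntry describes one software component. The field names are
// illustrative, not a published schema.
type sbomEntry struct {
	Name    string `json:"name"`
	Version string `json:"version"`
}

// renderSBOM serializes the component list a server could expose
// for security scanners to fetch and diff against known advisories.
func renderSBOM(entries []sbomEntry) (string, error) {
	b, err := json.Marshal(entries)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

// sbomHandler could be registered at the hypothetical endpoint with:
//   http.HandleFunc("/.well-known/sbom", sbomHandler)
func sbomHandler(w http.ResponseWriter, r *http.Request) {
	doc, err := renderSBOM([]sbomEntry{
		{Name: "nginx", Version: "1.25.3"},   // example components only
		{Name: "openssl", Version: "3.0.13"},
	})
	if err != nil {
		http.Error(w, "sbom unavailable", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprint(w, doc)
}

func main() {
	// Print the document instead of serving it; this is a sketch.
	doc, err := renderSBOM([]sbomEntry{{Name: "nginx", Version: "1.25.3"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(doc)
}
```

The point of a fixed, well-known path is exactly what HD argues for: a defender's scanner can fetch one URL per host and know what to patch, with no agent required.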
[Kyser Clark]
So you're saying expose the version numbers in the software, but what about someone who is using these tools and putting these servers up? Should they be hiding their version numbers?
[HD Moore]
Yeah, there are actually CVEs issued for tools that exposed their version. And it's like, that's not the point, right? If anything, we want every service to tell us exactly what it is.
I'd love every machine on the network to say, here's exactly what OS I'm running and at which patch level, in a way that you can gather quickly and make a decision about. Instead, we've done all this work to hide the version numbers, to obfuscate what specific software we're running. You see a lot of new web servers that strip out the Server headers. You don't know whether it's Go-based or Nginx or whatever. They do that for security, but it doesn't really improve security. It just makes it harder for you to know that, hey, there's a new Nginx vulnerability.
What do I go fix? You're not going to know that it's there unless you have an agent on the machine or some other access to that system. And where that all falls apart is when you get a vendor appliance or a Docker container where you don't have the source code, you're basically just getting kind of an opaque binary.
You don't know what the versions are in that software. You just have some service you run. So unless the vendor is telling you, unless there's some way to pull it out, it's really difficult for you to know that this machine is vulnerable without an attacker coming by and popping the machine.
And that's how you find out.
[Kyser Clark]
That's an interesting take. That's probably a hot take a little bit, because I mean, as a pentester, like I've definitely reported on, hey, your version number is exposed here. You should definitely take that off.
But I can see your argument. I can see like, because let's say that you have a vulnerable version and you hide it. And that doesn't change the fact that you're still vulnerable to it.
But now if you get a pentest from someone like me, and I don't see the version number, I can't tell you that you're vulnerable, because you hid it. So I can see your argument there. It's an interesting take that I've never heard anyone else make. So I appreciate that.
[HD Moore]
Something we noticed, if you look at all the honeypot logs out there, at what GreyNoise publishes or SANS DShield or any of those folks, is that attackers are not looking for version numbers. They're just throwing the exploit directly at the endpoint. So it doesn't matter what your version is.
They're going to try them all anyway. So it doesn't really save you any time to hide the version in that sense. You're just making things harder for the defenders.
[Kyser Clark]
Interesting.
[HD Moore]
Wow.
[Kyser Clark]
That's a bomb drop there. So, I mean, are there other people that have the same mentality as you on that? Because that's the first time I've heard it.
[HD Moore]
Yeah, there's a group within a Western government that's trying to publish a paper on this, saying, here are the reasons why it's actually better to show your version than not. That's still going through review, but hopefully it'll become a thing, a campaign we can champion, once it goes public. I think it's fair to talk about it now, because I do think it's an important point, but hopefully this paper will get approved and released soon. And then we can go out there en masse and say, hey, everyone, we really want you to expose your versions to the network, and here's why: it helps your defenders, and it doesn't make things any easier for your attackers.
[Kyser Clark]
Nice. Interesting. Well, yeah, that's the first time I've heard that, and I will definitely think about it as I'm doing future work for my clients.
[HD Moore]
And I've been in the same boat, right? I've done pentest reports where we've reported a version exposure and said, oh no, an attacker can use this. And now that I've been in the software space long enough, I'm like, you know what, I don't think that's useful anymore.
[Kyser Clark]
Yeah. So after decades of shaping how people approach hacking and security, what still excites you the most when you sit down and work on something new today?
[HD Moore]
I'm kind of the same old, same old, right? I like discovering new stuff. I like finding things that other folks haven't seen.
One of my pet projects is a big honeynet. I've got a bunch of traffic coming in from a big network segment that goes directly to my lab, and I see what comes in.
One of my favorite pastimes: I see a weird probe come in over TCP or UDP, something like a strange web request, and I don't know what it is. You can do it the hard way and go research it, or you can take the exact same request, go scan the internet with it, and see what comes back.
So that's been my fun way of scooping zero-day from random attackers: look for the things coming into the honeynet, turn around and scan the internet with them, and then go figure out what responded to the thing they were scanning for. And it's kind of fun, because you see a lot of stuff well before it ends up in a conference talk.
[Kyser Clark]
And on the opposite side of that coin, what is your biggest concern in the InfoSec world nowadays?
[HD Moore]
It feels like we're not valuing expertise as much. There are a lot of teams that are very tool-based, as opposed to researcher-based, hands-on based. You can be an expert at vulnerability management, but that typically means being an expert at a set of tools, not at doing actual vulnerability detection yourself, or writing your own stuff, or doing the investigation.
So in a lot of ways, the security industry has been codified around being tool operators, not around hands-on engineers who do this work themselves. It's probably a weird way to frame it, but I worry that we end up with a lot of folks who can't do anything unless they're given a tool. I really feel like everyone should be using tools where they can; the idea is that you should be able to lean on those tools to go further and to find the things that the tools can't find themselves.
[Kyser Clark]
Yeah, I can see the concern there. I mean, as someone who's been pentesting for a while now, I did some training before I got my first job, and it is tool heavy. Every training that you do is tool, tool, tool, tool. And then the advanced courses are the ones that teach you how to do it manually. You don't really learn the real nitty-gritty manual stuff until you get to the expert level, and I can definitely see how that's a problem.
[HD Moore]
Yeah, I wish we started the other way around and said, okay, step one, figure out how your keyboard works. And step two, here's Wireshark, look at the network. Step three, now let's talk about exploits and connections, right?
But like, let's get the basics of like programming, electronics, networking out of the way first, and then build on it. For right now, it's very much, I'm gonna use a tool that gets me 90% of the way there. But when that tool doesn't happen to work for my specific use case, like I'm stuck unless I go do all this other advanced training, right?
[Kyser Clark]
Yeah, 100%. I mean, that's something I'm struggling with right now. I know all these tools, and then when they don't work, I feel like I'm bashing my head against a brick wall. Now I've got to dive deep and figure out how to do this without the tool, and it's so frustrating, because it messes up my routine, because I get in a flow. Like I said, the tools definitely spoiled us, and you're absolutely right.
So with open source communities, you talked about them being both terrifying and exciting. What lessons from leading Metasploit still guide how you build teams and communicate around tools today?
[HD Moore]
Open source is kind of both terrifying and exciting. From the perspective of running a project, you're going to get so much criticism for anything you do. You're basically putting everything you're working on out there for a critical eye from the entire internet, right? And especially if you put out something like a security tool, there's nothing security people like to do more than criticize other people's stuff, right? It's very much a dog pile of, this is wrong for these ten reasons, because that's what we do all day: we find problems with stuff. So if there's a new security tool, people are going to find all the problems with it first, and then maybe say, oh, it's actually useful. But open source is great.
I feel like the only way we're going to level up the field, the only way we're going to build the capabilities, is to go build these tools. For a long time, the open source tools were actually the cutting edge of what was happening in security. And now we've seen it fall back a little bit. Even Metasploit, as of about 2006 or 2007, went from being the folks who built and invented the techniques in the first place to just trying to keep up with what the attackers are doing in the wild. We went from, here's a new exploit technique, to, crud, here's a technique being used in the wild, let's go put that into Metasploit, instead of having built and invented it ourselves.
So we definitely saw a flip there. And these days we're very much attacker heavy. Attackers are the ones who are actually coming up with new techniques.
We're finding new ways to break into stuff. Like the stupid click fix thing, right? It's the dumbest one to build in the world.
Like you tell someone to hit control R and then paste a PowerShell string and hit enter. And yet it's incredibly effective. And somehow as pentesters, we were not doing this to people 10 years ago.
It did not occur to any of us that it would be obvious or not obvious to somebody that that would be a bad thing to do. So things like that, I feel like the attackers are really pushing the innovation still in offense right now. And I would love it for it to be kind of the other way around where more folks in the industry are pushing it.
And I think the way you do that is through tools and collaboration and trying to constantly up-level your tools through feedback and pull requests and so on.
[Kyser Clark]
Yeah, that's a very interesting point. And why do you think that is? Why do you think the hackers, the criminal hackers are on the bleeding edge?
Like why are they innovating more than the white hat hackers and pentesters?
[HD Moore]
I'm sure it's just money, right? You get paid a lot better to be a criminal, if you can ransomware somebody or double-extort them, than just doing your day job someplace. We saw that recently: there were three individuals, two of whom were working as ransomware negotiators for legitimate companies while also running a ransomware gang on the side, right?
So that's just kind of how things are. Same with exploit development. We've gone from like people sharing exploits because they're cool and they like to show them off to now selling them.
And if you're not selling to ZDI, you're selling to another broker, or you're like that gentleman at L3Harris Trenchant, selling to a Russian exploit broker. The money is just crazy these days, even if you're not doing the attack yourself, just for building the tools and selling them. And I think that's pushed all the incentives toward keeping your tools close to your chest and not sharing exploits.
Fortunately, there's still a lot of folks who are publishing full details about vulnerabilities they find, but in a lot of ways it's gotten hyper-commercialized because of how valuable this stuff has become.
[Kyser Clark]
Yeah, it makes sense. I mean, money is a huge incentive in everything. So it makes sense.
So H.D., we're running out of time. And I got the final question here. Do you have any additional cybersecurity hot takes or hidden wisdom you'd like to share with the audience?
[HD Moore]
I guess the one I've been thinking about for 15 years now is that Python still sucks. We didn't use it for Metasploit for good reason. I still hate the fact that everything around AI or machine learning or scientific computing is Python-based. I think the entire venv thing is an awful, awful system. Every time I have to touch Python, I regret it immediately, whether I'm reading the code or modifying the code. It's the only language where you literally can't follow the code without the indentation and whitespace, right? So my hot take is: Python's still awful. Please pick something else.
[Kyser Clark]
Yeah, I'm so glad you said that because as I was walking around today doing my workout, I was thinking about this interview. And I was wondering, why is Metasploit in Ruby? Now I know.
Mystery solved.
[HD Moore]
So at the time, Ruby was almost completely unknown. I think Rails had just started, or wasn't yet a thing. The challenge was that there were two commercial projects, Core Impact and Immunity Canvas, that were starting around the same time as Metasploit. And in the case of Core, they actually beat us to it; they had a product out before we even started working on Metasploit. I was an early beta user of Core Impact, and it was all written in Python. So with Metasploit, we got a lot of criticism early on saying, oh, you're just stealing code from these other people. One way to prove that we were not doing that was to pick a totally different language. So Metasploit was originally written in Perl and later ported to Ruby.
But in doing so, I still think those languages were the better choice for what we were trying to do at the time; the class structures and mixins and all that were a much better fit. These days, a lot of people are excited about using things like Rust. I'm too old and too dumb for that, and I'm looking to use Go because it's much simpler and I can get my job done. I think the Go security tooling ecosystem is amazing. There are projects out there like Nuclei, Subfinder, and Amass, all kinds of really cool Go-based security tools. So if you want to get into building your own tools, even though Rust is probably the trendier choice, I think Go is still the better language these days for working within that ecosystem.
[Kyser Clark]
Thank you for unpacking that, HD. And thanks for being on the show and sharing your expertise, your insights, and your opinions. I really do appreciate it.
Where can the audience get a hold of you if they want to connect with you?
[HD Moore]
You can find all my stuff at hdm.io. It's got everything from my Signal contact to email, LinkedIn, and all that. There are a lot of impersonation scams that pretend to be me and offer to recover Bitcoin, so I had to put a whole thing on the website saying, hey, I'm not recovering your Bitcoin; if someone says they're me, they're a scammer, and so on. So yeah, it's really important to have all your identity verification in one place these days.
[Kyser Clark]
And audience, the best way to reach me is to drop a YouTube comment. I'll see it, and if you ask a question, I will answer it. And if it's a good question, I'll feature it on one of my Q&A episodes. Audio listeners, if you're on Spotify, Apple Podcasts, or another podcast platform, do me a favor: rate the show five stars; it helps the show out a ton. And share it with your friends. If you're on YouTube, hit the like button and subscribe for more cybersecurity content. This is Kyser and HD signing off.