TLP - The Digital Forensics Podcast

Episode 11 - Velociraptor, Containerisation and Infrastructure Deployed as Code with Myles Agnew

Clint Marsden Season 1 Episode 11

Send us a text

In this episode of Traffic Light Protocol, we sit down with Myles, a cybersecurity veteran with over 15 years of cyber experience and a background as a Combat Engineer in the Army. Myles brings his unique perspective on integrating automation and cloud technologies into cybersecurity infrastructure deployment, specifically when deploying Velociraptor, an advanced open-source endpoint monitoring, digital forensics and incident response platform.

We delve into his journey from the military to his current role deploying and managing advanced cloud infrastructure using Docker containers and Kubernetes orchestration platforms.

Quotes from Myles:

  • "My time in the Army taught me the value of precision and strategy, which I now apply to cybersecurity."
  • "Cloud environments offer flexibility, but they also demand a new level of vigilance and control."
  • "With containerization, we're not just deploying applications; we're creating a more secure and manageable environment."

Key takeaways:

  • Strategic Integration: Integrating automation and cloud technologies can significantly enhance both the efficiency and effectiveness of cybersecurity practices.

  • Proactive Security Measures: Shifting from reactive to proactive security strategies is essential for staying ahead of emerging threats.
  • Cloud Security Fundamentals: Understanding the fundamentals of containerization and orchestration is crucial for maintaining a secure cloud environment.
  • Efficiency Through Automation: Automation not only speeds up response times but also reduces the likelihood of human error in security processes.
  • Vigilance in Cloud Environments: While cloud technologies offer numerous benefits, they also introduce new security challenges that require continuous vigilance and adaptation.
  • Role of Military Experience: Insights gained from military experience can offer valuable perspectives on discipline, strategy, and precision in cybersecurity practices.
  • Future Trends: Keeping up with trends in automation and cloud security will be key to adapting to future cybersecurity challenges.

Links and resources:

Contact Myles

Website: MylesAgnew.com
Github: https://github.com/mylesagnew

ASD threat intel:
https://www.asd.gov.au/about/what-we-do/cyber-security

Tools:
Cuckoo Sandbox- https://github.com/cuckoosandbox
Wordfence - Available in WordPress plugins
WPScan (on Kali Linux) for scanning your own WordPress site for vulnerabilities
YARA-Signator: https://github.com/fxb-cocacoding/yara-signator

(0:01 - 2:39)
Yeah, well, it's great to have you here, mate. I just wanted to kick off with what brings you here today. So, yeah, obviously, I'm in cyber just like you. 

I'm on the blue team side. I'm mainly based around infrastructure and engineering. And my background is network engineering.

So I've done a lot of networking and cloud infrastructure. So I've been there from the old days where you had metal and then virtualization, and now containerization and cloud services and microservices that everyone's talking about. Do you ever miss the old days? Uh, no, because, you know, restoring a system with 50 floppies is a lot harder than, you know, plugging a USB in and just running a bit of code and walking away and it's, you know, done in like 35 minutes, 40 minutes. 

It used to take hours and hours to get the end point ready to use. Like, are you serious? There were 50 floppies to do a restoration? Um, back in the old days, yeah. So you had Windows, which was around about 30 floppies, and then any apps that you had. 

So, yeah, it was a long process. You wouldn't just, you know, plug a USB in and get it done. CDs obviously helped. 

When they came out, you'd plug in a floppy, boot off a floppy, and then, you know, install from CDs. But you'd still have install CDs. So you'd have one for your operating system, one for your office package, and then your corporate apps. 

Is that why, is all the automation stuff that you do these days kind of born from, I never want to see another floppy disk again? Yeah. So it's all about time, right? Time's probably the only thing that we can't replace. So the quicker we can make or do something, the better we are; we can work on more complex problems rather than, you know, the low-level ones.

For example, like that CrowdStrike outage with the blue screen of death, you know, that should never have happened, but to recover from that, it's like 15 minutes' work. Back in the old days, it wouldn't have been that. It would have been a lot more, you know, going through the floppies and starting all over again.

So we've come a long way. Yeah. Well, we have, um, I heard something about, uh, a method of recovery for the CrowdStrike thing where I think people were trying to put BitLocker keys in and someone said, use a barcode scanner to basically use it as a keyboard.

(2:40 - 3:12)
Is that, um, is that what you mean with the 15-minute recovery option, or is there something else? Yeah. So normally you wouldn't go into BitLocker recovery mode; that would only happen in large enterprises, and large enterprises have other methods of deploying software. So they can use other tools to recover, but most small businesses would have been able to just boot into safe mode and delete the file that was causing the issue and then recover that way, in about 15 minutes' work.

(3:12 - 3:44)
And that's what generally most people were told to do. Barcode method. Yes. 

That's awesome for inputting a BitLocker key because, um, if you've ever done that before, I know I have a couple of times. It is very frustrating. It's, uh, it's painful. 

And if you've been working long hours, like you, I don't know, the text kind of jumbles and sometimes I'll put in like two numbers the wrong way. I'll do it with phone calls as well. Like if I'm dialing a number for the three and a seven instead of a seven and a three and I'm like, Oh, why is this not connecting? Easy to do.

(3:45 - 6:25)
Um, so is that, is that something that, that you love working on? Like solving those, those complex problems? Yeah. And also preventing them, like having the right tool set to go, all right, we've seen this happen on a percentage of our fleet. How do we stop it happening to the rest of the fleet? And there's, you know, multiple tools, uh, coming back to that, uh, CrowdStrike one, you could have used, uh, app control and then just sent the hash to all your app control agents and said, block this file from downloading or executing. 

That's another way around. Uh, there's, you know, so having that diversity in your technology stack is really important. I know a lot of companies are very monolithic with the Microsoft stack or the open source.

I come from an open source background. So I've got a mix of both: proprietary when you work for large enterprise and government, and then obviously the open source community as well. It's really important to have that, um, diversity.

Um, I'm interested in, in your open source background. That's, that's not something I knew about you. Is that, were you contributing to projects or you've used it a lot? Yeah. 

So WordPress is probably one of the main projects that I, um, help with, uh, not in writing code, but actually providing security advice, um, mainly around the way it's installed and the way the app's deployed, and also, uh, helping, uh, security vendors develop, uh, products. So for example, one of the greatest, um, security, uh, apps in WordPress land is Wordfence, and having threat intelligence in your Wordfence is really awesome. And that helps, uh, reduce your risk.

Um, sorry, I jumped ahead there. I got Wordfence installed on my WordPress install. It was like one of the first things I did.

Cause I've done some IR jobs before where that was used as a bit of a recovery phase. I think we're doing some scanning to find some web shells, which was, which was pretty cool. What, um, what is that threat Intel piece on, on WordFence doing for you? So it basically gives you a, like a block list, uh, what to block from your websites. 

So that's really cool, especially around your admin page, because what usually happens is they try and, um, get password resets and just clog up your whole site with a denial of service. Uh, and what Wordfence does is, you know, shut that down fairly quickly and also feeds the threat intelligence into their repo, where they deploy it to other Wordfence sensors. So it's, um, yeah, a really great little, um, plugin that I highly recommend people use.

(6:25 - 6:54)
And it's, it's WordFence is free, right? There's no upgrade. Correct. Yeah. 

Yeah. You can buy the pro version. So the pro version has some extra functionality like 2FA and some really good, um, uh, scanning capability and some recovery capability. 

So like, it doesn't allow modification of files. You can say, lock these particular files, or you can say, I want to see what has changed between this version and this version and I'll show you. So it's, yeah, it's quite clever.

(6:55 - 10:22)
Yeah, that's, that's a good recommendation. I might do that myself. I was kind of on the fence, you know, starting a new blog, there's obviously like so many costs and you just go, oh, I might just do the cheap option, but I think it's paying off because I had, I think someone was trying to DDoS the site and also just try and get in with password spraying for the admin account for a while. 

I get an email every night from Wordfence saying, yeah, we've blocked these IPs and we're keeping these bad guys out. So it's handy because it's just such a, it's the wild West out there. Oh, absolutely.

So I've even pivoted slightly even further than that. So I use WordPress to generate my content and then I create static HTML, deploy it to a cloud instance and then use APIs for any functionality that I want. And those APIs are locked down to only perform a certain function. 

So you can reduce your attack vector down to, you know, very minuscule rate by doing that. I love that. That is so over the top and I want to dig into it, right? So can you go back a few steps here? WordPress generates the content. 

You're still typing it. You're, you're not using a, an AI function of WordPress. Okay. 

So I have a Docker container on my Windows machine that runs WordPress, and it has a plugin there called Static HTML, and it spits out the HTML. And then for the functions, like the form functions, I use Typeform, which can be an API or you can directly link to the Typeform, and you just upload that to your cloud provider, which could be any of them. At the moment I'm using Google, testing Google out and their platform, and all the HTML files sit in there and people connect to it and, yeah, it just works like a normal WordPress site, except it's all static HTML. So there's no login, there's no RPC, RSS stuff as well.
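A minimal sketch of that local authoring setup, assuming the official `wordpress` and `mysql` images; the port binding and passwords here are placeholders, not the actual configuration described in the episode:

```yaml
# docker-compose.yml - local-only WordPress authoring container.
# The site is never exposed publicly; a static-HTML export plugin
# generates the files that get uploaded to the cloud bucket.
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp
      MYSQL_PASSWORD: example-only        # local lab value, not a real secret
      MYSQL_ROOT_PASSWORD: example-only
    volumes:
      - db_data:/var/lib/mysql
  wordpress:
    image: wordpress:latest
    ports:
      - "127.0.0.1:8080:80"               # bind to localhost only
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp
      WORDPRESS_DB_PASSWORD: example-only
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data:
```

Because the container only listens on the loopback interface, nothing on the network can ever reach the WordPress admin page; only the exported static HTML leaves the machine.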

It's got the site map, very simple, and yeah, it just works. Has that changed things? I don't know if you're doing what I do on the weekend; I'd sometimes get a bit curious and go through the logs when I should be taking some, you know, some downtime, but I can't help myself. Is that reducing those attacks that you might've been getting previously? So really the only logs you see are the objects getting accessed from the IP addresses, and that's it.

That's all the logs you see. Everything else, the interactions and the posts, are done on third parties like Typeform, or you can have other APIs to other apps. I think ActiveCampaign's another one that I use; it's a commercial list, but yeah, you can integrate it with a whole bunch of things, like Google Forms if you wanted to.

There's, yeah, there's nothing stopping you doing that. If you did have an incident, and I'm not, we're not trying to show all of the vulnerabilities of your setup, but where would the artifacts be, and what would you have to look at to dig into it? So to dig into it, there'd be only two places. So you'd look at the third-party app that I'm using, like Typeform or ActiveCampaign, and then the logs in GCP itself.

(10:22 - 11:50)
And it would tell me, you know, if someone's got hold of my, you know, bucket or is misusing my keys; that's the other thing. API keys are obviously vulnerable. You obviously want to have them under control and have alerts.

For example, if someone's entering forms on your website, you have captcha controls, but you also have a backend control for volume. So if, you know, you get more than five per second, you should slow the volume down and, you know, prompt for more captchas or some other prompt to slow it down. Yeah, there's quite a few ways of doing that, but yeah, the artifacts that you're looking for for response would be in the Google Cloud logs and the application that you're using.
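The backend volume control described here, where more than roughly five submissions per second triggers a slowdown, can be sketched as a token bucket; the class and method names below are hypothetical, not from any library mentioned in the episode:

```python
import time

class FormRateLimiter:
    """Token bucket: allow at most `rate` form submissions per second.

    Sketch of the backend volume control described above; in production
    this would sit behind the form API and trigger an extra captcha
    instead of simply rejecting the request.
    """
    def __init__(self, rate=5.0, burst=5, now=None):
        self.rate = rate                  # tokens refilled per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens based on elapsed time, capped at capacity
        elapsed = max(0.0, now - self.last)
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                      # caller should slow down / re-captcha

limiter = FormRateLimiter(rate=5.0, burst=5, now=100.0)
# Six submissions in the same instant: the sixth exceeds the burst
results = [limiter.allow(now=100.0) for _ in range(6)]
print(results)  # [True, True, True, True, True, False]
```

After 0.2 seconds the bucket refills one token (0.2 s x 5/s), so the next submission is allowed again.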

And then you'd combine them together to get an understanding of what's occurred. So we just pull them down and then throw them into some other analysis tool. Would you be able to parse them into something like ELK, or could you just use the CSVs? Yeah, yeah.

They can go into Splunk, Sentinel. There's nothing fancy about these logs. They're in CEF, a common format that, you know, all vendors use, depending on the app.

Third-party apps may not provide that level, but generally they'll provide it like a CSV that you can export and that'll give you some detail. Yeah, nice. That does make it, yeah, it does make it easy.
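For context, CEF (Common Event Format) lines have a pipe-delimited header followed by key=value extensions, and a minimal parser is only a few lines; the sample event below is made up, and this sketch ignores the escaping rules that full CEF allows:

```python
def parse_cef(line):
    """Parse a CEF (Common Event Format) line into header fields plus extension.

    Minimal sketch: ignores CEF's escaping of '|' and '=', and assumes
    extension values contain no spaces.
    """
    if not line.startswith("CEF:"):
        raise ValueError("not a CEF line")
    # 7 splits: version + 6 header fields + the free-form extension
    parts = line[4:].split("|", 7)
    version, vendor, product, dev_version, sig_id, name, severity = parts[:7]
    extension = parts[7] if len(parts) > 7 else ""
    fields = {}
    for pair in extension.split():
        if "=" in pair:
            key, _, value = pair.partition("=")
            fields[key] = value
    return {
        "version": version, "vendor": vendor, "product": product,
        "device_version": dev_version, "signature_id": sig_id,
        "name": name, "severity": severity, "extension": fields,
    }

# Made-up sample event in CEF shape
sample = "CEF:0|ExampleVendor|WAF|1.0|100|Blocked admin login|7|src=203.0.113.9 act=block"
event = parse_cef(sample)
print(event["name"], event["extension"]["src"])  # Blocked admin login 203.0.113.9
```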

(11:53 - 17:01)
I want to take it a bit more, a bit more left field. What do you reckon some of the tools that you're using right now are indispensable for your job? Yeah, so around my job, infrastructure as code is the core of what I do. So Terraform, Kubernetes, Docker containers. 

So having systems already pre-built with a lot of the base software is really important. A lot of software vendors, such as Velociraptor, offer containerization in Docker containers. So you can use those and then orchestrate the whole deployment, whether it be Kubernetes and then have Terraform deploy the Kubernetes cluster with the Velociraptor inside, with the whole deployment across, you know, multiple cloud providers. 
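An illustrative Terraform fragment in the spirit of that workflow, using the HashiCorp Kubernetes provider; the resource layout, namespace, ports, and image variable are assumptions for the sketch, not Velociraptor's documented deployment:

```hcl
# main.tf - illustrative only: deploy a containerized Velociraptor server
# into whichever Kubernetes cluster the kubeconfig points at.
provider "kubernetes" {
  config_path = var.kubeconfig_path   # swap per cloud provider / engagement
}

resource "kubernetes_namespace" "dfir" {
  metadata { name = "velociraptor" }
}

resource "kubernetes_deployment" "velociraptor" {
  metadata {
    name      = "velociraptor-server"
    namespace = kubernetes_namespace.dfir.metadata[0].name
  }
  spec {
    replicas = 1
    selector { match_labels = { app = "velociraptor" } }
    template {
      metadata { labels = { app = "velociraptor" } }
      spec {
        container {
          name  = "velociraptor"
          image = var.velociraptor_image   # e.g. an image you built yourself
          port { container_port = 8000 }   # client frontend port (assumed)
        }
      }
    }
  }
}
```

Because only the provider block is cloud-specific, pointing `kubeconfig_path` at a different cluster is the lift-and-shift Myles describes.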

And that's the ultimate right, not to be locked into a particular vendor, having the ability of lifting and shifting that capability from any cloud provider you need. Absolutely. Yeah, they're the core products that I live with every day. 

Obviously use the three different cloud vendors, Azure, AWS and GCP, but mainly I live in Azure space for enterprise. Okay. My ears perked up when you mentioned Velociraptor. 

Tell me more about what you're doing in that space. Yeah, so it's a DFIR tool, for those that don't know, is an open source tool sponsored by Rapid7 at the moment. And it's basically a tool that allows the DFIR teams to get artifacts from endpoints. 

What I provide is a deployment method. So in cloud, in my previous roles, I've set up Terraform scripts that deploy into cloud providers and then connect into, whether it be, an EDR tool or an application deployment system. So with an EDR tool like CrowdStrike, you could use it to deploy Velociraptor or Cortex.

Another tool you can use is Intune. You can deploy it via Intune, and you basically deploy it across your fleet and then have that data sucked up into the cloud and sit in buckets, and then run your queries across those buckets to get an understanding of what's going on. That's the high-level concept of the deployment.

And yeah, my role is to obviously deploy those mechanisms to allow the DFIR team to get in there and run their custom queries and start looking for artifacts. I think that being able to deploy it quickly and kind of reliably and at a script level is the best thing ever. Are you a fan of creating a new environment for every new engagement? Yes, 100%. 

And also some enterprises will block the native port. So running it on 443, obviously using different cryptography techniques per engagement as well. So you're not reusing the same credentials or the same mechanisms. 

The only thing that is the same is the backend infrastructure as far as it's set up in a particular way. But yeah, it's all fresh every time. How does that kind of get, does that get communicated to you when you build it? It gives you all of the creds that are needed and certs? Yeah, I've got that all built in as part of my process. 

So when I write deployment code, I use a two-step method. And if anyone's checked out my GitHub, I do it in all programming languages. What is your GitHub by the way? It's just called Myles Agnew, M-Y-L-E-S-A-G-N-E-W. 

You can look it up, and I drop my PowerShell scripts and a few other scripts in there. But I use this methodology because I come from a security background: I have all the functions in the main script, and then I use an environmental variable file to put in secrets and stuff like that. So I put all the variables in this secure file.

And when I deploy on GitHub, I just share the generic details in there and not secrets and all that good stuff. But this enables me to write code and change code as necessary because sometimes you might deploy in a different subnet or a different provider or whatever. So all those variables are kept in that variable file and I just change them as I need to so I can reuse my code. 
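The two-step pattern just described, a generic script plus a gitignored variables file, might look like this minimal sketch; the file name and keys are illustrative, not Myles's actual layout:

```python
import os

def load_env_file(path):
    """Load KEY=VALUE pairs from a gitignored variables file.

    Sketch of the two-step pattern described above: the deployment logic
    stays generic and shareable on GitHub, while everything
    engagement-specific (subnets, providers, secrets) lives in this file
    and never gets committed.
    """
    settings = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue                      # skip blanks and comments
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    # Real environment variables win over file values
    settings.update({k: v for k, v in os.environ.items() if k in settings})
    return settings

# Example engagement file (would be listed in .gitignore)
with open("deploy.env", "w") as fh:
    fh.write('SUBNET_CIDR="10.0.1.0/24"\n'
             '# per-engagement secret, rotated every deployment\n'
             'VELO_API_KEY="example-only"\n')

cfg = load_env_file("deploy.env")
print(cfg["SUBNET_CIDR"])  # 10.0.1.0/24
```

Changing subnet, provider, or credentials per engagement then means editing only `deploy.env`, so the main script is write once, use many.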

So I'm one of those fans that write once, use many. And to save myself from being breached or causing security concerns, I only use that file for the credentials and stuff like that in that file. I'm still learning. 

Everyone that's in this industry is always on a learning curve. So I'm still learning how to use key vaults and passing crypto creds and all that sort of jazz in the automation as well. But at the moment, this is how I do it.

(17:01 - 17:50)
Yeah, that was going to be my next question. If someone was to deploy this themselves, what steps do they need to take to make sure that if their device was compromised by something like Raccoon or RedLine Stealer or Vidar, or someone stumbles across their device, how should they be protecting all that cryptographic material, like passwords, right? That's what we're talking about here. Yeah, absolutely.

If you're aware you've been compromised, obviously start rolling all your credentials on a different device. But yeah, a lot of people use different password managers and key vaults and such. So that's another methodology.

(17:50 - 19:15)
But yeah, if you've been compromised, the only option is to reroll everything. I personally use YubiKeys as part of my cryptography. So my password manager, I've got two password managers. 

I've got one in the cloud and then one on-prem. So the on-prem is for all my lab stuff and they both use YubiKeys. And as part of my master backup plan, I've got a spare YubiKey and that's in a safe where if something happened to me or something happened to my YubiKey, they could get the YubiKey and if they knew my credentials, they could open up a password vault. 

Mate, you have covered all the bases. I feel like you're scripting everything, you're making it easy, you've got DR plans in place. Like, is there anything that you haven't thought about in advance? It's usually trial and error. 

Something's happened in my life that sent me down this path. So why I got into cyber, I think it was 2003, 2004, one of my websites was hacked by a Brazilian hacking group and defaced and all that sort of stuff. So that's sent me down a magical path and I'm still on that path to this day to make sure that sort of thing doesn't happen, but also having that when stuff really goes wrong, that you have the capability of standing yourself up again, because that's the most important part.

(19:15 - 20:55)
Getting knocked down, everyone's going to get hacked eventually. It's how quickly you can get up and keep moving forward that's the key goal. I feel like that has the origin of a Rocky quote. It is.

Absolutely. Go the blue team. Yeah. 

And that's what it's all about, right? And I've dealt with red team most of my career and they're really good at what they do. And it is so much easier to destroy something than it is to build something. I remember early in my career in the military, I used to blow stuff up. 

And then when you blow a bridge up, for example, they turn around and make you build the bridge again. So there's a lot more hard work in building something than in destroying it. So that's why I focus on the building side rather than the destroying side.

That's a very balanced approach then. Yeah. Well, you understand how simple one mistake, I mean, we saw it with the CrowdStrike, it was just a simple mistake, but it caused so much havoc. 

And if you have these certain controls in place and that doesn't impact the majority of people and having that understanding is really important. Yeah. Because that's a good point, right? When we're building things, especially in an IR capability or like a forensic capability, it's really easy to just focus on the solution of I've got a problem that I need to answer this question right now. 

I need to complete this investigation right now. And it's easy to take shortcuts. 100%.

(20:56 - 21:51)
Yeah. So it leads me into the, I don't know if you've heard of this, hierarchy of competency. Is this where you start as unconsciously incompetent? Yes.

Yeah. And then you move up the chain. Correct. 

And then conscious incompetent, conscious competent, and unconscious competent. So the best way I describe it, because I was born in a town called Bathurst in New South Wales and we had V8 Supercars on Mount Panorama, unconscious competence I always refer to as driving.

You can jump in a car, you don't think about it, you put your indicator on, you drive, and it just happens. Like you don't have to think about it because you've done it that many times. That's unconscious competence.

When you first started, you were obviously unconsciously incompetent, didn't know what an indicator was, didn't know steering. And then you obviously progressively grow up that chain. Yeah.

(21:52 - 22:41)
The ultimate goal is to get to unconscious competency, but not all things are capable of that, because especially in our field, things change so much. So what was true today might not be true tomorrow. Yeah.

It's like what you said before. We are always constantly learning. And I think as you said, if you're in cyber, if you're not learning, you will be left behind before you know it. 

Yeah. You'll be irrelevant fairly quickly. What's your favorite way of keeping on top of the latest in cyber and learning new things? Yeah. 

So I listen to a lot of podcasts and audio books. Cyberwire Daily I know is an American one. It's quite interesting.

(22:41 - 23:18)
I find that interesting for the news. I listen to a few technical ones from like AWS and GCP, security professionals and understanding what they're doing in their fields. And then obviously I pick out audio books relevant to what I'm doing at the moment. 

So at the moment, I'm doing a lot of cloud infrastructure sort of stuff. So yeah, I'm looking at audio books around that and mindset and personal development. You're just on the go. 

You're unstoppable. Yeah. Well, that's the thing.

(23:18 - 24:19)
If I'm kept busy, I'm less likely to sit around twiddling my thumbs. I've got a lot of little projects and the variety adds to understanding the stuff. So I do 3D printing as well. 

And I've got my own home lab. So I don't do everything in the cloud. I try and do a lot more in my home lab before deploying to the cloud. 

And it's one of those things. I'm one of those learners that learns a lot more from doing than watching. So I'm happy to watch like a YouTube video on how to do something, but I'll do it at the same time. 

That's generally how I operate and learn. That's the best way to move forward. I think if you can just consume content forever, but I think there's a difference between the content that is being released by content creators and what is involved in actually performing the work itself. 

There's stuff that might not be sexy, but you still have to do it. Yeah. And some of the stuff is missed.

(24:20 - 26:06)
Like for example, you might get basic content from a provider, but when they made the video and what it is now, the systems change and it doesn't match. So actually doing it is a lot better because you understand the whole process rather than just the little snippet that you just saw. Yeah. 

Well, that's the thing. And you don't understand what is missing until you were doing it yourself. And so I guess it's easy to become a little bit academic in, if we just are constantly reading and reading and reading, and I've fallen into this. 

I just thought, Oh, I don't have enough information. But then I did some training recently where I actually had so much more value straight after the course, just doing the labs over and over again, and just building that capability and then figuring out, Oh, that doesn't really line up. And that doesn't work how it worked in the class. 

And then kind of going, oh, why is that? It was like doing a guided CTF, I guess. Do you get into CTFs much yourself these days? I've done a couple. I've got friends where that's all they do is CTFs, but yeah, normally I look at building the infrastructure for CTFs.

Oh, nice. Okay. Yeah. 

Any, any ones you can talk about publicly? Mostly around WordPress because I've, you know, I understand that's the environment that I sort of really learnt in. Yeah. There's so many plugins and so many variables in that environment that you can create vulnerabilities that are so easy to manipulate and drop web shells into. 

And yeah, if you're deploying a capture the flag, yeah, use WordPress.

(26:07 - 26:18)
I don't think I've ever been to any capture the flag where they haven't had a variant of WordPress installed on something that has been compromised or has a flag in it. Yeah. That's a common thing.

(26:19 - 27:09)
I would love to get some ideas on how to build a CTF. Like what would be the dummies' guide to getting started? Yeah. So obviously the infrastructure: I use containerization or virtualization.

So Proxmox, it's fully open source, and you basically configure your services in there and have a network, and obviously plug it into either a wifi or an ethernet network and then allow your flag capturers into that network. And then they obviously can scan your infrastructure for vulnerabilities and start looking at capturing those flags. But there's quite a few different tool sets out there that offer and run in those virtual environments.

(27:10 - 29:59)
I've never deployed any of the tool sets. I always use like ones that I know of like WordPress like I stated before. So I'd fire up a container with WordPress that had the vulnerability and then I would test it using Kali Linux and do it that way. 

But yeah, there's quite a few capture the flags repos out there that have full VMs, full docker containers that you can just ingest into your lab and play around with. So that's, that's the best way to do it then just use something that's already containerized, already released on GitHub and then off you go, plug in and away you go. Yeah. 

That's the quickest way to learn how to do it. And then obviously build your own. The reason why I built my own is because my speciality was WordPress. 

So that's why I did it that way. But yeah, if I was to redo it again and, you know, build a more engaging system, that's what I would do. Did you do anything like building a scoreboard as well? Or is that, is that something that is available? I think that's something available. 

I've seen a few different tools. I recall that Splunk had something as well. You could have a scoreboard in Splunk. 

Done a lot of blue team training as well with Splunk and yeah, they've got some pretty cool scoreboarding tools. Yeah. I remember doing their like boss of the sock stuff. 

It's always what's it, what's their theme? Like there's some craft brewery in San Francisco or something. Yeah. Yeah. 

Yeah. That's the one. Yeah. 

They have a lot of 80s themes in their flags. So I remember the phone that got hacked used that Jenny song, 867-5309. It's stuck in my head. And yeah, that number was used.

So that's how I, yeah, it's quite hilarious. You talked a little bit about building those Velociraptor environments before. Have you got anything? I've seen it since Mike Cohen released it, I'd say five years ago, six years ago.

And it's come so far in that time. If you were doing, if someone was asking you, hey, with your experience with deploying it and building these automation scripts to get it out there, have you got anything that you would say, make sure that you do these things to make your life easier and these are some gotchas? Yeah, so the big gotcha is obviously firewalls and controls. So the native port that Velociraptor works on, you may need to change to 443 or port 80 and don't be afraid of using port 80 and then encrypting on that, but just double check that the web proxy or the firewall allows. 

Otherwise just use 443. On 443, would you use your internal CA to throw a cert on that, or would you buy one? Yeah, I always use self-signed between the two so no one can sort of intercept, because if someone gets hold of one of your certs, obviously they can do some incredible things with them. But what's on the endpoint is basically useless on its own.

It's like a public key: the private key's on the server, the public key's on the endpoint. So even if they popped an endpoint and got the key, the worst thing they could do to you is denial of service, just uploading a whole lot of garbage.

But other than that, yeah, security-wise, you're pretty safe. Using third parties is always interesting, but yeah, I always like to separate the infrastructure out. So when I do an enterprise deployment, I don't use any of the enterprise certificates or anything like that.

Always use other ones. And if you've got a web filter, make sure that it allows that particular CA or certificate through, because obviously the newer technologies, especially the cloud-based ones, can do SSL inspection and check the certificate validity. And I've seen people use Let's Encrypt or commercial CAs as well.

It's really up to the engagement, how they want to do that. But I prefer self-signed and knowing the serial of each end is a lot easier than having another third party in the middle. Just adds a bit more complexity and a bit of delay sometimes.

Yeah, that's it. The delay is probably the main thing. Usually when I get a call to deploy something like this, they want it yesterday, and I'm trying to deploy it in 30 minutes, 45 minutes. Yeah.

So that's just the server side, getting that all spun up. And then obviously you've got all the endpoint deployment, which, depending on how big and complex the environment is, can take a couple of days before you get a good 80 to 90% saturation. Yeah.

Okay. And that would not just be limited to cloud, right? I mean, would that be most of it, or is there on-prem? Yeah. On-prem, endpoint devices.

Yeah. You can do the orchestration piece both in cloud and on-prem. It really depends on the enterprise, but if you're like a small enterprise, like a little server, you could containerize it all in there and just deploy directly from there and run it all in one hit rather than using a cloud provider. 

Because there'd be plenty of people who are still using bare metal on-prem. It's easy for us, working in the cloud all the time, to go, oh, of course you're in the cloud, but there'd still be heaps of people who run local servers. Yeah. And you've got ones that have a whole bunch of test servers laying around doing nothing.

So you can repurpose those for these types of deployments and engagements as well. Yeah. Coming from an old bare-metal background, I still have my own labs that all run on metal.

And yeah, it is great because you can transfer between the two. It's not hard. It's just using the right platform, like Kubernetes or Docker, and moving those images across and using them in the right way.

Say you've got a small firm, and the IT guy who works there is pretty switched on. They have an incident; he's done some good training courses. He's done one about Velociraptor but never got around to deploying it.

What's their best option to get spun up locally? So the first thing I would do is go onto GitHub, look for the Velociraptor repo, and yeah, follow the instructions from that and download it and deploy that way. Would you say Docker or Kubernetes for them? Well, in the instruction set it's Docker; Kubernetes is more an orchestration layer. So it's technically different, but Kubernetes can basically take a Docker container and orchestrate a whole bunch of other components.

You still need Docker to provide the base image of the app and all the dependencies that are in there, whereas Kubernetes does a lot of the orchestration: the variables, like what provider, how much CPU, how much memory, all that sort of stuff. So it's slightly different, but yeah, you could do it that way, or you could just download it onto the raw machine and run it that way.

So it's really up to the individual. Oh yeah, that's right. You can run it, you can run it just from command line, from the downloaded directory. 
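For the Docker route mentioned above, here's a minimal sketch of what a containerised deployment might look like. The image name, ports and paths are assumptions for illustration, not the official instructions; check the Velociraptor repo for a maintained image:

```yaml
# Hypothetical docker-compose file; image name and paths are placeholders.
services:
  velociraptor:
    image: example/velociraptor:latest   # placeholder image name
    ports:
      - "8000:8000"   # client frontend (or remap to 443, as discussed)
      - "8889:8889"   # admin GUI
    volumes:
      - ./server.config.yaml:/etc/velociraptor/server.config.yaml:ro
      - velociraptor-data:/var/lib/velociraptor
volumes:
  velociraptor-data:
```

The same image can then be handed to Kubernetes for the orchestration side, which is the split described above: Docker supplies the app and its dependencies, Kubernetes supplies the CPU, memory and placement decisions.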

Yeah. Yeah. Yeah. 

So yeah. If anyone's starting, that's the first point, because that's how I first learned about Velociraptor: I downloaded it and installed it. And I've used other tools like osquery as well.

That's a great open source tool for getting data from your endpoints and understanding what your endpoints are doing. Is that giving you WMI-based information from Windows endpoints, or is it Linux as well? Yes. So it's Linux as well.

And I believe it's Mac too. So it just gives you a whole bunch of metrics, and you can obviously run queries across it, use it as a source of truth for what's going on in the endpoints. I know a lot of people, especially small enterprises, use osquery for vulnerability management.

So what happens is they get osquery to dump all the application information into a central repo. And then they run a query across that repo to go, oh, are we running this vulnerable version of, you know, Adobe or whatever? And then they can go, oh, here's a list of all the machines that are running Adobe.
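The vulnerability-management pattern described here boils down to plain SQL against osquery's tables. A sketch against the Windows `programs` table (installed applications); the product name is just an example:

```sql
-- List installed software matching a product of interest.
-- On Windows, osquery exposes installed applications via `programs`;
-- on Linux, tables like `deb_packages` or `rpm_packages` play the same role.
SELECT name, version, publisher
FROM programs
WHERE name LIKE '%Adobe%';
```

Run fleet-wide and collected centrally, the results become exactly the queryable repo of "who's running what version" described above.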

At an old job, we used to have a tool called LANDesk Manager, and I have a feeling that was always query-driven. When you said they just dumped it into a database and you can query it,

I'm like, that sounds like what they were doing. Yeah. Yeah. 

A hundred percent. Yeah. I think it was SQL-driven.

You can drop it into SQL. You can drop it into syslog, a whole bunch of other stuff. What's the query language that you use for osquery? Well, it depends on your endpoint really.

So it uses basic SQL commands. Oh yeah. Okay, cool. 

Yeah. All right. So that's, that's easy then.

That's pretty interesting. Yeah. Yeah.

Is osquery open source? Yes. A hundred percent open source.

Give that a big tick. Yeah. So I'm, yeah, I'm very passionate about open source. 

Two reasons. One is that you can look at the source code, and if you have issues, you can report them to the developers and help the community. But the other part of it is giving an alternative to closed proprietary software.

Because I don't know if you know, but when the internet was growing in the late nineties, it was open source that really led to the massive growth. If it wasn't for open source, and we'd been using proprietary systems, I don't think the internet would be as capable as it is now. People were just volunteering their time to work collaboratively.

We had such a larger, I guess, surface area of people building and bug fixing and implementing. Yeah. Well, you think of the browser wars back in the day, where Microsoft had a proprietary browser, and then you had Netscape and Internet Explorer.

There were other vendors as well as Opera and a few others. Yeah. That was good. 

That was fast. Yeah. It still is. 

And now you've got Brave and more security-conscious type browsers. But there were web servers as well on the other side of that coin. So, you know, to publish a website, if you had to pay for IIS and, you know, SQL Server, half of these businesses wouldn't be around. Google certainly wouldn't have been invented.

Google's a massive contributor to open source and a massive developer of open source projects. The whole Android ecosystem is open source. Yeah. 

So that's the advantage. Yeah. That's the advantage of open source: the community's there. Especially in Australia, we've got a Linux community here, and I believe Velociraptor was first launched at a Linux convention here in Australia.

That's a nice bit of history that you don't hear every day. Yeah. Yeah. 

And part of that presentation was showing how Velociraptor could get Linux information rather than having to go to different log files and download them. And it was really cool. If you have a look at the Velociraptor website, I'm 99% sure you can find it in there.

There's a YouTube video about it. Oh, cool. I will definitely do that. 

Yeah. It's part of some, some study I'm doing at the moment for a big forensic course. So yeah, I love it. 

It's just getting my head around that scripting language. I've been guilty of not investing time in this more scripted angle, just preferring to use GUI tools that I can quickly filter.

And I guess I've had a bit of a turnaround, where I'm now viewing things as: learn it during the downtime when there's not a big incident, and practice it, just so that when something blows up, I'm ready to go and can pull it out of the toolbox, rather than trying to learn on the fly and figure it out. So that's a shift I've had. Absolutely.

And a lot of that work that you do in data analysis is data wrangling. So using grep and, you know, I know when I first started I was using Excel, and now I don't even use Excel for anything. Yeah.

Everything's either in a SIEM tool or done on the command line, searching logs. What are your favorite command line tools? So grep's probably my favorite in the Linux world. I don't deal with many Windows endpoints, but there it's KQL using Sentinel.
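As a minimal sketch of the kind of command-line log wrangling being described, with a made-up log file: grep pulls out the client IPs and sort/uniq ranks them by frequency, a classic first pass when hunting a suspicious address.

```shell
# Create a small sample web log (contents are invented for illustration).
cat > /tmp/sample_access.log <<'EOF'
203.0.113.7 - - [10/Oct/2024:13:55:36] "GET /login HTTP/1.1" 200
198.51.100.23 - - [10/Oct/2024:13:55:37] "POST /admin HTTP/1.1" 403
203.0.113.7 - - [10/Oct/2024:13:55:38] "GET /admin HTTP/1.1" 403
EOF

# Extract each leading client IP, then count and rank by frequency.
grep -oE '^[0-9]{1,3}(\.[0-9]{1,3}){3}' /tmp/sample_access.log \
  | sort | uniq -c | sort -rn
```

The same pipeline shape (extract, sort, count, rank) scales from a three-line sample to gigabytes of real logs.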

And obviously Splunk has its own sort of query language as well, which is quite easy to pick up. Fun times with those sorts of things. But the great thing that I've found is that a lot of the AI tools now can help you, you know, retune your query. So I have a whole bunch of queries that I run, and I can throw one into something like ChatGPT and tell it to put a variant in it.

So I'm looking for this, can you correct this query? And then it'll give me the syntax and I'll try it. It's not perfect every time, but try it in a test environment first; I don't recommend running live queries from ChatGPT on a production SIEM tool. You'll make some people very upset. But yeah, testing it out and refining it makes your life a lot easier.

Yeah. That was something I was going to ask; you'd recommended it as a website. I don't know if I want to get too commercial here, but I think they were converting Yara rules to other detection languages.

Yeah. Yeah. Yeah. 

So that's, that's an open source thing as well. It's called Sigma. Oh Sigma. 

That's yes. Yes. Yes. 

Yeah. Yeah. So Sigma rules. 

So you have all these different query languages, and especially with SIEM rules and SIEM detections, you're looking for something, and it's understanding what you're looking for and which tool you can look in. And there's so many tools. So you have your SIEM tool, you have your EDR, you have your vulnerability management, all these different tools.

How do you standardize your query to look for a particular object or a threat: a hash, an IOC, whatever you want? And Sigma rules are just a detection format. So you can, you know, have a Splunk detection and have it rewritten for CrowdStrike or Defender.

And ELK is the other one as well. And you can obviously run those across multiple environments and get the information back. But that's more of a code pipeline sort of mentality.

And what I mean by that is you've got to have all your code in a pipeline where you understand all the variables. So for example, Splunk uses the Common Information Model, CIM, and Sentinel uses CEF.

So you've just got to have those variables already pre-punched into that pipeline. So when you say, all right, I want to look for an IOC, there's an IP address of whatever, and I want to look across all these different platforms, it converts it.

The code pipeline takes it in, converts it for Splunk, then converts it for CrowdStrike, and then converts it for whatever. And then at the end, you can either have it run directly on those platforms or copy and paste it into them. I love that, because I just have a requirement of, I'm trying to find this IOC.
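To make the format concrete, here's a toy Sigma rule along the lines being described. The title, IOC value and field choice are all invented for illustration:

```yaml
# Toy Sigma rule; the IOC and title are placeholders, not real threat intel.
title: Outbound Connection To Known Bad IP
status: experimental
logsource:
  category: network_connection
  product: windows
detection:
  selection:
    DestinationIp: '203.0.113.50'   # placeholder IOC
  condition: selection
level: high
```

With the `sigma-cli` tooling, a rule like this can then be converted per backend, e.g. `sigma convert -t splunk rule.yml` (assuming the relevant backend plugin is installed), which is the conversion step of the pipeline described above.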

I'm consuming this threat intel. Thank you to, you know, the New South Wales state government for giving us this, or thank you ASD. And cool.

How do I plug this in? I don't have time to figure out, oh, how do I search in this system and this system? I just want it to work. Yeah. So the comparison is pretty much like Yara rules.

So it's like the Yara rules for SIEM detection. And that's a good way to describe it. Yeah.

So for those people that don't know what Yara is, it's an endpoint piece of software where you can tell it what IOCs you're looking for, and it'll go find them for you on that particular machine. So yeah, it's very handy for DFIR people who are looking for certain artifacts to help understand if they've been compromised. And it's got a little nuance to it, right? There's a bit of a format.

There's like a little description, and then there's the "search for this" part, and it's got its own regex analyzer. Is that right? Yep. So you've just kind of got to, I mean, ChatGPT could probably help with this, but yeah, absolutely.

So all of this is XML; both systems use XML. So yeah, it's not hard. It's not complicated.

I think you can get started with some really simple Yara rules, but if you're advanced, you can really put in some wild stuff. Yeah, absolutely. I actually found a really cool tool.
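The structure being described, a metadata block plus search strings and a condition, looks like this in a minimal, made-up rule (the name, string and regex are invented):

```yara
// Minimal illustrative YARA rule; strings and name are placeholders.
rule Suspicious_Downloader_Example
{
    meta:
        description = "Flags a hypothetical downloader string"
        author      = "example"

    strings:
        $url = "http://203.0.113.50/payload.bin" ascii
        $cmd = /powershell(.exe)? -enc [A-Za-z0-9+\/=]{20,}/

    condition:
        any of them
}
```

The `condition` is where the real power sits: simple rules use `any of them`, advanced ones combine string counts, file sizes and offsets.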

I was researching something the other day, and it's called YARA-Signator. And you can pump some malware into it, and it will generate Yara signatures based on what you've provided. So if you've got a malware repository, it will generate the Yara rules for it.

Oh, that's awesome. Yeah. It's basically automatic Yara rule generation for malware repositories, currently used to build Yara signatures for Malpedia, and limited to x86 and x86-64 executables and memory dumps for Linux, OSX and Windows.

Yeah, it's pretty beefy. It needs eight CPU cores, 16 gig of RAM, and a curated malware repository. Yeah.

But yeah, I will totally put this in the notes, because I think there's a lot of value there. I've just been doing about 200 other things, so I haven't gotten around to it.

That's awesome. Yeah. So back in the day I would use a platform called Cuckoo for sandboxing.

Oh yeah, yeah, I remember that. And then I'd use all the IOCs that I found in the sandboxing to go back and create block rules, whether it be, you know, block the IP, block the URL, block that binary hash of the file, that sort of thing.

Yeah. That's how I used to do it in the old days. Now there's a lot more automated tools out there, but it's still a good one for running in your own home lab to get an understanding of what sort of things you can get out of a tool like that. 

And I believe it's still being updated. Take a look at it now. Yeah.

I'd love to talk more about that. Maybe I'll have a bit of a play with it, and we can absolutely go through it and talk about how it is. Because I mean, I'm all about free stuff as well.

Using free tools. It's great that we've got so many commercial tools available, but if you're running things at home, or if, like me, you've closed out the budget cycle for the year at work, the boss is like, well, sorry, it's not happening. You know, we'll see you in another nine months.

You still have a requirement to get these things over the line, and anything to speed up that incident response and report generation is helpful. Absolutely. Right.

This has been an epic podcast interview. I think we should totally do this again. I think we've gone through maybe a quarter of the tools that I know, and what I can do and how I do it.

I've got like an A3 page of handwritten notes about the different things you've spoken about. So we can kind of pull some threads. I think we have kind of scratched the surface.

You've been around in the industry for over 15 years. So there's a lot of experience, a lot of knowledge. I think that everyone listening has gained a ton.

I have learned so much, and it has been awesome to talk about some older technologies that we both remember as well. So thanks again for coming on. No worries.

If people want to find you online, obviously you've got the GitHub, just mylesagnew, and you're on LinkedIn as well, aren't you? Yes. I'm on LinkedIn, and I've got a website just called mylesagnew.com, spelled with a Y. And yeah, if you Google search me, you can find me quite easily. I'll drop that in the notes with a link so everyone can do that.

Let's wrap it up there, mate. Thank you again. No worries.

Not a problem. I'll see you for coffee in another couple of weeks. Absolutely. 

Talk to you later. Have a good one. Cheers. 

Bye.
