CISSP Cyber Training Podcast - CISSP Training Program

CCT 252: Logging and Monitoring Security Activities for the CISSP (Domain 7.2)

Shon Gerber, vCISO, CISSP, Cybersecurity Consultant and Entrepreneur Season 3 Episode 252

Send us a text

Dive deep into the critical world of security logging and monitoring as we explore Domain 7.2 of the CISSP certification. This episode unpacks the strategic considerations behind effective logging practices that balance comprehensive visibility with practical resource management.

We begin with a thought-provoking look at Anthropic's new AI chatbot designed specifically for classified government environments. Could this be the beginning of something like Skynet? While AI offers tremendous capabilities for processing classified data, these developments raise important questions about reliability, oversight, and unintended consequences.

The heart of this episode focuses on building a robust logging and monitoring strategy. We examine the various types of logs you should consider—security logs, system logs, application logs, network logs, and database logs—while emphasizing the importance of starting small and focusing on critical systems. You'll learn why centralized logging through SIEM platforms has become the industry standard, and how to approach log retention policies that balance regulatory requirements with storage costs.

Active monitoring, passive monitoring, and the correlation of events each serve distinct security purposes. We explore how techniques like log sampling and clipping levels can help manage the overwhelming volume of data modern networks generate, while highlighting the risks of missing critical security events if these techniques aren't properly implemented.

Special attention is given to egress monitoring—watching what leaves your network—as a crucial but often overlooked security practice. Since attackers ultimately need to extract data from compromised systems, monitoring outbound traffic can catch breaches even when the initial compromise was missed.

The episode rounds out with discussions on emerging technologies transforming the security monitoring landscape: SOAR tools that automate security operations, the integration of AI and machine learning for threat detection, and the strategic use of threat intelligence to understand attacker methodologies through frameworks like the cyber kill chain.

Whether you're preparing for the CISSP exam or working to strengthen your organization's security monitoring capabilities, this episode provides both the conceptual understanding and practical considerations you need. Connect with us at CISSP Cyber Training for more resources to support your certification journey.

Gain exclusive access to 360 FREE CISSP Practice Questions delivered directly to your inbox! Sign up at FreeCISSPQuestions.com and receive 30 expertly crafted practice questions every 15 days for the next 6 months—completely free! Don’t miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success. Join now and start your journey toward CISSP mastery today!

Speaker 1:

Welcome to the CISSP Cyber Training Podcast, where we provide you the training and tools you need to pass the CISSP exam the first time. Hi, my name is Shon Gerber and I'm your host for this action-packed, informative podcast. Join me each week as I provide the information you need to pass the CISSP exam and grow your cybersecurity knowledge. All right, let's get started.

Speaker 2:

Good morning everybody. It's Shon Gerber with CISSP Cyber Training, and I hope you all are having a beautifully blessed day today. Today we are going to begin domain 7. Yeah, domain 7.2, conducting logging and monitoring activities. So this is an ongoing saga that we have with CISSP Cyber Training, and we are pretty excited about the fact that we get to provide you some great content. Again, 7.2 is what we're going to be chatting about today, but before we do, I had an article I wanted to bring up and get in front of your attention.

Speaker 2:

So this is an article out of Ars Technica, and they talk about Anthropic. Anthropic is releasing an AI chatbot specifically for classified use. It's called Claude Gov, and it's designed specifically around national security and intelligence agencies. The goal of it is to take all this information in a classified setting and use a chatbot that is created for defense-related documents. It's going to be handling all kinds of things that are critical to national security. Well, is there any downside to this? I don't know. This really does start to sound like Skynet. Yeah, Skynet's coming. We just don't realize it yet, or it's probably already here and we just don't even know it. A similar product, ChatGPT Gov, was introduced in 2025, and this is part of a wider push by a lot of AI firms to get their fingers into the government and military complex and the ability to use their chat capabilities to search gobs of data.

Speaker 2:

Now, coming from a classified environment, having lived that for many, many years, yes, I can definitely see the value in this. There's some huge value in having AI inside the government apparatus to be able to look through classified and unclassified data. Obviously, as you're studying for the CISSP, you understand the permissions around this can be substantial and you're going to want to make sure that those are pretty tight. But, that being said, there's so much information in the classified networks that you never actually have a good understanding of what's out there. It doesn't matter if it's the Chinese, Russian, French, Israeli or US: everybody creates gobs and gobs of data that has not been searchable for the most part. I mean, you can kind of poke around, and I'm sure it's better than when I was in there, but the bottom line is that it's been really hard to do. So does this make sense? Yes, it does make total sense. However, there are some questions I have, and one of the big ones, which even the article brings up, is the reliability of the intelligence that this thing is able to garner.

Speaker 2:

Now I go back, you know, reverse the course back to Saddam Hussein and Desert Storm, and some of you might be listening to this going, Desert what? Yeah, this is a while ago, right, so I'm showing my age. But we basically invaded Iraq because we felt that Saddam Hussein had chemical weapons; that was what the US government thought. Now, that being said, whether it's a conspiracy theory, whether it was all Bush trying to push it, who knows, don't care. At this point it doesn't really matter. The point of it is they got it wrong, right? They got it wrong based on the intelligence they had.

Speaker 2:

So now you add AI into this, and then you add Skynet, and then, all of a sudden, AI is smart and AI starts thinking on its own and it starts feeding you information to make you do things that it wants you to go do. Yeah, Mission Impossible: Fallout, or whatever the most recent Mission Impossible is, all over again. So again, that's obviously fast-forwarding a bit, and that's kind of creepy scary, but the point is that Anthropic is releasing their AI chatbot to work on specific classified work. It's moving in this direction. So I say all this because, if you're listening to this, you probably are dealing with AI in your organization. Hey, come visit me at NextPeak and we can help you with that as well. We've developed an actual AI cybersecurity risk framework for you. And yeah, it doesn't really matter where you go and get this risk analysis done. You need to get it done somewhere, because if you think that you don't have AI in your environment, maybe you don't, but you will soon. So it's imperative that you start thinking ahead of this now. I'm working with some really strong contracts right now and these folks are now starting to go, oh yeah, AI is coming, you better look at it. I truly believe it, and this is just another example of that.

Speaker 2:

Okay, so let's go ahead and get into what we're going to talk about today. This is domain 7.2, conducting logging and monitoring activities. So this is a big part of the CISSP. Logging and monitoring is a huge factor and it's something you really need to understand and be prepared for as it relates to taking the test. So what is logging and monitoring? Well, we've talked about this in numerous cases around CISSP Cyber Training, but one of the important things about it is that it's a foundational security control. Logging and monitoring does provide you extreme visibility into your systems. Now, the part that's going to be a factor, and we'll talk about this, is the amount of logging that you do. Logging and monitoring as a, air quotes, whole can be very limited, and one of the things that affects that is cost, which we'll get into in just a minute.

Speaker 2:

So you really need to consider what is your logging strategy. This will help detect malicious or anomalous behavior that's occurring within your organization. Now there might be some compliance requirements around this, such as GDPR, HIPAA, PCI DSS, GLBA, anything out there. They may require some level of logging within your company. Now, they won't come out with the specifics, but they'll tell you you need to be logging critical-type systems. So it's important that you consider what is your logging strategy. I just said that twice now in the same slide. You need to have a strategy. What is your overall plan? It also deals with incident detection. Real-time monitoring and logging can identify security incidents way before they escalate into something else. So you need to have a good idea of those, and then they actually help a lot with forensics investigations. Now, they can only help with forensics investigations if you have them put in strategic and tactical places within your company so that you can catch the right amount of data. Again, you're going to deal with forensics and dealing with PCAPs, that's right, your packet captures. Those need to be stationed in certain spots within your network to ensure that you're getting the right amount of data coming into the overall system. Now, some key concepts. Centralized logging comes into this.

Speaker 2:

Logs go into a central repository, what we would typically call a SIEM. Now you have Splunk, you've got Azure, you've got ArcSight, you've got various types of SIEMs out there that basically aggregate the data. Now, some are better than others depending upon the needs of your organization, and we're not going to get into SIEMs today, but it really kind of depends. If you're dealing with a cloud environment, maybe you want something that's more Azure. If you want something that deals with all types of network logs, from basically security logs to overall network architecture logs, maybe Splunk is the thing. Do you have a limited budget? That may impact it as well. So something just to kind of walk through and determine. Log retention policies: how long do you want to keep the stuff that you're storing?

Speaker 2:

A few things factor into that. One is the regulatory requirements around the amount of data that you store. Two, how fast do you need to recover, and are those logs important, even imperative, for that recovery? Three, do legal, business or any sort of operational needs require you to keep them for a period of time? So it's important for you to understand these policies and to have a good plan before you get too far down the path of implementing them. Having limited log recovery or storage is fine, but you need to really consider what is your long-term strategy around it.

Speaker 2:

Log integrity: you need to ensure the logs are tamper-proof. Somebody can't go in and start deleting them or modifying them to hide what they're doing. Do you have hashing or encryption or something like that set up specifically for your logs? It's imperative that you really do have a good idea here, and this comes back to what I said just a minute ago: your strategic goal around logging and monitoring. And then alerting and notifications. What kind of alerts are set when thresholds are exceeded? Right, if you have a certain level of logins from an external location, is that alert kicked off? Will it then send you a notification? How is that notification done? As a friend of mine at NextPeak says, what is the life cycle, from the time you start it to the time it dies? What is the overall process and how will that work? So there's different types of logs.
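As an illustration of the hashing idea for log integrity, here's a minimal hash-chain sketch in Python. The record fields, seed value and sample messages are all hypothetical; a production setup would typically also sign the chain head or ship the hashes to a separate, write-protected system.

```python
import hashlib

def chain_logs(entries, seed="genesis"):
    """Build a tamper-evident hash chain: each record stores a hash that
    covers the previous record's hash plus its own message."""
    chained = []
    prev = hashlib.sha256(seed.encode()).hexdigest()
    for msg in entries:
        digest = hashlib.sha256((prev + msg).encode()).hexdigest()
        chained.append({"msg": msg, "hash": digest})
        prev = digest
    return chained

def verify_chain(chained, seed="genesis"):
    """Recompute the chain; any modified or deleted record breaks every
    hash from that point on."""
    prev = hashlib.sha256(seed.encode()).hexdigest()
    for rec in chained:
        expected = hashlib.sha256((prev + rec["msg"]).encode()).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = expected
    return True

logs = chain_logs(["login ok: alice", "role change: bob -> admin"])
assert verify_chain(logs)          # untouched chain verifies
logs[0]["msg"] = "login ok: eve"   # attacker edits a record...
assert not verify_chain(logs)      # ...and verification fails
```

The point is that an attacker who deletes or edits one record can't do so quietly: every later hash stops matching.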

Speaker 2:

You have a security log. These security logs contain authentication events, access attempts and security events: successful and failed logins, changes to user roles and privileges. All of those are examples of what could be in a security log. If somebody's going in and changing their role or their privileges, you need to have a record of that. Is it somebody that's supposed to be doing that or not? It could be a bad guy or girl going in and actually modifying their credentials so that they can do something nefarious or inappropriate.

Speaker 2:

System logs: you need to track the events related to the operating system or hardware. This covers boot processes, any kernel events. All of these are system logs that are stored. Now, if you're looking at having to rack and stack these, you may make a choice where, you know what, maybe I'm not going to take system logs, or maybe you will take application logs, which I'm going to get into in just a second.

Speaker 2:

You decide which logs you want to keep, and there's different kinds of reasons behind that. So, application logs: these record events generated by specific applications. So if you have an application such as Salesforce, that's a big one, I know, but let's just say Salesforce, and a certain part of that Salesforce application is doing or recording something, whatever that might be, that would then have a specific log for that. You have databases; they have a specific log. The point of it is that you have security logs, system logs and application logs. You have to decide which ones you want to keep: all, some, maybe modified a bit. It's up to you, but a lot of it will come down to what is your overall, quote-unquote, strategy.

Speaker 2:

Now, network logs: these are some other ones to consider. This goes into your router and switch activities, traffic flow logs, NetFlow, sFlow, different types of flow logs that are coming in, and again, you can see all these types of logs are going to continue to grow. Well, when you store all this, what does it cost? It costs money and it costs storage space. So you're going to have a lot of data you're going to have to keep, and it's going to potentially cost you a lot of money. So then you have to come back to your thought process around that. Database logs: these are important ones to really start considering keeping. Now, again, there's a lot of databases within most enterprises, a gob of databases. So you're going to want to make sure that you pick the most critical databases and start from there.

Speaker 2:

I recommend starting small with your strategic plan. What are you going to keep? My critical apps, my critical databases, my security logs: those should all be kept. And then from there, are there any network logs that connect the two together that I feel I should keep? That's how I would rack and stack this. Then, from there, once you have a good plan and an idea of how you want to do it, you can expand that out, keeping in mind the cost that's going to go with this, both from an opportunity cost and from a capital expense. How much is this going to cost your organization, and how much time is it going to cost by your people going out and configuring all this? Then the last one is an audit log. These basically capture changes to systems, applications and user privileges. It's a secondary log that might be kept because it has an audit function. Now, your organization or application or whatever may not have an audit piece to this, but it might be something your audit team wants to set up with specific snapshots to help them in their overall plan. It just kind of comes down to you and how your organization is going to handle it.

Speaker 2:

Now, the role of monitoring: again, it's around early threat detection. You identify potential threats or intrusions in near real time. That's the goal, right? The moment that you see it, you can then quickly make a decision, flip to it and take care of it. It also deals with operational efficiency. Bottom line is it's your operational aspects, right? It ensures system availability, and performance data can be used to identify issues early. If something breaks early, you can now identify it quickly.

Speaker 2:

Compliance validation: compliance is a big deal. A lot of organizations I work with are like, yeah, yeah, compliance, we're going to do our thing, we're going to be there protecting against the hacking stuff going on. Compliance? Eh, no, don't really care too much about them. That's the wrong approach, right? So, depending on your organization and the regulatory aspects of it, you may need to have a very close relationship with your compliance folks, and in other cases maybe you don't. But I would highly recommend, if you don't have a good relationship with them, you go and build one. One, it's going to help your organization. Two, it's going to help you professionally. And three, there's probably a three in there, but I can't think of what that is, so at least those two, right? It'll help your organization, it'll help you professionally. Just stick with those.

Speaker 2:

The last thing around the role of monitoring is behavior analysis. Establish baselines for normal activity to detect deviations and/or anomalies. People will make choices, right, and if you have behavior analytics baked into this, it can go a long way in helping you detect any sort of issues or anomalies that may be occurring within your environment or your network, because of those behavior activities. Now, some key components: we have active monitoring, passive monitoring and correlation of events. Active monitoring involves real-time tracking of events and generating alerts on the specific anomalies that are going on. Passive monitoring is where you're looking at your logs and your data after the collection; it's often used for forensics activities. Now, the forensics piece of this, again, the important part, like we mentioned earlier, is the amount of logs that you store. It's really hard to do forensics on logs that don't exist, right? So if you want to have some level of forensics capability, you need to strategically plan where you're going to pull these logs from, and then you have them in a protected, centralized location where only select people or applications can gain access to them. So that's an important part.

Speaker 2:

Correlation of events: this is identifying patterns and relationships across multiple data sources to provide actionable intelligence. Right, you're looking at the events and you're correlating between them. Kind of like we've talked about in numerous places within CISSP Cyber Training: if you have multiple data sources coming in, how can you use that information to give you a product where you can go, oh, A plus B equals C? Not A plus G plus D equals F; that doesn't work, right? You want the correlation of events and all these different signals coming in. Then there's the protection of log data. An attacker will want to manipulate this data. So what do you need to do? You need to protect it, like we mentioned before, having a central repository that your SIEM can gain access to.
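To make the "A plus B equals C" idea concrete, here's a small correlation sketch in Python. The event schema, IP addresses and thresholds are hypothetical, and a real SIEM correlation rule would also window events by time.

```python
from collections import defaultdict

# Hypothetical pre-parsed events from two different sources
auth_events = [
    {"src_ip": "203.0.113.9", "event": "failed_login"},
    {"src_ip": "203.0.113.9", "event": "failed_login"},
    {"src_ip": "198.51.100.4", "event": "failed_login"},
]
firewall_events = [
    {"src_ip": "203.0.113.9", "event": "port_scan"},
]

def correlate(auth, firewall, min_failures=2):
    """Flag IPs that both port-scanned (firewall log) and repeatedly
    failed logins (auth log): two weak signals, one actionable alert."""
    failures = defaultdict(int)
    for e in auth:
        if e["event"] == "failed_login":
            failures[e["src_ip"]] += 1
    scanners = {e["src_ip"] for e in firewall if e["event"] == "port_scan"}
    return [ip for ip, n in failures.items()
            if n >= min_failures and ip in scanners]

print(correlate(auth_events, firewall_events))  # ['203.0.113.9']
```

Neither source on its own is conclusive; it's the combination across sources that produces the actionable signal.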

Speaker 2:

Now, what is a SIEM? You hear me say this. It's called security information and event management, otherwise known as a SIEM. Now you also have what we call forwarders, and they take the log data and forward it on to the SIEM. You may have central locations where this is stored. As an example, Splunk has what they call a heavy forwarder. It can move lots of data; I guess that's why they call it heavy. But bottom line is it works really well to take aggregate log data and ship it to a central location. It also can work well to parse the data before it's shipped, so it doesn't just ship everything. You can have it select certain levels of log data to be sent, and then other data is shunted or basically dumped.

Speaker 2:

Now, I have 30, 60, 90 days on the slide, and if you're looking at that, one thing to consider is: 30, 60, 90 days, what does that mean? It means the amount of log data you want to store for your organization. Now I will tell you that pretty much everybody, well, everybody's not the right word, many people will store seven days of data. That's great, but after seven days it's always regenerating, right? It goes in a first-in, first-out kind of activity. So what ends up happening is you only have seven days. By the time you detect something that's occurred within your organization, odds are high your log data is gone. Right, it ain't there. So you really truly want to have a 30, 60, 90-day policy on how much of your logs you're going to store.
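A tiered retention policy like that can be sketched in a few lines of Python. The tier names, retention windows and entry schema here are all hypothetical, purely to show the first-in, first-out purge idea.

```python
from datetime import datetime, timedelta

# Hypothetical tiers: critical systems keep 90 days, everything else 7
RETENTION_DAYS = {"critical-db": 90, "default": 7}

def expired(entries, now):
    """Return entries older than their system's retention window,
    i.e. the ones a FIFO purge job would delete."""
    out = []
    for e in entries:
        days = RETENTION_DAYS.get(e["system"], RETENTION_DAYS["default"])
        if now - e["ts"] > timedelta(days=days):
            out.append(e)
    return out

now = datetime(2025, 6, 1)
entries = [
    {"system": "critical-db",  "ts": now - timedelta(days=30)},  # kept: 90-day tier
    {"system": "web-frontend", "ts": now - timedelta(days=30)},  # purged: 7-day tier
]
print(len(expired(entries, now)))  # 1
```

The same 30-day-old record survives on the critical tier and is purged on the default tier, which is exactly the "90 days for this one server, 7 for everything else" decision described above.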

Speaker 2:

Again, this comes back to strategic planning for critical apps. You know what? Maybe I only need seven days for everything else, but for this one server and this one database I want 90 days, because if anything bad happens to it, I want to know it. I want to have the ability to go back and look at it. The other part around this is a legal hold. Your legal team comes in and says, I need you to hold onto this data until I tell you otherwise. So then you may have to keep this log data for indefinite periods, and it's imperative that you have a good plan around that. When legal comes to you and says keep all the log data, you say, okay, I'll keep it, no problem. By the way, here's the bill; the bill is going to cost you X. Legal may go, well, you don't need to keep all that, just keep that. So it's important that you have that relationship with legal, because otherwise they're just going to come in and say, hey, you're the IT guy, you just take all this stuff, you do what I ask you to do. I'm making 400 bucks an hour, you're not, you just do what I ask. It's imperative, though, that you go and say to them, you know what, that's fine, I'm happy to do it, but it's going to cost you X. They need to understand the bill and they also need to pay for it, because they may make a decision that, you know what, it's not worth it, or they may say, you know what, it's fine, we'll pay it.

Speaker 2:

The thing is that you really need to make sure that the logs are destroyed if they're not being used, and I can't stress that enough. Log data, in and of itself, just having the data around, is a bad idea, so you need to make sure that you delete it when it's not specifically being used. Security information and event management, SIEM: okay, we talked about the SIEM. It's an automated, configurable product. Rule sets are established to alert on and flag suspicious activity. They will range in price depending upon the, air quotes, bells and whistles you want to add to them. So the bigger the product, the more it's going to cost.
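A SIEM rule set, at its simplest, is just a list of named conditions evaluated against parsed events. A toy sketch in Python, with a completely hypothetical event schema and rules (real SIEM rule languages are far richer than this):

```python
# Each rule is a name plus a predicate over a parsed event dict
rules = [
    ("admin-role-granted", lambda e: e.get("action") == "role_change"
                                     and e.get("new_role") == "admin"),
    ("external-login",     lambda e: e.get("action") == "login"
                                     and not e.get("src_ip", "").startswith("10.")),
]

def evaluate(event):
    """Return the names of every rule the event trips."""
    return [name for name, pred in rules if pred(event)]

print(evaluate({"action": "role_change", "new_role": "admin"}))
# ['admin-role-granted']
```

Commercial products layer correlation, suppression and machine learning on top, but the core loop, match events against rules and flag the hits, is this simple.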

Speaker 2:

Splunk is a great example. I remember when Splunk was brought in. Splunk was a great idea. The logs were not that expensive, and then, over time, the storage of the log data is getting to be more and more expensive, because, guess what, it's not really a Splunk problem, it's more of a data storage problem. But you realize, to get the true value out of Splunk, you have to have lots of data for it to use. So what do you do? You store more. And as you store more, what happens to your costs? Costs go up. So it's really important that you have a good plan on that.

Speaker 2:

Now, these rule sets in these SIEMs are established to alert or flag on suspicious activity. They range in price, again, the bells and whistles we talked about, but realistically, that's the ultimate goal: something triggers, they let you know something's going on.

Speaker 2:

Now, they typically are deployed in either an agent or an agentless deployment. The agentless deployment basically takes logs directly from the system and ingests them into the SIEM. The agent utilizes software to collate the logs, to basically parse them down to a much smaller amount. And depending upon where you have your data, so let's say you have lots of remote locations, you may want an agent at each of those locations, or maybe a forwarder that's passing on the information. Why? Bandwidth. Bandwidth's a big deal. If you're shipping data over the wire, it costs a lot of money and it takes up a lot of bandwidth. So, therefore, do you need to send all these logs back, or can you send a smaller subset of logs back? An important thing to consider when you're dealing with your overall strategy. Now, in a typical deployment, agents sit on the systems that are being monitored, and you can deploy them with additional functionality.
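The bandwidth point can be illustrated with a simple forwarder-side severity filter. This is a sketch in Python with a made-up severity scale and event schema, not any vendor's actual forwarder logic:

```python
# Hypothetical severity scale, lowest to highest
SEVERITY = {"debug": 0, "info": 1, "warning": 2, "error": 3, "critical": 4}

def filter_for_forwarding(events, min_severity="warning"):
    """Keep only events at or above the severity floor before shipping,
    so the remote site sends a fraction of its raw log volume."""
    floor = SEVERITY[min_severity]
    return [e for e in events if SEVERITY[e["level"]] >= floor]

raw = [
    {"level": "debug", "msg": "cache hit"},
    {"level": "error", "msg": "disk failure on /dev/sda"},
    {"level": "info",  "msg": "heartbeat"},
]
print(filter_for_forwarding(raw))  # only the error event is forwarded
```

Out of three raw events, only one crosses the wire; that ratio is the whole argument for parsing at the remote site instead of shipping everything back.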

Speaker 2:

These SIEMs are usually very configurable. They can do a lot of different things, and if you're looking at the slide, sometimes they can do too much. It's a typical thing where, you know what, ooh, my watch now has bells and whistles, I'm going to enable them all, turn them all on, because I want to be excited and I want to see lots of stuff. That's usually a bad idea, because what happens when it lights up like a Christmas tree? You just kind of go into shock, because all the flickering lights cause you to basically have a stroke or shock or whatever, and you just pass out. So you don't want to do that. You want to basically start small and work your way out. Correlation engines and machine learning are also being embedded within these SIEMs now, which is awesome.

Speaker 2:

Back to the initial point we had in the notes: AI is a huge factor in all of these capabilities and it needs to be leveraged. However, that being said, understand the foundational pieces before you flip on AI. I'll give you an example of this. I have QuickBooks, right? I have business books for my wife's Kona business and for our Traveling Tom's coffee business. QuickBooks decided, in their infinite wisdom, we're going to kick on AI. Oh, I hate it, I despise it. Why? Because it's giving me all kinds of stuff and I don't even know what it means, and so what is it causing me to do? I just don't want to touch it anymore, which is bad for business, right? You don't want to do that. But the point is that AI is great, but you've got to have your foundational pieces in place before you flip the switch on. I would have preferred with QuickBooks that I turn it on, not them. So again, there's an important part: you need to integrate with other device management systems, such as Microsoft's SCCM, which is System Center Configuration Manager. A lot of organizations have SCCM within their company. You need to incorporate your SIEM into that. It can add a lot of automated processes to help your organization be much more secure.

Speaker 2:

Continuous monitoring: what is this? This provides an audit trail and, basically, investigation fodder. What is investigation fodder? Lots of stuff that can help you with an investigation. Without the logs, again, you have virtually nothing other than the specific incident, and that's if you even catch it. The logs help you find what actually occurred. Now, Network Time Protocol is an important part of all your logs. This is what's actually doing the timestamp on the logs themselves. It synchronizes the timestamps and allows for breadcrumbs for you to be able to go back and trace what actually occurred when you're dealing with some sort of audit trail for continuous monitoring. It also promotes accountability; it's an imperative piece of all of this. Now, continuous monitoring provides data for adequate investigations, and the log amounts it provides will be huge, right? You need investigative automated tools to help you search the logs, because you really truly need some level of automation to go through the vast amounts of logs that will be coming into your SIEM, so it's important that you have that. But again, start small. Do the mapping, have a plan. If you do that, rather than just turning it on, you'll be much more successful and you'll actually have the ability to protect your organization in a much better capacity than if you just flip the switch and turn it on.
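Here's a small Python sketch of why synchronized clocks matter for those breadcrumbs: once timestamps carry correct offsets, events from different hosts can be normalized to UTC and ordered into one timeline. The hosts and times are hypothetical.

```python
from datetime import datetime, timezone, timedelta

# Two hosts reporting in different local offsets; NTP-synced clocks mean
# the offsets are trustworthy, so we can normalize to UTC and sort.
events = [
    {"host": "web01", "ts": datetime(2025, 6, 1, 9, 0, 5,
                                     tzinfo=timezone(timedelta(hours=-5)))},
    {"host": "db01",  "ts": datetime(2025, 6, 1, 13, 59, 58,
                                     tzinfo=timezone.utc)},
]

# Reconstruct the true sequence: 13:59:58 UTC happened before 14:00:05 UTC
timeline = sorted(events, key=lambda e: e["ts"].astimezone(timezone.utc))
print([e["host"] for e in timeline])  # ['db01', 'web01']
```

Note that web01's local reading of 9:00:05 looks "earlier" than db01's 13:59:58; only after converting both to UTC does the real ordering, db01 first, come out. With drifting, unsynchronized clocks, that reconstruction is impossible.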

Speaker 2:

The role of monitoring, as we've mentioned before, is early detection, operational efficiency, compliance and behavior analysis. All those are aspects of it. But we're going to get into active monitoring, passive monitoring and correlation of events. So we've got proactive monitoring: this is where you're looking for indicators of compromise, or IOCs, such as unusual login patterns and unexpected system behavior. You need to correlate the network traffic with potentially malicious IP addresses or domains from threat intelligence that you may be receiving. So these IOCs are what you're looking for when proactively monitoring what's occurring within your network. Now, forensic analysis: this is where you utilize logs for reconstructing the sequence of events. This is usually after the fact.

Speaker 2:

You also need to keep all of this for what they call chain of custody. We've talked about chain of custody a lot in CISSP Cyber Training. The point of it is that it's a plan for how you're going to keep this data from a legal perspective, to ensure that if something ever had to go to court, you can prove without a doubt that you had this data: I got it at this time, from this system, and I have all the logs to back it all up. They have been saved, you can go and look at them, everything is tight. That is what you're dealing with in chain of custody. Now, when you're integrating with incident response.

Speaker 2:

This will also help trigger response playbooks, or what they call procedures, either one of those, and they're based on the alerts that come in. Then they will go, oh, something bad happened, pull out the XYZ playbook, let's run through it. The point is it allows them to respond quickly to these events. Now, I've dealt with organizations that go, well, I don't need a playbook, I've got it down pat, I know what I'm supposed to do, I've done this so many times I can do it in my sleep. Yeah, you rock, you're amazing, that's awesome. However, when you hire somebody who comes in, that person doesn't have the same level of knowledge. So, in a playbook, it's important that you put all this information down and you keep it updated, because, guess what, we all know within security we've got people coming and we've got people going, and so it's imperative that you have a plan to deal with that specifically as well. It also helps escalate incidents to the appropriate teams when severity thresholds are met or exceeded. So again, monitoring and investigation is an important part.

Speaker 2:

Now syslog. We're going to get into what syslog is. The syslog protocol is a standardized protocol for logging messages from network devices and systems. It operates, by default, on UDP port 514. Now, that's the default port for syslog; that doesn't always mean it's going to be on 514, you can set whatever one you want. But bottom line is that it's set up to go through there. You can have UDP or you can have TCP. If you're going TCP, obviously it's much more reliable, but now you're dealing with the SYN-ACK aspects of it, versus UDP just blasting the log data to that location. Now, the syslog structure: you've got a header and you have a message body. The header includes the priority, timestamp and hostname, and then the message body contains the actual log message itself. Authentication failures, firewall events, system boot messages and so forth, all of that stuff is available through syslog.
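As a rough sketch of that header-plus-body structure, here's a Python example that builds an RFC 3164-style message (the priority value is the facility code times 8 plus the severity code) and could send it over UDP to port 514. The server name is a placeholder, and real deployments would usually just use a syslog library or `logging.handlers.SysLogHandler`:

```python
import socket
from datetime import datetime

# A few RFC 3164 numeric codes (there are more of each)
FACILITY = {"auth": 4, "local0": 16}
SEVERITY = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
            "warning": 4, "notice": 5, "info": 6, "debug": 7}

def build_syslog_message(facility, severity, hostname, tag, text):
    """Build an RFC 3164-style message: <PRI> header, then the body."""
    pri = FACILITY[facility] * 8 + SEVERITY[severity]
    timestamp = datetime.now().strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {hostname} {tag}: {text}"

def send_syslog(message, server="loghost.example.com", port=514):
    """Fire-and-forget over UDP, the default transport on port 514."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message.encode("utf-8"), (server, port))

msg = build_syslog_message("auth", "err", "web01", "sshd",
                           "authentication failure for root")
print(msg)  # e.g. <35>Jun 10 09:15:02 web01 sshd: authentication failure for root
```

The UDP send is exactly the "blasting the log data" behavior described above: no handshake, no delivery guarantee.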

Speaker 2:

Now log sampling. This is reviewing a subset of the logs rather than analyzing all the collected data. It reduces the workload from large volumes and focuses on high-priority events. Now, the challenge around this is that there's a risk of missing critical events. When you're just sampling the log data, you could actually miss something. You also get sampling bias if it's not randomized, so you could be sampling certain areas that are always coming in and other areas that are not. That's the sampling piece of this. Now the best practice is using stratified sampling for specific types of events or timeframes, and supplementing the sampling with targeted searches for IOCs, or indicators of compromise. So that's what you're dealing with around log sampling.
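Here's a small sketch of what stratified sampling could look like, assuming Python and illustrative per-type rates: keep every event of a high-priority type, and only a random fraction of the noisy ones:

```python
import random

def stratified_sample(logs, rate_by_type, default_rate=0.01, seed=42):
    """Keep every event of a high-priority type, a fraction of the rest.
    rate_by_type maps an event type to its sampling rate (1.0 = keep all)."""
    rng = random.Random(seed)
    kept = []
    for entry in logs:
        rate = rate_by_type.get(entry["type"], default_rate)
        if rng.random() < rate:
            kept.append(entry)
    return kept

logs = ([{"type": "auth_failure"}] * 10) + ([{"type": "debug"}] * 1000)
sample = stratified_sample(logs, {"auth_failure": 1.0, "debug": 0.05})
print(len(sample))  # all 10 auth failures, plus roughly 5% of the debug noise
```

Notice the stratification is what protects against the bias problem: the critical stratum is sampled at 100%, so you can't miss an authentication failure no matter how much debug chatter surrounds it.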

Speaker 2:

Clipping levels. These are thresholds set to filter out minor or irrelevant events. You have lots of little things going on in your network, and you'll see this in logs a lot, all kinds of stuff happening. A clipping level would basically say, I'm going to ignore login failures below three attempts, right, because people screw up, I do too. This helps avoid excessive alerts just because people make typos. You might also log only errors above certain severity levels, right, so if you have a certain issue that happens and it's all low severity, don't even mess with it. Now, the benefits: obviously it reduces noise and improves analyst focus, so that's positive, and it optimizes your storage by not keeping unnecessary logs that add cost and time.
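Both of those examples, the three-attempt login rule and the severity floor, can be sketched in a few lines of Python. The event shapes and thresholds are made up for illustration:

```python
from collections import Counter

SEVERITY_ORDER = {"debug": 0, "info": 1, "warning": 2, "error": 3, "critical": 4}

def apply_clipping(events, min_severity="warning", login_failure_clip=3):
    """Drop events below the severity floor, and only surface login
    failures once an account crosses the clipping level (e.g. 3 attempts)."""
    failures = Counter()
    kept = []
    for e in events:
        if e["type"] == "login_failure":
            failures[e["user"]] += 1
            if failures[e["user"]] >= login_failure_clip:
                kept.append(e)          # threshold crossed: now it matters
        elif SEVERITY_ORDER[e["severity"]] >= SEVERITY_ORDER[min_severity]:
            kept.append(e)
    return kept

events = [
    {"type": "login_failure", "user": "alice"},
    {"type": "login_failure", "user": "alice"},
    {"type": "login_failure", "user": "alice"},   # third attempt -> surfaced
    {"type": "app", "severity": "info"},          # below the floor -> dropped
    {"type": "app", "severity": "error"},         # kept
]
print(apply_clipping(events))
```

Only two of the five events survive: the third login failure and the error-severity event. That's the noise reduction, and also exactly where the "slow attack" risk discussed next comes from.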

Speaker 2:

The downside of this is you potentially overlook low-frequency events that could indicate a slow, methodical attack, and misconfigured clipping levels can result in incomplete monitoring. You set your clipping level to 10 instead of two, or 20 instead of two, well, now you're missing all kinds of stuff because you made that fat-finger mistake. So it's important that you have a really good plan for what your organization needs. Now, when you're implementing this, you align your clipping levels with your risk tolerance, and then you review this on a routine basis to adjust your thresholds based on the threats that may be evolving around you. What does that mean? That means, basically, in your business, if you've always known, hey, no APTs want me, nobody wants to deal with me, and then you get a government contract and now all of a sudden you're at high risk from an APT attack, that would change your risk profile and your thresholds, because the threat has changed. That's what that whole point is trying to get to. Egress monitoring. This is where you're monitoring and controlling outbound traffic to detect and prevent data exfiltration, malicious communications or unauthorized access attempts.

Speaker 2:

The importance of this is around data loss prevention. You help identify and stop sensitive data from leaving your company. It can also help you detect malware or any command-and-control communications going out, and many regulations will mandate some level of safeguards against unauthorized data transfer. So you need to be monitoring any outbound communications; one of the main things you really need to do is egress monitoring. Now, you're going to be looking at tools such as firewalls and your IDS/IPS. You'll look at other types of proxy information that might be going outbound; you're going to want to look at all of that. You also want to understand content filtering, inspecting traffic for sensitive information. A lot of these systems will have the ability to look for Social Security numbers, credit card numbers and the like, so you want to make sure you're filtering that kind of data going out.
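As a very coarse sketch of that content-filtering idea, here are regex patterns for the two data types mentioned. Real DLP engines go much further, with Luhn checksums, context rules, and dozens of data types; this just shows the mechanism:

```python
import re

# Coarse patterns for two common sensitive-data types -- illustrative
# only; production DLP adds validation (e.g. Luhn checks) to cut noise.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(payload):
    """Return the sensitive-data types found in an outbound payload."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(payload))

print(scan_outbound("invoice for 4111 1111 1111 1111 attached"))  # ['credit_card']
print(scan_outbound("meeting at 3pm"))                            # []
```

A match here would typically block the transfer or raise an alert for an analyst, depending on policy.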

Speaker 2:

Behavior monitoring: looking for unusual outbound traffic. It's going to Australia at two in the morning? That would not be normal, especially if you're operating out of the Midwest. You want to understand those sorts of behavioral aspects. Now, a challenge that comes with this is that outbound traffic is, in many cases, HTTPS, and inspecting it can be very difficult. I was working with a large company, helping them deploy their TLS decryption software to look at this specifically. So you may want to consider that if you're going to be the security person for your organization; depending on your outbound traffic, that might be something you want to go do.

Speaker 2:

Now, there are lots of false positives. Legitimate activities may trigger alerts, requiring fine-tuned rules and thresholds. So it's important that you have a good plan, and that you understand your false positives, to ensure that what's getting alerted on is truly something bad, versus what's not a big deal. So again, we've talked about egress and monitoring traffic as it's leaving.

Speaker 2:

Assume internal networks have been compromised. So what does the attacker have to do once they've compromised you? The data has to go out somewhere. It's really hard to catch the stuff coming in, but you always want to look for data leaving your company. The attacker wants the data to leave; they want to get it out, and in many ways they will ship it out in plain sight, right out the front door, mixed in with all the other data, so that you will miss it. Now, there are a few areas to look at around stopping data loss: web proxies, steganography and file-based DLP. Your web proxy is obviously applying rules; anything going to destinations it shouldn't go to, stop it right there. Most environments really don't even address this; they just let all the data go out.

Speaker 2:

I say most, not all, but many, many do. Steganography: you embed a message within a message, and it's very hard to detect. Now, the good thing is you can sometimes catch it. If a JPEG is 37 meg in size, well, you might be going, yeah, that ain't right, that's not normally what it is. That would be an alert or a trigger on it. But putting data inside a picture and shipping it out the door, that's really hard to detect. You also have a bigger problem: you've potentially got an insider doing that. Now, could the bad guys do it remotely? Yes, but it's much more complicated if you're remoting in and trying to put data into a picture. That's really challenging, versus being the person in the business, in the company, actually doing it manually on a day-to-day basis, which is much, much easier.
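The 37-meg JPEG check above can be sketched as a simple size-anomaly rule. The per-type ceilings here are invented for illustration; in practice you'd tune them to what your own environment actually sees:

```python
# Rough per-type "normal" size ceilings in bytes -- illustrative numbers,
# not vendor guidance; tune these to your own environment's baselines.
TYPICAL_MAX_BYTES = {
    ".jpg": 8 * 1024 * 1024,    # a 37 MB "photo" would trip this
    ".png": 10 * 1024 * 1024,
    ".txt": 1 * 1024 * 1024,
}

def flag_oversized(files):
    """Flag outbound files far larger than their type normally allows --
    a crude tell for payloads hidden inside innocuous-looking files."""
    alerts = []
    for name, size in files:
        ext = name[name.rfind("."):].lower()
        limit = TYPICAL_MAX_BYTES.get(ext)
        if limit is not None and size > limit:
            alerts.append(name)
    return alerts

outbound = [("vacation.jpg", 37 * 1024 * 1024), ("notes.txt", 2048)]
print(flag_oversized(outbound))  # ['vacation.jpg']
```

This only catches sloppy steganography; a careful insider who keeps the file size plausible would slip past a rule like this, which is why it's one signal among many, not a control on its own.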

Speaker 2:

And then file-based DLP: software that looks at the affected file types, your JPEGs, your PDFs, your docs. Are they going to places they shouldn't go? Does it look like the file type or the file name has changed? All of those aspects you want to be looking for with egress monitoring. SOAR tools. Now, these are platforms that automate and orchestrate your security operations processes: threat detection, analysis, response. SOAR tools are very cool, especially for the level of automation that goes into them. You can combine multiple security tools, a SIEM and threat intelligence platforms, all of this, into a unified workflow. The automation takes you from manual tasks to automated ones, from incident triage to threat containment to reporting; all of that can be automated and set up. A threat comes in, the threat is remediated, a report is generated, everybody's happy, and very few people touch it; it all just kind of automatically happens. That's amazing. Now, playbook execution: this is implementing predefined workflows based on the playbooks that you've set, and this will handle specific types of incidents. So having a good SOAR tool and its process in place is an amazing part of your organization. Some of the other benefits include efficiency, consistency and scalability, and they allow your analysts time to go and actually focus on the real things that are there; the SOAR tools can help triage some of the lower-level things. So they're great, great products. Now, some examples of SOAR tools: you've got Splunk SOAR, IBM QRadar SOAR and Palo Alto's Cortex XSOAR. All of those are different SOAR tools that can be used within your organization. Runbooks and playbooks.

Speaker 2:

So what are these? A runbook is a detailed, step-by-step set of instructions for executing specific operational tasks, such as configuring firewalls, resetting passwords and so forth. A playbook is a high-level workflow that outlines the steps and the decision points for responding to a security incident; phishing attack response is what would fall under a playbook. Now, the key differences are these: a runbook is task-oriented and focused on the technical implementation. The playbook, on the other hand, is strategic and decision-driven, and can reference multiple runbooks. I've created playbooks that have multiple tabs, with multiple runbooks you would refer to to make it happen.
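One way to picture the runbook/playbook split is in code: runbooks as small task-oriented functions, and a playbook as the decision-driven flow that calls them. The incident fields and action strings here are hypothetical:

```python
# Runbooks: task-oriented, technical steps (modeled here as functions).
def runbook_reset_password(user):
    return f"password reset for {user}"

def runbook_block_sender(address):
    return f"blocked {address} at the mail gateway"

def runbook_isolate_host(host):
    return f"{host} isolated from the network"

# Playbook: strategic and decision-driven -- it branches on the incident
# details and invokes whichever runbooks the situation requires.
def phishing_playbook(incident):
    actions = [runbook_block_sender(incident["sender"])]
    if incident["credentials_entered"]:
        actions.append(runbook_reset_password(incident["victim"]))
    if incident["malware_executed"]:
        actions.append(runbook_isolate_host(incident["host"]))
    return actions

incident = {"sender": "bad@evil.example", "victim": "alice",
            "host": "wks-042", "credentials_entered": True,
            "malware_executed": False}
print(phishing_playbook(incident))
```

The branching is the point: the playbook owns the decisions, while each runbook stays a reusable, single-purpose procedure.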

Speaker 2:

A good example: a financial disconnect, or reconnect, from third parties has lots of runbooks in it, and so it's an important piece of any company. Any financial organization really needs to have a good disconnect and reconnect playbook, especially for third parties. If you're listening to this and you don't have one, I can help you; just reach out. So again, disconnect and reconnect playbooks are an important part. Playbooks and runbooks, the importance of them: standardization ensures consistent response to security incidents; documentation provides clear instruction for less experienced team members and, like we mentioned before, having this in place is very important for your team; efficiency reduces response times by eliminating guesswork. Having all of this documented is a very important part. Again, the downside is that it takes work and time to do. You've got to dedicate people and resources to make that happen.

Speaker 2:

Machine learning and AI tools. Okay, so, threat detection: it can help with looking for zero-day attacks. All of this stuff can be incorporated and embedded. I've said it time and again: I truly believe AI, embedded within your security tools, is going to be a big factor in protecting your organization. If you can incorporate it, understand it and deploy it, it's going to help your company mitigate a lot of risk. Again, though, you have to understand it before you deploy it.

Speaker 2:

Automation is another big factor: automate repetitive tasks such as log analysis or threat hunting. You want to have that done without somebody's eyeballs looking at it. Predictive analytics: you forecast potential attack vectors based on historical data. What does that look like? How are they attacking you? How could they potentially attack you in the future?

Speaker 2:

Now, when you're dealing with different types of analysis, malware analysis would be one example of how AI could help. It would look for malicious code by recognizing patterns in the binaries. It could also look at the behaviors of different types of malware: how is it operating? Even though the code that came in may be completely different, it's operating in a very similar manner. This is where AI can look and determine, hmm, maybe this is something more malicious, whereas a person like me would not be able to see that, because it's just so much data. Now, the challenges around this: poor-quality training data can lead to inaccurate models; you know, junk in, junk out. You also have adversarial AI, where attackers may develop techniques to evade AI detection, and I guarantee you they're doing that now. They're coming up with solutions because they know organizations are deploying AI within their companies, so they're trying to figure out how to avoid AI seeing them. And then resource intensity: ML and AI tools require significant computational and storage resources from your organization.

Speaker 2:

Threat intelligence. What is this? This is information about existing and emerging threats, including their tactics, techniques and procedures, or TTPs. Now, there are types of threat intelligence: strategic, operational and tactical. High level is strategic; this is where you have insights for decision makers, and geopolitical risks are all part of the strategic intelligence piece. Operational is where you have information on active campaigns or attack groups potentially coming after you. And then tactical is details such as indicators of compromise, IP addresses and malware hashes; that would be your tactical piece. So that's where threat intelligence is important: strategic, operational and tactical.

Speaker 2:

Now, the benefits of this: you have proactive defense, enhanced detection and incident response. Proactive defense is where you're preparing for the threats and understanding the attackers' TTPs. Enhanced detection is where you're correlating this intelligence with your internal logs, so you're basically putting the nexus together; you're combining them and figuring out where this is all coming from. And then incident response uses these IOCs to identify affected systems, specifically during an investigation. Now, there are different sources. You have open threat intelligence from the likes of AlienVault, you have commercial feeds from FireEye or Recorded Future, and then you have industry sharing groups, the ISACs. You have FS-ISAC, you have the manufacturing ISAC, all these different ISACs that are industry sharing groups. The point it comes down to, though, is that this intelligence is imperative, and you want to incorporate all of it within your organization and within your SIEM.
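That "enhanced detection" correlation can be sketched very simply: match your internal logs against an IOC feed. The feed contents and log fields below are made up; real feeds would arrive as STIX/TAXII or vendor APIs, not a hardcoded dict:

```python
# Hypothetical IOC feed -- in practice this comes from a commercial feed,
# an open exchange, or your ISAC, typically in STIX format.
IOC_FEED = {
    "203.0.113.7": {"threat": "C2 server", "confidence": "high"},
    "198.51.100.9": {"threat": "scanner", "confidence": "low"},
}

def enrich_logs(log_entries, feed):
    """Correlate internal logs with threat intel: tag any entry whose
    destination IP appears in the IOC feed."""
    hits = []
    for entry in log_entries:
        ioc = feed.get(entry["dst_ip"])
        if ioc:
            hits.append({**entry, **ioc})   # log fields plus intel context
    return hits

logs = [{"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7"},
        {"src_ip": "10.0.0.6", "dst_ip": "93.184.216.34"}]
print(enrich_logs(logs, IOC_FEED))
```

Inside a SIEM this lookup happens continuously and at scale, but the nexus being built is exactly this: internal observation plus external context.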

Speaker 2:

Now, the kill chain. This is a framework that describes the stages of a cyber attack, from reconnaissance through to execution. This is the Lockheed Martin Cyber Kill Chain; it's how they broke this out, and you have seven steps. You have reconnaissance, which is gathering information.

Speaker 2:

Weaponization is where they're creating the malicious payload for the exploit. Delivery is where you're transmitting the payload to the target, right, you're launching it. Exploitation is where you're actually doing something to gain access, whether it's using pass-the-hash or using credentials; you're using something. Then installation is installing the malware or the backdoor to allow access. Then you're setting up the command and control system to establish communications with the attacking device, so it allows you to basically remote-control your robot on the moon; that's the command and control piece. And then your actions on objectives: this is where you're executing the final goal, such as data theft or potentially even destruction. Those are your cyber kill chain steps, one through seven: reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives.
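The seven stages above can be laid out as a simple structure, paired here with illustrative examples of where a defender might spot each one (the telemetry examples are mine, not part of the Lockheed Martin model itself):

```python
# The seven Lockheed Martin Cyber Kill Chain stages, each paired with an
# illustrative example of telemetry that might reveal it.
KILL_CHAIN = [
    ("Reconnaissance", "port-scan and web-crawl patterns in perimeter logs"),
    ("Weaponization", "rarely visible to the defender; threat intel helps"),
    ("Delivery", "mail gateway and proxy logs for the inbound payload"),
    ("Exploitation", "endpoint alerts, crash dumps, odd process launches"),
    ("Installation", "new services, scheduled tasks, persistence artifacts"),
    ("Command and Control", "beaconing and unusual outbound connections"),
    ("Actions on Objectives", "egress monitoring for data leaving the network"),
]

def stages_remaining(detected_stage):
    """Given the stage you caught the attacker in, list what still lies
    ahead -- disrupting any later link breaks the chain."""
    names = [name for name, _ in KILL_CHAIN]
    return names[names.index(detected_stage) + 1:]

print(stages_remaining("Command and Control"))  # ['Actions on Objectives']
```

Note that catching the attacker at command and control, as in the example, still leaves you one link to disrupt before the data walks out the door, which ties back to why egress monitoring matters so much.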

Speaker 2:

Now, the uses around this: obviously, if you understand this, you can identify and disrupt attacks at specific kill chain stages, and through that analysis you can understand the attacker's progress. Now, this all sounds hunky-dory and wonderful; that's assuming you catch them. A lot of times these guys get in your environment, and they may be past delivery and exploitation, getting into the installation phase, and they might even already be installed. Then they have command and control, where they're talking back and forth, and all six steps have already occurred, and they occurred months ago. Now what are you going to do? Right, that's an important part. Okay, that's all I have for you today.

Speaker 2:

Head on over to CISSP Cyber Training. You can get all this content; all of it's available to you, you've just got to purchase it. You can have access to it, and then all the training can help you pass the CISSP exam. There's a lot of free stuff out there too. You get my 360 free questions, as well as some of the other content that's on the blog, some of the videos and snippets of videos and so forth. You can go to YouTube and get some of this content as well.

Speaker 2:

If you're interested in cybersecurity consulting, head over to reducecyberrisk.com. If you're in the financial industry and you need some consultation around financial aspects and cybersecurity, head on over to nextpeak.net, or just send me a note. I'm happy to sit down and chat with you about that. We do a lot of great stuff with a lot of large financial organizations, so we definitely can help you with what you need.

Speaker 2:

Okay, have a wonderful, wonderful day, and we will catch you all on the flip side. See ya. Thanks so much for joining me today on my podcast. If you liked what you heard, please leave a review on iTunes, as I would greatly appreciate your feedback. Also, check out my videos on YouTube: just head to my channel at CISSP Cyber Training and you will find a plethora, or a cornucopia, of content to help you pass the CISSP exam the first time. Lastly, head to CISSP Cyber Training and sign up for 360 free CISSP questions to help you in your CISSP journey. Thanks again for listening.