CISSP Cyber Training Podcast - CISSP Training Program
Join Shon Gerber on his weekly CISSP Cyber Training podcast, where his extensive 23-year background in cybersecurity shines through. With a rich history spanning corporate sectors, government roles, and academic positions, Shon imparts the essential insights and advice necessary to conquer the CISSP exam. His expertise is not just theoretical; as a CISSP credential holder since 2009, Shon translates his deep understanding into actionable training. Each episode is packed with invaluable security strategies and tips that you can implement right away, giving you an edge in the cybersecurity realm. Tune in and take the reins of your cybersecurity journey—let’s ride into excellence together! 🚀
CCT 334: CISA and Stryker Attack and AI GRC Foundational Concepts
The fastest way to lose control of your security program is to ignore the systems that control everything else. I start with a timely CISA warning: attackers went after an endpoint management system, the kind of "one system that touches many" platform that can turn a single compromise into enterprise-wide fallout. We talk through practical hardening moves like multi-factor authentication, limiting where admins can log in from, and adding extra checks for high-impact access, because centralized management consoles are prime targets for nation-state and supply-chain attacks.
Then we pivot to the bigger wave: AI GRC (governance, risk, and compliance) in the age of artificial intelligence. AI adoption is exploding while AI governance lags, and that gap is where regulatory fines, privacy failures, and reputational damage tend to show up. I break down GRC in clear terms, explain why traditional audits and sample-based testing struggle with always-on AI decisions, and lay out what AI governance needs to add: an AI inventory, explainable AI requirements, named model owners, fairness and bias assessments, model lifecycle governance, and third-party AI risk management.
We also map the AI regulatory landscape you need to know, including the EU AI Act, the NIST AI RMF, and ISO 42001 as an emerging certifiable AI management system. From there, I walk through seven risks companies must understand: algorithmic discrimination, non-compliance, model drift, data governance and GDPR privacy exposure, black box accountability gaps, vendor and supply chain AI risk, and shadow AI from unauthorized employee tool use.
You’ll leave with an eight-step roadmap you can apply immediately, plus next actions like downloading the NIST AI RMF, running a quick AI inventory, assessing EU exposure, and updating vendor due diligence for AI. Subscribe, share this with your GRC or security team, and leave a review so more CISSP learners can find the training.
Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox! Don’t miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success.
Join now and start your journey toward CISSP mastery today!
Welcome And CISSP Training Focus
SPEAKER_00Welcome to the CISSP Cyber Training Podcast, where we provide you the training and tools you need to pass the CISSP exam the first time. Hi, my name is Shon Gerber, and I'm your host of this action-packed, informative podcast. Join me each week as I provide the information you need to pass the CISSP exam and grow your cybersecurity knowledge. All right, let's get started.
CISA Warning On Endpoint Managers
Why AI Needs Stronger GRC
The AI Governance Gap In Numbers
GRC Defined In Plain Language
Traditional GRC Strengths And Limits
What AI GRC Adds
Key Frameworks And Regulations
Seven AI Risks Companies Miss
An Eight Step AI GRC Roadmap
How GRC Pros Can Upskill
Key Takeaways And Next Steps
Where To Get More Training
SPEAKER_01Good morning, everybody. It's Shon Gerber with CISSP Cyber Training, and I hope you all are having a beautifully blessed day today. Today is Monday, and we're going to be talking about various training topics associated with the ISC2 CISSP. Today we're focused on AI GRC. As you all know, the GRC pieces of this are extremely important for any organization, but now that we're getting into AI, I thought I might as well do a little bit of training around it, because you're probably going to be asked about this on the CISSP exam in future iterations. No better time than the present to get into it. But before we get into that, we're going to talk about an article I saw in CSO magazine. This is from Howard Solomon, and it says CISA urges IT to harden endpoint management systems after a cyberattack by a pro-Iranian group. This comes out of the recent hack that occurred at Stryker. Stryker was reportedly attacked by a pro-Iranian group that got access to an endpoint management system. Now, an endpoint management system can be many different things. It could be an MDM, your mobile device management system; it could be your firewall management system; it could be anything that is one system that touches many. As an example, let's say you have firewalls, and you have a manager system that manages all of those firewalls from a central point, a central location. That system would be a prime target for this pro-Iranian group. So the bottom line is that if you have anything like this, you need to make sure you are properly protecting it and that you have multi-factor authentication in place. You may set it up so you can only log into it from certain IP addresses or certain domains. You could even require an approval process to log into the system. You really want multiple checks and balances (I'll sketch what that could look like in a moment). Why? Because realistically, if you have one system that can touch many, things can get much worse if something bad happens. The broader implication of this attack is critical infrastructure risk, and you really need to focus on how you protect your infrastructure, especially in today's world where everything is upside down. You also need to watch for the nation-state threat. I'll add a little bit of a spin on this that we didn't really talk about in the past. Most people think, is a device company really a nation-state target? The usual thought process is that nation-state attacks are intellectual-property driven. But in the situation we live in today, a nation-state will look to make any sort of statement and will go after any company just to say, yes, we did that. So in today's world, anybody is a target, whereas in the past you felt somewhat confident that nation-states might not come after you. Yeah, probably not the case anymore. Also understand the supply chain and operational impacts that go along with this. So, bottom line: if you have a system that manages multiple systems, you need to make sure it is properly protected. And if you don't know how to do that, you can go ask the LLMs out there; they'll probably give you a good idea of what you should do, at least to get started. Okay, that's all I've got on that.
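Before moving on, here's a minimal sketch of those layered checks in front of a management console. Everything here is illustrative: the networks, the `allow_admin_session` function, and the approver field are hypothetical, not from any specific product. It just shows the IP-allowlist, MFA, and second-person-approval checks stacking together.

```python
import ipaddress

# Hypothetical policy for a "one system that touches many" console:
# admins may only connect from these networks, and every session must
# present a verified MFA claim plus a named approver for changes.
ADMIN_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/24"),    # example jump-host subnet
    ipaddress.ip_network("203.0.113.0/28"),  # example VPN egress range
]

def allow_admin_session(source_ip: str, mfa_verified: bool, approver: str | None) -> bool:
    """Return True only when all layered checks pass."""
    addr = ipaddress.ip_address(source_ip)
    if not any(addr in net for net in ADMIN_NETWORKS):
        return False  # wrong source network: deny outright
    if not mfa_verified:
        return False  # no MFA, no session
    if approver is None:
        return False  # high-impact access needs a second person's approval
    return True

# A session from the jump-host subnet, with MFA and an approver, passes:
print(allow_admin_session("10.20.0.15", True, "secops-lead"))    # True
print(allow_admin_session("198.51.100.7", True, "secops-lead"))  # False
```

The point of stacking the checks is that compromising any single factor (a stolen password, a spoofed address) is not enough on its own to reach the console.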
Let's get into what we're going to talk about today. Today we're covering AI GRC, that is, governance, risk, and compliance in the age of artificial intelligence. This is an important area we all need to understand. GRC is a huge deal out there, and we've talked about the fact that there are lots of opportunities and jobs in it for you. The big difference is that you're going to have to understand what you're talking about, and this is a good way to freshen up what you may or may not know around AI GRC. So, what are some of the stats? We're talking about the governance gap here, and I feel that's a big factor in all that we're doing. First, where are the statistics and where's the gap? Well, 72% of all organizations in the world are using AI in at least one business function. That's a substantial jump from just a few years ago. But only 18% have fully implemented some sort of AI governance framework. What does that tell you? It means more than 80% have not. That's a total inversion of where we're at: most organizations are deploying AI, and very few have any sort of guardrails on how it should be deployed and how it should be run. That's a huge gap. And the EU has now put fines in place specifically around AI governance. If you don't have a good handle on your AI space, you're looking at fines of up to €35 million or 7% of your global annual turnover, whichever is higher. That's a big chunk of change. Now, this hasn't happened in the United States yet, but it is in place in the EU. The EU AI Act began enforcement in February 2025, and full enforcement for high-risk AI systems starts in August 2026. So organizations that treat AI governance as a future problem are already not compliant; you need to consider that. And as we see this inversion, this is why I mentioned there's a great opportunity for you in the GRC space if you're in cyber. Then, in August 2027, the high-risk rules extend to AI embedded in products, meaning AI built into an operating system or any sort of device, and that includes medical devices, cars, and so on. So they're putting a plan in place where you're going to have to have all of these capabilities built into your systems if you're going to put a product on the market. That is a huge deal. As you can see, the statistics don't lie.
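To put a number on that "whichever is higher" fine structure, here's the quick math, sketched in Python:

```python
def max_eu_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious (prohibited-practice) violations:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# For a company with EUR 2 billion turnover, the 7% figure dominates:
print(f"{max_eu_ai_act_fine(2_000_000_000):,.0f}")  # 140,000,000
```

So for any company with global annual turnover above €500 million, the 7% figure is the one that bites, not the €35 million floor.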
Okay, so let's set the foundation: what is GRC? This is a quick level set, because maybe you hear the term but don't truly understand what it is. Those who understand it completely can tune out for a few minutes. Governance, the G part of this, is the policies, accountability structures, and decision-making frameworks that steer your organization toward a specific goal. Risk is identifying, assessing, and mitigating the risks or threats, whether financial, operational, cyber, or legal, that jeopardize the objectives you've set up through governance. And compliance is adhering to laws, regulations, and industry standards, based on the policies you've created, to avoid penalties and to build trust. So that's your GRC: governance, risk, and compliance. Traditional GRC has worked very well in the past, but let's talk about some of its strengths and some of its limitations in this new world we live in. What are the strengths? You have mature frameworks: COSO, ISO 31000, the NIST Cybersecurity Framework, SOX, GDPR; there are lots of different things out there to help you with this program. You have clear ownership and accountability chains in place; you know this because you've built it and it's understandable. You have proven audit methodology, and this is a big factor: there are a lot of people out there who can audit and who understand control testing. There's a standardized risk taxonomy across industries, set up specifically around this overall GRC piece. And there are board-level reporting structures and a governance culture. So there's a plan in place for how you'd handle GRC in a manufacturing space, in a medical space, in a financial area. It's all there, it's defined, it's well thought out. Now the limitations, though, make things a little different when you're dealing with AI. Periodic audits can't keep pace with continuous AI decisions; it's happening too fast, and as your developers make changes to your AI, that complicates the whole audit process even more. Manual, sample-based testing misses many of the outputs AI produces at scale, because it's all manual; there's no automation around it, so it's not truly understood. Traditional risk frameworks weren't built for algorithmic risk (basically, the math) or for bias risks; they're not set up for that. There are no native templates for model drift or fairness assessments; nothing like that exists in them today. And they have no way to embed governance checks into ML pipelines the way security checks get embedded into CI/CD pipelines (I'll sketch what one could look like in a moment). So again, you're on the cusp of something new. GRC programs have been built up over years and years, and now an entirely new technology is in play that over 70% of organizations are utilizing.
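Just to make that last limitation concrete, here's a rough sketch of what an embedded governance check could look like as a pre-deployment pipeline gate. The `ModelCard` fields and the rules are assumptions for illustration, one possible shape rather than any standard:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical governance metadata attached to every model release."""
    name: str
    owner: str = ""               # named human accountable for the model
    risk_tier: str = ""           # e.g. "high", "limited", "minimal"
    bias_assessment_done: bool = False

def governance_gate(card: ModelCard) -> list[str]:
    """Return the list of failures; an empty list means the release may
    proceed. This is the kind of check a CI/CD stage could run before
    a model is deployed."""
    failures = []
    if not card.owner:
        failures.append("no named model owner")
    if not card.risk_tier:
        failures.append("risk tier not classified")
    if card.risk_tier == "high" and not card.bias_assessment_done:
        failures.append("high-risk model missing bias assessment")
    return failures

card = ModelCard(name="credit-scoring-v3", risk_tier="high")
print(governance_gate(card))
# ['no named model owner', 'high-risk model missing bias assessment']
```

The design idea is the same as a failing unit test: the release is blocked automatically until the governance metadata exists, instead of being caught months later in a periodic audit.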
So what is AI GRC? AI GRC is the process of managing AI systems to align with your business goals, mitigating the model and algorithmic risks (all the math that goes into it) and the potential operational risks, while managing the evolving AI-specific regulations and standards associated with it. It's truly an intersection of technology, law, ethics, and enterprise risk; all of that comes together in one area. You've got algorithmic governance, which is oversight of AI models from design through to retirement. And this is probably not getting done; I'd say most people are just creating the math and letting it run. You've got fairness and accountability: bias assessments and explainable AI decision trails. You need to have those brought out. Is there regulatory compliance tied to this, such as the EU AI Act that's in place, and how do you become compliant with it? And how are you monitoring overall model performance and surveilling the risk associated with it? All of those pieces are built into an AI GRC program, and you're going to have to understand how to do this stuff. It's not something that comes out of the box. So, what does AI GRC add to the toolkit? First, an AI system inventory: catalog all the models in use, their purpose, their data sources, and their risk classifications (there's a small sketch of one after this list). You have explainability requirements: you build these so you can trace and justify how the AI made its decisions to regulators and to customers. You look at algorithmic accountability: who is the human providing oversight on high-stakes AI decisions? Who is the owner? We talk about this on CISSP Cyber Training all the time: do you have data owners, and do you have a model owner? There are fairness and bias assessments: are you running these assessments, and how do you ensure the system is fair and the bias is minimal? You have model lifecycle governance: oversight from design and training through deployment and on to retirement. And then there's third-party AI risk: governance of purchased or integrated AI tools. One more aspect to consider here is third parties that are themselves using AI; that's another thing you'll need to account for when you're building out your overall GRC plan.
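As promised, here's one minimal way an inventory entry could be structured. The field names and example systems are hypothetical; the point is simply that every model gets a catalogued purpose, data sources, named owner, and risk tier:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One catalogued AI system: what it is, what it does, what feeds it,
    who owns it, and how risky it is."""
    system_name: str
    purpose: str
    data_sources: list[str]
    model_owner: str           # the named human accountable
    risk_classification: str   # e.g. an EU AI Act tier: high/limited/minimal
    third_party: bool = False  # vendor-supplied vs. built in-house

inventory = [
    AIInventoryEntry("resume-screener", "rank job applicants",
                     ["hr_applications"], "hr-analytics-lead", "high"),
    AIInventoryEntry("support-chatbot", "answer customer FAQs",
                     ["public_docs"], "cx-platform-lead", "limited",
                     third_party=True),
]

# The inventory immediately answers governance questions like
# "which systems need the high-risk treatment?"
print([e.system_name for e in inventory if e.risk_classification == "high"])
# ['resume-screener']
```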
Now, the AI GRC regulatory landscape: let's talk about what's out there and available to you. There's the NIST AI RMF, the AI Risk Management Framework. It was released in 2023, and a Generative AI Profile, NIST AI 600-1, was added in July 2024, so it's coming up on a couple of years since its last update. It's voluntary, but it's used in US federal procurement standards and systems. It has four functions: govern, map, measure, and manage. Now, people I've talked to about the RMF say it's good to at least have something, but there are areas where it has gaps. So know that going in; as these frameworks come out, they're going to need some work too. The EU AI Act, as we discussed, is the first legally binding AI regulation out there, and it's built around four risk tiers: prohibited, high, limited, and minimal. We've talked about the fines associated with it, and full enforcement for high-risk systems occurs in August 2026. Then there's ISO/IEC 42001, the AI management system standard, the world's first certifiable AI management system. It bridges the AI RMF and the EU AI Act in one framework, and it's increasingly required by financial regulators, who want something that spans those systems. And as we all know, financial institutions and the regulations that govern them cross country borders, which is why financial regulators see it as so viable. It's certifiable, and it's a strong market differentiator among the three frameworks we've covered. So it's something to put in your toolbox, and to consider which one you want to start studying and getting smart on if you aren't already. In this segment, we're going to get into the seven big risks companies must understand, from algorithmic bias to shadow AI. Risk one: algorithmic bias and discrimination. This is where AI trained on flawed data replicates those biases at machine speed. It goes out there and throws out things that are not correct. We've all seen it say, yes, this is correct, and a little later you go, yeah, no, that's not right. There are a lot of areas this can affect: hiring, credit scoring, healthcare, criminal justice, all with big impact. You want to make sure they're correct, because the legal exposure around all of this can be substantial. So it's imperative that you have a good plan in place to mitigate it: fairness impact assessments, diverse training data audits, and explainability tooling. That's a risk you're going to have to consider if you're deploying AI within your company and within your organization. Risk two: regulatory non-compliance. Only 18% of enterprises have fully implemented AI governance frameworks (we talked about that), while most are deploying AI. If you don't comply, it's going to get expensive. US sector regulators, including the CFPB, FDA, EEOC, and FTC, are all looking at how to apply existing law to AI. You need to map AI use cases to your regulations and run a gap analysis against the AI RMF when you're looking at mitigation techniques; that's a big factor in what you're trying to accomplish. And the EU AI Act applies to any company whose AI systems affect EU residents, regardless of where the company is headquartered. Very similar to GDPR, right? They're taking the same approach and saying: if you want to do business here, take ownership of this and take a good look at it, or you're going to pay. Risk three: model drift and reliability failures. This is where AI models degrade as real-world conditions diverge from the training set. Unlike traditional software, AI is, as they say, living. It's not living, but it is a morphing system, and it requires governance post-deployment. Undetected drift in fraud detection or any sort of clinical AI could potentially cause serious harm, so you need a good plan for how you're going to manage it. Mitigating steps include automated monitoring, defined retraining triggers, and post-market surveillance; there's a small sketch of a drift check below. For example, an AI credit model trained on pre-2020 data may perform dangerously when economic conditions shift, which we're seeing all the time, if there's no automated monitoring. Again, that's the problem: you train it on one data set, which is great, but you don't keep updating it, and that can cause all kinds of challenges.
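Here's that small sketch of an automated drift check with a defined retraining trigger. I'm using the population stability index (PSI) over score buckets as the drift measure, a common choice but not one mandated by any framework, and the 0.2 threshold is a rule of thumb, not a regulatory number:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population stability index between two bucketed distributions
    (each list holds the fraction of scores falling in each bucket)."""
    eps = 1e-6  # avoid log(0) on empty buckets
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

# Score distribution at training time vs. in production this month:
training_dist   = [0.10, 0.20, 0.40, 0.20, 0.10]
production_dist = [0.05, 0.10, 0.30, 0.30, 0.25]

drift = psi(training_dist, production_dist)
RETRAIN_THRESHOLD = 0.2  # common rule of thumb: above 0.2 = significant shift
if drift > RETRAIN_THRESHOLD:
    print(f"PSI={drift:.2f}: significant drift, trigger retraining review")
```

Run on those example numbers, the PSI comes out around 0.31, well past the threshold, which is exactly the kind of signal a defined retraining trigger should fire on.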
Risk four: data governance and privacy violations. AI requires vast data sets that may include sensitive or legally protected information, and you have to make sure that isn't the case. I've seen it many times where people have uploaded intellectual property into AI, and that can cause all kinds of legal drama for you and your company. The EDPB, the European Data Protection Board, clarified in December 2024 that AI models trained on personal data remain subject to GDPR. See, we keep adding acronyms to this environment, and it's only going to tangle up the web we're weaving. Generative AI creates new exposure too: training data memorization and PII leakage in outputs. All of those feed into the data governance and privacy violation piece. And mitigation? How do you deal with these privacy issues? Privacy by design (GDPR Article 25), DPIAs, and data lineage tracking. All of those are important for managing the data governance and privacy aspects involved. And note what that EDPB clarification means: models trained on personal data are not automatically anonymous. GDPR is focused on anonymity (there's a ten-dollar word), on data not being attributable to a known person. So that's data governance and privacy violations. Risk five: lack of explainability, and the accountability gaps that come with it. Black-box AI makes it impossible to justify decisions to regulators or courts, because nobody really knows what goes on inside it; it could be squirrels and magic fairies doing everything. The EU AI Act mandates human oversight for high-risk AI decisions, and accountability voids emerge when AI recommendations are acted on without review. Most people you talk to just say, yeah, it's a robot, it works. But having a good understanding of how the algorithm works, how it actually ends up doing what it does, and what its biases are: those are all things you as a GRC person are going to have to understand. So what are some mitigations? You have explainability tooling, decision audit trails (there's a small sketch of one below), and human-in-the-loop checkpoints to ensure the AI is providing the information you need. If your AI denies a loan application and the regulator asks why, "the model decided it" is not going to work. You have to be able to explain why the model made that decision and what factors went into it. Risk six: third-party and supply chain AI risks. Many organizations deploy AI through vendors: SaaS products, open-source models, all of that. Well, traditional vendor risk management wasn't designed to assess AI-specific risks. It wasn't. So third-party risk is a big deal, especially when it comes to AI. Organizations can be held liable under the EU AI Act for third-party AI risks, and yeah, that's hard. You're going to have to come up with a plan: an AI-specific vendor questionnaire and contractual governance requirements will need to be filled out and planned for within your organization. So again, I come back to this GRC requirement. It's a huge factor.
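Here's the decision-audit-trail sketch I promised under risk five. The record fields are hypothetical, and a real system would write to tamper-evident storage rather than printing; the point is that every automated decision carries its inputs, outcome, top factors, and whether a human reviewed it:

```python
import json, time, uuid

def record_decision(model_id: str, inputs: dict, outcome: str,
                    top_factors: list[str], reviewer: str | None) -> dict:
    """Append-only audit record for one automated decision."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "inputs": inputs,            # what the model saw
        "outcome": outcome,          # what it decided
        "top_factors": top_factors,  # explainability output for this decision
        "human_reviewer": reviewer,  # None = no human in the loop
    }
    print(json.dumps(record))        # stand-in for a real audit sink
    return record

record_decision("credit-model-v7",
                {"income": 42000, "debt_ratio": 0.61},
                "denied",
                ["debt_ratio above 0.55", "short credit history"],
                reviewer=None)
```

With records like this, the answer to "why was this loan denied?" is a retrievable trail instead of a shrug, and a `human_reviewer` of `None` on a high-risk decision is itself a finding.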
Now, if you can put on your shingle that you understand AI and the overall pieces related to GRC, you are setting yourself up for a good position. The bottom line: you can be fined for your vendors' AI risks, and the EU AI Act holds you responsible. So you want a good plan; you'd better start thinking about it and put an assessment in place, especially if you're dealing with the EU. And this is only going to spread from the EU to other jurisdictions as well. Risk seven: generative AI and shadow AI. Employees using unauthorized AI tools create invisible compliance risks. Yes, you have folks out there putting AI in place without really talking to leadership, and they're utilizing the tools, and then all of a sudden you have a problem. Customer PII and proprietary IP may be exposed through unmanaged AI usage, and hallucinations can be acted on as fact, creating legal risk for you and your company. So what do you do about it? The mitigation starts with AI acceptable use policies: you have paper saying if you use it, this is what's going to happen and this is what you have to do, and it holds people accountable. Now, will a piece of paper stop people from doing it? No, it will not, but it at least gives you another piece of fodder to help you in this situation. Utilize tools to discover how AI is being used within your organization and what kind of data is going to AI-specific websites (there's a small sketch of that after the roadmap), and put generative-AI-specific controls in place. Shadow AI is the new shadow IT, except the data exposure risk is orders of magnitude higher, especially when employees paste contracts into consumer AI tools. Yes, it's going to be huge, it really is, and you're going to have to put something in place to help mitigate this risk. So now that you've heard all this doom and gloom, what do we do? What is the plan? Here's a roadmap you can take: eight actionable steps you can take right now as a practical AI GRC roadmap. One, build your AI inventory: understand what you actually have out there in your organization being used. Two, understand the risk levels: apply the EU AI Act's four-tier model, or whichever one you want, but understand what risk levels you're dealing with. Three, conduct a gap analysis: understand the AI RMF and the other frameworks, and determine where your gaps are. Four, form an AI governance body: it's important to have legal, compliance, IT risk, and the business units all involved to help you build it. Five, enable continuous monitoring: watch what's going on. You may have to purchase tools, and/or modify and manage the ones you have, to look for various levels of AI use, but you're going to want some level of monitoring. Six, address third-party AI risk, both the AI risk coming from your vendors and the third-party AI you consume; you're going to want to understand your third-party risk in this space. Seven, have an AI acceptable use policy: define governance for employee use of consumer and generative AI tools, and make sure everybody knows what is expected of them. And eight, train your workforce: AI awareness for leaders, developers, and the general staff.
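And here's the shadow-AI discovery sketch mentioned under risk seven (it also fits roadmap step five). The domain watchlist and log format are invented for illustration; real discovery would come from your proxy, CASB, or DNS logs:

```python
from collections import Counter

# Illustrative watchlist of consumer AI endpoints; a real program would
# maintain this from threat intel or CASB URL categories.
AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Pretend proxy log lines: "user domain bytes_uploaded"
proxy_log = [
    "alice chat.openai.com 18000",
    "bob intranet.example.com 200",
    "alice claude.ai 523000",   # large upload: possible pasted document
    "carol chat.openai.com 900",
]

hits = Counter()
large_uploads = []
for line in proxy_log:
    user, domain, size = line.split()
    if domain in AI_DOMAINS:
        hits[user] += 1
        if int(size) > 100_000:  # arbitrary review threshold
            large_uploads.append((user, domain, int(size)))

print(hits)           # who is using unsanctioned AI tools, and how often
print(large_uploads)  # sessions worth a data-exposure review
```

Even a crude pass like this turns "we think people are using AI" into a named list of users and a shortlist of sessions where real data may have left the building.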
Again, this is not all-encompassing, but these are eight practical steps. If you don't have a plan in place now, use them as a guidepost, something to help you as you're deploying AI within your environment. GRC professionals have a head start, and I say this because if you're already in the GRC space, giddy up: you're already there. Your existing skills will transfer very well. Consider: you already understand risk assessments, you already understand policy development, and you understand audit and board-level reporting. You've got the risk management piece down because you understand vendors well in this space, and you do have a level of compliance monitoring in place. These are important parts of what you do in any GRC program. Now, what do you do with that toolkit? You transfer it. Get into the AI/ML fundamentals and understand how the models work. This isn't rocket science, and I say that as someone who couldn't code much of this myself, but if you understand how the models work, you can explain them to people. Look at NIST AI RMF or EU AI Act certification or training, and look at the ISO standard too, and determine how you can bring those into your organization. Learn model cards, data lineage, and fairness metrics, and understand how those terms are used (there's a small sketch of a fairness metric after this list). Understand explainable AI (XAI) concepts and the tooling that goes with them. Then there are the GRC platform AI modules, such as ServiceNow and MetricStream; ServiceNow has a really good AI module, and so does Salesforce. There are lots of good programs out there. And finally, generative AI governance and prompt risk controls: make sure you understand those well enough that you can deploy them and explain them. So again, you have a good foundation if you're already a GRC professional. If you're not, just get smart on all these things; you've dealt with them in some form or fashion, so now put it all together. It'll help you out immensely in the future.
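Here's that fairness-metric sketch: the disparate impact ratio, one of the simplest metrics, which divides one group's selection rate by the reference group's. The outcome data below is made up; the metric itself and the four-fifths (0.8) screening threshold are standard in US employment contexts:

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive outcomes (1 = selected/approved)."""
    return sum(outcomes) / len(outcomes)

# Hypothetical approval outcomes for two applicant groups:
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # reference group: 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # comparison group: 37.5% approved

ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio = {ratio:.2f}")  # 0.50
if ratio < 0.8:  # the four-fifths rule of thumb
    print("below the four-fifths threshold: flag for a bias assessment")
```

A ratio of 0.50 like this doesn't prove discrimination by itself, but it's exactly the kind of screening result that should kick off the fairness and bias assessment discussed earlier.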
So here are some key takeaways. AI GRC is an extension, not a replacement: it needs to be built upon your traditional GRC, so don't throw your foundation away and start over. It's there, and it's solid. The regulations are here now, and you need to plan for them. If you don't have an overseas or EU presence, that's fine, but guess what: it's coming to America. It's just a matter of time. Inaction is the biggest risk, so take action now, even if the actions are incremental. Do not wait until you're three, four, five years down the road saying, "I wish I would have done this." Start now. And generative AI demands new controls: shadow AI and hallucination risks require a new class of governance controls beyond traditional GRC, so you're going to have to consider that. But I'd recommend you start small, not big, and work from there. So what are some next steps you can take right now, after listening to this or watching this video? Download the NIST AI RMF. It's free at NIST.gov, so go get it, especially the generative AI profile. Conduct a quick AI inventory of what you've got going on, and ask IT what AI tools are currently in use today; they may not know all of them, but they should have a good handle. Assess your EU AI Act exposure: do you have anything in the EU you need to be worried about? If not, that's fine; set that aside. Explore ISO 42001 certification as a potential market differentiator. If you're going to be in this space, maybe you should get certified. Now, we all know ISO certifications are not inexpensive, and they can be very costly, but in this situation it could be a market differentiator between you and your competitors. And the last thing: update your vendor due diligence questionnaire with AI-specific criteria, because guess what, they're doing that to you. So you're going to need to update yours to cover the AI-specific aspects involved. AI GRC is where the future of enterprise risk management is being written, and you really, truly need to understand and grasp this concept. Thank you so much for listening today; I hope you got a lot out of this. I don't know everything that's out there on the web around this, but having dealt with GRC programs in the past, I saw a need: how do we set ourselves apart with the AI piece? This is a really good training to point you in the right direction for AI GRC within your organization. So check it out. If you're interested in more of this content, head on over to CISSP Cyber Training, where I have all of this available to you, and check out my YouTube channel; there's lots of great content available there as well. So again, go check out CISSP Cyber Training. Have a wonderful, wonderful day, and we'll catch you on the flip side. See ya. Thanks so much for joining me today on my podcast. If you like what you heard, please leave a review on iTunes, as I would greatly appreciate your feedback. Also, check out my videos on YouTube: just head to my channel at CISSP Cyber Training, and you will find a plethora, or cornucopia, of content to help you pass the CISSP exam the first time. Lastly, head to CISSP Cyber Training and sign up for 360 free CISSP questions to help you in your CISSP journey. Thanks again for listening.