CISSP Cyber Training Podcast - CISSP Training Program

AI Poisoning the Quiet Enterprise Threats and CISSP Questions (Domain 1)

Shon Gerber, vCISO, CISSP, Cybersecurity Consultant and Entrepreneur | Season 3, Episode 346




Quiet failures are the ones that scare me most, and enterprise AI creates a brand-new way for them to spread. If a chatbot becomes the “trusted employee” everyone relies on, a slow drip of bad documents, outdated procedures, or deliberately manipulated data can poison decisions for months without a single red flag. We break down what that looks like in real organizations, why it differs from the Hollywood version of a hack, and how the business impact shows up as confident misinformation rather than obvious outages.

We also dig into the difference between data poisoning (deliberate manipulation) and data pollution (accidental garbage at scale), then connect it to retrieval augmented generation (RAG). RAG is powerful because it answers from your internal knowledge base, but that same knowledge base becomes the attack surface and the “source of truth” the model won’t question. I share practical steps you can take right now: audit what your AI actually trusts, map the full AI contact surface across workflows and repositories, treat the AI pipeline like an untrusted vendor, and assign a named owner for accuracy and security.

Then we shift into CISSP Domain 1 practice with exam-style questions that force real trade-offs: using annual loss expectancy (ALE) to recommend a risk treatment to the board, applying NIST RMF guidance even when controls are inherited through FedRAMP, handling an ethics dilemma under the ISC2 Code of Ethics, spotting the biggest BCP gap when RTO and RPO targets collide with backup frequency, and explaining why HIPAA compliance does not automatically equal GDPR compliance for EU citizen data.

If you’re studying for the CISSP or you’re building security controls around AI and cloud systems, this one is built to sharpen both your judgement and your test readiness. Subscribe, share this with a friend who’s deploying AI internally, and leave a quick review so more CISSP candidates can find the show.

Gain exclusive access to 360 FREE CISSP Practice Questions at FreeCISSPQuestions.com and have them delivered directly to your inbox!  Don’t miss this valuable opportunity to strengthen your CISSP exam preparation and boost your chances of certification success.

Join now and start your journey toward CISSP mastery today!

Welcome And Domain One Focus

SPEAKER_00

Welcome to the CISSP Cyber Training Podcast, where we provide you the training and tools you need to pass the CISSP exam the first time. Hi, my name is Shon Gerber and I'm your host for this podcast. Join me each week as I provide the information you need to pass the CISSP exam and grow your cybersecurity knowledge. All right, let's get started.

The Quiet Threat Of AI Poisoning

Poisoning Versus Pollution In Data

How 250 Documents Can Corrupt

RAG Pipelines As The Attack Surface

Practical Controls For Trusted AI

CISSP Cohort And Study Plan

Quant Risk Treatment With ALE

NIST RMF And FedRAMP Reality

Ethics When Management Hides Breaches

BCP Gaps With RTO And RPO

HIPAA Does Not Equal GDPR

Wrap Up And Next Steps

SPEAKER_01

Good morning, everybody. It's Shon Gerber with CISSP Cyber Training, and I hope you all are having a beautifully blessed day. Today is Thursday, and we're gonna be getting into CISSP questions related to Domain 1. So we are pretty excited about that. It's gonna be some deep-dive aspects that you are going to want to focus on, and you can get specific details around it at CISSP Cyber Training. But before we get into the CISSP questions for the day, I actually had an article that really piqued my interest. Obviously, I'm getting a little more in depth on the AI piece, because in reality I feel strongly that this is going to be a game changer. We all are seeing it. The question is, how do we embrace it from a security standpoint? So this is out of CSO magazine, by Cynthia Brumfield, a contributing writer there, and it's called The Poisoned Truth: The Quiet Security Threat Inside Enterprise AI. Now, back when I was in security, one of the things I wanted people to focus on is that attackers don't always go for the big breaches, right? When I was a hacker, we didn't look for the things that would cause a huge breach within an organization. We would look for the small aspects, for different ways we could get information or create disinformation. And that's why this is such a big deal. So let's set the stage for what we're going to be talking about related to AI being poisoned. We're not talking about a Hollywood-style hack, where alarms go off and things are encrypted and people go running out the door. We're talking about an AI's understanding of reality being corrupted quietly and silently. It's not a case where all of a sudden everything tips over; it's an issue where something has happened under the water, under the covers. An important way to think about this: say you have a highly trusted employee who's really experienced, well respected, right? And then over six months, somebody's been feeding them bad information, wrong policies, outdated procedures, manipulated data. They're not actually hacked, right? They're not in a situation like that, but they're getting the wrong information. And when this happens, the decisions they're making about their controls, their financial approvals, their procurement, their security operations, all of those seem totally normal, because they've been slowly fed bad information. But what ends up happening is they've been poisoned, and the enterprise has been poisoned, because the AI chatbot that you're using is now poisoned. Again, no alerts, no red flags, just a system that has quietly started getting the wrong information. Now, this is where arsenic comes into play, right? If you wanted to kill somebody with arsenic, and I'm not saying you should do that, it's a bad thing, don't do that, but if that were your purpose, you would do it in small doses, a little at a time. You wouldn't do it all at once, because the body would react very badly to that. But if you did it slowly over a long period of time, it would kill a person, right? So that's the thing. So, poisoning versus pollution.
Now, there's something important you need to understand, because the experts in this article draw out a clear distinction that many people miss. There are actually two versions of this problem, okay? The first is poisoning. This is the intentional version, where somebody comes in and deliberately plants false or manipulated data to change how the AI thinks and behaves, right? So that's the poisoning piece. The second one is pollution, and this is the accidental version, and this is probably the one that's going to hit most organizations, because they have lots and lots of data that may not necessarily be correct. As an example, when I was at a very large multinational organization, we had gobs and gobs of data. So if you grab all that data and throw it into your overall engine, what's it gonna do? Well, it could create a lot of problems, because some of that data is so old and outdated that it would actually start poisoning your data streams for that very reason. Now, Gary McGraw from the Berryville Institute of Machine Learning puts it this way: the difference between poisoning and pollution is simply intent. Right? Poisoning is deliberate; pollution is just garbage that got into the system. And again, I go back to the fact that I think most organizations are going to accidentally put the garbage in, or they're gonna say, hey, we've got to get this thing up and running, let's just dump everything in there and let AI figure it out. So here's where things get really sobering. You might be thinking, okay, even if an attacker did want to poison our AI, they would need massive resources, right? They would need to breach the AI vendor or compromise the training pipeline directly, right? Well, research from Anthropic, the UK AI Security Institute, and the Alan Turing Institute found that just 250 maliciously crafted documents can corrupt a large language model, regardless of the model's size. So, 250. That's not a lot. If you've been in any organization, any enterprise, for any period of time, you know that 250 documents are like sitting on the street corner. They're everywhere, right? So this is the part that's not so good. If attackers know that the model scrapes Wikipedia every other week, they could just plant bad content inside that window and wait. We've talked about it before: from a hacker perspective, it's all about timing, and it's all about waiting. You take your time, you don't get in a hurry, because the moment you get in a hurry is when you make mistakes. So if they really wanted to do this, there is the potential to make it happen. So how could this play out inside your enterprise? A poisoned support document could cause your customer service AI to quietly start leaking sensitive customer information. A manipulated approval workflow could nudge your finance AI toward a fraudulent payment. These are the things that can get really out of control pretty quickly. So you need to consider how you're deploying your AI and have the right people doing this within your organization. Now, the article also got into RAG, and to be perfectly transparent with everybody, I wasn't really sure what the heck that meant at first. You've probably heard the term RAG pipeline. RAG stands for retrieval augmented generation, and it sounds very technical, but the concept is very simple.
And it is. Once I dug into it, I was like, oh yeah, okay, that makes more sense. So think about it this way: a standard AI model is like a really smart employee who studied everything they could before their first day on the job. They read millions of documents, absorbed enormous amounts of information, and then they got locked in a room with no internet access, no file updates, no new information whatsoever. They can only answer based on what they already know from their training and the time they spent in the closet. That'd be very painful. Lots of pizza maybe would be good, but that'd be painful. A RAG pipeline changes that completely. It basically gives the employee a filing cabinet, or some sort of repository, that they can search in real time before answering the question. So you're giving them a base level of knowledge. For example: what is our current password reset policy? Instead of guessing based on knowledge it may have sucked in from everywhere, the AI goes and pulls the actual internal policy documents, reads them, and gives you an answer based on your current, specific data. This is great, because it gives the AI relevant data without having to retrain the entire model to do so. But there's a problem, right? There's always a problem. The RAG pipeline is the AI's source of truth. Whatever's in that filing cabinet, the AI believes. It doesn't question it, it doesn't verify it, it just reads it and responds. This is how you can make your own AI bot, right? The problem, though, is if that filing cabinet has been deliberately tampered with by an attacker, the AI is going to confidently serve up the wrong answers, because those are the answers that it has, and it's gonna say, I'm confident. That is the problem. The filing cabinet is your biggest attack surface. That's what they're gonna go after: your RAG. So again, this is an interesting part of this whole article that I think is really, really cool. Okay, so let's wrap this all up. What should you actually do? We know they're seeing this in the wild right now; it's happening. First thing you need to do is audit what your AI actually trusts. Before you do anything else, you need to be able to answer three questions: What data sources is my AI pulling from? Who controls those sources? And are they accurate and current? Most organizations cannot answer all three. That's the first thing. Second thing, map your entire AI contact surface. Don't just look at the model itself. Think about every place it touches: your RAG pipeline, your knowledge base, your agent workflows, all of those things you need to consider. Third thing is treat your AI pipeline like an untrusted third-party vendor. I love that part. It's a very important piece. Apply the same scrutiny you'd apply to any vendor in your supply chain. Assume it can be compromised. Build controls accordingly. Do not give it blind trust just because it's internal. Love that one. Big one. And finally, the fourth one, and this one's a large one, and it goes with anything we talk about at CISSP Cyber Training: you need to make somebody accountable, right? Whether it's IT, security, or data governance, there needs to be somebody accountable who's handling it, right? Until you put a named owner in place responsible for the accuracy and security of the data the AI reads, and you have governance that covers the gaps, it's just a problem, right?
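To make that filing-cabinet idea concrete, here is a toy sketch of a RAG-style lookup in Python. The document store, the keyword matching, and the answer wording are all made up for illustration; a real pipeline would use vector search and an actual LLM call, but the trust problem is the same: whatever the retrieval step returns, the model treats as truth.

```python
# Toy retrieval-augmented generation (RAG) flow: retrieve internal documents
# first, then answer from them. Everything here is illustrative only.

knowledge_base = {
    "password-reset-policy": "Passwords must be reset every 90 days via the IT portal.",
    "expense-policy": "Expenses over $500 require director approval.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval standing in for a real vector search."""
    words = set(question.lower().split())
    return [text for doc_id, text in knowledge_base.items()
            if words & set(doc_id.replace("-", " ").split())]

def answer(question: str) -> str:
    context = retrieve(question)
    # A real system would send `context` plus the question to an LLM.
    # Key point: the model does not verify the context, it just uses it.
    if not context:
        return "No relevant internal documents found."
    return f"Based on internal documents: {' '.join(context)}"

print(answer("What is our current password reset policy?"))
# If an attacker quietly edits the knowledge_base entry, the answer changes
# with the same confident tone, and no alert fires.
```

Notice that nothing in this flow checks who last modified the document or whether it is still current, which is exactly why the knowledge base becomes the attack surface.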
So you need to make sure you have that in place. Again, great article. This is from CSO magazine, by Cynthia Brumfield: The Poisoned Truth, the Quiet Security Threat Inside Enterprise AI. Okay, so let's get into the questions we're gonna talk about today. Before we do, a quick shout-out to CISSP Cyber Training. Head on over to CISSP Cyber Training if you want to get your CISSP in eight weeks and you want someone to help you with this whole process. Look at my cohort. I've got a cohort starting up on July 7th of this year, and we're gonna be going for eight weeks from that start date. We've got a plan every single week. We're gonna have conversations, and we're gonna walk you through it. It's all part of the self-study program. If you're a person who wants to do self-study, you can't afford a $10,000 boot camp, you want to get the CISSP passed, but you don't know what to do, well, I've got a plan to help get you through this. The ultimate goal is to help you pass the CISSP, and this cohort comes along with detailed self-assessments. I've got tests, I've got questions, we've got reading; all of those things are available to you by signing up for my cohort. If you do that soon, there's early bird pricing. If you sign up soon and get on the waiting list, you'll have access to me and the cohort at a reduced price. So go check it out at CISSP Cyber Training; there's a banner across the top. Just click on that and it'll walk you through all the details, including a video on what you can actually expect to see. All right, so let's get into the questions. Question one: a CISO must present a risk treatment recommendation to the board for a critical ERP vulnerability. Patching will cost $400,000 and requires 72 hours of system downtime. The asset is valued at $8 million. Threat intelligence indicates a 30% annual probability of exploitation given current exposure. A compensating control, basically a WAF rule plus enhanced monitoring, costs around $50,000 annually and reduces the exposure probability to around 8%. Ignoring qualitative factors, which treatment option produces the, air quotes, best risk-adjusted outcome? Okay, so let's talk about the money side of the house here, because this is going to help define the answer to this question. This requires quantitative risk analysis. Your ALE before the control, your annual loss expectancy, starts with the $8 million asset value. You multiply that by 0.3, which is your annualized probability of exploitation. So 0.3, or 30%, gives you $2.4 million a year. So your annual loss expectancy before the control is potentially $2.4 million a year. Now you add the control. For the ALE after the compensating control, you take the $8 million times 0.08, which is your 8%, and that comes out to be $640,000 a year. So before the control it was $2.4 million; after the control it was basically $640,000. The reduction is about $1.76 million a year. Your control costs you $50,000, but it's reducing your annual loss expectancy by approximately $1.76 million a year. So your net benefit is around $1.71 million. What that basically comes down to is: it's costing you $50,000, and you're getting about $1.76 million in reduction of your annual loss expectancy. So therefore, it's a good choice.
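If you want to see that arithmetic laid out, here is a minimal sketch of the ALE comparison, assuming only the figures quoted in the question:

```python
# Annualized Loss Expectancy (ALE) comparison for the compensating control,
# using the figures from the question.

asset_value = 8_000_000        # ERP asset value ($)
aro_before = 0.30              # annual probability of exploitation, no control
aro_after = 0.08               # annual probability with WAF rule + monitoring
control_cost = 50_000          # annual cost of the compensating control ($)

ale_before = asset_value * aro_before        # $2,400,000 per year
ale_after = asset_value * aro_after          # $640,000 per year
risk_reduction = ale_before - ale_after      # $1,760,000 per year
net_benefit = risk_reduction - control_cost  # $1,710,000 per year

print(f"ALE before control: ${ale_before:,.0f}")
print(f"ALE after control:  ${ale_after:,.0f}")
print(f"Risk reduction:     ${risk_reduction:,.0f}")
print(f"Net benefit:        ${net_benefit:,.0f}")
```

A $50,000 control that buys roughly $1.76 million of ALE reduction is the clear winner on purely quantitative grounds.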
So if you know that going into it, then accepting the risk, answer A, doesn't make any sense, right? You're not gonna accept the risk of a $2.4 million a year annual loss expectancy. Patching immediately potentially eliminates the vulnerability, but you have to understand whether you can even take that critical ERP system down. The patching is gonna cost you $400,000 and requires 72 hours of downtime, and that is something you may or may not have the ability to do. So then let's look at transferring the risk via cyber insurance. Well, why transfer the risk if you can spend the money and reduce the risk itself? Cyber insurance is not cheap; you will probably spend more than $50,000. And what's gonna happen is, if your cyber insurance company determines you had the ability to make a change and invest the money to do it, and you didn't do it and you got breached because of it, they're probably not gonna pay. Cyber insurance is really meant for things you know you can't control. If you know you can control it, if you know you can make a change, then transferring via cyber insurance is not the right option. One, it's just not a good choice. And two, if you do that, your insurance company most likely will, when the situation occurs, say, we're not paying for it. So keep that in the back of your mind. Next question: an organization is subject to the NIST SP 800-37 Risk Management Framework and has completed the categorize and select steps for a new cloud-hosted system. The system owner wants to skip the implement step and go directly to assess, arguing the system uses inherited controls from the CSP's FedRAMP authorization. So the cloud service provider had some sort of FedRAMP authorization done. Which response best reflects the correct RMF guidance, the Risk Management Framework guidance? A, the system owner is correct; inherited controls do not require implementation documentation. B, the implement step is still required to document how inherited controls are configured and how any system-specific parameters are applied. C, the implement step may be waived with written CISO approval if the CSP holds an active FedRAMP ATO. Or D, the assess step should be delayed until the system completes a full independent audit of the CSP, or cloud service provider, controls. Okay, so answer A is not correct: the system owner is correct, inherited controls do not require implementation documentation. That is incorrect, right? You are still required to have some level of documentation on how any inherited controls are put in place. Think of it this way: if you go, yep, I've got it, it's good, it's over there, well, now it's hard for the auditor or anyone else to make sure that what you actually did is in place. It's also easy for someone to say it's over there when it's really not over there. These are the droids you're looking for, except they're not, right? The ultimate point is that you need to document all of these activities. C, the implement step may be waived with written CISO approval if the cloud service provider holds an active FedRAMP ATO. I would argue with that. Even if the auditors came in and you said, well, yeah, the CISO waived it, they're still going to want some sort of documentation and to know that you actually filed it.
So, as a former CISO, I wouldn't sign a written waiver like that unless I absolutely had no other choice, and even then I would have followed all the steps required. D, the assess step should be delayed until the system completes a full independent audit of the CSP controls. You don't have time for that, and that's the situation; you want to put things in place. So the answer is B: the implement step is still required to document how inherited controls are configured and how any system-specific parameters are applied. What it comes right down to is that the implement step requires documentation on how the controls, including inherited ones, have been implemented and leveraged. Even when the controls are inherited from a FedRAMP-authorized cloud service provider, the system owner must document which controls are fully inherited, which are shared, the customer-configurable parameters, IAM policy settings, all of those things, and then the system-specific implementation details. You can't skip any steps. You've got to put all that down. And just assume, you know what, you're dealing with the government; they want all this stuff documented. One, because they like to do that. And two, it also ensures that you have in place what you say you have in place. Next question: a CISSP-certified security analyst discovers during a routine internal audit that their organization is systematically under-reporting breach notifications to regulators, a practice that appears to have management approval. The analyst's direct manager instructs them to finalize the audit report without noting the discrepancy. Which course of action best aligns with the ISC2 Code of Ethics? A, follow the manager's instructions; organizational hierarchy takes precedence over professional obligations. B, document the discrepancy in the audit report regardless of management instruction and escalate through the appropriate internal channels. C, anonymously report the organization to the regulator to avoid professional retaliation. Or D, resign immediately, as continued employment constitutes complicity in the violation. Okay, so you've got a security analyst who discovered something during a routine audit; they've got an issue. The analyst's direct manager instructs them to finalize the audit report without noting the discrepancy. Which is the best course of action, especially as it aligns to the ISC2 Code of Ethics? Let's do the ones that are not correct first. D, resign immediately, as continued employment constitutes complicity in the violation. You could do that, but why would you? Plus, with 200,000 IT professionals out there looking for employment, yeah, that's probably not a good idea. C, anonymously report the organization to the regulator to avoid professional retaliation. I'm sorry, but that's just a little childish and foolish. Don't do that, because you need to address the problem. Let's just address it; bring it up, right? A, follow the manager's instructions; organizational hierarchy takes precedence over professional obligations. No, it does not, right? Now, you don't want to just flatly tell your boss, I can't do that. There are ways to work through this that are good and communicative to other people. I understand some people may not like that, but this is where you have to basically have a spine and bow up, and do it in a way that is positive and productive.
The answer is B. Document the discrepancy in the audit report, regardless of management instruction, and escalate through appropriate internal channels. Again, work it up through the people in the organization. You don't want to hide it, definitely don't want to hide it. You also want to bring it up. Now, you may not want to just write it down right away; you may want to talk with people about the situation as you move it up the chain before you actually go and document exactly what it is. There might be something nuanced there that leadership may want to have some input on. So keep that in mind as well. Next question: during a BCP workshop, a financial services firm identifies the following for its core trading platform. RTO, the recovery time objective, equals four hours; RPO, the recovery point objective, is 15 minutes; MTTR, mean time to recovery, is six hours; and the current backup frequency is every two hours. The firm's hot site can be activated within 90 minutes, an hour and a half. Which gap represents the most critical BCP deficiency? Okay, so we've got a BCP workshop figuring it out. We've got an RTO of four hours, an RPO of 15 minutes, an MTTR of six hours, the backup frequency is every two hours, and the hot site can be activated within an hour and a half. A, the hot site activation exceeds the RTO, making the recovery time objective unachievable. B, the MTTR exceeds the RTO, meaning the average recovery will breach the target even if the hot site is activated. C, no gap exists; the hot site activation time of 90 minutes is well within the four-hour RTO. Or D, the backup frequency of two hours means potential data loss of up to two hours, exceeding the 15-minute RPO. Okay, so let's walk through those, starting with the ones that are not correct. A, the hot site activation exceeds the RTO, making the recovery time objective unachievable. Well, the RTO we know is four hours, and the hot site can be activated within 90 minutes, so no, that's not a correct answer. B, the MTTR exceeds the RTO, meaning the average recovery will breach the target even with the hot site activation. The MTTR does exceed the four-hour RTO; however, your site will be up within 90 minutes, so that's not the most critical factor at that point. And C, no gap exists, because a hot site activation of 90 minutes is well within the four-hour RTO period; the hot site is fine, but gaps do exist against the other metrics, so that's not correct either. So let's look at the correct answer. All the metrics must be evaluated against their respective targets. RTO is four hours versus a hot site activation of 90 minutes; that one's good, it fits within the RTO timeline, no gap. Then we look at RTO versus MTTR, your recovery time objective versus mean time to recovery. Your mean time to recover is six hours, and this exceeds the RTO of four hours. Okay, this is a real gap. But the MTTR is an average, and it can be influenced by the hot site, which is 90 minutes, so that may or may not end up being a problem. The critical deficiency is the RPO. The stated RPO is 15 minutes; the maximum acceptable data loss is 15 minutes. But backups run every two hours. In a failure scenario, up to 119 minutes of transactions could be lost, and that is almost eight times the RPO. So the gap is bigger in the situation related to the backups: the fact that it's a two-hour backup interval and your RPO is 15 minutes.
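Here is a minimal sketch of those gap checks, with the question's numbers plugged in and everything converted to minutes (the structure is purely illustrative):

```python
# BCP metric gap checks for the trading platform scenario, in minutes.

rto = 4 * 60              # recovery time objective: 4 hours
rpo = 15                  # recovery point objective: 15 minutes
mttr = 6 * 60             # mean time to recovery: 6 hours
hot_site_activation = 90  # hot site comes up in 90 minutes
backup_interval = 2 * 60  # backups run every 2 hours

# Hot site vs RTO: 90 minutes <= 240 minutes, so no gap there.
print("Hot site meets RTO:", hot_site_activation <= rto)

# MTTR vs RTO: 360 > 240 minutes, a gap, but MTTR is an average the hot site can pull down.
print("MTTR exceeds RTO:", mttr > rto)

# Backup interval vs RPO: worst-case data loss approaches the full interval,
# roughly 119 minutes against a 15-minute target, about 8x over.
worst_case_data_loss = backup_interval - 1
print("Worst-case data loss (min):", worst_case_data_loss)
print("Multiple of RPO:", round(worst_case_data_loss / rpo, 1))
```

Laying it out this way makes the exam logic visible: only the backup-interval-versus-RPO comparison blows past its target by a large margin, so that is the most critical deficiency.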
So there could be a potential gap related to the MTTR, but the real issue, the most critical one you have to focus on, is the backup frequency of two hours and the potential data loss of up to two hours. Last question: a US-based company processes health data for EU citizens through a SaaS platform hosted in AWS us-east-1. The company holds a BAA with AWS and is HIPAA compliant. A new EU customer contract requires GDPR compliance. The security team assumes the existing HIPAA controls satisfy GDPR requirements because both frameworks address healthcare data. Which statement best identifies the flaw in this assumption? Okay, so they've got GDPR requirements, they've got HIPAA, and they're wrapping it all together. A, GDPR imposes additional requirements not covered by HIPAA, including data subject rights, lawful basis for processing, and cross-border transfer restrictions. B, HIPAA and GDPR are equivalent for healthcare data, and the existing BAA covers both frameworks. C, GDPR does not apply to US companies processing EU citizen data on US soil. Or D, the AWS BAA, the Business Associate Agreement, satisfies both HIPAA and GDPR processor obligations automatically. Okay, so GDPR, if you haven't dealt with it, is a very interesting animal. And just to be upfront, HIPAA and GDPR are not equivalent. They're not the same; they're very different. They have some of the same nuances, but I would say it's like having tomato bisque and tomato sauce. They're relatively the same color and made from the same thing, but bisque and sauce are very different. So B is not correct: HIPAA and GDPR are equivalent for healthcare data, and the existing BAA, the Business Associate Agreement, covers both frameworks. That is not correct. C, GDPR does not apply to US companies processing EU data on US soil. GDPR does apply to US companies, especially when they're processing EU citizens' data, so that one is incorrect. Or D, the AWS BAA satisfies both HIPAA and GDPR processor obligations automatically. That is not true, because GDPR processor obligations are very different from HIPAA's, so that is not a correct answer either. So the correct answer is A: GDPR imposes additional requirements not covered by HIPAA, including data subject rights, lawful basis for processing, and cross-border transfer restrictions, which it does, right? They overlap in areas like security safeguards and breach notifications, but they diverge in multiple ways. Again: lawful basis for processing, consent, data subject rights, data protection officers required under GDPR, and any sort of cross-border transfer is also covered by GDPR. They are very different. The thing to think about is that GDPR is built around privacy by design; that is its obligation. So when you hear someone say, yeah, they're very similar, again, it's tomato sauce versus tomato bisque. They are similar, but they're not the same. So again, the correct answer is A: GDPR imposes additional requirements not covered by HIPAA, including data subject rights, lawful basis for processing, and cross-border data transfer restrictions. That's a lot to talk about, it truly is. Okay, that is all I have for you today. I hope you guys got a lot out of this.
Head on over to CISSP Cyber Training and check out what we've got. Great content, lots of free stuff, some paid stuff that's amazing, as well as my upcoming cohort. Sign up for that; get on the wait list now, it's filling up fast. All right, we'll talk to you soon. Catch you on the flip side. See ya. Thanks so much for joining me today on my podcast. If you like what you heard, please leave a review on iTunes; I would greatly appreciate your feedback. Also, check out my videos on YouTube: just head to my channel at CISSP Cyber Training and you will find a plethora, or a cornucopia, of content to help you pass the CISSP exam the first time. Lastly, head to CISSP Cyber Training and sign up for 360 free CISSP questions to help you in your CISSP journey. Thanks again for listening.