PrivacyLabs Compliance Technology Podcast
PCI-DSS and AI with Jen Stone
In this episode, we discuss what the Payment Card Industry Data Security Standard (PCI DSS) is, how it works, and how it will unfold alongside AI regulations, from the perspective of a veteran senior security professional and auditor. Jen Stone guides us through the process of a PCI DSS audit, the advent of version 4.0, and its intersection with machine learning processes.
Paul Starrett: Hello, and welcome to another podcast sponsored by Privacy Labs. My name is Paul Starrett, and I am one of the founders of Privacy Labs. I am very honored to have Jen Stone, a Principal Security Analyst at Security Metrics, and we will let her talk about herself in a minute. But just briefly, I want to give our listeners an understanding of how we came to this podcast. I was interviewed by Jen on the Security Metrics podcast last week on the topic of AI and PCI DSS. I will put a link to that podcast at the bottom of the transcript. But without further ado, Jen, please go ahead and introduce yourself. Tell us who you are and so forth.
Jen Stone: Thank you so much for having me on your podcast, Paul. It was really great having you on ours, talking about AI. It's such a great topic and it's exciting to be able to talk to you again. Like you said, I'm one of the Principal Security Analysts here at Security Metrics. I also host our podcast. And as a Security Analyst, I'm one of about 25 auditors. I spend the bulk of my time on PCI DSS assessments. But I also perform assessments against HIPAA standards, NIST 800-53, NIST 800-30, the CIS Controls, which used to be the SANS Top 20, HITRUST; you know, just a variety of things that we can assess against. And I always say that if you know cyber security and you understand an organization's systems, then you can match them up fairly well against most standards and give the organization a good degree of assurance on whether they are meeting what they should be meeting or not. I've been with Security Metrics for about eight years. And before then I spent a little over 20 years in IT operations.
Paul Starrett: Wow. That’s pretty impressive. You know, I would agree that cyber security as far as what you’re supposed to do and how you’re supposed to protect your environment is very similar across those different standards. It’s either you’re protecting something or you’re not, and the technologies tend to be the same thing. Often, you have a smartphone or you have a cloud based solution, what have you. You know, one thing that struck me when I first started looking at PCI DSS is that it’s not a law. It’s not a regulation that was enacted by a state or a federal or international body.
Jen Stone: Correct. Yeah.
Paul Starrett: Right? It's a contractual relationship between the payment processors and those who process the payments, if I say that correctly. So explain to our audience, if you would, what PCI DSS is and why they should be paying attention to it. It's fairly obvious, but just to hear your thoughts.
Jen Stone: That's such a great question. And that's one of the first questions we get asked when people are new to PCI. The PCI Security Standards Council actually has more than just the PCI DSS. There are a lot of other things related to the payment industry, but we're going to really focus on the PCI DSS, which is mostly applicable to merchants taking credit cards and the service providers that are helping them. And so a lot of times we'll get a merchant reach out and say, my acquiring bank said I have to do this. Now, why do I have to do this? And sometimes they're pretty upset about it, and understandably, because anything that costs you time and money, you'll want to question before you jump in with both feet. Right?
But this came about because the payment brands said, hey, we've got a lot of fraud in the industry related to credit cards, and we want there to be less of that. And so they put together a council where the different payment brands got together with others in the industry and said, all right, if we're going to protect credit card data, what does that look like? How do we do that? And what are the standards by which we should assess how well any group is taking care of the security of credit card data? And so we have just launched into the 4.0 version of the PCI DSS. The last one was 3.2.1. I can't remember the whole history and timeline. I know people who know it off the top of their heads. But I can't remember how many years it's been in place. This is a really well-developed standard that is uniquely suited to protecting credit card data.
Paul Starrett: Interesting. Yes, I noticed that when I started looking at it. I do have some exposure to PCI DSS. And just for our audience, that stands for, correct me if I'm wrong, Jen, the Payment Card Industry Data Security Standard, PCI DSS. And the payment brands, would that be MasterCard, Visa?
Jen Stone: MasterCard, Visa, Discover. Oh, there’s another one that just joined. It has a U in it.
Paul Starrett: Yes. Well, I think that sort of sets the stage; that's kind of the role of the council, that's the top of the chain. Yeah, that's interesting. I do think it does have its own specific focus; it's focused on payment cards.
Jen Stone: It is.
Paul Starrett: Or payment processing. So… I’m sorry. I didn’t hear you.
Jen Stone: Yeah. Payment processing for sure. So, you know, PCI DSS is one of those things where you've seen what it stands for a lot, but I'd have to actually look it up to make sure that you are correct on what it translates to. And the same thing with PAN; the PAN is the credit card number, the Primary Account Number. So, if PAN exists, then you assume that it and its associated data is cardholder data, and that's what we protect. Like you said, it's really the payment brands that drive it, but they have leverage to get merchants to adopt it and to become secure through the assessment process.
And if you're small enough, you can do a self-assessment, but once you get to a certain volume of transactions in a year, you start having to get someone like me involved. And the merchants that choose not to do that can get fined by their acquiring bank. And, like you said, it's contractual; the bank doesn't have to let a merchant process cards. Right? They can say, look, you've chosen not to go through this security process, and so we're going to choose not to let you process card payments. And so there's some leverage there. And then for service providers, what we're finding is that merchants won't choose a service provider unless the service provider is secure and can demonstrate that security by giving them an Attestation of Compliance that shows they have met the PCI standards as well.
Paul Starrett: That's interesting. And you really brought up a different point there that goes a little deeper: because it's contractual, it's very close to home as far as your business purpose and commercial interests. It's not like you have to worry, is a regulator going to come along and catch me not being compliant. It's not in that category. Nor do you have to worry about, let's say, a class action lawsuit if you cross your fingers and hope there's not a breach. This is very much, your business can be cut off at the knees, if you will. That's interesting. Could you give us a sense of what the typical audit looks like? When you first come into an audit, what do you think about? How do you plan? And is it on-site or off-site, and what's the timeframe? Just to give our listeners a sense of a vanilla or typical audit, if that's possible. What is your experience with that? Because I understand you've done 200 of these.
Jen Stone: Yeah.
Paul Starrett: And that’s quite, well like top 10% [inaudible]
Jen Stone: That’s a lot. It’s definitely a lot.
Paul Starrett: Yeah.
Jen Stone: I do. I typically do about an audit a week, just… Man, I love work.
Paul Starrett: Yeah. You do apparently. Live and love.
Jen Stone: I don't have little kids. I look at some of my colleagues, and if you have little kids at home, you can't travel that much. And depending on the demands on you, it is definitely a lot of audits that I'm able to take care of. So, let's look at what a standard audit is, and let's look at the difference between an audit and an assessment. In the PCI world, it's really an assessment, which is a more collaborative approach than the "we're going to try and catch you doing something wrong" posture that an audit implies. In an assessment, we look at the reporting and the evidence that has been given to us. And in an audit, an auditor does a lot of deep-dive work on their own, without collaboration with the assessed entity. So I like the PCI assessments better. We call them audits, and it's okay if you want to call them either one. It's fine. But I like that work better because of its collaborative nature. We find out together whether a group is in compliance or not.
So, what does a standard one look like? Again, if you are doing fewer than about a million transactions in a year, then you get to do what's called a Self-Assessment Questionnaire, or SAQ, and it's a list of questions. Do you have a firewall in place? Is it doing these things? Do you have antivirus? And so it's a checkbox: in place, not in place, not applicable, in place with a little help. So, there are some different categories that you get to check for that. And if you're smaller, then that's the way you get to do it, because the payment brands looked at it and said, all right, from a risk perspective, can we give our smaller merchants an assist and let them do something that's less taxing on them? Yes, we can. And we'll even help them know what kind of data flows they need to look at, depending on how they take payments.
And so there are different SAQs; a lot of your listeners are probably familiar with having done an SAQ if they take payments. There's SAQ A for online payments, and SAQ A-EP is a different flavor of that. SAQ B is if you have a swiper that's analog, and SAQ B-IP is if the information runs across your network, because IP means Internet Protocol, one of those internet words, meaning the device has an IP address. And then there's C and D, and they take in more and more complexity.
But one of the things that people get wrong is they think an A rolls up into a B, rolls up into a C in terms of what requirements are covered. It doesn't. The only thing that everything rolls up into is the SAQ D. So, if people are thinking, yeah, I'm small and I need to do a self-assessment, make sure you read the "Before You Begin" section in whatever SAQ you have, and then you'll know whether you have the right one or not. So, that's really about the small and medium businesses that have fewer than a million transactions a year.
Over a million a year, chances are good you have to work with a QSA, a Qualified Security Assessor, like me. Definitely over six million transactions in a year you have to, or if you've had a breach, you have to. Or if you're in what they consider a high-risk industry, and your bank will tell you if they think you're in a high-risk industry, then you'll have to work with a QSA. And what we do is one of two things. Either we will fill out an SAQ after looking at all of your evidence and making sure we can confirm the existence of things; if you're still in that less-than-six-million range, you'll probably be allowed to do an SAQ with your assessor.
But over six million, that's when you really have to do what's called a ROC, or Report on Compliance, with a QSA. The Report on Compliance is a report that again looks at all of the requirements you would look at in the SAQs, but the assessor has to write paragraphs. Like, how does the assessor know? What did you look at? What did you do? Who did you talk to? What did they say? How do you know for sure that this is in place? So, that kind of sets the tone for what an assessor does when they come on-site, and for all the activities leading up to that. What they do is try to find the answers to all the questions they're going to have to give a rational answer to when they go to write that report. Right?
And so I know that different organizations do it in different ways, but the way we do it at Security Metrics is we'll have an initial audit review, where an organization that's new to the process will have a series of calls with a specific type of QSA who looks at their data flows, tells them what their scope looks like, helps them understand their scope if they don't know it, or confirms, yes, your scope is what you think it is. And scope is just another word for how you could affect the security of cardholder data: how you transmit, receive, store, or otherwise affect the security of cardholder data. Any systems that would affect those answers are in scope for you, and any people, and any processes. So, we look at what your scope is, and you get that taken care of.
And then there are some really basic things we check as part of the initial audit review; if you don't have those in place at that moment, you're not going to pass. And we don't want you to take an assessment with us if you're not going to pass. So, some of the things that take people a long time to get in place are logging, monitoring, and alerting. If you haven't heard those words: the information that flows through your networks and through your systems should have certain patterns, certain activities. Logging just means there's a system that keeps track of what those activities are.
And then monitoring and alerting means there's a facet of the system that looks at the patterns of those activities and says, ah, that feels not right; maybe something bad is going on in your system. And that's what we call detecting an Indicator of Compromise, or IOC. So, with monitoring and alerting you want to know what's happening, and why, and whether it should be. And if you don't have those tools in place and tuned to look at things, that can take a long time to set up. So, that's one of the things that we look at during that portion of the assessment: just really making sure that you're set for success when you go into the audit portion of your assessment.
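To make the logging-monitoring-alerting chain Jen describes concrete, here is a minimal rule-based sketch in Python. The log format, event names, and threshold are all hypothetical, and real deployments use dedicated SIEM tooling rather than hand-rolled scripts; this only illustrates the idea of turning raw log lines into an alert.

```python
from collections import Counter

# Hypothetical log lines in the form "timestamp source-ip event"
LOGS = [
    "2024-01-01T10:00:01 203.0.113.7 LOGIN_FAIL",
    "2024-01-01T10:00:02 203.0.113.7 LOGIN_FAIL",
    "2024-01-01T10:00:03 203.0.113.7 LOGIN_FAIL",
    "2024-01-01T10:00:04 198.51.100.2 LOGIN_OK",
]

def detect_iocs(log_lines, threshold=3):
    """Flag source IPs whose failed-login count meets the threshold.

    A toy Indicator-of-Compromise rule: repeated failed logins from
    one address may signal a brute-force attempt worth alerting on.
    """
    failures = Counter(
        line.split()[1]              # second field is the source IP
        for line in log_lines
        if line.endswith("LOGIN_FAIL")
    )
    return [ip for ip, count in failures.items() if count >= threshold]

print(detect_iocs(LOGS))  # → ['203.0.113.7']
```

A real pipeline would stream events continuously and correlate many rule types; the point here is only the shape: log, match a pattern, alert.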
So, then the initial audit reviewer says, okay, this company is good to go. We’re going to hand them off to the QSA that’s going to do the assessment. And then you have your kickoff meeting and say, all right. Let’s make sure we understand your scope. Let’s make sure we understand your timelines. What are the drivers? One of the questions I always ask is who’s going to get this report from you, and when are they expecting it? So, we can kind of help them march along in the direction they should be going at a reasonable pace.
We have a secure file sharing program, and it's set up so that it matches the requirements in the ROC, the Report on Compliance. And so if I have to know that you have antivirus turned on and functioning, well, I'm going to have a question in our secure file sharing that says, can you show me screenshots and some narrative that tell me this? I personally will not go on-site to a customer until I know all the answers and they're in good shape based on their narrative and their screenshots. And then -- [crosstalk]
Paul Starrett: That makes perfect sense.
Jen Stone: Yeah. Right. But a lot, I mean, some people will YOLO it: hey, we're coming up, let's see what happens. But I like to have everything in place first, and I like to have the executive summary written already in the report, so that I know who I am talking to, what documentation I'm expecting to see, what the locations are, and how you do what you do in your business that affects cardholder data. So, if that's all set up, then you go on-site. And an on-site is one of the requirements for PCI for a couple of reasons.
One is it’s really good to have an actual visual of the systems in place and how they’re protected. That’s great. But the second one, and I think it’s even more important, is nothing beats that eye to eye contact, that in-person relationship that you build with a team when you are assessing them to really understand, okay. This is what I think you do. Is this correct? This is how I think you do it. Is this correct? And then after a day or two or three or five of being on-site with them, you can look at that team and say, all right. What are we missing? What’s the actual gap here? Is there something that really could affect the security of cardholder data that we haven’t talked about yet? Because by that time, they realized this is actually to help them be secure and be in good shape and support their business.
And a lot of times they'll say, okay, well, we still have this gap that we need to talk about. And then we'll go through what we do to make sure that you're in good shape. Or we'll get to the end and they are solidly in good shape to be able to say, yes, we are meeting all of these compliance requirements, we're secure, and we're in shape for the next annual assessment. So, after going to… [crosstalk]
Paul Starrett: I’m sorry.
Jen Stone: Oh no, go ahead. So, after the on-site, the assessor goes back and finishes writing the report. Sometimes the customer wants to see a draft, and sometimes we let them. But usually, we finalize the report and send it out for signatures, and everybody signs it and says, yes, we all agree that this is what's real and that things are secure. Or they're not secure and they're not meeting the compliance requirements, and then you have to issue an incomplete or non-compliant Report on Compliance. I hate those. They make me sad.
Paul Starrett: Well, I guess there would be, though, a remediation step in place. Is there such a thing?
Jen Stone: There is, and PCI is great because it's a point-in-time assessment; you can look at things at a point in time and know whether they were meeting it or not. And if you find things during the assessment, yes, they do have the opportunity to fix them and get them in place before the report is issued. The only thing to keep in mind is that PCI does say there's a freshness quality to looking at things. So, if you go out and do the on-site and look at all the things, we need to release that report within 90 days of that on-site date, or have a revisit to refresh the information that you're looking at.
Paul Starrett: Interesting. That was a very nice soup-to-nuts, cradle-to-grave, if you will, explanation of how this works. Thank you for that.
Jen Stone: Oh, you’re welcome.
Paul Starrett: There are a few things that jumped out at me, and a good way to segue, actually, if you're finished.
Jen Stone: Yeah. You bet.
Paul Starrett: So, the first thing is the in-person meeting, where you can actually gauge the person you're talking to. How do they feel about what's going on? Are you getting the complete story? You know, are they apathetic? And then in reverse, you can imbue a sense of hope and spirit into them about what you're doing. So, I think that is invaluable, frankly.
Jen Stone: I think people often don't understand how critical it is to the security of an organization that the people applying that security are actually happy. And what was it in The Godfather? The quote is, it's never just business. And so sometimes people are like, well, it's just business. No, it's not, because people are doing your business. And if your people are not happy, then your security is not going to be great. Because for one thing, your communication is not going to be great. And communication is an absolutely vital part of security. So, sitting with a group of people, you find out what's going on, and if they're not happy or communication isn't happening, you can often dig deeper and go, "Ah, is this affecting the process?" Because I can't put in my report, "Oh, people are unhappy, so you should be worried."
Paul Starrett: Yes.
Jen Stone: But if they're unhappy, and the person authorizing access for someone doesn't want to talk to the person who has to apply that access within their access control mechanism, then that's a potential risk in that step. You need to look at how that is happening. And do they have some guardrails in place for when people aren't happy? Or is there a way to help them look at their processes and understand the drivers: this team is asking for this, and this team needs this. Well, sometimes all we see is, somebody's asking me for stuff. They're always asking for stuff. They want to know this. Why are they always second-guessing me?
Well, if during the course of the assessment you can say, look, this person's asking you for this because I'm asking them for that, or this person's driver is this and this; if you can be empathetic toward the problems that they need to solve, then either you can find a better way to satisfy that, or you can be a little bit more patient when it feels like somebody's being demanding and you don't know how that hooks into what you care about. Right? So, sometimes by looking at an organization's processes, you can actually help them be more empathetic and patient with each other, because they understand the different groups that have to go into the security of just one thing.
Paul Starrett: Yes. I was going to say, I've been in security for a very long time. Well, a long stretch within security, auditing, investigations, and so forth. And a lot of times people consider them a necessary evil. But if they see the whole picture, they have that moment of seeing the value of their role in marshaling this forward. And so the corporate culture picks up, the morale picks up, and that's where you see really a tightening of the whole effort -- [crosstalk]
Jen Stone: Exactly.
Paul Starrett: -- which is great. Yeah. And when you’re in-person, when you’re with them that is very much, I think, accentuated.
Jen Stone: Yeah. When somebody else’s tasks are disconnected from your values or your mission, then they’re just an annoyance. But as soon as you can connect their requests to your mission and your values, then it’s much less annoying.
Paul Starrett: Annoying, I think, is the right word; frustrating. You know, that's a perfect segue, because one other thing I picked up on in your description is that you have this process of logging and monitoring and alerting. And this is common to many threat detection systems. And what is oftentimes implied in that process is that you need something other than just a human looking at the logs, because the logs generally are what drive the monitoring process, and that drives the alerting. So, involved in that, and as we knew we were going to come to this topic, artificial intelligence typically plays a role there. It's looking at the logs, looking at patterns.
Now I just want to state that artificial intelligence does not necessarily mean machine learning. For those of you who don't know the difference, machine learning is based more on statistics and other math, if you will. Whereas there's another area called rules-based, or rule-based, where it's much more cut and dried; it's a workflow: if this happens, then that; if this happens, then that. Artificial intelligence, by my take, includes both of those. But far more often than not now, it's machine learning that's in there. It's the machine learning model, and when I say model, I mean a machine learning application, that is looking, detecting. It's watching for patterns. And if it sees a pattern that it thinks is a problem, it will then alert.
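The rule-based versus statistical distinction Paul draws can be sketched with a toy example: instead of a fixed if-this-then-that rule, a statistical detector derives what "normal" looks like from the data itself and flags deviations. This z-score sketch is illustrative only (the counts and threshold are made up); production detectors use far richer models.

```python
import statistics

def zscore_anomalies(counts, z_threshold=2.0):
    """Flag time buckets whose event count deviates strongly from the mean.

    The 'normal' level is computed from the data rather than fixed
    in advance, which is the statistical flavor of detection.
    """
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [
        i for i, c in enumerate(counts)
        if stdev and abs(c - mean) / stdev > z_threshold
    ]

# Hypothetical hourly request counts; hour 5 is a sudden spike.
hourly = [100, 102, 98, 101, 99, 500, 100, 97]
print(zscore_anomalies(hourly))  # → [5]
```

A rule-based system would need someone to pick the "500 is too many" threshold by hand; the statistical version adapts as the baseline drifts.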
Paul Starrett: So, the question then, Jen, becomes: when you think of that process, do you generally… Because machine learning is really not something most people have an immediate grasp of, or even think about. When you go in, how do you accommodate that type of thing? Is that something that's still outside the peripheral vision, if you will, of the QSA, or of the people internally whom you're auditing? How does that happen?
Jen Stone: So, I'm really glad you brought that up, because PCI DSS 4.0, the new version of the standard that we're talking about, says that you can no longer do manual reviews of your logs. Up until 3.2.1, we were allowed to accept manual reviews of logs, although I, and I know a lot of my colleagues, if we ever ran into someone doing it manually, our answer was no. I mean, you can't, as a human being, evaluate and detect all of the patterns that need to be detected in the logs. There are just too many. There's too much information going by. And so you now have to have machine learning, or AI, involved in evaluating these Indicators of Compromise within the logs.
And so I'm not sure how other assessors do it, but whenever I run into a new solution, and some of the really well-known ones are Splunk and Alert Logic, I mean, there's a ton of them; those are just a couple that came to mind, and they do an excellent job. So, as an assessor, I look at the tools that they tell me they use. And then I'll say, okay, show me your dashboards. How do you use it? What is your [inaudible]? Show me where your alerts come in, if I know the system. But if I'm not familiar with it, if it's a newer solution or I'm less familiar with it, then I'll go do a lot of reading of the vendor documentation until I understand how they do what they do. Because they're not all created equal, and the more sophisticated they are, the more information they gather and the better they are at finding these patterns, the more protected a group is.
And so sometimes you have to have serious talks with groups about, well, yes, you have this solution. And I understand that you think it’s great because it’s free. But let’s talk about what you’re actually trying to get from it. And so these are kind of the conversations that happen, and learning more and more about how the different groups do what they do. And the good ones do things very similarly, but you couldn’t do it without AI.
Paul Starrett: Yes. And I think that's something I often espouse or evangelize: that it's impossible, in our modern era, even for small and medium-sized businesses or payment processors, to keep up. It's like drinking from a firehose. So, as you peel that back, then I guess by going into the Splunk documentation or Alert Logic or what have you… By the way, I think you just named most of the installations out there.
Jen Stone: Yeah.
Paul Starrett: Splunk's huge; Sumo Logic, some other ones. Thankfully, I'll be able to [inaudible]. So, what's coming along with this PCI DSS 4.0? There are also regulations dealing with responsible artificial intelligence, and those require that you pay attention to what it's doing. Is it doing what it says it's going to do? Is it robust? Because there's an area called adversarial robustness: basically, people trying to defeat machine learning models to perform some sort of nefarious action. For example, in the case of PCI, can I get a fraudulent transaction under the wire, past that model, so it doesn't think it's fraudulent, and then execute on my theft or what have you? So, the idea is that you have to test those models. You have to understand their risks and, arguably, test them using techniques that we at Privacy Labs can do.
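The evasion scenario Paul describes can be sketched with a deliberately simple example. The scorer below is a toy with made-up weights, not any real fraud model; the point is only that an attacker who can probe a model's decision boundary can restructure a transaction to slip under it, which is why adversarial testing matters.

```python
def fraud_score(amount, velocity):
    """Toy fraud model: hypothetical weights, purely for illustration."""
    return 0.004 * amount + 0.3 * velocity

def is_flagged(amount, velocity, cutoff=5.0):
    """Transactions scoring at or above the cutoff are blocked."""
    return fraud_score(amount, velocity) >= cutoff

# A transaction the toy model flags as fraudulent...
print(is_flagged(1000.0, 4))   # True: score 5.2, at or above the cutoff

# ...evaded by splitting it into two smaller transactions,
# each of which scores just under the decision boundary.
print(is_flagged(500.0, 4))    # False: score 3.2, slips through
```

Adversarial-robustness testing probes a model with exactly this kind of perturbed input to find boundaries an attacker could exploit.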
Jen Stone: Fortunately, there is a really great community of cyber security experts that love to share information. So, when groups like yours find a problem that people are exploiting, then it gets published. We all get to know about it. And I lean a lot on that, knowing what’s out there. I have to, so again, if I come up along and find a technology with which I’m less familiar, I’ll go look them up under some of the vulnerability listings and see is there something out there I need to know about. And I also have feeds that tell me, probably way more information than I really need. But I like to keep an eye on what’s going on. And so, knowing that there are groups out there that are keeping an eye on what are these vulnerabilities and are these technologies that we all rely on? Are they doing what we need them to do? And that’s a super important part of the job.
You know, there is another way to look at AI in the PCI DSS that I hadn't thought of until a colleague reached out and said, "Hey, what do you think of this natural language processing, and getting an NLP model to write the Report on Compliance for you as an auditor?" And I said, "Well, I mean, I'm not sure how you would do that without giving sensitive information to the model, like ChatGPT, for example."
Paul Starrett: That's a great point. I've just been spending a lot of time learning about vulnerabilities and the ways in which models are built these days. And I think I'll come back to your point in a minute here. But just as a quick segue or transition: there's a lot of noise and fervor around LLMs right now, and people are rushing in [inaudible] large language models, ChatGPT; they insert them into their processes without really thinking about it. And those large language models are prone to error and prone to doing things that are harmful.
And so without thinking about that, they're building in risks they're not really sure of. But there are ways of testing those LLMs and other models; it's something that has to be done from the outset of the development of the model, and it should be very much a part of the internal process of building those models. And I would argue that part of a PCI DSS examination is to look at any machine learning models as assets that have to be tested, like any machine learning software. And if PCI DSS covers software, they're automatically covered there.
So, getting back to what you said about using an LLM, or ChatGPT or what have you, to write your report: my personal feeling is that it does run the risk of giving you errors in your information. It should be based, in my opinion, on an LLM that is specifically created for that purpose. You can take ChatGPT, or an LLM like GPT-4, and if the audience doesn't know what these are, that's fine; these are just industry offerings, DALL-E and so forth, and you can use them to build your own specific one. But the LLM that you're using should be customized to PCI DSS audits, I believe, because the nomenclature, the guardrails you put around it, should all be there.
The optimism I have is that if you generate that and you can go through and make sure it doesn't have sensitive information and that it's accurate, then it could take out a lot of the tedium and the manual work. Some people don't like to write; they're not good at it. As long as you can read what comes out, then I think it's okay. So, my personal feeling is that that's not a bad idea, as long as you're sure of what you're turning in, because you're putting your name on it.
Jen Stone: Right. If you're sure it's correct, but also if you're sure that you're not revealing sensitive information, or proprietary information, which is, of course, also sensitive. So, knowing what information you're putting into it to get the writing out of it, I think, is a way to use the tool. I personally think I'm a much better writer than ChatGPT.
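One way to act on this concern is to screen any text for candidate card numbers before it ever reaches an external LLM. This is a minimal sketch; the regex and Luhn check below are illustrative and would miss many real-world formats, so it is not a substitute for proper data-loss-prevention tooling.

```python
import re

def luhn_ok(digits):
    """Luhn checksum, commonly used to validate candidate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, d in enumerate(int(c) for c in digits):
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_pan(text):
    """True if the text contains a 13-16 digit run that passes Luhn."""
    for match in re.finditer(r"(?:\d[ -]?){13,16}", text):
        digits = re.sub(r"\D", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            return True
    return False

# 4111 1111 1111 1111 is a well-known Visa test number.
print(contains_pan("Card 4111 1111 1111 1111 was used"))  # True
print(contains_pan("Invoice 1234 for $56.78"))            # False
```

A pre-submission gate like this would refuse to send a draft report to an external service when `contains_pan` fires, forcing a manual redaction pass first.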
Paul Starrett: I agree. Yeah, I’m the same way.
Jen Stone: I have a hard time letting anyone do my writing for me, including our marketing department. They'll send me something and say, "Hey, what do you think about this?" And then I completely rewrite it. And then I think, oh, they probably don't really appreciate that. But I love them. But if you're going to ask me to put my name on something that is written, I want it to be good. So, I don't think ChatGPT is quite there yet.
Paul Starrett: No.
Jen Stone: And like you said, there are potential issues. If you just trust it to go do the work and not oversee it, then that’s a real problem.
Paul Starrett: Yes, I'm like you. I'm more of a liberal science or liberal studies person, even though I have a strong background in machine learning and data science and programming and such. There is something about it. But I think there are some auditors who are very sort of sequential thinkers, very detail-oriented, and they're not always the types that are good with prose.
Jen Stone: Right.
Paul Starrett: So, maybe if that saves them time and gets them sort of to the next level, then I guess it’s a case by case basis.
Jen Stone: I think words are fun and I probably shouldn’t tell people this, but every once in a while, I’ll be like, oh, there’s a word I haven’t used in a very long time. I’m going to try and fit it into this report.
Paul Starrett: Right. We’re human beings. Right?
Jen Stone: Let’s see if anybody finds this word when they QA this report.
Paul Starrett: That's interesting. But I like the way you brought up the use of LLMs and generative AI in the context of doing your job. I think we're kind of coming down to the wire here, so I did want to wrap up. With the advent of the new 4.0 standard, and with the promulgation of these responsible-AI frameworks, NIST, the National Institute of Standards and Technology, now has a responsible AI framework they've put out, and the EU has an AI Act coming out that will require people to verify that their models are doing what they're supposed to and that they're not leaking private data. So, they seem to be converging, which is expected. I think what I'm going to start doing is looking more into where we can help people with this going forward. The need is probably only going to grow as far as looking at those alerts.
The other piece is that people are using a chatbot or some other sort of online helper to go through transactions, what have you. Those are machine learning models as well, and they should be tested because they can be defeated, and you can do some pretty gnarly things with them if you know how. So, I guess we could just say that, as we look at PCI DSS and machine learning, this is something whose need is only going to grow, because data science, machine learning, and artificial intelligence is its own rabbit hole of science and technology. So, you want to have people who know how to do that.
Well, great. Listen, thank you, Jen. I think we can probably wrap that up. But I always ask, with rare exception, everyone I speak with: is there anything we've not talked about, anything you'd like our audience to know?
Jen Stone: You know, I think keeping the human element in whatever people decide to do, whether it's AI or whether it's standard procedures and flows. Sometimes we forget that people are the reason that we're doing any of this. So, making sure that the human element has a touch point at every critical juncture, I think, is going to make whatever we do as teams in our work better. So, yeah, just keep people front and center.
Paul Starrett: Very nice. I couldn't agree more. And I think it's a very nice way to end this, because I think a lot of people see PCI DSS and AI regulation as sort of a necessary evil. But the important fact is, they do help keep it a trusted environment. The commercial viability of any enterprise or business or company is premised on the idea that what they have in place is working, and that those who do business with you, or who will buy from you or sell to you, know that you have things taken care of that way.
So, listen, Jen, thank you so much. You're always such a pleasure to talk to. And even though we've already talked about a lot of this before, I know considerably more now that we've had a chance to have you tell us about what you do, the challenges involved, the positive upside of it, and also to rope in the machine learning piece, which is going to become much more of a challenge and a bigger piece of what we do.
Jen Stone: Well, thank you so much. It was a pleasure talking with you, Paul.
Paul Starrett: You as well. And I will be putting the link to your podcast with me there. Also, Security Metrics has other podcasts; Jen has many others that we discussed. I think Security Metrics is also a good place to learn about this whole area. They're a wonderful company. So, all right. With that said, signing out. Thank you again, Joan, or Jen, I'm sorry, Jen Stone [inaudible] Joan. All right.
Link to Security Metrics podcast between Paul Starrett and Jen Stone:
https://www.securitymetrics.com/learn/ai-in-context-cybersecurity-and-privacy-implications