Kitecast
Justin Greis: AI Meets Cybersecurity
Most organizations are racing to adopt AI without considering the security implications. Justin Greis, former leader of McKinsey's cybersecurity practice and founder of Acceligence, an AI-powered consulting firm, explains why this approach creates risk and how security leaders can change the conversation.
Companies are deploying AI at different maturity levels. Some distribute AI tools to business units and wait for use cases to emerge. Others push boundaries with advanced algorithms. Few consider the associated risks. The right stakeholders often aren't in the room when AI decisions are made, either because organizations want to move fast or because security teams are underfunded and focused on daily operations. Technology companies are making AI capabilities available at unprecedented speeds, leaving organizations uncertain about securing and deploying these tools responsibly.
Security should be the foundation of trust, not an afterthought. McKinsey research found that customers make buying decisions based on product security when companies can demonstrate testing and rigor. A secure, certified product materially influences purchasing choices compared to alternatives without visible security standards.
Greis emphasizes that compliance certifications like SOC 2 or ISO represent minimum requirements, not security maturity. Organizations secure enough to meet business objectives naturally achieve compliance. The goal is translating business initiatives into security requirements that exceed baseline standards.
The Chief Information Security Officer position has shifted from back-office administrator to business enabler. AI has accelerated this change by converging infrastructure, technology, and cybersecurity into unified platforms. CISOs now have opportunities to demonstrate how they understand business context and can help organizations move faster and safer.
The challenge for security leaders is communication and relationship building. Years of underfunding forced CISOs to focus on survival rather than strategy. As security functions reach parity with other departments, more leaders can engage at the executive and board level. This shift requires CISOs to develop storytelling skills that contextualize security metrics for business audiences rather than overwhelming boards with technical details.
As AI agents begin making decisions without human oversight, organizations face new risks. The push to remove humans from decision loops creates efficiency but introduces vulnerabilities, particularly when AI accesses data it shouldn't process or makes decisions affecting vulnerable populations. Companies need frameworks to identify where human oversight remains necessary and mechanisms to monitor those boundaries.
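To make the idea concrete, here is a minimal Python sketch of what such an oversight boundary could look like. The decision categories, data tags, and confidence threshold are hypothetical placeholders chosen for illustration, not a framework Greis prescribes.

```python
from dataclasses import dataclass

# Hypothetical policy: decision types that must always leave the automated loop,
# and data tags that force escalation. These categories are illustrative only.
HUMAN_REVIEW_REQUIRED = {
    "insurance_claim_denial",
    "loan_decision",
    "decision_affecting_vulnerable_population",
}
SENSITIVE_DATA_TAGS = {"pii", "phi", "financial_account"}


@dataclass
class AgentDecision:
    decision_type: str
    data_tags: set
    confidence: float


def requires_human(decision, confidence_floor=0.9):
    """Return True if this decision must be routed to a human reviewer."""
    if decision.decision_type in HUMAN_REVIEW_REQUIRED:
        return True
    if decision.data_tags & SENSITIVE_DATA_TAGS:
        return True
    return decision.confidence < confidence_floor


def route(decision):
    # Log every routing choice so the oversight boundary itself can be audited,
    # not just the individual decisions.
    verdict = "human_review" if requires_human(decision) else "auto_approve"
    print(f"audit: {decision.decision_type} -> {verdict}")
    return verdict


# Example: a loan decision escalates regardless of the model's confidence.
route(AgentDecision("loan_decision", {"financial_account"}, 0.97))
```

The specific rules matter less than the design choice they illustrate: the line between automated and human-reviewed decisions is written down, enforced in one place, and logged so the boundary itself can be monitored.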
Organizations implementing AI successfully have thought through secure development lifecycles, DevSecOps, and product operating models. Those starting from scratch face larger organizational changes to incorporate security, privacy, and responsible AI practices into development workflows.
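As a rough sketch of how those workflow changes might be wired in, the hypothetical release gate below blocks a deploy unless security, privacy, and responsible-AI review artifacts are present. The artifact names and required reviews are assumptions for illustration, not drawn from any particular DevSecOps toolchain.

```python
import sys
from pathlib import Path

# Hypothetical release-gate check: a product team's build only ships if the
# security, privacy, and responsible-AI review artifacts exist for the release.
# The file names below are placeholders, not a standard.
REQUIRED_ARTIFACTS = [
    "threat_model.md",           # secure SDLC: design-time threat review
    "dependency_scan.json",      # DevSecOps: automated dependency/vuln scan output
    "privacy_assessment.md",     # privacy review sign-off
    "responsible_ai_review.md",  # responsible-AI committee sign-off for AI features
]


def release_gate(release_dir="."):
    """Return 0 if all review artifacts are present, 1 (block the release) otherwise."""
    missing = [name for name in REQUIRED_ARTIFACTS
               if not (Path(release_dir) / name).exists()]
    if missing:
        print("release blocked; missing review artifacts: " + ", ".join(missing))
        return 1
    print("release gate passed")
    return 0


if __name__ == "__main__":
    sys.exit(release_gate(sys.argv[1] if len(sys.argv) > 1 else "."))
```

In a product operating model, a check like this is one way decentralized teams can be armed with guardrails up front rather than waiting on a central review queue.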
LinkedIn: https://www.linkedin.com/in/justingreis/
Check out video versions of Kitecast episodes at https://www.kiteworks.com/kitecast or on YouTube at https://www.youtube.com/c/KiteworksCGCP.
Patrick Spencer (00:01.585)
Hey everyone, welcome back to another KiteCast episode. I'm your host for today's show, Patrick Spencer. We have a real treat today. Justin Greis is joining us. Justin, thanks for making time to speak to us.
Justin Greis (00:12.674)
Yeah, thanks Patrick. Happy to be here.
Patrick Spencer (00:14.493)
Looking forward to this conversation. I'll give a quick intro, and it doesn't really do him justice given all the things that he's been involved with and is still involved with. Justin is the founder and CEO of a brand new company called Acceligence. It's an AI-powered consulting firm focused on technology, cybersecurity, risk and strategy. He is the former lead of McKinsey's security practice and a founding member of EY's cybersecurity, technology and digital practice.
He serves on the board for the Kelley School of Business at Indiana, and he is a board member for Ravinia. We'll talk about that. I know, my daughter being a violinist, that one's interesting to me anyway, and I think will be interesting to the audience. He's a frequent keynote speaker and published thought leader. He contributes to advancing the technology dialogue while serving on multiple boards and supporting educational initiatives, as you can tell from his background. He holds an MBA
and a Bachelor of Science from Indiana University and is a graduate of the Harvard Business School Executive Leadership Program. So Justin, with that, tell us what you're currently doing. You just launched this company. It's intertwining cybersecurity and AI, and there's a lot more being talked about on that front right now as everyone dives head first into the swimming pool thinking AI and cybersecurity is an afterthought for
Justin Greis (01:22.646)
Yeah, thanks Patrick.
Patrick Spencer (01:41.393)
many organizations.
Justin Greis (01:42.99)
Yeah, AI is transforming everything. A couple of things that I'm seeing, if we look at...
how companies are using AI. They're in varying stages of maturity and varying stages of adoption. Some are throwing AI out there to the business units and telling them to come back with use cases, cost reduction, automation. Very few are actually thinking about the risk associated with it. And there's a lot of reasons for that. It's either they want to be first to market, they want to hop on board,
or the right people who are supposed to be at the table are not. And so there's that out there. There are also companies that are sort of at the forefront and putting out, you know, advancing AI, advancing algorithms, advancing those use cases, and they're pushing the boundaries. And a lot of the tech companies are making this technology available at breakneck speeds, and consumers of that technology
are wondering what do I do with this and how do I secure it? How do I use it responsibly? How do I deploy it in my environment? And so we're seeing a lot of that happen out there in the market. At the same time, I've spent the past 25 years in management consulting in the field of tech, in cyber, in digital, and helping companies adopt technology, implement technology, secure technology, use it responsibly.
And I have never seen a trend where companies move this fast, and this opportunity be there for somebody who knows how to use it. And if we look at how management consulting firms are using AI, you know, they are trying to move ahead of the market to figure out how to help their clients.
Justin Greis (03:47.543)
So I took all of this and I founded a company called Acceligence, and it's an AI-powered management consulting firm, as you said, focused on technology, cybersecurity, risk and strategy. And what I'm doing is building platforms that help companies adopt AI responsibly, safely, in the right way. And using AI to deliver services to my clients,
in ways that we didn't think possible before. The reality is you don't need the truckloads full of people to get the same level of expertise, to get the results that you need fast. And so that is what Acceligence is all about. And it's gonna be launched here probably by the time this podcast goes live in mid-September. So I'm super excited about it. Yeah, thank you.
Patrick Spencer (04:38.983)
Some exciting stuff, definitely. And we'll dive into a little more detail around, you know, IBM's report that came out, their annual Cost of a Data Breach Report, which had a lot of interesting AI angles in there. I'm curious about your perspective on them. But before we do so, let's talk a little bit more about your background. You were at EY for a number of years, you know, and then you joined McKinsey and you were there for four or five years, and then you just left and started this new company, which is really exciting. You know, how did you...
Justin Greis (04:55.918)
Sure.
Patrick Spencer (05:07.111)
you know, transition from business school into a career in cybersecurity. That's always of interest, I think, to our audience.
Justin Greis (05:12.412)
Oh my gosh. Oh man. It's a great question. So I have absolutely loved every experience that I've had throughout my career. Maybe I'll take you back to the beginning. So I started at Indiana University, Kelley School of Business, and I couldn't stay away from there. I loved it too much. Graduated and came back, and about a year later started guest lecturing.
I joined the faculty four years later. What happened was I came back and guest lectured once a semester, then twice the next semester, then three times with a faculty friend of mine named Ramesh Venkataraman. And he said, would you think about teaching this course?
I said, yeah, but there's only six courses. He said, well, that means I don't have to teach. And so I took over the course where actually we co-taught it for a number of years, and then I took it over. So I loved being in academia and I couldn't stay away from it. I've been on the faculty there since 2008 officially. But I joined EY out of school and started in IT audit.
And I spent a grand total of six months in IT audit. And if I tell you that I was the worst auditor in the world, that doesn't do it justice. I will never forget, it was one of my engagements, I was working with a manager, and I was fresh out of MBA school, a senior consultant.
and we're reviewing the audit findings with a client. And my manager, we had spent, I don't know, six months or a couple months auditing and going over the findings.
Justin Greis (07:05.262)
And as we're going over the findings, the client gets more and more and more uncomfortable and starts squirming in his seat. But I just said, look, you can figure this out. And I get up on the whiteboard and I start sketching the solution. And suddenly his eyes light up and he perks up. And the mood in the room changes. But proportionately, my manager starts getting more and more and more uncomfortable. And he starts squirming in his seat and he starts making excuses saying, well, you can't do
Patrick Spencer (07:28.945)
Hahaha.
Justin Greis (07:35.009)
it exactly that way, or, you know, mileage varies, or he starts sort of qualifying everything I'm saying, like, no, no, no, this is the way to do it, this is the way to do it. Anyway, we got to the end of the meeting, and my client, the auditee, was so excited, got up out of his seat, gave me a hug, and he ran out of the room to go tell the VP everything that he'd learned about how to fix their audit findings, which were pretty material at the time. This was the age of Sarbanes-Oxley; nobody knew what was gonna happen if you got a bad
Patrick Spencer (08:00.999)
Go pen the paper. Hang on.
Justin Greis (08:02.998)
So we, so I remember my manager took me aside and said, Justin, that was some of the best consulting I've ever seen. Unfortunately, as an auditor, we can't do that. And we're probably gonna have to remove you from the audit. And that's not something that you should do again. And his name was Eddie. And I go, Eddie, that's awesome. Well, first of all, I'm sorry, I didn't mean to do that. But how do I get to do more of that? And he said, well, you gotta be a consultant for that, not an auditor.
And so what I found is I had a passion for solving problems, not just finding them. And I just couldn't help myself. It's sort of built into my DNA. And so I moved after six months. I would have either gotten fired or I would have quit if I'd stayed. So after six months I moved into the consulting practice. And this was, in EY's history, this was after the sale of the consulting practice to Capgemini.
Patrick Spencer (08:41.981)
Great story.
Patrick Spencer (08:55.056)
Yeah.
Justin Greis (08:55.63)
and we were rebuilding, and the part of the practice that the firm had retained that did a little bit of advisory was cybersecurity. We were all of maybe 50 people, and we grew it from 50 to the 7,000-person global entity that it is today. And that started back in 2004.
I found out that I was good at consulting. I love technology. And so I moved into the tech strategy practice, where I helped found that practice with a bunch of people who joined from Accenture, and we grew that. Made partner in that practice, and then we founded the digital practice and grew that into what it is today. I returned back to cybersecurity in 2018 because I just couldn't get away. Couldn't get away from it, and I was always doing cybersecurity.
Patrick Spencer (09:49.871)
I'm in there.
Justin Greis (09:51.861)
You know, you can't escape it. You can't escape it, as hard as you try. So, you know, I returned back and took over global leadership roles in the practice and came aboard then. And what I found is I love building things. I knew this about myself. I was an entrepreneur before I started at EY for my MBA. And I...
I love to build things. so when McKinsey came knocking and said, hey, we have a seed of a cybersecurity practice. We want to grow it. We want to scale it. We want to build it into something really distinctive. It was too good of an opportunity to pass up and came aboard. And the work that McKinsey has done to advance cybersecurity at the board level, make it relevant to the C-suite and to the business has been some of the most
impactful work in my career. And what I found there is that using McKinsey's platform for having those conversations and taking cyber to places where it needed to go, but wasn't naturally able to go, was incredible. And so we did that over and over. And I think that's what McKinsey is exceptionally good at. So I got to learn the hands-on stuff and the operations of
Patrick Spencer (11:06.173)
Hmm.
Justin Greis (11:17.326)
how to run high performance technology organizations, cybersecurity organizations at EY and build those things out and then take that to the board, take that to the executives at McKinsey. And it has been just an amazing journey and I'm excited for what's next.
Patrick Spencer (11:33.105)
That's great. Now, you piqued my interest. When you were at McKinsey and you were trying to get cybersecurity as a topic of discussion and measurement at the board level, were you going through the CISO or were you going around the CISO? And who typically was your sponsor in to the board to make that case? And ancillary to that, do you see CISOs becoming better equipped over time to facilitate those discussions and articulate the true risk of security? Because
Patrick Spencer (12:02.971)
That's easier said than done as both of us know.
Justin Greis (12:06.414)
Great question. Patrick, very rarely was our client the CISO at McKinsey. I think the CISO wanted to work with us, but we were generally brought in by the CEO, the CIO, the CRO, the COO, or a head of a business unit on a business matter that had cybersecurity implications or where that was a blocker.
to whatever it is that they were trying to do. Of course, there were times where we were brought in by the CISO, and what I found is those CISOs who brought us in were very business-minded, were very attached to the business, and could see the business implications of the blocker, and needed help there, and wanted to build those bridges.
But it was very rarely, very rarely the CISO. At EY, that was primarily our client, very operationally minded, very operationally focused. But let's talk about, I think what you're getting at is the evolution of the CISO. And it's super interesting, and my answer is fundamentally different than it was four years ago.
Patrick Spencer (13:17.767)
Yeah.
Justin Greis (13:28.456)
I have seen, and I am the CISO's biggest advocate, I think they have the hardest role in the organization. And if you think about it, there is no other role where you are constantly under attack.
where you have somebody who is undermining you. You have nation states that are coming after you. You have threat actors, you have insiders. I mean, literally your job is under attack all the time. I think the only other person is maybe the chief marketing officer, where your competition is trying to do that. But that role is one of the toughest out there. The role has evolved from...
Patrick Spencer (13:56.445)
Ciao.
Patrick Spencer (14:07.623)
Hmm.
Justin Greis (14:12.622)
you know, somebody in the back room pounding on a computer screen, an administrator, a technologist, into a business enabler. But
everyone right now is on a continuum. There are some companies where they're operating in keep-the-lights-on mode, you know, hey, we don't want you seen or heard, and in the background, and in that way they're being treated as a back office function. And there are those that are being seen, especially in the age of AI, as enablers, people who can help the business sort of stay in guardrails, stay safe, you know, protect themselves from themselves, or the ideas where
they say, that's a great one, but how do we do that safely? And of course, I love working with those who are in that business enabler mode, but the thing I really love doing is helping companies get from the left end of that scale to the right end of that scale, because there's so many things that you can do. But getting to your question,
CISOs, four years ago I found, far more fell on that administrator, keep-the-lights-on, back office end. But I am fundamentally seeing an accelerated shift because of the convergence of infrastructure,
Patrick Spencer (15:26.461)
A back office function.
Justin Greis (15:39.095)
technology and cybersecurity. They used to be very distinct disciplines, but it's all coming together into platforms that are being served up to the business. And AI has catalyzed that now because it is the convergence of technology and information and business context. And so there is no better time for a CISO to be able to say, hey, I understand the business. I understand how this works. I understand what you're trying to do. And I can help you do it
faster, safer, maybe cheaper, maybe. But I'm seeing the convergence of that infrastructure. And we've seen that with the promotion of several key cybersecurity executives who are now taking on infrastructure roles and being sort of a combined CTO-CISO or CIO-CISO. But I am seeing that evolution happen quicker, faster than ever before.
The thing that CISOs need to work on is communication, engagement, and relationships. Oftentimes, they are so focused on the operations of what they are doing in the day to day that they're not able to pick their heads up and forge those relationships that matter and get them into the room when those conversations need to be had. It's easier said than done,
because for the past 20-some-odd years, the function has been underfunded to the point where that's all they could do. They were just trying to survive. But as the role and as the function has gotten to parity, has gotten to par,
more are able to pick their heads up and say, how can I be more strategic? How can I be an enabler? How can I help the business go faster? And I'm loving it because we're seeing friends of mine get promoted. We're seeing people in the industry sort of rise up and take leadership roles. And it is the best time to be a CISO in the industry right now.
Patrick Spencer (17:45.405)
That's great. Now, on that note, just building on what you said, I read an article or you recently made a post, or maybe both, I forgot which, where you argued that, you know, cybersecurity needs to be a business enabler. Now, how does the CISO articulate that vision and become part of that discussion, where, because we're doing these things from a cybersecurity standpoint, we can attract more customers? Our existing customers will
Justin Greis (17:54.807)
Yeah
Patrick Spencer (18:15.217)
have more confidence in us knowing that the data that we have from them is being protected. Or how do we launch a new business initiative and tie cybersecurity to it in a way that makes it more attractive in the marketplace? There are a number of different ways, I suspect, of doing so.
Justin Greis (18:29.122)
Yeah, yeah, Patrick, I mean, those are great examples. So yes, cybersecurity should be an enabler, a differentiator, a competitive advantage, because it is the very foundation of trust. And in the age of information and connected devices, where we're making buying decisions based on who do we trust,
cybersecurity is a buying decision. And we did a report at McKinsey, it was a digital trust report, and we found that that was the case: a secure product will materially alter a customer's buying choices if you can show and prove that it is secure, it's tested, it meets the level of rigor.
We all know that the compliance reports are just that. They are the minimum bar. It doesn't necessarily mean you're secure, but it means you are compliant. But it also introduces a level of rigor. When faced with a decision of do I buy product A or product B, and product A has a certification, a report, is rigorous, is tested, and is externalizing that to the market,
and one that doesn't, it's no surprise. You're gonna make that decision all day long. And so what we believe, the assertion for that article and the research that we've done, is that the more you can externalize and make cybersecurity capabilities customer facing, to build the foundation of trust, the better.
And that is going to materially alter your trajectory.
Justin Greis (20:24.872)
Saying it and doing it is a very different thing. And one of the reasons that we put the article out there is that McKinsey made the effort to bring together our perspective with the NACD perspective that is out there at the board level, and we believe they were very compatible. And so at RSA this past year, we put together a panel of tremendous CISOs and really amazing thought-leading board members who believe that cybersecurity is a differentiator,
and we put them in a room and a panel together and we talked about it. And we saw it as our mission to develop board members who can work with management to say, how are we building trust as a foundation into our products using cybersecurity, using resilience as a differentiator for us?
And then how can we elevate CISOs into positions where they actually have a hammer to swing to be able to be in the room? That's fundamentally what I'd love to see happen is get the CISOs in the room and get the board members to recognize and get the C level to recognize that this can be a differentiator for every business. Now, I've done a lot of work for automotive.
and some of them are very unsophisticated in their manufacturing process. About 10 years ago, one said to me, what does cybersecurity have to do with us? Even if systems go down, we may still be able to produce. And it was that "may" that I capitalized on, because it's not just a matter of cybersecurity, it's also a matter of resilience. And what we found is that if one machine goes down,
Patrick Spencer (22:05.649)
Hahaha
Justin Greis (22:14.346)
in a connected environment, because all the machines are connected, they aren't able to sell, produce, ship product out to customers, and that is the lifeblood for them. So, you know, everybody, regardless of your business model, is a digital company. And even if you're using it in a production manufacturing environment, it can still bring you to your knees.
But I prefer to approach it from a positive angle to say, how can it elevate you to new heights, to make your product, your processes and your company even more resilient than they have been?
Patrick Spencer (22:55.941)
Yeah, that's a good point. I remember interviewing, it's been a few years ago, a manufacturer actually, speaking of manufacturers, that probably had that perception until their employees started showing up at 6am and they had a green screen, you know, call this number. Their entire manufacturing environment was locked down for, I think, four years until they paid the ransom. That actually is what happened. And it was an interesting scenario as it played out; they had to bring in a third-party consulting firm to help them coordinate and communicate with
Justin Greis (23:16.606)
God.
Patrick Spencer (23:24.903)
the attackers so they could get their data back. Sometimes you never get your data back.
Justin Greis (23:30.35)
You don't, you know. And you're right, Patrick, sometimes you need the breach to find religion. It's not important until it is. And, you know, I've had many clients where, you know, the say-do ratio was out of whack, or they believed it was important but weren't willing to put the investment into it until they were attacked. And then it became the number one priority.
But you're exactly right. Sometimes you need that to happen. But that is the case with everything in life.
Patrick Spencer (24:09.553)
So that's a good segue, when you're talking about being compliant with different regulations, whether it's SOC 2 or ISO, NIST, the NACD that you talked about. I'll give you a moment to speak about that here in a moment. Are those compliance regulations helping to drive better security? It's a baseline.
And then leading into that or leading out of that, do you think that those regulations are going to help when it comes to AI and the risk from a data standpoint specifically that exists there?
Justin Greis (24:50.54)
Yeah, so it's a great question. And I think it's important that we don't confuse security and compliance. They are two different topics. And a mentor of mine, I wish I could take credit for this, but a mentor of mine, Brian Kelly, once said, you know, you can be compliant and not secure. But if you are secure, you should be compliant,
and it will naturally fall out. So your level, your aim should not just be, we are gonna get a SOC 2 or an ISO certification and then stop because that is the minimum height to ride. That is the minimum bar that you need in order to establish the lowest level of security needed. It's important because it introduces, as I said before, the rigor that doesn't exist in many places.
However, it is not the, you know, where you should aim your program if you're sort of aiming at a level of maturity or a level of security that you should feel comfortable with.
You know, we have done many different cybersecurity or technology strategies, and we were setting goals and objectives. And the minimum needed for, say, a NIST level three or a NIST level four, on a CMMI basis of five, is actually pretty low when you think about it. But when you go talk to the business,
the CFO, the CEO, the CIO, the COO, and then you interview the board, what they're asking for, without saying it directly that we need to be at, you know, a level 4.5, we need to be this, actually takes you to a level of maturity far beyond, you know, the minimum threshold of a three, let's just say.
Justin Greis (26:51.822)
But you need to do that translation to say, hey, did you know that we are launching a new product in South America next year? Oh, interesting. What are the data implications? How are we ensuring a resilient environment? How are we responding to incidents? How are we monitoring? How do we have eyes on glass? Things of that sort. But they will never say we need X, Y, and Z. You need to be able to do that translation and say, if we are gonna do that from a business standpoint,
then it means this, which sort of takes us up to a level of maturity here. So, you know, from a standards standpoint, I find that if you're aiming at the minimum bar, it's not going to get you to where you want to be. But if you actually do the translation as to what the business says it is doing or wants to do, it'll exceed that. Now, regulations, different from standards, I think, are really good in that they force you to think about it.
If there were no posted speed limits and we all kind of drove our own thing, some of the people would be going too fast, some people would be going too slow, none of us would have the guidance. That's what I view it as: guidance as to what you need. In certain industries, it's mandatory. And I've done quite a bit of work in banking, where the OCC or the Fed are involved, and they're cracking down hard
on security, resilience-related standards, and enforcing that minimum bar, that minimum necessary, that is really important. Because if you didn't have a regulatory body causing you to think about it and build that into your requirements, left to your own devices, you're going to prioritize other things. And we've seen that over and over and over. So I think it's a good thing.
Patrick Spencer (28:28.317)
Thank you.
Justin Greis (28:48.002)
I don't think that that is where you should peg your program to, your overall objective of your program to, but if you bake that in and aim beyond that, you will achieve compliance all day long.
Patrick Spencer (29:03.675)
Yeah, those are good insights. From an AI perspective, do you see that we're beginning to see this? You have some federal standards being rolled out that have AI baked into them. You have the AI Act in Europe that's percolating. Do you see AI becoming part of those controls that are specified in these compliance standards like NIST and so forth? Do you see that happening sooner? Is it going to take some time before that occurs?
Justin Greis (29:31.727)
Oh no, it's happening. I think what we're seeing, though, is the convergence of AI from an ethical use and a business use perspective with the technical controls needed in order to ensure proper oversight, governance, ethical use. They've sort of developed in parallel tracks, so that if you are going to implement
AI as infrastructure, you need data monitoring, you need segregation, you need algorithms, you need MLOps, you need red teaming, you need to test, you need these technical things that, quite frankly, if you've already thought about the secure SDLC and incorporated it, are just a few things to add on, but it really underscores the importance of being plugged in. If you haven't thought through your secure SDLC, it's a big lift.
And so many companies are switching or have switched to a product operating model. And that's good and it's bad. It's good because the business is going to move at light speed to get product out. It's bad because they're moving at light speed to get product out. And security may not be plugged in in an organization where, in a typical environment, where security is already short on resources.
But the product operating model oftentimes decentralizes the development of a product, and that means making sure that the developers now know the security controls and considerations that they have to plug into, from a security, privacy, AI, ethical use, responsible AI piece. And so layering that on can be a big organizational change for a company that hasn't thought through that.
But if you have, if you've been through the journey of product operating models, secure SDLC, DevSecOps, DevSecPrivOps, it's another layer that you add on to make sure that your responsible AI committee is plugged in for review, that you've got the right monitoring controls, and that security either is involved or has armed the product teams and the dev teams with the tools necessary and then the oversight necessary.
Justin Greis (31:55.331)
to stay involved. So I think that AI in general is going to have, it is having, wide-sweeping implications. Companies are on varying degrees of maturity and rigor, and they're gonna continue to be, and they're just trying to catch up and develop those use cases at light speed. However, companies that have sort of thought through the implications
for the past few years are going to have an easier time than those who haven't.
Patrick Spencer (32:28.229)
Do you think, when it comes to AI and risk, many organizations are doing the self-assessment right now? And like many other areas where they do self-assessments, like CMMC, for example, we did a survey this spring, and I had a few questions in the report that we just published, actually, where there's a huge overconfidence when it comes to, yeah, we're compliant with all 110 level two controls in CMMC 2.0. And then you have a third party come in.
And they say, well, you're actually not compliant with 70 of the 110 controls, or 50 of the 110 controls. Do you think that's happening when it comes to AI as well, because of the over-exuberance, or the exuberance, that exists in terms of AI adoption in the marketplace?
Justin Greis (33:15.546)
Yeah, I do. You don't know what you don't know. And a self-assessment, I guess it depends on who's doing the self-assessment and to what end and how deep they're going. Look, as bad as I was at auditing, I can tell you I know the difference between a test of design and a test of operating effectiveness.
you know, the test of design kind of looks at, hey, is it put together in a way that makes sense, that is controlled, that prevents the risk or detects for the risk that you are talking about that's in question.
But inevitably when you do that operating effectiveness, is it working as designed and you start sampling and you start looking at the organization and you start collecting evidence, in most cases, the disparity is quite great.
And so I think if you were to then take it a step further and just say, hey, we're going to survey people and ask them if you are compliant, if you have the right level of controls, of course they're going to answer, well, yeah, yeah, we're doing that. You may be doing that in some small portion of the organization, but are you doing it enterprise-wide? You may be doing it when somebody asks, but are you doing it proactively?
You may be doing it in one product, but you're not doing it across the full portfolio. And that's, quite honestly, what we've seen through the hundreds of NIST assessments that we've done: hey, we're at level four, we're at this,
Justin Greis (34:55.948)
but that didn't factor in that enterprise-wide coverage piece, where if you actually start doing the testing on it, you see that you're not as covered as you think you are. So to your point around AI, I think if you...
if you ask that question, are you doing the right things? Are you doing these 10 things? People will inevitably answer, yeah, we're doing some version of those things. But are you doing it in a way that's consistent, repeatable, controlled, measurable? No, no, generally not. And one more point I'll make on this. Culture.
Patrick Spencer (35:38.909)
Hmm.
Justin Greis (35:41.985)
In the assessments and strategies and implementations that I've done in organizations where colleagues had the psychological safety to be able to disclose something bad, to be able to come to the table with a risk or a problem for which they didn't have the answer to and needed to problem solve it out or needed the resources.
Those organizations that were learning organizations achieved far greater results. And there's tons of research. Amy Edmondson at Harvard put an amazing book together called The Fearless Organization. She talks about this and her research is just tremendous. But it extends to every aspect of cybersecurity. If you create an organization that's a learning organization
that can say, we have 10 things wrong, and you talk about them, and you burn them down, versus hiding them, or versus responding to a survey, yep, we got it, we have all this stuff in place, you are going to not only get the budget you need, you are going to get the results, you're going to do better in your career, you're going to do better as a company. But the challenge is that starts at the top,
at the board or the executive level. And it's one of the, quite frankly, one of the reasons why we initiated the work that we did with the NACD, the National Association of Corporate Directors, which is the leading organization for board members. And it's quite honestly what we're trying to change at the top, and help them recognize that we gotta create that psychological safety. We have to have the right conversations, create the conditions for people to come forward and answer those things honestly rather than sweep them under the rug.
Patrick Spencer (37:04.241)
Yeah.
Patrick Spencer (37:35.354)
Well put. Now, NACD, you just finished a big event with them. I think our audience may find it helpful for you to describe what that was and who was involved.
Justin Greis (37:45.561)
Sure. It was a great event at RSA, and we're doing, well, I can't say we anymore, McKinsey is doing a number of things with the NACD, and I'm a long-time member of the NACD. I continue to stay involved in the things that they're doing. And just as a little plug for the NACD, they're the club that all board members want to belong to and should belong to. They do education,
ongoing learning, they are sort of a platform for board members and teach people how to be effective and high performing board members. Because it's a very different thing going from management to governance. You lose a whole lot of levers and tools that quite frankly are your go to when you start to oversee.
a company versus being in the day-to-day of the operations. So you shouldn't confuse the two. You should know kind of where you sit. But that is also evolving, and the role of governance is evolving at the board level. And so we at McKinsey were in with management in the C-suite every day. We do work with the greatest organizations every day.
And we had this, we started seeing this gap between management and the board on technology-related issues, cybersecurity being the very first of those that we decided to tackle. And what we wanted to do at RSA is bring together the CISOs and board members for a conversation. And what was interesting when we did, there was no disagreement
that this is important. Of course, we stacked the panel to get people on there who believe in the power of cybersecurity. But we had a board member, Matt Rogers, who's on the board at Exelon. Nora Denzel, who's on several boards, Sony, Gen Digital, and a number of other boards. We had Noopur Davis, who was interesting because she sits on both the CISO side and she's on several boards.
Justin Greis (40:07.278)
Katie Jenkins, Marco Marino, and I may be leaving a few out. But what was interesting as we explored the topic is everybody agreed that this is an important capability. It needs focus. It can and should be a competitive differentiator. And we need to be doing more. And CISOs need to up-level their game to
come to the table where board members and the C-suite are at, and board members need to develop better acumen and understanding of it so that they can ask the right questions and create a platform for the conversation to be had. There have been a number of times where I've worked on board reports for clients, and we throw,
Patrick Spencer (40:39.677)
Thanks.
Justin Greis (41:04.974)
you know, X million vulnerabilities at the board. Phishing campaign stats. We'll throw incident response and compliance metrics. And all of that's great. But the telling of the story, the contextualizing it to the business, and the definition of why and when things went from green to yellow to red and back down to yellow and back down to green is so important. And it's that art of storytelling that
we found was missing in some cases. And when it was there, it was magic. It's like, oh my gosh, the story came together. And so we identified several of those things at this event at RSA, that we published on and that we're gonna continue to explore. McKinsey will continue to explore it, and through the work that I continue to be involved in at the NACD, we'll continue to explore it. But fundamentally, I love the NACD because they have recognized that
cybersecurity is an important topic at the board level. They even have a cybersecurity oversight certification that board members can get, which is phenomenal. And the perspective that we brought in at the McKinsey level was how CISOs can up-level themselves so that they can speak to board members where they're at, which was just a great conversation.
Patrick Spencer (42:26.267)
That's great. Well, before we run out of time, I have one more question I've got to ask you, you know, AI related. As you look at the landscape, you know, most are not thinking that much about AI risk when they're diving into the AI pool right now. Do you see a particular lever or event maybe causing the marketplace to begin to think much more about cybersecurity in regards to their AI initiatives? You think maybe
Justin Greis (42:30.158)
Sure.
Patrick Spencer (42:55.837)
the evolution to agentic AI and the big embrace that's happening there right now, is that going to do it? Because suddenly you almost have a supply chain of risk tied into these individual, previously silo-based AI tools that are now connected to a lot more stuff in your environment. Do you see that as maybe the point where organizations begin to think more broadly about cybersecurity in regards to AI?
Justin Greis (43:25.112)
So it's a great question. So you're asking me to predict the future. I'll give you maybe a few things that concern me when I see what companies are doing. Agentic clearly is what is next, trying to take the human out of the loop. And I think this is going to be a slippery slope and a tricky knife edge to walk
as we start to delegate the decisions that we make on a day-to-day basis to AI.
It's going to have positive benefits. I had a conversation the other day, over email, with an AI agent, and I didn't know it was an AI agent until I was on my 10th email, because I said, your directions to this point have been amazing, but I just don't quite get this one thing. Can you hop on the phone? And it responded and it said, actually, I'm AI. I had no idea. I had absolutely no idea. I was blown away and it was amazing.
But if we think about that, the company that I was interacting with had clearly thought through the implications of that, had done some extensive testing. And by the way, they did get me a person to talk to, which was great. But as we start to take humans out of the loop on decisions, and we apply it to processes like insurance, we apply it to processes like financial decisions and loans,
apply it to vulnerable populations. These things that seem like efficiencies and good ideas can now be existential risks to a business model. You know, the risks are class action lawsuits, the risks are fines.
Justin Greis (45:20.514)
The risks can be far, far greater. Turning your agent loose on a set of data that should never see an AI algorithm is a huge worry to me. Companies have massive repositories of data and the possibilities are limitless as to what we can do with that.
The question in my mind is who is controlling and who is deciding what they do with it? How are they securing it? And how are they keeping themselves in that ethical guardrail and in that security guardrail to make sure that they don't sort of wander too far outside? So what are we gonna see? I think there is a push to take humans out of the loop. We've seen that, I saw it, we're experiencing it every day.
I worry about the companies and the organizations that are taking that too far, that are enjoying the benefits of AI a little too much and don't know at what point to say, no, we pull back.
Patrick Spencer (46:25.019)
So pull back. Yeah.
Justin Greis (46:28.84)
And if we are to do that, here are the things that we should not and cannot give up. To me, that's it. And I think about the implications, I mean, is Skynet gonna become self-aware, and is Judgment Day, that Terminator 2 situation, coming? I don't know, I don't wanna be an alarmist, but what I do know is: always have a process for keeping a human in the loop, knowing at what point you should not remove that human from the loop, and having
a mechanism to oversee those points at which humans are now leaving the loop of decision making. That is my worry.
Patrick Spencer (47:08.945)
And measure that risk. Know what your risk actually looks like. I totally agree. Well, we could go on for a couple hours, Justin. Hopefully we can have you back, as you're a fount of information and an engaging conversationalist. I learned a lot, and I'm sure our audience did. Thanks for making time. Where can our audience find out more about you? I suspect LinkedIn is probably the best way to do so. But then your new company, where should they go?
Justin Greis (47:12.302)
100%. 100%.
Justin Greis (47:19.438)
We'd love that.
Justin Greis (47:30.872)
Yeah.
Yeah, look me up on LinkedIn. I'm all over it. So you can go to acceligence.com. That's A-C-C-E-L-I-G-E-N-C-E.com. And we should be going live here mid-September if all goes to plan. And I'd love to connect with you and work with you if the fit is right.
Patrick Spencer (47:54.375)
That's great. It's going to be a big success. I'm sure we'll be watching it. Justin, thanks for your time. For our audience members, check out other KiteCast episodes at kiteworks.com slash KiteCast. Thanks.
Justin Greis (47:57.272)
Thank you.
Appreciate it, Patrick. Thank you.
Justin Greis (48:08.943)
Thank you.