Follow The Brand Podcast with Host Grant McGaugh
Are you ready to take your personal brand and business development to the next level? Then you won't want to miss the exciting new podcast dedicated to helping you tell your story in the most compelling way possible. Join me as I guide you through the process of building a magnetic personal brand, creating valuable relationships, and mastering the art of networking. With my expert tips and practical strategies, you'll be well on your way to 5-star success in both your professional and personal life. Don't wait - start building your 5-STAR BRAND TODAY!
The Agent Has an Identity with Mark Lynd and Grant McGaugh
Agentic AI stops being “just software” the moment it can take actions across your systems and that’s where leadership, cybersecurity, and trust collide. We sit down with Mark Lynd, a globally recognized cybersecurity and AI thought leader and former CIO, CTO, and CISO, to get specific about what enterprise teams misunderstand when they talk about autonomous AI agents. The promise is speed and cost savings; the reality is permissions, accountability, and a threat landscape that changes when agents have identities and privileges.
We dig into why “identity is the new perimeter” in an AI-driven world and how attackers target the keys to the kingdom: access, escalated privileges, and the ability to work around security controls. Mark shares how common IAM problems like permission sprawl and forgotten access can become even more dangerous with agents, especially as organizations scale from a few pilots to hundreds or thousands of AI agents. We also talk governance frameworks like NIST and ISO, why frameworks alone don’t equal evaluation criteria, and how boards push for innovation while regulators demand control.
If you’re a CIO, CISO, security leader, or board advisor trying to adopt agentic AI responsibly, this conversation offers a grounded approach: start with small, auditable use cases, keep a real human-in-the-loop model, align every agent to business goals, and build trust through repeatable wins. Listen, share this with a teammate, and subscribe plus leave a review with your answer: what’s the first workflow you would trust an AI agent to run?
Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates, visit 5starbdm.com.
And don’t miss Grant McGaugh’s new book, First Light — a powerful guide to igniting your purpose and building a BRAVE brand that stands out in a changing world. - https://5starbdm.com/brave-masterclass/
See you next time on Follow The Brand!
Why Agentic AI Changes Leadership
SPEAKER_00Welcome everyone to the Follow The Brand Podcast. We are going to have a candid conversation today, and I think it's going to bring a lot of information around agentic AI, not just as a buzzword, but as a real force reshaping leadership, cybersecurity, and enterprise decision making when AI systems begin to act, decide, and adapt. And these lead to a lot of questions around trust, around risk, around accountability as we move from the lab to the boardroom. So today, my guest is Mark Lynd. Mark Lynd is a top five globally ranked thought leader in cybersecurity and artificial intelligence. He's the head of executive advisory and corporate strategy and a four-time CIO, CTO, and CISO with over 20 years advising global enterprises. Mark has been a featured speaker for organizations like Oracle, IBM Watson, Cisco, Intel, and Cohesity, and was an Ernst & Young (EY) Entrepreneur of the Year finalist. He's also a U.S. Army veteran, having served in the 3rd Ranger Battalion and the 82nd Airborne. So I'd like to welcome him to the Follow The Brand Podcast. Mark, would you like to introduce yourself?
SPEAKER_01Thanks, Grant. Uh thanks for having me on today. Yeah, no, I think you covered it uh pretty well. I'm looking forward to our conversation.
SPEAKER_00That is wonderful, wonderful. We're gonna jump right in.
SPEAKER_01Okay.
What Most Leaders Misunderstand
SPEAKER_00Because you have, you know, you've been featured a number of different times. And I know I haven't covered your entire bio, but I know what you've done. You've been a CIO and a CISO multiple times. And I want to ask you, just from that seat, what does most of the market still misunderstand about agentic AI today?
SPEAKER_01Yeah, you know, that's interesting. I advise lots of CIOs and CISOs, well over 150 right now, in both the private and public sector. And one of the things they're trying to figure out is, what are my use cases for agentic AI, and is it safe for me to let it be autonomous? And then I think the other thing is it's being driven by cost controls and things like that, where they're wanting to do more with less, especially in the public sector. I hear it over and over and over: how can I do more with less? Well, one of those ways is to automate. And obviously the promise of AI, if you will, and even the greater promise of agentic AI, is that automation. And I think where it gets a little confusing for them is things like agency, right? If you look at the OWASP top 10 right now for LLM security issues, they already have excessive agency as one of the issues for agentic AI, because those that have deployed it in production grant agents too much. And so I think that's one of the things they have to struggle with. A lot of the same questions they had when they did AI are the same ones they're having with agentic AI. But on top of that, now they've got to decide: is it asynchronous or synchronous? Asynchronous meaning it's out there on its own, it's making its own decisions, it's fairly autonomous. Synchronous is kind of like what we do now, where we go inside the LLM and we create something and we run it from time to time, over and over, and it acts as an agent and does things for us. So there's synchronous and asynchronous. And along with some of these other things, there are the typical production issues. Am I gonna have a person in the middle? What kind of decision making are we gonna do?
How comfortable, and how ethical, do we believe it is, letting it out on its own? And are we worried about excessive agency and cybersecurity with agentic AI?
Threat Landscape And Risk Frameworks
SPEAKER_00All of those things are so important. You have to consider all of them, and I know a lot about agentic AI in the field. What concerns me really is that once you design something and you do put it out in the field, I don't know how flexible it's going to be, because the variables always change in the real world; they're not always the same. So can the AI adjust to those variables? I'm not sure. Here's one of the major concerns, and you're probably one of the few people that can actually answer this, because you sit at that intersection of AI agents and cybersecurity: compared to traditional automation, like robotic process automation, or the gen AI tools you just talked about, when you get into an agentic world, how is that going to change the cybersecurity threat landscape?
SPEAKER_01Well, I think, you know, uh I think that's a question a lot of people are struggling with, right? Especially um in large environments or you know, financial services, um, things where uh healthcare where things can go wrong. And if they go wrong, it can be very damaging very quickly. And what we're starting to see is um there's there's some really great uh AI risk frameworks out there. NIST has one, ISO has one, you can just keep going down the list. Um, and they're starting to get a little bit more mature, right? When they first came out, obviously they they're just kind of like AI-specific standards that everybody would think, you know, uh more around regulatory and and and management. But as we start thinking about, like you mentioned, the autonomous growth of these agents as this market matures, you know, we're gonna have to figure out how do I align them with what we're trying to do as an organization. How do I make sure that I have a framework in place to do that? How do I assign uh uh permissions? Because look, an agent has an identity, just like you and I have an identity, right? So uh it's gonna be a lot like an employee. Then also, how do I make sure that we have a person in the middle? That's a very important piece because the regardless of what the agent does, there are very few use cases where there isn't some need for oversight, just like you would uh with a team of employees.
Identity Becomes The Attack Surface
SPEAKER_00Yeah, you know, that's it. That's it. Oversight, you know, responsibility, accountability is big. You brought up something that we really gotta get our heads around, and that's identity. You know, the agentic AI model has an identity, and I know you've written that identity is now the new perimeter.
SPEAKER_01That's right.
SPEAKER_00And here's the question Why has identity become the primary attack surface in an AI-driven world?
SPEAKER_01Mainly because once you have that, you have the keys to the kingdom. You have escalated privileges, you have access. The other thing is that with the identity you're able to work around some of the security tools and platforms they have in place. That's why a lot of people attack identity. The other thing is, if you look at it, everybody's aware that identity just continues to grow in what it covers. A good example of that would be, if you look at the Oktas, the Pings, the Duos, et cetera, they're building their use cases around somebody logging in, but if they log in in Salt Lake City and then 20 minutes later they log in in Shanghai, we have an issue, right? And they're also able to look at some of their behaviors and what they're doing, behavior analysis, et cetera. So it just continues to grow and expand in how they can secure identity. Now think about when you have a billion or two billion or maybe 15 billion agents out there, twice as many agents as there are people on the planet. And I don't think that's a very big stretch, because when you look at some of the growth we've had along this journey, how many people would have thought that they'd connect with and use five, six, seven clouds a day, which on average they do, right? People don't think that way. It'll be the same with agents. We're not gonna see them necessarily, they're not gonna be right in front of your face, but they're gonna be doing things for you. They're gonna be helping you along, they're gonna be advertising to you, they're gonna be tracking you, et cetera. Well, if we don't have identity, it's gonna be very difficult. And with that identity comes the risk you just asked about, right?
Now, if you and I get something and I have questions about it, I might escalate it to the help desk or to our cybersecurity team. How do we know the agent's gonna have that same kind of intuition, that same kind of foresight to do that? Or the historical perspective to understand that that's not normal? What if it only went back six months and not six years? So there are a lot of moving parts to this. And just like everything else, it's a fairly immature space. So as it matures, we're really gonna have to stress robust oversight mechanisms so that the autonomy remains bounded, traceable, and aligned with the business. That's the critical element in my mind.
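The Salt Lake City/Shanghai example Mark gives is what identity vendors call an impossible-travel check. A rough sketch in Python, assuming hypothetical login-event fields (`lat`, `lon`, `at`) and an illustrative 900 km/h speed threshold, roughly airliner cruise speed:

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(prev_login, new_login, max_kmh=900.0):
    """Flag a pair of logins whose implied travel speed exceeds max_kmh."""
    km = haversine_km(prev_login["lat"], prev_login["lon"],
                      new_login["lat"], new_login["lon"])
    hours = (new_login["at"] - prev_login["at"]).total_seconds() / 3600.0
    if hours <= 0:
        return km > 0  # simultaneous logins from two different places
    return km / hours > max_kmh

# Salt Lake City login, then Shanghai 20 minutes later
slc = {"lat": 40.76, "lon": -111.89, "at": datetime(2025, 1, 6, 9, 0)}
sha = {"lat": 31.23, "lon": 121.47, "at": datetime(2025, 1, 6, 9, 20)}
print(impossible_travel(slc, sha))  # True: ~10,000 km in 20 minutes
```

The same rule extends naturally to agent identities: an agent whose credential is suddenly exercised from an unexpected network or region is the machine analogue of this check.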
Speed Versus Control In The Boardroom
SPEAKER_00That's a hundred percent true. Understanding what it is, what capabilities we're giving it, you know, we've got to put it in the sandbox. You think about a child, right? A young child, usually you put them in the little sandbox, and they're in that world and they operate in that world, and they can't do too much damage, right? They should be good. But once you take them out of that and they're in the whole playground, and you're talking about not really watching them, we've really got to think about that as we move forward. Which brings me to this next question, because the market asks for speed. You're a corporate board advisor, if you will. Boards want speed, but regulators want control. You sit between those worlds, right? You understand those conversations. How should executive teams balance innovation with responsibility as these agentic systems scale, like you said, to potentially billions of agents?
SPEAKER_01Yeah, you know, a perfect kind of use case example of that is they want to rush this out. And a lot of the CIOs and CISOs that I work with are under that pressure today, right? They are. They're under pressure from their competition, from their board, from their investors, or from their employee base. We're seeing it across the board. But you think about how you're gonna do that, and I think we go back to that alignment, right? The governance and alignment and putting that in place. With generative AI, we're worried about things like model poisoning, prompt injection, having the right guardrails in place, all that. Now take all of that and add in access control for agents. That's a perfect example. In today's world, in identity and in AI, we already see this issue where somebody was in an escalated role, then they moved to a different role with a different level of permissions and rights. Well, what happens is they never really turn off the old ones, right? They never do. So if you're an employee for 15 years, you've got 15 years of multiple different roles and permissions, and that's a big security risk we have out there now. And if a bad actor gets access to your identity, now they have escalated privileges across a whole spectrum of applications and sensitive data. We have that same problem with agents, right? Think about it: we have an agent do this now, then we add a little bit, or we replace that agent, but we don't ever turn the other agent off, which we know happens in IT all the time. So it's access control. All the concerns we have about employees, we have to make sure we don't just move the problems we already have over onto agents.
Or if we have security controls and access controls in place for employees, we should apply those to agents as well. So that's why I keep rolling back to identity. And we need to communicate that to the board, because the last thing they want is a breach, right? If I counted every breach in the last year by somebody I do business with, I'd probably have 15 different credit monitoring deals I could sign up for free, because they have to give them to me after they breach my data. I get it all the time. Everybody does. So if you go back to the board and just say, well, we don't have the money, or we're not gonna do it, or we don't have the time, or we don't have the right people, that doesn't send the best message about your capability as a leader. But go back with legitimate concerns, and this is what I tell a lot of the CIOs and CISOs I have these conversations with: frame it so that it aligns with the mission of the organization, and the security. When you can explain it in a frame like that, they understand it. Those things take time, so have a little patience. We're gonna get there. The other thing is, how do we evaluate this? I travel every week, and earlier this week I was with one of the biggest universities in the US, and we had a long conversation about this. One of the concerns we talked about with agentic AI is that you have to have some way to evaluate it, right? Like the NIST AI Risk Management Framework, or Microsoft's framework, or OpenAI's, you can go down the list, ISO, et cetera.
It's a framework, but it doesn't really give you the evaluation criteria. And really, the evaluation criteria is gonna be pretty personalized to your organization, because it's your mission, and your mission differs from other people's missions, right? You've got to align it with what you're trying to accomplish, with the corporate goals, and then put that evaluation criteria in place against the agents you have, especially if you're in an organization with 700 agents or 10,000 agents. If you didn't have some type of evaluation criteria, some kind of framework, some kind of way to ensure they're aligned with the business, can you imagine the chaos?
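The permission-sprawl problem Mark describes, old roles never revoked, replaced agents never turned off, lends itself to a periodic audit job. A minimal sketch, assuming a hypothetical registry that tracks each agent identity's permissions and when each was last exercised (field names and the 90-day staleness window are illustrative):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentIdentity:
    name: str
    permissions: dict  # permission name -> last time it was exercised
    last_active: datetime

def audit(agents, now, stale_after=timedelta(days=90)):
    """Flag orphaned agents and unused permissions: the agent-world
    version of role-change sprawl among employees."""
    findings = []
    for a in agents:
        if now - a.last_active > stale_after:
            findings.append((a.name, "orphaned agent: deactivate identity"))
            continue  # no point auditing individual permissions
        for perm, last_used in a.permissions.items():
            if now - last_used > stale_after:
                findings.append((a.name, f"revoke unused permission: {perm}"))
    return findings

now = datetime(2025, 6, 1)
agents = [
    AgentIdentity("invoice-bot",
                  {"erp:read": now - timedelta(days=2),
                   "treasury:wire": now - timedelta(days=200)},  # left over from an old role
                  last_active=now - timedelta(days=1)),
    AgentIdentity("old-triage-agent",
                  {"logs:read": now - timedelta(days=180)},
                  last_active=now - timedelta(days=180)),  # replaced but never turned off
]
for name, issue in audit(agents, now):
    print(name, "->", issue)
```

At 700 or 10,000 agents, this kind of automated sweep is the difference between the evaluation criteria Mark asks for and the chaos he warns about.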
Trust Layers And AI Vs AI
SPEAKER_00I'm imagining it already. The trust layer, right? How do you trust the information it's giving you? You can start getting into that other conversation. We all remember this, you know, I'm a little older maybe than you, but we used to get the paper, and in the paper you had Spy vs. Spy, one of the cartoons. So now you've got AI versus AI. Think about your LLM, right? And now you've got a small language model that's proprietary within your organization that you've deployed. That's your SLM, and it's an AI, right? You've got AI versus AI, then you've got all these other autonomous agents all operating as well. It becomes kind of a mismatch. Now, even before we jumped on this call, and I want the audience to understand this: I use Zoom a lot. Well, Mark uses WebEx a lot. He had to go through a whole new configuration just to get to the video feed. Think about that when you deploy these AI agents: are they flexible enough to self-configure like that on the fly? And in the cybersecurity world, that's what's happening, right? You've got spy versus spy. You've got, hey, I want to get into your data; hey, I want to keep you out of my data, whatever it may be. What does that scenario look like in your mind?
SPEAKER_01Yeah, I think you bring up a really good topic, because I am worried. If you think about it, generative AI is fairly stateless, right? They have the context window to try to bring some state to it, but we all know that doesn't work fantastically, and it hallucinates. I think the longest they've been able to go is like 31 hours in a session without it starting to hallucinate over a certain percentage. So think about having agents that have, or should have, state. If you think about an employee doing a job, they do have state, right? And they also have ethics; you hope they have the right morals. They know the mission; so much is stored inside our brains. Now all of a sudden you have a non-human entity that's doing things that could hurt or help the organization, and it only has a limited amount of state. I worry about that a lot, about long-term memory, if you will. And also access to the data: is it skewed data? Is it data that we loaded? You know, one of the things you hear about NotebookLM that Google has, where you load all your data in there, one of the biggest reasons for that is it only uses the data inside, so it doesn't hallucinate. Yeah. And a lot of organizations, their data isn't set up right; it's not in a data lake. They don't even have the data set up where they know what is sensitive data, like PHI, PII, all that information. So granting access to data is gonna create problems. And I know I threw a lot of pieces out there, but my point is, I just don't know how all that, as we mature, is gonna become focused on the mission and on getting the right outcomes, versus how much do I really gotta manage these agents?
If I have to manage the agents as much as the employee, the only thing I'm really saving is their cost.
SPEAKER_00Yeah, right.
SPEAKER_01I'm not saving any of the soft costs, I'm only saving the hard costs, the payroll, et cetera. And I may not be getting as good results. I don't know if you've heard, but a lot of these projects have failed. Right? A report came out and said a lot of these AI and agentic AI projects have failed. And if you read it pretty deeply, a lot of it is about not aligning with goals, not having evaluation criteria, not putting the right things in place. And the one thing I thought was really fascinating: not socializing it with the employees.
SPEAKER_00Exactly. Exactly. They always get that wrong. I don't know what it is about technological deployments, and I've seen this a lot: we spend a lot of money on the actual technology itself, but so little time with the people that are actually gonna have to interact with it, integrating it into the culture. The training and getting used to it is just as important. And then to your point, what exactly is this agentic AI agent supposed to be doing? What is the goal? It's kind of cool when we think about it, that wow factor. But when it gets down to it, all right, it has a job to do, and if it does that job very well, just like any roles and responsibilities, there's an outcome, usually a positive outcome. I think we need to, in my opinion, start looking at small use cases. Let's not let this thing out. Remember what I said about the child and the playground and the sandbox: let's not let it out into the entire playground just yet.
SPEAKER_01Yeah, absolutely.
SPEAKER_00Let's let it get.
SPEAKER_01Well, I didn't even get into the auditability and trust exploitation part of this, right? Or the orchestration, with multi-agent. I mean, there's a lot to this, there's a lot to peel back. And just some of the things you and I have touched on, that's enough to make a lot of people scratch their head and say, well, maybe I don't have the data set up the way I should; maybe it's not in a data lake or accessible in the appropriate way. Maybe I don't have the safeguards or the guardrails in place, or maybe I don't have the ability to audit, or I don't really know which employee should be this person in the middle, right? To your point, if we don't socialize it, if they don't understand it, how can they be the person in the middle?
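The person-in-the-middle model Mark keeps returning to can be made concrete as an approval gate in front of every agent action, with a default-deny allow-list and an audit trail. A minimal sketch; the action names and policy sets are illustrative assumptions, not any product's API:

```python
from datetime import datetime, timezone

# Actions an agent may take on its own vs. those needing a person in the middle.
AUTO_ALLOWED = {"read_report", "draft_email"}
NEEDS_APPROVAL = {"wire_funds", "delete_records", "change_permissions"}

audit_log = []  # every decision is recorded, approved or not

def run_agent_action(agent_id, action, approver=None):
    """Gate an agent's action: auto-run low-risk actions, require a named
    human approver for high-risk ones, default-deny everything else."""
    if action in AUTO_ALLOWED:
        decision = "executed"
    elif action in NEEDS_APPROVAL:
        decision = "executed" if approver else "held for human approval"
    else:
        decision = "denied: action not in allow-list"
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id, "action": action,
        "approver": approver, "decision": decision,
    })
    return decision

print(run_agent_action("agent-42", "draft_email"))              # executed
print(run_agent_action("agent-42", "wire_funds"))               # held for human approval
print(run_agent_action("agent-42", "wire_funds", "g.mcgaugh"))  # executed
```

The audit log is what answers the "ability to audit" gap Mark names: every action, who approved it, and why it was or wasn't run.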
SPEAKER_00Exactly. And they're not used to it. Everything is new for everyone, not just for a few people. They're just getting used to AI, just LLMs, you know, at scale, on a worldwide scale. And people are like, oh, okay, I see, I can generate text, I can consolidate emails, I can communicate at scale like I never could before.
SPEAKER_01Yeah.
2026 Readiness And Real Use Cases
SPEAKER_00Now you're asking it to do something more than that, and that's replacing certain workflows. And that's very, very important to really look at, because from my understanding of machines and what machines do, even though you have machine-to-machine learning, they're very rigid in how they do things. They're not very fluid. What I'm saying is, as scenarios change, variables change, they like a very predictable future: hey, if I take one step forward, I can take one step back, one step forward, one step back. When you start changing that stuff around, it starts to get confused. I've seen my own AI get confused when I've given it information and it comes back and I'm like, what are you talking about? Let me reset you, because you're way off on a tangent. Because to your point earlier, the way the AIs are set up now, and I'm talking large language models, even though it might not have an answer, it tries to get you an answer, instead of just saying, I really don't know what you're referring to, can you help me with more information? It just tries to answer the question. That's a flaw that needs to be fixed as we go forward. Now, you've talked about some of these things going into 2026; you've published some predictions around this. What's the biggest single shift that leaders need to prepare for now if they don't want to be caught off guard?
SPEAKER_01Yeah, you know, I think they really gotta think about the use cases. And just like what you pointed out, there's a lot to be said for human intuition. It plays a major role in your performance in the organization, your ability to react to situations that are very dynamic, where agents just don't have that, especially machine to machine. So I think it's gonna go back to that orchestration, auditability, and trust, right? We just gotta figure out a way to make that work. I've not seen a really good one yet that I like. Let me put it this way: one that I would trust my bank account to, one that I would trust, if I was a CIO or CISO back at one of the big financial services firms I worked for, with treasury, right? With wiring big chunks of money, or things like that. And so the use cases now tend to be not as bold and broad as you hear Elon and so many others talk about, this great big vision of robots and agents out doing all this stuff for you. Even the travel one, which I mentioned in my predictions, I really dislike, because I know what seats on different types of planes I like, and I know where I'm sitting, and I may have a friend that I know is sitting there. If I have to put all that information in there, I could have just gone to the application and done it myself, probably faster. So I think what they do is use these very high-level, abstract use cases that are very simplistic. But getting into ones where there's a lot of depth and human intuition plays a definitive role, that's a very difficult one to do. Very difficult.
And I think we have a ways to go before people are gonna have that trust you mentioned early on, to say, go ahead and do that. I mean, how many people do you know who actually do the travel thing, saying, oh, go ahead and access my credit card and book it and do all that, with no input from me? I don't know of anybody that crazy yet.
Automate Predictable Work First
SPEAKER_00No, it's risk, it's a lot of risk. You're putting a lot of trust in something that really hasn't earned it yet. To your point, the way I look at it, from an agentic AI model when it comes to deployment in my own business: I start from the age before, the manufacturing age, when they started to put automation into these plants, right? They started with things that were simplistic. You know what, instead of having a human being sit there and turn a nut or a screw, I can put some automation in there that'll turn the screw, and I can take that human being, which is a very expensive part of my business (labor costs always are), and put them to work on something different; they don't have to sit there and do that. So I think people should start looking at what's in my business that is, number one, very predictable, and very simplistic in what it is; then if I automate just that, I can deploy my most valuable resource, the human, somewhere else. I always look at human plus AI, HAI; I want to use my human as much as I can, because they are far more advanced than any agentic AI system there is, as far as I can see.
SPEAKER_01Yeah, and just like you said, the monotonous and tedious tasks. If I can take those off the human, then I can have them work on higher-purpose things in the organization, right? Where I need human intuition, where I value it tremendously. That's really what I want to do anyway. You very rarely hire somebody and go, man, I just want them to do this tedious task every single day. I mean, there are places, but for a lot of the jobs that's not the case. And you know, it's typical. Everybody used to always say Oracle and Microsoft market nine months to a year ahead of the product, right? Which a lot of organizations do; I'm not just picking on them. I think we're even further out before these agents, just based on some of the conversation you and I have had here, bring all this together along with the use cases and then build that trust. I don't think that's something that's going to happen overnight. And we're not really seeing anybody deploy any of these really high-end, difficult tasks, except there is one exception; that's where I was going with this, to align with your question. Some of the vendors do have some pretty interesting tasks. I'll give you an example. If you look at CrowdStrike and Cisco, in the AI defense parts of their products, the threat intelligence: being able to monitor millions of logs that used to take a team of five, six, seven people, then picking out the 10 that are going to be problematic and giving you additional information on those so you can act on them, providing that kind of intuitive information. There are agents that are able to do that. And there are other good use cases as well.
So I think there we we should be optimistic about it, but I just think the timelines in my mind, the timeline for affecting like the general populace is is pretty far. You know, we're we're still several years away.
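The log-triage use case Mark describes, an agent scanning millions of events and surfacing the handful worth a human's attention, can be sketched very roughly. This is a toy rarity heuristic for illustration only; the event fields and scoring are invented here, not CrowdStrike's or Cisco's actual algorithms, which use far richer threat-intelligence models.

```python
from collections import Counter

def triage(events, top_n=10):
    """Rank log events by rarity of their (source, action) pair.

    The intuition: in a huge batch of routine events, the handful of
    unusual combinations are the ones a human analyst should review.
    """
    freq = Counter((e["source"], e["action"]) for e in events)
    total = len(events)

    def rarity(e):
        # Rare pairs score close to 1.0; routine ones close to 0.0.
        return 1 - freq[(e["source"], e["action"])] / total

    return sorted(events, key=rarity, reverse=True)[:top_n]

# 10,000 routine logins, plus two events that should stand out.
logs = (
    [{"source": "web01", "action": "login_ok"}] * 9_998
    + [{"source": "web01", "action": "priv_escalation"},
       {"source": "db02", "action": "mass_export"}]
)
flagged = triage(logs, top_n=2)
```

Here the two anomalous events float to the top of `flagged`, which is the whole point of the use case: the agent does the sifting, and the human applies judgment to a short list instead of the raw stream.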
Brand Damage After Agent Breaches
SPEAKER_00Exactly what you said: it's all about the use case and what it can do, especially when you're crunching large amounts of data, and these data sets are massive. Use the AI for what it's really good at. It's fantastic at math for the most part, so it can start to see and point out anomalies in real time, I think very quickly. Because you sit at that intersection of AI and cybersecurity, we've got to manage these threats, because to your point, what is the board really concerned about? We want to maintain a good brand, live up to our promise, and we do not need something disruptive like a cybersecurity incident. Think about the implications, Mark, of the conversation you now have to have with your board after deploying an agentic AI system that just got hacked and is now costing you millions, potentially more than that, in brand reputation.
SPEAKER_01Yeah, and you know what, getting it back takes years. So things like that, these really high-value cases, the treasury example, some of these others: your trust-and-brand one is a big one. It's going to take a while for that trust to build. And there are going to be some exploits that cause issues with agentic AI; they're going to get out in the public and scare everybody to death, just like we had with SolarWinds, just like we had with, well, you can go down the list, there are so many of them. We're going to have that. We haven't had it yet, but we will, and how we get through it, the safeguards and the auditing and the things they put in to help build that trust back, that's going to be critical. That's why, in some of my predictions, I did say there are use cases that will do well, but I do think a lot of people are going to be questioning it. And think about it: it was the same with the internet, the same with cloud. How many people said, I'm not going to pay a thousand dollars for a smartphone? I don't need it; what am I using a smartphone for? Now, if you took it away from them, it's like you took away one of their kids.
SPEAKER_00That's it. It's the dependency, the maturity of the tech itself, and the use case. So we've got to up our game as human beings to operate in this world, whatever that's going to look like. You can't just sit on the bench while it does everything for you; we've got to up our skill set. I always look at what I call the model of AI: at the bottom is the data and infrastructure layer, the information itself, but then it starts to get into context and wisdom. We as human beings, I believe, have got to own that context layer and that wisdom layer, understanding what this data and information really means: situational awareness, emotional intelligence, things I don't see happening in an agentic AI model but that are very important when you start to look at mature business models and where this really should play in, and where it shouldn't.
Fraud, Insider Risk, And Audit Gaps
SPEAKER_01Well, yeah, I love where you went with that on emotional intelligence. You remember the movie Office Space, right, with the red stapler and all that? What they did in there is they took a couple of pennies out of every single transaction. Now, imagine that being an AI, where it could hide it even more: it could choose only certain transactions to take a penny out of, where it would be even less visible, less noticeable, less auditable. A rounding error, if you will. Those types of things are going to happen. And if you don't have some type of intuition or emotional intelligence or some way to evaluate that, it's probably going to get missed, because they're going to put tolerances in there. I totally agree with you. Without those pieces in place, without some way to leverage that, maybe the person in the middle is able to catch it; maybe they fully understand the pieces. But what if that person leaves and somebody new comes in? Do we have to wait six months? What happens during those six months? I just think there are a lot of pieces like that, to your point, where we use intuition, experience, knowledge of the industry, and so on, and making sure all of that gets into an agent in an applicable, usable way that we can audit. Wow, that's a lot.
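The penny-skimming scheme Mark references can be sketched to show exactly why per-record tolerances miss it while an aggregate reconciliation catches it. This is a toy simulation; the amounts, skim rate, and checks are invented for illustration, not any real fraud-detection system.

```python
import random
from decimal import Decimal

random.seed(7)
CENT = Decimal("0.01")

def skim(transactions, rate=0.05):
    """Shave one cent off a random ~5% of transactions, so no single
    record looks wrong on its own (the Office Space scheme)."""
    skimmed, stolen = [], Decimal("0.00")
    for amount in transactions:
        if random.random() < rate:
            amount -= CENT
            stolen += CENT
        skimmed.append(amount)
    return skimmed, stolen

txns = [Decimal("19.99")] * 100_000
after, stolen = skim(txns)

# A per-record tolerance check ("within a cent is fine") sees nothing wrong...
assert all(abs(a - b) <= CENT for a, b in zip(txns, after))

# ...but reconciling the batch total against the expected total exposes it.
delta = sum(txns) - sum(after)
```

The `delta` equals everything that was skimmed, which is why the audit control has to live at a different level than the tolerance the agent operates under: exactly the human-in-the-loop oversight gap being discussed.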
SPEAKER_00A whole lot. And to your other point, I love how you said, I could have just done that myself. Like, why?
SPEAKER_01Well, yeah, and you can even make this an insider threat. What if you're the one overseeing the agent, and you have the pennies go into your account? You just modify or adjust the agent and you do it, and nobody's watching you. So where is all this headed? I just don't see a lot of use cases where people are using agents to execute financial transactions. You see a lot of AI looking at financial transactions for fraud, audit, and so on, but as far as turning an autonomous agent loose asynchronously on the world with access to millions of dollars in a bank account, I'm just not seeing those yet. I don't have that kind of trust.
How To Reach Mark Lynd
SPEAKER_00I'll tell you, Mark, this has been a very good podcast interview. I really appreciate you coming on. Before I let you go, you've got to let us know how to get in contact with you. What is the best way? Where are you working right now, and are there any other thoughts you'd like to leave us with?
SPEAKER_01Yeah, so you can go to marklynd.com, that's m-a-r-k-l-y-n-d dot com, and also find me on LinkedIn; I've got a pretty big presence there. And I have several books out. My latest is a leader's guidebook to cyber insurance, and I put that out there because a lot of people are getting declined on their coverage when they need it most, and a lot of the rules and regulations around it have changed. I also have a fiction book out called Cyber War, a scenario with AI, quantum, and cyber in it, and it's doing extremely well. You can find all of those at marklynd.com and on LinkedIn. And if anybody in the audience wants to reach out to me, have at it. I talk to a lot of people, and it's always good to have a well-rounded feedback loop in place. I love trying to figure out what people are worried about, what they're envisioning, what they're dreaming about, where the market's going, because, like the conversation you and I just had, it's both exciting and scary at the same time.
SPEAKER_00It is, it's like a roller coaster ride, right? It's absolutely going to be an interesting future as we go into it. There's a lot, but I'm hoping to see an awakening in human awareness, really understanding what I can just automate as opposed to what's absolutely necessary for me as the human being, the things that are very difficult to simulate, the things you just talked about: intuition, emotion, because emotion is not a logical thing. There are things beyond what we call logic that we really have to understand, that are still very important in the human drama as we go forward in business and in life in general. So it's going to be interesting as this scenario plays itself out.
Start Small And Stack Wins
SPEAKER_01Yeah, the one thing I'll leave you with, Grant, that I find fascinating: I get asked all the time, where do I start? That's probably the number one question I get. And I always say, look, take a simple use case, one that you can manage, one that you have oversight on, one that's auditable and aligned with the goals of the organization, and then take that little win and build off it. It may be that you use one of the AI tools built into one of the vendor products you already have deployed. Take that win, socialize it, and use it to build more use cases. Then when you talk to people in the organization, you can say, hey, there might be a use case here that aligns with what you're trying to accomplish and builds into the goals of the organization. If you do that with a simple set of wins behind you that you can point to, and then give them that same kind of deal, wins build off each other. And boy, that's a great way to do it. It's a manageable way to do it. It's also one where, if you go out there and try to do the big one right off the bat and it doesn't go well, you may not have a job, you may put your team at risk, and you've blown all this money in the organization. I always tell them, and hear this in a very positive way: that's a reckless way to do it, when there are easy wins you can go out and get today, and you get to those wins faster and earlier, too. It's very much the low-hanging fruit. That's what I always say, and I get asked that question almost every day.
SPEAKER_00And you have to do it that way; you have to start small and understand the nuance. I guarantee you, because like I said, Mark and I have been in technology a long time: the simplest thing seems simple, but in actuality it becomes harder, and it becomes, oh, I didn't see that scenario; I didn't know it was going to do it like that; oh, that's interesting. You learn a lot of things along that route, so you don't want to put something into production, where it becomes highly visible, before it has been tested in a lot of different scenarios; otherwise you run into a lot of risk. So you try to avoid that, right?
SPEAKER_01Think about the Philadelphia Eagles, for example. They started out the season at seven and one, or seven and two, then they lost three in a row, and now they're back, they're going to win the East, and they're going to be at home for the first playoff game. So if you're out there and you're a CIO or CISO, wouldn't you rather have six or seven little wins under your belt, so that if you have one loss, you're still going to qualify for the playoffs and you're still going to be at home?
SPEAKER_00Win early. I love that. One last question, a real simple one. You've been on a lot of different podcasts, you've done a lot of these interviews. This was your first experience on the Follow Brand Podcast. How did you like it?
SPEAKER_01I enjoyed it very much. I like how dynamic it was. A lot of times, and you and I talked about this before we kicked off, I get the questions ahead of time, and the outcome is already kind of done. You and I were very dynamic; we went here and there. But that's how people do it, that's what they want to hear, that's how they listen, that's how they have their conversations with family, friends, and co-workers. So I very much enjoyed it, and I appreciate you having me on.
SPEAKER_00Absolutely, Mark. This has been wonderful. Thank you for being on the show. We want to thank Thinkers360 for bringing us together, because that's how we got to work together.
SPEAKER_01Absolutely, yeah. They're great.
Final Thanks And Where To Listen
SPEAKER_00They are, they're fantastic. So I'm going to encourage your entire audience to tune in to all the episodes of Follow Brand. They can do so at 5 Star BDM: that is the number 5, star, B for Brand, D for Development, M for Masters, dot com. I want to thank you again for being on the show.
SPEAKER_01Thank you, Grant.
SPEAKER_00You're most welcome.