Evolving the Enterprise
Welcome to Evolving the Enterprise, a podcast that brings together thought leaders from the worlds of data, automation, AI, integration, and more. Join SnapLogic’s Chief Marketing Officer, Dayle Hall, as we delve into captivating stories of enterprise technology successes and failures through lively discussions with industry-leading executives and experts. Together, we'll explore the real-world challenges and opportunities that companies face as they reshape the future of work.
Future-Proofing the Enterprise with AI and Security Controls
In this episode of Evolving the Enterprise, SnapLogic CMO Dayle Hall speaks with Mark Lynd, a decorated U.S. Army veteran and recognized thought leader in AI, cybersecurity, and cloud innovation. With decades of experience advising organizations, Mark shares how enterprises can navigate the fast-moving intersection of AI adoption and security.
He explains why companies must balance innovation with strong governance, how to start with small, high-value use cases, and why “agents shouldn’t hallucinate privileges” should be every leader’s mantra. From higher education and financial services to government and healthcare, Mark outlines where AI adoption is accelerating—and why organizational readiness is just as important as technology.
Whether you’re a business leader, IT strategist, or security professional, this conversation provides clear, actionable insights for adopting AI responsibly, securing enterprise systems, and preparing for the next wave of digital transformation.
Dayle Hall:
Hi, and welcome to our latest podcast episode. This is where we dive into the strategies and systems and all the things that are driving transformation across today's healthy, growing, AI-driven enterprise. I'm your host, Dayle Hall, the CMO at SnapLogic.
So welcome back to our latest episode. Today, we're joined by Mark Lynd. He's a recognized thought leader in and around the areas of AI, cybersecurity, and cloud innovation. He is a decorated U.S. Army veteran and a seasoned executive. So he's a twofer. So we get dual experience with Mark today.
Recognized as one of the top voices in and around cloud and AI and cybersecurity, he's got a wealth of experience advising on digital transformation, adoption of AI, and one of the most critical parts of any enterprise, security framework. So it's great that we have him on the show. Mark, welcome to Evolving the Enterprise.
Mark Lynd:
Thank you, Dayle. Thank you for asking me to join you.
Dayle Hall:
Yeah, it's going to be a good one. I'd like to kick off, just getting a little bit about your background, how you grew into this AI, cybersecurity area, because, obviously, thank you for your service over the years. But tell us how you went from your early career to getting more involved in this type of area.
Mark Lynd:
Yeah, interestingly enough, they all tie together. I thought I was unique, but I found out later on that I really wasn't. I was in the U.S. Army, in the 3rd Ranger Battalion. On one of my jumps, a gentleman ran across the top of my chute and I fell about 35 feet and busted up my right knee. While I was recovering, they said, we're going to send you to a school, which is what they do. You have an injury and they send you to school, in my case fire direction for mortars, and it was using grid boxes, these early laptops.
I learned fire control, and the bug hit me right there. I knew I was getting out. I knew I was going back to school, and that's what I wanted to do. So I was really lucky. And then I got out. Occasionally people would ask me about it, and because I came from the military, I thought I was unique. But now that I've been in cybersecurity for a while, there are so many people from the military in there, it's not unique at all.
Dayle Hall:
It seems like you're very trustworthy to be in the Armed Forces, full stop. So I can imagine this type of area is someone that kind of has a level of credibility, even walking into that. So I'm sure you've felt that as well as you've talked to many clients and many companies over the years.
Mark Lynd:
I do cybersecurity incident response tabletops for leadership. And I was doing one for a huge county in Florida earlier this week. And while I was there, one of the things that they always ask, and they did, is what is the difference between what you saw here today with our team and what you see in other teams that do really well? I'm probably not going to tell you what the difference is there because I still have some assessing to do. But what I will say is when I see good cybersecurity teams, they're curious, and this is the military part, they have cadence, really good cadence, and they have a sense of urgency. And that really pays off well for them. That pace, that urgency, along with a cadence, really works well in cybersecurity.
Dayle Hall:
I like that: curious, cadence, and a sense of urgency. When you have conversations like this, are they really open to that kind of input? Do most companies you advise already feel like they're way ahead of the game, or are they really looking to learn and improve? How do they take the feedback?
Mark Lynd:
I think most of them realize now that with this intersection of AI and cybersecurity and with the threats becoming highly automated and happening at a much greater pace and using threat vectors that they may have never seen be used before, which then expose more vulnerability in their attack surface, that has really created this need for people to get assessments, to do tabletops, to learn more about where their gaps exist. And so I see people really open to it.
In fact, I've done 141 of them in the last two years. It's been fantastic because what they're worried about is with AI moving at machine speeds and doing things that aren't intuitive for humans to think about, they can't just do it with their traditional team that they have set up. They're going to have to think outside of the box. They're going to have to go out and get software from manufacturers that supports AI and helps them with threat intelligence, helps them with operations, with all the different things that are required to ensure that they have a strong security posture.
Dayle Hall:
Yeah. I've done a few of these with different groups around the cautions around AI, the ethics behind AI. You're one of the first that we've talked to around specifically the security of it.
I guess my opening question is, whilst there are so many opportunities that we talk about with AI, and particularly we're seeing more around automated agents supporting humans in their roles, are you on the "this is a great thing and we can definitely take advantage of it" side, or are you on the other side of the equation, which is we have to be really cautious because you've seen some of the potential drawbacks? Where do you land on that basic question?
Mark Lynd:
I think I kind of land somewhere in the middle, but probably a little bit towards, you're not going to be replaced by an agent. You're going to be replaced by somebody who uses agents. And that's an offshoot of Jensen Huang, the CEO of NVIDIA's famous saying. But I truly believe that. And as I've done some coding and built some agents myself, and we've deployed them as well as what I'm seeing from the manufacturers, because I spent a great deal of time with them, what I'm starting to see is these agents, we still have several years before they become fully automated, autonomous, doing everything.
I think where we're at now is every employee will have agents that'll help them take routine tasks and do them in the background. And then their job shifts to judgment, exception handling, and relationship work. So we think of it as agents as coworkers with credentials. To me, that way they can plan, take action, and escalate only when it matters.
Now we're assuming there that they have some of the ethics and safeguards built in, that has guardrails, there is a way to intercede should things get out of hand. If an agent was doing POs, you don't want all of a sudden they're sending out purchase orders for crazy amounts of money without their controller knowing that. So I think there are some rules, some controls. Just like we have security controls that we implement to secure our organization, we're going to have to have AI agent controls.
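The kind of agent control Mark describes here, a purchasing agent that can auto-submit small POs but needs a human in the loop above a spend limit, can be sketched in a few lines. This is a minimal illustration under assumed rules, not any vendor's implementation; the threshold, names, and return values are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical guardrail for a purchasing agent: the threshold, roles,
# and statuses are illustrative, not from any specific product.
APPROVAL_THRESHOLD = 5_000.00  # POs above this need a human sign-off

@dataclass
class PurchaseOrder:
    vendor: str
    amount: float

def submit_po(po: PurchaseOrder, approved_by_controller: bool = False) -> str:
    """Let the agent auto-submit small POs; escalate large ones."""
    if po.amount <= APPROVAL_THRESHOLD:
        return "submitted"                 # within the agent's authority
    if approved_by_controller:
        return "submitted-with-approval"   # human in the loop signed off
    return "escalated"                     # block and notify the controller

print(submit_po(PurchaseOrder("Acme", 1200.00)))    # submitted
print(submit_po(PurchaseOrder("Acme", 25000.00)))   # escalated
```

The point is the shape of the control, not the numbers: the agent keeps its autonomy inside a bounded "blast radius," and anything outside it is intercepted rather than executed.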
Dayle Hall:
Yeah. When you talk to a lot of these companies and you maybe talk to some of the employees specifically, I completely agree with what you said and what Jensen said around being replaced. That's probably more of a leadership thing, thinking about those kinds of things. When you talk to employees specifically who are looking at agents, even if it's just to make themselves more productive, do you get the sense that they are really open to it because they see the upside, which is, I can be more productive, and if I use AI, I'm going to secure my position? In general, are the ground-level employees, or people that definitely have a lot of manual processes, excited for it, or are they nervous down in the trenches?
Mark Lynd:
I think some of those that have gone out there and invested in themselves and done a little research, I think they're a lot more comfortable with that and managing technological change. But those that are uncomfortable- and I did one down at HPE headquarters down in Houston recently, keynote, and one of the things I try to do is bring some levity to it in the form of maybe some quotes or some statistics or whatever. One of the things I always lead with is Gartner projects that by 2028, 15% of the day-to-day decisions will be made autonomously via agentic AI and 33% of the enterprise apps will include agentic AI.
Well, that means the rest of that, which is a lot, is going to be the human element: managing that, taking actions, escalating when things don't happen, judgment. Relationship work becomes very, very important because it's going to be a long time before agents are doing any kind of relationship work, before they're truly trusted from a judgment perspective. Once you tell them that, you see their shoulders relax, they start to say, okay, because what they're hearing out of the marketing is you're in trouble, you're going to be gone, why go to college? All these sound bites out there, they're creating fear.
Dayle Hall:
Yeah, I've heard some of that. My daughter's a senior this year at high school, but I am telling her and my son who's a freshman, they're growing up in a different world, now you can't use it to do your homework, but definitely understand the value because when they go out into the workforce, it's going to be obviously a much bigger part of what they do.
Mark Lynd:
Oh, I tell you, Dayle, I have two girls in college and one in high school, and I do exactly the same, because it comes down to, as parents, we have to give them guidance. And the reality is this is a technological revolution, if you will, not unlike some of the stuff we've seen in the past. If you remember, everybody thought the cloud was craziness. Why would I pay a thousand dollars for a mobile phone? This internet thing is a passing fad, right? And this is potentially bigger than all of them, if you believe what some of the luminaries are saying. So we have to help our kids understand that they need to go and make some investments in AI.
The great thing about it is that a lot of my customers are higher ed and they are making investments. They are putting together a curriculum for AI, and they are starting to make investments in AI platforms that they can teach with. So there's some really interesting things going on in higher ed around AI.
Dayle Hall:
Yeah, no, I think that's good. We'll get into some of the different use cases by sector in a second, but I think what would help, I hear different descriptions of what an AI agent is based on a different type of company, potential, a different public sector, private sector. What's the big difference between automation and how you look at what an AI agent is specifically? Because I still think, again, there's lots of definitions out there. Different people think of it a different way. What is your basic definition based on- you just said as well, we're a ways from fully autonomous agents. So what is an AI agent today?
Mark Lynd:
Yeah. So I think if you compare and contrast with traditional automation, intelligent AI agents, one of the things is traditional automation executes predefined scripts, right? There are logic rules in place that guide it. And it pretty much requires a tremendous amount of oversight from that perspective. And even when the automation is going at its full pace, it's not making intelligent decisions. It's not looking and learning as it goes along.
The agents that we're seeing today, intelligent agents, they come in two forms. It's really important to understand the two forms. They're synchronous and asynchronous. Synchronous is just like you go into ChatGPT and you're a copilot in the session, right? You're asking it questions, and it drafts and analyzes and reviews and provides you responses. That's synchronous.
The asynchronous is the one that people fear. Like I mentioned, I think the fear is a little overblown at this point, a little too much hype. It's an event-driven co-worker with memory. It watches queues. It can file tickets. It can update CRMs and ERPs. It can ping you on exceptions. It has some level of autonomy, like we mentioned when we were talking about the Gartner piece. But right now they're not fully autonomous. What's nice about it is if you think about it, sync agents save minutes. Asynchronous agents save Mondays, Tuesdays, Wednesdays, Thursdays, and Fridays.
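The asynchronous pattern Mark describes, an event-driven worker that drains a queue, handles routine items itself, and pings a human only on exceptions, can be made concrete with a small sketch. The event types and names here are illustrative assumptions, not a real product's API:

```python
import queue

# Minimal sketch of an event-driven "asynchronous agent": it watches a
# queue, does routine work in the background, and escalates exceptions.
events = queue.Queue()
for e in [{"type": "ticket", "id": 1},
          {"type": "crm_update", "id": 2},
          {"type": "unknown", "id": 3}]:
    events.put(e)

handled, escalated = [], []
while not events.empty():
    event = events.get()
    if event["type"] in ("ticket", "crm_update"):
        handled.append(event["id"])      # routine work, no human needed
    else:
        escalated.append(event["id"])    # exception: ping a human

print(handled, escalated)  # [1, 2] [3]
```

A synchronous agent, by contrast, would sit inside an interactive session and answer one prompt at a time; this loop runs unattended, which is exactly why the guardrails discussed later in the conversation matter.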
Dayle Hall:
Yeah, that's a good definition. From your conversations when you're out talking to all these companies about the different areas of expertise, people fully embrace the synchronous side and they're exploring asynchronous. Are people skipping one side, skipping the synchronous, and going straight to asynchronous because they see the value? Where do they begin with this kind of journey?
Mark Lynd:
Yeah, and I know we're going to talk a little bit more about it in the use cases, but right now, we're seeing quite a bit of it out there. You have Salesforce AgentForce. You have Command Center. You've got Google AgentSpace. We can just keep going down the list, right? There's lots of them out there, and they're all varying in their capabilities and what it is they're trying to accomplish.
That is the way that we assess, purchase, and deploy technology, largely through our manufacturers, right? There are some that build their own, but there's so many frameworks out there now and there's so much of this going on and then the investments are quite large. Going out there- and the board is asking leadership, hey, where's our AI strategy? What are we doing about all these AI agents I keep hearing about? This gives them the ability to go out to their manufacturers, make some assessments. If they decide to do it internally or if they need it, they can put a framework in place. There's lots of orchestration frameworks. There's LangGraph, Microsoft AutoGen. You can go down the list for that.
Then you can kind of look, how am I going to integrate that into my environment? For that, you have all kinds of enterprise hooks. You got agents for Amazon Bedrock. You've got a bunch of stuff that McKinsey's put out there around the integration of these apps with agents. I think those pieces along with a platform and a framework would put you in a position to do that.
Once the companies see that there's an investment, a pretty large chunk of work- because you still got to figure out the data. You got to make sure you got guardrails. Once you have all those kinds of pieces out there and they can assess what effort that's going to take and what kind of investment they're going to have to make, I think that's what drives their comfort, because you were asking me what the comfort is. That's where it really comes down to because some of the marketing makes it sound like it's super, super easy. It may be relatively easy, but getting to the data, integrating it, having a framework, doing the things you got to do to make it successful and a long-term value that'll have return on investment, there's a little bit more to it.
Dayle Hall:
Yeah. A lot of the podcasts that I've done recently, there's definitely an overwhelming sense from people that are looking at this, that they want to look at AI, they want to look where it can help. The sense is that the best way to do this is to start with, find something you're trying to solve, find a use case, find a problem, and then decide how you solve it. Do you need an AI or agentic-type solution? Can you just have a simple automation to do that, which I think is the right thinking. That's definitely what I'm hearing across business leaders.
In your experience from running 141 of these tabletops with many enterprises, who's doing it right? Which sectors? What are the use cases that you're currently seeing have a positive or a big impact, and who's crushing it?
Mark Lynd:
Yeah, I think financial services has really put a lot into this. They've really taken the time to look at it. They have very strong use cases with strong ROI.
And by the way, what you said, that is exactly what we're hearing. In fact, I'm on the leadership team in NetSync, and one of our top leaders, he says it all the time, we need a use case. And it's not just for AI, but in many different technologies, we need use cases, because that's how we'll figure out what the ROI is. And also we'll start to understand what success looks like. And so we have to do that.
But when I'm out there and I'm talking to them, we're seeing really good use cases and things going on in higher ed. We're seeing a lot in financial services, and we're starting to see a little bit in healthcare. Now, public sector's definitely behind. That's not unusual, the same with government, because they have long acquisition cycles and often longer deployment cycles.
And then the technology companies, technologically, they're prepared for a lot of this, and a lot of their value is driven off that, because think about it, their stock price gets impacted if they don't have an AI story and an AI agent story.
Dayle Hall:
Yeah, exactly. It's interesting because I'm pretty old now. I've been working in technology selling to all these sectors for a long time. Healthcare, financial services, not necessarily known in the past for leading in innovation or looking at this kind of technology because they've got so many regulations with HIPAA, PII, and all those kinds of things that we all know about. So it's really interesting that they really see an opportunity here and are almost leading the way with some of this AI and agentic development.
Mark Lynd:
I totally agree. With financial services, a lot of their systems are low-latency systems, and they've had to make technological advancements to remain relevant. And I think relevancy is driving a lot of that. Universities are worried about, hey, are we going to even be around in 10 years? Because there's a lot of people predicting that the university thing is going to fall apart because of AI, which I think is probably a little overblown as well.
I think what's changed is relevancy, right? You have to remain relevant, and it's becoming tougher and tougher. It was funny, I heard a discussion on a podcast a couple of days ago, and they were talking to Sam Altman, the CEO of OpenAI. And they said, hey, what do you think it's going to look like in 10 or 20 years? What companies are going to be around? And he was like, it's definitely going to be different, and it probably won't be the same companies we're seeing today.
Dayle Hall:
Yeah. I think there was a stat that I saw, something like 70% of the top 100 companies from 20 years ago don't exist. You look at that stat and think, wow, is that really true? I think we're in that next wave of change, not just within the market and with innovation. I think we are going to see a lot of these companies fall by the wayside, and I think this is the challenge. I'm in marketing, right? The challenge is everyone feels like they have to be relevant, so they have to have this kind of story. So now the market is flooded with everyone saying a very similar thing.
So if you're in healthcare or financial services and you're trying to get ahead or you want to invest, how should people start identifying what's real from vendors? And I'm not saying marketing ever really makes things up or exaggerates. I'm not even going to admit to that. We're obviously trying to position ourselves as leading, but how should people really look at that? How do you make an assessment on who to go with?
Mark Lynd:
I think what you said really matches well with answering that question. Pick out your use case, talk to your manufacturers, talk to those out there, and make a determination. Will this help me with this use case? And then do a proof of concept, do some kind of a copilot, if you will, do something that allows you to go in there and see in a very small way and then build off that.
I think we're seeing a lot of that, too, because we sell technology to companies all around the world. And one of the things that we often look at, and we advise our customers on, is to get a couple of small wins. Even if those wins are doing some kind of POC, or doing some kind of smaller version of the project, or an MVP, though that's more the development side, and then expanding off that win, right?
Because the other thing that's part of this now is these aren't cheap investments. These are expensive investments. You're going to have to put some dollars, some people, and some resources into this. You're going to have to look at your data, all the things you and I talked about. You're going to have to put ethical controls in place. You're going to have to do a lot of things to ensure success. And so doing it at a little bit smaller scale, getting a nice win under your belt, and then leveraging that, that's a really great way.
A perfect example of this that I'm seeing all over the place, especially in higher education, a lot of them use either Google or they use Microsoft, Office 365. They're now rolling that out and with the copilots. Cisco's coming out with copilots built into every single one of their product lines, the software. I can just go down the list, right? Those are just a couple of examples.
So there's a way to roll it out, take a look at it, see how it's going, evaluate, is this going to be valuable for us? Do we have the capabilities internally to manage this, to gauge what type of return we're getting, those types of things. That is what we're telling our customers, because if you go out and you try to get an LLM, train it on all your data, build a data lake, get the storage in to run the data lake on, validate the rules, put in a framework, I just keep going down the list, you're looking at a pretty sizable project with a pretty long timeline.
Dayle Hall:
Yeah. You touched on something there that I want to dig in on a little bit, which is organizational readiness. Are you ready to take this on? You mentioned a number of technology areas there, but let's also talk about how people are structuring for this kind of project, whether it's a small use case or something bigger. I think most people are investing in something already, but as they get ready, what are you advising on the governance model? Who should own the project? Who should be involved? Is it a business unit? How does an organization get ready for this kind of development?
Mark Lynd:
A great question and one that comes up a lot. We talk about infrastructure and storage and all that, and that's obviously an IT designation. But as far as putting it out there in the business community, the business disciplines in your organization, that needs to be done by the business. We're seeing a lot of people use steering committees. I've seen a whole bunch of different paths that they're taking to try to align themselves with the different frameworks. There are a bunch of them. Google has one. Microsoft has one. NIST has one. I can go on. I think there are 11 major ones out there right now.
When you look at those, there's very specific roles in some of those that you need to do. People are going to have to go in there and explain to people what it is you're going to do. A perfect example of that to me is a personal productivity agent, right? That's one that a lot of people are talking about, also like RFP and proposal agents. I hear this all the time, right? Because that's low hanging fruit, if you will. But you're going to have to go in there and talk to those employees that currently do that and get their buy-in and find out what it takes to get that buy-in and then be able to organizationally get that where it makes sense and put it in place, the rules and escalations that are needed to make sure that doesn't go off track, and then also how to evaluate it, right?
If you're going to make these investments, you got to have some way to evaluate it. You got to have that evaluation piece. There's the IT piece over here. There's a cybersecurity piece. And those are pretty well known what those are. But this piece over in the business, that's a big part.
I'll give you an example. I was in Houston with a company that's in the concrete business and they have trucks that go out all over and they're like, hey, we'd love to get more into intel on these trucks. Are the trucks sitting somewhere for two or three hours, or maybe they had a problem, or they were sleeping, or it could be anything, right? There's so many different reasons. Also check the weights, make sure there's not loss, et cetera, et cetera. And then map that to the invoicing and everything. And they'd like to do it in English, like a language. They'd be able to talk to it like they do a chat session with ChatGPT.
We said, yeah, that is definitely doable, but you're going to have to have the data, and it's going to have to be in a data lake, right? You think about the data, we have lots of data that's like SQL data, and we've had that for a long time. But all the data that lives out there in emails, spreadsheets, and all that, you've got to find a way to bring that in, too.
And then you have to be sure about the PII and the PHI. You and I could go on and this whole podcast with what it would take to roll that piece out. And what was interesting is when we got to the end, making a long story short, we told them that, they're like, yeah, but the return would be incredible and we'd be so much more efficient and effective. And I was like, that's the moment. That's the moment that we're all looking for, right?
Dayle Hall:
Yeah. That's a great example of whilst they know it could be hard, it might cost more than they originally had laid out or planned for, they can at least see a point in the future where, if it's successful, they will get the ROI. And I think a lot of companies are trying to get to that.
Is there a reticence to start something because of the risk of ROI? Are you seeing people who have maybe gone too far originally without thinking about ROI and now they're seeing it fail or they're not seeing the kind of results? How are organizations thinking about, you said it yourself, making sure that you can track and measure ROI of potentially an initial use case or a multi-use case, a more defined change across the enterprise? How do you measure it?
Mark Lynd:
Yeah, I do. Unobserved work is risky. It doesn't matter if it's human or agent. You need to treat agents like service identities. They have scoped access, signed actions, replayable audits and traces. You've got to do that. Even Gartner has come out and said they expect a lot of agent projects to be abandoned by 2027. But the winners are standardizing on platforms, not pitching pilots, right? And you notice I didn't use the word pilot earlier. I stay away from that word.
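Treating an agent like a service identity, with scoped access, signed actions, and a replayable audit trail, might look roughly like the sketch below. The signing key, scope names, and actions are hypothetical placeholders; a real deployment would use a proper secrets store and identity provider:

```python
import hashlib
import hmac
import json
import time

# Illustrative only: key, scopes, and action names are made up for the sketch.
SIGNING_KEY = b"demo-key"
AGENT_SCOPES = {"crm-agent": {"crm:read", "crm:write"}}
audit_log = []  # replayable trace of every action the agent took

def perform(agent: str, action: str, payload: dict) -> bool:
    """Run an agent action only if it is within the agent's scoped access."""
    if action not in AGENT_SCOPES.get(agent, set()):
        return False  # the agent cannot "hallucinate" a privilege it lacks
    record = {"agent": agent, "action": action,
              "payload": payload, "ts": time.time()}
    body = json.dumps(record, sort_keys=True).encode()
    # Sign the action so the audit trail is tamper-evident and replayable.
    record["sig"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    audit_log.append(record)
    return True

print(perform("crm-agent", "crm:write", {"id": 7}))   # True: in scope, logged
print(perform("crm-agent", "billing:pay", {}))        # False: out of scope
```

The design choice is the one Mark names: deny by default, sign what you allow, and keep a trace you can replay when something goes wrong.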
Pilots tend to not be well described with a solid use case and fully understanding how to evaluate what success is and what the potential return is. Pilots tend to be, we got pressure from leadership or the board. We got to do something, just start something, get going. And we get back to that kind of unobserved, right? And you put it well earlier as well, you got to have that use case. That's the critical element. And then you got to match that use case with the pieces you're going to need to be able to evaluate it, gauge what the ROI is, and then make the right determinations.
And that's why I say start small. If you start small and it is an abandoned project, it may not kill all of agentic AI within your environment. You may find a better use case. You might be moving down the road with that project and you find a better one, with much higher ROI, and because you're worried about what the board and leadership are going to think, you choose that one instead.
Dayle Hall:
I like the thinking of really, we talked about a use case, but also thinking about what do we judge success or ROI of this? I think that's really important. There have been a couple of companies out there, obviously, I think probably looking for more publicity and basically suggesting they are going to get rid of 70% of their company because it can be run through AI. I'm not going to name any names, but the CEO is suggesting he could be replaced by AI.
I think that's a little bit of over-dramatization. But where do you draw the line around deciding or delegating what could be an agent versus what should still have humans involved? And, as we've talked about, yes, it's a use case, but have organizations talked to you where it feels like they're biting off more than they can chew in terms of what they think they can replace?
Mark Lynd:
Yeah. I think the main blocker for most of these models is operationalization, identity permissions, data access workflows, things you and I talked about. And then once you start that model, start with a low blast radius, right? Don't automate chaos. Standardize and then agenize, right? Standardize around that use case and agenize that piece, because you're right, there is a lot of hype. It's definitely been overplayed.
And look, I can't draw on one- and I have a lot of customers using AI and I talk to a lot of customers about AI. I do not have one example of one doing the things that you described that you've heard. And I've heard that exact same one. In fact, I know who you're talking about. I just have not seen that. I don't think that's where the value is. I think you're right. I think that's trying to get a little stock pop, do a little, you're right, get a little bit more money, make some more investments, buy some more data centers.
Dayle Hall:
Yeah. There's always method to the madness, Mark.
Mark Lynd:
There's always method to the madness. But for me, I always think the same way on this piece. This is my job, right? I draw my decision boundary based on reversibility and blast radius. And what I mean by that is there's money movement, access chain, customer visible actions, human in the loop. And I take all those pieces in there, then I'd look at it, and help the customer understand that, because all those pieces are put around a use case. And if we can do that- and the blast radius, meaning if this were to go bad, it's not going to wipe everything out. That's why starting small and being thoughtful about it and having a good governance model and understanding what it's going to take fully and sharing that with the others that are providing the money- because the problem with giving it solely to IT.
I'm an IT guy. I did IT marketing. That's what I've done for my career. The problem with just tossing it over the fence and saying, hey, we want to do RFPs using agentic AI, is that they're going to look at it from their perspective. And their perspective is we have a budget, we have projects. I only have so many people, so I'm going to need more people. I'm going to need a bigger project. It spins off in its own piece.
A lot of times the goal, the return on investment, and the use case they originally thought they were going to solve get lost. And we've seen that, right? We've seen a lot of IT projects fail, and fail miserably. I think AI has that same opportunity to fail if we don't think about it the right way. Thankfully, there are good frameworks, there are manufacturers doing good work, and there are good use cases out there of people doing really good things with AI.
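Mark's decision boundary can be made concrete. The sketch below is a hypothetical illustration, not any specific framework: it scores a proposed agent use case on the factors he names (money movement, access chain, customer-visible actions, human in the loop, reversibility) and recommends how far to let automation go. All class and function names are invented for the example.

```python
# Hypothetical sketch of the "decision boundary" triage Mark describes:
# score a use case on reversibility and blast radius before automating it.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    moves_money: bool           # money movement
    touches_access_chain: bool  # can grant or change permissions
    customer_visible: bool      # actions customers can see
    human_in_loop: bool         # a person reviews before actions land
    reversible: bool            # can the action be undone cheaply?

def triage(uc: UseCase) -> str:
    """Return a rough go/no-go recommendation for automating this use case."""
    blast_radius = sum([uc.moves_money, uc.touches_access_chain, uc.customer_visible])
    if not uc.reversible and blast_radius > 0:
        return "manual only"          # irreversible with visible impact: don't automate yet
    if blast_radius >= 2 and not uc.human_in_loop:
        return "needs human in loop"  # wide blast radius demands review
    if blast_radius == 0:
        return "automate"             # low blast radius: a good place to start
    return "pilot with oversight"

# An internal drafting task with a human reviewer and easy undo is a safe start.
rfp_drafting = UseCase("RFP first draft", False, False, False, True, True)
print(triage(rfp_drafting))  # -> automate
```

The point of the sketch is the ordering: irreversibility dominates everything else, which mirrors Mark's advice to start small where mistakes can be undone.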
Dayle Hall:
Yeah. Let's talk a little bit about the security side, because you have a lot of experience in this area. Is there something organizations need to know about controlling what an AI agent accesses and what it doesn't? We talked about how healthcare and financial services are really looking to use this technology. Where would you say they need to draw the line between where an agent plays and where it doesn't, where they should control it, and where they can let it be a bit more creative?
Mark Lynd:
Yeah. Even in other areas like telecom and manufacturing, where you're also seeing quite a bit of agent and AI work, the thing I tell our customers all the time is: agents shouldn't hallucinate privileges. That's my cybersecurity bit to them. I say it over and over; sometimes I'll say it three or four times in a meeting. Agents shouldn't hallucinate privileges, because if you're putting information out there, PII, PHI, invoicing, billing, customer information, et cetera, and you're rolling it through and you're not sure whether it's being trained on, you're going to be hallucinating privileges. Privileges guard sensitive information. That's a big problem, and one people should worry about. We've already seen several incidents of it happening.
And then the other element is shadow agents, or shadow AI. We're seeing a lot of that, where people introduce AI through SaaS or just put things into ChatGPT or Claude or Gemini, unregulated. And because the organization doesn't have any policies yet, they can't even go back and do anything about it. I think those are really big security issues.
And once again, we're back to that decision boundary. You've got to have reversibility, and you've got to keep that blast radius in mind. It's so important, because I talk a lot about AI ethics, I post a lot about AI ethics, and I've written several articles about AI and guardrails. And I truly believe that to be successful with AI and agentic AI, those are things you've got to put in early. It's like the old line, Dayle, I know you'll appreciate this: you bake security in; you don't bolt it on afterwards.
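"Agents shouldn't hallucinate privileges" maps naturally to deny-by-default access control. The sketch below is a hypothetical illustration (the class, agent name, and action names are all invented): every agent carries an explicit allowlist granted at deploy time, and anything outside that grant is refused, so the agent can never act on a privilege it merely "believes" it has.

```python
# Hypothetical sketch: an agent gets an explicit allowlist of actions at
# deploy time, and anything outside the grant is refused (deny by default).
class ScopedAgent:
    def __init__(self, name, allowed_actions):
        self.name = name
        self.allowed = frozenset(allowed_actions)  # fixed grant, set by a human

    def act(self, action, payload):
        if action not in self.allowed:
            # The agent cannot "hallucinate" its way to a privilege:
            # ungranted actions fail loudly instead of silently succeeding.
            raise PermissionError(f"{self.name} is not granted '{action}'")
        return f"{self.name} performed {action}"

billing_bot = ScopedAgent("billing-bot", {"read_invoice", "draft_reminder"})
print(billing_bot.act("read_invoice", {"id": 42}))
# billing_bot.act("export_customer_pii", {}) would raise PermissionError
```

In practice the grant would live in a policy store rather than code, but the principle is the same one Mark repeats: the boundary is set before the agent runs, not inferred by the agent as it goes.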
Dayle Hall:
Right. Yeah. I had a conversation with a guy called Steve Norey. He's on an AI ethics board, but they also do a lot of AI development. We were using this as an analogy: when social media came out, it felt like we threw it at everything. We've then spent years trying to pull back on security and controls and hate speech, all those things we feel like we've lost control of with social media. It feels like there are more groups out there now trying to make sure AI doesn't go the same way, meaning we don't end up having unleashed this thing without any controls. I think that's a great example: don't bolt it on, make sure it's built in.
And I think this is one of the key challenges now, because we talked about who should own it. I want to use certain types of agents and AI in my role. I'm asking my team to become the most advanced AI marketing team out there, but that comes with risks, right? Are you seeing organizations, internally, whoever ends up owning it, setting up more controls and governance around AI and its use? Because we need some of that built in.
Mark Lynd:
Yeah, I think one of the things is there's a little hesitation, because if you look at the AI-driven marketing tools out there, the CRMs, the more customer-facing pieces, some of the ERP elements, these are things people interact with each and every day, right? I need a PO. I've got to go create a post. I need to talk to an influencer. I need to support a trade show, blah, blah, blah. As you're doing that, each one of those pieces of software has its own process, its own piece. You're not really aware of what framework is supporting it. What ethical and guardrail elements are in this software? I don't know. Is it okay for me to put our proprietary or sensitive information in here? Do we want people to know that?
I think that's where having some policies in place helps, and I'm not talking about draconian policies, right? Those typically have the opposite of the effect you want. What I'm talking about goes back to that mantra: agents shouldn't hallucinate privileges. When you explain that to employees, I think they really get it. They've heard the stories. If they understand that the outcome could be negative, could affect their job security or the organization's profitability, which ultimately could also affect their own personal security, it tends to work. That's what we always focus on.
We really put a lot of the human element into ours. We're very relationship driven. I've been that way for a while, and I just feel like that's the way to manage fears, relieve anxiety, and get people thinking about the positive outcomes and what they need to do to head toward them. I truly believe that.
Dayle Hall:
We could have a separate podcast just on the ethical and governance side of what we're facing now. As we come to the end of this, and I appreciate your time, one of my favorite quotes from all these podcasts is "agents shouldn't hallucinate privileges." But the world has changed so much just in the last year and a half or two years, since gen AI really started to take hold, and now with agents, everyone's involved. So looking into the future, you tell me, 12 months, two years, five years, whatever: is there something you're really excited about in the early stages of what we're seeing? And is there something where you'd say, here's what we need to be cautious of? What really gets you excited, when you tell your kids, and I tell my kids, learn this stuff because it's going to be important?
Mark Lynd:
I think once we get an agent ecosystem that an employee fully understands and is able to supervise, doing the things we talked about earlier, that's where there should be a lot of excitement. I was just having this conversation the other day with a team I was talking to at a customer. It was interesting, because the employees actually talked more than the leadership in the room. There was genuine concern and thought. There was some excitement as well.
What it boiled down to was there are kind of two visions. One's a little apocalyptic: they lose their job, blah, blah, blah. But the one I want to touch on, toward your question, is this. I said, imagine the work you don't like, that you toil with every day, that takes away from your job satisfaction, that keeps you from going home on time for dinner or doing something with your family, or that makes you work on the weekend. Imagine if that work was taken away and all you did was supervise and evaluate it. Wouldn't you be happier? You could see the room change. You could see them starting to think about it. I think that was a really great moment.
I did tell them one other thing that relates to this. I've heard a variation of it, and that's where I've kindly borrowed it: if you can't replay it, you can't govern it. What I was trying to tell them is that you're going to have to manage, monitor, and evaluate this. That's the only way you're going to be able to govern it. If you have five agents, it's like having five more employees, right? So if you're a manager with 10 employees and they each have five agents, it can get out of hand pretty quickly. I don't think anybody's really thought through how to scale this out, or the real human impact of that scaling. It comes back to what the organization is going to ask: how do we govern it? How do we make sure you can manage it and be successful, and that we as an organization can profit from it? So, if you can't replay it, you can't govern it. I truly believe that's a big element.
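"If you can't replay it, you can't govern it" implies recording every agent step so a run can be reconstructed after the fact. The sketch below is a hypothetical illustration (the class and the logged fields are invented, and a real system would write to durable storage, not a list): an append-only log captures each agent action with its inputs and output, and `replay` walks the steps back in order for review.

```python
# Hypothetical sketch of "if you can't replay it, you can't govern it":
# record every agent step append-only so any run can be reconstructed.
import time

class ReplayLog:
    def __init__(self):
        self._entries = []  # append-only; in practice, durable audit storage

    def record(self, agent, action, inputs, output):
        """Log one agent step with everything needed to reconstruct it."""
        self._entries.append({
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "inputs": inputs,
            "output": output,
        })

    def replay(self, agent=None):
        """Yield logged steps, optionally for one agent, in original order."""
        for entry in self._entries:
            if agent is None or entry["agent"] == agent:
                yield entry

log = ReplayLog()
log.record("rfp-bot", "draft_section", {"section": "pricing"}, "draft v1")
log.record("rfp-bot", "revise", {"feedback": "too long"}, "draft v2")
for step in log.replay("rfp-bot"):
    print(step["action"], "->", step["output"])
```

This is also what makes Mark's scaling concern tractable: a manager overseeing dozens of agents reviews the replay, not the live run.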
Dayle Hall:
I like the example that you could actually spend less time doing the mundane. Yes, we would all like to spend more time on things that are more creative. A simple one for me is an automated way of completing an RFI from all these analysts: tell us everything about the company, the size, all these product features. They're pages and pages.
We're working on an agent, built on our own product, to help fill in some of that. Today it's multiple people across the organization, hours and weeks of work, and then you hand it over, and some person at an analyst firm decides where you appear on a quadrant or a wave. Come on, we've got to get better than that.
Mark Lynd:
I'll tell you another one, too. Think about all the organizations out there that deal with a lot of RFPs, especially manufacturing and the public sector; this is the one I like to use a lot in those industries. Imagine if the RFP process could look at all the data from previous RFPs you won versus ones you did not win, with customers of similar decision and purchasing styles, and you could apply that and put out an RFP response in two hours versus the six weeks it takes now, with all the drama of people going, oh, that isn't right, I don't like that, where did you get that, why did you make that decision? It's a brutal process.
Dayle Hall:
Yeah, exactly.
Mark Lynd:
That's a really great example. But it does go back to that piece, if you can't replay it, you can't govern it, because if you lose an RFP, you're going to want to replay it and figure out, hey, how do we make it better? How do we govern it? I think your example and mine are two really good examples that could make a big impact.
Dayle Hall:
Yeah. Well, Mark, I appreciate your time. I know you and I could go for another hour. We'll maybe set up a part two to our discussion. So look, I appreciate your time. Thanks for being part of it. How can some of the listeners find out more about the stuff that you're doing, and where should they follow you?
Mark Lynd:
Absolutely. At my site, marklynd.com, M-A-R-K-L-Y-N-D dot com, I have a lot of information. I have a newsletter, Cybervisor, that goes out every Tuesday; it covers a lot of IT, AI, and cybersecurity, and it has a huge audience. And last but not least, I just had a new book release called Cyberwar, One Scenario. It's doing quite well; it's on Barnes & Noble, Amazon, Kobo, and Apple Books. It has AI, quantum, cybersecurity, and a battle over Taiwan.
Dayle Hall:
Oh, wow. An eclectic mix.
Mark Lynd:
It's quite a mix. The trailer goes out on Monday. We just released the book a week ago; it was a soft release because we were garnering reviews, and we've gotten amazing reviews. I took a lot of what I learned on the road and with our customers and really poured that into this book.
Dayle Hall:
Yeah, that's great, Mark. I appreciate your time. Thanks for being on the podcast.
Mark Lynd:
Thank you, Dayle. Appreciate it.
Dayle Hall:
Thanks everyone for joining us on this latest episode of Evolving the Enterprise. Make sure you subscribe, follow us, and we'll see you on the next episode.