AI or Not
Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.
E056 - AI or Not - James Imanian and Pamela Isom
A stolen credential can do more damage than a noisy “hack”, and AI is about to multiply the number of identities your organization has to defend. We sit down with cybersecurity and risk executive James Imanian to unpack why identity security has become the new center of gravity for cyber defense, and why many teams are still underestimating machine identities such as service accounts, tokens, and secrets.
From there, we push into what’s next: AI agents that can execute tasks independently, call other agents, and operate at scale. That capability is exciting, but it also creates real governance questions. What identity should an agent have? What access is appropriate? What gets logged? And what happens when an agent tries to “get the job done” by reaching into systems a human would know to avoid? We talk practical guardrails, emerging best practices, and why waiting for perfect standards is a losing strategy.
We also zoom out to the board and executive lens. Ethical AI and people-centered security are not add-ons; they’re how organizations protect decision integrity when data is compromised or poisoned. If your AI strategy ignores identity governance, privileged access management, audit logs, and data integrity, you’re not moving fast, you’re building on sand.
[00:00] Pamela Isom: This podcast is for informational purposes only.
[00:26] Personal views and opinions expressed by our podcast guests are their own and not legal advice,
[00:34] nor health, tax, or other professional advice, nor official statements by their organizations.
[00:42] Guest views may not be those of the host.
[00:50] Hello and welcome to AI or Not, the podcast where business leaders from around the globe share wisdom and insights that are needed right now to address issues and guide success in your artificial intelligence and your digital transformation journey.
[01:06] I am Pamela Isom and I am your podcast host.
[01:10] And so we have a wonderful guest with us today,
[01:15] James Imanian.
[01:17] He's a passionate cybersecurity and risk management executive.
[01:21] We collaborate on so much. We collaborate through CyberArk,
[01:25] IEEE, and then I just learned that I'm probably going to be collaborating with him on a few other things.
[01:32] James,
[01:33] welcome to AI or Not.
[01:36] James Imanian: Yeah, thanks Pamela for having me here. I'm looking forward to the conversation. You've had some great guests and I hope to contribute to the community here.
[01:44] Pamela Isom: Awesome. Will you tell me more about yourself, your career journey?
[01:49] And while you're talking about yourself,
[01:52] can you share some milestones that have contributed to the passion that you have for cybersecurity and risk management?
[02:00] James Imanian: Sure. And it's been a lifelong endeavor for me. Really, to sum it up, it's helping to defend the nation.
[02:08] But I'll also say that I've had a computer by my side since grade school. I had one of those VIC-20s, which some of the listeners might know, and I purchased the original Apple Macintosh.
[02:21] And so I've been a computer person,
[02:23] you know, from the mid-80s on forward.
[02:29] And I also had a passion to fly. So I went to the U.S. Naval Academy and got my degree there, went out of there and became a naval flight officer, and flew Tomcats for the first two-thirds of my career.
[02:44] And then I went back because of my computer passion. I finally got my master's in computer science and then transitioned within the Navy to what the military calls information assurance and what we call cybersecurity today.
[02:58] And that just led me along a path of cybersecurity and helping organizations operate in cyberspace, defend themselves, and, on the military side, if we had to attack, do so in and through cyberspace.
[03:13] So I had many of those assignments transition out of the military back in 2016 and just continued that journey around cybersecurity and risk management and how we're using technology to accomplish either a business function or a mission of an organization.
[03:31] And really, again, that's been my passion: how can technology help humanity, in our IEEE phrase,
[03:39] Advancing Technology for Humanity,
[03:42] but also specifically in the business world. How does technology help a business achieve its mission and provide value to its customers?
[03:50] Pamela Isom: You have an interesting background and I'll say it again, thank you so much for your service. That's amazing.
[03:57] Okay,
[03:58] so what about this identity vulnerability and risk management, where does that fit?
[04:05] James Imanian: Yeah, for me it's been,
[04:08] I spent a lot of time around how do we defend networks, so how do we certify and accredit networks?
[04:15] And the opportunity to come to CyberArk almost three years ago now has helped me center on identity. And identity has always been very important, but it was one area where I thought I could dive deeper, and it was a privilege to come here to CyberArk.
[04:31] If you look into what the cybersecurity space is doing today,
[04:37] it really revolves around identity and how are we protecting those identities, how are we deriving other identities from our human identity, which we'll probably talk about later.
[04:49] But really, as you use technology,
[04:53] who you are, what you are, what you want to do, what you're allowed to do in terms of governance and what your role is has become even more important over the past couple of years.
[05:03] So that's what I've been focusing my effort around is identity security and governance.
[05:08] Pamela Isom: Are we getting it right?
[05:10] James Imanian: I think we are getting it right in many places, but we have a lot further to go in maturity and around governance.
[05:20] And again, I think we now talk about multi-factor authentication and single sign-on. Those are all very good things.
[05:30] But we haven't seen a lot of movement around machine identities.
[05:35] And how are we interacting with the cloud, and how are we interacting with our software developers and how they're using identities, and how they're putting together cloud infrastructure or code that uses identities.
[05:49] So there is a lot more that we need to do in our community.
[05:53] Pamela Isom: How does that tie in? Like we were talking about machine identities and AI agents, we were discussing the loss of identities or credentials and how the adversaries go after secrets that are not protected.
[06:07] And we think that we're doing a good job of protecting it. But it seems like there's an emerging threat along the lines of machine identities. And then there's an association of that with AI agents.
[06:20] What do you think?
[06:21] James Imanian: Yeah, there's a lot there.
[06:23] So one, even before we had the AI agents, we had more and more machine identities.
[06:29] So think about these identities where a server talks to another server, or you
[06:35] ask a database for something, or an application for something, and it needs to go to other databases on your behalf.
[06:42] So we've always had that problem, and that problem was scaling already.
[06:46] And CyberArk did a study a couple of years ago where we went to a subset of medium-sized organizations, and at that point, for every human identity in the organization, there were on average about 42 machine identities.
[07:02] And we recently redid that study and that's now 82 or 83 identities. So the amount of machine identities that an organization needs to deal with has grown. I think it's going to grow and grow even faster in the future because you mentioned the AI agents.
[07:19] So while we have the large language models that have taken the news over the past two or three years,
[07:26] over the past year and going into 2026,
[07:29] there's going to be a large growth in AI agents themselves. And I like to define an agent as something that can do an independent task, with its own identity, and then come back with an answer.
[07:41] So it's not like you're stepping it through something.
[07:44] And an agent can call upon other agents, and some agents can create other agents.
[07:49] And so now you just think about that in terms of the machine identities that we just talked about and you're going to have a lot of those.
[07:56] I'll bring this section to a close with how AI agents really have the characteristics of both humans and machines. They're human-like because they're individual; they have their own mind.
[08:11] They might have, they should have, their own unique identity,
[08:15] but they're machine-like in that they replicate themselves, like we've talked about.
[08:20] They're talking to other machines, they're even generating their own code. And they're doing it 24/7, they're doing it at scale. So you've got to think about an AI agent as something in between a human identity and a machine identity.
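That "in between" framing can be sketched in code: an identity record that is unique and owned like a human account, but scoped and audited like a machine account. This is a minimal illustration only; the class name, fields, and scope strings are hypothetical, not any vendor's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Hypothetical identity record for an AI agent: unique like a human
    identity, but least-privilege scoped and audited like a machine identity."""
    agent_id: str                    # unique per agent, never shared
    owner: str                       # the human accountable for the agent
    allowed_scopes: frozenset        # explicit least-privilege access list
    audit_log: list = field(default_factory=list)

    def request(self, scope: str, action: str) -> bool:
        """Check the scope and log every attempt, allowed or denied."""
        ok = scope in self.allowed_scopes
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(
            f"{stamp} {self.agent_id} {action} on {scope}: "
            f"{'ALLOWED' if ok else 'DENIED'}")
        return ok

agent = AgentIdentity("agent-042", owner="j.doe",
                      allowed_scopes=frozenset({"crm:read"}))
assert agent.request("crm:read", "list_contacts")      # in scope, logged
assert not agent.request("hr:salaries", "read_table")  # denied, still logged
```

The point of the sketch is that both outcomes land in the audit log, which is what distinguishes a governed agent identity from an anonymous service account.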
[08:33] Pamela Isom: Is that presenting a challenge when it comes to identity management?
[08:38] James Imanian: Yeah, absolutely. So if we are talking about organizations that may not have the best strategy, or haven't really come to terms with this expanding number of machine identities, then as we add these AI agents,
[08:54] the problem only gets worse and the debt gets bigger.
[08:58] So there are a lot of initiatives out there, whether it's the NIST AI framework, or SANS, which has put out guidance and principles on AI agents that I think we all should be looking at.
[09:13] So the community is starting to generate what those frameworks should look like, what are those best practices.
[09:19] But again, it's very, very new, and I think a lot of organizations just haven't taken the time to ask themselves: okay, well, what is my strategy? What is my strategy around identity?
[09:30] What are my humans doing and then what are my agents doing and then how should I govern that going forward?
[09:36] Pamela Isom: It's a complicated governance model that we need to put playbooks to.
[09:45] So that's what I'm thinking. So I'm listening to you talk and I'm thinking, yeah, we need to dig deeper into governance playbooks around this whole identity and machine identity and the interlock with agents.
[09:57] So I think we have some homework.
[10:00] James Imanian: Yeah, yeah, I would agree. And you know, again, it's not like we haven't had these challenges with technology in the past.
[10:07] I think what's pretty unique about the times we're living in today and what 2026 and 2027 are going to look like for businesses and public sector agencies is that march of technology is going to be very, very fast.
[10:20] Right. So we've got to adopt, we've got to start right now. You know, we're not going to have the best answers.
[10:26] Like I referred to, following some of the SANS work, they admit these aren't fully baked protocols or fully baked concepts,
[10:34] but they're something to start with. Let's start with that, learn from it, and then adapt.
[10:40] Pamela Isom: Yeah, we'll, we'll dig into that some more.
[10:43] So that taps into national security, because that is a national security risk.
[10:49] It touches into geopolitics and also being people centered.
[10:55] How do we stay people-centered in the midst of machine identities,
[11:00] AI agents, and AI at large? Should we do more to stay people-centered?
[11:08] James Imanian: Absolutely.
[11:09] And you know, the people centered piece is what I often try to emphasize.
[11:13] I definitely don't like when you hear within the cybersecurity community that the users or the humans are the weakest link. I think they're the strongest link. They're the ones that we need to train.
[11:25] They're the ones that have the context, the intuition and the ability to observe and act and decide.
[11:33] So all that being said,
[11:34] AI and the AI agents really should be decision support for us.
[11:40] We create AI and AI agents for certain tasks. We need to do that in a professional way that's auditable and governable and ethically aligned. And then we need to monitor that and only humans can do that.
[11:54] We could talk a little bit about how we are going to have AI fighting AI, and we could talk about machine-speed cyber defenses. So I'm not saying that that's not going to happen,
[12:04] but humans are always going to be needed to design,
[12:08] monitor and improve upon those systems.
[12:11] Pamela Isom: So how does that tie to ethical AI and your perspectives on bringing more humanity into the AI era we're in?
[12:21] James Imanian: AI is a tool, and we need to make sure that we treat it as such. It shouldn't take away agency, I guess, is one term I would use here.
[12:32] And when we are designing our technological solution that includes AI, we need to align it with the ethics that we want in it. And aligning it to ethics is more than just the programming, especially when it comes to AI.
[12:48] It's about the data that we train it with. It's about how we, you know, red-team it. I know you've talked on past episodes about making sure we know what we don't want it to do,
[12:58] and making sure that what it does, it does well.
[13:00] And so there's all those different things that we need to do.
[13:04] But we shouldn't say that this is too big of a problem, or that things are moving too fast and we can't do ethics, we can't do governance. We have to do that, and we have to do it well.
[13:17] Pamela Isom: And we shouldn't set ethics aside.
[13:20] James Imanian: Should not. Yeah, correct. I mean, again, I think that's going to be the center. And coming all the way back to the business and government public sector side,
[13:30] businesses, I think, are going to be able to differentiate themselves on being able to deliver products and services in an ethical fashion,
[13:38] because there are going to be businesses that trip over that, and I think those will fade away fast. If you have a business that's solidly, ethically aligned, that's delivering value humanely,
[13:50] and then on the public sector side,
[13:52] you're going to have citizens that demand that. Right. So as we roll out government services,
[13:57] as we provide applications and ways for citizens to interact and get the value out of government that they want, we need to make sure that that too is ethically aligned.
[14:08] Pamela Isom: That sounds like a strategic advantage that businesses need to think about. So if we think that the ethical considerations are an overreaction,
[14:18] we're putting ourselves at a strategic disadvantage. That's what I heard you say.
[14:24] James Imanian: I believe that's true.
[14:26] There is some talk about, hey, we need to race faster, and we can bring up the AI race with China, and we can't have too many guardrails because that'll slow us down.
[14:37] Well, that's a race to the bottom. And oh, by the way, I don't think China wants that either.
[14:43] Right. So we've got to move forward in an ethical fashion, keeping to our principles.
[14:47] Pamela Isom: And if you don't have principles, get them, right? And rethink your principles as well with the integration of AI in the mix.
[14:58] So let's zoom out. So let's talk about your board director experiences.
[15:05] So tell me about that, and then, what are you thinking when it comes to AI and emerging tech from a board member and director lens?
[15:18] James Imanian: All right, so from the board member lens,
[15:20] I'll go back to your statement around strategic advantage. If you're a board member,
[15:26] you are, and whether this is a for profit or a nonprofit. I'm on a few nonprofit boards,
[15:32] but I think this goes across both, in that your organization has a mission. It needs to deliver value to its stakeholders, whether those are stockholders or the citizens and people that your nonprofit board serves.
[15:50] And therefore as a board, you need to be challenging yourselves, you need to be challenging the executive team to see how are we delivering that value that we said that is part of our mission, either our business objectives or our mission as a nonprofit.
[16:07] And technology has always done this, and AI, I think, is doing it even more: enabling organizations to deliver value faster, to understand the environment
[16:20] that they're moving in, to tabletop and, I'll say,
[16:23] war-game different scenarios, and then put that on the table using AI and other technologies today and figure out the best way forward.
[16:33] I think, you know, wrapping that up at a board level,
[16:36] AI is not a new problem. It's the technology risk that we've always had.
[16:41] But it does have advantages. And, same thing at the board level: if your board's not taking advantage of what AI can do,
[16:49] another organization will do that and then they'll out innovate you and then eventually probably replace you.
[16:55] Pamela Isom: Those are really good points. So I'm just going to add just a little bit, but I actually like what you said because we want strategic advantage and we need to pay attention to what could cause strategic disadvantage, which is what you just elaborated.
[17:10] I'm thinking about board director and board membership responsibilities. And I'm looking at it from a technology perspective, but also from a data perspective.
[17:20] And what comes to my mind is decision integrity,
[17:24] direct cost,
[17:26] indirect cost,
[17:27] how if data is at risk,
[17:31] the decisions that an organization is making are compromised as well. But we take pride as business leaders in making data-driven decisions. So that's all the more reason why we should pay attention to data integrity and zero in on what we can do to prevent and block our data from being at risk, or at least mitigate the risk as soon as possible.
[17:59] And then you kind of link that to technology risk at large.
[18:04] So if your AI ingests poisoned or compromised data,
[18:11] then your corporate strategy is misguided. So going back to the whole board
[18:18] thinking
[18:19] and thought leadership: boards should be thinking about that. So I'm just basically saying this to support what you've already said.
[18:27] Any comments?
[18:29] James Imanian: I think we're in agreement. And, you know, if you're a board leader, or aspire to be, or you're advising a board, please make sure that this is the case. Because while you were talking,
[18:40] what came to my mind is that yes, we make data-driven decisions, and yet how often we say to ourselves, well, we need more data, more data, more data. So there is some,
[18:51] I think, tendency to delay a decision. So, you know, often you're making decisions based on not all the data available to you,
[18:58] but the data that is available to you. You need to verify that, I think maybe to your point, and maybe make sure that you're picking and choosing the right data and then asking yourself, why do we have this data?
[19:10] Another thing that came to my mind is we went through knowledge management and I love to bring up knowledge management in this context. In terms of data generates information,
[19:20] information generates knowledge, and knowledge can turn into wisdom if you know where to use the knowledge.
[19:25] So, you know, all those things, in terms of: how is your technology, AI specifically, turning data into information?
[19:34] How are you verifying that information? Are you cross checking against knowledge across your board?
[19:40] And then you know, do you have the right wisdom? Are you applying the right wisdom to make the right decision?
[19:45] Pamela Isom: So what are you thinking? I was wondering about any breaches or incidents that have transpired that we can learn from as stewards of technology and data.
[19:54] Are there any incidents or breaches that come to mind that we could learn from?
James Imanian: You could take almost any breach over the past couple of years and trace it to the loss of an identity or secret that either made it a major breach or allowed the adversary to broaden its effects.
[20:17] So two things come to my mind. One, adversaries are not breaking into networks anymore, right? They're stealing credentials from one place and then just logging into your systems with known credentials that have just been compromised.
[20:33] So thinking around protecting identities, now you have to think to yourself,
[20:37] okay, that identity that was just used, as it goes further into the network, is it being used in the normal fashion that we've seen in the past? So think about analytics around whether that's normal behavior or not.
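The behavioral check James describes, "is this identity being used the way we've seen it used before?", can be sketched as a simple baseline comparison. This is a toy illustration, not a real detection product; the identity and host names are made up, and production systems would use far richer features than login-host counts.

```python
from collections import Counter

def is_anomalous(identity, host, history, min_seen=3):
    """Flag a login as anomalous when an identity with an established
    baseline appears on a host it has never been seen on.
    `history` maps identity -> Counter of hosts it has logged into."""
    seen = history.get(identity, Counter())
    if sum(seen.values()) < min_seen:
        return False        # too little history to judge either way
    return seen[host] == 0  # known identity, never-before-seen host

# Illustrative baseline: a backup service account that only touches db01/db02.
history = {"svc-backup": Counter({"db01": 40, "db02": 38})}
assert not is_anomalous("svc-backup", "db01", history)        # normal use
assert is_anomalous("svc-backup", "hr-fileshare", history)    # the stolen-credential pattern
```

The second assertion is exactly the "logging in with known credentials" scenario: the credential is valid, so only the deviation from past behavior gives it away.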
[20:53] And then often we also see breaches where a hard-coded secret in the code has been used. Either it's carried over from a development environment into production, or, I've even seen breaches where that hard-coded secret is in the user manual that's been issued, and it's never changed when they go into production.
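The remedy for that hard-coded-secret pattern is to keep the credential out of the source entirely and fetch it at runtime. A minimal sketch, assuming an environment variable as a stand-in for a real secrets manager; the variable name `DB_PASSWORD` is illustrative.

```python
import os

def get_db_password():
    """Fetch the credential from the environment at runtime (in practice,
    from a secrets manager) instead of embedding it in source, config,
    or the shipped user manual."""
    secret = os.environ.get("DB_PASSWORD")
    if secret is None:
        # Fail closed: refuse to start rather than fall back to a default.
        raise RuntimeError("DB_PASSWORD not set; refusing to start")
    return secret

# The anti-pattern from the breaches above -- never do this:
# DB_PASSWORD = "changeme123"   # ends up in the repo, the manual, the breach
```

Failing closed matters: a default or fallback password is how a dev-environment secret quietly becomes a production one.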
[21:16] So all that being said, we've got to protect identity.
[21:20] I think another thing that we need to acknowledge as a community: we used to think that privilege sat with just a few folks, maybe the 10 or 20 IT admins building infrastructure. But almost every worker executes some privileged task during their day.
[21:36] So think about the HR person updating a salary, or an administrator going into a SaaS application to do some function that affects the entire business.
[21:48] So again, you
[21:49] really need to look at yourselves and say, how are we protecting our identities?
[21:54] What are those privileged functions that could cause material effect to our business, and are we protecting them appropriately?
[22:02] Pamela Isom: And that reminds me of how as a business leader myself I have to always think about, this is the simplest example, what permissions have I granted and am I keeping track of it?
[22:13] And then when do I remove it?
[22:15] Because there are times when things go on where I have to elevate permissions, but I need to always remember to go back and set it back where it should be. And we should do that.
[22:25] I try to do this regularly, but it's so easy to forget. And then when it comes to employee transitions, like, let's say an employee leaves,
[22:36] you disable access, but there's so much more to do, right?
[22:41] James Imanian: Yeah, absolutely. There's a whole industry and set of tools around identity governance and administration, or IGA. And you've got to account for the joiners, the movers, and the leavers. And to your point, I think what we often get confused about, or don't keep track of, is: as somebody moves around an organization,
[23:02] are they accumulating credentials and privileges that they probably shouldn't have in their new job?
[23:09] And that's often the case in these breaches, where the adversary gets hold of somebody who isn't a privileged access manager but has accumulated all these privileges over their five years in an organization, and now the adversary is off and running.
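The "mover" problem, privileges accumulating as people change roles, reduces to a set difference between what a user holds and what their current role needs. A toy sketch with made-up role and entitlement names, not a real IGA tool:

```python
def excess_privileges(current_role, held, role_entitlements):
    """Return privileges a user holds beyond their current role's needs --
    the accumulation that movers pick up as they change jobs.
    `role_entitlements` maps role -> set of entitlements that role requires."""
    needed = role_entitlements.get(current_role, set())
    return held - needed

# Illustrative entitlement model:
role_entitlements = {
    "analyst": {"reports:read"},
    "hr":      {"reports:read", "salaries:write"},
}

# Someone who moved from HR to analyst but kept their old access:
leftover = excess_privileges("analyst",
                             held={"reports:read", "salaries:write"},
                             role_entitlements=role_entitlements)
assert leftover == {"salaries:write"}   # flag for review and removal
```

Run periodically over all users, a check like this surfaces exactly the accounts James describes: not formally privileged, but carrying privileges an adversary can run with.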
[23:23] Pamela Isom: And then we add agents on top of that. AI agents.
[23:28] James Imanian: Right. And so now you've got AI agents. And again, not necessarily a breach, but Microsoft itself got in trouble around Copilot going around within an organization, grabbing information, and not putting it on an audit log.
[23:45] Right. So there's that piece.
[23:47] And then you've got agents that are going to try their very hardest to get the job done.
[23:52] And they may go further. So if you're an individual and you're given a task, you may know not to call HR for salaries, or not to access this database, because that's just not appropriate.
[24:06] But if the agent thinks that that task will help it achieve its final objective, then it's going to try to do it. And again, if you haven't locked down your environment and given the agent the right accesses and identities, it's going to try to get that done, probably to the detriment of the overall organization.
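One practical guardrail for that "agent tries anything to finish the task" behavior is deny-by-default tool dispatch: an agent may only call tools it was explicitly granted, and anything else is escalated to a human rather than silently attempted. A minimal sketch; the policy table, agent ID, and tool names are all illustrative.

```python
# Illustrative deny-by-default policy: tools each agent may invoke.
ALLOWED_TOOLS = {"agent-042": {"search_docs", "summarize"}}

def dispatch(agent_id, tool, escalate):
    """Run a tool only if the agent was explicitly granted it;
    everything else goes to the escalation path instead of executing."""
    if tool in ALLOWED_TOOLS.get(agent_id, set()):
        return f"running {tool}"
    return escalate(agent_id, tool)

def human_review(agent_id, tool):
    """Stand-in escalation: block the call and queue it for a person."""
    return f"BLOCKED: {agent_id} requested {tool}; sent for human review"

assert dispatch("agent-042", "summarize", human_review) == "running summarize"
assert dispatch("agent-042", "query_hr_salaries", human_review).startswith("BLOCKED")
```

This mirrors the HR-salaries example above: the agent's attempt isn't just refused, it becomes a visible, reviewable event.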
[24:24] Pamela Isom: I love this conversation.
[24:26] It's so informative in a practical way.
[24:30] So I am about to get to the last question. But before I do so: in your case, as a leader in cyber, risk, and AI, we've got global instability in our environment today that we want to help bring some stability to.
[24:50] So my last question is typically, what words of wisdom or call to action would you leave with us? But before I ask that, I always ask, is there anything that,
[25:01] considering the conversations that we've had today,
[25:04] is there anything else that you want to talk about before we get to that last call to action or your final words of wisdom?
[25:12] James Imanian: Well, I think I'll tie the two together and answer the second question first: security, and cybersecurity, is not the job of the CIO or the IT department.
[25:25] Right. It's the whole organization. And so you often hear that message, if you haven't heard that already, I want to reemphasize that.
[25:33] And, you know, technology risk is a business risk. And going back to our board discussions and some of the other previous questions,
[25:42] we just got to realize that as we use technology,
[25:46] there are risks.
[25:47] But collectively, as an organization, we need to understand them. And what I like to say is make a conscious decision.
[25:54] We're not going to get everything right.
[25:56] We're not going to make all the right decisions. Let's at least say to ourselves, we put on the table,
[26:02] we considered that risk, we said that it's not applicable. And maybe we learn it does have a benefit, or it emerges as a risk, and then we deal with it.
[26:10] But let's not avoid talking about it. Right? Let's again make a conscious decision. Yes or no, then move forward.
[26:15] Pamela Isom: All right. Well, is there anything else?
[26:19] James Imanian: Well, then I'll answer the second question about my final thought. And it's around building off of that.
[26:25] Let's make a conscious decision.
[26:28] Let's make sure that we are thinking about identity holistically and identity security.
[26:34] Because again, I think a lot of organizations, whether it's private sector or public sector,
[26:39] probably aren't quite ready for all these machine identities and then you know, the AI and the agents.
[26:45] So there's a lot of work to be done, and I think we work best as a community.
[26:52] So, you know, reach out to other organizations that have done some trials around identity and machine identity security and what they're doing on AI, what they're doing about AI agents,
[27:02] you know, continuing to listen to this podcast and other podcasts, and SANS, and NIST with their AI Risk Management Framework.
[27:10] So there's a lot of resources out there and then contribute back into them because like we've mentioned before,
[27:16] a lot of stuff we've talked about, especially around the AI,
[27:20] these risks, well, these technologies and capabilities, are brand new, like two or three years old in terms of how they're being used in businesses. And agents are even newer.
[27:30] And so what you learn is valuable to the rest of the community. So just keep talking to people around you.
[27:37] Pamela Isom: Awesome. Well, thank you for being here and thank you for sharing those insights and words of wisdom and a call to action.
[27:44] So thank you very much for that. James,
[27:47] it's been great having you on the show,
[27:49] so thank you.