What's Up with Tech?
Tech Transformation with Evan Kirstel: a podcast exploring the latest trends and innovations in the tech industry and how businesses can leverage them for growth. We dive into the world of B2B, discussing strategies and trends and sharing insights from industry leaders!
With over three decades in telecom and IT, I've mastered the art of transforming social media into a dynamic platform for audience engagement, community building, and establishing thought leadership. My approach isn't about personal brand promotion but about delivering educational and informative content to cultivate a sustainable, long-term business presence. I am the leading content creator in areas like Enterprise AI, UCaaS, CPaaS, CCaaS, Cloud, Telecom, 5G and more!
Securing Agentic AI Identities
Interested in being a guest? Email us at admin@evankirstel.com
AI agents are starting to do real work inside real companies, and they often do it by acting as us. That’s exciting, and it’s also a security wake-up call. We sit down with Matthew Immler, Regional CSO, Americas at Okta, to unpack why identity security has become the primary battleground and why attackers increasingly prefer impersonation over breaking through a “front door” with zero-days.
We get concrete about what “non-human identities” actually means in plain English, and how agentic AI changes the rules. When employees connect new tools and click consent, an AI agent can gain access not just to a calendar, but to email, files, and other sensitive systems through broad OAuth scopes. From the security team’s perspective, the activity can look like normal user behavior, which creates a visibility problem at the exact moment enterprises are being pushed to adopt AI faster than their controls can mature.
We also talk solutions: treating AI agents as first-class identities with owners, managers, and access reviews; spotting non-human behavior through signals like abnormal client secret flows and extreme refresh token patterns; and why blocking AI outright can drive “shadow AI” instead of safety. Matt shares how standards work like cross-app access can shift control from end-user consent to IT policy so teams can approve tools, lock scopes down, and keep tight governance.
If you care about AI security, identity and access management, OAuth risk, and practical guardrails for agentic AI, this conversation will help you think clearly and act faster. Subscribe, share this with your security or IT team, and leave a review with the one control you think every AI agent should have.
More at https://linktr.ee/EvanKirstel
Why Identity Is The New Target
SPEAKER_00Hey everybody, really excited for this conversation today. Agents and identity in this new era is one of the hottest topics right now in cybersecurity and beyond. And uh companies are racing to adopt agentic AI. And what are the implications in terms of challenges and opportunities for the enterprise? Matt from Okta, how are you?
SPEAKER_01I'm good. Uh, how are you? Thanks for having me.
SPEAKER_00I'm doing great. Really admired uh the work you and the team are doing at Okta for some time. Before that, maybe introduce yourself, who you are, what you do at Okta, and what's keeping you busy these days.
SPEAKER_01Yeah, so my name is Matt Immler. I'm the uh regional CSO for the Americas at Okta. Uh, I've been here for about nine years. Um, originally started with the uh Auth0 side of the house and came over during the acquisition in 2021. Um, but I spend a lot of my time with um security projects in the US, escalations, um, you know, calls and things like this, and really just kind of talking to our customers about um, you know, what's going on in both the identity world and, more frequently now, uh, how that relates to AI.
SPEAKER_00Yeah, such a fascinating topic. Why is it? Why has identity suddenly become the center of all of these conversations the last couple of years? And uh what's driving that?
SPEAKER_01Yeah, so identity has become more and more of a focus uh these days in the security world because we're moving past the point where breaking in the front door with, you know, zero days is as feasible as just taking over somebody's identity and impersonating them to get in. So identity has become more of that first step that attackers are using. Uh, it's kind of a lower bar for them in a lot of cases to try to get access to networks and to um businesses. And so we're just starting to see that take a lot more focus away from some of those traditional methods of security, you know, your traditional firewalls, your traditional IDSs, things like that that you might see, and more along the lines of, you know, how do we secure our identities?
What Non Human Identities Mean
SPEAKER_00Yeah, really important conversation. Of course, non-human identities are now front and center. How do you describe what that means in plain English for maybe your friends and family who aren't steeped in security speak?
SPEAKER_01Sure. So non-human, when we talk about non-human identities, we're talking about um identities or accounts that are not actually backed by a real person. So uh traditionally, for a very long time, this was service accounts, which you know, they had a single job, a single purpose, they did one thing repetitively. And now these days, you know, over the last couple of years, we're looking more at what this looks like in terms of agentic AI. So AI agents that are acting on behalf of humans. And so although they're still not an actual real person, they're able to do things dynamically. They don't have a set path that they're gonna follow. So uh they can you know kind of rationalize and make decisions on their own, but they're still not an actual human identity.
SPEAKER_00Yeah, really well said. And how big of a uh a trend is this in enterprises that you talk to every day in terms of the machine identity reality, the problem, let's say, even is it you know, thousands in a large enterprise? Are we gonna get to millions this year or next?
SPEAKER_01Yeah, so um one of the biggest things I hear across uh, you know, all of our customers that I talk to is just the you know increasing pressure that they're feeling from either their um, you know, their board or their CEOs that you know, we can't fall behind. You know, we this AI push is big, so we need to implement now, we need to implement it fast, we need to make sure that we stay ahead of our competitors. And a lot of times what that's causing is security to fall a little bit by the wayside in in you know the efforts to try to roll these things out as quickly as possible. So what's keeping security teams up at night is you know the speed at which they're being forced to allow this sort of these sort of things to operate in their networks. And a lot of times the big issue is they don't even know how many are in their networks. And that's that's the big problem and where a lot of that uh heartache is coming.
Consent Screens And Data Exposure
SPEAKER_00Yeah, unknowns are are never good. Uh and so we've gone from AI assistants that really don't quite work, but we we try to use them to AI agents that are actually taking action on our behalf that are really compelling. I have four different agents that are working for me now as we speak, from Cloud Cowork to Manus and OpenClaud. And I'm a little bit of an early adopter to say the least, but they're doing real work. And um I'm a little concerned uh personally, but now compound that into an enterprise of thousands. How does agentic AI really completely change the security equation?
SPEAKER_01Yeah. So as I kind of mentioned, one of the biggest issues that we have right now in the security equation is visibility. So a lot of times, one of the biggest things I hear is that, you know, employees or users inside a company, they're going and they're trying to keep up with the mandate that they've been given to implement and use AI. And so, you know, everybody wants to get ahead, everybody wants to be the first, everybody wants to be innovative, and they're going finding these cool new tools out there that they can put to use in their daily job. But what happens a lot of times is they go and they introduce this tool and they essentially are letting it impersonate them on the company system. So, you know, we're all probably very familiar with that consent box that comes up that says, you know, you've logged in with this. Do you want to allow it to access your email, your whatever else, things like that. So now you have these non-human identities, these agentic AI "users," quote unquote, actually coming in and doing work on behalf of the employee or the user of that organization. And there's not anything that's necessarily directly, you know, differentiating it from what the IT and security team sees from their end. So they don't know if those actions are being taken by a real user that's been trained, knows to follow policy, all those sort of things, or something that's you know non-human.
SPEAKER_00Yeah. And you think the concern is is mostly about you know app usage or is it access to data? Is it API usage, you know, systems? Where where are the where are the concerns in the the uh holes, the gaps today?
SPEAKER_01Yes.
SPEAKER_00Okay, so that was easy.
SPEAKER_01Uh, no, no. So data is one of the biggest concerns. Um so when you know, when we're talking about that consent flow that I just mentioned, you know, when a user is granting access to their data as part of this, you know, allowing an agent to act on their behalf, a lot of times they don't realize the extent of what data they're providing access to. So I always joke around and use my fictitious AI agent, super cool calendar company.com, whatever, you know, something that's like, oh yeah, we organize your calendar, we'll help you, you know, make your meetings more efficient or things along those lines. You know, user gets it, it's great. They authorize and say, Yeah, I gave it access to my calendar. Well, maybe you didn't just give it access to your calendar. You might have given it access to your calendar, your email, your file store, other company records that the user's not even aware of. So, you know, it depends on that scope. And a lot of times, you know, folks who may not be as technical or may just be looking to, you know, just accept whatever comes across. I mean, we all know the issue with you know those license agreements everybody just clicks accept to, similar type of issue. You know, it's like, oh, this is the access it needs to work. Sure, go ahead. And then at that point, that agent can then, you know, you mentioned uh applications. It can start acting on behalf of that user on maybe other applications. So let's say they experience a breach or get hacked, and the people who hacked it are looking at, well, what does it have access to? Now they can tell those agents, well, go try to farm email from here or pull files from here, and all it's really supposed to be doing is touching your calendar. So it's a big mix of all of those things uh combined right now.
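Matt's calendar example boils down to comparing the scopes an app actually requests against what its stated purpose needs. Here is a minimal sketch of that check; the scope names, app identifier, and allowlist are all hypothetical illustrations, not an Okta or OAuth-provider API:

```python
# Hypothetical sketch: flag OAuth consent requests whose scopes exceed
# what the app's stated purpose should need. All names are illustrative.

EXPECTED_SCOPES = {
    # A calendar-organizing agent should only need calendar scopes.
    "calendar-app": {"calendar.read", "calendar.write"},
}

def excessive_scopes(app_id: str, requested: set[str]) -> set[str]:
    """Return the scopes requested beyond the app's expected set."""
    expected = EXPECTED_SCOPES.get(app_id, set())
    return requested - expected

# The "super cool calendar" consent screen asks for far more than a calendar:
requested = {"calendar.read", "mail.read", "files.read_all", "offline_access"}
extra = excessive_scopes("calendar-app", requested)
```

In this sketch, `extra` surfaces the mail, file, and offline-access grants the user probably never intended, which is exactly the gap between "I gave it my calendar" and what the consent box actually authorized.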
Governing And Detecting AI Agents
SPEAKER_00Yeah, and I think most companies, uh to say the least, don't have identity, uh human identity really figured out. And yet we're talking about a world where agents, you know, supposedly might outnumber humans, you know, 50 to 1 or something. Uh and that's where a lot of uh the upside opportunity for AI exists. What does it take to get their infrastructure, uh, systems, processes ready for that kind of world?
SPEAKER_01Yeah, so from our perspective at Okta, you know, we look at, you know, we're a big identity company, so identity is at the core of a lot of everything we do. And so we look at it very much in the in the scope of how can we govern these identities? How can we detect them, know where they are, make sure they're in your directory, make sure they're assigned a manager, you know, treat it essentially as a first class identity. So do we know if this is an AI identity or not? How do we get control over it? You know, are we putting it through access reviews? Do we have somebody who's responsible for it? Can we identify it to begin with? Um, and there's all sorts of things that we're working on tooling, uh, detection mechanisms, things along those lines that help you know people do that easier. So, you know, just as an example, you know, a human user, you know, they log in, they do an authorization flow. That's totally normal. A human user does not go do a client secret flow, like that's totally abnormal. Same thing with like refresh tokens. You know, a human user might want one every one hour. An agent might request one 500 times a minute, you know, like what are these anomalous behaviors that you know you can utilize to detect something's wrong? This is probably an AI agent, a non-human identity impersonating a human. And how do we now bring that in-house and govern that identity?
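The behavioral signals Matt lists — a "human" account doing a client secret (client credentials) flow, or requesting refresh tokens at machine speed — can be sketched as a simple heuristic over authentication logs. The event fields, account tags, and the rate threshold below are assumptions for illustration, not Okta's actual detection tooling:

```python
from collections import Counter

# Illustrative heuristic: flag accounts tagged as human whose auth-log
# behavior looks non-human. Field names and thresholds are assumptions.

REFRESH_RATE_LIMIT = 10  # refresh-token requests per minute a human might plausibly make

def flag_nonhuman(events: list[dict]) -> set[str]:
    flagged = set()
    refreshes = Counter()
    for e in events:
        if e["account_type"] != "human":
            continue
        # A human authenticates interactively; a client-credentials
        # (client secret) grant from a "human" account is abnormal.
        if e["grant_type"] == "client_credentials":
            flagged.add(e["account"])
        if e["grant_type"] == "refresh_token":
            refreshes[(e["account"], e["minute"])] += 1
    # Extreme refresh-token rates are another non-human tell.
    for (account, _minute), n in refreshes.items():
        if n > REFRESH_RATE_LIMIT:
            flagged.add(account)
    return flagged
```

Accounts flagged this way are candidates to be pulled in-house and governed as first-class identities — assigned an owner, put through access reviews — rather than left impersonating a human.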
SPEAKER_00Interesting. And you know, what are some of the biggest mistakes you see companies doing right now with this whole identity management uh challenge or AI security in general? Um, what what what are they doing wrong? Where are the missteps and how do how are you helping them there?
Cross App Access And IT Control
SPEAKER_01Yeah, so one of the biggest missteps I see is companies who are really trying to actually do the right thing. So they're going out and they're telling their employees, like, hey, we're not fully up to date or ready yet for all this AI. So we just don't want you to use it yet. We're blocking these tools or doing something to prevent you from using them. And all that does is spur employees to then try to get around those safeguards or do things even in more of a shadowy way where they're accessing things that you've explicitly said don't do. So a lot of times what I tell customers or or just folks that I speak to about this subject is, you know, you have to provide a path for it to be used legitimately and not try to just, you know, straight up run in fear and block it all because you people are gonna find a way. So if you're you know, if you're saying, like, okay, fine, maybe we don't want you to use platform X, but we want you to use Platform Y, fine. Tell employees, use platform Y. We have a license agreement with Platform Y. We know our data is protected. So at least then the majority are gonna be going and using the tool that you want them to use and have a little more oversight over. So, you know, trying to just kind of deny the reality of what's happening in this field is one of the biggest mistakes I see happening right now.
SPEAKER_00Interesting. And what about internally uh at Okta? I mean, how are you guys looking at agents? Have you started on this whole agentic workflow yet? And AI in security in terms of uh drinking your own champagne. How have you adopted some of these practices?
SPEAKER_01I like that you said drinking your own champagne. I used to say eat our own dog food, and then I got you.
SPEAKER_00That's disgusting. Yeah, I've always wondered about that.
SPEAKER_01So we don't eat our own dog food, we drink our own champagne. Thank you. But regardless, um, no, so we have tooling in place, like I mentioned, about being able to identify these identities and things like that. But a big part of what we're doing right now is working in the standards world around these agents and AI. So, for instance, cross-app access is a really good example of that. So that's an extension that was added on to the OAuth standard that Okta played a big part in developing. And what that does is it's something that can be adopted by these independent software vendors who produce these AI agents. And then that allows your identity platform like Okta, for instance, to more easily regulate these identities and identify them. It kind of takes that control that I was talking about out of the hands of the users and employees, where they're no longer the ones saying, I consent to this or I'm gonna allow this agent in. With cross-app access, the IT and security team has that ability. So they're able to say, okay, maybe I'm cool with super cool calendarorganizer.com, my fictitious example. Maybe it's great. Maybe we approve of it. But we want to make sure that these specific users can use it. It's only locked down to the calendar, it can only perform these type of API calls. We have a tight rein around it, and we have a manager assigned to it and somebody who's responsible for it. So that's the way we approach it: how can we make sure that we're governing these identities? As I mentioned, you know, we like to call them first-class identities now, making sure that they follow the same processes that a human user would, and that the IT and security team have full visibility over it.
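The shift Matt describes — from end-user consent to IT policy — can be sketched as a pre-approved agent registry that pins scopes and assigns an owner. The data model below is a hypothetical illustration of that idea, not the actual cross-app access (OAuth extension) wire format or an Okta API:

```python
from dataclasses import dataclass, field

# Hypothetical policy model: IT pre-approves an agent, locks down its
# scopes, and assigns a responsible owner, so authorization no longer
# rests on an end user clicking a consent box. Names are illustrative.

@dataclass(frozen=True)
class AgentPolicy:
    agent_id: str
    owner: str                      # human responsible for this agent
    allowed_scopes: frozenset = field(default_factory=frozenset)

POLICIES = {
    "supercoolcalendarorganizer": AgentPolicy(
        agent_id="supercoolcalendarorganizer",
        owner="it-admin@example.com",
        allowed_scopes=frozenset({"calendar.read", "calendar.write"}),
    ),
}

def authorize(agent_id: str, requested_scopes: set[str]) -> set[str]:
    """Grant only the intersection of requested and IT-approved scopes;
    unknown (unapproved) agents get nothing."""
    policy = POLICIES.get(agent_id)
    if policy is None:
        return set()
    return requested_scopes & policy.allowed_scopes
```

Under this sketch, the fictitious calendar agent can request mail or file scopes all it likes: it only ever receives the calendar scopes IT approved, and an agent nobody registered receives nothing at all.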
Human In The Loop Future
SPEAKER_00Fantastic. So if we were to fast forward a couple of years, what would um you know your agent look like? How would it behave and act as a you know, co-worker, a colleague of yours, uh you know, Chief Security Officer Light or Agent Matt, what would you hope that agent to be doing and helping you with on a day-to-day basis?
SPEAKER_01You know, it's really hard to speculate on the future or what they'll be capable of. I get asked a lot, are we going to get to a point where agents are approving other agents to do things? Um I do hope that there's always some sort of element of human in the loop for these AI agents so that it's not going completely autonomous everywhere. Um, you know, and also that we have ways to ensure that should there ever be a failure in the AI agent land, we have a way to counteract that. You know, for instance, if there's one human managing 20 AI identities and all of a sudden the AI goes down, can we still do the job without it? You know, so making sure that they're tailored to fit the job you need them to do, but also having those contingencies in place to ensure that, you know, you don't ever hit a really hard place if something goes wrong. But um, I'm hoping that they don't get to the point where they're approving themselves and creating themselves. I mean, they already are to a certain extent, but not to a place where we're actually okay with that.
SPEAKER_00Yeah, interesting times. Well, maybe in a couple years I'll be on the beach, and my agent will be doing these interviews and just uh interviewing your agent. So brave new world. Thanks so much for opening, you know, this window into machine identity and all the opportunities and challenges. Really great to catch up with the leader in this space.
SPEAKER_01Yeah, absolutely. Happy to be here.
SPEAKER_00Thanks, Matt. And thanks everyone for listening, watching. Check out our TV show, techimpact.tv, on Bloomberg and Fox Business. Thanks, everyone. Thanks, Matt.
SPEAKER_01Bye.