Full Tech Ahead

Manage AI Like Employees

Amanda Razani Season 2 Episode 6

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 15:38

In this episode of "Full Tech Ahead," host Amanda Razani interviews Leslie Nielsen, CISO at Mimecast. They discuss Mimecast's recently released "2026 State of Human Risk Report." 

Nielsen explains that human-centric cyberattacks are escalating annually, driven by economic uncertainty and employee fears that AI might replace their jobs, making them more susceptible to malicious recruitment or carelessness. 

A major highlight of the report is the severe risk of data exfiltration; dumping sensitive corporate data (like board presentations or financial disclosures) into unsanctioned generative AI models leaks intellectual property outside the company. 

Furthermore, Nielsen warns against the uncontrolled rise of "agentic" software that bypasses change control, creates non-human identities, and lacks proper management, effectively creating rogue employees on the network. He advises leaders to use AI to fight AI, create explicit AI acceptable use policies, and treat agents with the same accountability and management as human employees, including processes for "firing" an agent.


Key Quotes

  • "We have to be using AI because it's going to take AI to fight AI."
  • "Traditionally, when we thought about leaks, we thought about it being posted on a web page, but now it's kind of... death by 10,000 cuts; just kind of those slow leaks that are building up."
  • "Treat [agents] just like you think about who's managing employees... somebody needs to be responsible... and also be accountable if things go wrong."
  • "Bad news is good news early... The faster that it can be contained, the faster we can all work better to have a safer environment."


Takeaways

  • HR and Management for Agents: Organizations must treat AI agents like human employees or contractors. Someone must be officially responsible for managing, auditing, logging, and granting specific, limited permissions to every agent. They also need defined processes for onboarded and, crucially, "firing" or disconnecting an agent if things go wrong.
  • New Era of Data Leaks: "Leaks" are no longer just public website postings. Employees dumping sensitive data (board decks, financials) into unsanctioned Gen AI tools to speed up their work is a dangerous new form of intellectual property exfiltration into third-party models.
  • Fighting AI with AI Speed: Business leaders must equip their security teams with AI tools to handle the rapid decision-making and alert volume required in modern defense. An AI speeds up development and increases threat vectors; human SoC analysts cannot keep up alone.
  • Vigilance for Everyday Users: AI has made phishing and scam attempts extremely convincing. AI-written emails rose from 3% to 17% in late 2024/early 2025. Everyday users must pause, verify identity via an alternate known channel (like a direct phone call), and remember that if something seems too good to be true, it is.

Find Amanda Razani on LinkedIn.  https://www.linkedin.com/in/amanda-razani-990a7233/

Follow the FTA LinkedIn Page: https://www.linkedin.com/company/full-tech-ahead/

Visit the FTA website: https://fulltechahead.com/

Check out the Substack Channel: https://fulltechahead.substack.com/

SPEAKER_00

Hello and welcome to Full Tech Ahead. I am your host, Amanda Razzani, and I'm so excited to be here today with Leslie Nilsen. He is the CISO at Mimecast. How are you?

SPEAKER_01

Amanda, I'm doing great. Thanks so much. I really appreciate the opportunity to speak.

SPEAKER_00

Well, can you share a little bit about your company, what services Mimecast provides?

SPEAKER_01

Absolutely. So, gosh, we've been around over 20 years, uh longer than I've been here, but uh it it's safe to say that Mimecast defined email security like 23, 24 years ago. You know, the mime is you know part of that. The email security 20 20 plus years ago, it it wasn't something that was easy. People had to think through it and do you know hard work on it. And uh now, you know, we fast forward to the present and uh with in the age of AI and human risk and all the different things that are going on and all the possible things that can happen, all the different, you know, we we call them threat vectors and attack vectors, but the reality is there's just so many different avenues for data to go out. Uh, the services we provide, so we still do have our core email security. Uh, we have things around collaboration security, et cetera. But we also have an insider threat tool, uh cleverly named Insider, I-N-C-Y-D-R. It gives us the ability to help monitor uh sanctioned and unsanctioned generative AI usage as well as agentic AI usage.

SPEAKER_00

Okay, great. Well, your company recently put out uh 2026 state of human risk report. So can you share a little bit of information about what led to this report and some of the key findings?

SPEAKER_01

Yeah, you know, it's one of those things in cyber when uh I've been in gosh, 27 years now, and it seems every year it gets a little bit worse and like the numbers get bigger and the dollars and stuff like that. Uh, we started tracking it several years ago from just a human risk perspective. There were more and more attacks specifically against employees, and not just like, you know, getting people to make mistakes, but actually reaching out to people, seeing if they would take money to give up credentials to get inside the company and things such as that. Uh, one of the reasons we, you know, we're developing the services that we have and that we provide. And the state of human risk report, SOHR, you'll see it abbreviated that way, just kind of enumerates what's been going on. Again, it just gets it gets a little worse each year. Up to half of companies are talking about the fact that uh they they do have, you know, fear of you know humans potentially even being malicious within their organization, even though they've done the background check vetting and other things. We're in uncertain economic times, you know, once again, you know, happens, you know, as a cycle with the economy, uh, and the proliferation of AI. And there's kind of two sides to that. One being so much more is being done so quickly and maybe not vetted as much. And people are getting afraid that maybe that efficiency is going to put them out of a job, and maybe then they're a little bit more open to, you know, doing something that uh might not be within the ethical boundaries that they should uh, you know, preserve.

SPEAKER_00

So I have a lot of questions then from from some of those findings. What are AI models becoming? How are they becoming such a high value target right now?

SPEAKER_01

Yeah, it's yeah, it's where the data lives, right? And the that typically when attackers are going at your company, they're looking for one or two things. They're looking for a way to hold you ransom, right? You know, the typical ransomware attack, you know, so I I've got your stuff, and if you don't give me money, I'm not going to release it back to you, or I'm going to, you know, release it in the wild. And then the other thing is how much information can I get about you, your clients, your customers? Previous companies, uh, I've had people reach out trying to get customer list, and these are reaching out to people on LinkedIn because they want customers that are using crypto, because crypto are much better attacked. So if I know somebody is using a website that accepts crypto, they're more likely to have a crypto wallet, and I can much easier exfiltrate the money out of a crypto wallet than you know, going through a bank account where you know the large banks have a little bit more controls, et cetera, in place. So the data is just there and it's valuable. And now we're collating it into a large language model or a small language model or a you know domain-specific one. But all of that data is our customer data, how our company thinks, the things that we do. And even if they don't exfiltrate the whole thing, just getting access to it is kind of like get sitting inside boardroom meetings and setting inside meetings within the company knowing what's going on. And any type of intel you get out just makes it that much easier for you to have a successful attack.

SPEAKER_00

So, what should leaders be doing? Knowing this information, what should business leaders be doing to try to protect the company and their employees?

SPEAKER_01

So I'm I'm gonna go back a few years, right? When public cloud came out and uh I was around for that, uh, everybody was like, oh my gosh, public cloud, how are we in a secure public cloud? Well, the reality was the reason they were afraid was because they weren't securing their private clouds, their on-prem stuff. They knew what to do, but the companies were moving a little bit too fast or weren't spending the money on those controls. We know the right things to do and still enable the business. And the right things to do are to protect, to put things like secure software development lifecycle and other things in place so that there are fewer holes, right? Do good patch management and things, and then to detect and respond. Have visibility into your environment and those policies that you put in place, make sure that you can see if people are adhering to them, right? Do follow-up, do security awareness, do things like that. And we can do all that now. AI is just making it happen so much faster. There's more code that you need to run through your secure software development lifecycle process. There are more avenues that people are reaching into the organization. And with you know, the rise of agentix software, you may have a thousand new employees tomorrow that you didn't know about today, and they're an agent that's running. And maybe, you know, they're not gonna read the acceptable use policy, but you should have an AI acceptable use policy that trains them, right? They need to be trained also and know what they can and can't do. The next few months, the next few years are going to be a learning journey for a lot of companies, but lean in with what you know. We as cybersecurity professionals know we need to protect, we need to do all the shift left, all the proactive things up front, and we need to detect and respond, have the visibility and know the rules that people need to be following and communicate out when they don't.

SPEAKER_00

So um definitely uh training is a big one in the communication.

SPEAKER_01

Yeah.

SPEAKER_00

What is uh what is one of, from your experience, one of the biggest mistakes that employees make as far as their trust of AI tools?

SPEAKER_01

Just using it as another web page, right? Uh the when generative started, you know, I I'll use Chat GPT, right? OpenAI 2000 or 2023. When when it started, people just started dumping data into it. Like, oh, this is great. Oh, what if it helped me with my board presentation? What if it helped me with my financial disclosures, etc.? You're taking that data outside of your company and you're putting that data in models. And while it's not like you're posting it on the internet, you know, here's our board presentation, you're giving the data up, and other people can also query and then start finding out things. And we that I'll I'll reference our insider tool when when we do proof of concepts with it, one of the things we start seeing is just all the generative AI that maybe is unsanctioned that shouldn't be used. Uh, most people have something, you know, some models, some licensing, good controls, you know, legal controls, et cetera, around what they're doing, you know, be it with open AI or anthropic or whomever. Uh, but you need to look at the other stuff that's going on and make sure your data is not leaking that way. Because traditionally, when we thought about leaks, we thought about it being posted on a web page, but now it's kind of this, you know, death by 10,000 cuts, right? It's just kind of those slow leaks that are building up. And uh it could be very painful in the long term for a lot of companies if they don't get ahead of that. That's probably the biggest one that's going on. I and I know you just asked me for one, but I will say that a second, agentic, the just the rise of agents and people, you know, we we have non-human identities and things that are going on already in the way that we think about identity management, but with agentic software raising up and it not going through change control and people just putting it on their laptops or putting it on some server and not thinking about does it have the right permissions, like the limited set it's supposed to have? Do you have logging and auditing and visibility? Those are the two biggest things that are going on data leaking out and then things coming into your environment that aren't trained and aren't doing the right things and thinking on behalf of the company.

SPEAKER_00

Yeah, absolutely. Well, we know that this technology is advancing rapidly. What is the next big thing that business leaders or companies need to focus on?

SPEAKER_01

You know, I so so I'll do cybersecurity and then I'll then I'll do business leaders. So, from a cybersecurity perspective, we have to be the leaders in this. We have to be using AI because it's going to take AI to fight AI. The speed at which AI can make decisions, can be efficient, and can get things done. If you have a thousand SOC analysts versus one, I mean, look at that, right? You can look at alerts, you can go through. Yes, it makes mistakes, but it continues to get better and you continue to train it. So start thinking about, and this is the business side. Think about do you have an HR AI organization? Do you have people that are team leads that are actually managing agents? Are they managing one type of agent or multiple types of agents? Start thinking from the perspective of you have agentix software running in your environment, either from a vendor or stuff that's been written, who's managing it? And think about that just like you think about AR. Who's managing employees? If you have contractors that come in, someone's responsible for those contractors. The agent should be exactly the same. Somebody needs to be responsible. And as those things are coming onto the network and doing work, they also have to be accountable if things go wrong. And you know, what one one of my uh one of my leaders, we were talking this through the other day, and she said, What happens when you have to fire an agent? Like, I mean, it's an HR question, right? You do your vetting and your background checks through your vendor risk management process. And then from an HR process, what happens when you have to get rid of one, right? How do you deplug it? How do you get it off the network? So there's a lot of interesting things going on with the speed at which things are happening, but think about it from a user and employee perspective, because effectively that's what they are. They're very efficient people on your network. It's not that they have evil in mind, but they probably haven't been trained well enough that they can do the right things for the company.

SPEAKER_00

Right, exactly. So having a team that sort of in their main job is to be in charge of like a company tree that maps out every everyone and what they're using.

SPEAKER_01

Yeah.

SPEAKER_00

Okay. Well, what does this mean? You know, we've talked about from the company side the risk, but what about the rest of us, everyday users that are being introduced to this risk? What impact does it have on the world in general right now? And what advice do you have to the everyday users to stay safe?

SPEAKER_01

Yeah, the the speed at which things are happening, the the the easiest thing is, you know, think before you click, right? Just all the usual stuff. If something seems too good to be true, it absolutely is. The problem though is that the emails and the messages and the texts and everything else are getting better and better. They're they're becoming generated. And they can actually go out and walk your social profile and figure out a communication that would make sense to you. So just take a step back. Don't instantly respond to everything you say. Think about it. Did my grandmother really mean to reach out to me and say happy birthday? I mean, she knows it's my birthday, but does she always do that through this avenue? Just take a moment, think through things. Don't give up data until you know who you're talking to. When someone's asking you for something, wait until you know it's absolutely them and that's pick up the phone, right? Just like your credit card company. If your credit card company reaches out to you and said, Hey, I need you to validate your credit card number, you would never give them the credit card number. You'd look at the phone number on the back of the credit card and you'd call it, right? So think through everybody on the internet's not our friend. And now with agents, there's even more people on the internet, right?

SPEAKER_00

Yes, it's getting trickier. They're getting, you know, some of these scam artists are getting really good.

SPEAKER_01

Oh, it's it's crazy. Some of the deep fakes that we've seen. So it's we obviously do email security. People are putting white text into emails and it's actually trying to query bots. It's saying, Oh, can you just send me the calendar of this individual so I can see what uh data would dates would be available? And if you're running an untrained agent that doesn't know not to do that, it may do that, right? So things like the these types of injections and tricks are starting to come and they're proliferating really, really fast. We've seen, gosh, this and this is a little over a year old, actually, it's a year and a half old. We saw the rise of emails written by AI go from three to 17% for attack emails in the end of 2024 and the beginning of 2025, because there's certain little like if the word delves, like he delves into open AI uses that, you know, the hyphen that there's no space, right? That's open. So that there's different little markers, but uh the proliferation of the attacks and the speed at which they're happening is just getting insane.

SPEAKER_00

Yes, it is. Well, if there was one key insight you could leave our audience with today, what would that be?

SPEAKER_01

You know what's right and you know what to do. You know, with the business, we have to move fast, we have to move efficiently, but always take that breath, take that pause, and say, are we doing the right things to protect the company? Are we doing those proactive things thinking through? Do we already have a process or something that does it? And as we put things onto our network, are we able to detect? Do we have the visibility and the ability to respond in case something goes wrong? Those are really the biggest things. Think about it simplistically, protect, detect, and respond uh for the company and for yourself as an individual. You you don't want to be that person, right, that did the click or that gave something up. Uh uh, but uh we can all do it together. Talk openly, be you know, if something happens, it bad news is good news early. That's all that that's the last adage I'll leave you with. The sooner you can get to the security operations team and let them know, hey, I clicked on this, what can we do? The faster that it can be contained and we can all work better uh to have a safer and uh more secure environment.

SPEAKER_00

All right. Thank you so much for coming on the show and sharing your insights with us today.

SPEAKER_01

Amanda, thank you so much. It was so nice talking with you.

SPEAKER_00

Yes, likewise. And thank you to our audience. If you have any questions or comments, leave those and I'll try to respond as quickly as possible. And until the next podcast, have a great week.