ECI Pulse

Cyber Frontlines: Securing the Future with ECI - Part 2

ECI Season 1 Episode 7

Episode 2 – AI in the Crosshairs: Threats, Trends, and Tactical Response

In this episode of Cyber Frontlines, Jeff Schmidt, CEO of ECI, and Jonathan Brucato, Director of Security Operations at ECI, examine the double-edged nature of artificial intelligence in today’s cybersecurity landscape. As AI becomes more powerful, it’s being used both to defend and to attack—raising urgent questions about governance, strategy, and risk.

They explore:

  • How attackers are leveraging AI to scale phishing, impersonation, and malware
  • The importance of governance and control in responsible AI adoption
  • The concept of Zero Trust and its role in modern security frameworks
  • The gap between AI adoption and strategic implementation across organizations
  • Why a balanced approach to AI is essential for both innovation and protection

Whether you're navigating AI integration or defending against AI-driven threats, this episode offers grounded insights into the opportunities and challenges ahead.

Welcome back to episode two, AI in the Crosshairs: Threats, Trends, and Tactical Response. Sounds really militaristic, but in reality, if you think about the threat landscape, we are in a war every single day in what we do, right? So from the actual boots-on-the-ground perspective, Jonathan, we'll keep the theme going. Today we're talking about AI as both a tool and a threat.

 

We're going to talk about the shift towards zero trust and what that really means; that term gets thrown around a lot in the industry today. We'll also cover the gap between AI adoption and strategic implementation, and as we talk about that, don't let me forget the rapid pace of change in AI. It's almost like you get caught up to AI, or kind of get caught up, and as soon as you do, something new comes out, right?


And so the question becomes: what's the foundational, fundamental piece of securing data, so that as you build your AI framework you don't have to keep going back to the beginning, but are working towards a continuous goal? So let's start with AI as both a tool and a threat. How are attackers using AI to scale phishing?

 

I know they're using it for phone scams today, and people are also using it in reverse against the scam call centers that are out there, effectively taking their time hostage. Impersonation, malware. I've gotten a number of emails recently that are wildly better than they were ten years ago, even a year ago. They're well-written and well-thought-out, and they pull in information that's publicly available.

 

Tell me, what should we be worried about? And give me some light at the end of the tunnel, that it's maybe not as bad as some people make it out to be.

 

 

Sure. I mean, to start, the funny analogy I keep coming back to is Spy vs. Spy. You've got the black spy and the white spy from Mad Magazine thwarting one another with bombs and all sorts of threats, and at the end of the day it's always more or less a level playing field.

 

I think about that when I think about offensive and defensive security. From an offensive perspective, threat actors are feeling around in the dark, trying to find ways to attack systems faster and compromise users faster. And the way they're doing it is by taking something like a well-enough-crafted email and running it through a model that adds a very believable sentiment for the recipient, and, by the way, translates it into 30 different languages, so they can take that body and ship it to potential victims all around the world at scale, without having to worry about detection because it wasn't drafted in the context of the language, or wasn't drafted with the correct sense of urgency. So the models out there that are available to everyday consumers are really opening doors for believability and authenticity.

Additionally, on the attack side, there's the dynamic nature in which malicious payloads are presented. Things that are meant to be clicked on or downloaded can now be rendered in a way that feels extremely believable based on which service providers you're using. If you're a Microsoft customer, it can look unequivocally like Microsoft. Login prompts will look exactly the same. They'll have your watermark. They'll have things that feel exactly real. And conventional wisdom tells us to fall back on our training.

 

But we're really struggling to figure out the best way to explain to users that this is just going to look exactly the same, right? You should heighten your spidey senses. You should be more skeptical. On the flip side, in the defense realm, we're using those same tools and technologies to measure sentiment and apply scenario-based logic: these ten different things happened on my network, I'm gathering that data in a SIEM, and I'm going to ship it to a model that analyzes what's happening in the context of multiple technologies and makes an assessment. Well, eight of the ten things on this list are believable, but these other two give me pause and make me second-guess everything else. So, in the grand scheme of things, that's how the tools are being used on both the attack and defense sides. We'll talk a little bit, I imagine, about the general approach from a governance perspective and a user perspective.
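A minimal sketch, in Python, of the scenario-based triage described above. The event sources, descriptions, and plausibility scores are hypothetical stand-ins for real SIEM output and model assessments; the point is only the "eight of ten look believable, two give me pause" logic, not any actual ECI pipeline.

```python
# Hypothetical triage sketch: correlate scored observations from a SIEM and
# escalate when a small minority of signals breaks the expected pattern.
from dataclasses import dataclass

@dataclass
class Observation:
    source: str          # e.g. "email-gateway", "edr", "identity-provider"
    description: str     # human-readable summary of what happened
    plausibility: float  # 0.0 (highly suspicious) .. 1.0 (clearly benign)

def triage(observations: list[Observation], threshold: float = 0.5) -> dict:
    """Split observations into believable vs. suspicious, and escalate if
    the suspicious minority casts doubt on the rest of the sequence."""
    suspicious = [o for o in observations if o.plausibility < threshold]
    believable = [o for o in observations if o.plausibility >= threshold]
    # Even two low-plausibility signals out of ten are enough to
    # second-guess everything else, so escalation keys off the outliers.
    return {
        "total": len(observations),
        "believable": len(believable),
        "escalate": bool(suspicious),
        "reasons": [o.description for o in suspicious],
    }

if __name__ == "__main__":
    events = [
        Observation("identity-provider", "Login from the user's usual city", 0.9),
        Observation("email-gateway", "New inbox rule forwarding mail externally", 0.2),
        Observation("edr", "PowerShell spawned by an Office process", 0.3),
    ]
    print(triage(events))  # escalates on the two suspicious outliers
```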

 

 

Absolutely. In fact, if you want to drop into that from the general use perspective, and I have insight into what you're doing, but how are you utilizing this for good? Everything built for good can be used for bad, and maybe everything built for bad can be used for good as well. So think about where you're utilizing it today: why Ella exists inside our environments today, and how we're utilizing it with the guardrails and guidelines built into it, just at a high level. And if you don't mind, drop into the last part, the right governance around it, and how we're actually putting that to work from an ECI perspective in the toolsets we use today. Specifically with Ella, why was it created, compared to, say, Copilot or ChatGPT, which are somewhat out in the wild without necessarily the same controls you can put around them in the business?

 

 

Yeah, I think, zooming out to 30,000 feet, the AI problem is just that we're dealing with a new frontier, a new technology that is incredibly powerful. And if we're not careful about how we're interacting with it, the obvious is true: it's going to increase our attack surface, and it's potentially a place for data exfiltration. So the same rules apply to what we put into it. Data in: who has that data? Who has access to that data? Who has access to the tool that has access to the data? Let's be mindful of taking a role-based approach to providing that information to our users, and access to the tool thereafter. If I'm training something on highly sensitive data, I don't want to expose that to my level-one employee.

I think the way we're approaching the problem with tools like Ella, and to an extent Copilot, is all about making sure the tool's reachability to that data is restricted and focused. If I'm a threat actor, what I'm doing now is going out to LinkedIn and doing some open-source intelligence to find out who the CEO of ECI is. Suffice to say that individual is going to have keys to the kingdom, from HR data and payroll data to financial forecasting. I want to attack that individual, and I want to immediately drop into Copilot and say: give me Jonathan Brucato's salary, give me where he lives, give me his social security number. That used to take a threat actor weeks, maybe months, sitting on a network to find the right payload. Now it takes seconds. So who gets access to the tools, where you can access the tools, and what goes into the tools is all being taken into consideration.

The way we're doing it with Ella, we're providing a safe and functional method to use your own data, to curate your experience, and to take advantage of the publicly available models out there. Again, I think the big thing here is all about controlling the experience. If you give users a really, really good experience on something, it allows you some governance and control around the things you don't want them using. If I make it really good for them, I can dissuade them from doing it the wrong way.
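A rough sketch of that role-based gating, assuming hypothetical role names, sensitivity labels, and a toy document store. The design point is that the filter runs before anything reaches the model's context window, so a compromised low-privilege account can't simply prompt its way to payroll data.

```python
# Hypothetical role-to-label clearance map; the roles and labels are
# illustrative, not any real tenant's schema.
ROLE_CLEARANCE = {
    "level1-analyst": {"public", "internal"},
    "hr-manager":     {"public", "internal", "hr-restricted"},
    "executive":      {"public", "internal", "hr-restricted", "financial"},
}

def fetch_for_prompt(user_role: str, documents: list[dict]) -> list[dict]:
    """Restrict the retrieval set to what this role may see, before any
    document text is handed to the model."""
    allowed = ROLE_CLEARANCE.get(user_role, {"public"})  # default to least access
    return [d for d in documents if d["label"] in allowed]

corpus = [
    {"title": "Company handbook",  "label": "internal"},
    {"title": "Payroll export Q3", "label": "hr-restricted"},
    {"title": "Revenue forecast",  "label": "financial"},
]

# A compromised level-one account asking the assistant for salaries gets
# nothing sensitive back, because the gate sits in front of retrieval.
print([d["title"] for d in fetch_for_prompt("level1-analyst", corpus)])
# ['Company handbook']
```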

 

Interesting. OK, so I heard a least-privilege-access kind of approach, applied to how you give out access controls within an AI model: restrict, and the flip side of that is open up the data only to what you're supposed to have access to. Secure the data. So the user and the data both become really important. And the last piece is the right guidelines and controls that you can measure and report against. So: one, I believe I've actually secured the things I'm supposed to. Two, I've given people access to what they're supposed to have access to. And three is check: inspect what you expect from the system. Or maybe it's trust but verify, since I'm in the world of cybersecurity. So great pieces there.

Zero trust, the famous term that gets thrown around. This shift towards zero trust, is that real? I've been hearing zero trust for almost a decade, and it keeps coming out in different forms; it seems to evolve. So where are we at today? How does it play in the environments we're in, specifically with AI?

 

 

Yeah, I mean, zero trust at face value is: don't trust at all. Assume compromise, assume that you will be compromised. What happens in the event of that compromise? Who has access to what? Can that threat actor move laterally? Can that threat actor exfiltrate information?

Twenty years ago, you would see shops all across the world providing things like VPN access to almost everybody, or providing flat-level permissions. The concept of zero trust is really not a tool, and not necessarily even a means of access. It's simply: I'm not going to trust you at all unless we go through this list of parameters. Who are you? Where are you? Why are you doing this? What place do you have in the organization? Let's go through these logic gates first, before I give you access to the thing you need, and just that thing. And I think a lot of people get confused by the marketing of different tools and technologies; they think zero trust is a piece of kit in their tech stack. That's maybe something the security and technology communities could be doing better at in their branding. But yeah, it's a philosophy, not a thing.
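A toy sketch of that "logic gates, not a product" framing. The specific checks here are illustrative; the shape is what matters: default deny, every gate must pass, and a pass grants access to exactly the one resource requested.

```python
# Zero trust as a sequence of contextual gates rather than a piece of kit.
def evaluate_request(req: dict) -> bool:
    gates = [
        req["identity_verified"],                     # who are you?
        req["device_compliant"],                      # what are you on?
        req["location"] in req["allowed_locations"],  # where are you?
        req["resource"] in req["role_entitlements"],  # should you have this?
    ]
    # Default deny: one failed gate means no access, and a pass only ever
    # covers the single resource named in the request.
    return all(gates)

request = {
    "identity_verified": True,
    "device_compliant": True,
    "location": "office-nyc",
    "allowed_locations": {"office-nyc", "vpn-us"},
    "resource": "payroll-app",
    "role_entitlements": {"email", "crm"},  # payroll-app is not entitled
}
print(evaluate_request(request))  # False: denied, and nothing else is implied
```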

 

 

I think it's a fair statement. I think the other point is that no matter who's in the environment, what they're doing, why they're doing it, and at what time becomes really important, because maybe they do have access to something for work-related reasons. When they're doing it and why they're doing it is the zero trust component; it's really understanding, again, inspect what you expect, trust but verify, inside the scenarios that are there. So some really good points.

All right, the last question here, which is really interesting: the gap between AI adoption and strategic implementation. The numbers suggest that less than 15% of organizations today have a comprehensive AI plan. Is that because a comprehensive plan is too hard to do, or because it seems overwhelming to start? Because once you start, maybe you've opened up Pandora's box.

 

 

This is a tough topic to tackle. I think there's a little bit of AI exhaustion out there as well, right? There are AI interfaces in every piece of technology we use; everybody's integrating an AI component. It's something that quickly found its way into everything we do, and people are trying to retrofit policies, procedures, and SOPs around how we interact with these technologies. So there's some exhaustion: it's in my face all the time, I just want to have a policy. Or some people might say, I can't even get my claws around this, I don't intend to build a policy anytime soon, sort of taking a defeatist approach.

As I observe our customers, people are taking the approach of: I need to enable my users to be successful and competitive. There's a fear-of-missing-out element to it as well. Such-and-such a firm is really enabling their users; how can I get there yesterday? So the FOMO is starting to take over a little bit. But I think, again, it comes back to building a program that works for your users and dissuades them from using things incorrectly. It's persuasion, ultimately, at the end of the day.

And the last thing I'll say on it: from the get-go, it's always felt like the people who are doing it really well, who are really integrating it into their business practices, are putting their creative hats on. Where can I activate my business to be more efficient? What problems do I really need to solve with these tools? How can I save clicks here, or save a day's worth of research there, and turn it into practical storyboards and use cases? Thinking practically and thinking creatively really opens the door to an easier, more informed AI policy.

 

 

Yeah, I think that's really important. Because if you don't, and you just lock down your devices, your users are going to go off and use their own devices. Nothing prevents them from entering information and data into a personal device other than policy, right? If we believe, the way the US Post Office does, that people won't take mail out of a mailbox that doesn't have a lock on it, then maybe we're secure just by telling people not to do it somewhere else. But we know they're utilizing it. I heard somebody earlier this morning talking about how many people have moved from browser-based searches to just going into ChatGPT and asking questions, and how much more robust the answers are.

I think there are also some things to look at in what ECI is doing internally. We've launched AI, and we put early adoption into place with Microsoft and the relationship there to deploy it. We put the security controls in, the guardrails around what we're doing, and then challenged our team members to come up with unique use cases that eliminate busy work versus thinking work. So LSS, sitting within the service tools we use today, goes through and says the most likely solutions to this problem are these top three, based on what we've been able to feed into the models, to try to expedite resolution to an issue or a problem.
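A rough, hypothetical illustration of that top-three suggestion idea: ranking knowledge-base articles against a ticket description. Plain token overlap keeps the sketch self-contained; a production system would use trained models or embeddings, and none of this reflects how LSS is actually built.

```python
# Toy ranker: score each KB article by word overlap with the ticket text
# and surface the best candidate solutions.
def tokenize(text: str) -> set[str]:
    return {w.strip(".,!?").lower() for w in text.split()}

def top_solutions(ticket: str, kb_articles: list[dict], n: int = 3) -> list[str]:
    """Return the titles of the n articles most similar to the ticket."""
    ticket_words = tokenize(ticket)
    scored = [
        (len(ticket_words & tokenize(a["summary"])), a["title"])
        for a in kb_articles
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [title for score, title in scored[:n] if score > 0]

kb = [
    {"title": "Reset a locked Active Directory account",
     "summary": "user account locked out reset password"},
    {"title": "Fix Outlook not syncing",
     "summary": "outlook mailbox not syncing new mail"},
    {"title": "VPN client fails to connect",
     "summary": "vpn client connection fails timeout"},
]
print(top_solutions("User is locked out and cannot reset their password", kb))
# ['Reset a locked Active Directory account']
```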

 

It can also point to a KB article that gives more data and information, or to a subject matter expert in that realm. Then there's the tooling on the back end, a combination of tools, APIs, and agentic AI, which is early for us, still in the proof-of-concept arena, for how we can expedite requests faster and make it seamless for users to report issues or ask for help more quickly, with a better-centered target for where we go. So I would ask the listeners: if you're curious about this, no sales, come in and have the conversation about what we're doing and what we see other people doing. It is an exciting market. And I know for me personally, I'm gaining time back, time that used to be weekends, nights, and other areas, where I can sit down and just do iterations of things with my knowledge and my information, and have it curated in a format that allows me to go present and put the data out there, instead of spending time curating data, putting it into a presentation, and then taking it back out. So I think we have a lot of different use cases wrapped around that. That's it for this session on episode two.

 

Jonathan, again, thank you for being part of this and for the thoughts and insight. To the listeners: one, thank you for being here, and two, join us in episode three, where we talk about cyber hygiene and what I'll call the modern service provider advantage, compared to the traditional MSPs and MSSPs in the world we live in today. So thank you, and we'll see you soon.

 

 

Thanks for having me.