The ActivateCX Podcast
Join Frank Rogers on The ActivateCX™ Podcast, your resource for demystifying, clarifying, and providing guidance around AI, CXM, and the modern Cloud Contact Center.
In this podcast series, Frank interviews thought leaders, unpacks critical AI & CX technology, and addresses the leading experience topics of the day.
#cx #customerexperience #ai #ex #cxm #contactcenter #salesstrategy
How to protect your AI & CX from Cyber Attacks
Your AI contact center solutions are under attack. Hear Frank Rogers and Rob Fitzgerald unpack the secrets of cybersecurity, the flip side of the CX and AI coin. Our world is changing fast; hear from these two thought leaders to get a bead on where your cybersecurity roadmap should be heading. Don't forget to subscribe for more updates on how AI, CXM, and contact center technologies can help you get and keep customers!
Hey, Rob. Great to have you on the show.

Thanks, Frank. I appreciate being here. This is going to be a lot of fun today.

No doubt. So we both know that contact center technologies are at the core of this whole revolution in business customer experience, and one of the things that sometimes isn't discussed in as much detail as it should be is the cybersecurity considerations that go along with this technology footprint. I would love to unpack the relationship between the two, so let's go get it. Right off the bat, Rob, what are some of the key considerations for implementing endpoint security solutions when we're really trying to protect against threats and data breaches in this cloud contact center environment?

Well, I think it's interesting that you're even asking about endpoint security, because the reality is it's one of the most critical but overlooked pieces when we're talking about cloud contact centers. There are really two ways for malicious actors, and even nation-state groups, to go after call center environments. The first, and I hope we'll get into this, is to attack the application itself and the data stores of the cloud call center platform. The second is at the end-user level, the employee or contractor base: attack the computers or devices they're using, whether that's laptops, tablets, or phones. And no one's talking about this, in part because there's an assumption that the endpoints, the workstations and desktops, are already secure. The reality is that most of them are not, especially when organizations are looking for alternative endpoint solutions as hardware and software prices get more and more expensive. As call centers grow and add people, organizations realize there's a real cost to hiring individuals, whether employees or contractors, but it is absolutely critical to implement MDR or XDR on the endpoint and to be actively watching. Many call center employees are transacting on the computer with the customer in real time while they're using the call center platform, which means it would be easy to send a malicious PDF, an email with a malicious link, or anything else that could be clicked, and it might go unnoticed because the CSR is focused on supporting the customer or the buyer in the way they need. If you think about it, it's an incredibly wide-open space for malicious actors to target, because the employee's focus is to provide high-value customer service quickly, not to be thinking about cybersecurity, malicious hackers, ransomware, or anything else. So I think it's really critical to ask: are we starting with a clean, secure slate, a tool, a platform, a workstation our employees can use to connect to our critical data in the cloud without revealing secrets?
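As a rough illustration of the kind of endpoint-adjacent screening Rob is describing (the conversation stays high level and names no specific product), the sketch below shows one simple way an agent desktop or mail gateway might flag risky attachments and links in an inbound message before a busy CSR ever sees them. The file-extension lists, the Message structure, and the URL heuristics are assumptions for the example; a real deployment would lean on an EDR/MDR product and URL reputation feeds rather than hand-rolled rules.

```python
import re
from dataclasses import dataclass, field

# Extensions and URL patterns treated as risky here are illustrative only.
RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".iso", ".html", ".htm"}
MACRO_EXTENSIONS = {".docm", ".xlsm", ".pptm"}
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

@dataclass
class Message:
    sender: str
    body: str
    attachments: list = field(default_factory=list)  # attachment file names only

def screen_message(msg: Message) -> list:
    """Return human-readable warnings for the agent or the security team."""
    warnings = []
    for name in msg.attachments:
        lower = name.lower()
        if any(lower.endswith(ext) for ext in RISKY_EXTENSIONS):
            warnings.append(f"Blocked attachment type: {name}")
        elif any(lower.endswith(ext) for ext in MACRO_EXTENSIONS):
            warnings.append(f"Macro-enabled document requires review: {name}")
    for url in URL_PATTERN.findall(msg.body):
        # Flag raw IP addresses and bare shorteners; crude, but it shows the idea.
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url) or "bit.ly" in url:
            warnings.append(f"Suspicious link: {url}")
    return warnings

if __name__ == "__main__":
    demo = Message(
        sender="customer@example.com",
        body="Invoice attached, or download at http://192.0.2.10/pay",
        attachments=["invoice.pdf.exe"],
    )
    for warning in screen_message(demo):
        print(warning)
```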
Yeah, that's interesting, because at the same time it dovetails into this: it's a revolution, not an evolution. I think it's truly a revolution, generative AI and how it's being used in the contact center. And ultimately it's being used in a lot of different ways, right? There's the agent-assist, real-time support role you just described, on a call, in an email, in a chat with a customer or a prospect, and then there's the whole generative AI component of voice bots and chatbots. So where do you see AI-driven cyber threats rolling into this contact center environment, and how do we address that?

Well, Frank, there are really two areas I'm most concerned about when I think about call center AI usage of any kind, and I love that you brought up chatbots, because right now that's exploding and organizations are leaning heavily into it. The first thing I would suggest to any company is to get an AI policy in place. Better yet, before you even create a policy around AI usage, make sure you have business objectives aligned to what the AI you're implementing, or have implemented, is supposed to do and achieve. What's the ROI, so to speak? It's easy to say, "Oh, we have this AI, that AI, we're using AI over here," but if there's no way to define what value the AI is supposed to bring, you're never going to know if you're successful. From there, you can write policies around it. There are AI tools like ChatGPT from OpenAI that can be very helpful for some, but there are a number of other tools in the CSR world, depending on what side you sit on: ZoomInfo with its AI, Salesforce with its AI. Whether it's outreach or inbound, determining what you expect from the tool is important, and putting guardrails around what employees can and can't do with their AI tools becomes absolutely critical, because the last thing you want is to accidentally or unintentionally share personal or financial information out to the internet because you're using some AI tool that doesn't have boundaries on it. Part of creating the policy is thinking through how we are going to protect and manage the AI tool we bring in house, whatever tool it is. It could be Copilot, it doesn't matter. What matters is that there's an idea of how we secure the AI tool and, from a compliance standpoint, how we create or generate evidence showing we are properly securing it to do what it's supposed to do and nothing more. When it comes to generative AI, there are a number of opportunities for it to go off the rails. The first is hallucinations, where it convolutes information and provides poor results back. In a chat it might say, "Congratulations, I was able to discount your flight to zero dollars," which, as a buyer, I would appreciate. But there's also the other side: depending on how the generative AI is implemented, managed, and monitored at the IT operations level, you can build information off of the information you're asking for. Tell me my history. What's the number one product I've bought in the past X period of time? What was the price I paid? How do I get that price today versus five years ago when I first bought that product? Again, I'm using a simple example.
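To make the "guardrails on AI tools" point concrete, here is a minimal, hedged sketch of one common guardrail pattern: scrubbing obvious personal and financial identifiers from agent text before it is sent to any external generative AI service. The regexes and the ask_ai_tool stub are assumptions for illustration; a production setup would use a vetted DLP or PII-detection service plus the AI vendor's own data-handling controls.

```python
import re

# Illustrative patterns only; real PII detection should use a proper DLP library or service.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with labeled placeholders before the text leaves the organization."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def ask_ai_tool(prompt: str) -> str:
    # Hypothetical stub standing in for whatever AI assistant the policy covers.
    safe_prompt = redact(prompt)
    print("Prompt actually sent:", safe_prompt)
    return "summary goes here"

if __name__ == "__main__":
    ask_ai_tool(
        "Summarize this call: John Doe (john.doe@example.com, 555-867-5309) "
        "paid with card 4111 1111 1111 1111 and gave SSN 123-45-6789."
    )
```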
So with AI, we need to be thinking about having a policy that clearly outlines what employees can do and aligns to the business objectives. But then, secondly, what are we doing to show that we're compliant and proactively monitoring for misuse of AI, intentional or unintentional? I suspect that over the next three to five years we're going to see a number of organizations targeted, either by malicious insiders or by malicious external actors, to capitalize on generative AI that isn't reined in the way the buyer thought it was, or the way the client thinks it is.

My sense, Rob, is that we could probably do a full podcast on this subject alone. My thought is that, ultimately, some of these things you're talking about in terms of policy should be part of the entire process of paring down and selecting the right AI technology for you. Even in the selection process, it's not just about finding something with the right experience for your customer and the right connective capabilities into your structured and unstructured data; it's also about the policy you overlay to ensure it's operating correctly. Let's move on to another subject. The contact center world was impacted heavily by the dreaded COVID pandemic. We moved into this decentralized, work-from-home environment, and now we have these remote workers. How do we look at things like access control and identity management? What role do they play in ensuring secure access, fundamentally, not only to the systems but to the data that's in these technologies?

I like that you distinguish between the systems and the data; that's really important. For years now we have talked about multi-factor authentication, and even today with clients we get pushback: "Oh, my employees aren't going to want to install an application on their phone, and we don't pay for their phones." Even in 2025 we're still getting pushback from organizations saying, quite frankly, we're not interested, or we don't want to use MFA. MFA is really an interesting technology, and I'm going to build on this for a second. We use MFA in the security realm and the compliance realm to verify that when an action happens on an endpoint or on a server, it is in fact not someone who has stolen a username and password, but the actual person creating that action or activity. For example, deleting an order or deleting a reservation: is it really Rob Fitzgerald? When it's just a username and password, it's easy for me to say, "I didn't do that. A hacker must have gotten into my computer, stolen my username off the dark web, and figured out my password." With MFA, that's another step. It's not a silver bullet, but it's another step to say, yes, it is Rob Fitzgerald, because it was confirmed on Rob Fitzgerald's mobile phone or on a token that Rob Fitzgerald plugged into the machine. The problem is that a lot of companies still aren't there yet, and we need to keep beating the drum about the value of MFA.
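Since MFA comes up so often, here is a small, hedged sketch of the time-based one-time password (TOTP) flow that most authenticator apps implement, using the widely available pyotp library. The user names, issuer name, and secret handling are simplified assumptions for the example; in practice the second factor lives in the identity provider rather than in hand-rolled application code.

```python
import pyotp  # pip install pyotp

def enroll_user(username: str) -> str:
    """Generate a per-user secret and a provisioning URI for an authenticator app."""
    secret = pyotp.random_base32()
    uri = pyotp.TOTP(secret).provisioning_uri(name=username, issuer_name="ExampleCallCenter")
    print("Scan this URI with an authenticator app:", uri)
    return secret  # store server-side, encrypted, never on the endpoint

def verify_login(secret: str, submitted_code: str) -> bool:
    """Second-factor check performed after the password has already been validated."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates one 30-second step of clock drift between devices.
    return totp.verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = enroll_user("rob.fitzgerald")
    current_code = pyotp.TOTP(secret).now()   # simulates the code shown in the app
    print("MFA passed:", verify_login(secret, current_code))  # True
    print("MFA passed:", verify_login(secret, "000000"))      # False
```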
Which is surprising, because there are certain protocols anyone who is employed agrees to within their workplace, right? Many workers agree to wear shirts and pants, or dresses or skirts, and shoes to work. We agree that's acceptable. We agree you have to be on time. We agree you'll bring your own coffee cup or use ours. So I'm not really sure why we're still having this MFA argument. It should be: if you want to work for Company X, you are installing this MFA application on your phone. If not, that's fine, but you can't work here. We need MFA just to buy a coffee at Starbucks, Dunkin' Donuts, or Tim Hortons. Why is it that we're having this debate?

I don't know. It seems like it would fundamentally be table stakes. Speaking of other table-stakes elements in the industry, encryption seems to be an easy checkbox, but I don't believe it's quite so linear when you look at the contact center. Where do you see encryption as something that can be nuanced as well?

Well, I think there are really four great technologies that can and should be thought about in a call center environment. I stress MFA first because there are still organizations struggling with it. Second, with remote workers, encrypting their laptops or desktops is absolutely step two. Step three, along those lines, is removing local admin access. One of the things we want to do, and most organizations skip over this primarily for convenience, even though there are a number of tools that can now help, is stop employees from downloading and installing software applications that may or may not be malicious. Quite frankly, if there's a piece of software on a company-owned computer that the company does not own, the company is at risk for a potential lawsuit of some kind. And when malicious actors send emails or malicious links, those clicks download and install software; without local admin access, we remove that attack vector. And again, to your point, these are remote workers. It's not as if they can call the help desk and someone walks over to their computer to fix it, not that that would be an appropriate step anyway. Additionally, we are seeing a number of organizations start utilizing SASE technology to require dedicated, defined connections to the applications the employee needs, which also gives you visibility. I'm also a big fan of PAM technologies, privileged access management, to segregate what Frank and his call center team can see because they're in the U.S. from what Rob and his call center team can see because they're in China, the EU, or some other geographic region. Quite frankly, when security issues arise, that micro-segmentation helps minimize the total impact on the organization and helps you more quickly identify where patient zero is, so you can go after it, fix it, and clean up the mess.
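The micro-segmentation idea Rob describes, scoping what a U.S.-based team versus an EU-based team can see, ultimately reduces to an access-control decision that can be expressed very simply. The sketch below is an assumed, minimal policy check, not any particular PAM product's API; real deployments would enforce this in the identity provider, the PAM tool, or the CX platform's own role model.

```python
from dataclasses import dataclass

# Hypothetical policy: each team may only read customer records in its own regions.
TEAM_ALLOWED_REGIONS = {
    "us-support": {"US", "CA"},
    "eu-support": {"EU"},
    "apac-support": {"APAC"},
}

@dataclass(frozen=True)
class Agent:
    name: str
    team: str

@dataclass(frozen=True)
class CustomerRecord:
    customer_id: str
    region: str

def can_read(agent: Agent, record: CustomerRecord) -> bool:
    """Deny by default; allow only if the record's region is within the team's scope."""
    allowed = TEAM_ALLOWED_REGIONS.get(agent.team, set())
    return record.region in allowed

if __name__ == "__main__":
    frank = Agent("frank", "us-support")
    rob = Agent("rob", "eu-support")
    record = CustomerRecord("C-1001", "US")
    print(can_read(frank, record))  # True
    print(can_read(rob, record))    # False: a compromised EU account cannot reach US records
```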
Yeah, it seems to me that's a form of compartmentalization strategy, where you can narrow down the threat vector and also contain it inside that construct. That makes perfect sense. Let me ask you a couple of questions about overall threat detection. One of the things I've learned over time in talking about this subject is that you really have two sides to the coin. You have threat detection, but you also have response technologies, and they're not just technologies, they're teams. If you're going to identify a threat, you need to be able to immediately address it and remediate or mitigate it in some way, shape, or form. What do you think is the best way to approach that in the cloud-based CX world?

Well, this is interesting. When you're evaluating vendors in the cloud CX space, you want to understand, one, how secure is their platform? How are they handling security monitoring and backups? What are they doing to protect the platform itself? And what's the response capability of the vendor when there's a potential issue? It really is about the potential issue, not just the actual one; when the actual issue comes, you want to understand that too. When it comes to threat detection as a customer of call center technology, you want to be thinking about what the risks are. I see endpoint attack risks, where malicious actors are coming in, but depending on the vendor and the vendor's technology, there may also be API risks. We are seeing a significant increase in API-based attacks, so the question becomes: how are the vendor, and you as an organization, prepared to address those? Historically we have looked for hackers and malicious activity from a human standpoint, think people on keyboards. Then it moved into some automation. Now we're looking at AI-driven attacks, and the interesting thing about AI-based attacks is the rapid speed at which they can operate, because they're completely automated, taking feedback and adjusting to it. With API attacks, most organizations do not have WAFs in place. They don't have CDNs, content delivery networks, in place so that if one access point goes down or comes under attack, another is available. A lot of times that activity goes unnoticed until the attackers are in the system, and then we ask, okay, how do we manage from there? So there are two parts: what are we doing to protect, and when are we being notified? How and when are we, as a purchaser and user of a vendor's service, because most of this is SaaS, notified? And what can we be doing to ensure our own data is protected? Part of that circles back to encryption, which you mentioned. Are we holding the data, or is the call center holding our data? If the call center is holding our data, what are their policies? Is the data encrypted at rest? Do they have PAM in place? Are they tracking who has keys or access to our data stores, not that they could necessarily see our data, because it's encrypted? And are they looking for failed login attempts and other signals?
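As a concrete, hedged illustration of the "are we watching for failed logins and API abuse" question, the sketch below counts authentication failures per source over a sliding window and raises an alert when a threshold is crossed. The threshold, window length, and alert mechanism are assumptions; in practice this signal would come from the vendor's platform, a WAF, or a SIEM rather than custom code.

```python
import time
from collections import defaultdict, deque

# Illustrative thresholds; tune to your own traffic and route alerts into a SIEM.
WINDOW_SECONDS = 300
MAX_FAILURES = 5

_failures = defaultdict(deque)  # source (IP or API key) -> timestamps of failed attempts

def record_failed_login(source: str, now: float | None = None) -> bool:
    """Record a failed auth attempt; return True if the source should be alerted on or blocked."""
    now = time.time() if now is None else now
    window = _failures[source]
    window.append(now)
    # Drop events that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_FAILURES:
        print(f"ALERT: {len(window)} failed logins from {source} in {WINDOW_SECONDS}s")
        return True
    return False

if __name__ == "__main__":
    t0 = 1_000_000.0
    for i in range(6):
        record_failed_login("203.0.113.7", now=t0 + i * 10)
```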
That's great. There are also some organizations that say, we'll hold on to the data, or we want to, or we have to, hold on to the data; we'll still use the call center technology, but the data comes back to us. In that case, do we have in place the same requirements we're pushing our vendors and partners to provide to us?

So I'm curious: when you mention the term endpoint security, do you include within that term touch points, meaning web touch points like a chatbot, a live chat, or potentially something in the inbound voice channel? Are you considering all of those different threat vectors within the context of the endpoint?

Absolutely. Quite frankly, I think we're going to start seeing more touch points within the endpoint. The endpoint itself is a device: a phone, a tablet, a box of some kind. The bigger picture is that chatbots are huge right now and getting bigger, and there are a number of malicious ransomware groups working to figure out how best to exploit them, figuring out whether there are ways to use SQL injection, for example, within the chatbot to target an environment rich with client data, financial data, insider information, intellectual property, whatever else. Inbound voice is also interesting, depending on what type of inbound voice there is. It's funny, it still blows me away how many organizations rely on faxing technology, and that's another scenario: are there new code-based ways to manipulate inbound voice or inbound faxing technologies to open up a terminal for malicious actors? We're also going to see video chat within the call center environment start to be exploited. We've already seen or heard of a number of cases of AI-generated individuals manipulating unsuspecting recipients into releasing financial information, money, or proprietary information.
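Rob mentions SQL injection through chatbots; the standard mitigation is the same as anywhere else, never interpolate user text into a query, always bind it as a parameter. Below is a minimal sketch using Python's built-in sqlite3 module. The orders table and the chatbot wiring are assumptions for the example, and a real chatbot backend would add input validation and least-privilege database credentials on top of parameter binding.

```python
import sqlite3

def lookup_order_unsafe(conn, user_text: str):
    # DON'T: user text concatenated into SQL; a crafted quote changes the query's meaning.
    return conn.execute(
        f"SELECT id, status FROM orders WHERE customer_email = '{user_text}'"
    ).fetchall()

def lookup_order_safe(conn, user_text: str):
    # DO: parameter binding; the driver treats the input strictly as a value.
    return conn.execute(
        "SELECT id, status FROM orders WHERE customer_email = ?", (user_text,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, customer_email TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, 'shipped', 'jane@example.com')")

    malicious = "nobody@example.com' OR '1'='1"
    print("Unsafe:", lookup_order_unsafe(conn, malicious))  # returns every order
    print("Safe:  ", lookup_order_safe(conn, malicious))    # returns nothing
```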
So I one hundred percent think one of the challenges call centers are going to have is that as the technology evolves and improves, the call center employee, the CSR, is going to be in a much more immersive environment: chat, text, voice, video, all at the same time, potentially managing two or even three, let's say multiple, requests at any given point in time. And by the way, chat and chatbots are two different technologies. That's another area where we're going to see employees learning a new way to provide customer experience based on the technologies that are available, and all of those need to be evaluated holistically. At the box level, what are we doing to improve that? Then individually: okay, the box is secure, now what do we have to do here? How do we handle chatbots differently than we handle chat? There's going to be crossover; malicious links can come into either of those, as well as into emails. But there are different components that need to be evaluated, and I don't think we're very far away. Within the next 18 months, it wouldn't surprise me to see CSRs becoming much more strategic, empowered problem solvers, more focused on providing intentional experiences for customers than they've ever been before. And quite frankly, I think the individuals who adapt, and the companies that adopt this, will have a much stronger, more robust client base and relationship with their clients than they have had historically.

Yeah, that's interesting. I agree with that a hundred percent. That's the whole concept of the high-value agent: they've been escalated to the point where they're handling the most complex, most challenging engagements with customers, whether on the sales side or on the customer service and advocacy side. But my big takeaway from that last part, before we move on to the next question, is that there are three threat vectors I'm hearing. One, endpoint security is at the device level. Then, on that device, there's some touch point; it could be on a desktop or in a web app on a phone. And you engage at that touch point through a channel. Is it SMS? Email? A chatbot that escalates into a live chat? Those are the layers we need to speak to, because at each of those points there's an opportunity for a threat. That's my big takeaway. But if we think about creating an incident response environment, what do we do ahead of time? There's a readiness component; I always like to say that you win the war before you fight the war. So what are some of the best practices you would look at, from a business continuity perspective, for incident response plans?

So I think there are a couple of things. The first is: make sure you have immutable backups as an organization. Don't just trust the CX vendor to back everything up. If they're doing it, audit or test it, or back it up separately, which is what I would really recommend, so that you can restore it and move it back as quickly as possible. That to me is the first thing, because here's what we consistently see: organizations whose backups get hit with ransomware along with everything else, and it takes 30, 90, or 180 days to get that organization back up and running. The hackers don't care whether it's on-prem or in the cloud. They don't care whether it's a call center versus a file server. They just want to cripple your business so the business will pay their ransom, and then they move on. Beyond that, there needs to be real interactive education and training with CSRs on what to look for and what to be aware of, so that if they see something come up on a screen that doesn't make sense, they know how to quickly alert someone, and can do so in a way that isn't punitive. In fact, let's set up a reward system: the first person to report it gets a gift card, or something like a true reward, to say, hey, it's worth it to us to encourage reporting. Remember, a lot of these individuals have performance metrics based on how they support the customer and how many customers they can support within a given time period.
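Picking up the immutable-backup point Rob makes a moment earlier: one way to get backups that ransomware, or a compromised admin account, cannot alter is write-once storage. The sketch below assumes an AWS environment and uses S3 Object Lock in compliance mode; the bucket name, region, retention period, and file path are placeholders, and other clouds and backup products offer equivalent WORM features.

```python
import boto3  # pip install boto3; assumes AWS credentials are already configured

BUCKET = "example-cx-backups"   # placeholder bucket name
RETENTION_DAYS = 90             # pick a period that matches your recovery policy

s3 = boto3.client("s3", region_name="us-east-1")

# Object Lock must be enabled at bucket creation time; it cannot be added later.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Compliance mode: nobody, including root/admin, can shorten or remove the retention.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": RETENTION_DAYS}},
    },
)

# Every backup written here is now immutable for RETENTION_DAYS, so an attacker
# who steals admin credentials still cannot encrypt or delete these copies.
with open("contact_center_export.tar.gz", "rb") as f:  # placeholder export file
    s3.put_object(Bucket=BUCKET, Key="daily/contact_center_export.tar.gz", Body=f)
```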
We want to balance this, because historically it's been a race to the second. In some ways, we used to say that sales pricing creates a race to the bottom: who can charge the least? In a way, we've done something similar in this industry with how quickly we can get an issue resolved. Obviously time matters, and the longer a customer or prospect is waiting on the phone, the metrics say that's not good for the experience and not good for purchasing. That being said, what we do want is that when someone becomes aware there might be a problem, a malicious link, something else going on, they can respond without being penalized within their metrics, and quite frankly be rewarded. Because if they're rewarded and you can stop it at one, it means you can stop an entire class of attacks across the entire organization.

That makes perfect sense. So when we think about that relationship with the CSR or the sales agent inside the organization, there's also this overriding construct within cybersecurity that deals with HIPAA, PII, all of these things. How do organizations ensure regulatory compliance when deploying a cloud-based CX technology?

I think the good news is, what's the expression, what's old is new and what's new is old. We should never forget that our duty as sales agents and CSRs, particularly in the healthcare space but really across all spaces, is very much to be sensitive and attuned to the contact's or caller's needs and situation. So when we talk about regulatory compliance: one, work with the CX vendor to ensure that their SaaS environment captures only the information that's necessary and shares only the information that's necessary, and have those protocols in place. How do you validate, if Robert Fitzgerald calls in, that I'm me and not some other Robert Fitzgerald who may be trying to schedule an appointment at the hospital or whatever else? What pieces of information do you need? Do you need a Social Security number? A bank account number? What are the pieces of information you're going to share? And two, how is that information presented? Is it all displayed on the screen at once, or is it: give me your name, and as a CSR I click a button and I can see the name; give me your address, I click another button and it unblurs just the address; give me your birth date, I click a third button, the asterisks over the birth date are removed so I can see it, and when I click off the button it blanks out again. I think there are ways we want to be conscientious. And one of the big things we need to be thinking about, particularly in healthcare and in finance, is understanding and capturing what the agents are typing, what they're typing into, and what tools they have access to. Should they have access to Word and Notepad and other applications, or should it simply be the environment, tied to the scheduling software or sales software or whatever it is? How do we work with that to ensure that even the temptation of risk is mitigated by removing access to things that are not necessary?
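The "click to unblur one field at a time" pattern Frank and Rob describe is essentially masked-by-default display plus an audited reveal. Here is a hedged sketch of what that could look like server-side; the field names, masking rules, and logging destination are assumptions, and a real HIPAA or PCI program would pair this with access reviews and retention rules for the audit trail.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("pii-audit")

CUSTOMER = {  # stand-in for the record the CX platform returns
    "name": "Robert Fitzgerald",
    "dob": "1975-04-12",
    "ssn": "123-45-6789",
    "card": "4111111111111111",
}

def masked_view(record: dict) -> dict:
    """What the CSR sees by default: everything blurred except the name."""
    return {
        "name": record["name"],
        "dob": "****-**-**",
        "ssn": "***-**-" + record["ssn"][-4:],
        "card": "**** **** **** " + record["card"][-4:],
    }

def reveal_field(record: dict, field: str, agent_id: str, reason: str) -> str:
    """Unmask a single field on demand and write an audit event as compliance evidence."""
    audit_log.info(
        "agent=%s field=%s reason=%s at=%s",
        agent_id, field, reason, datetime.now(timezone.utc).isoformat(),
    )
    return record[field]

if __name__ == "__main__":
    print(masked_view(CUSTOMER))
    print(reveal_field(CUSTOMER, "dob", agent_id="csr-042", reason="identity verification"))
```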
I think the latter part you just mentioned moves from mitigating risk to eliminating risk, fundamentally taking it off the table, whether through obfuscation or by moving it into a separate process that engages directly with the end customer, so that once that particular set of personal information has been transacted programmatically, the customer is pushed back into a conversation with the actual support individual or salesperson. I think all of those things are really, really critical. Ultimately, it's very clear at the end of the day that when you deploy a CX strategy like a contact center that now has so many channels, digital channels, human channels, so many different threat vectors, if you aren't really factoring a cybersecurity strategy into it, you're missing the other side of the coin. Rob, thanks for being on the show.

Thank you very much. This was great. I hope we can do it again.