CX Today
News and Insights for Today, and Tomorrow CX Today reports on the latest customer experience technology news and marketplace trends. Every day our tech journalists uncover the hottest topics and vendor innovations shaping the future of work.
Our coverage is fully digital offering our audience authentic news and insights on the channel of their choice. We offer daily news, weekly features, video conversations and authority content aligned to the needs of business leaders in today's world.For industry professionals, our weekly newsletter offers a range of popular stories hand-picked by our editorial team.
Subscribe to our weekly newsletter.If you're seeking editorial coverage, connect with our news desk.
CX Today
The 2026 Compliance Survival Guide: Demystifying the EU AI Act
Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.
In this episode, Rob Wilkinson is joined by Steve Blood, VP of Market Intelligence at Five9, and Martyn Redstone, a consultant specialising in AI regulation.
Together, they unpack where CX teams are most likely to misread the Act, why “we bought it from a big vendor” is not a compliance strategy, and how governance needs to shift from dashboards to outcomes customers actually feel. The conversation also digs into the clash between legacy on-prem approaches and the EU AI Act’s expectations around logging, data visibility, and conformity assessments.
If you are running AI pilots in the contact centre, balancing cloud migration decisions, or trying to protect customer trust while innovating fast, this is your practical briefing.
Hello and welcome, I'm Rob Wilkinson, and today we're taking a closer look at the EU AI Act and what it really means for customer experience leaders. If you're challenged with figuring out which of your AI pilots are safe, which are about to become high risk, or if you're weighing up whether to stick with your legacy systems or to move to the cloud instead, then stay with us because you'll get some practical takeaways that you can use in your business this quarter. I'm joined by Steve Blood, VP of Market Intelligence at 5.9, and Martin Redstone, a consultant who spends a lot of time helping organizations interpret AI regulation in the real world. Between them, they've got a very clear view of the decisions, the risks, and all the opportunities that leaders are facing today as we head towards those EU AI Act deadlines this summer. Steve, Martin, welcome both of you. Thank you so much for joining me.
SPEAKER_02Good to see you again. Thanks.
SPEAKER_00So, Steve, last time you and I spoke on CX today, we we unpacked uh the hidden cost of doing nothing in terms of legacy systems. Today we're kind of coming back to that same theme, but through this lens of the uh the AI regulations and compliance that are coming into effect. Um, really looking forward to this conversation, uh, building on those things we touched on last time. Martin, before we dive into this though, um maybe you can through your lens kind of give us a bit of a set the scene moment, give us some details around what the background to the EU A Act is and why it matters right now, especially for CX leaders in particular.
SPEAKER_02Yeah, sure. So um so for me that the EU AI Act is the world's first kind of comprehensive uh horizontal legal framework for AI, for artificial intelligence. But it it wasn't created in a in a vacuum. You know, it was born out of a realization in Europe that that while AI offers some immense benefits and some that we'll probably talk about today, it also poses some very unique risks to fundamental rights, to safety, to democracy, the existing laws like GDPR and others they they don't fully address. And and for CX leaders, the the act I think is a bit of a wake-up call because customer service and the contact centre is currently what I would see as the primary frontline for AI deployment in an organization, and so it's really shifting AI from being kind of this um theoretical, kind of purely technical project into what I see as a core kind of priority for governance and brand trust. And trust is that key word when it comes to the AI Act, it was all about building trustworthy AI systems.
SPEAKER_00Nice framing. I I love that it makes a it's quite a nice clear pitch that.
SPEAKER_01Yeah, I I think that's that's already come and gone in some respect. Um, I mean, you know, for me, yes, the EUAI Act is um it's it's already out there. Um the the the the the part of it that is more relevant from where where we're trying to impress new technologies, obviously, is the high-risk element, which is August this year. But I think right now the pressure for CX leaders is more operational than compliance. Okay, um, you know, the high-risk agentic systems we refer to them, um, I don't think really, in most cases, companies are there yet in terms of making autonomous decisions without having human oversight. So as soon as you put human oversight in the loop, you're in you're in low risk, so you're limiting the impact of potential fines of things. But um, you know, I think uh uh there are some some challenges there, you know, we we with things like brand, um brand protection, but I do think that you know to Marlene's point, there's a lot going on in the service space. It's ripe for automation, it's you know, it's an area that companies really want to try and uh use technology to create greater efficiencies. So I think you know companies have got to start looking at well, what have we put out there? What are we bolted on to legacy? Um, what sort of point solutions do we do we have? So, you know, absolutely Martin's Martin's totally right. The this idea of the wake-up call, this is the opportunity to say, let's sit down, look at our point solutions, map them all out, dig into the inventory, what use use cases, what AI, and and where are we using? So you can start to map this out. And this all then comes to bear before we get to the point of August where there are going to be uh you know more implications and and more concerns, but also um you know uh more likely companies will start to look to implement more agentic level of the capabilities.
SPEAKER_00Interesting, okay. So so as they're doing that then, Martin, I guess we need to consider that there's gonna be an impact over the coming months um for all the people involved on the front line, and and by that I don't just mean that the our teams, but uh but with the kind of customers themselves as well. This must be showing up, uh, and it's there's probably risk of consequences uh for the teams working under this we might call it a cloud of uncertainty with anything change anything like new things that change that brings with it uncertainty. How are we seeing that play out?
SPEAKER_02Yeah, so so it's interesting. So from a customer perspective, um, yeah, the whole point of the act um is to introduce kind of these trust safeguards that are going to fundamentally change the digital interactions that they're really used to, you know. So customers, you know, and something that we don't get a lot of now, but you know, there's mandatory transparency. So customers have to be informed when they're interacting with an AI system like a chatbot or a voice bot, um, unless it's pretty obvious from the context. So, you know, some companies that kind of hide those bots or try and make out that they're human, you know, they can't do that, and and there has to be clear disclosure, um, and that's a legal requirement. They also need a right to an explanation, which I think is a big change to the way that a lot of companies interact, you know, and as Steve kind of mentioned, you know, that there are a lot of high-risk AI systems out there, you know, one that might be used for credit scoring or insurance pricing. If that makes a decision that significantly affects a customer, they've got a right to a clear and meaningful explanation of the AI's role and the main elements of that decision. Uh, and there are other such things, you know, deep fake disclosures, protection for vulnerable groups, but I think those are kind of the two most kind of operationally impactful pieces that organizations are grappling with.
SPEAKER_00Okay. Um so I guess, Steve, you've got um access to the frontline, you're you're you're speaking to contact centres every day. What's it feeling like for kind of a head of CX right now who's got probably pilots running, um like quality monitoring, routing, uh sentiment analysis, things like that. Um they might not be sure which ones fall into this high-risk territory that we're talking about. What are those conversations like?
SPEAKER_01Um well, I think I think there's three perspectives of high risk. Um so let's um the most obvious one that's been looked at right now, I think, is agentich quality monitoring. It's inside the organization, right? So agentic quality monitoring is basically the ability to automatically uh review and rate employees on their performance and then pass that information back. The the kind of important thing is, or the useful thing is rather than just um measuring manually five or ten calls or interactions a month, you get to measure all of them. So what you're seeing is a a bigger picture of how staff are performing and so on. Um so in that scenario, and this is maybe a little bit too soon to be talking about this, but if someone was to implement agency quality monitoring, it it didn't work correctly. An employee was disciplined and fired, and they took them to court, and it turned out, oh, the technology was wrong, then that would fall into high risk category, and the fines are huge, right? I mean, we're talking up to 35 million euros or 7% of turnover for getting it wrong. So that's very, very high risk. Now, um, so in terms of regulatory regulatory compliance, regulatory compliance, it it's a major risk. But I think you know, uh you have to sell that as a benefit to the employees, right? They they they have to see it as, well, actually, this technology is reviewing everything. I I might have had a bad day when the when the service uh the leader or the supervisor monitored me. This way I get I get a better perspective of of my performance. It also picks up the things I'm really good at. Um, and those can be used and they can be uh elevated as a result of that. So it's got to be sold to the employee. So the risk there actually is if they don't if they don't see it's a benefit to them, then the morale drops, you get churn, and then you've got high risk to operational success, right? So you know, regulatory compliance, there's a risk there, operational success. And there's one more, I think, around uh implementing elements of AI. And I'm gonna pick up on uh biometric authentication. Um the you know the the rise of deep fakes um really mean that companies they have to minimize you know the risk of hacking more so uh things like multi-factor authentication are gonna be more key. So um as I mean, um Martin can correct me. As far as I'm aware, this isn't regarded as a high-risk use case per se from an EU regulatory perspective, but the financial risk could be very high risk. So uh the financial loss, sorry, could be very high risk. So I think from that perspective, you know, there's yes, regulatory compliance is a risk there, there's risk to operational success, and there's there's there's the there's a high risk of uh of fraud and financial risk. So I think that for me they're the three I think that service leaders are probably going to be looking at in different perspectives at the moment.
SPEAKER_00Which actually that that succinctly brings me to my next question to Martin. Um there might be some organisations where they've got uh an approach that might be doing nothing and could be seen as safer. Um, you know, instead of touching the old systems from a regulatory point of view, though, is that true or or is there more risk or hidden risks in in just standing still?
SPEAKER_02Um I don't think I think yeah, you're right. You know, I talk to a lot of organisations that that kind of say that that you know it feels like doing nothing often feels like kind of the safer bet, you know, the kind of the path of least resistance, you know, but but from the perspective of you know the the the EUAI Act and also kind of broader kind of governance, risk and compliance landscape, it's standing still is just you know it it's a it's a gamble ultimately. Um and so you know you've got some some real challenges there, you know. Um it's it's it's accumulating kind of you know compliance debt, really, because you know if you've got legacy systems, you know, they'll often violate some of the some of the newer kind of transparency requirements that I spoke about earlier. Um you know, they they'll violate kind of the right to explanation, you know, even kind of the internal piece that Steve kind of mentioned there, you know, the right to explanation of how AI was used to make an employment decision as well. Um and they very much fail the that kind of human oversight test as well. And so, in in the eyes of a um of the regulator of the EU AI Act, you know, a very old potentially biased, unexplainable system is just as illegal as kind of a new, you know, all systems, agentic kind of system. And so the safe path for me is is is kind of a proactive migration to a governed framework, you know, so so you know, just as Steve kind of mentioned, you know, exactly kind of what systems you're using, what your risks are, both you know, from a regulatory perspective and also from a from a you know uh company perspective. And you have you know the systems, the management process and the documentations in place uh to prove that you are managing those risks.
SPEAKER_00Okay, so it it feels like there's um there's an opportunity there for people to get kind of ready for this then. So let's ha let's help them do that a little bit. When let's let's kind of try and and I'm I'm saying this to you, Martin, there w can you strip away like the legal angle angle of the legal language, uh certainly technically, and then try and explain the kind of the EU AI act risk-based approach to uh you know a busy CX leader who's this is not his like main job. What are the things they should be focused on? What are the things they need to know about in kind of layman's terms, I guess.
SPEAKER_02Yeah, and and one thing to kind of note is that the EU isn't kind of regulating you know the the code itself, you know, the system kind of code in the background, it's it's it's regulating the context of how AI is used. And so there are some main kind of uh pillars or buckets that um that I tend to kind of work through with my clients, and it kind of kind of looks like a a regulatory traffic light system, really. And so we have those kind of red light prohibited kind of AI practices, you know, and and those are the deal breakers. So we kind of you know the EU has has identified certain areas of AI that and a completely unacceptable risk to the rights that the EU AI Act brings, and completely banned, so manipulative techniques, you know, AI that uses kind of subliminal techniques or deceptive methods to impact the behaviour of a person. Social scoring, so systems that evaluate or classify people overtime based on kind of social behaviour, personality traits, etc., completely banned, um, emotion recognition in the workplace, completely banned, um, and untargeted by metric scraping, so so that kind of creating or expanding things like facial recognition databases uh by scraping images that completely banned. So then we have the high-risk AI systems that we talked about today, and those are kind of what call the amber light, you know, and so this is where the most significant compliance work tends to live. Um, and if if AI and TIC systems fail here, um you know it it is legal, but it has to meet kind of strict safety standards before it can be used. And so you've got that employment and HR. So any AI used for recruiting, making decisions on somebody's employment, you know, promotions, task allocations, employee performance has to be um monitored and and and managed properly here. Finance and essential services, you know, systems used to evaluate credit worthiness or risk assessment for kind of health insurance and things, they sit within here. Um we also look at kind of safety critical infrastructure, so um safety components in like water, gas, heating, that kind of thing. But public services as well, you know, and so very much in our in our wheelhouse, yeah, systems used by authorities to determine eligibility for social security um benefits or healthcare, they all live in this kind of high-risk system. So this is where the significant compliance work has to be. And then we have kind of you know the green light, you know, limited risk, limited transparency, and most kind of CX tools like chatbots, content generators, those kind of things fall into this category. It's not heavily regulated, but they do carry those kind of mandatory transparency obligations. So disclosing if somebody is you know interacting with you know an AI system like a chatbot or a voice bot, they have to be informed they're talking to a machine. Um, if a system generates synthetic audio, video, or kind of um aperture kind of images and what have you, um you know they've got to be clearly labelled as artificially generated, and that's something that a lot of people don't actually pick up on, um, especially in CX, where you see CX systems now generating kind of emails and um content and what have you, they have to be labelled um as artificially generated. And then we have kind of very routine AI used for things like sentiment analysis, not in the workplace, but in general, um, automated routing, translation, and they generally very green light, you know, they face very kind of low low hurdles beyond the the honesty and transparency rules. So for me, kind of the bottom line for for CX is that if you're using AI to decide who gets a job, who gets a loan, then you're in that amber risk kind of um that amber high risk zone. Um and you do need that full kind of governance, risk and compliance framework behind it. But if you're using AI to communicate with customers, you just need to be honest and tell them that they're interacting with a bot. Simple rules.
SPEAKER_00Nice. It's good to get something a bit clearer. Uh so thanks for clarifying that for us. Um Steve, let's uh bring it into the contact center world then and look at this a little bit more specifically. So it would take some of the very kind of common uh AI use cases. So we've already talked about some of these sentiment analysis, real-time agent coaching, um, AI-assisted routing chatbots, obviously. Um how should the TX leaders think about the risk levels in practice going on from what Martin's just told us?
SPEAKER_01Yeah, well, I think um certainly, you know, that there's the external and the internal. Um I think largely, um, yes, we've as we've spoken, there's more focus on trying to automate the internal stuff. Um, the external stuff is is the biggest risk is probably brand protection, right? We've seen some awful chatbots that have been using large language models, there was no governance and guardrails. I think it's the DPD one where it actually swore in response to the customer. So that's massive, that's devastating for brand, right? So, you know, it's a high risk. It's not high, you know, they're not getting uh they're not getting fined for any breaking in the regulations, but uh, you know, someone getting getting the chop for that. So um I think from that perspective, you know, you've got to look at the two. Um in in there, but I I think um the the internal ones that we see, you know, there's not a lot of risk in, as Martin said, not a lot of risk associated in in how they're deployed currently, the uh uh the summarization that's being used, you know, just making sure that it's accurate is really the important thing. Because again, from a quality perspective, you put in a summary that's wrong, that gets passed downstream. So the next person that picks that up wash, and they're gonna communicate with the customer. What are you talking about? That's not what I discussed with that. So the whole journey gets ruined and things. So, you know, as I say, the the from that perspective, um the the risks are you know, the I I don't think we should just look at this from the perspective of regulatory risk. We should also look at it from the perspective of risk to the company. So companies should be defining their own high and low risk um uh uh use cases. They should obviously align those regulatory naturally, um, but they should have their own internally defined categories. Um, you know, because because customer trust erosion is probably the biggest risk uh that CX leaders face of all that.
SPEAKER_00I love that. Trust erosion that has to be on a scorecard somewhere in a contact center near you very soon. Um but it's a fair point because you know you you referenced DPD though, we're still talking about that now. So that's a reputational risk, right? That that that can happen to anyone without the right kind of things in place. Um Martin, I just want to shift over to kind of um rather than we're we're talking about things that are very clear, very factual at the moment. What what what's the opposite of that? Where do where do you see like the biggest misconceptions that people are having? So I'm guessing some leaders are probably overestimating the level of risk with some of the tools that they've got. Likewise, they're probably underestimating the potential risk of some of the uh the black box models that no one really understands. Would that be right?
SPEAKER_02Yeah, yes, um, I I think so. I mean, two you know, I see kind of two very kind of opposing misconceptions, you know, and so so leaders, yeah, absolutely, they they're often kind of panicking about the wrong things while kind of you know walking around with their heads in the cloud, really blissfully unaware of the actual kind of you know, ticking time bombs that they've got, you know. So, so so yeah, so absolutely overestimating that risk of those kind of simple assistive tools. You know, there's there's this pervasive myth that all AI is now high risk and requires that kind of massive legal overhead. And you know, and I really see you know leaders really kind of hesitating to roll out very basic generative AI assistance or productivity tools because they fear those kind of 35 million euro fines, you know, and and and that's you know, they they think they need a conformity assessment for just a drafting tool, and they don't, you know. Um so that's kind of the first one. But yeah, absolutely, underestimating the risk of those kind of legacy black boxes, you know, is is absolutely the most dangerous blind spot. Um, you know, they've been using automated decision systems for decades, you know, things like kind of old kind of hiring filters or credit scoring models or churn predictors, you know, prioritizing which customers get a callback and what have you. Because these things. Systems are old, your leaders just assume that they're safe or they're kind of grandfathered in and they aren't. So if you are making that kind of significant change to a legacy high-risk system, you know, it's got to immediately meet the full standards of the AI Act. So there's absolutely that. But I actually see kind of one other uh well, I see kind of loads really, but but you know, a couple of others, you know, the the IT only problem. You know, uh I I absolutely see kind of um you know leaders and and boards especially kind of delegating AI Act compliance to you know the IT department or the CIO, and you know it's a government's a governance framework, it's not just a tick-box list, you know, and so it it does require um things that the IT can't build alone, you know, fundamental right impact assessments, you know, and human in the loop protocols and what have you. Um the other misconception, I think, which is the most important one, uh, and I hear this probably on a weekly basis, is the whole kind of we don't build AI, so we're fine. Um, and you know, and I and I see it a lot, you know, CX leadership and other leadership, you know, they believe that because they buy their AI from a big vendor, um, the liability stays with that vendor. And and actually the act makes that very sharp distinction between the people who build the tools, the providers, and the people who use the tool, the deployers. And so, as a deployer, there's some very kind of specific non-delegatable, is that the right word? Duties. Um, you know, so so they've got to so that isn't an excuse, and that's probably the biggest misconception that I see um across the lot.
SPEAKER_00Okay, so ignorance is not bliss. Um which is it's a shame to have to point that out, but it is uh you're you're you're absolutely right to do so. Uh and I I appreciate the that there must be lots of different avenues we could go down for on all of that, that Martin, and I'm very grateful for you to prioritize those that are kind of the important ones. Um we do have to move on from the kind of from this part of it. I c we've got the kind of I think we've got a handle on the business and the kind of human element here. Um I want I want to lean into the technology angle now because I think although we've spoken about the risk in theory, it it's important to look at how um both legacy AI and modern kind of cloud native AI compare when it comes to this whole compliance piece. Um and Steve, when when you begin kind of looking at solutions with with customers who are moving off legacy, I guess. What what are the things that are shaping the decisions in that whole process? Is it is it performance? Um are they looking at compliance more so, or is it is it the ability to explain what the AI is doing, or is it something completely different?
SPEAKER_01Well, yeah, thanks Rob. I think the starting point here is um that you know the two primary reasons for moving off legacy, a cost of ownership of legacy tech, um, and and the lack of AI capabilities. This this came out in a Metrogy survey last year, the the two top two top reasons for moving. Um so you know, we've gotta we've got to um recognize that it is possible to implement AI as a bolt-onto legacy, right? No one's gonna build AI in premises-based technology, it's not it's just ridiculous. It's not, you know, the elastic, there's no elasticity there in the compute and anything like that. So it's always gonna be from the cloud, and but it is possible to bolt on. And, you know, being open and honest here, 5.9 has a lot of competitors in that space. You know, we want to sell our self-service capabilities, and we're up against some standalones, you know, with those sorts of capabilities. That's the reality that that's out there. Um, but I think there are some risks, and of course I'd say this, but you know, there are some risks of looking at standalones. Um, and I think you know the danger is what we're trying to do is provide this seamless engagement for a customer. Everyone gets irritated when they have to repeat themselves, their context isn't maintained, and so on. And I think what you've got to look at is if you if you bolt on, then you are creating a new product stack with a new software release cycle, with another set of integrations and another set of capabilities to manage. Um, and I think as you look to scale that, then your problems start to compound significantly. So, you know, we would argue that um uh you know you if you if you do all that through a single platform, then you maintain the context, it's it's it's retained through. Um and and um and you know that gives you across all the channels of engagement, uh across all the context, it's all consistent, it's all managed by a single system. Um and you know, and we think that actually that's gonna, for when it comes to scale, it's bur it's it's it's um it's easing the burden of maintaining the consistency, transparency that we've talked about, the guardrails, and and also you know, something we're very, very, very, very um passionate about, which is keeping the human in the loop. We do not believe that you can uh automate 90% of engagements uh or even a hundred as some you know that some some suggestion. So the ability to say, I've got this far, the agent now makes a decision, I need to pass this off to a human, and it's sent, you know, and it finds a human, that tells the customer they're connected, and the context is maintained and the conversation continues. We see that as absolutely paramount. Um, and so I think you know, as companies move off legacy, they've got to be considering that short-term, potentially short-term gain of achieving some very, very, you know, very rapid automation, um, and then thinking about that longer-term perspective of of end-to-end customer experience.
SPEAKER_00It's a yeah, it's a it's a very valid consideration that, and I think Martin, flipping that on its head, looking at that from a governance perspective, um, these legacy and bolt-on options, um where where does that struggle when you kind of map things out against the the um the act's requirements?
SPEAKER_02Yeah, and I think Steve Steve kind of gave a really great summary there, you know, and ultimately legacy systems, you know, that they were they were very much built for kind of stability, local control, but but you know, the act really does demand you know that dynamic transparency and and accountability. And so um so legacy world you know really does struggle from a governance perspective. You know, that kind of on-prem, sorry, legacy world um really does struggle from a governance perspective. Um, as Steve mentioned, you know, documentation, traceability, you know, they often kind of you know they they they're kind of um they were frequently deployed as kind of set and forget solutions many, many years ago. And but you know, the the act kind of requires that kind of detailed technical documentation that that has to be drawn up before a system is placed on the market and put into service. And that includes descriptions like training data, model logic, classification choices, and so for an on-prem system installed years ago, that documentation tends to be missing, incomplete, sitting in a laptop somewhere, and so you know you kind of got that risk there. Steve mentioned human in the loop, yeah, absolutely. You know, it was all most of the time leg on-prem legacy AI was designed for automation and not oversight. Um, and and these tools were built to replace human steps to save money, and so they don't have that kind of human in the loop interface and that human oversight. Um I mentioned earlier about the the kind of the deployer um uh liability, and and very much in the old kind of um on-prem model, you bought a perpetual license and managed it yourself, and the vendor relationship was very transactional. Um, and under the act, you know, as as you're the deployer, you're legally responsible for ensuring that the system is used according to the instructions for use, and and many legacy vendors are either out of business or they've or they've moved to SaaS, and so they're not providing instructions for use anymore or technical details required to fill those compliance um obligations. Um, we've got other things as well, you know, automatically recording logs of events over their lifetime, you know, it doesn't we don't see that happening really with on-prem either, as well as the significant modification trigger as well. You know, if there's any modifications to an AI system, um, you know, it triggers a new conformity assessment, and and that doesn't happen either. So, really, you know, the the legacy kind of on-prem approach was was about ownership, and the at the e the EU AI Act is about explainability, and they kind of clash quite quite terribly actually. So you need to really be showing the data it was trained on, you know, the instructions it follows, the logs of every move. Um and if you can't do that, it's a real liability.
SPEAKER_00So, yeah, both from a customer experience perspective and a compliance piece, uh, and from just a technology from a deployment piece, it it doesn't seem to stack up anymore. Okay. Well, we're calling this session the compliance shield. Um, Steve, can you break this down for us? So, in practical terms, um what does it mean for a CX leader to inherit the compliance from the cloud platform rather than having to build everything in-house themselves? That's got to be a benefit, right?
SPEAKER_01Oh, absolutely. I mean, yeah, this is this is a continuum of SAS, right? I mean, you know, we talked about uh in the early that you don't need to build and manage this technology yourself, it's done for you. Um, you know, inheriting compliance for us means that um the CX team doesn't have to build security and regulatory compliance from scratch. Um, we already do that, right? We do that for all of our customers. We have an audited infrastructure, encryption access controls, regulate regulatory certificates. We have a whole team that does all that, right? So I would kind of say you know, we're what we're doing is the heavy lifting of compliance, um, you know, continuous audits. It's all it's all part of the service offer that companies are paying for. Um, where the CX teams obviously are very much involved is in configuring the access, the data usage policies, and how the AI agents behave, right? Uh and they can still get it wrong. Um, but you know, our role there is to provide them with the tools to manage that. So, you know, what we refer to as AI trust and governance, uh, to ensure that you know the performance of their prompts is working well, that there aren't long pauses in between, especially for voice, there aren't long pauses between questions and things like that. Uh, making sure that hallucinations are minimised, um, you know, and obviously continuously helping to re-engineer those. And things like obviously, there are still bad actors out there that are trying to take down your AI agent, so prompt injection attacks, you know, we're monitoring for those as well. So that's kind of like you know, the the rap that we that we put together for for customers as a cloud provider.
SPEAKER_00That's that's really clear. Uh I'm I've it's giving me some thought because we're seeing a shift towards new technology, we've seen all these shifts from a compliance piece sort of driving that. Is there any benefit to the are there anybody out there and to mine you might be able to answer this best? Are you seeing kind of organizations that are actually trying to turn this into a differentiator rather than like it's always compliance has always been a cost center, right? Are any is anyone doing that? Um or or has no one kind of comed onto that yet?
SPEAKER_02Well, I think there's a few different ways to look at this. Um it's a great question. Um, and and actually, you know, we look at it from both kind of a vendor perspective and also from a uh from a buyer perspective as well. And Steve perfectly highlighted um you know the differentiator that I'm seeing happening in the market right now, which is vendors that are treating this as a differentiation strategy. You know, if you're a a CX buyer, uh a CX leader and you're and you're buying you know a new uh system, and one vendor says it's fast, it's bias-free, trust me, uh, but but another vendor is saying, you know, it's high-risk compliant, it's audit-ready, it comes with pre-filled transparency reports, you know, we really support governance compliance. It's a huge differentiator that I'm seeing in the market. Um, and and so you know, I think that Steve kind of highlighted that absolutely perfectly. Um, but you know, aside from that, yeah, absolutely, you know, I I think that you know there are some organizations that are seeing this very much from a cost-centered perspective. You know, if they're you know, if they're looking at kind of those um, you know, reactive audits, agentic GRC again, you know, on their kind of vendor side of things, you know, if you're hiring 50 people um, you know, to kind of manually check logs and fit out paperwork, you know, but if you're if you're implementing uh a system that that has these kind of compliance and governance solutions built in, then again, you know, you're you're saving kind of time and and money there. But but more importantly, um for CX leaders, you know, I think that the honesty kind of mandated by the act is being turned into a brand story. Um I think that a lot of customers are becoming very AI sceptical. I mean, we all hear it just in our kind of you know, in our social lives. Oh, you know, I phoned up, I had to like talk to a machine, you know, etc. And they fear you know being tricked by a bot or having their data harvested or not being kind of listened to, and and trust kind of starts getting eroded, and we talked about that earlier. Um, where I'm seeing the kind of differentiator in the market is CX leaders that are kind of leaning into the transparency requirements, you know, and instead of that kind of tiny, kind of powered by AI kind of disclaimer, they're they're providing that kind of full, clear explainability of what the AI is doing, what the data it's what data it's using, how humans are taking over when it needs to. And that kind of radical transparency is really building what I'm seeing, you know, deeper customer loyalty, you know, through trust.
SPEAKER_00Radical transparency. Absolutely love that. Um Steve, I'm gonna challenge you to come up with something similar now.
SPEAKER_01Yeah, I I love radical transparency. Um I was gonna use the word operational signals, but radical transparency is way better. So I think this is really important, right? So um how do you get across to someone that if you know if you analyse every single interaction, every single engagement with a customer, you surface insights that you didn't know existed. We find, you know, we have an AI insights product, companies put it in, and it's we had no idea that was going on. Um these are these are operational signals that you can grab. You know, the we talked about um quality monitoring of staff, you know, analyzing a hundred percent of the interactions, not five, five or ten percent. Um leaders can get a better idea of compliance across their teams. But it's not just about operational, it's you know, custom getting customers accurate answers. That's all good for CX as well. So you're surfacing those insights. Um, you know, my my analysis here is is um Steve um started out on Monday with this new product, and he's clearly you know, the the the product saying, I'm sorry, the the uh uh uh AI insights is saying he wasn't listening to the product training last week. Um you're gonna have to take him out of the loop, take him out, take the skill set away, get him trained again. This time he listens, so he can provide customs with with with answers accurately. Um so if you didn't use the tools that we've got, um then you wouldn't pick that up until the end of the month, and only if it was one of the 5% that were that were monitored. So I think you know, these operational signals, they're there. And you know, also the the just think about the conversation, the trends in conversations, new product issues, poor marketing communication, these all lead to extra. Oh, can I just confirm what what's happening about this? It all this costs more, and so the opportunity there to feed that back into the business. So now the service center is becoming this um this real-time intelligence engine, this nerve center of how the company is performing. So I, you know, it's um it it it all that stuff is there, and we never had it before. And so I think you know, it it's it's fascinating. I've been around this industry a very, very long time, and only the last few years has it really started to just surface this these insights so so readily and so easily.
SPEAKER_00I I it it is like we've got the the finger on the pulse finally, you know, we've always had that data sitting there, we've always had all the kind of recordings that have been there for training purposes or whatever, and we've never put them to practical use like we can today. So it it really is a new era in terms of using this this data that we have available to us. Um just just conscious of time, gents. Uh I I think we need to have a look forward now because um we've we've I think we've got some a really good handle on um on the technology side there. Um Martin, I'd like to um come to you and ask you uh if there's anything that gives you optimism over the rest of this year as these rules kind of start to kick in. Um and is there anything likewise that that keeps you cautious and and we should be mindful of?
SPEAKER_02Well, yeah, no, I I I I I remain very optimistic. Um you know, I think I think the EU AI Act, you know, whilst it feels like a complete burden, um I I think certain parts of it do absolutely give me optimism. You know, I think that I mentioned you know that kind of transparency requirements and and kind of you know that that radical transparency that I talked about earlier, you know, for me it um it it it builds the ability to to create that kind of trust and safety-based relationship, and I think that that absolutely gives me um optimism um around how to operationalise certain parts of the act into uh into operations, but you know, that that kind of compliance as a competitive edge as well, you know, um the the the the best organizations that I'm talking to, you know, they're not treating this as a um as a localized European problem either. Um they're using that kind of European compliance standard um to fast track kind of market entry into other parts of the world as well, you know, um the UK here and you know Asia and the US as well, because they're all beginning to start mimicking these standards, and so um they're kind of building their governance based on the EU to go faster elsewhere. Um so that gives me you know major kind of optim uh uh uh optimism is the word. Um what keeps me cautious? Um I think that those kind of implementation gaps that I talked about earlier, um, and those misconceptions that I talked about earlier, um you really still still keep me quite cautious. I think there's a lot more education that's needed uh for CX leaders and for the board um on the obligations and the requirements and and and and the misconceptions. So that's the bit that always keeps me a bit cautious.
SPEAKER_00Good to hear some some optimism though, some like silver linings, uh as it were. Um Steve, over to you. I think if you could offer a piece of advice for our audience this week, um maybe they're looking at their AI roadmap and and thinking what they need to do next, what would it be?
SPEAKER_01I've got two, so I'll be quick. Come on. The first one is um to focus on outcome success, not metrics. So, you know, you you hear it all the time. Um we deflected 60% of customer interaction to self-service. How many times have you heard that, right? That is not an outcome. Um, especially when uh as I as I you know, when I was a government analyst, I heard this all the time. The service team would say, the website have uh they've done this, and and then they're saying, Oh, we're deflecting all these, everyone's self-serving. They then get the core, right? And the customers more irrage. So a measured outcome would be something like 60% of customers were able to resolve their issue without repeat contact or escalation to a different channel. That is an outcome. And you need a different set of metrics, a broader set of metrics to be able to make that statement. But if you can, then you're able to say our project is a success, and you can start measuring the benefits a result of that. And the other one, we've mentioned this already, but I'm going to bring it back up. Make trust front and center to your strategy. So as a business, you've got to trust the technology, it's going to enable the use cases that you want to deliver. Your customers have got to trust that they that they like what they see and they can use effectively. So have your employees got to trust the technology that's going to work for them. Um, and then you also got to trust your supplier. Um, that you're going to get the support through each of the initiatives from pilot through to surfacing the results at the end.
SPEAKER_00That's great. And you you can have the second one, of course. It's a very valid one, to be fair. Um unfortunately, that that I'm afraid to say that that is all we we've got time for. Steve Martin, thanks again uh for joining me uh and answering all my questions. Um, Steve, just before we close, uh for anyone who's watching this uh and then maybe want to explore this subject in more detail, what's the best way for them to find out more about 5.9 or or to get in touch with you guys?
SPEAKER_01Yeah, I I um I certainly would start with 59.com. Uh from there, there's a very good section, uh, a drop down for all of the products and services, and and and it's got a really good search engine. Um you can check out the chat facility as well and uh get some answers through there. That's all run obviously on. Our own technology, but um, yeah, um, obviously, we're here to help. Um, it's an exciting time, um, it's a re uh it's a time for caution as well, and uh we're ready to help uh customers and and prospects through that.
SPEAKER_00That's great, and and and don't forget, um you can find a wealth of related resources, uh, other stories, uh, other videos at cxtoday.com, uh including uh our earlier conversation with Steve on the hidden cost of doing nothing, so check that one out too. Um but for now um that wraps everything up. Uh, I'm Rob Wilkinson from CX Today. Uh thanks for joining us and see you next time.