Advice from a Call Center Geek!

QA/AI Showdown: Chris Mounce from Evaluagent & Tom Laird Unmask AI's Reality in QA

August 10, 2023 Thomas Laird Season 1 Episode 200

Episode 200!

Dive into the evolving landscape of AI in the contact center QA space with our next Advice from a Call Center Geek podcast! 

Chris Mounce of EvaluAgent® returns to go head-to-head with our host Tom Laird in a riveting debate. 

Can AI truly automate scorecards, or are we still testing the waters? How do we ensure quality in AI-powered QA, and how should we interpret and leverage the data it generates? Discover the most practical AI use-cases in QA right now. 

 We're peeling back the curtain on how organizations are capitalizing on the low-hanging fruit of AI implementation as it relates to QA.

If you are looking for USA outsourced customer service or sales support, we here at Expivia would really like to help you support your customers.
Please check us out at expiviausa.com, or email us at info@expivia.net!



Follow Tom: @tlaird_expivia
Join our Facebook Call Center Community: www.facebook.com/callcentergeek
Connect on LinkedIn: https://www.linkedin.com/in/tlairdexpivia/
Follow on TikTok: https://www.tiktok.com/@callcenter_geek
Linkedin Group: https://www.linkedin.com/groups/9041993/
Watch us: Advice from a Call Center Geek Youtube Channel

Speaker 1:

This is Advice from a Call Center Geek, a weekly podcast with a focus on all things call center. We'll cover it all, from call center operations, hiring, culture, technology and education. We're here to give you actionable items to improve the quality of your and your customers' experience. This is an evolving industry with creative minds and ambitious people like this guy. Not only is his passion call center operations, but he's our host. He's the CEO of Expivia Interaction Marketing Group and the Call Center Geek himself, Tom Laird.

Speaker 2:

Welcome back, everybody, to another episode of Advice from a Call Center Geek, the call center, contact center podcast. We try to give you some actionable items to take back into your contact center to improve the overall quality, improve the agent experience and, hopefully, improve the customer experience as well. How is everybody doing? Oh, Chris, what is up? I'm excited about this. So Chris Mounce from EvaluAgent is coming back on.

Speaker 2:

As any of you who have followed us know, we've been doing and talking a lot about AI and how it relates to QA. There's probably no better person, or no better company as well, than Chris to come on. We're going to have a little debate, a little friendly banter, and talk about where we see what's real, what we think can be scaled right now, and how far we can go with actual AI as it relates to QA. So if you guys have any questions, we're live on TikTok, we're live on LinkedIn. We always do this as an AMA. So please, anything that you guys ask, we'll try to get to if we think it's going to add some value to everybody. But before we get this rolling, Chris, how you doing, buddy? Yeah, I'm really good.

Speaker 3:

Thanks, Tom. I'm super excited to be on, especially talking about AI. I mean, this is right up my street. I am a big fan, I'm a tech geek like yourself, so I'm really looking forward to a friendly debate. Right, right, right.

Speaker 2:

So, all right, let's start this off. Let me start this off by maybe setting the table, and then we can get deeper into this. Initially, and this was probably three weeks ago, I was talking with my internal team here at Expivia, and we said, hey, listen, we're getting really good at, or at least I think we're getting good at, writing prompts in a ChatGPT world for a lot of different things that we're trying to do from a business standpoint. And it wasn't even me; it was one of my IT guys who said, well, what about QA? And I didn't really even think about it for a second and just kind of let it go. But then that night it just stayed in my head.

Speaker 2:

So the next day and the cool thing about being like the C company that's not a huge company with you know, like four or five IT guys is I can just be like, hey, everybody, drop what you're doing and let's talk about this.

Speaker 2:

So that's what we did. We spent the day, started to prompt, looked at our forms, and said, oh my gosh, I think this could be done, that night or the next night.

Speaker 2:

I did my first post, because, being the shameless self-promoter that I am, I think I went on TikTok and LinkedIn and said, hey, listen, I think if three or four of us at little old Expivia, which isn't even a software company, can figure this out... My initial thought was, by the end of the year, right, full QA, and I kind of set that as a warning, right. So initially, Chris, we talked about this as kind of being a six-month process, but I think it's one of the low-hanging fruits when it comes to how AI can have an impact. And I'm going to stop there, let you talk through that, and then let's get deeper into some of the things that I think we can do now, that can happen now, and maybe we can talk through some of that stuff.

Speaker 3:

Yeah, well, thank you. The discussions that we had at EvaluAgent, like the ones you had with your IT team, were probably a good few months prior. Our tech team got together and decided to go for it and see what was possible, and that prompted the release of SmartScore, and we've spoken about it before, where it uses large language models and generative AI to be able to score the conversations. And since then, I was completely swept up and I bought into it. I'm a huge fan of ChatGPT. I mean, just last night I was using it to play Dungeons and Dragons, very, very cool.

Speaker 3:

But I think, from the conversations that I've been having with other contact center professionals, they're not as caught up in AI as perhaps the tech-minded people are, and for me that grounded me somewhat, because there is huge potential, huge potential, but I think a lot of people are not quite ready yet. There's a huge change coming, for sure, but I think it's going to be a journey for people to buy into that and understand the potential and the limitations as well, because you can do amazing things, but there are limitations, for sure. And the key driver, I guess, to overcome those limitations is the human component, and I think that's going to be the case for certainly the foreseeable future. So for me, I'm comfortable that AI will not replace QA for now.

Speaker 2:

Well, here's my counter to that. I think we might take AI too far sometimes, when we talk about machine learning and these types of things where it actually gets smarter. I agree with you: if we got to the point where it would be, I'm going to take all the data that comes into the call center, I'm going to quote-unquote score a hundred percent of it, I'm going to also look at handle time and a correlated score, and then talk to Janie here and flag that Janie's scores and her handle time mean she probably has an issue here, right? I don't think we're there yet. But all I'm looking for, and I think all we need, is a utilitarian tool that can score things, to be honest.

Speaker 2:

I think ChatGPT, taking the machine-learning piece out of this, is the perfect tool. The problem, and why nobody's been able to do this yet and why it can't scale yet, is that it's very difficult to prompt, right, because everyone's QA form is different. And again, I'm not saying nobody wants to use something like SmartScore or any of that, because I think that is a giant leap forward. But for this to have kind of a mainstream, wow, I-want-to-try-this moment, it has to work for everyone.

Speaker 2:

Whether you're a BPO... I have clients who've had a scorecard for 25 years, right. They don't care that I can give them this other score. They know that an 82 means this and a 94 means this, because they've been using this scorecard. And I think, as a small BPO, I can take the time and write an awesome, unbelievable prompt for that customer that takes into account the type of call that it is. If somebody says "I want to cancel," I know that this is a retention call, and it's this type of form, and I need this kind of verification, and then these 15 questions get answered yes or no, or one through five, and then I need a disclosure. I can write a prompt with outputs for that. Now, I will agree with you.
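What Tom describes here, turning one client's long-standing form into a call-type-specific prompt with fixed outputs, can be illustrated in a few lines. This is a hypothetical sketch only: the scorecard fields, wording, and JSON keys are all invented for illustration, not Expivia's actual prompt.

```python
# Sketch: assembling a client-specific QA scoring prompt from a scorecard.
# All field names and the output schema are hypothetical.

def build_scoring_prompt(call_type: str, questions: list[str],
                         disclosure: str, transcript: str) -> str:
    """Turn one client's scorecard into a prompt for an LLM scorer."""
    numbered = "\n".join(f"{i}. {q} (answer yes/no)"
                         for i, q in enumerate(questions, 1))
    return (
        f"You are a contact-center QA evaluator scoring a {call_type} call.\n"
        f"Answer each question strictly from the transcript:\n{numbered}\n"
        f"Also confirm the agent read this disclosure verbatim:\n"
        f'"{disclosure}"\n'
        "Return JSON with keys: answers (list), disclosure_given (bool), "
        "score (0-100).\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_scoring_prompt(
    call_type="retention",
    questions=["Did the agent verify the caller's identity?",
               "Did the agent offer a save attempt?"],
    disclosure="This call may be recorded for quality purposes.",
    transcript="Agent: Thanks for calling...",
)
```

Because the scorecard is data rather than hand-written prose, the same function can serve every client form, which is the scaling question the two debate next.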

Speaker 2:

To scale that is a whole different ballgame, right? To be a company like you guys, or any of those kinds of software companies on the call center side, even the CCaaS players, to scale that: how do you write a prompt for everybody? How do you get that good? That's where I agree with you, that the general public utilizing this is not going to happen for a while, because people are still filling out forms. But the technology to do it can be done right now.

Speaker 3:

Yeah, yeah. I mean, on that, I guess the weakest link when it comes to that technology is us, because when you mention building the prompts, that's a completely new way of working. When this technology first appeared, I think the thinking was, and it's a fair enough train of thought, that you give it one line, one sentence, and that's your prompt, and it can do all of these weird and wonderful things. But now, as we're learning more, those prompts can be, and should be, as complex as needed to get those complex results. I mean, potentially your prompts could be four, five, six paragraphs long.

Speaker 3:

But it requires, I think, a change in the way that we think, because certainly QA professionals will be consciously or unconsciously competent on what's a good experience, what's a bad experience, what's good for soft skills or compliance and so forth.

Speaker 3:

But the AI doesn't know that. So it requires us, in building those prompts, to really take a step back, be more consciously competent and really define what each and every outcome should be, and also to think about the what-ifs: if this happens, this should be what you do next. You really need to be specific about exactly what you want to measure, and that includes things like the guidelines that you would give. Currently, the guidelines you would give your QM evaluators are, traditionally, or at least from what I've seen, very sparse, because there's that expectation that the evaluators in your QA team know what good looks like; that's ultimately their job. But in order to utilize this technology in the best way, those guidelines need to be super descriptive, because pushing all of that information into your prompt is what's going to give you the best results.

Speaker 2:

Yeah, I agree. But I will say this: a big thing with this is, I think you just have to be close, right, when it comes to the score. And let me expound upon that. If I'm scoring a call, and Susie here is scoring a call, and Jimmy's scoring a call, and Janie's scoring a call, we're probably all going to come up close. But this could be a 92, you could be at an 89.5, you could be at a 91. We're going to be in that world. You know what I have found?

Speaker 2:

And again, little old Expivia, a non-software company, is creating prompts for our customers right now that we score with ChatGPT. Now, granted, we've had to do a lot of work on the prompts, and to your point, scaling this from a software standpoint is another matter. But ChatGPT and AI have democratized this for everybody, right? So again, we're figuring this out with four of us and having our QA guys calibrate, and sometimes we're on, sometimes we're off, but it's 84s to 82.5s, right. And when you're in QA, you don't care so much when it's that close. You care if it's a 90 versus a 54 and an auto-fail, right? I mean, that's a big deal. But it has made me think, and the main thought that I have here is: we figured this out for our clients, we're going to roll it out to our clients in a week, and we've only taken a couple of weeks to figure out the prompts for some of this stuff.

Speaker 2:

I wish CXone had a way that we could send our forms back into CXone, like some type of API. Everything we're doing now is kind of outside that world. But it makes me think: if you guys, or a CCaaS company with 400 or 1,000 programmers, are working on this stuff, this is going to happen really, really fast. I just can't see a world where someone's not going to figure out how to do these prompts really well in an easier form, with some type of wizard or widget that can almost use AI to build your form out to then score.
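The calibration Tom describes, comparing human and AI scores and only worrying about big gaps, is easy to sketch. The tolerance value and data shapes below are invented for illustration, not anything used at Expivia:

```python
# Sketch: flagging AI-vs-human scoring gaps during calibration.
# A small gap (84 vs 82.5) is fine; a large one (90 vs 54) suggests
# the prompt or the form needs fixing. Tolerance is an assumption.

def calibration_gaps(pairs: list[tuple[str, float, float]],
                     tolerance: float = 5.0):
    """pairs: (call_id, human_score, ai_score) for calibrated calls.
    Returns the calls where the disagreement exceeds the tolerance."""
    flagged = []
    for call_id, human, ai in pairs:
        delta = abs(human - ai)
        if delta > tolerance:
            flagged.append((call_id, human, ai, round(delta, 1)))
    return flagged

gaps = calibration_gaps([
    ("call-001", 84.0, 82.5),   # within tolerance, ignore
    ("call-002", 90.0, 54.0),   # pass vs auto-fail disagreement, flag
])
# gaps -> [("call-002", 90.0, 54.0, 36.0)]
```

Only the flagged calls then need a human listen, which keeps the calibration workload small even at full coverage.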

Speaker 3:

Well, that's the thing. I mean, I've seen plenty of prompts that help you with your prompts. You know, here's what you put in, and you can refine your prompt; tell it what you want and it will give you that prompt, and it works pretty well.

Speaker 3:

The thing for me, when it comes to using this sort of technology, is trust. Can you categorically trust the scores that are coming out from that AI? You mentioned that we all score things slightly differently, and yeah, in the grand scale of things it's not a huge difference, but perhaps it is to an agent, particularly if it meant the difference between a pass and a fail. In those cases I would worry that there would be disengagement from that agent. They don't believe in the scoring, because, well, listen, that was an ideal call, I was told by the team leader that the call was fine; it was just the AI that scored it that way, because that's just what the score produced.

Speaker 3:

So that's my worry, because I think we spoke before about a possible shift towards coaching, and I think that is a really important point, because ultimately the whole reason that we do the evaluations and the scoring is, yes, to measure the quality, but it's ultimately to use the insight to coach the performance and the agent, so that when we remeasure, we see that increased score. And I always worry that in these sorts of things, where it's purely AI driven, you've got tons and tons of data, absolutely, but if there are slight mistakes in that, the agents won't buy into it. So when you're trying to coach an agent on AI data that they don't believe in, I feel as if you'd have a barrier there.

Speaker 2:

I guess we're assuming, though, that humans don't make any mistakes, you know what I mean? Because we have seen, too, and to be fair it hasn't happened a lot, but we have calibrated calls where we had a little bit of difference, and when we had a group of people or the supervisor go look at it, the ChatGPT score was more pure to the scorecard than some of the nuanced human scoring. And maybe that's bad, right, because the agent may know, like, why I let that one go a little bit. But I would say this too: if that's what's happening, first of all, the form is probably a little bit wrong; second of all, we can add that into the prompt. Again, on a small scale, I don't know why every call center, and I don't know if I should say this or not, shouldn't maybe look at even just taking a transcript and going into the UI. Don't use the APIs, forget that for right now. Go into the UI and start playing with that, Chris. I just think more people are going to start doing that and realize, holy cow. Yeah, I think the speed of this thing now is at Mach 30.

Speaker 2:

Let me say this too, let me ask you, because there's something I do agree with you on that we need to figure out as an industry, and we talked about this on the last podcast. Let's say that we do automate QA, and that's great. Then what the heck do we do with all the data? We've talked that through before. Let's say I have 1,000 calls that come in, and normally I'd have a human being score, I don't know, let's just say 50 calls in that day, or whatever that number is. But now I have 900 calls that are scored using AI. What do we do with that? It almost means there needs to be another analytic, another AI, that comes on top of that to make it even useful. So is it even good to do? I think those are some questions as well.
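One hedged answer to "what do we do with 900 extra scored calls" is simple aggregation before any human looks at the results. A minimal sketch, with hypothetical data shapes; real QA platforms would do far more:

```python
# Sketch: collapsing 100% AI coverage into a per-agent summary that a
# supervisor can act on, instead of a thousand raw evaluations.
from collections import defaultdict
from statistics import mean

def summarize_scores(scored_calls):
    """scored_calls: iterable of (agent_id, score) pairs from the AI."""
    by_agent = defaultdict(list)
    for agent_id, score in scored_calls:
        by_agent[agent_id].append(score)
    return {agent: {"calls": len(scores),
                    "avg": round(mean(scores), 1),
                    "min": min(scores)}          # worst call, worth a listen
            for agent, scores in by_agent.items()}

summary = summarize_scores([("janie", 91), ("janie", 55), ("jimmy", 88)])
# summary["janie"] -> {"calls": 2, "avg": 73.0, "min": 55}
```

The point of the design is the one Chris makes next: the raw volume stays with a small team, and only the distilled view reaches the floor.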

Speaker 3:

For sure. I mean, look at the usual conversation coverage percentage. For a QA team it's probably around 3 percent of all of the conversations, and that always shocks me, where 96, 97 percent are left completely unchecked. So I think a definite advantage is being able to use AI to give that 100 percent coverage, because that technology does exist.

Speaker 3:

I think what we'll see, probably sooner rather than later, is that it'll become a requirement to have that higher level of conversation coverage.

Speaker 3:

So getting all of that data is going to be invaluable. But I think the key thing with having all of that data is not using it just to farm it out to agents and show: here are all of your evaluations, here are the reports, here's what we're going to talk about.

Speaker 3:

I think that data needs to be restricted to key members of the team, to be able to draw out the key insights in there, because if you give all of that information to the general population, it would be overwhelming. If you're scoring a thousand calls and you're getting a thousand results, a thousand insights and potentially thousands of bits of feedback, I would be overwhelmed; I would be too focused on that rather than dealing with the customer on the end of that telephone. I think that's what you always need to be mindful of. But then again, that's bringing the focus back to the human element. The human element is important, and humans are there to filter all of that stuff out so that those agents are getting human-filtered insights and feedback rather than it coming directly from the AI.

Speaker 2:

Yeah, and again, I know we disagreed on who should do the coaching, but maybe this is another reason why. Let's just say, and again, we're thinking this through. I have no answers, I have theories. I don't think anybody at this point has answers; it's moving so fast. That's the fact, right? No one knows really what to do. That's what's cool about it: you can be first. Maybe that's good, maybe that's bad, because there are so many people thinking this through. But I think that when we do have something scored, if we do go this AI route in our call center, at least if we do have a lower score, it goes to the supervisor. The supervisor will listen to the call and then coach the agent, and I think that can be a little bit of a buffer as well if we think something is way off.
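The buffer Tom proposes, low AI scores routed to a supervisor for a human listen before any coaching, could look something like this sketch. The 80-point cutoff is an invented example, not a number from the episode:

```python
# Sketch: AI-scored calls above the cutoff pass through; lower scores
# go to a human review queue before any feedback reaches the agent.

def route_for_review(scored_calls, cutoff: float = 80.0):
    """scored_calls: list of (call_id, score) pairs from the AI scorer.
    Returns (auto_ok, needs_human_review)."""
    auto_ok, needs_human = [], []
    for call_id, score in scored_calls:
        (needs_human if score < cutoff else auto_ok).append((call_id, score))
    return auto_ok, needs_human

ok, review = route_for_review([("c1", 92.0), ("c2", 61.5)])
# review -> [("c2", 61.5)]  only this call gets a supervisor listen
```

The cutoff itself becomes a tuning knob: set it too high and the humans drown in reviews, too low and the buffer stops catching AI mistakes.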

Speaker 2:

I think we are thinking through that there almost needs to be... you don't want to say the same QA staff is now going to calibrate the AI, but how do you QA the AI? These are things to think about. We probably wouldn't get rid of 15 QA people, and when I say get rid of, I mean move them to a coaching aspect, or move them onto the floor, have another role, or be a prompt person, right?

Speaker 3:

Well, they've even coined a phrase for it now, you know: prompt engineering.

Speaker 2:

And again, I think, to have a smaller team to then be kind of QA-ing the AI, which maybe defeats the purpose, maybe it doesn't. But again, you need to build things; it's the chicken or the egg. It's great that we have all of these calls scored, but now, to make use of that, we need that tool from an AI standpoint to utilize all of this data, right? Because what's important is it's not just the low scores; we're full analytics now, right? So we're looking at, hey, maybe these key phrases came in today, right? So the whole analytics platform that you had, with the bubbles and all that, goes away, and it's almost like this becomes the whole thing, because this is what customers are talking about.

Speaker 2:

So QA and analytics, which were always siloed before, are now starting to merge together. And what do we talk about? What is important? What do we talk to an agent about? Is it just the low stuff? Is it that they missed these closing signals? Or this is marketing data that came in from customers? All of this data comes in, and somehow that's the new analytics; that's what we're kind of talking about from an AI standpoint. So maybe you're right, maybe that kind of AI machine learning isn't ready for prime time, but just taking a certain specific amount of calls and scoring it, at least for now, I think is the first baby step that we're making towards this robust analytics-QA world that we're about to see.

Speaker 3:

In a year, maybe. Yeah, I mean, as I was saying, it is moving very, very fast, and I think a lot of people are concerned about that. I got an email in just earlier today, and, I think I've got the stat here, it was talking about a survey that Axios had done, and 82% of Americans think that we should slow down AI development. And I'm kind of on the fence about it, because I think at the moment there's certainly plenty for us to get to grips with, because it is baby steps, it's a journey, and I think if it goes too fast we'll become overwhelmed and there'll just be so much that we don't know what to do with.

Speaker 3:

And I think with that journey, where we can start to dip our toe into using artificial intelligence for QA, we can see the potential, we can refine it as we go along, we can see those benefits.

Speaker 3:

But in terms of how we make sure that that data is correct, one way I guess that could happen is with the 3% coverage that call centers typically get now with their QA teams: that 3% could be focused on calibrating the results from the AI. So that's another thing, I guess, that could potentially happen. But for me, the main element always has been the coaching. The QA element and gathering all the other stuff is a means to an end. It's about coaching the most important asset in a contact center, which is the agent. And I reckon getting the insights from the AI, yeah, that can fuel your coaching conversations, absolutely, but ultimately that conversation needs to be a human interaction in order to get that.

Speaker 2:

Don't you think that could help that? Because I agree with you: I don't want AI, we're not there, to AI-coach or even to send feedback to an agent. But you know the prompts that we have found, and the outputs that we're asking for, or that we have tested? Things like: all right, please score the call; give me the overall customer and agent sentiment; give me five things that the agent could work on to make the score higher; if you gave somebody under ninety percent, please go into detail on the things they didn't do. You know, just testing all this crazy stuff, and I think that helps, right? I don't want that going to the agent directly, but if I can get that, not on a piece of paper but on the desktop, right, and have my supervisor, or now the QA person who doesn't need to score calls, go coach with all that extra insight, that's the stuff that I think I can do right now.
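If the prompt asks for those outputs as structured data, the receiving side has to parse them defensively, since nothing guarantees the model honors the shape. A sketch assuming a hypothetical JSON contract, not an actual ChatGPT output format:

```python
# Sketch: defensively parsing a model's structured QA output
# (score, sentiment, improvement list). The key names are assumptions.
import json

def parse_qa_result(raw: str) -> dict:
    """Normalize the model's JSON reply, tolerating missing keys."""
    data = json.loads(raw)
    return {
        "score": float(data.get("score", 0)),
        "sentiment": data.get("sentiment", "unknown"),
        # Tom asks for five suggestions; cap it in case the model rambles.
        "improvements": list(data.get("improvements", []))[:5],
    }

result = parse_qa_result(
    '{"score": 87, "sentiment": "positive",'
    ' "improvements": ["use the customer name", "confirm next steps"]}'
)
```

A `json.JSONDecodeError` here would itself be a signal to route the call to a human rather than trust the evaluation.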

Speaker 2:

And shifting the burden from scoring a call to making the contact center better with humans, because I'm getting all this extra data that I wouldn't get before. That's what's exciting me, that's what can happen now, and we're going to see this scale exponentially moving forward. That's kind of why I think, by the end of this year, if you're getting into QA, you have to rethink what QA means. It's coaching now, it's maybe looking at prompting, but it's not scoring calls, because I'm pretty sure that's the first domino to drop here in this world.

Speaker 3:

The thing is, I think the scoring of calls is always, well, it's boring, you know? It's all right at the start of the week, when you're enthusiastic and motivated, but by midweek or the end of the week you are exhausted. You're listening to the calls, ticking the little boxes and so forth, so the quality of your scoring and your feedback goes way down, because at that point you just want to get it over and done with. So I think in that respect AI is absolutely going to be a game changer, because it's working at maximum effort every single time, and that's a really cool thing. I think you're right, it is going to change the way that QA works in that respect, for sure.

Speaker 3:

That's very, very exciting. And it is moving so fast, really, really fast, and with those concerns about it moving too fast, I think it's down to us to make sure that, in the way we use it, we don't lose sight of the overall goal. It's about getting insight, coaching performance, and going from there. I don't think we can get too carried away with it. It's easy to do, for sure, because the things it's able to do just from prompts are amazing, but I think we need to be grounded when it comes to QA. It will happen, it's on a trajectory, and we can enjoy the ride, but we just need to be mindful. I'm loving what's happening in such a short space of time as well; it's been amazing.

Speaker 3:

Yeah, I think we just need to be mindful, because certainly when I'm talking about AI, and I'm talking about SmartScore, I get super excited about it, because it's such a cool thing.

Speaker 3:

You know, seeing that score just magically appear, and all of that feedback and that insight, is excellent, and when I show people, it's really, really cool. But often the response is: listen, I enjoy seeing that, but show me your core product, because we still need to QA, and right now we're using spreadsheets. So that's a way off for them, and I think that's something that still surprises me as well: the number of people that still QA using spreadsheets. So there are opportunities for progressive organizations using this sort of tech, but there are still a lot of people out there who are very early on in the QA journey, which is absolutely fine.

Speaker 2:

Yeah. Chris, let's talk about EvaluAgent here for a second, and you guys' roadmap, or your plan for how you intend to tackle this. I know you guys have SmartScore and are working on that from the machine-learning and AI standpoint. I don't know how far you can speak for the company, but I'm sure you can elaborate just a little bit on some of the things that you guys are working on, and where you see this going, not as an end goal, because there's no end, but down the road. What are some of the things that you guys are actively working on to improve or change what everybody's doing?

Speaker 3:

Well, we've got SmartScore already, and that's brand new, based on those large language models, and what it's doing is scoring your conversations. It'll give reasons as to why it's scoring that way and give that insight as well, you know, improvement suggestions. The key thing there is that it's for the human evaluator to decide whether that's correct, and also the level of feedback that's then going to be passed on to the agent. What we're also using that tech for is to summarize the conversation: here are some bullet points as to what happened in that conversation, and that's not just text tickets, that's phone conversations as well.

Speaker 3:

And now the cool thing that we can also do with this tech is sentiment analysis based on the content of that conversation: here are the key points, here's where empathy was demonstrated, here's where there was customer frustration. Again, it's just giving you that actionable insight to find the conversations that matter. Additionally, where that conversation is being analyzed, the AI will be able to suggest topics, missed moments, we've called them. So if you are actively looking for a topic, that's great, it will look for that, but it will also give you suggestions where there's maybe something you've not thought to look for, because you just didn't know it was happening. So it's able to give those suggestions. That's something we're going to be looking at bringing in.

Speaker 2:

Have you guys... I mean, are you working on anything in that prompting world, or are you staying more, what I would say, maybe even advanced: looking at the machine learning, the large language models, and how you can develop tone and those types of things? What technologies are you guys playing around with or utilizing?

Speaker 3:

From that aspect as well, listen, there are so many large language models out there, and the most popular is, of course, ChatGPT. But what we found, and we always say this, is it's not about the technology, it's about the application of the technology. So we are using a number of large language models so that we're not relying on one, because one of the biggest worries, I guess, is: ChatGPT is amazing, but what if it goes down? Your whole AI at that point is down with it. So we are able to switch instantly to another large language model, and customers have also got the option to choose which one they want to send their data to and analyze their conversations with.
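The multi-model failover Chris describes can be sketched as a simple ordered-fallback wrapper. The provider callables below are stand-ins, not any vendor's real SDK:

```python
# Sketch: try LLM providers in order so QA scoring keeps working when
# the primary goes down. Real code would use actual vendor clients.

def score_with_fallback(transcript: str, providers: list) -> str:
    """providers: ordered (name, callable) pairs; a callable raises on
    outage. Returns the first successful result, or raises if all fail."""
    errors = []
    for name, call in providers:
        try:
            return call(transcript)
        except Exception as exc:  # outage, rate limit, timeout, etc.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stand-in providers for illustration only.
def flaky(_transcript):
    raise ConnectionError("provider down")

def backup(transcript):
    return f"scored {len(transcript)} chars"

result = score_with_fallback("hello agent",
                             [("primary", flaky), ("backup", backup)])
# result -> "scored 11 chars"
```

One subtlety the sketch glosses over: different models score differently, so a failover also implies recalibrating scores per provider, which is presumably part of why letting customers pin a model matters.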

Speaker 3:

But in terms of the prompts, we will provide guidance and education, but it's ultimately for, and this, I guess, has given rise to this new sort of industry, we ultimately guide and train our customers to be able to write the prompts for themselves, because, as you know from writing the prompts for your scorecards, there are a million different ways that things get measured and a million different things that customers want to measure.

Speaker 3:

We can never write prompts to cover all of that, so it's very much about educating the customer in order to get the best out of it.

Speaker 3:

But, like I mentioned earlier, where your prompt could maybe be refined, maybe just tweaked a little bit, we can use the AI to do that as well. Just by clicking a button, it can refine that prompt and give you a suggestion to make it a little bit better. So it is definitely a journey in that respect, where you constantly need to make sure that your outputs are accurate, because it may be, for 100 conversations, giving you the results that you want, but then there could be two, three, four that are not quite accurate, because it's a slightly different conversation. So I think it's about monitoring that consistently, to be able to tweak that prompt when you need to, and that's going to be a big focus, I reckon: making sure that the prompts are as accurate as possible. But it is going to be a journey, constantly testing it, which is a cool place to be. I think it's never going to be an end goal, as you say.
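That monitoring loop, "100 conversations right, two or three not quite accurate", amounts to checking the prompt against a human-labelled sample and flagging it for refinement when agreement drops. A minimal sketch, with a stub standing in for the real LLM call and a deliberately brittle toy rule so one mismatch shows up:

```python
# Sketch of the prompt-monitoring loop: score a sample of conversations
# with the current prompt, compare against human QA labels, and flag the
# prompt for refinement when accuracy drops below a threshold.

def llm_score(conversation, prompt):
    """Stub for an LLM scorecard check (a real system would call a model)."""
    # Toy rule: the prompt asks whether the agent greeted the customer,
    # but this "model" only recognises the literal word "hello".
    return "hello" in conversation.lower()

def prompt_accuracy(labelled_sample, prompt):
    """Fraction of conversations where the LLM agrees with the human label."""
    hits = sum(llm_score(conv, prompt) == label for conv, label in labelled_sample)
    return hits / len(labelled_sample)

sample = [
    ("Hello, thanks for calling!", True),
    ("Hi there, how can I help?", True),   # a greeting the toy rule misses
    ("What do you want?", False),
    ("Hello, account services.", True),
]
acc = prompt_accuracy(sample, "Did the agent greet the customer?")
if acc < 0.95:
    print(f"accuracy {acc:.0%}: prompt needs refinement")
```

The second conversation is the "slightly different conversation" from the discussion: the human marked a greeting, the prompt missed it, so accuracy lands at 75% and the prompt gets flagged for a tweak.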

Speaker 2:

Yeah, no, and I think, you know, I was just on a call before this, before the podcast, and we were having this discussion, and it's kind of a good lead-up to talking to you. The science is there. Everybody has the science, or, you know, AI and OpenAI and all these large language models have kind of democratized the science of this. The tools are there.

Speaker 2:

Now it's turned into an art, a creative thing, and I think, from what we have found developing those prompts, that's why I think it is very difficult, and you could be right, this could maybe take a little bit longer than we think, because I've talked to everybody, right, and you guys are in this, you guys might be a little bit ahead of where some of these guys are. But for everyone that wants to scale this thing, you have to have some type of regimented way of doing a form, right, so that it can pick some things up, because it's very difficult to willy-nilly a prompt, right, to just start to create some things, and I think that's

Speaker 2:

That's been the hard part. If you have one-through-ten questions, yes or no, you could do that tomorrow, but nobody has that. Now let me ask you this: do you see more importance in developing your SmartScore technology to make that, I don't want to say an industry standard, but something that people can really trust? Or do you see the bridge going to, we're going to try to recreate other people's forms? Where do you think you'd have the most benefit for customers? I think, yeah.

Speaker 3:

I think, for us, customers should always be kind of in control. If they're looking for a little bit of guidance in order to create scorecards, and it's using those sort of pre-built prompts to be able to produce those elements, I think it should always be, here's what I would suggest, but it's ultimately up to them to decide.

Speaker 3:

The whole tech is about assisting. The way we've termed it, I think Microsoft do the same, is that it's a co-pilot. It's there to just assist you in doing the work that you do. So I think for us it's always going to be in that regard: it's there to assist and advise, but it's always going to be the human component that makes the final decision, because if it goes wrong, if there's something that's not accurate, who are you going to blame? The AI? The AI doesn't know right from wrong. It's going to be the guy whose job it is. That's always going to be the key differentiator: there's always going to be a human component there to make the decision, because ultimately, it's on them.

Speaker 2:

Do you see a world, in your, Chris, crystal ball, not even speaking for EvaluAgent, let's just say speaking for you, do you see a world where full automation with AI does happen? I mean, I think it will happen. I think you and I maybe just disagree, I wouldn't even say on the speed of it, but on who can use it, which I think is probably the biggest differentiator. But when do you see that happening? When do you see the technology overtaking what a human can do, with what you guys are developing, what other people are developing?

Speaker 3:

I think there are two things I think about, because I think about this question a lot. For me, the technology is developing rapidly, and if it was left to just sort of go its own way, and there was no regulation and there were no restrictions and it was just, you know, let's do it, I think we'd be talking, gosh, certainly within the next five years we'd be seeing that sort of level of automation across the board, because every app that you open now, there's a "here, we've introduced AI". You know, you want to.

Speaker 2:

This thing is blowing my mind.

Speaker 3:

I'm going to try this, and you know what? It wasn't all that good. It's a fair enough suggestion, but I'll just write it myself, and we move on. That, I guess, keeps the human component in, you know, I still make the decision. But I think the biggest driver for it taking probably longer than that is going to be the human component, because I think we're right to be cautious about not running too fast with it.

Speaker 3:

You know, even OpenAI themselves are calling for regulation and things, and I think that is really important, because as it's coming out more into the mainstream, you know, it's great hearing about all of these cool things, but along with that, getting into the mainstream news, there's a lot of scaremongering, where it's like, AI has done this, and it's cost all of these jobs, and, you know, the end of the world with AI. It gets headlines and it gets viewers, I absolutely get that, but I think it's not accurate. It's absolutely not accurate. It's about, you know, understanding the tech and seeing what it can do, because ultimately, if we're not using it, we're not giving any input. And I think this is the start of the journey, where not just the sort of tech-minded people will be learning about this. I think everyone's going to begin on that journey, where everyone's starting to use it, because you can't get away from it. It is in every app that you open, and it will just become second nature. There'll be that expectation that there's some sort of AI-driven component in everything that you use, and I think that's part of the journey, because by the time it does jump to the next level, everyone will be used to using AI in its current sort of format.

Speaker 3:

You know, where you're seeing the difference between, you know, ChatGPT 3.5 and 4, that is huge. It's so cool when you see the difference: here's a prompt, I put it into 4, and its output is excellent, but then you put it into 3.5 and it's not quite the same. So even that jump is astronomical, but that's not available to everyone. It's only the people that are looking to go out and find that, that are willing to invest in and understand the tech, that are getting those benefits. But sooner rather than later, that will start to come out to the mainstream and people will really start to embrace it. And you know what? I think it's a good thing. There's no escaping it.

Speaker 2:

And there are a couple of drawbacks to, I will say, some of the points that you just made. First of all, there is a security aspect, right, of data on ChatGPT. The good thing that at least made me feel a little bit better as we were testing is, you know, utilizing the APIs, they at least don't train models off those, which, I'm not saying that's a hundred percent secure, but it makes you feel a little bit better. Then I think, you know, there is the cost, right. And, you know, voice for ChatGPT is coming, and I keep hearing it's months, not quarters, right? So, to get rid of transcripts and actually use the voice recording, which is good and bad, because there's a little bit of, I think, a length-of-time issue, it could be a little bit slower, but again, you're getting the actual thing that was said. And, you know, even looking past some of these problems: if you're using an Excel spreadsheet, we can talk about automating QA, but the bottom line is, today, August 10th, 2023, you need a transcript, and if you're using Excel, you don't have transcripts, you have just a recording, if that. So I think there are some drawbacks when thinking some of those things through, going from a full recording to transcript to outputs.

Speaker 2:

There's a lot to this. It's one thing to say that the technology is there, and I think the technology is there, and it kind of sells itself. But then when you peel back the layers, obviously the devil's in the details, right, to really make this thing work. But it is an exciting time. It is happening so fast. I can't wait to see what some of these, what you guys, come up with.

Speaker 2:

You know, these actual SaaS providers whose whole focus is this and QA, not just any of the general call center stuff, and what we're about to see from a use-case standpoint is going to be pretty exciting.

Speaker 3:

For sure. I mean, to come back to what you mentioned about the security aspect, that's right to consider, and that's one of the benefits of using, I guess, an organization like EvaluAgent. We've thought of all of that. You know, we'll create the transcripts of the conversations, we'll send all of that through, and the tech already exists to be able to identify sensitive components of your conversation that you might not want to be transferred to the large language model. So you get that option, where you can delete that, and I think it's important to have that peace of mind that your data is safe and secure. That's always going to be a fundamental consideration, because data is the greatest asset.
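The step Chris describes, identifying sensitive components of a transcript before it goes to the large language model, is often done with pattern-based redaction. A minimal sketch, assuming simple regex patterns (real PII detection is far more sophisticated, and these patterns are illustrative, not exhaustive):

```python
import re

# Sketch of a redaction pass: replace sensitive spans in a transcript
# with labelled placeholders before sending it to an LLM.
PATTERNS = {
    "CARD":  re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),      # 13-16 digit card numbers
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{4}\b"),
}

def redact(transcript):
    """Replace each sensitive match with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label} REDACTED]", transcript)
    return transcript

raw = "Sure, my card is 4111 1111 1111 1111 and my email is jo@example.com."
safe = redact(raw)
print(safe)
```

Only `safe`, with the card number and email stripped out, would then be sent on for analysis; the raw transcript never leaves the QA platform.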

Speaker 2:

You know, and again, I was going to wrap up, but you said something there too. Everybody knows that we utilize NICE CXone for our platform, and, you know, I've had conversations with them about data, right, because the new valuable item out there is data. And, you know, Chris Crosby, who I love, he posts a lot on the AI tech side of the contact center world. He had a post out the other day where he talked about, he goes, they're utilizing all of our data, and we technically own that, we pay for that data, from all the call center work and the transcripts and the recordings, and sometimes it's very hard to get to that data to play with some of the things that we want to do. And that's another thing, a discussion for a whole other time: the value of data, right, because of what you can do with it when you have a lot of it.

Speaker 2:

But when it's on this one topic, right, our call center, this program, this skill, the insights that you can get from an AI standpoint when that data is so clean, right, because it's just super clean on what we want, that's the next thing that everybody's got to think through, because people are going to want that. They're going to be able to do things like we're doing right now with the APIs of ChatGPT and OpenAI; they're going to want to start to build their own stuff with their data. That's going to make for some really interesting discussions on how that's all going to play out, too. Yeah, absolutely.

Speaker 3:

It's exciting, because I think this is just the start, and we're going to see it go leaps and bounds with the things that we're able to do, and I think safeguarding data is going to be paramount, absolutely.

Speaker 2:

I'm going to, I've got another question for you. I'm sorry.

Speaker 3:

This conversation is just. It's so many things.

Speaker 2:

I know. What do you think about, and again, I think this is a great question for you guys, because I know the scoring aspect is important, but you guys are about evaluating the agent, the coaching of the agent, helping facilitate that whole nuanced thing where we can make the contact center, or make the agent, better with technology. Do you guys see a point where, you know, the AI, and I know you're very people-focused, but the coaching aspect is another thing that people don't talk a lot about, right, when it comes to AI. I think that is a little bit more advanced than where we are now, and I totally agree with you on that. But have you guys had some discussions about some type of automation of the coaching aspect, or giving subtle hints? How do you guys view that? Or is that just something that you're not really looking at right now?

Speaker 3:

I think that's certainly something that we would consider probably a lot further down the line, and I think the reason being is, giving tips, you know, coaching tips, absolutely. But in terms of automating coaching, coming back to that human element, AI will never be able to truly show things like empathy, and I think, certainly for me, in my opinion, that is the most important aspect of coaching: to be able to show empathy to the person you're coaching, to be able to pick up on those triggers and respond to them and, you know, have a human conversation. I don't think you'll ever get that with a machine.

Speaker 2:

I'm with you on that. I mean, I think maybe there are some things, from an agent-assist standpoint, where maybe we could be prompting some agents. But from the overall coaching aspect, I think you're just asking for a revolt, right?

Speaker 2:

I mean, you know, it's cold, right? I think it's cold to just score a call and send them the score, which I don't even really like to do, and we don't really do that either, but like, here was your call, here was your score, fix this, do this, and here's even a snippet of the call. I don't know if that would resonate; they'd probably be like, go pound salt, right? There's a human element there, for sure.

Speaker 3:

Oh yeah, I mean, I remember I was thinking about this earlier, where, you know, your smart speakers, they were amazing when they came out, and they could do all these amazing things, but I do not speak to anybody like I speak to my smart speaker. You know the abuse that thing gets, and that's because there's no emotion in there. And even now, if we were having a conversation and you knew that I was, you know, a computer-generated avatar, the conversation would be completely different, because it would be one-sided in terms of, you know, that emotional connection, and that is never going to change.

Speaker 2:

Alright, Chris, I appreciate it, man. This has been so much fun. I mean, it's amazing.

Speaker 2:

Even, you know, we talked, I think, February-ish, maybe, I don't know if that sounds about right, before the Call and Contact Center Expo, and I mean, the change that has happened in six months. Like, we didn't touch on a lot of these things then; I wasn't even thinking about this stuff six months ago. So, you know, it's awesome to see what you guys have been doing, well ahead of the curve here, thinking through this AI piece in a really smart, intelligent way, to try to help, you know, our coaches and our contact center agents, but still trying to catch everything that we possibly can without losing that bit of human touch that makes things go a lot better. So take maybe the last couple of minutes: where can people find you, you on LinkedIn, and the company, and I know you can mention evaluagent.com and maybe some of the main things that you guys do. Go ahead.

Speaker 3:

Yeah, yeah, check us out.

Speaker 3:

So, evaluagent.com, you'll be able to find us there, and you'll be able to explore the AI for yourself. We've got plenty of product tours out there where you can play around with it and see what it does, and if you're looking to see more, then absolutely get in touch. You'll find me on LinkedIn as well, I'm very active on there, and I'm usually, you know, putting up videos with my thoughts on AI and coaching. Coaching is very close to my heart, so there's loads of stuff out there. I love to talk about it. So, yeah, if anyone's interested, then look me up.

Speaker 2:

I will tell you guys, anybody who's listening or will be listening: in the podcast that Chris and I did before, we talked specifically about coaching. Chris gave amazing coaching techniques, how to do coaching. I've chopped those clips up 42 times, I've posted them so many times, and I think it's really, really good stuff. So make sure you check that out, not just this one. Yeah, Chris, love you, brother. Good conversation, man. I think this is an exciting time. We've got to do this every once in a while and see what happens in the next six months. Absolutely, Tom.

Speaker 3:

It was excellent. Thanks a lot for having me.

Speaker 2:

All right, buddy, we'll talk. All right, take it easy.

AI's Impact on Call Center QA
AI in Quality Assurance
AI's Impact on QA and Analysis
Using AI for Conversation Analysis
AI and Data Impact on Future