Advice from a Call Center Geek!

10 ChatGPT Prompts to Fully Automate Your Contact Center QA

September 08, 2023 Season 1 Episode 203

Ready to unlock the secrets of automated QA in your contact center? This episode is brimming with insights about using ChatGPT to revolutionize your QA, promising to transform how you approach your work. We walk with you through the labyrinth of user accounts and databases, right up to crafting prompts around a client's QA form.

Ever wondered how to ask the most effective questions and generate impactful call evaluation outputs? Your search ends here. We navigate the crucial waters of quantifiable metrics, systems to rate calls using binary points and CSAT scores, and the power of leveraging a confidence score for each assessment. 

As we round up our discussion, we lift the veil on how AI can categorize call transcripts and the potential scalability of a SaaS model for larger companies. We touch upon our exciting, upcoming Discord, and share the success of our MVP. Delving into the importance of alpha and beta testing, we also explain the immense benefits of employing ChatGPT to automate QA. Whether you're seeking to enhance your QA system or merely curious, this episode promises a treasure trove of actionable insights. Buckle up and join us on this journey!

If you are looking for USA outsourced customer service or sales support, we here at Expivia would really like to help you support your customers.
Please check us out at expiviausa.com, or email us at info@expivia.net!



Follow Tom: @tlaird_expivia
Join our Facebook Call Center Community: www.facebook.com/callcentergeek
Connect on LinkedIn: https://www.linkedin.com/in/tlairdexpivia/
Follow on TikTok: https://www.tiktok.com/@callcenter_geek
Linkedin Group: https://www.linkedin.com/groups/9041993/
Watch us: Advice from a Call Center Geek Youtube Channel

Speaker 1:

What's up, everybody! Excited to be here, excited to do this, trying to foster this really cool community. So before I begin, thank you guys for signing up. We've had about 250 to 260 people sign up on the website, and we're trying to get the word out, trying to give you guys as much value as we can for just giving me your email. We're pretty far along in the process here at Auto QA with what we have done, at least on the meat and potatoes of the prompting aspect. There's so much that goes into building a product like this, from user accounts and database issues to the interface at the end, for people that are using Excel and spreadsheets, not looking for a full QA platform but something that is better than what they have. And honestly, that's kind of our motto: better than what you have. So what I want to do today is take you through everything we've learned so far on the prompting aspect of this. I'm going to show you a full prompt at the end, going through everything that we have done. And honestly, if you have ChatGPT, especially if you've paid the 20 bucks for ChatGPT-4 and have the desktop interface, you can start to play around with calibrating and automating your QA right after this. That's my goal. Our goal is to give you guys so much information that you don't need our product; or, if you'd say, hey, we want that souped-up version for a couple hundred bucks, then you want our product. But at least you know about it ahead of time, you know what we're doing and how it's working. Everything we have is an open book. So if you have any questions (Dustin, thank you for the hi and the early shout-out), please let me know as I go. We'll have questions at the end as well, but again, this is just me talking to you guys and having that conversation. So let's start. Let me get to the full deck, the full screen here, and get my head out of the way. Okay. So again, we want to give you prompts, totally free, so that you can do QA internally by yourself, and hopefully this will give you some aspects of it.

Speaker 1:

Now, the first two that I have are a little bit outside of that, but I thought they were really cool because of something we have done here. Number one: you can have ChatGPT actually just review your form, right? So let's say you have an Excel form with 15 questions that you're asking. Throw it in there and ask it to summarize what it is actually measuring. Because a lot of times we think our form is doing something, but maybe it's not.

Speaker 1:

One of the really cool things that I have found is, again, putting in a form from a client. We're trying to look at customer experience, but really the only thing this form is scoring is security aspects and making sure that we read disclosures, right? It misses the mark on what our form really wants to be. So again, a really cool use case for that. And then, secondarily, if you're just starting to build your form, or if you want to build a new one, ask ChatGPT. And again, when you guys see these in quotes, this is a sample prompt. If you're in here, if you're part of the community, I'm going to send you this deck afterwards, so you're going to have all these prompts. But this is another way of building out your form. Again, this is super basic, right?

Speaker 1:

But: "ChatGPT, taking into account our company's culture of [X] (and I probably should have added that in) and our primary KPIs, which are [Y], can you restructure our QA form to emphasize the values and performance metrics that we want to prioritize?" You can get very specific and ask for 10 or 15 specific questions to ask of a transcript. So this is outside of what we're talking about, but I think it gets your brain thinking about some of the cooler aspects of what ChatGPT can do: look at what you have now and make sure it matches the values, the culture, and the KPIs of what you want your QA to do. Especially that number one, the first slide. So many of us are looking at: did they read a disclosure properly? Did they make sure they asked three questions? All of this structural stuff that, I think, a lot of times doesn't speak enough to the culture of the actual organization or what the quality side of the customer experience really wants to do. Okay. So, getting those out of the way.

Speaker 1:

In the first couple of minutes here, we want to talk about how you can actually build prompts to automate QA in your contact center. Before I get into that, just understand (and I'm going to tell you who we use) that you have to have transcripts at this point. Voice is coming, where you'll be able to just upload a voice file and have ChatGPT listen to it, but right now I can't do that, and voice is extremely expensive if you want to try it. So we use a company that's crazy cheap.

Speaker 1:

They're called Deepgram. We landed on them after researching and researching and researching, because while we have transcripts for our clients at Expivia, being a BPO, I know a lot of you guys that have under 50 seats and are using Excel or spreadsheets don't have easy access to transcripts. If you want to do this on your own, you can go play with it right now. They're not giving me any money; I don't have an affiliate program with them. It's just the best one that we have found so far, one we can connect to through our APIs to pull very quickly. In two seconds you can have a transcript, and it's super accurate. The cost is like nothing. So again, Deepgram is who we're using; maybe write that down if you want to utilize it. A sketch of the API call follows.
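To make that concrete, here is a minimal sketch of pulling a transcript through Deepgram's documented REST endpoint, assuming a hosted recording URL. The API key is a placeholder, and the options shown are illustrative, not the exact settings used on the show.

import requests

DEEPGRAM_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

def transcribe(audio_url: str) -> str:
    """Send a hosted recording to Deepgram and return the transcript text."""
    response = requests.post(
        "https://api.deepgram.com/v1/listen",
        headers={
            "Authorization": f"Token {DEEPGRAM_API_KEY}",
            "Content-Type": "application/json",
        },
        params={"punctuate": "true"},  # readable output for QA review
        json={"url": audio_url},       # Deepgram also accepts raw audio bodies
    )
    response.raise_for_status()
    data = response.json()
    # The transcript is nested under results -> channels -> alternatives.
    return data["results"]["channels"][0]["alternatives"][0]["transcript"]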

Speaker 1:

But let's say we have the transcript now. The first thing we decided when we were doing this is, again, we have 20 to 30 different clients, and every single client has their own form. (Sorry, looks like I got the lines kind of close together there. Okay.) And you have your form. But based on this QA form, how can we reformat the criteria? Now, a lot of you guys don't want to reformat, but some of you have a lot of leeway with that, right? So how can you rephrase a criterion like "Was the agent empathetic and were they understanding?" How can I ask ChatGPT: take that phrase that I like in my QA form and make it into something you understand more? This is something that we have found to be pretty, pretty good.

Speaker 1:

So we take these not-exactly-yes-or-no questions and have ChatGPT tell us: hey, listen, this is really how you should format this question if you want to utilize me; it'll add more clarity and I can understand it a little bit more. You have to think of ChatGPT literally as a QA person who is sitting next to you, who you're training to score, and when you're training someone to score, you've got to talk them through all of these types of things. So again, I think that's a big piece. If you have questions that are not fully yes or no, that have some ambiguity to them, ask ChatGPT for the best way to phrase those questions for QA when it's looking at the transcript. Okay. A sketch of that meta-prompt is below.
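Here is one way the "rephrase my question" idea can look as a prompt; the wording is a paraphrase of the approach described, not the exact production prompt.

rephrase_prompt = """
You are an experienced contact center QA analyst.
I score calls with this question: "Was the agent empathetic and understanding?"
Rewrite it as one or more unambiguous questions that you could answer
consistently from a call transcript alone, and explain what evidence
in the transcript you would look for when answering each one.
"""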

Speaker 1:

So with that, let's say we have 15 questions. We like what the questions are, we don't want to change the format, we think it's in a pretty good place. And again, you don't have to get crazy with ChatGPT on that, but you want to give it as much information as you can as you're building this. It's a large prompt, but remember, you only have to do it once.

Speaker 1:

So again: "Did the associate handle the call opening correctly?" (I fumbled the wording there for a second.) ChatGPT doesn't know what a call opening is, right? So we have that question, and then underneath it in our prompt, for question one, here's the ask: when evaluating if the opening was done correctly, please consider that a proper opening involves a greeting, an agent introduction, and an acknowledgement of the customer's issue. And then: how did the agent do based on this? So, any criteria that you can give.

Speaker 1:

And think about it: if you're training a new QA person, you want to make sure they understand what "read the disclosure properly" means. What does that mean? The same for all of your questions, whether you have 15 of them or more: what does each entail? What are some of the things that need to be said by the agent? What are some of the things that need to be said by the customer at the end? Has first call resolution happened? First call resolution is measured by the customer saying "thank you, I'm all good," or, to our question, "you handled all my issues," those types of things. You can add those definitions in there. From a basic standpoint, take your 15 questions, write down what QA is listening for on each, and just tell ChatGPT what that is. When you do that, that's when you get really, really good outputs. A sketch of this structure follows.
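As a sketch, here is one way to encode "question plus what QA listens for" so it can be dropped into the prompt; the field names and criteria text are illustrative, not a fixed schema.

qa_questions = [
    {
        "question": "Did the associate handle the call opening correctly?",
        "criteria": "A proper opening involves a greeting, an agent "
                    "introduction, and an acknowledgement of the customer's issue.",
    },
    {
        "question": "Was first call resolution achieved?",
        "criteria": "FCR is indicated by the customer confirming all issues "
                    "were handled, e.g. 'thank you, I'm all good' or "
                    "'you handled all my issues.'",
    },
]

def render_question_block(items: list[dict]) -> str:
    """Turn the question list into the numbered block that goes into the prompt."""
    lines = []
    for i, item in enumerate(items, start=1):
        lines.append(f"Question {i}: {item['question']}")
        lines.append(f"When evaluating, consider: {item['criteria']}")
    return "\n".join(lines)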

Speaker 1:

So again, we want to start by getting our questions into a format that we think is pretty straightforward (we'll talk about that in a little bit): looking at each question, asking it, and then giving ChatGPT as much information as we can about what that question entails. And again, I know this sounds like a lot of work, but think about it: you only have to do it once. It's like building a form for the first time. And once you have this, you're able to put all of your call data in there, and then you can score these calls instantly.

Speaker 1:

Okay, let's go deeper down the rabbit hole. For effective questions, we talked about this a little: we have found that yes-or-no questions are the best. So yes is five points and no is zero points; we call it binary, it's on or it's off, five points or zero. That's the best way. You're looking to avoid compound questions, where you're asking three different questions in the same item. Make each its own line, its own question, where you can, or ask ChatGPT to split them for you to get the same type of result that you want. The sketch below shows the idea.
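Here is the binary idea in miniature; the five-point weight is the example from the talk, and the split-out questions are illustrative.

POINTS_PER_QUESTION = 5  # yes = 5 points, no = 0: on or off, nothing between

def score_answer(answer_is_yes: bool) -> int:
    """Binary scoring as described: full points for yes, zero for no."""
    return POINTS_PER_QUESTION if answer_is_yes else 0

# Instead of one compound question like
#   "Did the agent greet the caller, introduce themselves, and verify the account?"
# make each check its own binary line:
opening_checks = [
    "Did the agent greet the caller?",
    "Did the agent introduce themselves?",
    "Did the agent verify the account?",
]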

Speaker 1:

And then, this is really important: use quantifiable metrics when possible. We talked about what first call resolution is, and sentiment reporting and analytics do the same exact thing: they listen for the agent to say "have I handled all of your issues?" and the customer to say "yes, everything's good." That is most likely a first call resolution call. When you're talking about CSAT scores: did we find that the customer was fully satisfied on this call? You can do that by saying: if the customer says things like "everything was great, thank you for your time," those kinds of phrases can then correlate directly to what your CSAT is. So again, quantifiable metrics where you can. You can even look at average handle time: it will take the length of the transcript, roughly. We know that an average three-and-a-half-minute call is roughly this many lines, so you can actually say: if it goes over 3,000 characters, please deem this a call where the handle time was a little bit longer than average. All these things you can think through and add to give you really good outputs that correlate well with what you want to do. A sketch of that handle-time heuristic follows.
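The handle-time heuristic as a sketch; the 3,000-character threshold is the example from the talk, and the mapping from transcript length to handle time is an approximation, not a measurement.

def flag_long_handle_time(transcript: str, threshold_chars: int = 3000) -> bool:
    """Deem the call longer than average if the transcript runs past the threshold."""
    return len(transcript) > threshold_chars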

Speaker 1:

All right, this next piece is really important, and it's one of the ways we're thinking about doing our pricing. When we do this for you guys eventually, a prompt goes in and an output comes out (we'll talk about outputs in a second), and it might not be good, right? So you want to ask for a confidence score: "ChatGPT, when evaluating the call, can you provide a confidence level for each of your assessments, based on the clarity and information available in the transcript?" Sometimes the transcript is very difficult to read, sometimes it's not very high quality, sometimes the agent is quiet, sometimes the customer is quiet. When that happens, the confidence level goes down. So we want to make sure that we know that as well.

Speaker 1:

I don't know the number yet; we're working on it, whether it's one through 10 or a percentage from one to 100, things we're playing around with. But we only want to count, or honestly only charge for, things that have, say, a 90% or higher confidence rating from ChatGPT. I think that's a really important piece. And when you're doing your calibration sessions with your QA staff, you're only looking at things that are maybe 95 or 100 to make sure that they're right. That's a really important piece, something we've really found (I actually got the idea from a friend at NICE): only the high-quality stuff goes through, and if it says, hey, I only feel 20% confident on this, it gets taken to the side. A sketch of that routing follows. All right.
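A sketch of that routing, assuming the model reports confidence on a 0-100 scale as asked; the thresholds are the examples from the talk.

def route_evaluation(evaluation: dict) -> str:
    """Route a scored call on the model's self-reported confidence (0-100)."""
    confidence = evaluation["confidence"]
    if confidence >= 95:
        return "calibrate"  # high enough to use in QA calibration sessions
    if confidence >= 90:
        return "count"      # trusted: goes into scoring and reporting
    return "sideline"       # e.g. the 20%-confidence case: set aside for a human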

Speaker 1:

(Sorry about this slide, it's not very good. Let me see if I can make it bigger. No, okay.) We want to make sure that the outputs are very succinct, and we actually want to show the outputs as well. I apologize for this slide; it obviously didn't upload great.

Speaker 1:

The things we are really working on, that I think work great, are: number one, give me a call summary. Makes sense; ChatGPT has been doing that, and all of the CCaaS players have their own piece where they take the transcript, produce a summary of the notes, and put it back into a CRM. So the call summary is important. Two, score the call; all of you guys are scoring calls, so what is the actual score of the call? Three, what are the areas of improvement? Four, determine the overall sentiment of the call, for the customer and the agent. And then, what is the reasoning (you can see I at least have the heading there) behind the scores you gave, so that your team can feel comfortable with the why. These have been the four or five outputs that we have found to give us the most meat. And then, obviously, it's not in the actual output here, but we would ask for that confidence score as well; that's already taken care of. So that's really the one, two, three, four, five, six outputs that we're looking for; a sketch of these output instructions is below.
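Here are those output instructions gathered into one reusable prompt section; the wording paraphrases the talk rather than quoting the production prompt.

OUTPUT_INSTRUCTIONS = """
For this call, produce:
1. Call summary: a concise summary of the call's main points.
2. Score: the call's score, based on the scoring criteria above.
3. Areas of improvement: where the agent could have performed better.
4. Sentiment: the overall sentiment of both the customer and the agent.
5. Reasoning: why you gave each score, so the team can trust the why.
6. Confidence: how confident you are in this assessment, given the clarity
   and quality of the transcript.
"""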

Speaker 1:

Dustin has a question: "I know it probably cannot detect tone in the transcript as it does now. When will we be able to upload audio?" So, there are a couple of things with this. Tone has always been elusive in speech analytics; sentiment scores are based more on word choice. Now, voice: I've talked to the people at some of the main CCaaS platforms, as I've been picking their brains to see where they are with this stuff, and I have heard, quote unquote, that voice is "months away, not quarters." So, by the end of this year. And we're planning for voice to take over extremely quickly, where we can just upload an actual voice recording and have it go, which is a total game changer and takes a whole step out. It makes things a little bit easier for everyone.

Speaker 1:

So, tone: I would say it's about 60% there, between what is being said on the call, the logic of an analytics platform, and ChatGPT. I'll be honest, guys: I have taken our transcripts, put them into some analytics platforms (I'm not going to name which), and we're getting very close to the same outputs that a full analytics suite produces. So I feel pretty comfortable. And I understand that some word choices, like joking, mean the logic isn't all there, and I'm not going to say this is 100%. But neither is a human, and I keep saying this to everybody: if we had 10 people around this room and I played one call in the middle and we all scored it against a 15-to-20-question form, how many different scores are we going to have? I bet there's going to be some nuance. We're probably going to be relatively close, but some people are going to say, well, I thought that was sarcasm; no, I thought that was good; no, she kidded a little bit. So I think it still plays in that space. No one is saying we'd just take this as gold, any more than we would now.

Speaker 1:

Anything that comes out of here we would send to the contact center, with our supervisors listening, making sure the score is good, and then going over it with the agent. So there are some checks and balances there too. But again, just look at how quick and precise those outputs are: provide a concise summary of the call's main points; based on the scoring criteria, score the agent's performance on a scale of one to 10. We're also scoring, whether one through five or one through 10 (these are all examples), when we're asking those questions.

Speaker 1:

Or you can say: based on the criteria, take the score that was given and give a percentage. And I think you need checks. I mean, honestly, we audit our human QA. At least for us, we don't have our QA people go and actually coach the agent; we send it to the supervisor who deals with those agents, who knows where they are and understands their strengths and weaknesses a little bit more. They listen to the call, and sometimes they have to push back too. So I guess it's a cultural thing that maybe needs to shift a little bit. But it's getting really good, especially with the new ChatGPT Enterprise that we finally have access to and are playing with; it's a whole different level, and it's a speed deal too. But you guys can do all this with ChatGPT-4. And, coming back to sentiment, I think it's really important to ask it for the reasoning behind why it scored the way it did.

Speaker 1:

Again, we have found ChatGPT to be sometimes a little bit off. I will tell you, though: if our average score in our call center is, say, 89 (I'm just making that up), ChatGPT is normally either spot on or two to three percentage points higher or lower. And sometimes, when everybody looks at that call, we think ChatGPT was actually more correct, because we consider the reasoning it gave for scoring the way it did, and it's like, oh yeah, I missed that, that makes total sense. Or we say, no, that doesn't make any sense, ChatGPT, and that makes you go back and refine your prompt a little bit. But again, these are the best outputs that we have found. We've also asked it to be more consistent.

Speaker 1:

Now, you see where I said up top: make sure you show it the format. What we have found is you put in all of your instructions: hey ChatGPT, you're the head of QA, we want you to score these calls, here are our 10 questions and scores; we say this is what an opening is, this is what a greeting is, this is what a disclosure is; we're giving it the one-through-five scoring; we're giving it all of that. And then for the outputs, we tell it: provide a concise summary of the main points; based on the criteria, identify where the agent could have performed better; give sentiment scores.

Speaker 1:

We're doing that. And then at the bottom, we tell it (and this makes it more consistent; this is really important, and I don't think I put it on this slide, no, I didn't): here is the format that I would like it in. We took one output that was already done, that we liked, that had a paragraph for the summary, a paragraph for sentiment, a paragraph for where it could do better, and showed it that exact thing. So every single time you get a consistent output that looks the same exact way. Because, again, it's ChatGPT: sometimes it'll have nuances, because it's not doing the same thing every time, which is why it's great; but this is how we're getting a more consistent output and a consistent look. Why that is important: then we have our JSON outputs. Great.

Speaker 1:

So, if you want to get more technical with this, you can ask for the output to be given in JSON format, so that you can put it into a database. And again, this is the stuff that we will be doing and are working on, which may be a little bit beyond what you want to do, but I gave a quick example there of the call type, the call summary, the customer sentiment. We can put all of that into a database, then, and have people be able to search for specific calls that have specific things in them, so you're not just getting a text file for everything. For most QA, and for you guys, a text file is fine; but if we're going to build some type of cool platform, this needs to be searchable, with different types of reporting based on what is being said, what the sentiment scores are, what the actual score of the call is. So again, you can ask for the output in a format that lets you upload all of your outputs into a database and do some really cool things. I think that's pretty cool; a sketch follows.
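As a sketch, assuming the model honors the format request, here is the JSON ask plus a small loader into SQLite; the field names echo the examples mentioned (call type, call summary, customer sentiment), and the schema is illustrative.

import json
import sqlite3

JSON_INSTRUCTION = (
    "Return your evaluation as a single RFC 8259-compliant JSON object "
    "with exactly these keys: call_type, call_summary, customer_sentiment, "
    "score, confidence."
)

def store_evaluation(raw_model_output: str, db_path: str = "qa.db") -> None:
    """Parse the model's JSON reply and insert it for later search and reporting."""
    record = json.loads(raw_model_output)  # raises if the reply isn't valid JSON
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS evaluations ("
            "call_type TEXT, call_summary TEXT, customer_sentiment TEXT, "
            "score REAL, confidence REAL)"
        )
        conn.execute(
            "INSERT INTO evaluations VALUES (?, ?, ?, ?, ?)",
            (record["call_type"], record["call_summary"],
             record["customer_sentiment"], record["score"], record["confidence"]),
        )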

Speaker 1:

That's something we found out in practice, and it wasn't as easy as we thought it would be: we had to get really specific about what type of format we wanted, and specifically ask for the RFC 8259-compliant JSON format. That's one of the biggest things we've been able to do: take our outputs, throw them into a database, and let you report on them. And then this next part is more nuanced, and we struggled with it at the beginning. We have a client that has really one main skill (well, multiple skills, but one big skill that many kinds of calls come into; there are reasons for that which I'm not going to get into here). There could be a service call, a sales call, a gift card call, someone looking for a specific vendor. All of these calls come in. We started prompting around that and it was very difficult, but then we found that we can categorize them as long as we're being specific: "Given this call transcript, categorize this call." And because these are really short prompts, define what a service call is and add that in. A sales call is when somebody says "I'd like to buy" or "what is the price of," whatever those keywords or phrases are. Or a shipping issue: "hey, I have shipping issues." A sketch of this categorization prompt is below.
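A sketch of that categorization prompt, with each call type defined explicitly; the category definitions are shortened versions of the examples above.

CATEGORY_DEFINITIONS = """
Categorize this call as exactly one of:
- service call: the customer needs help with an existing order or account.
- sales call: the customer says things like "I'd like to buy" or asks
  "what is the price of ...".
- shipping issue: the customer reports a delivery or logistics problem.
- gift card: the call is about purchasing or redeeming a gift card.
Answer with the category name, then explain why you chose it.
"""

def categorization_prompt(transcript: str) -> str:
    """Attach the transcript to the category definitions for a one-shot ask."""
    return f"Given this call transcript:\n{transcript}\n\n{CATEGORY_DEFINITIONS}"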

Speaker 1:

Logistics issues, those kinds of things: everything that you would put into an analytics platform, put in what that call would be, and then this can be another output, the call type. Then we'll be able to search for it: I just want to look at all the sales calls, all the shipping calls, all the service calls. And then, again, always ask it why. That's what we're finding: if you want to have confidence in the answers you're getting, ask it why, and make sure it's telling you how it did its work. That's the only way our QA staff and our agents are learning how to trust it, because we ask for the reasoning and they're like, oh my gosh, that makes sense; or, you know what, that wasn't good, and I'm like, okay, let's go refine that. When you ask it how, that's how you get your prompt better and better and better. So, just to show you guys this, let me share my screen. (Whoops, sorry.) Hopefully you can see this screen. I took what I just said and put together a full prompt. Again: opening evaluation, right?

Speaker 1:

"Consider that this involves a greeting, an agent introduction, and an acknowledgement of the customer's issue; can you assess how the agent performed in these areas?" Then give it the questions. So we have a greeting: "Was there a clear greeting?" You want yes or no; yes is five points, no is zero points. For a score of five, this is what it should be: we said, "Hello, thank you for calling [company name]. How can I assist you today?" or whatever your verbiage is. Same thing for the introduction and the acknowledgement: we're giving it what the agent should be saying and what correlates to a five.

Speaker 1:

Number three: asking for a confidence level. Number four: getting very specific with the outputs we want. We want the call summary; we want the score (ask for a percentage instead of a one-through-10, whatever you want to do there); the areas of improvement this agent could have made; the sentiment scores; and why did you score it that way. Ask it, if you need it in a specific format, to give it to you in your format. We can ask for that call type we just talked about. And then, feedback and refinement can go both ways: I like to do this where we score a call with a human being, and when we have differences, we put in a summary of what the human did and see if ChatGPT then says it agrees or disagrees. Again, I'll have this for you guys as well; the assembled prompt below shows the whole shape.
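Pulling the pieces together, here is a sketch of the full evaluation prompt in the shape walked through above; every section is illustrative wording, so swap in your own form, verbiage, and weights.

FULL_PROMPT_TEMPLATE = """
You are the head of QA for our contact center. Score the call transcript
below against these questions.

Question 1: Was there a clear greeting? Yes = 5 points, No = 0.
For a score of 5, the agent should say something like:
"Hello, thank you for calling [company name]. How can I assist you today?"

Question 2: Did the agent introduce themselves? Yes = 5 points, No = 0.
Question 3: Did the agent acknowledge the customer's issue? Yes = 5, No = 0.

Provide a confidence level (0-100) for your assessment, based on the
clarity and quality of the transcript.

Outputs, in this exact format:
1. Call summary
2. Score (as a percentage)
3. Areas of improvement
4. Sentiment (customer and agent)
5. Reasoning behind each score
6. Call type

Transcript:
{transcript}
"""

def build_prompt(transcript: str) -> str:
    """Drop a transcript (from Deepgram or elsewhere) into the evaluation template."""
    return FULL_PROMPT_TEMPLATE.format(transcript=transcript)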

Speaker 1:

Hopefully you guys could see that; this is my first time using the webinar thing from StreamYard. But I want you to be able to do this without us. I want you to be able to trust the outputs before we even have a product. I want you to play around with some of the specific things we have found to work really, really well, then add to them, and then we can have those discussions. I think you can be the biggest help for us, and hopefully we can be the biggest help for you. We're starting our full Discord next week. It's built; I just want to make sure it's secure and has all the bells and whistles we want, so we can start to have these conversations. But I think it's pretty cool.

Speaker 1:

We've learned a lot over the past couple of months: how to prompt the best ways, how to get the best ChatGPT outputs, and how to get the best outputs for what we need from a database standpoint. And on building confidence with our agents and our QA staff when this thing goes live, always asking it to prove itself, why is it doing what it's doing, is almost one of the biggest deals. From an integration standpoint, Paul is asking if we're going to integrate with HubSpot. We're at alpha/beta testing now; the MVP is proven, and if you didn't see the video, I'll send that as well. We did a full video of doing a full prompt: uploading a recording, getting the transcript, going out through the APIs, running the prompt, and coming back with the outputs. We have an MVP, and we're now going into alpha/beta testing.

Speaker 1:

Again, everybody who has signed up for the community: we're going to send an email out with a Google Doc to find out who wants to help us with the first part of this. For our MVP, for our first go at this, I'm not going to have any integrations with anything. It's going to be a separate thing, at least in my head.

Speaker 1:

If you have NICE CXone, if you have Genesys, if you have Five9, you have your own QA platform, and you're probably not looking at this. That's why our sweet spot, and who we want to help the most, is those under-50-seat shops using Excel, using spreadsheets, that maybe don't have anything really cool, so we can at least give them a platform where they can start to look at reporting, have good outputs, pull things up, and coach. Again, I'm not competing with enterprise anything. I think everybody's working on this, but nobody's looking at the little guys. So maybe down the road, once we get this out and everybody feels comfortable with what we're doing, then the integrations can come.

Speaker 1:

I would love to have APIs that go back into Genesys, back into CXone, back into Five9, so we could take our outputs and put them back into their platforms, because they have all those other cool bells and whistles: the reporting, being able to pull up and listen to the recording, sentiment analysis alongside it. But at this point we're trying to play with, say, the little guys, everybody who's been ignored; all the smaller contact centers have been ignored in this AI deal. So I'd like to at least help there, and I think we can give them a really cool product, so you can see everything you could do on your own. That's everything from our MVP that we're trying to incorporate, so that all you have to do is literally upload, and everything is done: the prompts are built and you just get your outputs in a GUI, an interface. Yeah, and again, Dustin, I think that's where people are going to have an issue with this too, because you don't want to keep two sets of books, right? So if you have QA built into the platform you already have, yeah, maybe down the road, in a year (I don't want to say two years), we'll have full integrations.

Speaker 1:

But that's why we've really focused on the smaller shops (I keep saying it), especially those using Excel and spreadsheets, because then you probably don't have a QA platform; if you're using Excel, you may have something you can listen to calls with, but that's it. And we're doing this at a SaaS price. I'm not doing per-seat anything. So whether it's $299 or $799 a month, based on how many of these you want to do; and we could have an enterprise deal where you do 100% of your calls, just upload them, or have APIs that connect to your files so that when a call comes in, it just goes and the scoring happens. We could have pricing for that. But at this point I'm just looking at a couple hundred bucks a month to do a certain amount, to make you feel comfortable with the amount of QA that you're doing, probably a 10x, for a very small amount of money. And then it changes that QA job you guys have: instead of scoring, maybe they start to be more of a calibrator and go coach. They can have a bigger impact in your contact center than just scoring calls, which is something we're finding.

Speaker 1:

When done right, when done properly, I think we can knock this out of the park. So we're working on it, every single day. I'm super excited about this. This is not anything that we planned on getting into, but when we saw the power of it, when we saw what we could do for our clients here at Expivia (oh, sorry, everything's backwards on my screen), we said: why can't we do this for everybody else? Why can't we start to build this out for those smaller guys that all the enterprise players are leaving alone, that they don't want anything to do with?

Speaker 1:

So again, if you guys just keep getting the word out on Auto QA and get people signed up for our pre-launch community. Again, the Discord is coming next week, where I will be there all the time for any question you have about any call center anything. Hopefully it's more than just me (it's going to be hard to get that thing started), but once we do, I want it to be the go-to place for QA, for operations, for workforce management, always with really smart people in there who can answer questions and be part of the deal. So we're trying to build a community around this as well, to see if we can help as much as possible. That's what I've got for you guys. Any other questions?

Speaker 1:

And, like I said, I'll shoot the deck in an email to everybody who's signed up on the community, and I'll include a screenshot of that final prompt to hopefully get you thinking. I'd love to see if any of you guys post anything. If you play with anything, if you do anything, please tag me on LinkedIn if you show anything, or just DM me: "hey Tom, this worked; this didn't work." I'd love to know. Dustin, I will have the Discord Monday or Tuesday. It will be emailed, again, to everybody that's signed up.

Speaker 1:

I'm kind of going back and forth on whether to limit it. I want the most people in there, so I'll probably just open it up to everybody, but maybe to you guys first, so that we can try to get as many people in. Yeah, Paul, I hear you. I know: the scorecard, we can for sure manually update or automate that thing. And again, thinking that process through, I just think it could be really cool.

Speaker 1:

And again, some of the big guys are trying to do this; they're trying to make it so you don't use your own form, you use the one they're using. And again, I know I talked early on about making the questions you have more ChatGPT-ish. I don't think you have to do that. I think it's better if you do, but as long as you're getting the same outputs, and it doesn't change that question a ton, I think that's good. My goal is to have you be able to use the exact form you've used for whatever, 10 years, that you feel comfortable with, where you know a 90 is a 90, an 85 is an 85, and an auto-fail is an auto-fail. That's one thing we even talked about: we can have auto-fails in this too. If the rep does not ask for three pieces of information when a customer calls, it's an automatic auto-fail. That could be in there.

Speaker 1:

And then, guys, we'll be asking for some alpha/beta testers too, as we talked about. So hey, thank you guys for joining. I hope that was helpful. I hope that added some value. I hope you play with this.

Speaker 1:

I'll send the deck out to everybody, but I appreciate you guys being kind of the ringleaders here of the early stage. Israel says: "Thank you, great job. This will help a lot of contact centers below the 30-member mark." I agree, and appreciate that. And then again, as long as you're signed up on the Auto QA site, I'll shoot that stuff out. I'd like to get five or six people to alpha test, maybe a healthcare, maybe a retail, maybe a hardcore customer support shop, just different nuances, to see if we struggle with any of this stuff with anybody else. But again, thank you guys so very, very much. If you can give a shout-out to this on LinkedIn, I think that will go a long way as well. Like I said, I'll shoot this deck to everybody, and I hope you guys have a great day. Any questions that you don't want to ask here, just DM me on LinkedIn; more than happy. Thanks, guys.

Building Prompts for Automated QA
Effective Questions and Call Evaluation Outputs
Using AI to Categorize Call Transcripts
QA Platform for Contact Centers
Alpha Beta Testers and Appreciation