Live Chat with Jen Weaver

Conquer QA Overwhelm with Stacy’s Fearless Ten-Ticket Method | Episode 20

Jen Weaver Season 1 Episode 20

This podcast is sponsored by Supportman, which connects Intercom to Slack and uses AI to give agents feedback and surface problems in real time.

“Consistent quality review, I think, is better than no quality review.”

 That’s the principle that guided Stacy Justino, Product Support Manager at PetDesk, when she launched a brand-new QA program for veterinary support in just weeks. Drawing from her experience at companies like Wistia and Loom, Stacy created a lightweight system her team actually enjoys and that she can sustain over time.

In this episode, we cover:

  1. Why ten interactions and two reviews each month provide just the right amount of feedback without overwhelming the team.
  2. How to select tickets that are random, recent, and representative so reviews reflect real work.
  3. The four pillars of Stacy’s rubric—accuracy, completeness, customer excellence, and empathy—and why they matter.
  4. How private coaching helps agents grow while public kudos reinforce team culture.
  5. When and how to scale QA up or down depending on changes in performance, products, or processes.

If QA has ever felt overwhelming, Stacy’s fearless ten-ticket method shows how to keep things simple, fair, and effective.

For more resources related to today’s episode:

📩 Get weekly tactical CX and Support Ops tips → https://live-chat-with-jen.beehiiv.com/

▶ Keep listening → https://www.buzzsprout.com/2433498

💼 Connect with your host, Jen Weaver, on LinkedIn

🤝 Connect with Stacy Justino on LinkedIn

🔧 Learn more about our sponsor Supportman → https://supportman.io

Chapters:
0:00 – Intro: Consistent quality review in support  
2:27 – Meet Stacy Justino: Product Support Manager at PetDesk  
5:05 – A week in the life of a support leader  
8:41 – Ten interactions and two reviews each month  
13:41 – Random, recent, and representative ticket picks  
19:00 – Building a concise rubric to define quality  
22:11 – Private coaching paired with public kudos  
24:12 – QA only what you need to review  
30:36 – Key takeaways for support leaders

Speaker 1:

Consistent quality review, I think, is better than no quality review. Since we are a small group of people who are carving out time to do this consistently, we decided to go with a cadence of 10 support interactions a month.

Speaker 2:

Hey friends, welcome back to Live Chat with Jen Weaver. It's not often that I get to do a podcast episode with somebody who I admire as a professional in our field, but who also has been a great friend and a mentor to me, so I'm super excited to talk today with Stacy Justino, who's the product support manager at PetDesk. Stacy arrived last fall and spun up a brand new quality program for their veterinary support in just a few weeks. Now you know we're all about QA here at Supportman. That's what we do, so I'm also super excited to unpack her topic: her 10-ticket cadence, the four-point rubric her team actually uses, and the simple Google Sheets trick that keeps her QA reviews fair and random. Stacy spun up this program to be lightweight so that she could actually keep it going over time with minimal overhead.

Speaker 2:

I think it will be really helpful for support leaders who maybe don't have a ton of resources for building a QA program. Let's be real, that's probably most of us. Before we get started, though, our QA tool, Supportman, is what makes this podcast possible, so if you're listening to this podcast, head over to the YouTube link in the show notes to get a glimpse. Supportman sends real-time QA from Intercom to Slack with daily threads, weekly charts, and done-for-you AI-powered conversation evaluations. It makes it so much easier to QA Intercom conversations right where your team is already spending their day, in Slack. All right, on to today's episode.

Speaker 1:

So, for a week at work, starting with the things that happen multiple times a day: checking on the chat queue, checking our ticket queues. Is there any ticket that's been sitting there at the top for a while? Let's make sure that somebody gets on that. Oh, we have six people waiting in chat, with wait times of about three minutes. Let's post in Slack to say, hey, we need anybody who's not on break, at lunch, or in a meeting to grab a chat. So that's something that's happening multiple times

Speaker 1:

throughout the day. At the beginning of the week, there's some weekly reporting that needs to get done, so I do that on Monday. In terms of other stuff, there are projects to be done. We call our quarterly projects, or OKRs, rocks. So making sure that I have time for those; I try to book at least a few hours in my calendar every week to make sure I can focus on those things.

Speaker 2:

And where do you keep that? What is that, do you do that on paper or in a notebook? Nice, same girl.

Speaker 1:

And then I have another colored pen that I use to check off the boxes.

Speaker 2:

Of course, yeah, colored pens are where it's at. So you're talking about planning out your week. I'm going to try that out, that simple little habit of just writing down what needs to be done today and then this week, with due dates, like for the weekly reporting that needs to get done.

Speaker 1:

And then I have 30-minute one-on-ones with each of my direct reports each week, so I have those sprinkled throughout. I try to put them in blocks of two or three, so not too long of blocks, but so that I'm in that mindset: okay, this is my one-on-one time with the people who report to me. So I really aim to be present and not be distracted during those times. And then, of course, we have our team meetings on Tuesdays and Thursdays. I also have a weekly meeting with our senior product support specialist and our product solutions advisor on Tuesdays, as well as a Monday and Wednesday meeting with the support leadership team. So that's with our boss, the senior director of global support, and the support managers of all of the

Speaker 2:

PetDesk support teams? And the other support teams serve other parts of the business?

Speaker 1:

Other tools, yes. We collaborate a bunch because of the way the products interact with each other. We now have a weekly PetDesk Communications support and customer success leadership meeting, which has been really great.

Speaker 2:

Today, we're doing something a little bit different. We're going to take a deep dive into your QA process for your support team. Will you just kind of get me started on where you work and what the context for the conversation is?

Speaker 1:

Yeah, so I am a product support manager at PetDesk. PetDesk is a company that has multiple products, but I specifically support our PetDesk Communications product, which is basically kind of like a CRM for veterinary providers that integrates with their practice management system. And we also have an accompanying app for pet parents. So a veterinary provider has clients that have booked appointments, and then, if the practice has a PetDesk Communications subscription, we can handle appointment reminders and health service reminders, like, oh, Pixie is overdue for a rabies vaccine, and we can send those kinds of reminders. And then the pet parent can see all their appointments in the app and see a lot of other information. If the clinic has it set up, they can earn loyalty points or reorder prescriptions for their pet. So that's kind of the scope of what my team does. We hadn't had a quality program before I got here, and I started at PetDesk at the end of October 2024, and so we spun one up.

Speaker 2:

So that's exciting. But putting that aside, the quality program that you created, can you tell me a little bit more about how you got started? It must have been that you saw the absence of a quality program and you thought, this is an initiative I can run. Did I get that right?

Speaker 1:

Yes.

Speaker 2:

Yes.

Speaker 1:

So we already had some loose quality standards. One of our senior product support specialists, David (those are the folks who do training and onboarding of new people), would always go over what quality means when it comes to PetDesk support and how you do that. So we already had a basis for that, and I was lucky to come into an organization where quality was already emphasized. It wasn't straight up like, answer as many tickets as you can. So that foundation was already there, and I wasn't starting from a point of, oh, our quality needs tons and tons of work, we need to put this in place because where we're at versus where we need to be is a pretty big gap.

Speaker 1:

So that was a good starting place, and I think all of the specialists on the team were eager for more regular feedback, right? Because basically the feedback mechanisms, up until we launched the quality program, were, oh, a ticket got escalated by a CSM, or a negative CSAT came in, or the cases where specialists on the team, in their one-on-ones with their manager, would say, hey, I think I could have done better on this, can we talk about this ticket? So a little haphazard, not consistent, and very reactive.

Speaker 2:

Okay, cool, yeah, and so you saw a big opportunity to systematize that. Where did you start?

Speaker 1:

Well, I drew on a lot of my previous experiences. So when I was at Big Fish Games, I ran the quality team within the customer support team. So you have some experience with that.

Speaker 1:

Yes, yes, and also at Wistia, I spun up a quality program there. So I was able to take a lot of what I'd found success with in the past, and we decided to start with the quality guidelines that were already in place for the team, right? What did we cover in training, what did we have in our one Guru card around quality, rather than totally rework it. But then I incorporated some of the things that I've found pretty important and foundational in a quality rubric. And then, the folks doing the reviews are the two managers and three senior product support specialists, who, spoiler alert, all have a lot of responsibilities already on our plates.

Speaker 1:

So I really wanted to focus on something we could do and deliver on consistently. I know a lot of organizations, especially ones with a dedicated quality team, or that are using software that enables you to do things a little more easily, tend to review a percentage of total interactions. Since we are a small group of people who are carving out time to do this consistently, we decided to go with a cadence of 10 support interactions a month, so two QA reviews each month, five tickets in the first review and five tickets in the second review, so that people get regular feedback but it's not too cumbersome for the folks doing the reviews.

Speaker 2:

Yeah, that makes total sense, because you're trying to initiate this program and get buy-in, and you don't want people to feel overwhelmed or feel like it's a big addition to their workflow. But that brings me to a burning question a lot of people have about QA: how do you choose which conversations? Is it the ones with negative CSAT, or is it random?

Speaker 1:

It should be random, but with some parameters, to make sure that you are selecting tickets that are representative of the work that they do, right? So in our case we have live channels: we have folks who do phone support and tickets, and folks who do live chat support and tickets, and the phone volume and the chat volume are a greater percentage than the ticket volume they handle. So for folks who are working live channels, over the course of the month, for the first QA we will pick three chats or phone calls, depending on which channel they are on, and then two email tickets. So that's kind of how we make sure that it's representative of their work over the course of the month. And then, in terms of other things we look at:

Speaker 1:

You know, recency is pretty important, right? I always want the person whose ticket is being reviewed to be able to remember that ticket. If you pick a ticket they handled at the beginning of the month and this is their second QA review, then it's not as impactful, right? So we're using Zendesk, and I've created a report in Zendesk. Unfortunately, I have to create two reports, one for chats and one for tickets, because of how the data sets work. But we look at tickets that were solved within the past seven days and exclude some ticket types, like our not-for-support ticket type, where it's meant for another team or we're just escalating to the CSM. I haven't gone too far into adding more filters, but that is how we select them. And then I make sure that the ticket created date is after the day they were emailed their feedback, so that we're reviewing tickets that they interacted with after their last round of feedback.
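
For anyone who wants to script those selection rules rather than pull them from Zendesk reports, the filtering might look roughly like this sketch in Python. The field names and ticket-type values are placeholder assumptions, not PetDesk's actual Zendesk setup; the seven-day window, the post-feedback date check, and the three-live-plus-two-email split follow what Stacy describes.

```python
import random
from datetime import date, timedelta

# Placeholder ticket-type values for the exclusions Stacy mentions (assumed names).
EXCLUDED_TYPES = {"not_for_support", "csm_escalation"}

def eligible(ticket: dict, last_feedback_date: date, today: date) -> bool:
    """Recent, not an excluded type, and created after the agent's last feedback email."""
    return (
        ticket["type"] not in EXCLUDED_TYPES
        and (today - ticket["solved_on"]) <= timedelta(days=7)   # solved in the past week
        and ticket["created_on"] > last_feedback_date            # after the last round of feedback
    )

def pick_for_review(tickets: list[dict], last_feedback_date: date, today: date) -> list[dict]:
    """Pick three live interactions (chat or phone) and two email tickets at random."""
    pool = [t for t in tickets if eligible(t, last_feedback_date, today)]
    live = [t for t in pool if t["channel"] in ("chat", "phone")]
    email = [t for t in pool if t["channel"] == "email"]
    return (random.sample(live, min(3, len(live)))
            + random.sample(email, min(2, len(email))))
```

Capping each sample at whatever is actually available keeps a slow week from breaking the picks, which fits the spirit of only reviewing recent, representative work.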

Speaker 2:

Yeah, so it's not reaching back into the past; you have these sets of time that you're QAing. That's really interesting, and the filters you're using are essentially that it needs to be a recent conversation and it needs to be representative of wherever they're working, so email and chat, or email and phone. That makes total sense. And so, just to get a handle on what you're working with, what's the typical volume of tickets or total conversations that your team is handling in an average month?

Speaker 1:

A single support specialist on our team is probably doing between 350 and 400. So it's a pretty small percentage that we're looking at over the course of the month, but consistent quality review, I think, is better than no quality review.

Speaker 2:

Yeah, absolutely. And it makes sense to me that you're starting this program without a dedicated QA team, and so you're being very mindful of the workload. You said it's team leads and managers who are doing the QA?

Speaker 1:

Senior product support specialists; they're kind of like a tier-two type role.

Speaker 2:

That makes sense. They've been there a while, they know what a good ticket looks like. Exactly. So, as far as the people who are doing QA, it's the senior specialists. What about who receives QA? Is it everyone, or is it new people?

Speaker 1:

All specialists. We have level one, level two, and then we have our senior product support specialists, so all level one and level two folks get QA. The way we've mapped it out is you don't get QA during your training and onboarding. And then we recently decided that for that first month post-onboarding, the QA will be done by the senior product support specialists, because those are the folks they've been interacting with the most, and they'll also deliver that feedback. And then after that first month, once they're fully trained, it depends on how we've assigned who's going to QA whom for the month, but the manager will be the one who has that feedback session going forward, because then they'll already have a couple of one-on-ones under their belt with their manager and have built some rapport before the manager takes over that QA feedback.

Speaker 2:

Yeah, so I'm getting a picture of a really gentle process for everyone, right? It's gentle for the folks who have onboarded, it eases them into it, and then it allows them some time with their manager to gain rapport and get to know them before they start doing QA with their manager. Yeah, that makes total sense. And is it the senior specialist who does training? Like, are they doing the onboarding of

Speaker 1:

The new team members? Okay, so it's an extension of training. Exactly, exactly. That first month, getting folded into the QA process is an extension of training.

Speaker 2:

If I'm a customer support leader who's looking ahead to a team that doesn't have a quality program, what would you say is the biggest mistake I'm likely to make that you could maybe help me avoid?

Speaker 1:

Well, I think the first thing would be focusing on negative CSATs, because I think that's what people think they should do, but that's not really representative of probably the majority of their work. I think it's still important to review those, and I was actually just having a conversation earlier today with someone about this. If you want to review those against your quality rubric, that's perfectly good and okay, but if their quality score is going to be part of their performance guidelines and performance expectations, it is not fair and equitable to have your ticket selection criteria be all negative CSATs plus a handful of random ones, because that's going to tank their score and is, like I said, not representative. So to me, one of the keys to success is how you select the tickets.

Speaker 1:

Google Sheets has a really cool feature where you can randomize a range. You just select all of the rows that you want to do this to and right-click, and at the very bottom there are additional options, I think it's something like that, and it says randomize range. So I just filter by the one specialist, randomize the range, and then click through the tickets top down, because they're in a random order, and I do look into them. I figured that out like two months ago when I was first doing this. I was like, there's got to be a way to do this. I was trying to use the RAND function, but that's annoying because it refreshes, so you have to use the RAND function, copy and paste the values, and then sort. So this was much easier to do.

Speaker 2:

That sounds great. And you copy it into a new sheet so that you're working with separate data?

Speaker 1:

Yes, yeah. And what I also do is click through those tickets, because, due to certain situations with our software, we get a lot of tickets from providers who are reaching out to support to say, hey, can you update this client's phone number, can you update this client's name, and that's a big percentage. That is the one case where the tickets I'm selecting aren't maybe quite representative: something like 30 percent of our tickets are these account modification tickets, but I don't want 30 percent of the tickets I QA to be those, because those are very much standard, use a macro, update a thing in our tools. So I try to make sure that there's a balance.

Speaker 1:

So it's not all of one ticket type, because that's not helpful to them, but it's still random in terms of going top down. What I'm working through now, now that I've gone through this for a couple of months, is adding to our Guru card about support interaction selection criteria, because I think one of the things, to your question, is transparency, being really transparent about the process. So the cadence, the support interaction selection criteria, all that stuff is in our Guru card about our quality program that everybody has access to.
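
For anyone who would rather script that randomize-and-balance step than do it by hand in Sheets, here's a rough sketch of the same idea in Python. The cap of two tickets of any one type per review is an illustrative assumption, not a number from the episode.

```python
import random
from collections import Counter

def randomized_balanced_pick(tickets: list[dict], n: int = 5, max_per_type: int = 2) -> list[dict]:
    """Shuffle the candidates (the scripted equivalent of Sheets' 'Randomize range'),
    then walk top down, skipping a ticket type once it hits the cap so no single
    type, like account modifications, dominates the review."""
    shuffled = tickets[:]              # work on a copy so the original order is untouched
    random.shuffle(shuffled)
    picked: list[dict] = []
    counts: Counter = Counter()
    for ticket in shuffled:
        if counts[ticket["type"]] >= max_per_type:
            continue                   # already have enough of this ticket type
        picked.append(ticket)
        counts[ticket["type"]] += 1
        if len(picked) == n:
            break
    return picked
```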

Speaker 2:

So, speaking of transparency, do you feel like it's always a good idea to do QA more privately? If I'm giving a specialist feedback, what are the pros and cons of doing that in a DM or in a meeting, versus maybe sometimes shouting out that the specialist has done a great job and putting that in a team channel? Do you have thoughts about that?

Speaker 1:

I think there are situations where both of those are probably good mechanisms.

Speaker 1:

I think the one-on-one session just gives them that face-to-face time to talk through it, and one of the things that I think is really important is you need to give mechanisms, and your team really needs to feel that if they don't agree with something, they can bring it up, right? So that to me is better, and you can establish that sort of rapport in a one-on-one session. But I think, as a team culture, shouting out when people do a really great job is something that should be happening, right? We have two team huddles every week, and in the first one of the week I always do a CSAT shout-out. And one of the great things about having a quality program when you do that is you can call out things that are in your quality rubric about what that person did really well. So that's the other thing I think you need to make it successful: everybody needs to really understand and buy into your rubric.

Speaker 2:

I love that you mentioned this rubric. I would like to dive into what your rubric is and how you determined it. Is that something that came from your previous work, or did you kind of tailor that to your new company?

Speaker 1:

Both. So David, the senior support specialist I mentioned, had been working on a rubric with the previous manager, so we took some of that, because it was based on the quality training that new hires on our team go through. And then I incorporated, like I said, some of the elements of the criteria that I've used in the past that I think are pretty foundational, and brought those together to come up with what our PetDesk Communications support team quality criteria are. So our four criteria for the PetDesk Communications team are accuracy, completeness, customer excellence, and empathy/tone. We're dealing with veterinary providers; it's a very high-stress job, it's very busy, right? So empathy and tone might not be a full criterion in some places, but for us it was really important.

Speaker 1:

So there are some things that are sort of universal in terms of a quality support experience, but then I would always recommend that one of your criteria should be really rooted in your company values or what is specific to your company. So one of the elements of customer excellence is guidance, right? Controlling the communication, guiding to the best options and an effective solution, providing additional information to the customer to address their next question or issue based upon the original reason for writing in. That last one is actually one that I carried over from Big Fish Games and Wistia, but we put it under guidance, like we need to serve as guides sometimes for folks, whereas in other support orgs that I've led, that hasn't been as important.

Speaker 2:

That makes perfect sense. So it's almost the customer profile that guides that quality rubric.

Speaker 1:

Correct.

Speaker 2:

Interesting. I just wonder if you have any other advice for customer support leaders who are working on quality programs.

Speaker 1:

I think one piece of advice, if you're incorporating the quality score into your monthly or quarterly performance for individuals on your team, is making sure that whatever benchmark you've set for your internal quality score, and what you're looking for in your rubric, is complementary to your productivity expectations or your workflows or your business needs. I can give a real-life example of where this happened when I was at Big Fish Games. At Big Fish Games we had monthly performance ratings: meets expectations, exceeds expectations, and needs improvement. And we had monthly bonuses for our support team. To earn a level one bonus, you had to meet expectations in productivity, quality, and CSAT. We also had a maximum threshold for average number of replies per ticket, to make sure people are really productive; if on average it's taking three replies to solve a ticket, we should not be giving a bonus for that behavior. So that was a threshold. And we had level one and level two criteria for quality, right? If you got an 85 to 90 QA score, you were meeting expectations; if you got 90 or above, you were exceeding expectations. And then we also had minimum standards, and if you missed one of those minimum standards, you were automatically at needs improvement for quality.

Speaker 1:

And so we had basically created this system, with the best of intentions, where somebody could miss a minimum standard on their first QA, know they're not going to get a bonus, and then why would they be incentivized to exceed expectations in productivity? They'll still want to meet expectations because they want to get a raise in the annual performance review cycle and they don't want to go on a performance improvement plan, but they have no incentive to try to exceed expectations in productivity. I actually got this feedback surfaced to me by a specialist on the team in a skip-level meeting I had, and I was like, oh wow, this is really great. Because one of the minimum standards was around selecting the correct ticket fields, like ticket type, because reporting is very important, right? And we still believed that it was really important, so we adjusted it: we took that out of being a minimum standard and folded it into our regular standards, as part of completing back-end processes and our regular email standards.

Speaker 1:

And then there was also the fact that if you missed a minimum standard (because not all the minimum standards were the same weight, right, but we weren't going to weight them differently), I was like, okay, that's also a good point. Because that also is something like, okay, my first QA I missed a minimum standard, so I'm not going to bust my butt to exceed expectations in productivity. And I totally understand that; that's valid.

Speaker 1:

We were using QA software there, so I had a little bit more flexibility in terms of getting creative with the scorecard. So what we did is I figured out how we should weight a minimum standard. Basically, I figured out the math so that a minimum standard would deduct 3% from your overall score, because that's what it came out to, to be roughly double what a regular standard miss would. So somebody could still miss one minimum standard on a QA, but they'd have to really knock it out of the park on everything else; it wouldn't disqualify them. And that change was really well received. It made sure we were incentivizing the right behavior and not shooting ourselves in the foot. So that, to me, is always a really powerful example of me having a really good learning, an aha moment.

Speaker 1:

I'm like, oh yeah, this was the best of intentions, but in practice it was actually not serving the right purpose.
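
To make that arithmetic concrete, here's a rough sketch of that kind of scorecard in Python. The 3% deduction for a missed minimum standard and the 85/90 bands come from what Stacy describes; the 1.5% deduction for a regular miss is an assumption (roughly half of 3%), and none of this is Big Fish Games' actual scorecard.

```python
# Sketch of a scorecard where a missed minimum standard costs roughly double a
# regular miss instead of automatically dropping the review to "needs improvement".
REGULAR_MISS_DEDUCTION = 1.5   # assumed: percent off per regular standard missed
MINIMUM_MISS_DEDUCTION = 3.0   # per the episode: percent off per minimum standard missed

def qa_score(regular_misses: int, minimum_misses: int) -> float:
    """Start at 100 and deduct for each missed standard."""
    score = 100.0 - regular_misses * REGULAR_MISS_DEDUCTION - minimum_misses * MINIMUM_MISS_DEDUCTION
    return max(score, 0.0)

def performance_band(score: float) -> str:
    """85-90 meets expectations, 90+ exceeds, below 85 needs improvement."""
    if score >= 90:
        return "exceeds expectations"
    if score >= 85:
        return "meets expectations"
    return "needs improvement"

# One missed minimum standard no longer disqualifies the review outright,
# but the agent has to be near-perfect elsewhere to stay in the top band.
print(qa_score(0, 1), performance_band(qa_score(0, 1)))   # 97.0 -> exceeds expectations
print(qa_score(4, 1), performance_band(qa_score(4, 1)))   # 91.0 -> exceeds expectations
print(qa_score(8, 1), performance_band(qa_score(8, 1)))   # 85.0 -> meets expectations
```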

Speaker 2:

Yeah, and what really stands out to me about that is that you found this out in a skip-level meeting. It really highlights to me the importance of leadership listening to frontline specialists and really taking time, with intention, to dig into what is your working life like, what are these well-intentioned programs that we're creating doing to your daily life, and how does that motivate or not motivate you?

Speaker 1:

Exactly. Yeah, and in terms of cadence, if somebody is fully trained and doing well and they meet their internal quality score, the next month we'll only look at five tickets. So it helps keep the workload manageable, but it's also something that somebody can be working towards, right?

Speaker 2:

Yeah, so it's elastic. You grow or shrink the number of tickets as needed, as people demonstrate. Yeah.

Speaker 1:

And I think if we had a big fundamental change in the product or new features, we would potentially say, okay, this month everybody's going to get 10 tickets, because there's been a lot of change, or we made some significant shifts in process and we want to make sure that everybody's moving forward with the new processes. So that's a way to do that.

Speaker 1:

And then one other thing that I could maybe foresee in the future is that, certain months, we might do a sprint, where maybe we know that our account-related tickets, the account modification tickets, are always really, really good, so we're going to focus on technical issues this month, right, but you're doing that for everyone. And the thing that you might want to do, though, if it's part of their performance metrics, is say, hey, let's look at how everybody did, and if the team average is way lower, then maybe for this month we adjust what that internal quality score goal is, because it kind of goes back to being fair and equitable and transparent.

Speaker 2:

And if everyone is experiencing the same thing, then something's clearly going on. Exactly. Yeah, I love the emphasis on fairness and making it work for your team's workflow as it is; I think that's probably why it's been successful. Thank you so much for being here. I just really appreciate you. I have adored working with you. You're one of my favorite people.

Speaker 2:

I'm really glad. Aw, thanks. That's Stacy's setup story in a nutshell. Here are the steps that you can take back to your team. One: 10 interactions, two reviews every month; keep it lightweight. Two: random, recent, and representative ticket picks. Three: a four-pillar rubric, kept simple; hers is accuracy, completeness, customer excellence, and empathy. Four: private coaching for anything that maybe doesn't go perfectly, plus public kudos for what goes well. Five: shrink to five tickets if an agent is crushing it, and bump back up if things change, so you're only doing QA where it's really needed. If an idea landed as you listened to this episode, or if you're going to use this process in the future, please share this episode with a fellow support leader and drop us a quick review and a subscribe. Until next time, keep leading with clarity and with care. See you soon.
