
Is There Anything Else I Can Help You With?
Episode 2 - Choosing the Right KPIs for Your Business
Choosing the right KPIs is about more than a single top-level metric. Understanding what drives that metric in an operational way is key for your customer experience teams. They need to know what will help them push toward exceeding customer expectations.
In this episode, we’re going to talk through how to choose the right Key Performance Indicators or KPIs for your customer service organization to measure success with your operation and especially your customers. We will talk about 2 different levels of KPIs, how to think through design with a couple of stories, implementing them in your ecosystem and then go through some ideas on how to drive performance.
Good day and welcome to Is There Anything Else I Can Help You With?, a podcast for professionals in the Customer Service or Customer Experience industry who are currently managing, developing, or improving their own customer service ecosystems. In this podcast, we will have conversations about all the elements that make up a good CX ecosystem, best practices, and at times more philosophical topics about great experiences.
I’m David Wilson, and I’ve spent more than 25 years in the customer service and customer experience world. This particular topic came up as a suggestion from a new connection who listened to the previous episode, so I’m happy to oblige on this very meaty subject. I’ll give my disclaimer that these are my opinions, backed with time spent in the world of CX. I don’t consider myself perfect, you may disagree with some or all of what I have to say, and I welcome any feedback you might have. You can reach me at david@gettheedg.com, and I would appreciate it if you would subscribe to this podcast series. And if you have suggestions on topics, I’d love to hear from you.
Everyone needs to know how their performance rates; it’s a natural human condition. You had school report cards, you do annual medical checkups (or should!), you get performance reviews at work, and even my energy company sends me health reports on my utility usage! So it’s only natural that you need to know how you’re doing as a company, your vendors need to know how they rank, agents on the front line need details to rally around and reach for, and you need confidence that you have a customer service strategy that exceeds your customers’ expectations. KPIs will provide clarity into where you can improve your customer service or experience performance, how you should continue to push performance further, and how to tweak along the way.
So let’s get into it and start with an understanding of the types of KPIs we will talk about here.
First, there are the Hero KPIs. These are the very top-level metrics that your company has defined as the way it judges success. They can vary based on the type of product or service your company provides, and there are a few Hero KPIs that come to mind that you will already know. Customer Satisfaction, or CSAT. Maybe you’re in a Net Promoter Score, or NPS, environment. Both of these metrics are measures of sentiment, a powerful customer metric. A similar type would be Likelihood to Recommend; some businesses believe in this as a philosophy. Quite possibly you are more focused on First Contact Resolution, as it may drive your business results in a way that a sentiment-based metric may not. Sales organizations might be mostly focused on Conversion. The decision on the Hero Metric will likely be out of your control unless you are in the decision-making seat for your company. If you are, let’s have coffee, I’d love to hear more about your choices, and by the way, you’re buying! I’ve worked with a few different models, and at the end of the day, I have faith that those who create and dictate the metric have the right information to make that choice. I will then focus on how to support that through the next layer in my customer service environments, which leads me to the category that we will spend more time on today: Performance Metrics, or Performance KPIs.
These are layer-down metrics that help to define the Hero Metric in an operational way. They are linked to the Hero Metric and are an indicator of the process health of your customer service interactions. If part of your top-level org goal is Efficiency, then you are likely to consider performance KPIs such as Service Level, Occupancy, and Staffing Adherence. If your goal is Satisfaction, you might think about Top Box %, % of Bottom Box experiences, or, if a strong indicator of satisfaction is the ability to solve the issue on the first interaction, First Contact Resolution. The performance KPIs will support the top-level metric, but it’s not so easy to say here are the absolutes that you should choose. Each business is different, and understanding what the performance KPIs should be will take a little thought, perhaps some experimentation or deeper analysis to confirm they are the right ones. If you’re mathematically inclined, I liken it to an a + b + c = d relationship, where the sum d is hopefully assured by meeting a, b, and c. I also refer to these as Drivers vs. Outcomes.
I’m a firm believer in the concept of Drivers vs. Outcomes. You have much more influence on the path to the outcome of an interaction than on the final assessment by the customer. It’s not as simple as saying, OK, I’ve done what you want and followed the right procedures, so give me a top box score. That’s not what I mean by a + b + c here; it’s just not that easy. I remember seeing a study from Gartner a few years back indicating that the actual amount of influence an agent can have in the interaction moment is not even the majority of the customer’s determination of the final score. The study said that out of the 100% makeup of the overall score, the agents were only responsible for 47% of the outcome. Think about that for a moment: the other 53% of the score is attributable to everything that has happened up to the moment the interaction begins. It’s a really mind-blowing way to think about the task of delivering a great experience in the moment. Either you are about to deliver less than the customer has previously received and potentially blow up a loyal relationship, or you might be about to take on a customer’s baggage from the entirety of the relationship up to that moment. This is a lot of pressure on our customer service teams, and they may not even be aware of it!
So let me reiterate: there is a difference between the outcome and the journey to get there. You can have more influence on the drivers than you can on the outcomes. For example, if your company has a focus on CSAT, then that is the outcome. But you can’t ask an agent to chase after that outcome; it’s not in their hands or capability to lock in a CSAT number. There is no magic formula or reliable way to get there. That’s up to the customer to decide. But what you can do is figure out what drives the best results for customers and then create some metrics around that to help the overall CS team understand what would reasonably get them to the right outcome. If I have done my homework well, I should be able to pinpoint a few impactful behaviors that I see in 5-star or 100% survey results, which I can then transform into KPIs that will reasonably get me there. If I don’t know what moves the needle, I can’t expect the front line to know either.
Let’s create an analogy that might illustrate this a bit. Your customer has asked you to take them to the mall, and your CSAT rating will be based somewhat on the outcome of getting to the mall. It’s now your job as a CSP to get them there in the most efficient way, with the highest level of satisfaction in their journey. It’s not the customer’s role to plan it out; they are simply a passenger here. The elements of the journey are your responsibility as a CX strategist to meet their need. The car is the modality or channel you choose to offer. Hopefully it’s a vehicle that is the right match for the customer; it might be good to have a couple of options, but not too many. For example, if your customer expects a BMW and you give them “less than that,” it will impact their experience with you. The gas is your knowledge management and training. You want to go at the right speed on the road (AHT), you want to observe all the traffic rules (process adherence), and you don’t want to forget anything at home and have to go back (FCR). Arriving at the mall is just the outcome. It makes the customer happy, but they will be heavily influenced by the journey to get there. Their sentiment about the journey will be tied up in a number of inputs along the way. Was the route efficient? Did we go at a pace that was comfortable for the customer? Maybe the customer was looking for a 15-minute journey, but you went so fast that you got there in 8 minutes when they really wanted to see the view along the way. So many things will have an impact on how the customer feels about the journey. And there are many things under the agent’s control along that journey that you can hopefully make easier for them, but not all of it is the agent’s responsibility.
The weather, for instance, is also an important factor in this analogy, and the agent has no control over that. If it’s a sunny day, maybe that signifies the status of the customer’s relationship with your company. It’s a happy day, the sun is shining, the roads are clear, and it’s a straight shot to the mall with very few bumps along the way. You get your passenger to the mall, drop them off at the main doors, and they say thank you and move on. Perhaps you go even further to satisfy by driving them right up to the door and giving them a 20% off coupon! There will be a lot of sunny days and excellent driving conditions in your company’s day-to-day, and we should be thankful for that.
But it’s not always sunshine and lollipops out there. Bad weather conditions can make the journey more treacherous, and tensions could be high. This could mean that coming into this interaction, the customer has a difficult history with your company, and the agent interaction will be influenced by that. Increasingly, with more digital and self-service options, the shifting complexity curve of customer interactions has trimmed the types of journeys your customer service team will handle. There could be fewer sunny days out there as a result of increased digital offerings, and that means your experiences can be harder to navigate, both at the program level and, more importantly, at the agent level. Harder to navigate and harder to be great.
This analogy could go on forever, but the point is that there is so much that can go into the sentiment of a customer that it’s impossible for you to control all of it. If you don’t “own” the entire outcome, then why put all of your focus into chasing that alone? It’s better to focus your efforts on what is going to help your front line optimize for that specific moment, which means looking beyond the Hero KPI. And that means focusing on what you know will “drive” toward the best outcome. What do you know about your interactions that helps you create drivers that will be reasonably assured to have a positive impact on the outcome? An extreme example might be…you’ve done some research on what really makes top satisfaction happen in the scores from customers. The research showed that any time there was a laugh in a phone conversation, it resulted in a top box outcome. So you can reasonably be assured that laughter “drives” the best outcome. It would then seem reasonable to create a focus in the “driver” KPIs on understanding what percentage of conversations had laughter, to include it on your operational scorecard, and to create some focus across the entire team on pushing the laughter factor and scoring higher.
Let me tell another story to illustrate a very different metric, where I’ve helped one team think about this differently. Once upon a time there was a customer service team, located in the Philippines, that was having a difficult time getting to the right Quality score. The score was hovering around 60%, and the needle was not moving. When we talked about their scores in depth, there was a common element in the output that drew my attention. The team had a much higher than normal percentage of “autofails” in their scores. For those who might not know, in some Quality Assurance systems some errors are so egregious that if an infraction happens, it’s an automatic zero score, or failure. Scores of zero are mathematically not helpful when trying to reach a goal. They also distract from the fact that the experiences didn’t seem terrible; it was just that their percentage of autofails was around 20%, so of course trying to get above 90% overall in QA was not going to work. So I challenged the team to think about that specific KPI, something that wasn’t really a traditional KPI to be honest, but cumulatively the impact was undeniable. The driver in this case was the autofail rate, and we determined that to get to a 90%+ QA score, the amount of autofails mathematically had to be less than 5%. Now the team had a KPI they could watch and manage. It was not just present at the site level; individual team dashboards had to show team-level results as well. Plans were made to understand what was causing autofails the most. Changes were made at the strategic level where an infraction perhaps didn’t really make sense to result in a zero score, and individual plans were made with the agents to hold them accountable for the autofail percentage. As the client for the BPO, I also added the KPI to my operational scorecard, so I could monitor progress and let the team know this was a huge priority for me. The result was a pretty significant reduction in autofails, and the QA score reached goal.
Focusing on the specific driver of the low scores in this case helped the team understand, more tactically, what they needed to do to improve. Otherwise it was just “tell me your score this week,” and focusing on the overall number was not directionally helpful to the team on the ground.
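To make the autofail arithmetic concrete, here’s a minimal sketch of the math behind that story. The numbers are made up for illustration (they are not the actual program’s data); the point is just that zero-scored evaluations drag the average down, so a roughly 20% autofail rate caps the overall QA score well below 90%, and the ceiling only clears 90% once autofails drop under about 5%:

```python
def overall_qa(autofail_rate, avg_passing_score):
    """Overall QA score when autofails count as zero.

    autofail_rate: fraction of evaluations that autofail (0.0 to 1.0)
    avg_passing_score: mean score of the non-autofail evaluations (0 to 100)
    """
    # Autofails contribute 0, so only the passing share contributes.
    return (1 - autofail_rate) * avg_passing_score

def max_autofail_rate(goal, avg_passing_score):
    """Largest autofail rate that still lets the goal be reached,
    assuming non-autofail evaluations average avg_passing_score."""
    return 1 - goal / avg_passing_score

# With ~20% autofails, even strong non-autofail scores fall far short of 90%:
print(round(overall_qa(0.20, 92), 1))              # 73.6

# To hit a 90% goal with non-autofail evaluations averaging 95,
# the autofail rate must stay under roughly 5%:
print(round(max_autofail_rate(90, 95) * 100, 1))   # 5.3
```

That back-of-the-envelope relationship is what turned “get your QA score up” into a concrete, watchable driver: get autofails under ~5% and the overall goal becomes mathematically reachable.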
So let’s think about some of these “drivers” out there. If you have a phone program and you want the best satisfaction level, what drives your customers to that top score? Is it an efficient conversation? There is a lot of research on Customer Effort out there, and I’m a big believer that the lower the amount of effort you put your customers through, the higher their appreciation for your company. To that end, in a phone situation for example, one metric that I’m addicted to is the Callback Rate. Not FCR, although it could be seen as a related example, but if as a customer you feel the need to call back within 24 hours, I most likely see the reason as additional effort: either a) I didn’t get the right answer, b) I didn’t feel like the answer was confident and I just want to check, or, even worse, c) I was promised something that was not followed through on. Another potential driver for phone effort is the call flow itself. Putting customers on hold is an everyday practice. Not one I like, but sometimes it’s needed. However, the amount of hold time can be a problem, depending on the complexity of your program. I want to make sure there is an appropriate time for hold, manage to it, and then manage it down over time! If it’s an email program, maybe I want to make sure effort is managed by monitoring re-open rates on emails. There is nothing worse than a back-and-forth in email; it immediately makes me want to just pick up the phone and solve it that way.
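If you wanted to track a callback-rate driver like this, here’s one simple way it could be computed from a contact log. To be clear, the 24-hour window, the log format, and the sample data are all illustrative assumptions on my part, not an industry standard; your telephony platform will likely have its own way of defining a callback:

```python
from datetime import datetime, timedelta

def callback_rate(calls, window_hours=24):
    """Share of calls followed by another call from the same customer
    within window_hours. Each call is a (customer_id, datetime) pair."""
    # Sort by customer, then time, so each customer's calls are adjacent.
    calls = sorted(calls, key=lambda c: (c[0], c[1]))
    window = timedelta(hours=window_hours)
    callbacks = 0
    for (cust_a, t_a), (cust_b, t_b) in zip(calls, calls[1:]):
        if cust_a == cust_b and (t_b - t_a) <= window:
            callbacks += 1
    return callbacks / len(calls) if calls else 0.0

# Hypothetical contact log:
log = [
    ("alice", datetime(2024, 5, 1, 9, 0)),
    ("alice", datetime(2024, 5, 1, 15, 30)),  # called back within 24h
    ("bob",   datetime(2024, 5, 1, 10, 0)),
    ("bob",   datetime(2024, 5, 3, 11, 0)),   # outside the window
]
print(callback_rate(log))  # 0.25 (1 of 4 calls triggered a callback)
```

Whatever the exact definition, the value of a metric like this is that it is countable and trendable at the team level, which is exactly what a “driver” KPI needs to be.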
So the lesson here is to take the time to understand what drives great outcomes in your business, and focus on those “drivers” to get there. It’s important that your teams can clearly see the connection, so performance management activities can be planned more effectively. Don’t go overboard on the number of “driver” KPIs; boiling the ocean is not a good plan. Stick to the most impactful few to allow for proper focus and energy on the team. Generally, 2-4 KPIs at the team level should suffice to understand how you’re doing. More than that will dilute the potential impact and also be very confusing for your customer experience teams, who will lack a focal point for their folks. Don’t go into metric overdrive! Also, you will need to live with these metrics for a while; you can’t change the entire program weekly or monthly. If you have done the research and are committing to them, you shouldn’t change them midstream, as it causes huge upset in your ecosystem, and progress made might be progress lost.
So let’s talk about implementing these KPIs in your business:
Making it happen is all about effective communication of the drivers, tracking toward them, and understanding how to safeguard against poor performance.
Communicating out on the performance of the drivers depends on the audience. Top-level execs may not be as interested, as their focus is generally on the top KPI. But the amount of visibility should increase from that point moving down the line, so that at the team and agent level they are not only talking about it daily, but agent scorecards include the driver metrics as a measurement of effectiveness for the agent. Team boards should show the team result, maybe with a few high performers listed for kudos. If you are sufficiently covered from a communication perspective, then monitoring actions and roadblocks are next. Best-practice sharing should happen between teams where some folks are really getting the drivers right. At some point as a business manager, you might see some plateaus in the drivers, which might then mean gathering intel from the teams to understand why they aren’t moving and what you can do to help.
Most of us will already have some visibility on the production floor for the top-level metric, but consider giving some of that real estate to the drivers as well. They should most certainly be very visible within any team scorecards or dashboards, and coaching should be in place specifically to help lower performers. Talk about the drivers often, get everyone comfortable with them, and your leaders will become very skilled in how to help their teams.
So now you have the Top Level KPI, you have your Drivers, you’re communicating out and managing the performance through various actions and business reviews. You’re seeing some movement on the drivers and as an extension because you’ve done your homework, you’re starting to see positive movement on your top level.
At some point there may be a need to iterate, an important part of this formula. Don’t get stuck thinking that what you do today should stay the same for all time. If you find the improvement has plateaued, it might be time to re-think the strategy. Maybe take a larger snapshot of performance over time to see if anything looks different in a longer view. Maybe the drivers have reached their limit of capacity to change results, and it’s time for new drivers. In the autofail case study discussed earlier, that metric is no longer necessary as a focus, since the scores reached goal and it was no longer driving performance gaps. Think dynamically about your KPIs. What you need right at this moment may not be something you need six months from now. Maybe there have been external influences on performance: new products, more digital servicing to avoid the need for service, and so on. Just as business needs change, so do the KPIs, and if you haven’t looked at them in more than a year, it may be time. Especially if you’re finding some gaps in your outcomes. Don’t be afraid of iteration; it’s a necessary strategic decision you should review on a periodic basis.
Test the upper limits of your metric by offering a time-based incentive for the front line. Nothing drives agents more than the opportunity to earn more. I remember once doing a Customer Experience Champion program, where we had a bit of budget to splurge on the teams: top box customer survey scores earned a raffle ticket, and at the end of a 4-week period we awarded some pretty awesome prizes. I do love to participate in reward and recognition moments for agents, but for me it was also about seeing if the program was going to have a Red Bull effect on the front line, where we could test the upper limit of satisfaction and also see just what percentage of the entire team could reach new levels of excellence. I don’t normally recommend “paying” for CSAT, but that’s not what I’m talking about. I’m saying that for a very short period, if you offer up a special incentive that is wrapped in the right cover, you will see agent energy build and you can learn a few potentially powerful things:
· What is the true upper limit of your CSAT? You will quickly see whether you have selected the right targets for your business, and you can use the outcome from the incentive to redefine what that should be.
· What is the true percentage of agents who, when sufficiently motivated, can be in that range? Once you understand how your overall workforce can raise its performance bar, you might want to consider finding ways to keep it there. For example, if your CSAT target is a number like 85%, or 4.5 stars out of 5, then you could negotiate with your vendors to build the percentage of agents above that number into a reward within your contract. Incentives overall are a powerful motivator for both agents and management. Use this trick sparingly, however; don’t over-index on special incentives. The message to the front line should not be that they will deliver outstanding results ONLY when an incentive is in place. I am a total fan of a performance-based culture, but it has to be thought out in the right way, and that’s a bit beyond the scope of THIS episode. Stay tuned for more on that one.
· And then another important outcome will be to determine who doesn’t perform well during the incentive and should potentially be found a seat on a different bus. If agents aren’t performing well even with the prospect of adding significantly to their income, then there are decisions to make. The fact is that not all agents are the right fit for your program, and that’s not a comment on anyone’s ability. There is no end to the number of ways a customer service professional acclimates to a program, and that doesn’t always mean a true fit happens. This is a finding for the manager of the contact centre, either internal to your company or externally with the management of the contact centre company. As a program owner, your focus should be on continuously improving your overall results, raising the bar, and upping the expectations in the Risk and Reward clauses in your contract, then holding people accountable appropriately.
Well, this was a lot of information, but this is such a huge topic that we have to draw a line somewhere, and this is it. I won’t get into it now, but a really important next-level question is: shouldn’t these drivers factor heavily into your QA program? Boom!
Thanks again for listening in today. I’m pretty passionate about this topic, as you can see; maybe I went a bit longer than I intended to keep your attention, but I really appreciate you listening! My name is David Wilson, you can reach me by email at david@gettheedg.com, and if you liked what you heard today, please show your love by subscribing to the series. To sum up, we talked about the difference between Drivers vs. Outcomes, talked through some case studies, and touched on implementing the KPIs in your ecosystem and the importance of iteration and testing. Lastly, don’t forget to celebrate! Reaching goals is such an awesome time to take a moment and recognize everyone for their contributions.
And if there is nothing else, I’m going to let you get back to your day! Thanks everyone!