Contact centers have a critical role to play in the world of customer service. That said, as contact centers continue to modernize (especially in light of COVID), some key performance indicators (KPIs) of traditional customer service can be less valuable or relevant than they once were.
If you’ve spent time researching traditional ways to measure success in your contact center, you’ve likely found a dozen different answers. So, what follows is a basic, modern breakdown of the KPIs that matter for all contact centers, plus the KPIs that matter more for teams running either a voice-specific contact center or an omnichannel contact center.
Important KPIs for all contact centers
Customer Satisfaction Score (CSAT)
Customer satisfaction score (CSAT) measures how satisfied customers are with the service they have received. Higher levels of customer satisfaction are correlated with better customer retention, stronger customer loyalty, and an improved brand reputation.
In fact, 59% of consumers say they’ll take their business elsewhere if they repeatedly have poor customer experiences.
For both call and contact centers, CSAT is usually measured through customer satisfaction survey questions that relate to the customer experience provided during a specific interaction. Typically, a customer is presented with a succinct list of questions relevant to their issue, and asked to provide one of the following sentiments based on a five-point scale:
- Very dissatisfied
- Dissatisfied
- Neither satisfied nor dissatisfied
- Satisfied
- Very satisfied
Clearly, this simple measure of customer satisfaction is beneficial to track in any contact center. The only nuance regarding CSAT measurement is how the survey itself is presented to a customer. Voice experiences typically rely on a customer opting in to the survey process before they’re connected with an agent, while omnichannel services are less limited in when and how CSAT surveys can be deployed since their customers engage with more channels overall.
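To make the scoring concrete, here’s a minimal sketch in Python that rolls those survey responses up into a score. It assumes the common convention of reporting CSAT as the percentage of responses in the top two categories; the function name and sample data are purely illustrative.

```python
# A minimal CSAT sketch (illustrative only): CSAT is reported here as the
# percentage of responses in the top two categories, a common convention.
from collections import Counter

POSITIVE_ANSWERS = {"Satisfied", "Very satisfied"}

def csat_score(responses: list[str]) -> float:
    """Return CSAT as the percentage of positive survey responses."""
    if not responses:
        return 0.0
    counts = Counter(responses)
    positive = sum(counts[answer] for answer in POSITIVE_ANSWERS)
    return positive / len(responses) * 100

survey_responses = [
    "Very satisfied", "Satisfied", "Neither satisfied nor dissatisfied",
    "Dissatisfied", "Very satisfied",
]
print(f"CSAT: {csat_score(survey_responses):.0f}%")  # CSAT: 60%
```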
Keep learning about customer satisfaction:
- The Keys to Building Positive Customer Relationships: A Guide to CSAT Surveys
- How to Write Customer Satisfaction Survey Questions That Work
- Using Your Customer Satisfaction Score to Empower Agents and Delight Customers
- How to Design a Customer Satisfaction Survey That Gets Results
- 6 Steps to Measure Customer Satisfaction
Customer Effort Score (CES)
As a KPI, customer effort score is the measure of how easy it is for customers to get the help they need. The concept at play here can also be referred to as “customer friction,” and it’s another important KPI for both call and contact centers. This is because a reported 96% of customers who experience high-effort CS interactions churn, compared to just 9% of those who experience low levels of customer friction.
Like CSAT, measuring CES involves asking a series of relevant questions. But CES differs in that the questions focus less on happiness and more on effort expended. A customer could have a high CSAT while scoring very low on CES due to poorly designed processes. It’s likely that CES and CSAT will mirror each other, but CES certainly has more UX and customer journey implications than CSAT does.
The CES score is then calculated by dividing the total sum of answers by the overall number of responses. Like a lot of call and contact center KPIs, what counts as a “good” CES score varies across industries. But, in general, a CES of 5 (on a 7-point scale) is seen as average.
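To make that math concrete, here’s a minimal sketch in Python of the averaging described above, assuming responses are collected on the 7-point scale mentioned here (higher numbers meaning less effort); the sample answers are purely illustrative.

```python
# A minimal CES sketch (illustrative only): the score is the sum of all
# answers divided by the number of responses, on a 7-point effort scale.
def ces_score(responses: list[int]) -> float:
    """Return CES as the average of all survey answers."""
    if not responses:
        return 0.0
    return sum(responses) / len(responses)

answers = [7, 5, 6, 4, 5, 6]  # hypothetical survey results
score = ces_score(answers)
print(f"CES: {score:.1f} out of 7")  # CES: 5.5 out of 7
print("At or above average" if score >= 5 else "Below average")
```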
First Contact Resolution (FCR)
Once referring only to first call resolution rate, the more modern incarnation, first contact resolution (FCR), is the measure of how often an agent is able to resolve customer issues during their initial contact with a customer (i.e. no callbacks or agent follow-up needed). And this KPI matters to both call and contact centers because every 1% improvement in FCR can result in a 1% reduction in overall operating costs and a 1% boost to CSAT.
Like CSAT and CES, any measure of FCR should be based on the customer’s (not the agent’s) perspective. And for most customer service centers, an agent’s FCR score is determined by dividing their number of resolved cases by their total number of cases in the same period of time and then multiplying this amount by 100. For instance, if an agent handled 153 customers in one hour and resolved 117 of their customers’ issues without the need for follow-up, that agent’s FCR would be 76.47% for that hour, with scores between 70–75% being considered acceptable across the CS industry.
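Here’s that same calculation as a minimal Python sketch, reusing the example numbers from the paragraph above; the function name is just for illustration.

```python
# A minimal FCR sketch: resolved cases divided by total cases, times 100.
def fcr_rate(resolved_cases: int, total_cases: int) -> float:
    """Return first contact resolution as a percentage for a given period."""
    if total_cases == 0:
        return 0.0
    return resolved_cases / total_cases * 100

# Example from above: 117 issues resolved out of 153 customers handled in an hour.
rate = fcr_rate(resolved_cases=117, total_cases=153)
print(f"FCR: {rate:.2f}%")  # FCR: 76.47%
print("Within the acceptable range" if rate >= 70 else "Below the 70-75% range")
```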
Important KPI differences for voice experiences
This is the point where KPIs begin to diverge based on the channel experience owned by the contact center team.
Percentage of Calls Blocked
This KPI is defined as the measure of how many customers get a busy tone when they try to reach an agent by phone. Blocked calls happen when the number of calls waiting in the queue reaches the queue’s full capacity.
Similar to FCR, Percentage of Calls Blocked is measured by taking the number of calls that do not reach an agent, dividing that by the total number of calls over the same period of time, and multiplying by 100.
As opposed to other KPIs discussed here, the acceptable percentage of calls blocked is typically established by the call center itself—a common internal goal being to keep this KPI below 2%.
This is important to IT and operational managers, whose limited resources (tech, staffing) are what create busy signals in the first place, as well as to the CX leader who wants to squash such an awful experience.
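Here’s a minimal Python sketch of that calculation. The 2% target and the sample call volumes are illustrative; your call center’s own threshold is what matters.

```python
# A minimal blocked-calls sketch: calls that never reached an agent divided by
# total calls over the same period, times 100.
BLOCKED_TARGET_PCT = 2.0  # a common internal goal; each call center sets its own

def pct_calls_blocked(blocked_calls: int, total_calls: int) -> float:
    """Return the percentage of calls that hit a busy tone."""
    if total_calls == 0:
        return 0.0
    return blocked_calls / total_calls * 100

# Hypothetical month: 66 blocked calls out of 4,400 total calls.
blocked = pct_calls_blocked(blocked_calls=66, total_calls=4400)
print(f"Calls blocked: {blocked:.2f}%")  # Calls blocked: 1.50%
print("Within target" if blocked <= BLOCKED_TARGET_PCT else "Over target")
```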
Quick tip: A premium callback service like Mindful takes callers out of the telephony queue and allows them to wait virtually—freeing up resources and bypassing the chance that busy lines are ever reached.
Abandonment Rate
Another call center KPI centered around the voice experience (primarily in IVR) is Abandonment Rate. This KPI is a measure of the percentage of customers who “give up” in the midst of attempting to contact an agent over the phone. And the causes of call abandonment can range from traditional (high hold times) to technical (confusing IVR scripting).
Abandonment Rate is calculated by taking the total number of calls received in a given period of time and subtracting the number of calls that were, in fact, handled. This value is divided by the original total number of calls and then multiplied by 100.
This KPI is similar to Percentage of Calls Blocked in that the target may vary from call center to call center. But, generally, an average of 5–8% is seen as acceptable. For a call center averaging 4,400 calls per month, that would make for an acceptable abandonment rate of 220–352 abandoned calls per month.
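Here’s that calculation as a minimal Python sketch, reusing the 4,400-call monthly volume from the example above; the number of handled calls is hypothetical.

```python
# A minimal abandonment-rate sketch: (calls received - calls handled) divided by
# calls received, times 100.
def abandonment_rate(calls_received: int, calls_handled: int) -> float:
    """Return the percentage of callers who gave up before reaching an agent."""
    if calls_received == 0:
        return 0.0
    return (calls_received - calls_handled) / calls_received * 100

# Hypothetical month: 4,400 calls received, 4,136 handled -> 264 abandons.
rate = abandonment_rate(calls_received=4400, calls_handled=4136)
print(f"Abandonment rate: {rate:.1f}%")  # Abandonment rate: 6.0%
print("Within the 5-8% range" if 5 <= rate <= 8 else "Outside the typical range")
```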
Quick tip: Brands that use Mindful in their IVR see a 28% decrease in abandons on average, and in 2021 alone, we saw over 40 million abandons mitigated by utilizing Mindful Callback in the IVR.
Average Call Occupancy Rate
For some of these KPIs, we refer to acceptable thresholds as opposed to absolute percentages. This is because, while a 0% or 100% for some KPIs could seem ideal on paper, absolutes can be unacceptable and/or unachievable expectations for real agents working with real customers.
This is especially apparent with this important call center KPI in the voice channel: Average Call Occupancy Rate. This KPI is a measure of agent utilization, or how much of an agent’s time per shift is spent providing customer support to callers.
This means a 100% Average Call Occupancy Rate for a call center agent would literally mean that agent was on the phone with customers 100% of the time they were at work. This is why benchmarks for this KPI in call centers are typically lower, closer to 85–90% across the call center industry.
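The article doesn’t spell out a formula for occupancy, so here’s a minimal Python sketch under one common assumption: occupancy is the time an agent spends handling callers divided by their total logged-in time for the shift. The numbers are purely illustrative.

```python
# A minimal occupancy sketch, assuming occupancy = time spent handling callers
# divided by total logged-in time for the shift (one common convention).
def occupancy_rate(handling_minutes: float, shift_minutes: float) -> float:
    """Return the percentage of a shift spent supporting callers."""
    if shift_minutes == 0:
        return 0.0
    return handling_minutes / shift_minutes * 100

# Hypothetical shift: 420 of 480 logged-in minutes spent on call-related work.
rate = occupancy_rate(handling_minutes=420, shift_minutes=480)
print(f"Occupancy: {rate:.1f}%")  # Occupancy: 87.5%
print("Within the 85-90% benchmark" if 85 <= rate <= 90 else "Outside the benchmark")
```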
Some important KPI differences for contact centers
Customer Service Email Count and Customer Service Chat Count
In addition to call counts, contact center agents field customer inquiries from a variety of channels, including email, online chat, text, and social media.
For this reason, call counts would be joined by the likes of email counts and chat counts as important contact center KPIs.
Average Response Time by Channel
Here again, every CS center can (and should) be measuring how quickly their agents are responding to customers on average. As a KPI, this is known as Average Response Time (ART). And, for phone calls, call centers may have similar benchmarks for this metric, measured in Average Seconds to Answer.
It’s important to set varying benchmarks across channels. Expectations differ by channel, which means average response times should also differ for chat (<48 seconds), Twitter (~15 minutes), and email (<1 hour).
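As a sketch of how per-channel benchmarks like these might be tracked, here’s a minimal Python example. The thresholds mirror the figures above, while the channel names and response-time samples are purely illustrative.

```python
# A minimal ART-by-channel sketch: average response time per channel compared
# against a per-channel benchmark (chat <48s, Twitter ~15 min, email <1 hour).
BENCHMARKS_SEC = {"chat": 48, "twitter": 15 * 60, "email": 60 * 60}

def average_response_time(response_times_sec: list[float]) -> float:
    """Return the mean response time in seconds for one channel."""
    return sum(response_times_sec) / len(response_times_sec) if response_times_sec else 0.0

# Hypothetical response times collected per channel, in seconds.
samples = {
    "chat": [30, 42, 55, 38],
    "twitter": [600, 840, 1100],
    "email": [1800, 4000, 3200],
}
for channel, times in samples.items():
    art = average_response_time(times)
    status = "meets" if art <= BENCHMARKS_SEC[channel] else "misses"
    print(f"{channel}: ART {art:.0f}s {status} the {BENCHMARKS_SEC[channel]}s benchmark")
```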
Quick tip: Because voice is such a high-touch experience, response time there is the most critical. Mindful reduces average seconds to answer drastically, cutting an average of 280 seconds without Mindful down to 37 seconds with it.
Customer Escalation
Finally, customer service agents will, at times, need to escalate customer issues in order to get them resolved, especially since just 9% of customers report being able to resolve their own issues through self-service. So, within the modern contact center, customer escalation KPIs need to account for this process happening across multiple channels.
In this specific instance, KPIs can be valuable in determining where an omnichannel customer escalation may be breaking down, and in ensuring that each channel-specific touchpoint involved in the escalation process provides clear and useful information, reaffirms that the escalation is moving forward, and builds the customer’s confidence over time.
Summing up
One recent trend has underscored the importance of examining the similarities and differences in how call and contact centers prioritize their KPIs. COVID-19’s unprecedented disruption of brick-and-mortar business locations, agent staffing, and cloud technologies has, in turn, had an unexpected impact on the customer service industry.
As contact center agents find themselves on the front lines for brands, business leaders are now looking to CS centers and agents to evolve again, from cost centers to value centers. In doing so, CS leaders are having to completely reexamine what constitutes important call and contact center KPIs.
For a deeper dive on how some CX leaders are helping their agents do exactly that, listen to our interview with AWS’ Joe Eisner about Mastering the Voice Channel, or our webinar on bridging digital and voice experiences to increase value.