The simplicity and power of the customer effort score (CES) make it a good yardstick for any effort to improve a company’s customer service.
A simple question, “How much effort did you personally have to put forth to handle your request?” on a 5-point scale from very low effort (1) to very high effort (5), proves to be an extremely strong predictor of future customer loyalty – with 96% of customers who report high-effort experiences becoming more disloyal in the future, compared with only 9% of those who report low-effort experiences.
It is quick and easy for customers to answer, easy to implement across different service and survey channels, easy to track over time, and correlates with business outcomes.
And with 61% of the CEB Customer Contact membership saying they measure customer effort (with another 24% planning to do so in the next 6-12 months), it is clear that effort measurement is more than just a fad.
Room for Improvement
Over the years, we have partnered with many service organizations to incorporate CES into their survey mechanisms – largely to great success. But as we rolled it out globally, we found three key themes emerging that indicated room for improvement:
Inconsistent interpretation: Both the scale (where 1 is ‘good’ and 5 is ‘bad’) and the wording itself are open to interpretation, with some customers misreading the scale and others feeling that the question was probing whether they had done enough on their own before contacting the company.
Uneven global applicability: Effort does not translate neatly into all languages, and it carries different connotations across cultures and regions.
Lack of benchmarking capabilities: Because of the issues above, many companies adapted CES to fit their own customer base, which made cross-company comparisons difficult.
Given these findings, we went back to the drawing board to see if there could be a better CES metric that would keep the upsides of the first version of CES but also help close some of its emerging gaps.
We tested a host of different effort questions – everything from gauging customer expectations to time spent getting an answer – across a sample of nearly 50,000 customers from a range of companies, industries, and regions. And we found one measure that was head-and-shoulders above the rest. In fact, it was nearly 25% more predictive of customer loyalty than the next best metric:
Like its predecessor, it’s a simple question that has all of the same benefits for customers and companies alike.
But it also has a more consistent impact on customer loyalty across companies. In each of the 35 companies in our sample (drawn from a wide range of industries and geographies), CES 2.0 maintained a strong relationship with loyalty:
The same holds across countries/regions. CES 2.0 was tested in 11 languages and maintained its predictive power for loyalty in all of the translations:
We don’t think CES is the ‘one question to rule them all’; in fact, we believe in a multi-faceted approach to measuring and tracking customer loyalty, one that includes NPS (a topic I’ll blog about next time). But if you can ask only one question to measure your contact center or service organization’s performance in customer service interactions, we’d suggest CES 2.0.
More specifically, we recommend focusing on moving customers to at least a ‘5’ on the 7-point CES 2.0 scale (or ‘somewhat agree’). Moving a customer from a ‘1’ to a ‘5’ boosts their loyalty by 22%, but there are diminishing returns after that – moving a customer from a ‘5’ to a ‘7’ only boosts their loyalty by 2%.
So, we recommend tracking the percent of customers who at least somewhat agree that the company has made it easy for them to resolve their issue.
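As a minimal sketch of that tracking metric, the snippet below computes the share of survey responses at or above 5 on the 7-point scale. The function name and the example response data are illustrative, not part of the original methodology:

```python
def ces_top_box_rate(responses):
    """Share of respondents who at least 'somewhat agree' (>= 5 on the 1-7 scale)."""
    # Keep only responses that fall on the valid 7-point scale.
    valid = [r for r in responses if 1 <= r <= 7]
    if not valid:
        raise ValueError("no valid responses")
    return sum(r >= 5 for r in valid) / len(valid)

# Hypothetical survey responses on the 7-point agree/disagree scale.
responses = [7, 5, 3, 6, 4, 5, 2, 7, 5, 6]
print(f"{ces_top_box_rate(responses):.0%}")  # 7 of 10 responses are 5+, so prints "70%"
```

Tracking this single percentage over time (rather than the mean score) keeps the focus on the L16 insight: getting customers to at least a ‘5’ is where nearly all the loyalty gain lies.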
And the payoff of doing so is substantial: moving from the bottom quartile of companies on CES 2.0 to the top quartile improves NPS by 65 points and increases repurchase intent by 40%.