Net Promoter Score (NPS) is a popular and fashionable customer experience metric meant to express the loyalty of a company’s customers. It’s simple to administer, and since it’s in widespread use a company can compare its results to those of others fairly easily.
The method is fairly simple: once each year, disconnected from any specific sales or customer service interaction, ask each of your registered users on a scale of zero to ten how likely they are to recommend your brand or product to a friend or colleague. If you’re fancy you might also ask why they gave the number that they did.
The method of arriving at a score is also straightforward – subtract the percentage of respondents scoring six or below (detractors) from the percentage of respondents scoring nine or ten (promoters). You now have one number between –100 and 100. If that score is above zero you have more promoters than detractors. A higher NPS is supposedly correlated with higher sales growth.
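The arithmetic above can be sketched in a few lines of Python. This is just an illustration of the subtraction described in the text, not any official NPS tooling:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 ratings."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # ratings of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # ratings of 0 through 6
    # Percentage of promoters minus percentage of detractors: -100 to 100
    return 100 * (promoters - detractors) / len(scores)

# Ten responses: four promoters, three passives (7s and 8s), three detractors
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # → 10.0
```

Note that sevens and eights (the "passives") count toward the total number of respondents but neither for nor against the score.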
What’s not to love?
Like other interesting tools such as brainstorming or Agile, NPS is commonly misunderstood and its practice distorted. People enthusiastic about the idea of NPS press an NPS-like survey into service in all sorts of off-label ways, sowing confusion within their companies. The most common distortions are:
- “Once each year…” – It’s common to receive an NPS-like question about this or that more often than yearly, increasing the likelihood that the next distortion will take place.
- “…disconnected from any specific sales or customer service interaction…” – It’s especially common to receive NPS-like questions during or immediately after a customer service interaction, a sales interaction, and the like. These are not NPS, as the responses are heavily influenced by the quality of a specific interaction.
- “…ask each of your registered users…” – If your users are not your customers, asking about user loyalty will garner different results than asking about customer loyalty.
- “…on a scale of zero to ten…” – This part of the NPS method is rarely violated.
- “…how likely they are to recommend your brand or product to a friend or colleague.” – This part of the NPS method is less commonly violated, but it does happen.
Any one of these deviations would make your NPS-like survey not NPS. And one of them, confusing users with customers in situations where the two are distinct, is likely to produce misleading results.
It sort of sounds like I’m defending NPS here, much as I would Agile or brainstorming. But I’m not. The real difficulties with NPS, even if you manage to avoid the common traps above, are these:
- There’s scant evidence that respondents’ scores on an NPS survey are actually related to their recommendation behavior. This isn’t surprising when we consider that the NPS question “how likely are you to…” is asking that person to speculate about their future behavior, which is much less reliable than asking people to report on their past behavior. “Satisfaction” and “liking” are better predictors of recommendations than “likelihood to recommend.”
- NPS is no better than more direct measures of customer satisfaction at predicting sales growth.
- NPS is not apparently predictive of other behaviors that demonstrate loyalty, such as repeat purchases.
It’s not that you shouldn’t use NPS at all. Just be aware of what it is and what it is not. Listen carefully for the real question behind your organization’s wish to use NPS, and see if that question can be answered more directly.