What this means for the NPS is that whenever we try to run cross-country comparisons of what has been promised to us as a universal score, we are comparing apples to pears. What would be a particularly damning score for Ghana might be a rather positive one for South Korea. This presents a peculiar problem for international companies, where the NPS can differ greatly across markets, which is hard to explain to a layperson who isn’t interested in the methodological concerns around NPS.
What’s behind the number?
In the course of my career, both as a market/consumer researcher and as a UX researcher, I’ve heard this phrase and variations of it at least a billion times:
“I’m not giving it a rating of 10, because there is always room for improvement”
I’m not going to argue whether it’s a universal feeling, as we’ve seen that cultural aspects come into play, but what I know for sure is that participants in the UK are quite unlikely to give the highest rating even when perfectly satisfied. In classic Likert scales, where each number usually has a verbal anchor (e.g. 10 = very satisfied) and positive and negative values get netted, this isn’t usually a problem. It is a problem, though, when it comes to the NPS.
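The effect of such a cultural rating habit on the score is easy to demonstrate with a quick sketch. The ratings below are made up purely for illustration; the only thing assumed is the standard NPS arithmetic (percentage of promoters minus percentage of detractors):

```python
# Hypothetical, made-up ratings: two markets with the same underlying
# satisfaction, except market B's respondents shave one point off every
# rating ("there's always room for improvement").
market_a = [10, 10, 9, 9, 8, 8, 7, 7, 7, 6]
market_b = [r - 1 for r in market_a]

def nps(ratings):
    # NPS = % promoters (ratings 9-10) minus % detractors (ratings 0-6)
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return round(100 * (promoters - detractors) / len(ratings))

print(nps(market_a))  # 30
print(nps(market_b))  # -20
```

A uniform one-point shift in how people use the scale swings the score by 50 points, even though nothing about the underlying satisfaction has changed.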
An average person doesn’t think of an 8 or a 7 as a bad (or “passive”, as NPS calls it) rating. It leaves something to improve on, but it’s not necessarily associated with major problems. And how about a rating of 5? Traditionally, most people consider five a neutral option: you are not experiencing any issues, but you are also not over the moon about the product or service. It doesn’t mean you’re about to switch your loyalties; it might simply mean you’ve not been “wowed” by your experience with a company.
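For anyone who hasn’t seen the mechanics spelled out: the standard NPS calculation buckets 9–10 as promoters, 7–8 as passives and 0–6 as detractors, then subtracts the percentage of detractors from the percentage of promoters. A minimal sketch with invented ratings:

```python
def nps(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the total but toward neither group."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Ten reasonably content customers (all 7s and 8s) score exactly 0,
# and swapping a single 7 for a 6 drags the score negative.
print(nps([7, 8, 7, 8, 8, 7, 8, 7, 8, 7]))  # 0
print(nps([7, 8, 7, 8, 8, 7, 8, 7, 8, 6]))  # -10
```

The scale’s verbal meaning to respondents and its arithmetic meaning in the formula are quite different things.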
The meaning assigned to each NPS value seems almost arbitrary to a layperson. It’s also not something that is explained to participants ahead of time; they are simply presented with a standard-looking scale and asked to answer. In fact, many studies have looked at how well the NPS score translates into financial performance or other company metrics. Unfortunately for NPS fans, the correlation turns out to be pretty low. For example, a Cambridge University study found that “…NPS measurement does not necessarily correspond to actual behavior.”
Why not, though? People are telling us they would definitely, 100%, absolutely recommend our company to others; why wouldn’t that translate into higher revenue, customer loyalty or subscriptions? Well, for one, most companies offer more than one standalone product or service. Personally, I am a big fan of one particular skincare brand; almost all my skincare products are from them. However, there is one particular product I bought a while ago and absolutely hated. Did I start to hate the brand, or become a “detractor” in NPS language? No, I’m still a big fan. But I also absolutely discouraged my friends from buying the one product I disliked. So, overall I’m a promoter, but my behaviour is contradictory: I’m both recommending and discouraging people from buying from the brand, which in the NPS world is an impossibility.
It gets even more complex when we think about demographics. My likelihood of recommending something is not based solely on my liking what the brand has to offer; it also needs to match what I know about the person I’m recommending a product to. For example, I’m very happy with my choice to switch from Apple products to Samsung, but I’d be highly unlikely to propose the same change to my mother, who is bad with technology and has been using Apple products for ages. Cost is another important factor: I might be perfectly happy with my 10-year-old Fiat Panda, but would I really recommend it to a person who can afford to buy a new Mercedes?
Overly simplifying human behaviour leads to bad data. No human being is so simple that one number can perfectly explain their future behaviour.
The classic NPS question asks how likely you would be to recommend a product/service to a friend or colleague. The idea behind it is that only loyal, content customers would risk recommending something to people close to them, so NPS plays a little on the emotional side of respondents to get a truthful answer out of them. In essence, the thinking is more or less sound: the higher my satisfaction, the more likely I am to pass the good stuff on to the people close to me.
In reality, it poses more questions than it answers. First of all, there is a huge difference between B2B and B2C companies. B2B companies, in particular, inevitably struggle to translate NPS data into reality, for the simple reason that selling a product to a company is very different from selling a product to an individual. There are many more people involved in decision making in B2B than in B2C, and the people making the actual decisions might not even be the people experiencing the need for the product (think heads of department vs ordinary employees). On the other hand, your recommendation as a user might have no effect on others if they don’t experience the same needs as you, don’t have the same budget, and so on. In a nutshell, for a B2B recommendation to work, many more conditions need to be met, which is rarely the case.
But the type of company is not the only problem. At a high level, the biggest concern NPS raises is its overall usefulness. So you’ve found out that a large chunk of your customer base is “detractors”. Too bad for you, since NPS gives you no indication as to why. Some researchers try to rectify this by adding an open-ended question asking customers why they gave a particular score. This is not a bad idea in itself, but as researchers we all know that open ends rarely provide much insight, and no one wants to spend hours every six months deciphering largely unhelpful text data when there are much better, more systematic ways to find out the same information.
I might be overly stern, but as a researcher rather than a marketing professional, I see almost zero value in NPS. At best, it’s another arbitrary, meaningless metric that your company can slap onto its website or distribute as part of its pitch to investors. At worst, it’s misleading and produces either an overly positive or an overly negative outlook on your company’s health, neither of which will, in all likelihood, translate into real-life behaviour on the part of your customers.
There are various ways to track customer experience meaningfully, but it will always require much more than one question. I’m generally not a big fan of a one-size-fits-all approach, so my recommendation is to tailor your customer experience survey to the needs of your company. Standardised CSAT surveys are a good way to start, but they’ll still require customisation on your side. Customer experience is a broad area, and as an in-house researcher you know best which metrics matter for your company and customers. Don’t be misled by a fancy name and methodology. Sometimes what we need to do is go back to basics and ask ourselves: what do we not know about our customers’ experience?
References and Further reading
- Devesh Gadkari — “Factors Influencing the Net Promoter Score”, 2018
- Mohamed Zak et al. — “The Fallacy of the Net Promoter Score: Customer Loyalty Predictive Model”, 2016
- Kuba Krys et al. — “Societal emotional environments and cross-cultural differences in life satisfaction: A forty-nine country study”, 2021
- Nicholas Fisher — “Good and bad market research: A Critical Review of Net Promoter Score”, 2018
- Douglas Grisaffe — “Questions about the Ultimate Question: Conceptual Considerations in Evaluating Reichheld’s Net Promoter Score”, 2022