Net Promoter Score (NPS): Concerns with Boiling Down Human Behavior to a Single Metric 


Since its introduction in 2003 by Fred Reichheld, the Net Promoter Score (NPS) has been widely adopted as a go-to indicator of customer satisfaction and loyalty. NPS relies on a single question: “On a scale of 0 to 10, how likely are you to recommend X company?”

The responses are assigned to three categories: respondents who answer 9 or 10 are “promoters”; those who answer 7 or 8 are “passives”; and those who answer 6 or lower are “detractors.”

To compute NPS, subtract the percentage of detractors from the percentage of promoters. Companies undoubtedly appreciate the simplicity of the measure, and how easy it is to explain to stakeholders. However, from a cognitive psychology perspective, NPS is a less-than-ideal measure on which to hang your hat. 
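The calculation described above can be sketched in a few lines of Python (the function name and sample responses are illustrative, not from any real survey):

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses.

    Promoters answer 9 or 10; detractors answer 6 or lower.
    NPS = (% promoters) - (% detractors), ranging from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters and 2 detractors out of 8 responses
print(nps([10, 9, 8, 7, 6, 0, 10, 10]))  # -> 25.0
```

Note that the two passives (the 8 and the 7) contribute to the denominator but not the numerator, which is why a survey full of mildly satisfied customers can still produce a score of zero.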

Category Assignment is Arbitrary in Calculating NPS

One obvious issue with assigning respondents to the three categories based on their numeric response is scale usage bias. Each of us tends to use rating scales in particular ways, and this phenomenon varies dramatically across cultures. Some people rate nearly everything toward the positive end of the scale. Others center their answers around the midpoint and remain noncommittal in either direction. Some skew all their responses negatively because of an exceedingly high internal bar against which they measure things. And, of course, some use the full range of the scale. As a result, the same numeric score can be low for one person, neutral for a second, and high for a third. NPS treats all three people identically and makes no allowance for an individual’s scale usage bias.
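To make this concrete, here is a hypothetical illustration: three respondents with the same underlying satisfaction, whose differing scale habits (positive-leaning, midpoint-centered, and harsh) land them in three different NPS buckets. The scores and labels below are invented for the example.

```python
def category(score):
    """Map a 0-10 response to its NPS bucket."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Hypothetical: equally satisfied respondents with different scale habits
lenient_rater, midpoint_rater, harsh_rater = 9, 7, 6
for score in (lenient_rater, midpoint_rater, harsh_rater):
    print(score, "->", category(score))
```

A one-point difference in scale usage is enough to move a respondent across a category boundary, even when their actual sentiment is unchanged.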

Likelihood to Recommend is Context Dependent

NPS assumes a given person’s likelihood to recommend is static across all situations. However, it is human nature to assess the unique context of each situation and the people involved. A brand that is a great fit for your best friend might be a poor fit for your grandmother. For example, imagine your father-in-law’s old iPod recently ceased to work, and he was looking for a new way to listen to music before bedtime. You use Apple Music and Spotify but do not recommend either to him, as both would be too difficult for him to figure out. You might, however, recommend Spotify to your tech-savvy sister, though not Apple Music, as you know she does not care for the Apple brand. Perhaps you would recommend both to your mother. In the context of your father-in-law and sister, your NPS category for Apple Music would be detractor, while in the context of your mother, it would be promoter.

Likelihood to Recommend is Product Dependent

Just as it assumes the likelihood to recommend is static across all situations, NPS also assumes it is static across all products/services a brand offers. You may love one product from a brand, and at the same time hate other products from that brand. For example, perhaps a respondent purchased bamboo toilet paper, tissues, and paper towels from Brand X in an effort to reduce their environmental impact. The respondent thinks toilet paper is great, and they have already recommended it to people. They find the tissues are okay, nothing to write home about, but good enough. However, this individual is not happy with the paper towels and will tell anyone who will listen to never spend money on them. With respect to the toilet paper, this respondent would be a promoter for Brand X; thinking about the tissues they would be passive; and thinking about the paper towels they would clearly be a detractor. 

Detractors are Not All Created Equal

The paper towel example illustrates another issue with NPS: not all detractors are the same. Indicating that you “would not recommend” a brand is not the same as indicating that you would speak out against it. Many people know that when all else fails, the way to get a brand to respond to your issue is to post something negative on social media. You may get the runaround or no response through traditional customer service channels, but a public complaint often gets a speedy reply. The power of negative word of mouth is strong, and brands know this. By grouping genuine detractors, people inclined to speak negatively about a brand and discourage others from using it, together with those who merely dislike a brand but never mention it to anyone, NPS overlooks this key distinction.

Tipping the Scales

Raise your hand if you have ever been to a car dealership and heard something like “In a few days, you will get a survey. If you are not going to give us a perfect score of 10, please contact us first.” How about if you have ever seen a pop-up after you have purchased something online saying something along the lines of, “Refer a friend and get a coupon for your next purchase”? Intentional or not, these company and employee behaviors skew customers’ likelihood to recommend in their favor by applying pressure or offering rewards rather than by having a high-quality product or service that consumers genuinely want to talk about. 

Moving Away From NPS

Undoubtedly, likelihood to recommend is an important metric in assessing consumer satisfaction and loyalty. However, NPS provides a metric devoid of context that is easily skewed by a variety of factors. This single number, obtained and evaluated in a vacuum, cannot provide the insights needed to understand how consumers think and feel about your brand. To determine whether your current brand initiatives are working well or need improvement, a more robust brand health tracking program provides a better road map forward.

The Sago Strategy + Insight Brand Health Metric is composed of four key intertwined measures that encompass both rational and emotional aspects of brand satisfaction and loyalty: expectations, preference, love/hate, and likelihood to recommend. While the framework remains the same, we realize each market is unique, and as such, the scoring is custom-tailored to capture the right measures. Our brand health surveys also incorporate a series of market-specific behavioral calibration questions to prompt respondents to think about prior shopping situations and get them into the right frame of mind to evaluate brands. By tracking brand health over time, companies can evaluate what campaigns and product/service improvements have been able to move the needle in the eyes of consumers. 

Digging a Little Deeper

It is nice to get a snapshot of how your brand stacks up against the competitive set, and the Brand Health Metric scorecards provide that 10,000-foot view. However, the power of the Brand Health Metric lies in understanding why one brand scores better than another. The Competitive Action Blueprint digs deeper into the competitive space by relating various attributes of brands to the overall scoring. The resulting blueprint provides guidance on prioritizing potential areas for improvement that would have the most impact and would aid in improving your competitive position. 

We can also dig deeper into the emotional connection consumers have to your brand through Ambivalence Analysis. Uncover what has the power to turn consumers who are neutral toward your brand into brand lovers, and what can tip those same ambivalent folks into being brand detractors. Ambivalence Analysis identifies attributes where improving performance will delight consumers and attributes where poor performance is likely to turn them against you. 

Interested in learning more about Brand Health Tracking? Reach out at [email protected]. 
