Net Promoter Score (NPS) is one of the most popular metrics in business today. And while NPS has many supporters singing its praises, a seemingly equal number of critics have emerged to decry it, citing a number of reasons why it should be abandoned. Amid this debate, misconceptions have emerged on both sides. To truly understand this metric, these common NPS myths should be debunked.
Myth #1: NPS is not predictive
Some critics claim that NPS does not predict customer loyalty. While it may be true that NPS isn’t predictive in some cases, in certain industries, under particular circumstances, a substantial body of research has shown that NPS is indeed an excellent predictor of repeat purchases, referrals, revenue, and business growth. Numerous studies have found a strong relationship between high Net Promoter Scores and revenue.
Therefore, it is inaccurate to say that NPS is categorically not predictive. Furthermore, predictiveness is far from the only benefit that NPS has to offer. When companies adopt NPS as a key metric, it inspires business growth, customer-centricity, and cross-functional alignment as different departments unite under the banner of decreasing detractors and increasing promoters.
It can also serve as a starting point for deeper customer research, a diagnostic tool for improving the customer experience, and an opportunity to connect with individual respondents.
Myth #2: NPS is not useful
Many NPS critics argue that the score isn’t useful. In some cases they’re correct: NPS is only as useful as you make it. When it’s deployed incorrectly or treated merely as a vanity metric, NPS won’t be very useful.
When done right, it’s an extremely valuable source of customer insight. Reaping the benefits of what NPS has to offer begins with setting it up thoughtfully from the start. Common mistakes in deploying NPS include:
Not sending NPS frequently enough. Some companies only measure NPS on an annual basis. With the fast pace of most businesses today, this isn’t often enough to spot shifting trends. It shouldn’t just be used as a once-in-a-while pulse check, but as part of a cycle of continuous improvement.
Sending NPS to the same people too often. This results in survey fatigue, lowered response rates, and aggravated customers who begin to select low scores because they feel overtaxed.
Surveying at the wrong time. Customers need to have been using the product for long enough to form an opinion before answering the NPS survey. At the same time, they shouldn’t be surveyed so long after their experience that they don’t remember it. A common benchmark for surveying is 30 days after getting a product or service, but this could vary. The best bet is to think through when a survey would make sense from the customer’s perspective. Long story short, sending at the wrong time can lead to vague and unactionable data.
Not asking for details. A score on its own doesn’t reveal much. But a follow-up question like “What’s the primary reason for your score?” along with a comment field will illuminate the driver behind the score, which is ultimately what will provide the insight to make improvements.
All of these mistakes can undermine the usefulness of any NPS program, but getting the most out of it doesn’t stop with the setup. The next step is to mine the data to learn from customers, follow up with them, and make improvements.
In this way, NPS is an important microphone for customer voices and a tool to drive business action. As customer experience leader Bruce Temkin has encouraged, “Instead of obsessing about the specific metric being used, companies need to obsess about the system they put in place to make changes based on what they learn from using the metric.”
As such, businesses should aim to monitor and evaluate NPS results to find the drivers of satisfaction and dissatisfaction. Trends will naturally emerge, and feedback can be put into categories. Armed with this knowledge, different departments adjust their approach to do more of what customers like and less of what they don’t. After making any changes, the previous NPS score can be used as a benchmark against the new one to see if the changes had the intended result.
Reviewing NPS responses also reveals opportunities to follow up with customers based on their status as detractors, passives, or promoters. Following up with detractors can mean righting a wrong and changing a customer’s opinion of the company. Connecting with passives can turn them into promoters. And since promoters are willing to recommend a business, they should be encouraged to do so.
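The arithmetic behind these groups is standard: respondents scoring 0–6 are detractors, 7–8 are passives, and 9–10 are promoters, and the overall score is the percentage of promoters minus the percentage of detractors. A minimal Python sketch of that classification and of routing each group to a follow-up (the follow-up actions here are illustrative assumptions, not prescriptions):

```python
def classify(score: int) -> str:
    """Standard NPS bands: 0-6 detractor, 7-8 passive, 9-10 promoter."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

def net_promoter_score(scores: list[int]) -> float:
    """NPS = % promoters minus % detractors, on a -100 to 100 scale."""
    groups = [classify(s) for s in scores]
    promoters = groups.count("promoter")
    detractors = groups.count("detractor")
    return 100 * (promoters - detractors) / len(scores)

# Illustrative follow-up routing by group (example actions, not a standard).
FOLLOW_UP = {
    "detractor": "reach out to resolve the issue",
    "passive": "ask what would make the experience a 9 or 10",
    "promoter": "invite a referral or review",
}
```

For example, four responses of 10, 9, 0, and 7 yield two promoters and one detractor, so the score is 100 × (2 − 1) / 4 = 25.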
Myth #3: NPS is a product metric
NPS isn’t a metric for just one team. While customers may be evaluating their likelihood to recommend a product, their responses could be affected by the brand, the messaging, the product experience, the customer support, the pricing, the competition, and many other factors.
Undeniably, NPS is influenced by the work of every team within a company. No one team should be held solely responsible for something that takes company-wide, cross-functional coordination to change. Holding a team accountable for something outside its control is not only demoralizing but also ineffective.
That’s not to say it’s impossible to evaluate each team’s contribution to the NPS score. One way to learn what actions each team can take is to ask a small set of follow-up questions in which customers rate factors tied to specific teams, such as their customer support exchanges or purchasing experience.
Similarly, NPS should not be used to rate individual employees, such as after a customer support interaction. Use a Customer Satisfaction Score (CSAT) or Customer Effort Score (CES) metric for that instead, and use NPS to gather broader insights and mobilize the entire organization toward a common goal.
Myth #4: NPS is the only metric you need
NPS is only one data point, and it alone can’t tell you everything you need to know about your business and customer experience. It should be used as part of a suite of metrics alongside options like a Customer Satisfaction Score (CSAT) and/or Customer Effort Score (CES). These can supplement learnings from NPS data as well as provide completely new perspectives.
Looking at metrics in combination can also reveal dependencies and relationships and lead to more insights. For example, grouping customers who have answered both an NPS and a CSAT survey into buckets can further highlight opportunities for each group. Those with high NPS and high CSAT have the highest potential for advocacy and should be contacted as soon as possible to refer friends, provide quotes, or participate in other customer marketing activities. Those with low NPS and low CSAT are at the highest risk of churning and should be contacted immediately if there’s any chance of keeping them around. Plus, combining metrics can improve predictive power.
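As a rough illustration of this bucketing, here is a small Python sketch; the threshold values and bucket names are assumptions for illustration, not standard definitions:

```python
def bucket(nps_score: int, csat_score: int,
           nps_cut: int = 9, csat_cut: int = 4) -> str:
    """Place a respondent in a 2x2 grid by NPS (0-10) and CSAT (1-5).

    The cutoffs are illustrative assumptions: 9+ counts as high NPS
    (a promoter), 4+ as high CSAT on a 5-point scale.
    """
    high_nps = nps_score >= nps_cut
    high_csat = csat_score >= csat_cut
    if high_nps and high_csat:
        return "advocacy candidate"   # ask for referrals, quotes, reviews
    if not high_nps and not high_csat:
        return "churn risk"           # contact immediately
    return "mixed signal"             # investigate which experience lagged
```

A customer scoring 10 on NPS and 5 on CSAT lands in the advocacy bucket, while one scoring 2 and 1 lands in the churn-risk bucket; mismatched scores flag a gap worth investigating.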
Similarly, NPS itself can be broken out into segments, as different populations of customers may have different answers and needs. Separating NPS scores by product or platform and by type of customer (for example, free versus paid, or new versus established) highlights nuances between groups and allows for more specific remediation plans.
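Segmenting can be as simple as grouping responses by an attribute before computing the score. A brief sketch, assuming responses arrive as dictionaries with a 0–10 "score" field and a segment attribute such as "plan" (the field names are hypothetical):

```python
from collections import defaultdict

def nps_by_segment(responses: list[dict], key: str) -> dict[str, float]:
    """Compute NPS separately for each segment.

    responses: dicts like {"score": 9, "plan": "paid"} (hypothetical shape).
    key: the field to segment on, e.g. "plan" or "platform".
    """
    buckets: dict[str, list[int]] = defaultdict(list)
    for r in responses:
        buckets[r[key]].append(r["score"])

    result = {}
    for segment, scores in buckets.items():
        promoters = sum(s >= 9 for s in scores)   # 9-10
        detractors = sum(s <= 6 for s in scores)  # 0-6
        result[segment] = 100 * (promoters - detractors) / len(scores)
    return result
```

Comparing the per-segment scores (say, free versus paid plans) then points remediation efforts at the groups that need them most.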
In addition, customer research shouldn’t stop at a single-question effort like NPS. More in-depth research tactics like longer-form surveys and interviews can be used to go deeper on specific topics of interest and help you get to know customers even better.
Two sides to every story
NPS is not a holy grail, as some advocates might have you believe. Nor is it an impractical frivolity, as some opponents might urge. Used correctly and as part of a rounded approach, it can be a highly useful tool in serving your customers and helping you achieve success.