VoC to the Rescue
One of our clients does a fantastic job supporting a critical and complex hardware, software, and wireless networking solution. But they faced a familiar challenge: because customer satisfaction was so high, the C-suite wondered aloud whether they were overinvesting in Support. Support budgets came under increasing scrutiny.
Fortunately, this company has a rigorous Voice of the Customer (VoC) program. As we’ve analyzed their relationship survey data over the years, a clear trend emerged: customers who called themselves Extremely Satisfied with Support had a much, much higher Net Promoter Score (NPS) than those who called themselves “just” Satisfied. Since high NPS scores correlate with renewals, expansions, and referrals, it’s good business to invest to make customers Extremely Satisfied. And now Support executives have the data they need to have productive C-suite conversations.
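The segment analysis described above boils down to simple arithmetic: compute NPS (the percentage of promoters, scoring 9–10, minus the percentage of detractors, scoring 0–6) separately for each satisfaction segment. A minimal sketch, using hypothetical survey rows rather than the client's actual data:

```python
from collections import defaultdict

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical rows: (satisfaction-with-Support label, 0-10 recommend score)
responses = [
    ("Extremely Satisfied", 10), ("Extremely Satisfied", 9),
    ("Extremely Satisfied", 8),  ("Extremely Satisfied", 10),
    ("Satisfied", 8), ("Satisfied", 6), ("Satisfied", 9), ("Satisfied", 5),
]

# Group recommend scores by satisfaction segment, then score each segment
by_segment = defaultdict(list)
for label, score in responses:
    by_segment[label].append(score)

for label, scores in sorted(by_segment.items()):
    print(f"{label}: NPS {nps(scores)}")
```

With these made-up numbers, the Extremely Satisfied segment scores far higher than the merely Satisfied one; the real analysis is the same calculation run over the full relationship-survey dataset.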
None of this would have happened if they had been content to simply congratulate themselves for having higher-than-industry-benchmark CSAT scores. They needed the analysis to show why it matters. In the process, they also learned other factors that build customer loyalty—factors they can develop and encourage in their customer base.
Where Can You Hear Your Customers?
Sometimes, VoC is just a fancy name for surveys that you send to customers after closing a case. That’s just the start! To get a 360° view of the customer experience, you need so much more than transactional surveys.
Relationship surveys are the most strategic way of hearing from your customers. Once a year, reach out to each of them and find out what they think of you. If your customer base is large enough, you can reach out to half of them every six months, or a quarter of them every quarter.
Because it covers the entire relationship between your company and the customer, the relationship survey should have cross-functional input. Marketing, sales, customer success, product development, and professional services should all have input. The CEO or CCO should be the executive sponsor, and should be briefed on the results.
The relationship survey is the right place to ask the Net Promoter Score question, as well as other behavioral questions (“How do you train users in our software?”) and perceptual questions (“How do you perceive the effectiveness of Feature X?” or “Which of these benefits is most important to you?”). Look for correlations between behavior or perception and outcomes like NPS. One or two open-ended questions can drive great insights if you’re able to invest in analyzing them.
If you have different customer audiences (economic buyers, users, IT staff), segregate the contacts and customize the questions for each audience. Of course, if you don’t really care what (say) IT staff think, don’t survey them.
In Support, post-case-close transactional surveys are the most common and, generally, the most reviewed and discussed. They often ask questions about overall satisfaction plus perceptions of the support engineer—knowledge, ownership, empathy, responsiveness, and professionalism.
We understand the desire to measure staff performance with surveys, and they can yield specific, helpful feedback for recognition or coachable moments. But I am concerned that the satisfaction question colors everything else. If I’m dissatisfied with your product or policy, how can that not affect how I rate the person who delivered the outcome I don’t like?
Rather than satisfaction, which is often out of Support’s control, we prefer to ask a variation on the Customer Effort Score (CES) question: “Please indicate your agreement with the following statement: ‘It was easy to resolve my issue,’” scored on a five-point “Strongly Agree” to “Strongly Disagree” scale.
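There are a few common ways to summarize a five-point agreement scale; one is the "top-2-box" share, the percentage of respondents who Agree or Strongly Agree. A quick sketch with hypothetical responses (5 = Strongly Agree, 1 = Strongly Disagree):

```python
# Hypothetical responses to "It was easy to resolve my issue."
# 5 = Strongly Agree ... 1 = Strongly Disagree
responses = [5, 4, 4, 5, 3, 2, 5, 4]

# Top-2-box: share of respondents answering Agree (4) or Strongly Agree (5)
top2 = sum(1 for r in responses if r >= 4) / len(responses)

# Mean score is another common summary, useful for tracking trends over time
mean = sum(responses) / len(responses)

print(f"CES top-2-box: {top2:.0%}, mean: {mean:.1f}")
```

Whichever summary you choose, pick one and track it consistently so period-over-period comparisons are meaningful.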
To drive response rates up, keep surveys short. I’d far rather have data from a one-question CES survey with a 40% response rate than a twelve-question survey with a 15% response rate. And don’t resurvey individuals within a 90-day period: we don’t want to penalize people for opening cases.
Explicit feedback from surveys is good. But implicit feedback from knowledge reuse is especially valuable, because you get it from almost every customer interaction.
Tracking the knowledge base article(s) that resolve each case is a foundation of KCS. While there are many benefits, measurable feedback on the customer experience may be the most important.
Long, long ago when I started work as a new product manager, I had to plan an upcoming software release. I asked Support what the biggest case driver was, and they showed me a pie chart: installation generated the most cases. I asked what installation issues drove the cases, so I could get them fixed. They told me I could read all the cases and draw my own conclusions. I didn’t have time for that!
If they had been tracking KB article reuse, a simple report would have told me which KB articles were most often linked to Installation cases. Now that I could have done something useful with.
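That "simple report" is just a frequency count over case-to-article links. A minimal sketch, with hypothetical case records and article IDs standing in for whatever your case-tracking system exports:

```python
from collections import Counter

# Hypothetical export: (case category, linked KB article id)
cases = [
    ("Installation", "KB-101"), ("Installation", "KB-101"),
    ("Installation", "KB-204"), ("Installation", "KB-101"),
    ("Installation", "KB-204"), ("Networking",  "KB-330"),
]

# Count how often each article resolved an Installation case
install_links = Counter(kb for cat, kb in cases if cat == "Installation")

for kb, n in install_links.most_common(3):
    print(f"{kb}: linked to {n} Installation cases")
```

The top of that list is exactly what a product manager needs: the specific installation issues worth fixing in the next release.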
Page views (PVs) in self-service also give you great data about what customers care about.
Quantitative data is great, but stories are compelling. When I’m doing an assessment of a support center, the most powerful thing I can do is to relate something that I observed that, with the best of intentions, caused a bad customer experience or wasted time and money. Your people are talking with customers every day—just think of the powerful stories they have to tell! All we need to do is to encourage them to share.
In The Best Service is No Service, Bill Price, who ran customer service for Amazon, relates that Jeff Bezos used to ask him to share one thing they’d learned from customers that week. Fifty-two insights a year, if acted upon, can make a real difference in the customer experience.
In addition to harvesting insights that come in through your support center, you can go out and get some. Sage set out on a 50-day Sage Listens nationwide tour in an RV, where a rotating team of executives visited customers where they worked. Leaders not only gained insights and empathy about the reality of how their products are used “in the wild,” they also created unforgettable customer experiences.
Customer Experience Journey Mapping
If a customer with a specific issue gets passed around from group to group, repeating themselves each time, we don’t need a survey to know it’s going to be a lousy customer experience. That’s why we think that Customer Experience Journey Mapping is part of any healthy VoC program.
The Rules of the Road
- Never ask a question if you won’t (or can’t) act on the answer. If there’s a technology issue that is unfixable, don’t ask how customers feel about it.
- Plan and test. Start with your research objectives, then turn them into questions and activities, and test with your intended audience to make sure they interpret the questions the way you meant them. For a book’s worth of detail on this, we recommend Fred van Bennekom’s Customer Survey Guidebook.
- Beware non-response bias. While statistical significance is an issue for surveys, the much more pervasive problem is that the people who respond are often not representative of all your customers. They may be delighted, or they may be livid, but they care enough to respond, which makes them different from those who don’t. Do everything you can to get good response rates. Shorten, simplify, follow up, incent.
- Make sure someone owns analyzing and taking action. VoC data by itself doesn’t do anything. Unless someone is accountable for driving improvements from the data, your VoC program is wasted time and effort.
- Don’t create incentives that taint the data. Have you ever had a car dealer tell you, “If you don’t give me a five on this survey, I fail”? The data is now tainted—a four is no longer “quite good.” Paying staff based on CSAT, for example, seems like a good idea at first blush. But cash rewards lead to an almost insurmountable temptation to influence the results, which corrupts them.
What have you learned from your customers recently?