The right survey to measure each touchpoint of the customer journey

Learn which surveys you should use to measure the experience at each touchpoint in the customer journey map (CJM).


Bob Thompson

January 19, 2020


To prepare for a recent business trip, I booked a roundtrip flight, hotel, and rental car. All done via Expedia, because it gives me one place to handle all travel arrangements, and provides a very nice app to remind me what I booked along with directions, phone numbers, and more. 

As my trip progressed, I got email requests for feedback at each step. For example, a few hours after checking into my hotel, I got an email with this message.

[Image: Expedia survey invitation email]

Now, I’ll give Expedia points for keeping the email short–just asking whether I was happy, or not. But the email is a bit confusing because it asks several questions, and there’s no neutral option.

In any case, I clicked on the icon to begin a survey process online, where I saw this:

[Image: Expedia online survey, overall rating question]

The “overall rating” question gives me more options than the email would have suggested. I wasn’t so sure I wanted to “tell others” about my stay. If I just wanted to give feedback to Expedia or the hotel, I’d probably drop out at this point. 

After providing an overall rating, I got a request to rate four different attributes, using the same “terrible” to “excellent” scale. 

  • Cleanliness.

  • Staff & service.

  • Property condition & facilities.

  • Amenities.

A progress indicator at the bottom told me I had a few more questions to go. To complete the survey, I was then asked to:

  • Comment on things I liked or didn’t like.

  • Enter a display name and location (with the option to display with public review).

  • Select from possible reasons for the trip.

  • Rate the area around the property.

  • Rate about a dozen local attractions.

  • Add a final rating and comment about the city.

The entire process took a few minutes, and I think it was designed well for a web or mobile experience. 

After my trip was over, I received one final request using a Net Promoter Score (NPS) rating scheme. This is the right time to use NPS. 

A “would you recommend?” type question works well when evaluating a relationship or complete experience, but it can be confusing if asked after individual touchpoints.

[Image: Expedia NPS survey]

After answering the likelihood-to-recommend question, I was asked to explain why in an optional comment. Then, I was asked another dozen or so questions which I found mostly redundant with previous surveys. My guess: Expedia wanted one survey to catch people after the completed experience, who may or may not have answered one of the touchpoint surveys.

How could Expedia improve this process?

First, there are some quirks in the invitation email that might depress response rates. A better approach: show the first question in the email with the actual rating choices, then start the online survey with that choice filled in. A well-designed email with a smooth transition online is critical. 

Second, it felt a bit long, especially when I was asked to rate local attractions. This was optional but might discourage some from completing the survey. 

Third, why just email? I’m using the Expedia app, and Expedia certainly knows that. The entire survey could have been deployed in the app, with a text message or popup invitation. SMS surveys are increasingly popular, too. Suggestion: make the choice of survey methods part of the customer profile.
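To illustrate that last suggestion, here is a minimal sketch of how a survey dispatcher might honor a channel preference stored in the customer profile, falling back to whatever channel is available. The profile fields and channel names are hypothetical, not any particular platform's data model.

```typescript
// Hypothetical sketch: route a survey invitation based on a stored channel
// preference in the customer profile, falling back to email.
type SurveyChannel = "email" | "sms" | "in_app";

interface CustomerProfile {
  id: string;
  email: string;
  phone?: string;
  hasAppInstalled: boolean;
  preferredSurveyChannel?: SurveyChannel; // captured in the profile, per the suggestion above
}

function chooseSurveyChannel(profile: CustomerProfile): SurveyChannel {
  // Honor an explicit preference when the customer has set one.
  if (profile.preferredSurveyChannel) return profile.preferredSurveyChannel;
  // Otherwise prefer the channel the customer is already using.
  if (profile.hasAppInstalled) return "in_app";
  if (profile.phone) return "sms";
  return "email";
}
```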

Key point: Feedback surveys have to be thoughtfully designed into each touchpoint, in terms of the channel, timing, and survey questions. 

Think “Survey+” for Customer Feedback

As you might expect, customer listening–often called a Voice of the Customer (VoC) program–is an essential practice for any CX program. Without feedback, you can’t understand customers’ perceptions and make plans to improve.

The most common way to listen is with surveys like those in the foregoing example. In CustomerThink’s research, around 90% of CX initiatives use surveys. 

Surveys are an example of solicited feedback. The brand invites the customer to complete a survey after a touchpoint, at the end of a completed experience, or periodically to assess the overall relationship. Surveys are highly versatile: they can range from a single yes/no question to elaborate market research tools. Structured questions also make it easy to calculate scores such as NPS, CSAT, or CES.
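For reference, here is a minimal sketch of how those scores are commonly computed from structured responses; exact conventions vary by vendor (for example, some report CSAT as an average rather than a top-two-box percentage).

```typescript
// Minimal sketch of common score conventions; assumes non-empty ratings arrays.

// NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale.
function netPromoterScore(ratings: number[]): number {
  const promoters = ratings.filter((r) => r >= 9).length;
  const detractors = ratings.filter((r) => r <= 6).length;
  return ((promoters - detractors) / ratings.length) * 100;
}

// CSAT: share of "satisfied" responses (4 or 5 on a 1-5 scale), as a percentage.
function csatScore(ratings: number[]): number {
  const satisfied = ratings.filter((r) => r >= 4).length;
  return (satisfied / ratings.length) * 100;
}

// CES: average effort rating (here a 1-7 "how easy was it?" scale).
function customerEffortScore(ratings: number[]): number {
  return ratings.reduce((sum, r) => sum + r, 0) / ratings.length;
}

// Example: netPromoterScore([10, 9, 7, 3]) === 25
```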

The main weakness of surveys is they tend to get input from very happy or unhappy customers. Response rates of 10% or less are common for general customer satisfaction surveys, leaving CX leaders to wonder: What do all those other customers think?

The answer is not to ditch surveys. Instead, supplement them with other forms of feedback such as:

  • Text captured in customer emails, web site forms, call center agent notes, chat interactions, or even SMS.

  • Public comments on social media sites like Twitter, Facebook, LinkedIn, and review sites.

  • Call recordings which, like text, can be analyzed to learn about customer sentiment and issues.

  • Indirect or inferred feedback from analyzing customer interaction data.

  • Frontline employee feedback, because workers see first hand what’s going on, even if the customer doesn’t take time to give feedback.

Many Voice of the Customer (VoC) programs are limited to survey-based feedback. But surveys should be thought of as a foundation for getting customer feedback, not the only method used. 

In CustomerThink’s recent study of 200+ CX initiatives, respondents were asked about sources of feedback. The research compared results between three key segments:

  • Starting (17%): “no results yet.”

  • Developing (58%): “seeing some signs of CX improvement.”

  • Winning (25%): “outcomes can be quantified” or “created a competitive advantage.”

The study found that using only surveys is not a CX success driver. Winning CX initiatives use text and social media at least 80% of the time, compared to half of those in a Developing stage. Noticeable gaps were also found in the usage of operational data, call center recordings and website feedback.

[Image: CustomerThink feedback sources chart]

Conclusion: More mature and successful CX programs are using a wider variety of feedback channels. With some planning, it’s possible to identify dozens of feedback sources beyond surveys. Other sources could include, says Bill Price of Driva Solutions: 

  • Output from advisor council meetings.

  • Customer service agent notes and messages.

  • Complaints on third-party review sites.

  • Focus groups of buyers and sellers. 

  • Text from sales team interactions. 

  • Posts and comments on social media.

  • Third-party market research.

So, when developing your VoC strategy, think about how to collect survey data at each touchpoint and what other data can be used to build a more complete picture of your customers’ “voice.”

Effective deployment of surveys by touchpoint

To get the best response rates, surveys should be designed into the experience. Let’s step through a fictional customer journey and discuss how best to ask for feedback at each touchpoint, using this journey map developed by Jim Tincher of Heart of the Customer, a specialist in journey map consulting. 

[Image: Surveys for customer journey map touchpoints]


One caveat: don’t take this as a model for the only or the right way to document a journey map. There are dozens of possibilities. That said, the following are a few key points to keep in mind. 

Design each map for a specific persona 

Unless you run a very small or simple business, you have different types of customers who will take different journeys. Create a different map for each major persona–a fictional “stand-in” for a type of customer. Personas can be developed based on customer goals, demographics, and other characteristics that may influence behavior. 

Stacy Sherman, director of customer experience & employee engagement at Schindler Elevator Corporation (U.S.), says:

It’s important to START with defining personas and then build journey maps, not the other way around. Journey mapping must not be a “cookie-cutter” approach, as people have different mindsets and needs. Furthermore, it’s important to validate the journey map with real users to identify areas of optimization.

Notice that this sample map is for “Jane,” a consumer looking for a health plan.

Break the journey into major touchpoints

In this example for the Jane persona, the journey is broken into four collections of interactions that belong together–known as touchpoints. 

  • Awareness: What sparks the customer to start thinking about a need or a goal to be achieved?

  • Research & Consideration: How does the customer go about understanding possible solutions, vendors, and rationales for buying?

  • Purchase: What is the process to make a final decision and consummate the purchase?

  • Post-Purchase: How will the customer get access to the solution/service, learn how to use it, and get support?

One big touchpoint missing from this example is the usage of the product or service. In the case of a journey to buy health insurance, that's perhaps not so critical.

But for most companies, the product or service sold is a major source of value for the customer, and the usage experience should be represented in the journey map. A different map for Jane could delve into making a claim, for example. 

CustomerThink’s research uses a generic journey map as shown here. Tailor for your company and each persona.

[Image: CustomerThink journey map process]

Document the customer’s emotional reaction

One of the key things about a real journey map, as opposed to a process map, is information about how the customer perceives their interactions. Perception is reality. In this example, you’ll see each stage is noted as a positive, neutral, or negative experience. How is this determined? Through VoC and other data. 

I can’t overemphasize the importance of building maps with real customer intelligence, not just internal opinions. Surveys can certainly help, but don’t forget to use verbatim comments, argues Lynn Hunsaker, chief customer officer at ClearAction Continuum:

You already have a treasure trove of insights on-hand; make use of them for outside-in perspective. You need to keep a pulse on the various players in a customer account who have a say in purchase decisions. The consequences will elevate your thinking to customers’ jobs-to-be-done, or customers’ outcomes-based buying. 

Note interaction channel(s) used

Note how each touchpoint includes an icon showing how the customer is interacting using social media, phone, web, email, etc. In many journeys, there will be multiple ways for the customers to interact at each touchpoint. For customer service in the post-service step, they could make a call, get help online, or even visit a store. 

Each of the major interaction channels is a candidate for feedback collection. Where possible, see if you can integrate the survey invitation into the same channel that the customer is already using. When you use Skype, for example, you get a feedback invitation immediately after completing a call. 

Or, for customers on the move, make sure the survey is smartphone friendly. That means short, mobile responsive, and appropriately timed. After a car rental return experience, the customer will have a few minutes on the airport shuttle, and another hour or two at the airport, where a survey has a good chance of being answered. Better yet, use alerts to take immediate action on critical issues raised. Otherwise, complaints could end up on social media for all to see!
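As a sketch of that alerting idea, a response handler might flag any low touchpoint rating for immediate human follow-up. The threshold and notification hook below are assumptions for illustration.

```typescript
// Hypothetical sketch: flag low touchpoint ratings for immediate follow-up
// before the complaint surfaces on social media.
interface TouchpointResponse {
  customerId: string;
  touchpoint: string; // e.g. "car_rental_return"
  rating: number;     // 1 (terrible) to 5 (excellent)
  comment?: string;
}

const CRITICAL_THRESHOLD = 2; // assumed cut-off for "needs a human now"

function handleResponse(
  response: TouchpointResponse,
  notify: (message: string) => void
): void {
  if (response.rating <= CRITICAL_THRESHOLD) {
    notify(
      `Critical feedback from ${response.customerId} at ${response.touchpoint}: ` +
        (response.comment ?? "no comment provided")
    );
  }
}
```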

Touchpoint survey best practices

I’d like to close with thoughts on how best to engage customers as they progress through their journey. Here’s some general advice from the e-book How to Use Customer Loyalty Metrics: NPS, CES & CSAT.

For touchpoint surveys, the goal is to understand how well the organization performed at that point in the customer journey, while memory is fresh. CSAT and CES can both work well. The NPS “would you recommend” question can be confusing for customers because referral behavior is usually based on more than just one recent interaction. 

Touchpoint surveys should be as short as possible to maximize the response rate. Some experimentation may be required to find the best number and avoid survey fatigue. At Sun Basket, for example, post-service surveys are 5-8 questions, sent to 50% of cases mainly by email. Survey requests are limited to one every 45 days.
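A rough sketch of that kind of sampling and throttling rule is shown below; the 50% sample and 45-day quiet period come from the Sun Basket example, while the data fields are assumptions.

```typescript
// Sketch of survey eligibility: sample roughly 50% of closed cases and
// suppress invitations if the customer was surveyed within the last 45 days.
interface SurveyHistory {
  customerId: string;
  lastSurveyedAt?: Date;
}

const SAMPLE_RATE = 0.5;      // survey ~50% of cases
const QUIET_PERIOD_DAYS = 45; // at most one request every 45 days

function shouldInvite(history: SurveyHistory, now: Date = new Date()): boolean {
  if (history.lastSurveyedAt) {
    const daysSince =
      (now.getTime() - history.lastSurveyedAt.getTime()) / (1000 * 60 * 60 * 24);
    if (daysSince < QUIET_PERIOD_DAYS) return false;
  }
  return Math.random() < SAMPLE_RATE;
}
```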

Please keep in mind that there is no such thing as a generic journey. However, for this article, I’ll use some generalized stages to discuss how to get feedback on different parts of the journey.

Stage 1: Awareness

One of the big misconceptions in CX is that customer feedback and journey maps are only for customers. Strictly defined, customers are people who have already purchased, or at least have expressed serious interest in doing so. 

But you’re missing a huge opportunity if you don’t figure out why some potential prospects never contact your brand. In the Awareness stage, our consumer Jane sent out a tweet looking for information about health care plans, and the response set her off exploring one choice. 

You really can’t ask Jane to fill out a survey at this point. But public comments on social media can be mined and analyzed to understand:

  • What brands are being recommended for the problem your company solves?

  • What are customers and users saying about their experiences with your brand and your competitors?

  • What content is being shared to help prospects figure out possible “shortlist” solutions?

While Twitter and Facebook are popular for consumer research, B2B buyers will spend more time on LinkedIn and product review sites. 

The perception of your current customers will be reflected in these public comments. So, analyzing social or public media can provide important clues about issues that create Detractors and Advocates. Improving experiences for your current customers will not only help retain them, but also increase the likelihood of positive references on social media, which will drive engagement with your brand. 

Best Metric: CSAT. As noted, for the Awareness stage, focus on public comments and social media to understand key issues. Some text analytics tools will output a derived CSAT or sentiment score, which you can use as an overall measure of this part of the journey.
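As a minimal illustration, if a text analytics tool returns a per-comment sentiment score (say, -1 to +1), an overall Awareness-stage measure could be a simple rescaled average. The input format here is an assumption, not any specific tool's output.

```typescript
// Sketch: roll per-comment sentiment scores (assumed range -1..+1 from some
// text analytics tool) into a single 0-100 measure for the Awareness stage.
interface ScoredComment {
  source: "twitter" | "facebook" | "review_site" | "other";
  sentiment: number; // -1 (very negative) to +1 (very positive)
}

function overallSentimentScore(comments: ScoredComment[]): number {
  if (comments.length === 0) return NaN;
  const avg =
    comments.reduce((sum, c) => sum + c.sentiment, 0) / comments.length;
  return ((avg + 1) / 2) * 100; // rescale -1..+1 to 0..100
}
```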

Stage 2: Research & Consideration

If Awareness is the spark to begin a customer journey, the Research and Consideration stage fans the flame. Or snuffs it out.

At this stage, Jane is digging deeper to look for information about a vendor of interest. She gets feedback from a friend on Twitter, visits a vendor website, and reads online reviews. This is a very typical process in today’s world.

A study by Forrester Research found that 74% of consumers use search engines for consideration and purchasing. Some of these searches will end up on your company web site, where you have a chance to get feedback.

In research mode, prospects are usually anonymous and most will want to remain so. But some will take time to give feedback on their web experience. One technique I’ve seen used on major e-commerce sites is to invite visitors to give feedback via an exit popup (“before you go…”) or a pop-under (“will you give us your feedback after your visit?”). 
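One common way such an exit invitation is wired up is with an exit-intent listener that fires when the cursor leaves the top of the viewport. This browser-side sketch is illustrative only, not any particular vendor's implementation.

```typescript
// Browser-side sketch: show a "before you go..." feedback invitation once,
// when the cursor leaves the top of the viewport (a common exit-intent cue).
let invitationShown = false;

function showFeedbackInvitation(): void {
  // In practice this would open the survey modal or pop-under.
  console.log("Before you go... would you give us feedback on your visit?");
}

document.addEventListener("mouseout", (event: MouseEvent) => {
  const leavingTop = event.clientY <= 0 && event.relatedTarget === null;
  if (leavingTop && !invitationShown) {
    invitationShown = true;
    showFeedbackInvitation();
  }
});
```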

Consider the use of incentives to get prospects to engage. B2B companies typically offer some “gated” content requiring user registration to access. Why not use the same technique to ask for web experience feedback with a reward of a research report or e-book?

Best Metric: CSAT. Ask the customer to provide an overall rating, then ask about their goals, how well the site met their needs, performance, etc.

Stage 3: Purchase

Now you’ve got a real customer, or do you? For online purchasing, abandoned shopping carts are a huge issue. On average, about 70% of shopping carts are abandoned, based on 41 different studies. You can get some clues by using an exit popup survey, perhaps coupled with a discount offer or other incentive. 

This is also a good opportunity to use supplemental information from Google Analytics and other specialized web analytics tools. By analyzing page viewing history, dwell times, and other behavioral data, you can understand why prospects are leaving at the moment of truth. Some common reasons include surprise or high fees for shipping, having to create an account, payment security concerns, and an overly long or confusing process. 
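To make that analysis concrete, a simple sketch: compute the abandonment rate from checkout sessions and tally where abandoned sessions ended. The event shape below is a generic assumption, not the Google Analytics API.

```typescript
// Sketch: compute cart abandonment rate and tally where abandoned sessions
// ended, from generic web analytics events (field names are illustrative).
interface CheckoutSession {
  sessionId: string;
  completedPurchase: boolean;
  lastPageViewed: string; // e.g. "/checkout/shipping"
}

function abandonmentReport(sessions: CheckoutSession[]) {
  const abandoned = sessions.filter((s) => !s.completedPurchase);
  const rate = abandoned.length / sessions.length;

  // Count the final page seen in each abandoned session.
  const exitPages = new Map<string, number>();
  for (const s of abandoned) {
    exitPages.set(s.lastPageViewed, (exitPages.get(s.lastPageViewed) ?? 0) + 1);
  }
  return { abandonmentRate: rate, exitPages };
}
```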

After purchase, you have the customer’s contact information. Congrats! Now you can invite feedback in a number of different ways including:

  • Adding a feedback survey link to the purchase confirmation message.

  • Sending a separate survey invitation email. 

  • Using SMS, if the customer has agreed to that method of communication. 

  • Promoting an in-app survey, if your brand offers one.

Best Metric: CES or CSAT. For consumer purchases, the ease of completing the purchase is critical. CES is most useful for routine interactions that should be done efficiently. CSAT should be used to evaluate more complex purchasing experiences, including business-to-business. In this case, while being “easy” is still important, there are likely many other factors that influence the customer’s perception.

Stage 4: Solution Usage

In CustomerThink’s research, the purchased “solution” (product or service) is worth about 40% of the perceived value, although there are differences between industries. Non-solution interactions (buying and support experience) are worth another 40%, and the remaining value driver is price. 

You should study your own customers’ “loyalty drivers”–the attributes that motivate them to be loyal, or not. I mention this because I see a lot of attention given in the CX world to interactions around the solution. But if the solution is itself the main loyalty driver–as is the case in many B2B scenarios–then you’ll want to prioritize surveys to get solution feedback. 

These days, an emailed invitation coupled with an online (web/mobile) survey is commonplace. Emails can be timed to get feedback as users gain experience with the solution (a simple timing sketch follows this list):

  • Unboxing and initial usage–Can the user get value immediately?

  • Onboarding–Are documentation, training and other resources sufficient?

  • Advanced usage–Do experienced customers have different priorities?
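Here is that timing sketch, assuming the survey stage is chosen from days since purchase; the day cut-offs are illustrative, not a recommended schedule.

```typescript
// Sketch: pick which usage-stage survey to send based on days since purchase.
// The cut-offs below are illustrative assumptions.
type UsageStage = "initial_usage" | "onboarding" | "advanced_usage";

function surveyStageFor(purchaseDate: Date, now: Date = new Date()): UsageStage {
  const days =
    (now.getTime() - purchaseDate.getTime()) / (1000 * 60 * 60 * 24);
  if (days <= 7) return "initial_usage"; // unboxing / first value
  if (days <= 30) return "onboarding";   // docs, training, resources
  return "advanced_usage";               // experienced-customer priorities
}
```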

Increasingly, major brands are designing feedback into the solution. Online services (e.g. banking) are an obvious choice. But even physical products like tractors can be connected in the world of the Internet of Things (IoT). While you might not want to ask the user to take a traditional survey, perhaps a voice-activated feedback channel can enable real-time feedback rather than trying to catch them later. 

Best Metric: CSAT. For usage experiences, you’ll want to understand solution performance, usability, quality, etc. Customer satisfaction surveys are highly versatile and can be adapted to almost any use.

Stage 5: Service and Support

Poor service experiences are a leading cause of defection. When customers have a problem, emotions run hot and they are looking for quick, competent, and empathetic help. Is your brand delivering?

One challenge is the diversity of service channels available. Many customers still prefer to call for help with urgent or sensitive situations. But the big trend is towards more digital channels, including email, chat, text messages, and apps. 

Most channels that can be used to communicate can be used to solicit feedback. When calling into a call center, customers can be offered a chance to give feedback at the end of their call–via touchpad entry and voice comments. Digital channels obviously lend themselves to surveys. 

SMS is an intriguing new option because, according to Gallup, 90% of adults have a cellphone and open rates exceed 90%. Take that, email! Customers can answer structured questions easily and add comments as necessary. But, it’s not as user-friendly as a web-based survey, and brands need to navigate cost and user privacy issues. 

Best Metric: CES or CSAT. For most basic consumer service interactions, speed thrills. That’s why CES is growing in usage as a top-level metric. But for complex B2B support calls, the total time to resolve an issue could be more important, which suggests using CSAT instead.

Closing thoughts

You may have noticed that I didn’t recommend the use of NPS in any of the stages above.

There are some cases when NPS can work at a touchpoint level. For instance, when the main source of value and referral behavior is based on that touchpoint.

For example, AMEX uses a variant of NPS (would you recommend us to a friend?) after support calls, which are hugely important to customer loyalty.

But my general advice is that NPS is better used to evaluate the relationship or end-to-end experience.

The best brands deliver intentional or designed experiences that drive customer loyalty. 

Think of customer feedback as an experience to be designed into each touchpoint. Make thoughtful choices of survey design, channels, and timing. Use supplemental sources and data to fill in the gaps, and you’ll have the insight needed to drive CX change and win!

Check out our free Customer Journey Map guide with map examples and a free template!

Learn how GetFeedback can help you exceed customers’ expectations—start your free trial today.

About the guest author

Bob Thompson is the CEO of CustomerThink, an independent research and publishing firm focused on customer-centric business management.


*Net Promoter, Net Promoter System, Net Promoter Score, NPS and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld and Satmetrix Systems, Inc.