
Customer feedback survey templates

Six question sets you can copy, customize, and ship today.


A good template is not a script — it is a starting point that has already removed the obvious mistakes. The question sets below cover the moments that matter most in a customer lifecycle: onboarding, post-purchase, support, churn, product research, and the relationship pulse. Copy what fits, replace the brackets with your specifics, and save the questions you cut for next time.

Onboarding feedback

Sent twenty-four to seventy-two hours after first meaningful use. Goal: catch friction while the experience is still fresh and the customer has not yet decided to give up.

  • "How easy was it to get started with [product]?" (1–7 effort scale)
  • "What were you hoping to do first when you signed up?" (open)
  • "Did you run into anything confusing during setup?" (open, optional)
  • "On a scale of 0–10, how likely are you to keep using [product]?" (forecast question)
  • "What is one thing that would make this easier?" (open)

Branch the open questions on the effort score: ask "what was hard?" only of respondents who scored 4 or below. The pattern is closer to a Customer Effort Score with diagnostic follow-up than a generic satisfaction survey — see CSAT vs NPS vs CES for the metric choice.
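
A minimal sketch of that branch in Python, assuming a hypothetical survey runner that builds the follow-up list from the effort answer. The question texts come from the template above; the threshold constant and function name are assumptions for illustration.

    # Gate the diagnostic follow-up on the effort score. The threshold and
    # the list-of-strings question format are assumptions, not a real API.
    EFFORT_THRESHOLD = 4  # 1-7 scale, where 7 means "very easy"

    def follow_ups(effort_score: int) -> list[str]:
        """Open questions to show after the effort question."""
        questions = ["What were you hoping to do first when you signed up?"]
        if effort_score <= EFFORT_THRESHOLD:
            # Only respondents who found setup hard see the diagnostic.
            questions.append("What was hard about getting started?")
        questions.append("What is one thing that would make this easier?")
        return questions

    print(follow_ups(3))  # includes the "what was hard" diagnostic
    print(follow_ups(6))  # skips it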

Post-purchase

Sent three to seven days after the order arrives, depending on category. Long enough to use the product, short enough that the experience is still vivid.

  • "How satisfied are you with [product]?" (1–5 satisfaction)
  • "How does it compare to your expectations?" (worse / same / better)
  • "How was the delivery experience?" (1–5)
  • "What would you tell a friend who was thinking of buying this?" (open)
  • "Anything you would change about the product or the buying experience?" (open, optional)

The "tell a friend" wording surfaces marketing copy as a side effect of the survey. Tag the answers monthly and feed the strongest verbatims into your product page tests. Post-purchase survey best practices covers send timing in more detail.
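
For the monthly tagging pass, a rough keyword first-pass can pre-sort the verbatims; the taxonomy and keyword lists below are hypothetical placeholders, and a human still reads the tagged answers before anything reaches a product page test.

    # Keyword first-pass over "tell a friend" verbatims. Taxonomy and
    # keywords are hypothetical; tune them to your product category.
    TAXONOMY = {
        "quality": ["sturdy", "well made", "durable"],
        "value": ["worth it", "price", "cheap"],
        "delivery": ["arrived", "shipping", "fast", "late"],
    }

    def tag_verbatim(text: str) -> list[str]:
        lowered = text.lower()
        tags = [tag for tag, words in TAXONOMY.items()
                if any(word in lowered for word in words)]
        return tags or ["untagged"]

    print(tag_verbatim("Arrived fast and feels really sturdy."))
    # -> ['quality', 'delivery']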

Support CSAT

Fires automatically when a ticket is closed. Two questions max — anything more is a tax on the customer for using your support channel.

  • "How would you rate the support you received?" (1–5)
  • "What is the main reason for your score?" (open, optional)

For high-volume support, run CES instead: "It was easy for me to get my issue resolved" on a 1–7 agreement scale. Effort correlates with retention more cleanly than satisfaction in support contexts. Either way, the open follow-up is where the signal lives.
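
In code, the trigger is a single event handler on ticket close. Here is a sketch with hypothetical names throughout; the payload shape and the send_survey stub are assumptions, not any particular helpdesk's API.

    # Hypothetical ticket-closed handler firing the two-question CSAT.
    CSAT_QUESTIONS = [
        {"id": "csat", "text": "How would you rate the support you received?",
         "scale": (1, 5), "required": True},
        {"id": "why", "text": "What is the main reason for your score?",
         "open": True, "required": False},
    ]

    def send_survey(to: str, questions: list[dict], context: dict) -> None:
        # Stub: a real version calls your survey tool's API here.
        print(f"survey -> {to}: {len(questions)} questions, "
              f"ticket {context['ticket_id']}")

    def on_ticket_closed(payload: dict) -> None:
        send_survey(to=payload["requester_email"],
                    questions=CSAT_QUESTIONS,
                    context={"ticket_id": payload["ticket_id"]})

    on_ticket_closed({"requester_email": "jo@example.com", "ticket_id": 118})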

Churn and cancellation

The exit survey runs at the moment of cancellation, before the user closes the tab. Keep it short — they are leaving, you owe them speed.

  • "What is the main reason for canceling?" (multiple choice with five to seven categories: too expensive, missing feature, found alternative, no longer need it, support issue, other)
  • "If we changed one thing, what would bring you back?" (open, optional)
  • "Would it be okay if someone reached out to learn more?" (yes / no, with optional contact)

The categorical question gives you a chart that survives the slide review; the open question gives you the surprises. Tag the verbatims monthly with the same taxonomy used for NPS detractor themes.
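
Producing that chart-ready tally takes a few lines. A sketch with hypothetical sample responses; the categories mirror the template above.

    # Count cancellation reasons into the fixed categories.
    from collections import Counter

    CATEGORIES = ["too expensive", "missing feature", "found alternative",
                  "no longer need it", "support issue", "other"]

    responses = ["too expensive", "missing feature", "too expensive",
                 "found alternative", "other", "too expensive"]  # sample data

    counts = Counter(responses)
    for category in CATEGORIES:
        print(f"{category:>17}  {counts[category]:2d}  "
              f"{counts[category] / len(responses):4.0%}")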

Relationship pulse (NPS variant)

Twice a year per customer, capped so heavy users do not get hit more often than light ones.

  1. "How likely are you to recommend [product] to a friend or colleague?" (0–10)
  2. Branch: detractors get "What is the main reason for your score?", passives get "What would have made it a 9 or 10?", promoters get "What did you like most?"
  3. Optional: "Anything else you want us to know?" — the highest-signal question in many surveys.

For the score math, segmentation, and follow-up routing, see the NPS complete guide.
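
For reference, the score math and the branch routing fit in a short sketch, using the standard NPS segment cutoffs (detractor 0–6, passive 7–8, promoter 9–10); the sample scores are hypothetical.

    # NPS math plus follow-up routing by segment.
    def segment(score: int) -> str:
        if score >= 9:
            return "promoter"
        if score >= 7:
            return "passive"
        return "detractor"

    FOLLOW_UP = {
        "detractor": "What is the main reason for your score?",
        "passive": "What would have made it a 9 or 10?",
        "promoter": "What did you like most?",
    }

    scores = [10, 9, 8, 6, 9, 3, 7, 10]  # hypothetical responses
    segments = [segment(s) for s in scores]
    promoters = segments.count("promoter") / len(scores)
    detractors = segments.count("detractor") / len(scores)
    nps = round((promoters - detractors) * 100)

    print(f"NPS = {nps}")          # 25 for this sample
    print(FOLLOW_UP[segment(6)])   # detractor follow-up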

Product research

Used to validate or kill a feature idea before committing engineering time. Send to a known segment, never an open-list panel.

  • "How often do you do [task this feature would help with]?" (frequency scale)
  • "How do you currently solve this?" (open or multiple choice if you already know the alternatives)
  • "How disappointed would you be if [proposed feature] was not available?" (very / somewhat / not)
  • "What would the ideal version of this look like for you?" (open)

The "how disappointed" framing surfaces real demand more reliably than "would you use it" — people overestimate future use of features they think sound nice. The widely used benchmark, from the Sean Ellis product-market fit test, is at least 40% of respondents answering "very disappointed"; below that, the feature has not demonstrated demand yet.
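
Measuring that share is a one-liner per batch. A sketch with hypothetical sample answers:

    # Share of "very disappointed" answers vs. the 40% benchmark.
    answers = ["very", "somewhat", "very", "not", "very", "somewhat"]

    very_share = answers.count("very") / len(answers)
    print(f"very disappointed: {very_share:.0%}")
    print("real demand" if very_share >= 0.40 else "no clear demand yet")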

Template hygiene: match the question to the moment, keep it short enough to finish on a phone, always pair quantitative scales with one open follow-up, and route the answers to a human who will read them. For wording rules across all of them, see how to write survey questions. Response-rate levers are in how to increase survey response rate.

Frequently asked

How long should each of these surveys be?
Onboarding and post-purchase: under three minutes. Support CSAT: two questions, period. Churn: under ninety seconds. Relationship pulse: two questions plus an optional third. Product research can run longer if the audience is qualified and the topic relevant. The shorter the survey, the higher the response rate and the cleaner the data.
Can I run all of these at once?
You can, but you need a suppression rule that prevents the same customer from receiving more than one survey per month. Without it, your most engaged users get hit constantly while quieter ones get missed, and the data ends up biased toward your loudest segment.
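
A minimal version of that suppression rule, assuming a hypothetical in-memory store; in production the timestamp would live in your CRM or survey tool.

    # One-survey-per-month suppression check.
    from datetime import datetime, timedelta, timezone

    SUPPRESSION_WINDOW = timedelta(days=30)
    last_surveyed: dict[str, datetime] = {}

    def can_survey(customer_id: str) -> bool:
        now = datetime.now(timezone.utc)
        last = last_surveyed.get(customer_id)
        if last is not None and now - last < SUPPRESSION_WINDOW:
            return False  # surveyed within the window: suppress
        last_surveyed[customer_id] = now
        return True

    print(can_survey("cust_42"))  # True: first contact
    print(can_survey("cust_42"))  # False: suppressed for 30 days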
Should the questions be required or optional?
Make the core score required; make every open follow-up optional. Forcing open answers raises completion-time pain without raising data quality — most people who do not want to answer will type "n/a" or quit. Required score plus optional verbatim is the right balance.
How do I localize these for non-English audiences?
Translate the wording with a native speaker, not just a tool. Pay particular attention to scale anchor labels — "satisfied" and "easy" do not always map cleanly across languages, and a poor translation skews the distribution. Pilot the localized version with a small sample before rolling out.
Where do I store and analyze the responses?
In the survey tool for raw response review and routing, in your CRM or warehouse for cross-cutting analysis. The minimum stack is one place where a support manager can see this week's detractor verbatims and one place where the analytics team can join scores to retention and revenue.