10 Rules for Surveys That Deliver Real Insights
A poorly designed survey is worse than no survey. It gives you confidence in data that is wrong. Here are ten rules that fix the most common mistakes we see in B2B survey programmes.
- Every question you add beyond five costs you 5-10% of your respondents. If you will not act on the answer, delete the question.
- Leading questions and double-barrelled questions give you inflated scores that hide real problems. They are worse than not measuring.
- Timing matters as much as question design. A CSAT survey sent within an hour of a support interaction can see 2-3x the response rate of one sent the next day.
- Test the survey on real people before distributing. What is clear to you is often ambiguous to your customer.
A bad survey does not just waste time. It produces data that looks credible but is wrong, and decisions built on it carry false confidence.
The good news: these mistakes are predictable and fixable. The ten rules below address the ones that recur most often.
Rule 1: Fewer questions, better data
This is the single most impactful rule. Response rates collapse as surveys get longer:
| Questions | Typical Response Rate |
|---|---|
| 1-3 | 65-80% |
| 4-6 | 50-65% |
| 7-10 | 35-50% |
| 11+ | Under 35% |
For every question, ask: "Will we make a different decision based on this answer?" If not, remove it.
For transactional surveys (NPS, CSAT, CES), one metric question plus one open-ended follow-up is enough. That gives you the score and the reason.
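To make the drop-off concrete, here is a minimal sketch of the rule of thumb above. The 70% baseline and the 7.5% per-question cost are assumed illustrative figures (a midpoint of the article's 5-10% range), not measured data:

```python
# Illustrative model: each question beyond five costs roughly 5-10% of
# remaining respondents. Baseline rate and per-question drop are assumptions.

def expected_completes(invites: int, questions: int,
                       base_rate: float = 0.70,
                       drop_per_extra_question: float = 0.075) -> int:
    """Rough estimate of completed responses for a survey of a given length."""
    extra = max(0, questions - 5)
    rate = base_rate * (1 - drop_per_extra_question) ** extra
    return round(invites * rate)

for q in (3, 5, 8, 12):
    print(q, "questions ->", expected_completes(1000, q), "completes per 1000 invites")
```

On these assumptions, going from 5 to 12 questions costs you roughly 300 of every 700 would-be respondents, which is why "will we act on this answer?" is the right bar for every added question.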
Rule 2: One thing per question
"How satisfied were you with the price and quality?" is a double-barrelled question. If the customer is happy with quality but unhappy with price, what do they answer? You get a compromised score that tells you nothing.
Split it. One question about price. One about quality. Each answer is now actionable.
Rule 3: No leading questions
Leading: "We constantly work to improve our service, how satisfied are you?"
Neutral: "How satisfied are you with our service?"
The preamble in the leading version implies you should be satisfied. It inflates scores, hides problems, and gives you data that feels reassuring but is fiction.
Leading: "Our support team is dedicated and helpful, do you agree?"
Neutral: "How would you rate your experience with our support team?"
If you are getting suspiciously high scores, check your question wording first.
Rule 4: Write like a human
No jargon. No internal terminology. No complex sentence structures.
Bad: "Evaluate the extent to which our service-oriented approach to the customer journey meets your requirements."
Good: "How well do we meet your needs?"
Test this way: if a 16-year-old without industry knowledge would not understand the question instantly, rewrite it.
Rule 5: Pick the right scale and never change it
- NPS: 0-10. Industry standard. Do not modify it.
- CSAT: 1-5 Likert. Most intuitive for respondents.
- CES: 1-7. Provides useful nuance.
Critical rules: do not mix scale directions within a survey (high = positive on one question, high = negative on the next). Always label endpoints, not just numbers. And never change your scale mid-programme; it destroys your trend data.
Rule 6: Question order shapes answers
Questions create context for the questions that follow, so a poorly ordered survey distorts its own results.
Wrong order:
- "Did you experience any problems with your order?"
- "How satisfied were you with your order?"
Question 1 forces the customer to think about problems, which drags down the CSAT score in question 2.
Principles:
- Lead with your most important metric question (NPS or CSAT)
- Put open-ended questions at the end
- General before specific
- Do not prime the respondent with negative context before a satisfaction question
Rule 7: Let people say "I don't know"
If a customer has not used a feature, forcing them to rate it creates noise. Instead, include "Not applicable" or "Haven't used this" where relevant. Fewer valid responses are better than more invalid ones.
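One practical consequence: when you do offer "Not applicable", exclude those answers from the average instead of coding them as zeros. A minimal sketch with made-up responses:

```python
# Sketch: computing CSAT while excluding "Not applicable" answers.
# The response list is hypothetical example data.

responses = [5, 4, "N/A", 3, 5, "N/A", 4]

valid = [r for r in responses if isinstance(r, int)]  # drop N/A, keep ratings
csat = sum(valid) / len(valid)                        # average valid ratings only

print(f"CSAT {csat:.2f} from {len(valid)} of {len(responses)} responses")
```

Reporting "4.2 from 5 of 7 responses" is honest; averaging N/A in as zero would report 3.0 and manufacture a problem that does not exist.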
Rule 8: Design for mobile first
Over half of surveys are opened on phones. A survey that does not work on mobile loses half your potential respondents before they start.
- Large, touch-friendly buttons
- Radio buttons, not dropdowns
- Short text fields with clear prompts
- No matrix tables on mobile
- Test on iOS and Android before sending
Rule 9: Send at the right moment
Timing affects both response rate and data accuracy.
Transactional surveys (CSAT, CES): Within 1-24 hours of the interaction. The closer the better. A survey sent three days after a support case produces lower response rates and less accurate recall.
Relational surveys (NPS): Tuesday to Thursday, 9-11 AM local time for B2B. Avoid sending right after invoicing or a known service incident.
Suppression rule: One survey per customer per 30-60 days, regardless of which team is sending it. Coordinate across departments. For more on timing and channel strategy: How to Improve Your Survey Response Rate.
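The suppression rule reduces to a simple check against a shared record of when each customer was last surveyed. A minimal sketch, assuming such a shared record exists (the data store and names here are hypothetical):

```python
# Sketch of a global suppression check across all teams.
# `last_surveyed` stands in for a shared store; dates are hypothetical.
from datetime import date, timedelta

SUPPRESSION_DAYS = 45  # pick a value in the article's 30-60 day range

last_surveyed = {"acme": date(2026, 1, 10), "globex": date(2025, 10, 2)}

def may_survey(customer: str, today: date) -> bool:
    """True if the customer is outside the suppression window (or never surveyed)."""
    last = last_surveyed.get(customer)
    return last is None or today - last >= timedelta(days=SUPPRESSION_DAYS)

print(may_survey("acme", date(2026, 2, 1)))    # surveyed 22 days ago -> blocked
print(may_survey("globex", date(2026, 2, 1)))  # well past the window -> allowed
```

The key design point is that every department's sending tool consults the same record; a per-team rule does not prevent three teams from each surveying the same customer in one month.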
Rule 10: Test before you send
The most skipped step. What is clear to you is often confusing to your respondent.
Cognitive pretesting: Ask 3-5 people from your target audience to complete the survey while thinking aloud. Ask them: "What do you think this question means?" and "Is anything unclear?"
Technical testing: Desktop, tablet, mobile. Outlook, Gmail, Apple Mail. Conditional logic pathways. Display of response options. If it breaks on any device or client, fix it before distribution.
What We See Go Wrong
Five mistakes that recur across the survey programmes we review:
1. Too many questions. The internal stakeholder says: "While we have them, can we ask about..." The answer is almost always no. Every added question costs you 5-10% of respondents.
2. No open-ended questions. The metric tells you something is wrong. The text response tells you what and why. Skipping the open-ended question throws away the most actionable data.
3. Unanchored scales. "Rate us from 1 to 5" with no labels on what 1 and 5 mean. Respondents interpret the scale differently, and your data is unreliable.
4. Same survey for everyone. A customer in their first week should not see the same questions as a three-year customer. Use conditional logic and segmentation to keep the survey relevant.
5. No follow-up on responses. The most damaging mistake of all. Customers who take time to respond and never hear back are less likely to respond in the future, and more likely to churn. If you are not going to close the loop, do not open it.
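Mistake 4, the one-size-fits-all survey, is often just a missing routing step. A sketch of tenure-based survey selection; the thresholds and survey names below are hypothetical:

```python
# Sketch: route customers to a survey variant by tenure.
# Thresholds and variant names are illustrative assumptions.
from datetime import date

def pick_survey(signup: date, today: date) -> str:
    tenure_days = (today - signup).days
    if tenure_days < 30:
        return "onboarding"      # first impressions, setup friction
    if tenure_days < 365:
        return "relationship"    # standard relationship NPS
    return "long_term"           # renewal intent, account review

print(pick_survey(date(2026, 1, 20), date(2026, 2, 1)))  # -> onboarding
```

Most survey tools expose the same idea as conditional logic or audience segments; the point is that a twelve-day-old customer and a three-year customer should not see identical questions.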
Good survey design combines psychology, data literacy, and respect for your respondent's time. Keep it short. Be precise. Test before distribution. And act on what you learn.
Frequently Asked Questions
How many questions should a survey include?
As few as possible. For transactional surveys (NPS, CSAT, CES), 1-3 questions is ideal. Relationship surveys can go up to 7-8, but response rates drop 5-10% per question beyond five. The test for every question: "Will we make a different decision based on this answer?" If not, remove it.
What does a leading question look like?
"We work hard to improve our service, how satisfied are you?" is leading: the preamble implies you should be satisfied. The neutral version is "How satisfied are you with our service?" Leading questions inflate scores, create false confidence, and hide real problems.
Which scale should I use for each metric?
NPS is always 0-10. CSAT works best on a 5-point Likert scale, CES on a 7-point scale. Do not invent your own scales, do not mix scale directions within a survey, and always label the endpoints, not just the numbers.
When is the best time to send a survey?
For relational surveys: Tuesday to Thursday, 9-11 AM local time. For transactional surveys, the day matters less than the delay from the experience: send within 1-24 hours. Every hour of delay reduces both response rate and data accuracy.
How do we avoid survey fatigue?
Set a global suppression rule: no customer receives more than one survey per 30-60 days, regardless of which team sends it. Coordinate across sales, support, product, and marketing. If three departments each send their own survey in the same month, fatigue is inevitable.
Ready to know what your customers actually think?
SurveyGauge helps Nordic B2B companies move from gut feeling to data-driven CX decisions.
SurveyGauge Team
Customer Experience Experts
The SurveyGauge team helps companies measure and improve customer satisfaction through professional surveys, analysis, and advisory.
