What Is CSAT? Guide to Customer Satisfaction Score
CSAT is the most direct satisfaction metric you can use, but it is also the most abused. Here is how to deploy it properly, when to trust it, and when it misleads.
- CSAT measures satisfaction with a specific experience, not with your company overall. That distinction is critical to using it properly.
- The score is the percentage of customers who rated 4 or 5 on a 5-point scale. Simple to calculate, easy to game, important to guard against inflation.
- CSAT is strongest at touchpoint level: support, onboarding, delivery. It is weak at predicting loyalty or churn.
- A CSAT of 80% in financial services and 80% in e-commerce are different achievements. Always benchmark within your industry.
CSAT is the most intuitive metric in customer experience. You ask a customer how satisfied they were. They tell you. No segmentation into promoters and detractors, no effort scales. Just a direct question about a specific experience.
That simplicity is its greatest strength and its biggest trap. Because CSAT is easy to collect, companies collect it everywhere, act on it nowhere, and end up with dashboards full of numbers that drive zero improvement.
Used properly, CSAT is one of the sharpest diagnostic tools you have for fixing specific touchpoints. Used poorly, it becomes a vanity number that masks real problems.
The CSAT Question
The standard CSAT question is:
"How satisfied were you with [experience/product/service]?"
Three scale formats are common:
- 5-point scale: Very dissatisfied to Very satisfied (the industry standard)
- 3-point scale: Dissatisfied / Neutral / Satisfied (faster, less granular)
- 7-point scale: More nuance, but often confuses respondents
Stick with the 5-point Likert scale unless you have a specific reason not to. It is the best balance between detail and response rate.
- Scale: 1-5 (standard Likert)
- Satisfied responses: Score 4-5
- Measures: Satisfaction with a specific experience
- Best for: Support, onboarding, delivery, feature feedback
- Typical frequency: Immediately after interaction
Calculating CSAT
CSAT = (Number of satisfied responses / Total responses) × 100
"Satisfied" means scores of 4 and 5 on a 5-point scale. Scores of 1-3 are excluded from the numerator.
Example
You survey 300 customers after support interactions. 150 respond:
| Score | Count |
|---|---|
| 5 - Very satisfied | 75 |
| 4 - Satisfied | 45 |
| 3 - Neutral | 20 |
| 2 - Dissatisfied | 7 |
| 1 - Very dissatisfied | 3 |
Satisfied (4+5): 120
CSAT = (120 / 150) × 100 = 80%
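The calculation above can be sketched in a few lines of Python (a minimal illustration, independent of any particular survey platform):

```python
def csat(scores, satisfied_threshold=4):
    """Return CSAT as a percentage: the share of scores at or above the threshold."""
    if not scores:
        raise ValueError("no responses")
    satisfied = sum(1 for s in scores if s >= satisfied_threshold)
    return 100 * satisfied / len(scores)

# The worked example: 75 fives, 45 fours, 20 threes, 7 twos, 3 ones (150 responses)
responses = [5] * 75 + [4] * 45 + [3] * 20 + [2] * 7 + [1] * 3
print(csat(responses))  # → 80.0
```

Note that scores of 3 count toward the denominator but not the numerator, which is exactly why a "neutral-heavy" distribution drags the score down.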
An 80% CSAT is solid for most industries, but context matters: 80% in telecoms would be exceptional, while 80% for premium SaaS support would be a concern.
Where CSAT Works (and Where It Falls Short)
CSAT is a touchpoint metric. It answers: "Was this specific experience acceptable?" It does not answer: "Will this customer stay?" or "Is this customer loyal?"
Strong use cases
Customer support. The most common CSAT deployment. A post-resolution CSAT score tells you directly whether the support experience met expectations. Break it down by agent, channel, and issue type for actionable insight.
Onboarding. CSAT at onboarding milestones (day 7, day 30, day 90) reveals whether new customers feel equipped to use your product. Low onboarding CSAT is an early warning for first-year churn.
Product delivery. E-commerce and logistics companies use CSAT to measure the delivery experience separately from the product itself.
Feature releases. SaaS teams use in-app CSAT to gauge reception of new features while usage data is still ambiguous.
Events and webinars. Quick CSAT after a session captures attendee perception while the experience is fresh.
Where CSAT misleads
CSAT does not predict loyalty. A customer can score 4/5 today and churn next quarter because a competitor offered a better price. Nor does it capture effort: a customer can be "satisfied" with the outcome of a support interaction that took three transfers and two days. For those dimensions, you need CES and NPS.
CSAT vs. NPS vs. CES
| Metric | Measures | Timing | Scale |
|---|---|---|---|
| CSAT | Specific satisfaction | Transactional | 1-5 or 1-3 |
| NPS | Loyalty and recommendation | Relational/periodic | 0-10 |
| CES | Effort and friction | Transactional | 1-7 |
The three are complementary; none is complete on its own. For the full breakdown: NPS vs. CSAT vs. CES.
Response Rates by Channel
| Channel | Typical Response Rate |
|---|---|
| Email | 10-30% |
| In-app/in-product | 30-50% |
| SMS | 25-40% |
| Live chat popup | 40-60% |
| IVR (phone) | 5-15% |
The biggest lever is timing. Surveys sent within an hour of the experience consistently outperform those sent the next day. Keep it to one or two questions, use a recognisable sender, and test send times (B2B mornings on weekdays tend to work best).
For a deeper look at improving participation: How to Improve Your Survey Response Rate.
Industry Benchmarks
| Industry | Average CSAT |
|---|---|
| E-commerce | 78-82% |
| SaaS / Technology | 78-84% |
| Finance and Banking | 75-80% |
| Telecommunications | 62-70% |
| Healthcare | 76-82% |
| Hospitality and Travel | 80-85% |
| Public Sector | 65-72% |
These are indicative. Benchmark against your sub-industry and direct competitors, not generic averages.
What We See in Practice
Among the companies we work with, three CSAT patterns come up repeatedly:
Aggregate scores hide everything. A company-wide CSAT of 82% tells leadership things are fine. Yet break it down by agent, product line, and customer segment, and you find one team at 92% and another at 64%. The aggregate obscured a serious problem.
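That pattern is easy to demonstrate: two teams with very different scores blend into one reassuring aggregate. A minimal sketch, using illustrative numbers and assuming each response is tagged with a segment label:

```python
from collections import defaultdict

def csat_by_segment(responses):
    """responses: list of (segment, score) pairs.
    Returns (CSAT % per segment, overall CSAT %)."""
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)

    def pct(scores):
        return 100 * sum(1 for s in scores if s >= 4) / len(scores)

    per_segment = {seg: pct(scores) for seg, scores in buckets.items()}
    overall = pct([score for _, score in responses])
    return per_segment, overall

# Team A: 46/50 satisfied; Team B: 32/50 satisfied
data = [("A", 5)] * 46 + [("A", 2)] * 4 + [("B", 4)] * 32 + [("B", 2)] * 18
segments, overall = csat_by_segment(data)
print(segments, overall)  # → {'A': 92.0, 'B': 64.0} 78.0
```

A 78% aggregate looks acceptable; only the per-segment view reveals that one team needs urgent attention.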
Score inflation is rampant. Support agents who ask customers to "rate them a 5" or who cherry-pick which interactions get surveyed produce CSAT data that is useless. If your CSAT programme allows agents to influence who gets surveyed, you are measuring their ability to game the system, not customer satisfaction.
The open-ended question is where the value lives. The number tells you something is wrong. The text response tells you what. Companies that skip text analysis are leaving the most actionable part of CSAT on the table.
Common Mistakes
Not segmenting. An overall CSAT hides the signal. Segment by agent, team, product, customer tenure, and channel. That is where you find the problems worth fixing.
Measuring without ownership. If nobody owns the CSAT score for a given touchpoint, nobody improves it. Assign owners before you launch the survey.
Changing the scale. Switching from a 5-point to a 3-point scale mid-programme destroys your trend data. Pick a scale and commit to it.
Surveying without acting. Customers who give feedback and see nothing change become more dissatisfied than customers who were never asked. If you are not going to close the loop, do not open it.
Best Practices
- Trigger surveys automatically and immediately. Do not rely on manual survey distribution. Integrate with your support, CRM, or e-commerce platform so surveys fire automatically after the interaction. The closer to the experience, the better the data.
- One metric question plus one open-ended question. That is the ideal transactional CSAT survey. Anything longer reduces response rates without proportionally increasing insight. The open-ended follow-up ("What is the main reason for your score?") is not optional.
- Segment ruthlessly. Break CSAT down by every dimension that matters to your business: agent, team, channel, product line, customer type, geography. The insight lives in the segments, not the average.
- Act on low scores within 48 hours. Customers who score 1 or 2 should trigger an automatic alert to the team owner. Proactive follow-up on dissatisfied customers prevents churn and sometimes recovers the relationship entirely. See our guide to closing the loop.
- Guard against gaming. Survey all interactions, not a subset chosen by agents. Randomise if volume is too high. And never tie individual CSAT scores to bonuses without safeguards: it creates perverse incentives.
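The low-score alerting practice above can be sketched as a simple routing function. This is an illustrative skeleton, not a specific integration: `notify_owner` is a hypothetical callback standing in for whatever your stack provides (a Slack webhook, an email, a CRM follow-up ticket).

```python
LOW_SCORE_THRESHOLD = 2  # scores of 1 or 2 should trigger follow-up

def route_response(score, customer_id, touchpoint, notify_owner):
    """Alert the touchpoint owner on dissatisfied responses.

    notify_owner is a hypothetical callback (e.g. post to a team channel
    or open a follow-up ticket). Returns True if an alert fired.
    """
    if score <= LOW_SCORE_THRESHOLD:
        notify_owner(f"CSAT {score}/5 from customer {customer_id} at {touchpoint}")
        return True
    return False

alerts = []
route_response(2, "cust-123", "support", alerts.append)  # fires an alert
route_response(5, "cust-456", "support", alerts.append)  # no alert
print(alerts)
```

Wiring this into the survey pipeline, rather than reviewing scores in a weekly report, is what makes the 48-hour follow-up window achievable.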
CSAT in the CX Stack
CSAT works best as part of a layered measurement approach:
- NPS for overall relationship health and loyalty trends
- CSAT for diagnosing specific touchpoint quality
- CES for identifying friction in processes
No single metric gives you the full picture. CSAT tells you whether a specific experience was acceptable. NPS tells you whether the customer will stay. CES tells you whether the process was unnecessarily hard. Together, they cover the strategic, tactical, and operational layers of customer experience.
Pick two or three touchpoints that matter most to your business. Implement CSAT there. Analyse the segments. Act on the low scores. That is how CSAT creates value, not as a dashboard number, but as a trigger for improvement.
Frequently Asked Questions
What is the difference between CSAT and NPS?
CSAT measures satisfaction with a specific experience right now. NPS measures willingness to recommend, which is a proxy for future loyalty. CSAT is a thermometer; NPS is a forecast. They answer different questions and should be used for different purposes.
What is a good CSAT score?
Above 80% is generally solid, but industry context matters enormously. A CSAT of 78% for a telecoms company would be outstanding. The same score for a premium SaaS support team would be concerning. Always compare within your own industry and track your trend over time.
When should you send a CSAT survey?
As close to the experience as possible, ideally within 1-24 hours. Waiting three days introduces recall bias and lowers response rates. For support interactions, trigger the survey automatically when the ticket closes.
What response rates can you expect by channel?
Email: 10-30%. In-app or in-product: 30-50%. SMS: 25-40%. Live chat popups: 40-60%. The biggest lever is timing. A survey sent within an hour of a support interaction gets dramatically higher participation than one sent the next day.
Can CSAT predict churn?
Poorly, on its own. A customer can score 4/5 today and churn next quarter: the process may have been too effortful, or a competitor made a better offer. For churn prediction, CES and NPS are more reliable. CSAT's strength is diagnosing specific touchpoint problems, not forecasting behaviour.
Ready to know what your customers actually think?
SurveyGauge helps Nordic B2B companies move from gut feeling to data-driven CX decisions.
SurveyGauge Team
Customer Experience Experts
The SurveyGauge team helps companies measure and improve customer satisfaction through professional surveys, analysis, and consulting.
You might also be interested in
What Is NPS? The Complete Guide to Net Promoter Score [2026]
NPS is the most widely used loyalty metric in the world, and also the most frequently misused. The score means nothing without follow-up. Here is how to use it properly.
NPS vs. CSAT vs. CES: Which Metric Should You Choose?
NPS, CSAT, and CES measure different things. Choosing the wrong one for the wrong situation gives you data that looks useful but misleads. Here is when to use each, and why the best programmes use all three.
