Reading the CSAT stats panel
The last 30 days at a glance: invites, responses, response rate, average score, promoter and detractor percentages, plus the last 5 scores as colored dots.
The CSAT card on /analytics is the at-a-glance dashboard for satisfaction. It summarizes the trailing period (7d, 14d, 30d, or 90d, depending on your selector) into a small set of numbers your team can read in a few seconds.
This page covers what each metric means, what good and bad numbers look like, and how to drill in when something needs attention.
What the card shows
The CSAT card surfaces the following for the selected period (there's a computation sketch after the list):
- Invites sent — how many surveys went out.
- Responses received — how many customers actually rated.
- Response rate — responses divided by invites, as a percentage.
- Average score — mean of all ratings, on the 1-to-5 scale.
- Promoter percentage — share of responses that were 4 or 5.
- Detractor percentage — share of responses that were 1 or 2.
- Recent ratings — the most recent CSAT scores, color-coded.
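To make the arithmetic concrete, here's a minimal sketch of how these numbers fall out of an invite count and a list of 1-to-5 ratings. The shapes and names below (summarizeCsat, CsatPeriodStats) are illustrative assumptions, not the product's actual data model or API.

```ts
// Hypothetical shapes -- the real data model will differ.
interface CsatPeriodStats {
  invites: number;        // invites sent in the period
  responses: number;      // ratings received
  responseRate: number;   // responses / invites, as a percentage
  averageScore: number;   // mean rating on the 1-5 scale
  promoterPct: number;    // share of 4s and 5s
  detractorPct: number;   // share of 1s and 2s
}

function summarizeCsat(invites: number, ratings: number[]): CsatPeriodStats {
  const responses = ratings.length;
  const pct = (count: number, total: number) =>
    total === 0 ? 0 : (count / total) * 100;

  return {
    invites,
    responses,
    responseRate: pct(responses, invites),
    averageScore:
      responses === 0 ? 0 : ratings.reduce((sum, r) => sum + r, 0) / responses,
    promoterPct: pct(ratings.filter((r) => r >= 4).length, responses),
    detractorPct: pct(ratings.filter((r) => r <= 2).length, responses),
  };
}

// 40 invites, 12 responses:
console.log(summarizeCsat(40, [5, 5, 4, 5, 3, 4, 5, 2, 5, 4, 1, 5]));
// -> responseRate 30, averageScore 4.0, promoterPct 75, detractorPct ~16.7
```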
Reading the numbers
Invites sent: this should grow with conversation volume. If invites are flat or shrinking while volume grows, your deliverability gates are probably filtering out more than you'd expect; check Survey deliverability.
Responses received: this is your raw signal volume. With small response counts, individual ratings dominate the average. With 100+ responses, the average and percentages start to mean something statistically.
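If you want a rough feel for how much a given response count pins down the average, here's a small sketch using the standard error of the mean. This is illustrative statistics, not something the card computes.

```ts
// Approximate ~95% margin of error for the average score.
// Assumes ratings are roughly independent samples; illustrative only.
function approxMarginOfError(ratings: number[]): number {
  const n = ratings.length;
  if (n < 2) return Infinity;
  const mean = ratings.reduce((a, r) => a + r, 0) / n;
  const variance = ratings.reduce((a, r) => a + (r - mean) ** 2, 0) / (n - 1);
  return 2 * Math.sqrt(variance / n); // ~2 standard errors
}
```

With around 10 responses and a typical spread of about one point, that works out to roughly ±0.6 on the average; with 100+ responses it tightens to roughly ±0.2.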
Response rate: typical ranges by channel:
- Email: 10% to 25%
- Widget: 40% to 60%
- Slack Connect: 30% to 60%
Rates below those ranges suggest something is off (deliverability, copy, channel mismatch). Rates well above the high end may mean the survey is firing too often or at moments when customers feel pressured.
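If you want to flag out-of-range rates programmatically, here's a sketch built on the illustrative ranges above. The channel keys and thresholds are assumptions for the example, not a supported API.

```ts
type Channel = "email" | "widget" | "slack_connect";

// Typical ranges from the list above, in percent.
const TYPICAL_RANGE: Record<Channel, [number, number]> = {
  email: [10, 25],
  widget: [40, 60],
  slack_connect: [30, 60],
};

function classifyResponseRate(channel: Channel, ratePct: number): string {
  const [low, high] = TYPICAL_RANGE[channel];
  if (ratePct < low) return "below typical: check deliverability, copy, channel fit";
  if (ratePct > high) return "above typical: check survey frequency and timing";
  return "within the typical range";
}

console.log(classifyResponseRate("email", 6)); // below typical: ...
```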
Average score: a healthy support team usually averages 4.3 to 4.6 on the 5-point scale. Above 4.7 may mean detractor responses are being filtered out somewhere upstream, or your customer base is unusually happy. Below 4.0 means there's real friction worth investigating.
Promoter percentage: 70-85% is the normal range for healthy support. Below 70%, look at recent comments for patterns.
Detractor percentage: anything above 10% deserves attention. Above 20% suggests an active customer-experience problem.
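The same benchmarks can be folded into a quick health check. Again a sketch: the thresholds are the ones quoted above, and the function is hypothetical.

```ts
// Benchmarks from above: 4.3-4.6 average, 70-85% promoters, <10% detractors.
function checkScoreHealth(avg: number, promoterPct: number, detractorPct: number) {
  return {
    average:
      avg < 4.0 ? "real friction worth investigating"
      : avg < 4.3 ? "below the healthy 4.3-4.6 band; watch the trend"
      : avg > 4.7 ? "unusually high: check whether detractors are being filtered out"
      : "healthy",
    promoters:
      promoterPct < 70 ? "below 70%: read recent comments for patterns" : "healthy",
    detractors:
      detractorPct > 20 ? "likely an active customer-experience problem"
      : detractorPct > 10 ? "deserves attention"
      : "healthy",
  };
}
```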
Period selector
The same period selector that drives the rest of /analytics (7d / 14d / 30d / 90d) drives the CSAT card. The default is 30d. The window is "last N days," not a calendar boundary.
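Concretely, "last N days" is a rolling window ending at the moment you load the page. A minimal sketch of that window (not the product's actual query logic):

```ts
// Rolling "last N days" window ending now, rather than snapping
// to calendar weeks or months.
function periodWindow(days: 7 | 14 | 30 | 90, now: Date = new Date()) {
  const start = new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
  return { start, end: now };
}

const { start, end } = periodWindow(30);
// Responses timestamped between start and end feed the card.
```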
Drilling in
For a single bad rating, click the response in the recent-ratings strip to jump straight to the conversation. Read the thread, read the comment. Decide whether it's a real issue, an outlier, or a customer in a bad mood.
The agent leaderboard on the same page shows per-agent CSAT, so you can drill down from a low team-level score to the agents whose conversations are dragging it down.
Per-agent CSAT
Per-agent breakdowns are powerful and easy to misuse. A few guardrails:
- Don't grade individuals on small samples (see the sketch after this list). A single 1-star rating in a week of 5 ratings is noise.
- Read the comments before drawing conclusions. Customers often blame the closest agent for upstream issues (a billing customer rates the agent 1 star because they're mad about the price, not the agent).
- Use trends, not snapshots. A single bad week happens. A six-week downtrend is signal.
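To make the first guardrail concrete, here's a sketch of a per-agent rollup that withholds an average until there are enough ratings. The 20-rating minimum is an arbitrary illustration, not a product setting.

```ts
interface AgentRating {
  agentId: string;
  score: number; // 1-5
}

const MIN_SAMPLE = 20; // illustrative threshold; tune to your volume

function perAgentAverages(
  ratings: AgentRating[],
): Map<string, number | "not enough data"> {
  const byAgent = new Map<string, number[]>();
  for (const r of ratings) {
    const scores = byAgent.get(r.agentId) ?? [];
    scores.push(r.score);
    byAgent.set(r.agentId, scores);
  }

  const result = new Map<string, number | "not enough data">();
  for (const [agentId, scores] of byAgent) {
    result.set(
      agentId,
      scores.length < MIN_SAMPLE
        ? "not enough data"
        : scores.reduce((a, s) => a + s, 0) / scores.length,
    );
  }
  return result;
}
```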
When the card looks wrong
Common reasons the numbers look unexpected:
- Master kill switch is off. Invites = 0. Check CSAT overview settings.
- Per-channel toggles are off. Invites = 0 for that channel.
- Deliverability gates filter most invites. Check Survey deliverability.
- Tokens expired before customers responded. Default token expiry is 30 days; if you have customers who respond very late, they'll see expired-token messages. See Survey tokens.
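If you suspect expired tokens, the timing check itself is simple. A sketch assuming the 30-day default mentioned above (the field names are made up):

```ts
const TOKEN_EXPIRY_DAYS = 30; // default expiry mentioned above

// True if a response arriving at `respondedAt` would hit an expired token.
function tokenExpired(inviteSentAt: Date, respondedAt: Date): boolean {
  const ageMs = respondedAt.getTime() - inviteSentAt.getTime();
  return ageMs > TOKEN_EXPIRY_DAYS * 24 * 60 * 60 * 1000;
}
```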