When Tolstoy opened Anna Karenina with the words, “Happy families are all alike; every unhappy family is unhappy in its own way,” he wasn’t thinking about companies struggling with low customer satisfaction scores. But if he were, he would have been on to something.
Every company struggling with bad CSAT is fighting a unique fight. No two companies have identical pain points, and the causes of those pains differ widely.
The good news is, there is some consistency in the solutions for this type of problem – companies can think about improving customer satisfaction scores using the following framework. Here’s a look at how it’s done, with an example of a company we recently worked with. We’ll call it: Alexei Inc.
The Problem at Alexei Inc
Alexei Inc grew rapidly, and had to scale its customer service team quickly. The team was struggling to keep up with the influx of customer outreach, so hiring became their top priority.
By adding agents they were able to meet demand, but the support team quickly found themselves facing a greater issue. They used a solution like Zendesk CSAT to listen to their customers, and saw their CSAT score slip. Dramatically.
They needed to get a handle on the quality of their customer service and, if possible, improve customer satisfaction with the entire brand-customer experience.
Identify the Source of Bad Customer Satisfaction Scores
Without understanding why your customers are marking you down, it will be near-impossible to effect change. But this is no easy task.
Customer service quality assurance programs give you a holistic view of customer feedback and identify patterns therein – a dip in CSAT is a common impetus to start a customer service QA program for this reason.
Grading all tickets (or a sample of tickets, depending on your volume) with bad CSAT scores is a great place to start. With the right customer service quality assurance technology, you can then identify the common reasons for bad CSAT.
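If your volume is too high to grade every ticket, a random sample keeps the review manageable without biasing it toward any one queue or agent. Here's a minimal sketch of that sampling step; the ticket IDs and sample size are hypothetical, not from any particular QA tool:

```python
import random

# Hypothetical list of ticket IDs that received a bad CSAT rating.
bad_csat_tickets = list(range(1, 501))  # e.g. 500 bad-CSAT tickets this month

# Grade a random sample rather than every ticket. The fixed seed is only
# here so the sketch is reproducible; drop it in real use.
random.seed(42)
sample = random.sample(bad_csat_tickets, k=50)  # grade 10% of them
```

Whatever sample size you choose, keep it consistent from period to period so the reason counts you gather are comparable over time.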
Reviewing customer satisfaction comments – potentially pages of qualitative, textual information – is most effective when data is structured in the right kind of QA tool. If these comments and takeaways are confined to spreadsheets, trends are even harder to catch, and very difficult to correlate with other service data like volume of inbound requests.
Here’s how to think about doing that:
1. Initial qualitative review: You might be able to think of a handful of things driving customer dissatisfaction (DSAT) off the top of your head. Start by reading through CSAT reviews to gut check if your intuition is correct. You might be surprised to see some issues coming up that hadn’t occurred to you before. Based on what you see, create a checkbox for each of these issue groups.
2. One round of QA to hone your categories/checkboxes: Going through a round of QA with these broad groups will systematically show you how often issues are happening relative to each other. You might find that you expected each issue to be equal in terms of impact, when actually the data is showing you that one issue accounts for over 50% of DSAT.
3. Break issues into component parts: Next, these larger issues should be broken into more granular pieces. If agent tone was the biggest problem, you should break it down further to see which components need to be coached for most (sounding rude, robotic, or disinterested). Additionally, the data may show that some of the predicted issues aren’t really affecting DSAT. These categories can be removed, and you can dive deeper into the most pressing issues.
Starting with broad categories allows you to eventually hone the rubric based on what the data tells you, thereby decreasing the risk of your predictions influencing your results.
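The tallying step in rounds 2 and 3 above amounts to counting how often each checkbox reason appears across graded tickets. A minimal sketch, assuming a hypothetical export where each graded ticket carries the reviewer's ticked reasons:

```python
from collections import Counter

# Hypothetical QA results: each graded ticket lists the checkbox
# reasons the reviewer ticked for its bad CSAT score.
graded_tickets = [
    {"id": 1, "reasons": ["agent tone", "high customer effort"]},
    {"id": 2, "reasons": ["high customer effort"]},
    {"id": 3, "reasons": ["billing policy"]},
    {"id": 4, "reasons": ["high customer effort", "slow response"]},
]

# Tally how often each reason appears relative to the others.
tally = Counter(r for t in graded_tickets for r in t["reasons"])
total = len(graded_tickets)
for reason, count in tally.most_common():
    print(f"{reason}: {count}/{total} tickets ({count / total:.0%})")
```

A tally like this is what reveals that one broad category (here, "high customer effort") dominates and deserves to be broken into component parts, while rarely-ticked categories can be removed.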
Below is an example of a rubric meant to tease out patterns in an actionable way. The rubric first prompts the grader to identify if the issue was support-related or not, then uses a checkbox feature to tally reasons for bad CSAT, and gathers this granular insight across all graded tickets. Sweet success.
Once you fully understand the support-related (and non-support-related) issues impacting customer satisfaction, you can work to improve it. Analysis of these patterns often leads to two things: increased training and coaching of agents, and implementing company-wide policy changes (like updating billing policy or offering new support channels).
In our example, Alexei Inc identified high customer effort as one of the largest drivers of DSAT.
Use Quality Assurance and Customer Satisfaction Data to Inform Coaching Strategy
Once you’ve incorporated the reasons for bad CSAT into your rubric, you can dig deeper into the problem, and start coaching explicitly for that behavior.
Alexei Inc added a question in their rubric to evaluate if the agent could’ve done anything to lower customer effort. After honing their checkboxes, and breaking “customer effort” into many component parts, they identified that agents were asking for information that was available in their CRM, and that this exchange of information was annoying to customers.
In this case, they had over-rotated – they were trying to respond as quickly as possible, even when that meant making the customer provide information the company already had. This small issue was really hurting CSAT scores.
They then used coaching sessions to help agents understand how to improve, and encouraged agents to research historical conversations and context – even when they felt pressed for time.
They also uncovered some things outside of the agents' control that were contributing to bad CSAT.
Use Quality Assurance and Customer Satisfaction Data to Inform Company-Wide Changes
The QA lead uncovered that customers were frustrated because they were having trouble connecting to an agent in the first place. Even if their interaction with the agent was overwhelmingly positive, their experience with the brand as a whole was not.
Again, they looked to their CSAT QA data, and narrowed the categories of customer pain until they landed on the most potent issue – customers on mobile devices were having a hard time figuring out how to reach support.
Fixing this issue required a broader, company-wide change. They worked with UX designers and product teams to change their mobile app to make it easier to reach support.
This change, along with their new coaching, helped Alexei Inc raise their CSAT by over 10%, helping them meet their goal of maintaining high quality while rapidly scaling.
This framework will help identify the issues causing bad customer satisfaction scores (both related to support and not), and enable brands to take steps toward improvement. If your team can commit to coaching, and implement the changes you identify, you should expect to see your CSAT go up.
While every company has different problems, this framework, along with the right QA tool, should help you root out your specific problems and solve them. This structured process to analyze CSAT results will also allow you to communicate with key stakeholders in your company to implement the changes that the data demands.