Chances are, if you're an RJMetrics user, you've interacted with a member of our Customer Success team directly. (There's a non-trivial chance it's been me – if so, hi there!) If you're not an RJMetrics user and you've stumbled upon this post thanks to the great efforts of our marketing team, you're probably interested in learning more about optimizing your help center.

Our account managers and analysts have always focused on ensuring that the commerce, subscription, and mobile clients in our global book have the tools and support they need to thrive as citizen analysts. After more than two years working toward that goal, I've watched our support structure evolve and become increasingly efficient.

As a data-driven company, it should come as no surprise that we've taken an evidence-based approach to (1) defining our team's goals, (2) measuring our progress against them, and (3) quickly acting on what we notice in the day-to-day. But it should also come as no surprise that customer support is a very human business. In eight years, we've grown and iterated on our processes an impressive – maybe ludicrous – number of times. We've made a lot of improvements and learned a lot along the way.

While I was reflecting on this over my third cup of coffee of the morning, I realized that, as of September 2016, Zendesk (the service we use to monitor our help center) alone has over 81,000 paying customers – not to mention all of the businesses using one of the many other platforms available. It's safe to assume we're not alone in the quest to build an efficient support system. So, here is an (abridged) history of our support team and some of the lessons we've gathered along the way.

Defining goals

From day one, our Customer Success team was tasked with the traditional goals of a software as a service (SaaS) company. This means we've taken aspirational advice from blogs like Saastr, while adapting those industry-standard goals to our specific situation – supporting a highly complex business intelligence (BI) app for an audience of primarily business users with varying levels of technical skill. Having clearly defined goals lets us measure progress against our past selves and make sure we're constantly improving. Our team's goals include:

  • Delight users by encouraging self-service (ultimately a product education initiative)
  • Satisfy users by solving technical issues quickly and efficiently
  • Maximize client retention by preventing or eliminating churn

For a customer success team, the churn key performance indicator (KPI) is as important as new business is to a sales team. Because of this, our team measures success with operational metrics that illustrate our ability to provide the best support possible, such as the following (a rough sketch of how metrics like these might be computed appears after the list):

  • Time between client ticket creation and resolution
  • Ticket satisfaction percentage
  • Time tracked per client ticket
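To make those metrics concrete, here is a minimal sketch of how they could be computed from an exported list of tickets. The field names, sample values, and use of Python are illustrative assumptions, not our actual warehouse schema or reporting pipeline.

```python
from datetime import datetime
from statistics import mean

# Hypothetical export of support tickets. Field names and values are
# illustrative only, not RJMetrics' actual warehouse schema.
tickets = [
    {"created_at": "2016-08-01T09:15:00", "resolved_at": "2016-08-01T11:40:00",
     "satisfied": True, "minutes_tracked": 35},
    {"created_at": "2016-08-02T14:02:00", "resolved_at": "2016-08-04T10:30:00",
     "satisfied": False, "minutes_tracked": 120},
    {"created_at": "2016-08-03T08:45:00", "resolved_at": "2016-08-03T09:05:00",
     "satisfied": True, "minutes_tracked": 10},
]

def hours_to_resolution(ticket):
    """Time between ticket creation and resolution, in hours."""
    created = datetime.fromisoformat(ticket["created_at"])
    resolved = datetime.fromisoformat(ticket["resolved_at"])
    return (resolved - created).total_seconds() / 3600

avg_resolution_hours = mean(hours_to_resolution(t) for t in tickets)
satisfaction_pct = 100 * sum(t["satisfied"] for t in tickets) / len(tickets)
avg_minutes_tracked = mean(t["minutes_tracked"] for t in tickets)

print(f"avg time to resolution: {avg_resolution_hours:.1f} hours")
print(f"ticket satisfaction: {satisfaction_pct:.0f}%")
print(f"avg time tracked per ticket: {avg_minutes_tracked:.0f} minutes")
```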

These are key figures for our team, and we study them regularly. But our list deviates a bit from the universe of opinions and case studies out there. Why? Well, as I mentioned before, we've learned a lot over the last eight years, and analyzing those learnings has allowed us to adjust what we need to be successful.

Lessons learned in creating order from chaos

Like many SaaS shops, we have an app with a support button managed by Zendesk. We serve our entire customer base with that support button and the occasional phone call. The vast majority of our clients' requests fall into a handful of categories, which we've tracked as "ticket types" since 2010:

  • Product guidance
  • Diagnosing any errors in the extract, transform, load (ETL) process, such as data discrepancies and connection issues
  • Assisting and consulting on an analysis
  • Creating custom transformations that clients can't yet build in the user interface (UI)

Unlike many SaaS shops, we happened to build a BI platform that warehouses our Zendesk data. This gives us the unique opportunity to create KPIs and measure success "100%" accurately. It also gives us the tendency to overanalyze and create KPIs that don't serve our end goal of reducing churn. This is where I really want to share some of our experiences with you.

Here are some examples of lessons I've learned over the last two years:

  • Don't assume your chosen KPI measures your actual goal. For some time, our focus was simply solving support requests – as many, and as quickly, as possible. There are two big consequences of this. The first is that we focused the maximum amount of our resources (support staff hours) on solving support requests – a single-client-serving, non-scalable action. The problem here is that non-scalable actions that only solve one client's problem aren't economically efficient. In other words, we should've spent more resources on investments that provide support to many clients at once – like a larger documentation center, for example. The second consequence is simply that our clients began to hold unsustainable expectations of our support team as our book grew.
  • Inspect all data points, not just the roll-ups or averages, to find efficiencies. For a time, we focused on the average time between ticket creation and resolution, without understanding the outliers on either side of that distribution. Once we started asking questions like "How many requests are resolved in one response?" and "How many of those one-response solves are 'equal' in resources to one ticket in the 90th percentile?", we were prompted to create a "quick solve" lane for simple requests. Instead of entering our general queue for tickets, these requests are solved promptly by a first responder, providing clients quick answers to their simple requests and allowing us to analyze that group of tickets as its own category (a rough sketch of this kind of distribution analysis appears after this list).
  • Set SLAs, but communicate them appropriately. When we decided to create a service level agreement (SLA) for our time-to-resolution metric, we thought quoting the SLA in each request would help set client expectations for resolution time. The problem was that the SLA estimate was inflated by a small number of extreme outliers and didn't reflect the vast majority of requests we received. The immediate feedback was that the SLA seemed alarmingly long – causing client concern, the opposite of what we wanted.
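Here is a minimal sketch of the kind of distribution analysis described above, using Python's standard statistics module (3.8+ for quantiles). The resolution times and the quick-solve cutoff are made-up illustrative values, not our real ticket data or thresholds.

```python
import statistics

# Hypothetical resolution times (in hours) for a batch of tickets; the values
# are made up and deliberately skewed by a couple of long investigations.
resolution_hours = [0.4, 0.6, 0.8, 1.0, 1.3, 1.8, 2.4, 3.0, 4.5, 6.0, 48.0, 96.0]

mean_hours = statistics.mean(resolution_hours)      # dragged up by the outliers
median_hours = statistics.median(resolution_hours)  # closer to the typical ticket

# quantiles(n=10) returns the 10th..90th percentile cut points; the last one
# is roughly the 90th-percentile resolution time.
p90_hours = statistics.quantiles(resolution_hours, n=10)[-1]

# Very fast resolutions are candidates for a "quick solve" lane handled by a
# first responder instead of the general queue. The cutoff here is arbitrary.
quick_solve_cutoff = 1.0  # hours
quick_solves = [h for h in resolution_hours if h <= quick_solve_cutoff]

print(f"mean: {mean_hours:.1f}h | median: {median_hours:.1f}h | 90th pct: {p90_hours:.1f}h")
print(f"quick solves: {len(quick_solves)} of {len(resolution_hours)} tickets")
```

The gap between the mean and the median in a sample like this is exactly what an average-only KPI hides, and it's what a percentile-based SLA avoids quoting back to clients.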

Our own best practices

In a lot of ways, the 2016 RJMetrics support structure is radically different from what it was in the past. But we remain motivated by a bias toward efficiency for our clients and by data-driven decision-making to improve our processes. Based on our experiences, we've developed these best practices:

  • Scalable support is the best kind of support. When we realized that the size of our book relative to our team's head count was growing unsustainably, we doubled down on scalable support tactics like a revamped Help Center, a video series for new users, and customer support webinars. This helps users self-serve as much as possible – giving them direct and immediate access to the answers they're looking for and keeping our support bandwidth from being spread too thin.
  • Measuring your team's bandwidth (and client "margin") is best done with time tracking. Since implementing time tracking with Toggl for our entire Customer Success team, we've had a very accurate handle on (1) how long requests take to resolve, and (2) which types of clients use (or overuse) our support desk. This has proven to be a much more accurate measure than time-to-resolution or any other request-related metric.
  • Client satisfaction scores provide a helpful but incomplete picture – so measure satisfaction creatively. Like most SaaS products, we measure our net promoter score (NPS), and we separately measure the satisfaction of each support request via surveys (the standard NPS calculation is sketched after this list). We recognize our surveys are heavily subject to response bias, so we also keep records of the qualitative feedback clients give us in requests, in order to detect patterns. For example, anytime a client makes an offhand comment about our support structure – positive or negative – we record it and consider all feedback when tweaking our workflow.
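For readers unfamiliar with NPS, here is the standard calculation in a short sketch. The survey scores below are invented for illustration; only the promoter/detractor definitions follow the usual NPS convention.

```python
# Hypothetical 0-10 survey responses; the scores are made up for illustration.
scores = [10, 9, 9, 8, 8, 7, 7, 6, 5, 10, 9, 3]

promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
detractors = sum(1 for s in scores if s <= 6)  # 0 through 6; passives (7-8) are ignored

# Standard NPS definition: percentage of promoters minus percentage of detractors.
nps = 100 * (promoters - detractors) / len(scores)
print(f"NPS: {nps:.0f}")
```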

I fully anticipate more learnings as we continually aim to improve our processes and converge toward a 100%-efficient team. If you have any feedback on our experiences (or some of your own), I'd love to hear it!