Few people would disagree with testing a software product before release, but when it comes to support resources, expectations are often different: software projects vary wildly in how help material is integrated into the development lifecycle. Yet many of the same practices we value and apply routinely in software development can greatly improve support content.

 

The Dropsource Help Center homepage aims to help people find an appropriate starting point for their use case and stage of app development preparation.

As the Developer Educator here at Dropsource, I help manage a variety of support channels, including a Help Center with text and video resources, in-app chat, and information presented inside the platform itself. We’ve experimented with a few additional methods in the past, including webinars, and we’re always looking to improve how we help our members be successful.

Let’s take a look at how we test and iterate on our support strategy.

 

Dropsource help documentation is pushed into the editor via our plugin system.

Dogfooding

The first level of scrutiny I personally use when I write instructional material is to follow my own instructions. Let’s call this a rudimentary QA test. Tech writing often involves a team of developers handing information over to a writer who turns it into something readable, but I’ve always found it more effective to learn how to do a task myself, then convey that learning to others. Before publication I’ll follow a tutorial again from scratch to establish that a number of basic requirements are being met:

    • no steps are missing
    • any required information is included or linked to
    • the instructions are readable and accessible
    • and so on

Not all help documentation is produced this way, partly because not every writer is in a position to work like this. For a start, I come from a development background myself, so I’m happy to try out some of the tasks our users (mostly developers) will be going through, such as working with backend platforms and APIs. Although this is more challenging and demands a particular skillset from the author, the experience takes you a long way towards the empathy we’re always striving for in tech support. If a process touches on factors external to our team, beyond our immediate control or knowledge, this type of due diligence is even more important.

Although I’m part of the same team as our developers and can pester them for information, we are a busy bunch and their time is ultimately best spent developing the product. Additionally, I’m able to write this way partly because I’m documenting a software product that runs in the web browser (so there’s no required setup info such as installation instructions).

Making our example app videos provides a natural opportunity to test the text tutorial content.

I’d highly recommend this empirical approach to anyone working on software documentation, and as a bonus it gives the team an additional, low-maintenance phase of user testing and feedback on the product itself.

Defining Success

All of this is helpful, but testing for accuracy and general quality is only part of the story (e.g. is a sentence structured correctly, is the HTML page it sits in structured correctly, and so on). It’s also vital to validate your support strategy at a higher level (e.g. does it integrate with the overall user workflow and experience, does it serve the overarching project or business goals). This type of qualitative analysis is of course more demanding. Larger teams use tactics such as peer review, which can help, but in an agile process it can slow things down, and when you’re the only one working on the content it’s not normally an option.

That’s not to say our support material is produced in a vacuum—other team members can and do contribute to it. Did I mention we’re hiring for QA and UX roles?

Listening

Reading your own material is naturally not as effective a test as having your actual users read it. We use multiple channels for feedback on support resources: the “Was this article helpful to you?” buttons at the bottom of each page in our help center, our forum, our chat service in the editor, and conversations carried out via social media.

 

Each article in the help center invites feedback.

By paying attention not only to feedback specifically on our resources, but also to the issues people are struggling with in the product, we’re able to continually iterate on our help content, adding and editing as part of an ongoing dialog with our members.

 

Iterating on help content based on support requests.

Some feedback comes in unsolicited, but you can of course also ask your users for it. We do this occasionally, for example when we recently tried out a new onboarding feature, or with the follow-up survey we prompted participants to complete at the end of a webinar. However, we tend to keep this type of initiative to a minimum to avoid overwhelming our members with messages.

Analyzing

Analytics tools that track user interaction with help resources can also provide valuable insights. These services give you a general picture of engagement, and you can set up more advanced event-based tracking to see how people interact with particular components. The advantage of this type of testing is that it measures performance with your actual users, but the data you can access by default essentially boils down to clicks and time spent on pages. It isn’t qualitative; for that you need to talk to people.
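As a concrete illustration, here’s a minimal sketch of what that kind of event-based tracking could look like in the browser, using Google Analytics’ gtag.js (the tool we mention below). The event name, parameters, and button markup are hypothetical, not our actual setup.

```typescript
// Hypothetical example: report a custom Google Analytics event when a reader
// clicks a "Was this article helpful to you?" button on a help article.
// Assumes the standard gtag.js snippet is already loaded on the page.
declare function gtag(
  command: 'event',
  eventName: string,
  params?: Record<string, unknown>
): void;

function reportArticleFeedback(articleId: string, helpful: boolean): void {
  gtag('event', 'article_feedback', {
    article_id: articleId, // which help article the reader was viewing
    helpful,               // true for "yes", false for "no"
  });
}

// Wire the handler to feedback buttons marked up as
// <button data-feedback="yes"> / <button data-feedback="no">.
document.querySelectorAll<HTMLButtonElement>('[data-feedback]').forEach((button) => {
  button.addEventListener('click', () => {
    reportArticleFeedback(window.location.pathname, button.dataset.feedback === 'yes');
  });
});
```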

You can analyze visitor behavior using a variety of metrics.

 

We primarily use Google Analytics to track views and pathways through our help center, with additional metrics for videos via Vimeo (this lets us see not only view counts but more fine-grained stats like the average percentage of a video people watched).
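To make the “average percentage watched” metric concrete, here’s a small sketch of how it’s computed. The viewing figures are made up, and in practice the numbers come straight from Vimeo’s stats rather than our own code.

```typescript
// Hypothetical calculation of the "average percentage watched" stat for a video.
interface VideoView {
  watchedSeconds: number; // how much of the video this viewer watched
  videoLength: number;    // total length of the video, in seconds
}

function averagePercentWatched(views: VideoView[]): number {
  if (views.length === 0) return 0;
  const totalFraction = views.reduce(
    (sum, view) => sum + Math.min(view.watchedSeconds / view.videoLength, 1),
    0
  );
  return (totalFraction / views.length) * 100;
}

// Three viewers of a five-minute tutorial video:
const views: VideoView[] = [
  { watchedSeconds: 300, videoLength: 300 }, // watched it all
  { watchedSeconds: 150, videoLength: 300 }, // dropped off halfway
  { watchedSeconds: 60, videoLength: 300 },  // left after the intro
];

console.log(`${averagePercentWatched(views).toFixed(0)}%`); // "57%"
```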

Testing with other humans

All of the data we’ve touched on so far is useful, but for qualitative feedback and more comprehensive insight into how people interact with your support resources, it’s hard to beat actually watching that interaction unfold in real time. We recently started a program of user testing on our help center, initially aiming to identify any global issues such as navigation problems.

We approached our first phase of user tests with much the same technique as tests on the software itself—we created test plans, identified success criteria, and drew up a script with tasks we’d ask participants to carry out. We stuck with five testers in each round, and asked them to share their screens, speaking aloud as they tried to complete the tasks we set them.

We asked testers to find particular types of information in our help center.

Writing out test plans also forces you to formalize what you’re trying to achieve with a resource, something we don’t always take the time to do when we write documentation, assuming certain types of content are required by default (sometimes regardless of user profiles or other relevant parameters). In fact, planning and carrying out these tests highlighted a few mistaken assumptions we had made.
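To illustrate, here’s a minimal sketch of how a test plan like this could be written down. The structure and the sample task are invented for the example rather than taken from our actual script.

```typescript
// Hypothetical shape of a help-center usability test plan.
interface TestTask {
  prompt: string;            // what we ask the participant to do
  successCriteria: string;   // how we judge whether they completed it
  timeLimitMinutes?: number; // optional cap before moving on to the next task
}

interface TestPlan {
  goal: string;              // what we want to learn from this round of testing
  participantCount: number;
  tasks: TestTask[];
}

const helpCenterRoundOne: TestPlan = {
  goal: 'Identify global issues in the help center, such as navigation problems',
  participantCount: 5, // five testers per round, sharing their screens and speaking aloud
  tasks: [
    {
      prompt: 'Find the article that explains how to connect your app to an external API',
      successCriteria: 'Participant reaches the relevant article using only the help center itself',
      timeLimitMinutes: 5,
    },
  ],
};

console.log(`Round goal: ${helpCenterRoundOne.goal}`);
```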

We have a complex technical product used by people with a wide variety of backgrounds and skillsets, so effectively targeting support resources is a challenge. Seeing people try to find information in our help center was a (sometimes painful) revelation—straight away we were able to see navigation trends and identify obvious problem areas.

We identified issues with our site search and article naming.

Right now we’re implementing a load of changes to our help center to address issues we identified during user testing, including renaming sections, restructuring some of the navigation, and upgrading our search function. Later we’ll test again, and continue iterating.

There are disadvantages to user testing, and difficulties such as recruiting valid participants. Ideally you’d test with actual users of your product, but in most cases the best you’ll manage is to recruit people who fit a particular user profile your product is intended for. Similarly, the test itself is an artificial situation which potentially alters the user’s behavior. What we’re aiming to do at the moment is use a combination of tactics and choose the right tool for each job—e.g. tracking analytics for objective data, user tests for qualitative feedback. So far we’ve only used moderated tests for the help center, but we’ll likely experiment with some unmoderated tests in future.

Remembering what we’re doing this for

No matter how focused the whole team is on customer success, the reality of working within a fast-moving software startup is that there isn’t much time to cram in a testing phase for documentation before a release; often there isn’t even an opportunity to author your resources until late in the development lifecycle. But whenever you manage to do it, testing gives you an understanding that builds confidence when you choose where to focus your content development efforts. It’s important not to place too much faith in the data, but it does give you something concrete to use as a guideline for your help strategy. Whatever resources you’re working with, finding the scope to introduce a bit of QA here and there in your support process is well worth the effort.

 

Interested in learning more about testing? Here are some additional posts from the Dropsource team: