
How Can You Leverage Data to Know You Have Effective Content?

By Jenifer Schlotfeldt and Courtney Bittner

How many times have you received feedback like “The documentation could be improved” or “Some additional documentation would have helped”? Without specifics, it’s difficult to know which content improvements to address. This is where data can help you make an informed decision about how to prioritize creating new content and updating existing content, or even what content you can archive or retire.

We’ve also seen a growing trend in the software industry of leveraging Net Promoter Score (NPS) to gauge how your product is stacking up. NPS is calculated from responses to a single question: How likely is it that you would recommend our company/product/service to a friend or colleague? In addition to their rating, users can also leave comments. Now those comments are tied to a score that your customer is giving your product. So, what do you do when your executive comes to you and says that your documentation team needs to contribute to raising your product’s NPS?
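In practice, NPS is computed by classifying the 0–10 responses into promoters (9–10), passives (7–8), and detractors (0–6) and subtracting the percentage of detractors from the percentage of promoters. A minimal sketch, using made-up survey responses:

# Minimal sketch of the standard NPS calculation; the responses are invented.
def nps(scores):
    """Return the Net Promoter Score for a list of 0-10 ratings."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

survey_responses = [10, 9, 9, 8, 7, 6, 10, 4, 9, 7]
print(f"NPS: {nps(survey_responses):.0f}")  # 50% promoters - 20% detractors = 30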

Whether you’re addressing vague comments or simply trying to improve the overall customer experience with your content, data is essential for identifying where to spend your time and resources. But what data do you use, and how do you get access to it?

To help our content teams in the IBM Cloud space answer those questions, we recently rolled out a content analytics toolbox: a collection of tools that our writers can use to identify which content to work on and how to measure content improvements. We use the results from each tool as additional data points for making content design decisions.

So, what’s in our toolbox?

Content Audits

You can use content audits to identify all the content assets that are available (or missing) for your offering. Content audits help ensure that you have the right content published, and they can quickly show whether you have content dead ends, content gaps, or duplication.

While you can do audits that are focused on a specific deliverable, you should collaborate with your pre- and post-sales content creators to understand the breadth and depth of the content that’s available to users. Audits can help the various contributing content teams validate that there are no broken or disjointed content journeys. What exactly is an effective content journey? We define it as enabling users to learn and succeed in accomplishing their goals by delivering the right content in the right place for the right experience. Because different teams might be creating the content for each phase of the user journey, you can use audits to help identify where the flow is broken.

Another type of audit you can do is a competitive evaluation. Comparing your content with your competitors’ can help you determine where you stack up in publishing tool features and the breadth of content available, or even whether you’re missing content entirely.

User Comments

This might seem like a no-brainer, but user comments are a gold mine when you’re trying to understand whether your content is effective. Content teams tend to address comments about broken or missing content as quickly as possible and then get back to creating new content. But finding trends in the comments can help you identify which areas to prioritize. For example, if users are leaving comments about areas that are more technical, you might need to provide additional concept topics about those areas.

Figure 1. The “Dos” of User Feedback.

Trends can also inform the formats in which content should be created. For example, in the last quarter, we saw consistent comments from users asking about videos, tutorials, and API docs. As you review user comments, you can categorize them to identify areas to focus on, formats needed, and so on. When you see many comments grouped around the same topic or theme, it helps you understand what the content team should focus on in that area.

Besides the actual content of a comment, other metadata can be valuable, such as:

  • The country or region of the commenter.
  • The device type the commenter used.
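As one concrete way to surface these trends, you could tag each comment with a theme or requested format and count the groupings alongside metadata like region or device. The comments, keywords, and categories in this sketch are hypothetical, not our production tooling:

# Sketch: grouping user comments by requested content format to spot trends.
# The comments and keyword-to-format mapping are hypothetical examples.
from collections import Counter

FORMAT_KEYWORDS = {"video": "videos", "tutorial": "tutorials", "api": "API docs"}

comments = [
    {"text": "A video walkthrough would really help here", "region": "DE", "device": "desktop"},
    {"text": "The API docs are missing response examples", "region": "US", "device": "desktop"},
    {"text": "Need a tutorial for setting up logging", "region": "IN", "device": "mobile"},
]

format_requests = Counter()
for comment in comments:
    text = comment["text"].lower()
    for keyword, label in FORMAT_KEYWORDS.items():
        if keyword in text:
            format_requests[label] += 1

# Most requested formats across the sample comments
print(format_requests.most_common())

The same counting approach works for themes (billing, networking, troubleshooting) or for slicing by the metadata listed above.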

Content Test Cases

Automated test cases can also be a good tool for gathering more data on your content quality. Just as software development teams build and run test cases to verify code quality, content teams can run quality test cases against their content to make sure they’re delivering quality content.

A lot of content measurement tends to be subjective, but some content quality checking can be automated. We’ve scripted tests that check the last-updated date, whether the content is translated, the number of open customer feedback issues, the average style and grammar score, and whether the content passes accessibility testing.

Reporting the results on a dashboard enables teams to quickly see which content passed or failed testing. In the dashboard that we created, we set thresholds and assigned colors to indicate whether a test is passing (green), should be looked at (yellow), or is failing (red). The color scheme provides an at-a-glance view of where a team’s content stands.
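To make this less abstract, here’s a minimal sketch of the kind of scripted checks and traffic-light thresholds we’re describing. The metrics, threshold values, and sample content set are simplified illustrations, not our actual test suite:

# Sketch: automated content quality checks with traffic-light thresholds.
# The metrics, thresholds, and sample content set are illustrative only.
from datetime import date

def staleness_days(content_set):
    return (date.today() - content_set["last_updated"]).days

# (metric name, metric function, yellow threshold, red threshold); lower is better
CHECKS = [
    ("days since last update", staleness_days, 90, 180),
    ("open feedback issues", lambda c: c["open_feedback_issues"], 3, 10),
    ("style/grammar issues per topic", lambda c: c["style_issues_per_topic"], 2, 5),
    ("accessibility violations", lambda c: c["a11y_violations"], 1, 1),
]

def grade(value, yellow, red):
    if value >= red:
        return "red"
    if value >= yellow:
        return "yellow"
    return "green"

content_set = {
    "last_updated": date(2019, 1, 15),
    "open_feedback_issues": 4,
    "style_issues_per_topic": 1.2,
    "a11y_violations": 0,
}

for name, metric, yellow, red in CHECKS:
    value = metric(content_set)
    print(f"{name}: {value} -> {grade(value, yellow, red)}")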

Figure 2. Example of a content quality dashboard from IBM Cloud.

A public dashboard is also a driver for teams to be “green.” It makes not just the content team but the entire product team aware of the quality of their content set.

If you are building your content in an automated pipeline, leverage that same build framework to run automated test cases daily. This is an efficient way to report on the content quality of hundreds of content sets in large collections.

Machine Data

If you use tools for real-time event streaming and tracking of user actions in your software product, you can leverage those same tools to gain insight into your content and learn even more about how customers are using it.

Some typical events or user actions you might track for content are page views, search phrases, referring pages, or even total visitors to your doc site. But you shouldn’t take the numbers at face value; try to understand the why behind the data. You might ask yourself whether page views are high because the content is helpful, because it’s wrong, or because the user interface (UI) that the page describes is not easy to use. Knowing the why gives you more in-depth insight and helps you determine the right content strategy direction to take.
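A first pass over raw event data might simply aggregate views and search phrases so you know where to start asking “why.” The event records and page URLs in this sketch are hypothetical:

# Sketch: aggregating raw usage events for a docs site.
# The events and page URLs are hypothetical.
from collections import Counter

events = [
    {"type": "pageview", "page": "/docs/getting-started", "referrer": "search"},
    {"type": "pageview", "page": "/docs/getting-started", "referrer": "console"},
    {"type": "search", "phrase": "reset api key"},
    {"type": "search", "phrase": "reset api key"},
    {"type": "pageview", "page": "/docs/api-keys", "referrer": "search"},
]

page_views = Counter(e["page"] for e in events if e["type"] == "pageview")
search_phrases = Counter(e["phrase"] for e in events if e["type"] == "search")

print("Top pages:", page_views.most_common(3))
print("Top search phrases:", search_phrases.most_common(3))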

Combining Content Use Analytics with Product Use Analytics

We also like to show that the content is successful by asking teams to identify an overall goal or job to be done for procedural topics. For example, the goal or job to be done for a database service’s “getting started” tutorial might be creating a table and adding data to it. If you can build a funnel that connects a page view with completing that job in the product within a close time frame, you can demonstrate that your topic is not only getting viewed a lot but is also successfully enabling your customers to complete the task.

If you’re not sure whether the topic as a whole is successful, you can set milestones for key steps in your tutorial and map those to actions in the UI. Doing so can help you determine if and when a user is struggling to complete a step in the procedure. It might be that they just can’t complete the action successfully in the UI, or it might indicate that you have incorrectly documented how to complete the action.
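Here is a minimal sketch of that kind of funnel, joining docs page views to product events by user within a time window. The event shapes, action names, and the 24-hour window are assumptions for illustration; the same approach works for per-step milestones:

# Sketch: did users who viewed the tutorial complete its job to be done
# (here, creating a table) within 24 hours? All events are hypothetical.
from datetime import datetime, timedelta

doc_views = [  # (user_id, timestamp) for views of the getting-started tutorial
    ("u1", datetime(2019, 6, 3, 9, 0)),
    ("u2", datetime(2019, 6, 3, 10, 30)),
    ("u3", datetime(2019, 6, 4, 14, 0)),
]
product_events = [  # (user_id, action, timestamp) from product usage analytics
    ("u1", "table.created", datetime(2019, 6, 3, 9, 20)),
    ("u3", "table.created", datetime(2019, 6, 6, 8, 0)),  # outside the window
]

WINDOW = timedelta(hours=24)

def completed_job(user, viewed_at):
    return any(
        u == user and action == "table.created" and timedelta(0) <= ts - viewed_at <= WINDOW
        for u, action, ts in product_events
    )

completions = sum(completed_job(u, t) for u, t in doc_views)
print(f"Tutorial-to-task conversion: {completions}/{len(doc_views)}")  # 1/3 in this sample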

Content Experience Scorecard

The content experience scorecard is a heuristic tool that was created at IBM to measure the effectiveness of our users’ experience with technical content. The first step in accurately measuring content effectiveness is to establish baseline metrics; with a baseline in place, you can measure improvements. The content experience scorecard offers a way to demonstrate the value and impact of content on the success of your users and your business objectives.

We’ve tailored the original heuristic tool for the IBM Cloud space by defining specific use cases for content teams to evaluate: getting started and adding their offering to a solution. Because we follow a continuous delivery model, we defined the use cases so that teams can conduct the evaluation and apply the resulting recommendations for improvements within two two-week sprints. We also updated some of the scoring criteria to include cloud-specific objectives. For example, because the UI can change frequently, we use the scorecard to score the relevancy and accuracy of content and to evaluate whether the content is in sync with the product UI.

In addition to making the scorecard available to all contributing content teams, we also use it within our own team. Most recently, we used it to evaluate a single, complex feature that was added to our product.

First, we analyzed the as-is content experience to establish a baseline for driving measurable quality improvements. To gather input, we held a series of interviews with subject matter experts from across the IBM Cloud space, including marketing, management, design, development, and support, and we synthesized their responses to pinpoint and prioritize our focus areas. We asked each expert two simple questions:

  • Where do we have issues with sales and adoption?
  • Where do we have issues with product consumability (use of the product through its entire lifecycle)?

We then summarized our findings in a SWOT report (strengths, weaknesses, opportunities, and threats). After receiving stakeholder buy-in on which areas of the content experience could help resolve the issues raised by those two questions, we walked through specific use cases that map to those areas and scored the content experience of each one. The scoring criteria are grouped by the following user-focused objectives:

  • The content is high value.
  • The content is easy to find.
  • The content experience enables quick, successful goal accomplishment.
  • The content experience meets user expectations for consistency and branding.
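As a simplified illustration of how a scorecard evaluation can roll up, you might score individual criteria under each objective and average them per use case. The criteria, 1–5 scale, and scores below are invented for the example; the real scorecard defines its own criteria and weighting:

# Sketch: rolling criterion scores (1-5) up into per-objective averages for a use case.
# The objectives mirror the list above; the criteria and scores are invented.
from statistics import mean

use_case = "Getting started"
scores = {
    "high value": {"accurate and in sync with the UI": 4, "task focused": 5},
    "easy to find": {"ranks well in on-site search": 3, "linked from the console": 2},
    "quick, successful goal accomplishment": {"prerequisites stated up front": 4, "steps verified end to end": 5},
    "meets expectations for consistency and branding": {"follows style guidelines": 5, "consistent terminology": 4},
}

print(f"Use case: {use_case}")
for objective, criteria in scores.items():
    print(f"  {objective}: {mean(criteria.values()):.1f} / 5")
print(f"  overall: {mean(mean(c.values()) for c in scores.values()):.1f} / 5")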

After we evaluated the use cases, we created a report that summarized the results of the scorecard evaluations and our recommendations for improvements. More specifically, our summary included an overview of the scoring for each use case, the top areas to address, and any minor items to address. This report gave us a prioritized list of future content efforts.

We concluded the evaluation process by working with our content team to plan the execution of our recommendations. Based on the planning session, the team created work items to complete during the next sprint. After they completed the content changes, we conducted a follow-up evaluation to re-score the use cases. Not only were we able to make changes that had a positive impact on the customer experience, we were also able to measure that improvement.

Figure 3. Example of a content experience scorecard dashboard from IBM Cloud.

Some of the positive outcomes we’ve observed from using the content experience scorecard include being able to make informed decisions when prioritizing content requirements. We’ve also seen tighter collaboration across content, marketing, design, development, and support teams. And we’ve found that the scorecard helps illustrate the value and impact of content to people outside the content teams.

More and more, we’re seeing data that indicates we need increased focus on improving our customers’ experience with technical content. But how do you know what to focus on and where to target first? A good content strategy takes several data points into consideration. Leveraging the breadth of data that you have available to you is the key to ensuring you have the right, effective content.

Reference

Reichheld, Fred, and Rob Markey. The Ultimate Question 2.0: How Net Promoter Companies Thrive in a Customer-Driven World. Boston, MA: Harvard Business Review Press, 2011.

JENIFER SCHLOTFELDT (jschlot@us.ibm.com) is a Senior Content Strategist and the Content Experience Architect for IBM Cloud. Jenifer leads a team of software engineers and content designers that own the IBM Cloud Content Experience. Not only is the team supporting continuous delivery of the IBM Cloud Docs, but it also embraces DevOps and Design Thinking practices. She is also the co-author of DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA, published in 2011.

COURTNEY BITTNER (cdmauney@us.ibm.com) is a Content Strategist and Designer at IBM. In addition to providing editing and terminology support for the IBM Cloud Docs, she is focused on creating content strategy enablement assets for teams to deliver a consistent content experience across IBM Cloud.