Columns

We Are the (Quality) Champions

By Michelle Corbin | STC Fellow

In Editing Matters, Michelle Corbin covers matters (topics) about editing that matter (are of consequence) to communicators of all kinds. Watch this space to understand more about editing and what you can do to improve the quality of your content. To suggest a topic or ask a question, contact Michelle at michelle.l.corbin@gmail.com.

I firmly believe that technical editors are the arbiters of quality. My soapbox (which I formalized in a journal article) has always been that “technical editing is a quality assurance process.” A complete quality assurance process involves:

  1. Defining your quality goals.
  2. Quantifying your quality goals so you can measure them (defining your metrics).
  3. Taking action on the measurements.

Defining What Quality Means

For technical documentation, quality means adhering to a set of standards and exhibiting certain quality characteristics. Or, put another way, it is the absence of defects as defined by those standards or quality characteristics. Some documentation teams define quality as the degree to which the customer’s expectations are met or surpassed, and use beta tests, usability tests, and customer surveys to gather metrics.

Technical editors can help ensure that the documentation is free from defects, meets the agreed-upon standards, and exhibits the desired quality characteristics.

One of the most robust definitions of quality that I know is the nine quality characteristics that are detailed in Developing Quality Technical Information: A Handbook for Writers and Editors. Within each quality characteristic, there are numerous guidelines that help writers and editors deliver quality documentation. Whenever a guideline is not followed, it constitutes a potential defect.

Measuring Your Quality

To measure your quality goals, you need a way to quantify them. A standard set of metrics (numbers, ratings, or rankings) will help you apply the measurements consistently and repeatedly. Take baseline measurements first, and then repeated, periodic measurements, so that you can show quality improvement over time. On the Focused Objective blog, one of the tips for avoiding the traps of metrics is to avoid focusing on single-point values and instead to focus on trends and the bigger picture.

One of the ways that IBM has measured the quality of its documentation is by using the Editing for Quality process. We detailed this process in a Technical Communication article, but I’ll summarize it here:

  1. Completing a detailed, comprehensive (substantive) edit, and writing a quality report that lists and categorizes the issues according to the nine quality characteristics defined in Developing Quality Technical Information.
  2. Using the quality report to assign a rating (one to five, from very satisfied to very dissatisfied) for each of the nine quality characteristics. A team of editors then reviews and confirms these ratings.
  3. Computing an overall quality score. A formula weights each of the nine quality characteristics, multiplies each assigned rating by its weight, and converts the result to a single number on a scale of 0 to 100 (sketched in the example below).

This Editing for Quality process produces a metric that measures the quality of a piece of content.
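To make the arithmetic in step 3 concrete, here is a minimal sketch in Python. The weights and the linear conversion from the 1-to-5 rating scale to a 0-to-100 score are my own illustrative assumptions, not the published IBM formula:

  # Hypothetical weights for the nine quality characteristics from
  # Developing Quality Technical Information; a real team would choose
  # its own weights (they must sum to 1.0).
  WEIGHTS = {
      "task orientation": 0.15, "accuracy": 0.15, "completeness": 0.10,
      "clarity": 0.15, "concreteness": 0.10, "style": 0.05,
      "organization": 0.10, "retrievability": 0.10,
      "visual effectiveness": 0.10,
  }

  def quality_score(ratings):
      """Convert 1-to-5 ratings (1 = very satisfied) to one 0-to-100 score."""
      # Weighted average of the ratings, still on the 1-to-5 scale.
      weighted = sum(WEIGHTS[c] * r for c, r in ratings.items())
      # Linear conversion: 1 (best) maps to 100, 5 (worst) maps to 0.
      return (5 - weighted) / 4 * 100

  # Example: mostly "satisfied" (2), with weaker clarity and completeness.
  ratings = dict.fromkeys(WEIGHTS, 2)
  ratings.update(clarity=3, completeness=4)
  print(round(quality_score(ratings)))  # 66

The point of the weighting is that not all characteristics matter equally to every audience, which is also why a team of editors confirms the ratings before the score is computed.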

In my last Editing Matters column, I talked about Acrolinx as a grammar checker, or as “content optimization software.” What I did not mention was the scorecard and the quality score that it provides after running your content through its “advanced linguistic analytics engine.” When you install Acrolinx, you configure the set of grammar, style, and terminology guidelines (essentially, you define what “quality” means). Acrolinx’s analytics engine then uses your quality definition to identify defects and measure the quality of the content. You see a score for each category of guidelines (the normalized document length divided by the normalized document length plus the number of issues in that category), and an overall score that averages the individual category scores. You can also define what counts as an excellent, acceptable, or unacceptable score.
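That scorecard arithmetic is simple enough to sketch as well. In this illustration, the variable names are mine, not Acrolinx’s, and I assume the scores are reported on a 0-to-100 scale:

  def category_score(normalized_length, issue_count):
      # More issues relative to the document's normalized length pull
      # the category score down from 100 toward 0.
      return 100 * normalized_length / (normalized_length + issue_count)

  def overall_score(normalized_length, issues_by_category):
      # The overall score is the average of the category scores.
      scores = [category_score(normalized_length, n)
                for n in issues_by_category.values()]
      return sum(scores) / len(scores)

  # Example: a document with a normalized length of 50.
  issues = {"grammar": 3, "style": 7, "terminology": 0}
  print(round(overall_score(50, issues)))  # 94

Notice that a category with zero issues scores a perfect 100, and that a longer document can absorb more issues before a category score drops noticeably.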

Taking Action on the Measurements

Regardless of how you collect the data, what matters most is what you do with it. Once you’ve identified the list of defects, you must work to remove as many as you can.

Personally, I love Acrolinx as a way to measure the quality of information because I can run it on the content before I do a technical edit to get a baseline measurement, run it again after the writer has addressed the editing comments, and immediately see the quality improvement (hopefully!). Acrolinx can also serve as a final quality check before you release your information, catching any issues that crept in with iterative updates.

By tracking the metrics over time and watching the trends in quality improvement, you can use the data to help justify more resources (writers or editors) or more time, as needed.

So, why measure quality?

Really, what’s the point of defining what quality means, measuring quality, and taking action on the measurements?

  • To measure the value of the information and provide:
    • Improved, simplified documentation
    • Easier-to-use documentation
  • To increase benefits:
    • Increase productivity (your own, but also your customers’)
    • Increase satisfaction (your customers’, but also your own)
    • Increase sales (if you’re doing it right!)
  • To decrease costs:
    • Decrease documentation production costs (resources, processes)
    • Decrease support costs (training, help desk, maintenance)
  • To demonstrate that writers and editors have a positive effect on the quality and the value of the documentation and ultimately the end product.

References

Carey, Michelle, Moira McFadden Lanyi, Deirdre Longo, Eric Radzinski, Shannon Rouiller, and Elizabeth Wilde. Developing Quality Technical Information: A Handbook for Writers and Editors, 3rd ed. Indianapolis, IN: IBM Press, 2014.

Corbin, Michelle. “[Grammar] Check Please!” Intercom 65.4 (2018): 28–29.

Corbin, Michelle, Pat Moell, and Mike Boyd. “Technical Editing as Quality Assurance: Adding Value to Content.” Technical Communication 49.3 (2002): 286–300.

Wilde, Elizabeth, Michelle Corbin, Jana Jenkins, and Shannon Rouiller. “Defining a Quality System: Nine Characteristics of Quality and the Editing for Quality Process.” Technical Communication 53.4 (2006): 439–446.

“How the Acrolinx Score Is Calculated.” Acrolinx. Retrieved 7 July 2018. https://support.acrolinx.com/hc/en-us/articles/204251822#How_the_Acrolinx_Score_is_Calculated_.

“Metrics Don’t Have to Be Evil—5 Traps and Tips for Using Metrics Wisely.” Focused Objective. Retrieved 7 July 2018. http://focusedobjective.com/metrics-dont-evil/.

“Scorecard Essentials.” Acrolinx. Retrieved 7 July 2018. https://support.acrolinx.com/hc/en-us/articles/205446062-Scorecard-Essentials.