
A Framework for Thinking about Documentation Quality

By Steven Jong | STC Fellow

Our interest in documentation quality isn’t new, and through the years, the questions have been the same: What is documentation quality? How do we measure it? And most importantly, how do we achieve it?

U.S. Supreme Court Justice Potter Stewart famously refused to offer a legal definition of pornography, saying only, “I know it when I see it.” The concept of quality seems equally elusive. The literature is full of different (and sometimes contradictory) definitions, and some have even declared that a universal definition of quality is impossible.

What’s the problem? We recognize quality all around us. We desire it in our products and services as consumers, and most people consider higher quality something worth paying extra for. But when we are asked (usually by an engineering manager) what quality means in our own field, and particularly when we are asked to meet a specific quality standard, we reflexively balk and claim that writing is art and an ineffable mixture of taste and style.

Attributes of Quality

In search of documentation quality attributes, our instinct has always been to ask our customers what quality means to them. Over the years, practitioners and academics have administered numerous surveys. This is good and necessary work, because products and modes of communication change over time. Each attempt to define or apply documentation quality, and each new survey, yields a fresh set of attributes. Yet each survey is only a snapshot in time of a limited set of customers about a limited set of products from a limited set of vendors. We would like to combine the results, but survey methodologies, and the questions asked in each one, change over time as well.

With enough answers, could we determine exactly what attributes bring quality? The problem is establishing which attributes cause quality and which are just associated with quality. Consider this attribute: “Every statement in the document should be necessary and correct.” It’s safe to say that correctness causes good quality, while incorrectness causes poor quality. Now, consider this one: “Timestamps in examples should match the product release date.” This attribute is associated with good quality, but it’s not causal. Examples can have old timestamps and still be correct. But they raise doubts in the reader’s mind, because outdated examples tend to be inaccurate.

At one time, a popular working definition of documentation quality centered on four key attributes: correct, complete, clear, and concise. This definition sounds good; it’s hard to argue that an incorrect, unclear, or prolix work is of high quality. However, completeness hasn’t aged as well, because the principle of minimalism suggests it’s not actually desirable. Another once-popular attribute, index thoroughness, has fallen into disuse, because users today overwhelmingly prefer search engines to indexes. Meanwhile, accessibility has become important. All quality attributes are subject to similar interpretation and evolution.

Also, quality consists of both things present and things absent: the presence of positive attributes (though some are implicit) that customers ask for, and the absence of negative attributes, which customers won’t mention but don’t want. Just the other day, I ordered a takeout salad that arrived with a beetle crawling through it. Previously I would not have defined quality in salads as “having no bugs”—I’ve never ordered a salad “hold the bugs”—and maybe someday people will regard them not as bugs but as features. For now, I’ll at least make it a point of inspection.

We shouldn’t just discard potential attributes; they all reflect some underlying truth. Instead, I think we should categorize them (grouping attributes into categories that one hopes remain stable over time), and then determine which ones are most important. To make sense of them all, we need a framework to think about quality, one into which every quality attribute and category can fit.

Stakeholders in Quality

I think the best way to classify quality attributes is to consider the perspective of stakeholders, a business concept familiar to most of us. Who cares if the work we do is of high quality? Our customers, obviously, but it’s more than just them. Technical communicators exist in the realm of work for hire intended for an audience. The full set of stakeholders comprises customers (our audience), clients (those who hire us), and communicators (we ourselves). Each stakeholder has a different perspective on documentation quality, and all of the perspectives are valid (if perhaps not equal).

Customers

Most quality attributes are customer-facing. Customers say they want clear and accurate information. Some companies define “quality” wholly as customer satisfaction, and many academic and practitioner surveys focus on what users think of information products. You’ll find plenty of examples, so I won’t try to add to this expanding body of knowledge, but customer quality attributes are typically grouped into these categories:

  • Audience: The document is appropriate for the intended audience.
  • Writing: The information is correct and clear.
  • Editing: There are no spelling or grammatical errors.
  • Illustration: Diagrams are crisp and clear, and screenshots are useful and legible.
  • Organization: Topics are logically grouped.
  • Navigation: It’s easy to find and get to information.
  • Production: Physical documents are well printed and bound; online documents are well laid out on all display devices.

Clients

Customer-facing attributes are well known, but few of them touch on the needs of clients. What do clients want? Geoffrey Bessin offers a novel approach: “Quality is 1) a well-defined process for 2) creating a useful product that 3) adds value for both the consumer and the manufacturer” (2004). Client quality attributes are almost entirely different from what customers look for. The motto of a discount chain that once did business in my area was “good stuff, cheap”; that’s it in a nutshell. Every business wants the best product it can make for the lowest cost of production. The fundamental job description at my first employer was to produce “timely and accurate” documentation. Yes, accuracy was a shared attribute, but timeliness—documents ready on schedule to support a release—came first. A quality production process is timely, productive, efficient, repeatable, and predictable.

Years ago, I watched two colleagues document an email product with two user interfaces. A veteran writer was assigned to write the command-line interface manual, while a junior writer was tasked with writing the forms-based interface manual. Development was ongoing and some functions were volatile. The newbie energetically tackled the changing functions first. She met daily with developers, tracked every change closely, and regularly sent out drafts for review. Meanwhile, the veteran worked steadily, starting with the stable functions and leaving the unstable ones until they settled down near the end. When the drafts came due, the veteran was done, but the newbie had finished just the one chapter. To save the release, the rest of us had to pitch in. We got it done, but only after nights and weekends of heroic work. Judging from their reception, the resulting books were equally accurate and effective, so to customers they were of equal quality. But from the client’s perspective, the work of the veteran—completed on time, on budget, and without draining additional resources—was of much higher quality than the work of the inexperienced writer, which was a debacle.

Communicators

As technical communicators, our own views on quality matter, too. Much of what we do is invisible, or implicit, in that we are paid to avoid—or at least to root out—errors of omission and commission. Readers give no credit for lack of errors but are quick to complain if they spot any. (A 2019 study by Website Planet found that Web visitors are nearly twice as likely to bounce off a site if the first page they see has a spelling or grammatical error.) Our professional standards and ethics drive us away from negative quality attributes, such as errors and typos, and toward positive attributes, such as clarity, concision, and consistency, which lie entirely within the domain of writers, graphic artists, and editors, or sometimes one person assuming all of these roles. A roomful of reviewers can make something accurate, but rarely can they make it clear or concise. That’s up to us.

Doing an excellent job is great, but can you repeat the results? An individual piece of technical communication can be unique to one release of one product. Take a step back, or look over time, and you recognize the similarities between document versions, documents of the same type for different products, and types of information. (This is the theory behind DITA.) Technical communication is, in many ways, a manufacturing process, so the principles of process quality apply. The hallmark of professionalism is consistency of results; even lone writers create style guides. Getting the job done right every time requires focusing energy on elements that differ, scheduling work realistically, and obtaining regular technical reviews. These are all elements of process quality that clients might not value, but we should. The best way for us to synthesize our processes is with checklists, which are themselves collections of quality attributes.

Taking the Measure of Quality

Product managers tell us that what can’t be measured can’t be managed. It’s difficult to demonstrate, improve, or even maintain quality unless you can measure what you’re doing.

It’s hard to measure poorly defined attributes. Even a simple metric like errors per page requires common understanding of what constitutes an error and a page, as well as how to take the measurement. Do we measure the entire document or just a sample? Do we measure graphics? Other attributes are even more challenging: how do you measure clarity? Can there be too many illustrations, or too few?

With careful definitions, though, it is possible to establish and use documentation quality metrics. This is the theory behind publication competitions. Measurement requires an agreed-upon formula, an understanding of the range and domain, and a protocol (how to measure). Effective metrics are repeatable and objective: the number you get today must be the same if you measure again tomorrow, and my measurement must match yours. Effective metrics can also be collected and combined.
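To make those requirements concrete, here is a minimal sketch in Python (the language, the 300-word page, and the error counts are illustrative assumptions, not anything prescribed here) of what “errors per page” looks like once the formula and the protocol are pinned down:

```python
WORDS_PER_PAGE = 300  # protocol decision: what counts as a "page"

def errors_per_page(error_count: int, word_count: int) -> float:
    """Errors per page, where a page is a fixed number of words.

    The formula, the page definition, and what counts as an error all
    have to be agreed on in advance for the number to be repeatable.
    """
    pages = max(1, word_count / WORDS_PER_PAGE)
    return error_count / pages

# Example: a 6,000-word chapter with 8 logged errors yields 0.40 errors per page.
print(f"{errors_per_page(8, 6000):.2f} errors per page")
```

Because every term is defined, the measurement is repeatable and objective: anyone who runs it on the same chapter gets the same number, and numbers from different chapters can be combined.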

You can apply some metrics to finished products (for example, is the steak cooked medium rare?) and others during the production process (did the center reach 60°C/140°F?). Applied to documentation, evaluative metrics assess the quality of a completed information product using customer-facing attributes; they enable quality control. Predictive metrics assess the quality of a draft information product using client-facing attributes; they enable quality assurance. It’s impractical to ask technical communicators to spend a lot of time collecting predictive metrics while they’re working, so the best metrics are both valuable and easily obtained (or well worth the effort to collect). Perhaps the most readily available, crudest, and fastest predictive tool is the automated readability checker in Microsoft Word. More thorough and nearly as fast is Schematron, an ISO-standard rule language for validating XML, which can flag defects in XML document source files.
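As an illustration of how crude such a predictive check can be, here is a minimal Python sketch of a readability score. It assumes the published Flesch Reading Ease formula and a naive vowel-run syllable count, so treat its output as a rough indicator rather than a stand-in for Word’s built-in statistics or a Schematron validation:

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable count: runs of vowels in the lowercased word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Approximate Flesch Reading Ease; higher scores read more easily."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

draft = "Click Save. The file is written to the folder you chose."
print(f"Reading ease: {flesch_reading_ease(draft):.1f}")
```

A score like this passes the repeatability test described above; its weakness is validity, because it flags long sentences and long words but says nothing about whether the content is accurate.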

For assessing customer-facing documentation quality attributes, a checklist such as the one compiled in Developing Quality Technical Information (Carey et al. 2014) is an effective tool.

Client quality attributes—business metrics—lend themselves more readily to precise definition and measurement, which is why the few most important process metrics are what you will most often see on a dashboard. Dr. JoAnn Hackos’s Information Development: Managing Your Documentation Projects, Portfolio, and People (2006) is a good source for client-facing attributes.

Collecting data is good; extracting information from data is better. Composite metrics, which measure two or more elements at once (such as words per topic, which is information), are more meaningful than simple metrics (such as word counts, which are data).
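As a hypothetical illustration (the topic names and word counts below are invented), a short Python sketch shows how a composite metric such as words per topic tells you something the raw counts alone do not:

```python
# Simple metrics: raw word counts per topic (data).
word_counts = {"install": 450, "configure": 1800, "troubleshoot": 950}

# Composite metric: words per topic (information extracted from the data).
total_words = sum(word_counts.values())
words_per_topic = total_words / len(word_counts)

print(f"Total words: {total_words}")
print(f"Average words per topic: {words_per_topic:.0f}")

# The composite metric also points to outliers worth a closer look.
for topic, words in word_counts.items():
    if words > 1.5 * words_per_topic:
        print(f"'{topic}' is well above average; consider splitting it.")
```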

Considering All Perspectives

Within this framework, then, quality attributes come from three sources.

  • What satisfies customers is the most important part of the quality equation. They are the primary stakeholders, because if they don’t buy what our clients are selling, our clients will go out of business.
  • The next most important source of attributes (and the most fertile source of metrics) is process, the perspective from which our clients view our work. Clients are the secondary stakeholders, because if they don’t like our work, we’ll go out of business.
  • The last part of the equation is the value that we as technical communicators add to our own work through style guides, checklists, and personal skill. When push comes to shove, we must accede to both customer demands and client standards.

Not all quality attributes are equally valued by all stakeholders. Every company claims its customers are paramount, but tension exists between the products and services a company offers and the money that they’re willing (and able) to invest in manufacturing them. For example, topic reuse and sharing—the strength of DITA—reduce client costs by eliminating nearly identical blocks of text (software developers will recognize them as “clones”). But writers in DITA shops know that customers see text optimized for reuse as vague.

A professional informed by best practices can quickly and efficiently produce high-quality results. No customer has ever complained that a document was too well written or that its illustrations were too attractive! Yet there’s also tension between our urge to craft and polish prose, which we know can always be clearer and more concise, and the schedule and budget constraints of our clients. From their perspective, it’s possible for us to add too much quality by taking too long. Our job is fundamentally a compromise: to do the best we can with the time and resources at hand.

If you draw a Venn diagram (see Figure 1) grouping customer, client, and communicator quality attributes, I believe the customer circle will be the largest, and ours the smallest. There will be attributes that matter to each stakeholder but not as much to the others, attributes that appeal to two stakeholder groups, and attributes valued by all three. I can’t suggest how much the circles overlap, but the sweet spot will contain attributes that all stakeholders agree are important. We should focus on well-defined attributes that all stakeholders value, starting with accuracy. Where an attribute is valued by some, but not all, stakeholders, we should favor attributes valued by at least two of them. In that way, we can filter the universe of potential attributes into a manageable—and measurable—set.
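That filtering rule is easy to express. The sketch below, in Python, uses invented attribute lists purely to illustrate keeping the attributes valued by all three stakeholder groups, or by at least two of them:

```python
# Hypothetical attribute sets; real ones would come from customer surveys,
# client business metrics, and our own style guides and checklists.
customers = {"accuracy", "clarity", "findability", "good visuals"}
clients = {"accuracy", "timeliness", "low cost", "findability"}
communicators = {"accuracy", "clarity", "consistency", "timeliness"}

# The sweet spot: attributes every stakeholder values.
all_three = customers & clients & communicators

# Worth keeping: attributes valued by at least two stakeholder groups.
at_least_two = (
    (customers & clients) | (customers & communicators) | (clients & communicators)
)

print("Valued by all three:", sorted(all_three))
print("Valued by at least two:", sorted(at_least_two - all_three))
```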

Figure 1. Common Attributes within the Quality Framework

This framework also gives us a test for evaluating new attributes. Do you have evidence that a potential attribute is valued by customers, clients, communicators, some combination thereof, or all stakeholders? The best new attributes are well defined, evidently valuable to all stakeholders, and easily measurable (or well worth the effort to measure).

Summary

The bottom line is that while no number of attributes can fully capture documentation quality, some attributes are more revealing than others. By adopting a framework of quality as important to customers, clients, and communicators—in that order—and by considering both product and process quality, we can classify quality attributes, determine which are most relevant and valuable, evaluate potential new ones, and focus on the most important ones—perhaps few enough to fit on a dashboard—without getting bogged down in details.

Resources

Bessin, Geoffrey. The Business Value of Software Quality. 15 June 2004. https://www.ibm.com/developerworks/rational/library/4995.html.

Carey, Michelle, Moira McFadden Lanyi, Deirdre Longo, Eric Radzinski, Shannon Rouiller, and Elizabeth Wilde. Developing Quality Technical Information: A Handbook for Writers and Editors. Upper Saddle River, NJ: IBM Press, 2014.

Hackos, JoAnn T. Information Development: Managing Your Documentation Projects, Portfolio, and People. Indianapolis, IN: Wiley Publishing, 2006.

Jong, Steven. “Quality Programs: Six Sigma.” STC DocQment 9.2 (2002).

Stieglitz, Sarah. “Your Typo is Costing You 12% Extra on Your Google Ads Spend.” Website Planet, 6 August 2019. https://www.websiteplanet.com/blog/grammar-report/.

STEVEN JONG (stevefjong@comcast.net) has been a member of STC for more than 35 years. He contributed the “Musing on Metrics” column for the Quality SIG from 1996 to 2005. Steve is a Fellow and the recipient of the 2012 President’s Award. He has served on the STC Board of Directors and the first Certification Commission, and he is in his third term as President of the New England Chapter.
