By Yoel Strimling
Purpose: The purpose of this paper is to propose a preliminary, focused, clearly defined, and reader-oriented model for collecting meaningful and actionable feedback to improve documentation quality and increase reader satisfaction. This model is based on a narrow yet comprehensive set of 15 distinct information quality dimensions (drawn from previous research by Wang and Strong, 1996) that cover all categories of information quality – Intrinsic, Contextual, Representational, and Accessibility (ICRA). Research was done to determine which information quality dimensions readers rated as most important per category (as they related to documentation), which were then used to create a clear, comprehensive, and empirically based definition of documentation quality from the readers’ point of view. This definition of documentation quality is the heart of the model and provides a strong basis for measuring what readers want from the documentation we send them.
Methods: Questionnaires were sent to readers, asking them to rate Wang and Strong’s information quality dimensions in terms of their importance as applied to documentation. The dimensions were sorted by information quality category, and the most important dimension per category was determined by weighted average.
Results: According to readers, the following four information quality dimensions are the most important per ICRA category for documentation: Accurate, Relevant, Easy to Understand, and Accessible (AREA).
Conclusions: We can use the AREA information quality dimensions to create a preliminary, focused, clearly defined, and reader-oriented model for collecting meaningful and actionable feedback that will improve documentation quality and increase reader satisfaction.
Keywords: documentation quality, information quality, quality definitions, documentation quality feedback, documentation quality assessment
- Feedback from readers about documentation quality must be meaningful and actionable to be worthwhile.
- This article proposes a preliminary, focused, clearly defined, and reader-oriented model for collecting meaningful and actionable feedback, based on empirically tested information quality categories and dimensions.
- Aside from collecting feedback, this model can be used as a starting point for technical communicators and their managers who need to have reliable methods and metrics for measuring documentation quality.
- This model can also be used to help technical communication instructors provide evidence-based materials for teaching students how to write quality documentation.
Socrates. Anyone may see that there is no disgrace in the mere fact of writing.
Phaedrus. Certainly not.
Soc. The disgrace begins when a man writes not well, but badly.
Soc. And what is [written] well and what is [written] badly – need we ask Lysias, or any other poet or orator, whoever wrote or will write either a political or any other work, in meter or out of meter, poet or prose writer, to teach us this?
– Phaedrus (Plato)
As technical communicators, we put a lot of time and effort into creating the best possible documentation we can. We write because we want to help our readers to do the tasks they need to do or to understand the concepts they need to know.
Because we are professionals, we take pride in our work and want it to be the best it can be. But how do we know if what we are writing is what our readers want? How do we know that the information we are sharing with our audience is helping them do or know what they need to do or know? We might be writing documentation with one standard in mind, and be satisfied with it, yet our readers might look at the same documentation and be very unsatisfied.
A disconnect between what we are producing and what our readers actually want makes it very difficult to justify writing documentation at all—why should we write things nobody wants? As Filippo (2007) says, “Without an intimate understanding of our users and their needs, how can we design information intended to assist them, or help them do their jobs more efficiently? (p. 9)”
The best way to align ourselves with our audience’s needs is to get direct feedback from them (Wiley, 2006). But we also need to ensure that the feedback we get is clear and focused, rather than vague and hard to implement. To collect feedback but be unable to act upon it because it is not clear to us what the problem is and what we need to do about it is worse than not collecting any feedback at all.
Getting Feedback from Our Readers
There are many ways to get feedback about our documentation from our readers—for example, Wilson (1999) lists 29 different techniques for testing the usability of documentation, and Barnum (2002) describes numerous methods for collecting feedback about documentation quality from both experts and users. What is common to all of them, however, is that they rely on getting meaningful and actionable feedback to improve documentation quality:
- Meaningful feedback requires readers to focus only on the important issues.
- Actionable feedback requires us to be able to take what our readers tell us and do something about it.
Getting Meaningful Feedback
To get meaningful feedback, we need to make sure that what we ask our readers is presented in a way that maximizes its effectiveness. As Hart (1997) writes, “How you ask a question strongly determines the type of answer that you will obtain. (p. 52)” He goes on to say that questions must be precise, have answers that efficiently direct towards improvements, be framed to prevent simplistic answers, and focus on the problems (what he calls “negative feedback”) that interfere with customer satisfaction. Similarly, Barnum (2002) writes that questions we ask readers about documentation need to be unambiguous, unbiased, and presented so they prompt respondents to answer in a consistent way. Dillman, Smyth, and Christian (2014), in their definitive and comprehensive book on how to design surveys, provide numerous guidelines about how to build questions that adequately measure the concepts of interest to the questioner (e.g., use as few words as possible to pose the question, find specific and concrete words to specify the concepts clearly, and choose the appropriate question format). The fundamental theory behind these guidelines is that the questions must be presented in a way that every potential respondent will be willing to answer, will be able to respond to accurately, and will interpret in the way the questioner intended.
Table 1 lists other characteristics of good feedback questions; although the focus of some of these is on how survey questions should be written, these characteristics apply to all methods of collecting feedback from readers (for example, user focus groups, usability testing, and other face-to-face interactions).
Based on these characteristics, we can say that to get meaningful feedback from our readers, we need to ensure that we meet the following criteria:
- We must focus only on the most important issues from the readers’ point of view.
- We must ask the fewest possible questions that can cover all of these important issues.
- We must use terminology that can be clearly and universally understood by all respondents.
Getting Actionable Feedback
But it is not enough for us to collect meaningful feedback about our documentation from our readers. We also need to be able to use the information we collect to take actions that will help us directly address and prioritize the issues that are important to them (Parameswaran, 2005; LaMalfa & Caruso, 2009).
Table 1. Characteristics of good feedback questions

| Source | Characteristics of good feedback questions |
| --- | --- |
| Barnum (2002) | Long enough to be useful, and short enough to encourage participation |
| Bevis & Henke (2008) | Short and focused |
| Dillman, Smyth, & Christian (2014) | Questions need to enable the respondent to answer willingly, respond accurately, and interpret the question in the way the questioner intended |
| Hacker & LaMalfa (2009) | |
| Lacki (2010) | Focus only on the issues that are the most important to the target audience (to separate the signal from the noise) |
| LaMalfa & Caruso (2009) | Based on concepts that are clear and universally understood by the target audience |
| Redish (2008) | Short and asked in a neutral manner |
For feedback to be actionable, it must meet the following criteria:
- The readers’ responses must be unambiguous.
- The issues that the respondents are concerned about must be easily understood and easily addressable by the people the feedback is intended for.
In other words, if an answer is vague, it becomes almost impossible to know how to solve the problem and whether it was solved successfully. But if we know exactly what the issue is, then we know how to fix it and who in the organization is responsible for fixing it (Hacker & LaMalfa, 2009; Lacki, 2010).
Defining Documentation Quality
Collecting feedback from an audience requires asking questions that can supply meaningful and actionable answers. But when we attempt to collect this type of feedback from our readers about the quality of the documentation we give them, we need to add another criterion to those listed in the previous section—we need to know what readers mean when they talk about documentation quality.
Numerous attempts have been made to define quality, and it is beyond the scope of this paper to go into detail for each one (Table 2 summarizes the most widespread and accepted definitions). Reeves and Bednar (1994) conclude that there is no such thing as a universal definition of quality—different definitions are appropriate under different circumstances and for different users. However, all quality definitions point to the same thing—it is the user who is the final arbiter of what quality is and what it is not.
This is especially true for documentation, which is always written for a potential audience and must always keep their needs in mind (see, for example, Barnum & Carliner, 1993; Brusaw, Alred, & Oliu, 1993; Redish, 1993; Robinson & Etter, 2000; Carey et al., 2014). Some readers might be looking for conceptual information, some might need procedural information, and some might want in-depth reference information—for documentation, it is always the reader who is the final arbiter of what quality is and what it is not (Mead, 1998).
Current Definitions of Documentation Quality
To understand what readers actually want from the documentation we give them, we must first find out how readers define documentation quality so we can align ourselves with their expectations. It is, of course, very important that the concepts that readers use to define documentation quality be understood in the same way by writers and that they can be measured consistently.
Table 2. Quality definitions

| Source | Definition |
| --- | --- |
| Crosby (1979) | Quality is “conformance to requirements,” that is, meeting the customer’s expectations, both stated and implied. |
| Deming (1986) | Quality is that which the customer specifies, and depends on the customer’s needs. |
| Juran (1988) | Quality is “fitness for use,” that is, it meets the customer’s needs and is free from deficiencies. |
| ISO 9000 (2015) | Quality is the degree to which a set of inherent characteristics of an object fulfils requirements. |
Table 3. Documentation quality definitions

| Source | Definition |
| --- | --- |
| Cover, Cooke, & Hunt (1995) | |
| DQTI (Carey et al., 2014) | |
| Hackos et al. (1995) | |
| Robinson & Etter (2000) | |
| Smart, Seawright, & DeTienne (1995) | |
| Telcordia Technologies Generic Requirements Document – GR-454-CORE (1997) | |
| Weinstein & Sandman (1993) | |
The summary review of the literature presented in Table 3 shows that there are many different possible definitions for documentation quality. However, there are a number of problems with these definitions.
First of all, many of the traits are vague and can be defined in multiple ways—what is meant by “familiar to the reader” (Brown, 1995) or “worthwhile” (Bush, 2001)? Does “easy to read” (Quesenbery, 2001) mean the same thing as “highly readable” (Betz, 1996), “interesting to read” (Spyridakis, 2000), or “readable” (Gregory, 2004)? Is the ISO/IEC26514:2008 standard’s “easy to understand” the same as the DQTI’s (Carey, et al., 2014) “easy to understand”?
Secondly, it is very difficult to objectively measure some of these quality traits. Although terms such as “complete,” “accurate,” or “clear” (for example, in Tarutz, 1992) are relatively straightforward, the definitions used by Albers (2005) and Manning (2008) listed in Table 3 are purely subjective. Eppler (2006) defines subjective quality as “meeting expectations” and objective quality as “meeting requirements,” and states that any approach to quality must take this twofold nature of quality into account. Similarly, Pirsig (1974), in his classic Zen and the Art of Motorcycle Maintenance, says that “quality is the relationship of the two [objective quality and subjective quality] with each other, (p. 304)” which means that it is important that our definition of documentation quality includes both types.
Lastly, the fact that there are so many different definitions of documentation quality is itself a problem. Although it is true that different definitions are appropriate under different circumstances, in different contexts, and for different readers, there is no single trait in any of the definitions in Table 3 that can be found in all of them. It is highly unlikely that the definition of documentation quality changes so much from situation to situation (and from reader to reader, regardless of the type of information) that there is not at least some overlap between them.
This lack of a clear and unified definition of documentation quality leaves documentation providers unable to create a robust and repeatable way to collect feedback. A Google search for the phrase “documentation feedback” returns documentation feedback surveys from a wide variety of companies—but none of them ask the same questions in the same way, and none of them can be easily compared to the others.
Criteria Needed to Define Documentation Quality
To properly define documentation quality, we must therefore meet the following criteria:
- The definition must be from the readers’ point of view: Because it is the readers alone who determine if the document we give them is high quality or not, any definition of documentation quality must come from the readers’ perspective. Writers can come up with any number of quality attributes that they think are important, but, at the end of the day, what they think is not as important as what the readers think.
- The definition must be clear and unequivocal: Both readers and writers have to “be on the same page” when it comes to what makes a document high quality. Misunderstandings of what readers actually want from the documentation are a recipe for unhappy readers.
- The definition must cover all possible aspects of quality: Quality is a multidimensional concept, and we must be sure that any attempt to define it is as comprehensive as possible. A definition that emphasizes one dimension over another, or leaves one out altogether, cannot be considered to be a usable definition.
- The definition must have solid empirical backing: To be considered a valid definition of documentation quality, serious research must be done to give it the proper theoretical underpinnings. Years of experience or anecdotal evidence can act as a starting point, but if we are serious about our professionalism and our documentation, we need more.
Building a Comprehensive Definition of Documentation Quality
The goal of this paper is to use the four documentation quality criteria presented in the previous section to create a preliminary, focused, clearly defined, and reader-oriented model for collecting meaningful and actionable feedback from readers, based on how they define documentation quality. To do this, I turned to a groundbreaking study by Wang and Strong (1996) that developed a “comprehensive, hierarchical framework of data quality attributes (p. 8)” that were important to what they called “data consumers”.
Wang & Strong’s Data Quality Framework
The underlying assumption of Wang and Strong’s (1996) approach was that, to improve data quality, they needed to empirically understand what data quality meant to data consumers—data quality cannot be approached intuitively or theoretically because these do not truly capture the “voice of the data consumer.”
To do this, they ran a two-part study. The first part was divided into two “stages”: In the first stage, they collected an extensive list of 179 potential data quality attributes from 137 data consumers; in the second stage, they asked a different group of 355 data consumers to rate the importance of 118 of these attributes (61 attributes were removed as the result of a pretest) using a unipolar, nine-point, closed-ended ordinal scale (often called a Likert scale), with 1 being “extremely important” and 9 being “not important at all.” Using factor analysis of the importance ratings to uncover underlying data structures and their stability, they grouped these attributes into 20 data quality dimensions.
Because Wang and Strong (1996) decided that 20 data quality dimensions were too many for practical evaluation purposes, the second part of the study was designed to sort them into a smaller set of meaningful data quality categories. This second part was divided into two “phases”: in the first phase, they asked 18 data consumers to sort the dimensions into three to five groups, and then name the groups. As a result, five dimensions that were not consistently assigned to a category and had low importance ratings were eliminated, and four preliminary quality categories were identified: Intrinsic quality, Contextual quality, Representational quality, and Accessibility quality (ICRA). In the second phase, 12 other data consumers were asked to sort the remaining 15 dimensions into the predefined categories to confirm that the dimensions indeed belonged in these four categories. The categories and dimensions that make up Wang and Strong’s hierarchical data quality framework are described in Table 4.
Based on their categories, Wang and Strong (1996) concluded that high-quality data must be:
- Intrinsically good
- Contextually appropriate for the task
- Clearly represented
- Accessible to the consumer
Wang & Strong claim that their proposed data quality framework of four ICRA categories and 15 dimensions can be used as a basis for further studies that measure perceived data quality in specific work contexts. They state that the framework is methodologically sound, complete from the data consumers’ perspective, and is useful for measuring, analyzing, and improving data quality. They cite “strong and convincing” anecdotal evidence that the framework has been used effectively in both industry and government, and has helped information managers better understand their customers’ needs by “identifying potential data deficiencies, operationalizing the measurement of these data deficiencies, and improving data quality along these measures. (p. 9)”
Subsequent research on this framework has found that it works very well in identifying and solving information quality issues, and that its underlying methodology (information categories and dimensions) are robust and applicable to real-life information quality situations (Strong, Lee, & Wang, 1997a, 1997b; Wang, 1998; Kahn, Strong, & Wang, 2002; Pipino, Lee, & Wang, 2002; Lee et al., 2002).
Eppler (2006) evaluated seven different information quality frameworks in the literature to determine if they:
- Provide a systematic and concise set of criteria according to which information can be evaluated
- Provide a scheme to analyze and solve information quality problems
- Provide a basis for information quality measurement and benchmarking
- Provide the research community with a conceptual map that can be used to structure a variety of approaches, theories, and information-quality-related phenomena
Eppler divided his evaluation criteria into what he called “analytic criteria” (clear definitions, positioning of the framework within the existing literature, and a consistent and systematic structure) and “pragmatic criteria” (conciseness of the framework, real-world examples to demonstrate it, and tools with which to apply it). He determined that Wang and Strong’s (1996) framework “offers both a solid foundation in existing literature and practical applications (p. 54)” and “is the only framework in the series of seven that strikes a balance between theoretical consistency and practical applicability. (p. 54)”
Table 4. Wang & Strong’s (1996) data quality categories and dimensions
| Category | Dimensions |
| --- | --- |
| Intrinsic Quality: Data must have quality in its own right. | Accuracy: The data is correct, reliable, and certified free of error. Believability: The data is true, real, and credible. Objectivity: The data is unbiased (unprejudiced) and impartial. Reputation: The data is trusted or highly regarded in terms of its source or content. |
| Contextual Quality: Data must be considered within the context of the task at hand. | The Appropriate Amount: The quantity or volume of the available data is appropriate. Completeness: The data is of sufficient breadth, depth, and scope for the task at hand. Relevance: The data is applicable and helpful for the task at hand. Timeliness: The age of the data is appropriate for the task at hand. Value: The data is beneficial and provides advantages from its use. |
| Representational Quality: Data must be well represented. | Conciseness: The data is compactly represented without being overwhelming (that is, it is brief in presentation, yet complete and to the point). Consistency: The data is always presented in the same format and is compatible with previous data. Ease of Understanding: The data is clear, without ambiguity, and easily comprehended. Interpretability: The data is in an appropriate language and units, and the definitions are clear. |
| Accessibility Quality: Data must be easy to retrieve. | Accessibility: The data is available or easily and quickly retrievable. Security: Access to the data can be restricted, and hence, kept secure. |
Applying the Wang and Strong (1996) Data Quality Framework to Documentation Quality
Can Wang and Strong’s (1996) data quality framework be applied to documentation quality as well? Can we use these quality categories and dimensions to create a reader-oriented definition of documentation quality that we can use to get meaningful and actionable feedback?
On the surface, it seems that Wang and Strong’s (1996) data quality framework is a good fit for our purposes. Like data quality, documentation quality cannot be understood through an intuitive or theoretical approach; we must go to the data consumers—that is, our readers. Like data quality, to improve documentation quality, we must understand what documentation quality really means to our readers. And, like data quality, high-quality documentation must be:
- Intrinsically good
- Contextually appropriate for the task
- Clearly represented
- Accessible to the reader
But it is important to make a point clear here before we continue comparing data quality to documentation quality. Wang and Strong’s (1996) framework focuses on data quality—are the terms “data” and “documentation” synonymous and interchangeable?
“Data” are abstract, raw, and meaningless without context (Eppler, 2006; Kumar, 2009). However, when data are organized in a logical way and given context that can be understood by someone or something, they become “information” (Chisholm, 2012). In other words, information is data in a meaningful form (ISO 9000, 2015).
While Wang and Strong’s (1996) stated subject was originally data quality, it seems more accurate to call their framework one of information quality (Wang, 1998; Lee et al., 2002; Arazy, Kopak, & Hadar, 2017). Once data consumers use data, it can no longer really be called data, because it is now being given context by the consumer.
Documentation, then, is not data, but rather information—information that is intended to be used by readers in a particular context for a particular reason. High-quality documentation is high-quality information that is transformed into knowledge: Readers take the information, interpret it, evaluate it, use it to connect with prior knowledge, and then apply it to new contexts (see Eppler, 2006).
Wang and Strong’s (1996) assumptions about the need for an empirical approach to determine what information consumers want, and what high-quality information must be, are parallel to those we are making about documentation quality. Because their framework is really a framework for measuring information quality, and the documentation we send to our readers is used as information, there is a strong basis for attempting to use this framework to create a model for accurately measuring what our readers consider to be high-quality documentation—and then make plans to improve what needs to be improved.
A Proposed Documentation Quality Feedback Model
Using Wang and Strong’s (1996) research findings, the goal of this paper is to create a preliminary, focused, clearly defined, and reader-oriented model for collecting meaningful and actionable feedback from readers, based on how they define documentation quality. To be credible, this model must:
- Focus only on the most important issues
- Contain the fewest possible number of questions
- Use universally understood terminology
- Approach the issues from the readers’ point of view
- Collect unambiguous responses from readers
- Enable writers to easily understand and address readers’ issues
Wang and Strong’s (1996) information quality framework enables us to meet all of these criteria:
- There are only four information quality categories (Intrinsic, Contextual, Representational, and Accessibility).
- Each category covers a distinct measurement of information quality with no overlap between them.
- The categories are based on robust and extensive user research, and their meanings and dimensions are succinct and clearly understood.
Unfortunately, there are a few drawbacks with directly using Wang and Strong’s information quality framework for collecting documentation feedback:
- The categories do not lend themselves easily to the creation of feedback questions—for example, we cannot ask readers, “How was the intrinsic quality of the documentation?”
- The gradations between each category’s dimensions are too fine to be used for focused feedback.
- There are too many dimensions for practical use.
Given these issues, the current study focused on limiting the number of information quality dimensions used to define documentation quality. Arazy and Kopak (2011) and Arazy, Kopak, and Hadar (2017) list a number of studies showing that information users may perceive certain information quality dimensions to be more important than others. Our approach was to find the single most important information quality dimension for each of the four information quality categories (as they relate to documentation), which would then represent the entire category. These four dimensions (one per category) would then serve as the basis for a documentation quality feedback model. A schematic diagram of the proposed documentation quality feedback model is shown in Figure 1.
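The selection step at the heart of this approach—pick, for each ICRA category, the dimension with the lowest mean weight—can be sketched in a few lines of code. This is an illustrative sketch only: the ratings below are hypothetical placeholder values (not the study’s actual responses), and only two of the four categories are shown. As in the questionnaire, a lower mean weight means higher importance.

```python
# Sketch: select the most important dimension per ICRA category.
# Ratings are hypothetical placeholders on the study's nine-point scale
# (1 = "extremely important" ... 9 = "not important at all").
from statistics import mean

ratings = {
    "Intrinsic": {
        "Accurate":   [1, 2, 1, 3, 2],
        "Believable": [2, 3, 2, 3, 2],
        "Objective":  [4, 5, 3, 4, 4],
        "Reputable":  [5, 4, 5, 6, 4],
    },
    "Contextual": {
        "Relevant": [2, 1, 2, 3, 2],
        "Complete": [3, 2, 3, 3, 4],
    },
}

def most_important(ratings):
    """Return {category: dimension with the lowest mean weight}."""
    return {
        category: min(dims, key=lambda d: mean(dims[d]))
        for category, dims in ratings.items()
    }

print(most_important(ratings))
# → {'Intrinsic': 'Accurate', 'Contextual': 'Relevant'}
```

The representative dimension is simply the per-category minimum of the mean weights; with the study’s real data, this yields the AREA dimensions reported in the Results section.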
A questionnaire was developed, asking readers to rate Wang and Strong’s (1996) 15 information quality dimensions by their perceived importance, as they applied to documentation. Some minor modifications were made to the original information quality dimension definitions (the word “data” in the definitions was changed to “information in the documentation”).
The link to the questionnaire (https://www.surveymonkey.com/r/V2ZX5FP) was sent to technical communicators (who were contacted via the STC website and SIG groups, as well as numerous technical writing groups on Facebook) and customer service personnel from various companies, who were then asked to send it on to their readers. This was done to ensure that a broad, worldwide range of readers from different fields answered the questionnaires, and that the people answering the questions were the people who actually read and used the documentation.
For a list of the sources of reader questionnaire responses, see APPENDIX A: READER QUESTIONNAIRE SOURCES.
Rating and Data Analysis
As in the second stage of the first part of Wang and Strong’s (1996) study, the information quality dimensions were rated on a nine-point Likert scale, with 1 labeled “extremely important” and 9 labeled “not important at all.” Even though such a scale is cumbersome, it was used in this study to get a finer gradation between the weights; Griffin and Hauser (1993) found that a nine-point Likert scale is acceptable for measuring the importance of customer needs. A scale with fewer points compresses responses into a narrower range of weights, while more points can reveal finer differences—which matters especially in a study like this one with a small sample size. In future studies that validate the robustness of this model, it is recommended that a larger sample be studied and a five-point Likert scale be used (as suggested by Barnum, 2002; Wiley, 2006; Dillman, Smyth, & Christian, 2014; Revilla, Saris, & Krosnick, 2014).
The dimensions were sorted by information quality category, and the mean weight and standard deviation of each was calculated—the lower the weight, the more important the dimension. For each information quality category, the dimension with the lowest mean weight was considered the most important and represented the entire category. A one-way ANOVA (run at http://statpages.org/anova1sm.html) was used to determine whether the differences in mean weights between the dimensions in each category were significant (set as p < 0.05).
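As a rough illustration of this analysis, the sketch below computes the mean weight and standard deviation for two dimensions of one category and runs a one-way ANOVA on them. The ratings are hypothetical placeholders, not the study’s responses, and SciPy’s `f_oneway` stands in for the online calculator used in the study.

```python
# Illustrative sketch of the per-category analysis; the ratings below are
# hypothetical placeholders, not the study's actual data.
from statistics import mean, stdev
from scipy.stats import f_oneway

# Importance ratings on the nine-point scale
# (1 = "extremely important" ... 9 = "not important at all").
accurate = [1, 2, 1, 3, 2, 1, 2, 2, 1, 3]
believable = [2, 3, 2, 4, 2, 3, 2, 3, 2, 4]

# Mean weight and standard deviation; the lower the mean weight,
# the more important the dimension.
for name, sample in [("Accurate", accurate), ("Believable", believable)]:
    print(f"{name}: mean = {mean(sample):.2f}, SD = {stdev(sample):.2f}")

# One-way ANOVA: is the difference in mean weights between the
# dimensions of this category significant at p < 0.05?
f_stat, p_value = f_oneway(accurate, believable)
print(f"F = {f_stat:.4f}, p = {p_value:.4f}")
print("significant" if p_value < 0.05 else "not significant")
```

With real questionnaire data, each category’s dimensions would be passed to `f_oneway` together; the dimension with the lowest mean weight represents the category.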
A total of 81 readers responded to the reader questionnaire, but only 80 of them rated all of the information quality dimensions. The following information quality dimensions were rated as most important (that is, had the lowest mean weights) per information quality category:
- From the Intrinsic information quality category = Accurate (n = 81, mean = 1.80, SD = 1.18), as shown in Figure 2 and Table 5
Table 5. Descriptive statistics for intrinsic documentation quality dimensions

| Quality Dimension | n | Mean | SD |
| --- | --- | --- | --- |
| ACCURATE | 81 | 1.80 | 1.18 |
| BELIEVABLE | 81 | 2.25 | 1.77 |
- From the Contextual information quality category = Relevant (n = 81, mean = 1.96, SD = 1.22), as shown in Figure 3 and Table 6
Table 6. Descriptive statistics for contextual documentation quality dimensions

| Quality Dimension | n | Mean | SD |
| --- | --- | --- | --- |
| RELEVANT | 81 | 1.96 | 1.22 |
| THE APPROPRIATE AMOUNT | 81 | 3.01 | 2.08 |
- From the Representational information quality category = Easy to Understand (n = 81, mean = 1.91, SD = 1.15), as shown in Figure 4 and Table 7
Table 7. Descriptive statistics for representational documentation quality dimensions

| Quality Dimension | n | Mean | SD |
| --- | --- | --- | --- |
| EASY TO UNDERSTAND | 81 | 1.91 | 1.15 |
- From the Accessibility information quality category = Accessible (n = 81, mean = 2.20, SD = 1.46), as shown in Figure 5 and Table 8
Table 8. Descriptive statistics for accessibility documentation quality dimensions

| Quality Dimension | n | Mean | SD |
| --- | --- | --- | --- |
| ACCESSIBLE | 81 | 2.20 | 1.46 |
The full range of information quality dimension mean weights and descriptive statistics is presented in Figure 6 and Table 9.
Table 9. Descriptive statistics for all documentation quality dimensions

| Quality Category | Quality Dimension | n | Mean | SD |
| --- | --- | --- | --- | --- |
| Intrinsic | ACCURATE | 81 | 1.80 | 1.18 |
| Representational | EASY TO UNDERSTAND | 81 | 1.91 | 1.15 |
| Contextual | RELEVANT | 81 | 1.96 | 1.22 |
| Accessibility | ACCESSIBLE | 81 | 2.20 | 1.46 |
| Intrinsic | BELIEVABLE | 81 | 2.25 | 1.77 |
| Contextual | THE APPROPRIATE AMOUNT | 81 | 3.01 | 2.08 |
The focus of this study was to create a model for collecting meaningful and actionable documentation quality feedback based on Wang and Strong’s (1996) information quality categories and dimensions, using readers’ definitions of documentation quality and the criteria listed previously for getting this kind of feedback.
To accomplish this, I looked for the most important dimension per ICRA information quality category; together, these dimensions were taken as the most important criteria for how readers define documentation quality. The results of the readers’ ratings show that readers expect the documentation they get to be accurate, relevant, easy to understand, and accessible (AREA). Although this might seem self-evident, it provides a strong empirical underpinning for the claim that documentation quality can be defined using a small yet comprehensive set of clear and unambiguous information quality dimensions.
Intrinsic Documentation Quality
The Accurate information quality dimension was the most important dimension in the Intrinsic information quality category (n = 81, mean = 1.80, SD = 1.18). It was defined in the reader questionnaire as “the information in the documentation is correct, reliable, and certified free of error.” For example, if an out-of-date screen capture is used in a procedure, then the information in the documentation is not “accurate.”
The second most important dimension in the Intrinsic information quality category was Believable (n = 81, mean = 2.25, SD = 1.77). It was defined in the reader questionnaire as “the information in the documentation is true, real, and credible.” For example, if one section in a document describes the system in one way, and another section describes it in a different way (even if both descriptions are technically accurate), then the information in the document is not “believable,” because the reader will be confused by the different descriptions and doubt the credibility of both (see Strong, Lee, & Wang, 1997a, 1997b; Pipino, Lee, & Wang, 2002; Eppler, 2006).
The difference between these two dimensions approached, but did not reach, statistical significance (F = 3.6246, p = 0.0587). Were the sample size larger, the difference might have been significant; further research is needed to determine whether this is the case.
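The F statistics reported throughout this section compare mean importance ratings between dimensions. As a minimal sketch of the underlying calculation (using made-up ratings, not the study’s data), a one-way ANOVA F statistic can be computed in pure Python:

```python
def one_way_anova_f(*groups):
    """Compute the one-way ANOVA F statistic for two or more groups of ratings."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    k = len(groups)          # number of groups
    n_total = len(all_values)

    # Between-group sum of squares: how far each group mean is from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of ratings inside each group
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    ms_between = ss_between / (k - 1)       # mean square between, df = k - 1
    ms_within = ss_within / (n_total - k)   # mean square within, df = N - k
    return ms_between / ms_within

# Hypothetical importance ratings (1 = most important) for two dimensions
accurate_ratings = [1, 2, 3]
believable_ratings = [2, 3, 4]
f_stat = one_way_anova_f(accurate_ratings, believable_ratings)
```

Converting the F statistic to a p-value additionally requires the F distribution (for example, `scipy.stats.f.sf`), which is omitted from this sketch.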
Nevertheless, the difference between the Accurate and Believable dimensions is meaningful. Information can be accurate but not believable, or believable but not accurate: That is, readers might trust well-presented information in a document, even though it is inaccurate, and they might dismiss accurate information because they do not trust the source. This is a common complaint, for instance, with health-related information found on the Internet: A lot of information is available, but readers have very limited tools available to judge its accuracy and must rely only on how believable it is (Kim et al., 1999; Eysenbach et al., 2002). However, the accuracy of information is logically more important than its believability: Inaccurate information is useless to readers, regardless of how believable it is.
The differences between the Accurate information quality dimension and the other dimensions in this category were statistically significant.
Contextual Documentation Quality
The Relevant information quality dimension was the most important dimension in the Contextual information quality category (n = 81, mean = 1.96, SD = 1.22). It was defined in the reader questionnaire as “the information in the documentation is applicable and helpful for the task at hand.” For example, if details are included about a procedure that is not appropriate for the reader’s skill level, then the information in the documentation is not “relevant.”
The second most important dimension in the Contextual information quality category was Valuable (n = 81, mean = 2.06, SD = 1.23). It was defined in the reader questionnaire as “the information in the documentation is beneficial and provides advantages from its use.” For example, if a procedure describes how to set up a complicated system but not in the most efficient way, then the information in the documentation is not “valuable” (even if it is relevant).
The difference between these two dimensions is not significant (F = 0.2699, p = 0.6041). Nevertheless, there is still a functional difference between the Relevant and Valuable dimensions. If the information in the document is “applicable and helpful for the task at hand,” it means that it helped readers do what they needed to do or know what they needed to know—no more, and no less. For example, a document can instruct readers how to set up a complicated system, tell them how to manage network clusters, or explain the hardware architecture.
On the other hand, if the information in the document is “beneficial and provides advantages from its use,” then it means it was more than just relevant: It gave the readers something extra. For example, the document helped them set up a complicated system in the most efficient way, told them how to manage network clusters more effectively, or explained the advantages of the hardware architecture.
A document can be “helpful” but not “beneficial” or “advantageous”: For example, the reader used the information to set up the system, but it took three hours when it could have taken two; the reader can manage the network clusters, but it is more complicated than it needs to be; the reader understands the hardware architecture, but does not understand why it is this way.
The third most important dimension in the Contextual information quality category was Complete (n = 81, mean = 2.27, SD = 1.36). It was defined in the reader questionnaire as “the information in the documentation is of sufficient breadth, depth, and scope for the task at hand.” For example, if a procedure is missing a step or details that help readers, then the information in the documentation is not “complete.”
The difference between this and the Relevant dimension is also not significant (F = 2.3320, p = 0.1287). Yet, here too there is a functional difference between the two dimensions.
In Wang and Strong’s (1996) information quality framework, the Complete dimension is in the Contextual information quality category and not in the Intrinsic information quality category. While many information quality studies categorize “completeness” together with “accuracy” (see, for example, the meta-review in Eysenbach et al. (2002)), here the idea of information completeness is purely contextual because it impacts how the information can be used for the “task at hand.” As Lee et al. (2002) explain, completeness is an intrinsic dimension when referring to any missing data, but is a contextual dimension when referring only to missing data actually needed by users.
While incomplete documentation is certainly problematic, the Relevant dimension is a much higher-level information quality dimension than Complete because it measures the ultimate contextual usability of the information in the document. Information in a document can be complete but still irrelevant: if a procedure is not appropriate for the reader’s skill level, it does not matter whether it is complete. Conversely, even incomplete information can still be “applicable and helpful” to a reader to some degree.
Further research is needed to determine which of these three closely rated dimensions (Relevant, Valuable, and Complete) best represents the Contextual information quality category; however, it seems logical that the Relevant dimension should be the most important contextual dimension. Because documentation is never read in a vacuum and is only used in context, the usability of its information depends mainly on its ability to help readers do the “task at hand” (even if not in the most advantageous way), which can be accomplished even if there are issues with how valuable or complete the information is. On the other hand, irrelevant information in a document cannot add any value, and irrelevant information that is complete is still irrelevant.
The differences between the Relevant information quality dimension and the other dimensions in this category were statistically significant.
Representational Documentation Quality
The Easy to Understand information quality dimension was the most important dimension in the Representational information quality category (n = 81, mean = 1.91, SD = 1.15). It was defined in the reader questionnaire as “the information in the documentation is clear, without ambiguity, and easily comprehended.” For example, if the language used is ungrammatical or inappropriate for the intended audience, then the information in the documentation is not “easy to understand.”
The second most important dimension in the Representational information quality category was Concise (n = 80, mean = 2.37, SD = 1.55). It was defined in the reader questionnaire as “the information in the documentation is compactly represented without being overwhelming (that is, it is brief in presentation, yet complete and to the point).” For example, if a conceptual topic is overly detailed, then the information in the documentation is not “concise” (even if it is easy to understand). More information is not necessarily better and can present problems for readers who are trying to apply it and put it into practice (Strong, Lee, & Wang, 1997b).
The difference between these two dimensions is significant (F = 4.5810, p = 0.0339), indicating that readers feel strongly about how easy it is to understand the information in the documentation. They do not want to struggle to understand what is written, and consider the grammar, style, and clarity of the documentation to be important. While the conciseness of the text might be a “nice to have,” ease of understanding is a “must have.”
The differences between the Easy to Understand information quality dimension and the other dimensions in this category were also statistically significant.
Accessibility Documentation Quality
The Accessible information quality dimension was the most important dimension in the Accessibility information quality category (n = 81, mean = 2.20, SD = 1.46). It was defined in the reader questionnaire as “the information in the documentation is available or easily and quickly retrievable.” For example, if the search functionality does not return useful results or the links in the table of contents do not work, then the information in the documentation is not “accessible.”
The second most important dimension in the Accessibility information quality category was Secure (n = 81, mean = 5.19, SD = 2.41). It was defined in the reader questionnaire as “access to the information in the documentation can be restricted, and hence, kept secure.” For example, if readers can make their own changes to the documentation after it has been published, then the information in the documentation is not “secure.”
The difference between these two dimensions is highly significant (F = 91.2060, p < 0.0001), indicating that readers have strong, diametrically opposed opinions about these two dimensions. Readers must be able to easily find and retrieve the information they need from the documentation; however, they do not particularly care if the information is secure. In fact, it seems that readers prefer that access to the information not be restricted, possibly so they can update it and modify it themselves as needed (see Strong, Lee, & Wang, 1997a). Further research into this is recommended.
Practical Applications of the Documentation Quality Feedback Model
This study takes Wang and Strong’s (1996) ICRA information quality categories and dimensions and applies them to documentation quality. By asking readers to rate the different dimensions that make up each category, I was able to find the single most important information quality dimension per category and create a reader-oriented documentation quality definition:
- Intrinsic = Accurate
- Contextual = Relevant
- Representational = Easy to Understand
- Accessibility = Accessible
This empirical, reader-oriented definition of documentation quality can be applied in many practical ways.
Collecting Meaningful and Actionable Documentation Quality Feedback
Because the ICRA information quality categories are distinct, clearly defined, and focused, writers can use the representative AREA information quality dimensions to easily understand what their readers are telling them about the documentation (meaningful feedback) and what they want improved (actionable feedback).
The proposed documentation quality feedback model asks readers only the following four questions:
- Could you find the information you needed in the document?
- Was the information in the document accurate?
- Was the information in the document relevant?
- Was the information in the document easy to understand?
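As a sketch of how responses to these four questions might be tallied, the model could be encoded as follows (the yes/no response format, dimension keys, and helper names here are my own illustration, not part of the study):

```python
from collections import Counter

# The four AREA feedback questions from the proposed model
AREA_QUESTIONS = {
    "accessible": "Could you find the information you needed in the document?",
    "accurate": "Was the information in the document accurate?",
    "relevant": "Was the information in the document relevant?",
    "easy_to_understand": "Was the information in the document easy to understand?",
}

def score_responses(responses):
    """Given a list of {dimension: True/False} answers, return the
    percentage of 'yes' answers per AREA dimension."""
    totals, yeses = Counter(), Counter()
    for response in responses:
        for dimension, answer in response.items():
            totals[dimension] += 1
            if answer:
                yeses[dimension] += 1
    return {d: 100.0 * yeses[d] / totals[d] for d in totals}

# Two hypothetical reader responses
responses = [
    {"accessible": True, "accurate": True, "relevant": False, "easy_to_understand": True},
    {"accessible": False, "accurate": True, "relevant": True, "easy_to_understand": True},
]
scores = score_responses(responses)
```

A low percentage on any one dimension then points directly at the ICRA category that needs attention, which is what makes the feedback actionable.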
These four questions can be applied to all methods of collecting meaningful and actionable documentation quality feedback. As examples, let’s look at a few of Wilson’s (1999) techniques for testing the usability of documentation.
A usability edit of the documentation is defined by Wilson (1999) as a “detailed edit of the instructions,” but it should really be applied to both procedural and conceptual information. Users are asked to read through the document and mark things that are “hard to understand, wordy, inconsistent,” and so on.
If we ask the users doing the usability edit to apply the proposed documentation quality feedback model and focus only on the four most important reader-oriented aspects of documentation quality—the accuracy, relevance, ease of understanding, and accessibility of the information in the documentation—we are much more likely to get the meaningful and actionable feedback we are looking for.
A documentation survey is a short questionnaire sent to readers that asks them about the usability of the documentation. This popular way to collect feedback can be created easily and emailed to a large number of users (for example, as an online survey). However, surveys have drawbacks; in particular, the sample size is often small and biased.
Our proposed documentation quality feedback model using the AREA information quality dimensions is a good fit for the creation of surveys that are as focused and succinct as possible, which is a critical component of survey design (see Table 1).
For an example of how a documentation survey based on this model might look, see APPENDIX B: DOCUMENTATION FEEDBACK SURVEY (EXAMPLE) or go to https://www.surveymonkey.com/r/VJL6QHD.
According to Wilson (1999), feedback about the usability of the documentation can be collected by sitting in on product training sessions and asking the participants to note any problems they come across in the documentation.
Working together with trainers, technical communicators can collect a great deal of meaningful and actionable feedback from real users by asking them to focus only on the four AREA information quality dimensions when they work with the documentation.
Helping Writers Understand What Is Important to Readers When Feedback Is Unavailable
It is important to ensure that writers understand how readers define documentation quality—relying on their “gut instincts” when it is impossible to get direct, meaningful, and actionable feedback is a risky proposition. If writers emphasize dimensions that readers do not, or incorrectly assume that readers put more importance on certain dimensions than others, then the quality of the documentation they create will not match what their readers expect, want, or need.
Using the ICRA information quality categories and their dimensions to compare and contrast how writers and readers define documentation quality, as well as how writers assume readers define it, will go a long way toward increasing reader satisfaction, because it will give writers a sound theoretical basis for focusing on certain dimensions of documentation quality in their writing (for the results of a study that did this, see Strimling, 2018).
Similarly, this proposed model can be used in academic technical communication courses to teach students about reader-oriented documentation quality measures. Instructors can use the ICRA information quality categories and the four AREA dimensions to provide evidence-based materials for teaching students how to write quality documentation that readers will find useful.
Providing Reliable Methods and Metrics for Measuring Documentation Quality
The four AREA dimensions that make up the reader-oriented definition of documentation quality at the heart of this proposed feedback model can be used to classify and sort existing internal or external feedback. This can then be presented to management as clear and reliable metrics about the documentation that will help determine where more emphasis might need to be invested.
For example, if the feedback that is being received indicates that a majority of the issues are about the accuracy and relevance of the documentation, then management can make a clear decision about who in the organization is responsible for addressing these issues and can later compare before-and-after feedback to see if the percentage of these complaints has decreased.
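A before-and-after comparison like the one just described can be sketched as follows (the feedback labels and counts are invented for illustration):

```python
from collections import Counter

def complaint_percentages(classified_feedback):
    """Given feedback items already sorted into AREA dimensions,
    return the share of complaints per dimension (as percentages)."""
    counts = Counter(classified_feedback)
    total = sum(counts.values())
    return {dim: round(100.0 * n / total, 1) for dim, n in counts.items()}

# Hypothetical complaint classifications before and after remediation
before = ["accurate", "accurate", "relevant", "easy_to_understand"]
after = ["accurate", "relevant", "easy_to_understand", "easy_to_understand"]

before_pct = complaint_percentages(before)
after_pct = complaint_percentages(after)
accuracy_improved = after_pct["accurate"] < before_pct["accurate"]
```

In this invented example, accuracy complaints drop from half of all feedback to a quarter, the kind of clear, comparable metric that can be presented to management.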
Similarly, technical communicators (who, by and large, are responsible mainly for the Easy to Understand dimension) can show their managers how good writing can lower complaints about this issue.
Creating a Common Documentation Quality Terminology
The documentation quality definition proposed here can provide writers with unambiguous terminology they can use when discussing, planning, and analyzing documentation needs with SMEs and other writers. The AREA information quality dimensions (and their underlying ICRA categories) cover all aspects of documentation quality and are understood in the same way by all parties. This will ensure that everyone involved understands what readers want and how to get there—which should be the goal of all people involved in creating documentation.
Anecdotally, it seems that SMEs do actually find it much easier to understand documentation needs and plans when focusing on the four AREA dimensions—they know very well what the terms mean, who is responsible for what, and how to go about addressing the issues.
This is only a preliminary study, intended to create a framework for a documentation quality feedback model based on empirically tested and distinct information quality categories and dimensions, and using a reader-oriented definition of documentation quality.
To make this model more robust, it is suggested that this study be replicated with a larger sample size (preferably more than 200 readers) and use a narrower, fully labeled, five-point Likert scale for determining the relative mean importance weights of each dimension. This will test the stability of the Accurate, Relevant, Easy to Understand, and Accessible information quality dimensions as the most important dimensions per ICRA category.
To further increase the robustness of this model, it might also be useful to apply the Kano Model of customer satisfaction to the dimensions. The Kano Model is uniquely suited to measuring reader-oriented documentation quality dimensions because its goal is to demonstrate how different categories of customer requirements influence customer satisfaction in different ways (Verduyn, 2013).
Briefly, the Kano Model is based on the idea that customer satisfaction with a product’s features depends on the level of functionality that is provided (Zacarias, 2015). The model consists of two measurable dimensions:
- Satisfaction, from “Frustrated” to “Delighted”
- Execution (or Functionality), from “Done badly or not at all” to “Done well”
These two dimensions combine to form the following four quadrants (or categories) of feature quality:
- Must-Be: These are features that are expected and must be present. Their presence does not increase satisfaction, but their absence decreases it.
- Performance: These are features that the customer expects to be present, and the better they are executed, the greater the satisfaction.
- Attractive: These are features that the customer does not expect to be present, but when they are, they increase satisfaction.
- Indifferent: These features have no effect on customer satisfaction, whether or not they are expected.
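The four quadrants above can be restated as a tiny decision rule: classify a feature by whether good execution raises satisfaction and whether bad execution lowers it. This sketch simply encodes the quadrant definitions given above (the function and argument names are my own; real Kano studies derive these effects from paired functional/dysfunctional survey questions):

```python
def kano_quadrant(raises_satisfaction_when_done_well, lowers_satisfaction_when_done_badly):
    """Classify a feature into a Kano quadrant from its two satisfaction effects."""
    if raises_satisfaction_when_done_well and lowers_satisfaction_when_done_badly:
        return "Performance"   # satisfaction scales with execution quality
    if lowers_satisfaction_when_done_badly:
        return "Must-Be"       # expected; absence hurts, presence does not delight
    if raises_satisfaction_when_done_well:
        return "Attractive"    # unexpected; presence delights, absence is forgiven
    return "Indifferent"       # no effect on satisfaction either way

quadrant = kano_quadrant(False, True)  # a feature readers expect but are not delighted by
```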
For a more in-depth discussion of the Kano Model and how to apply it, see Verduyn (2013) and Zacarias (2015).
In an exploratory pilot study using the Kano Model (n = 41), all but one of the information quality dimensions were located in the Performance quadrant, with Accurate, Relevant, and Easy to Understand located much closer to the Must-Be quadrant, and Accessible located firmly within the Performance quadrant. Only the Secure dimension (from the Accessibility information quality category) was clearly located in the Indifferent quadrant. The importance ratings of the dimensions were similar to those in this current study.
Another possible avenue of future research on this proposed model is to look at the measurability and inter-rater reliability of each of the AREA information quality dimensions.
Arazy and Kopak (2011) investigated whether a small subset of Wang and Strong’s (1996) information quality dimensions were inherently better indicators of information quality because they could be reliably measured, which they defined as “the degree to which independent assessors are able to agree when rating [information] objects on these various dimensions” (p. 89). They posited that “to draw any conclusion from studies on information quality, it is required that measurement instruments produce high inter-rater reliability” (p. 90), and that “an understanding of which dimensions tend to produce higher agreement than others would have implications for a quality-assessment procedure” (p. 90).
Their study focused on the Accurate, Complete, Objective, and “Representation” dimensions (they combined the Easy to Understand, Consistent, and Concise dimensions from the Representational information quality category, and named it “Representation”). They used this subset, not because they felt that these were the most important dimensions, but because this subset “reasonably represented the different kinds of information quality dimensions that others have viewed as important and that researchers have employed with success” (Arazy, Kopak, and Hadar, 2017, p. 406). They did not include any of the Accessibility information quality category dimensions.
Arazy and Kopak (2011) found that inter-rater agreement was higher for the Contextual and Representational information quality dimensions they tested (Complete and “Representation” respectively) and lower for the Intrinsic information quality dimensions (Accurate and Objective). They suggested that this might be because the dimensions in the former categories have quick heuristics that can be used for rating, while the dimensions in the latter have none (or hard-to-identify ones). In a follow-up study, Arazy, Kopak, and Hadar (2017) looked deeper into the underlying heuristics (namely, the searching, stopping, and decision rules) used for these four dimensions, and found this to be the case.
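Inter-rater agreement of the kind Arazy and Kopak (2011) measured can be quantified with chance-corrected statistics such as Cohen’s kappa, used here as a generic illustration (it is not necessarily the exact measure their studies employed):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters: observed agreement corrected for chance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: proportion of items rated identically
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal label frequencies
    freq1, freq2 = Counter(rater1), Counter(rater2)
    p_expected = sum((freq1[c] / n) * (freq2[c] / n) for c in freq1.keys() & freq2.keys())
    return (p_observed - p_expected) / (1 - p_expected)

# Two raters scoring the same four passages as accurate (1) or not (0)
kappa = cohens_kappa([1, 1, 0, 1], [1, 0, 0, 1])
```

Higher kappa for a dimension would suggest, as Arazy and Kopak argue, that raters share quick heuristics for judging it; values near zero would suggest agreement no better than chance.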
Both Arazy and Kopak (2011) and Arazy, Kopak, and Hadar (2017) state that future studies should expand their research into the other information quality dimensions. Based on this proposed documentation quality feedback model, it is suggested that the Relevant, Easy to Understand, and Accessible dimensions are good candidates for this effort (in addition to the Accurate dimension that was already tested in the previous studies).
Finally, and most importantly, this documentation quality feedback model must be tested in the field. The ICRA information quality categories and the four AREA dimensions can be used in real-life situations (as suggested in the Practical Applications section) to see if they really do collect meaningful and actionable documentation quality feedback, help writers understand what is important to readers, provide reliable methods and metrics for measuring documentation quality, and create a common documentation quality terminology for both technical communicators and SMEs.
Having a robust, empirically based model for collecting meaningful and actionable documentation quality feedback that does all of this will contribute greatly to the field of documentation quality and enable technical communicators to provide high-quality documentation that makes their readers happy.
Albers, M. (2005). The key for effective documentation: Answer the user’s real question. Usability Interface, May, 5–8.
Arazy, O., & Kopak, R. (2011). On the measurability of information quality. Journal of the American Society for Information Science and Technology, 62(1), 89–99.
Arazy, O., Kopak, R., & Hadar, I. (2017). Heuristic principles and differential judgments in the assessment of information quality. Journal of the Association for Information Systems, 18(5), 403–432.
Barnum, C. (2002). Usability testing and research. New York, NY: Longman.
Barnum, C., & Carliner, S. (1993). Introduction. In Barnum, C. & Carliner, S. (Eds.), Techniques for technical communicators (pp. 1–11). Needham Heights, MA: Allyn and Bacon.
Bartlett, P. (2012). Your content, only better. Acrolinx white paper. Acrolinx GmbH.
Betz, M. (1996). Delivering customer satisfaction: our experiences with responding to customer feedback. STC Proceedings from the 1996 STC Summit.
Bevis, K., & Hemke, K. (2008). Getting real-world feedback on your information: A case study. STC Proceedings from the 2008 STC Summit.
Brown, D. (1995). Test the usability of research. Technical Communication, 42, 12–14.
Brusaw, C., Alred, G., & Oliu, W. (1993). Handbook of technical writing (4th ed.). New York, NY: St. Martin’s Press.
Bush, D. (2001). Editing is magic. Intercom, June, 39 & 43.
Carey, M., McFadden Lanyi, M., Longo, D., Radzinski, E., Rouiller, S., & Wilde, E. (2014). Developing quality technical information: A handbook for writers and editors (3rd ed.). Upper Saddle River, NJ: IBM Press (Pearson plc).
Carliner, S. (1997). Demonstrating effectiveness and value: A process for evaluating technical communication products and services. Technical Communication, 44, 252–265.
Chisholm, M. (2012). Data quality is not fitness for use. Retrieved from http://www.information-management.com/news/data-quality-is-not-fitness-for-use-10023022-1.html
Cover, M., Cooke, D., & Hunt, M. (1995). Estimating the cost of high-quality documentation. Technical Communication, 42, 76–83.
Crosby, P. (1979). Quality is free. New York, NY: McGraw-Hill.
Deming, W. (1986). Out of the crisis. Cambridge, MA: MIT Press.
Dillman, D., Smyth, J., & Christian, L. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). Hoboken, NJ: Wiley.
Eppler, M. (2006). Managing information quality: Increasing the value of information in knowledge-intensive products and processes (2nd ed.). Berlin/Heidelberg, Germany: Springer.
Eysenbach, G., Powell, J., Kuss, O., & Sa, E. (2002). Empirical studies for assessing the quality of health information for consumers on the World Wide Web: A systematic review. JAMA, 287(20), 2691–2700.
Filippo, E. (2007). Merging usability practices with document design and development. Intercom, December, 8–12.
Gregory, J. (2004). Writing for the Web versus writing for print: Are they really so different? Technical Communication, 51, 276–285.
Griffin, A., & Hauser, J. (1993). The voice of the customer. Marketing Science, 12(1), 1–27.
Hackos, J. (2002). Content Management for Dynamic Web Delivery. New York, NY: John Wiley & Sons, Inc.
Hackos, J., Winstead, J., Gill, S., & Hartmann, M. (1995). Finding out what users need and giving it to them: a case-study at Federal Express. In Measuring value added, Redish, J. & Ramey, J. (Eds.). Technical Communication, 42, 322–327.
Haramundanis, K. (2001). Commentary on “Little machines: understanding users understanding interfaces.” ACM Journal of Computer Documentation, 25(4), 128–131.
Harker, J., & LaMalfa, K. (2009). 11 easy ways to improve your survey response rates. Allegiance white paper. Allegiance Inc.
Hart, G. (1997). Accentuate the negative: obtaining effective reviews through focused questions. Technical Communication, 44, 52–57.
HCi (2002). Simple metrics for documentation. Retrieved from www.hci.com.au/documentation-metrics. HCi Professional Services Pty Ltd.
InfoPoll (1998). How to write a good survey. Retrieved from http://www.accesscable.net/~infopoll/tips.htm
ISO 9000 (2015). Quality management systems – Fundamentals and vocabulary. Geneva, Switzerland: International Organization for Standardization. Retrieved from https://www.iso.org/obp/ui/#iso:std:iso:9000:ed-4:v1:en
ISO/IEC 26514 (2008). Systems and software engineering – Requirements for designers and developers of user documentation. Geneva, Switzerland: International Organization for Standardization.
Juran, J. (1988). Juran on planning for quality. New York, NY: The Free Press.
Kahn, B., Strong, D., & Wang, R. (2002). Information quality benchmarks: Product and service performance. Communications of the ACM, 45(4), 184–192.
Kim, P., Eng, T., Deering, M., & Maxfield, A. (1999). Published criteria for evaluating health related Web sites: Review. BMJ, 318, 647–649.
Kumar, M. (2009). Difference between data and information. Retrieved from http://www.differencebetween.net/language/difference-between-data-and-information/
Lacki, T. (2010). Capitalizing on customer feedback. Allegiance/ Peppers & Rogers Group white paper. Peppers & Rogers Group Inc.
LaMalfa, K., & Caruso, B. (2009). The top 10 voice of the customer (VOC) best practices. Endeavor Management/Allegiance white paper. Allegiance Inc.
Lee, Y., Strong, D., Kahn, B., & Wang, R. (2002). AIMQ: A methodology for information quality assessment. Information & Management, 40, 133–146.
Manning, S. (2008). Using content management to improve content quality. Presentation at the 2008 STC Summit.
Mead, J. (1998). Measuring the value added by technical documentation: A review of research and practice. Technical Communication, 45, 353–379.
O’Keefe, S. (2010). Calculating document quality (QUACK). Retrieved from www.scriptorium.com/2010/05/calculating-document-quality-quack
Parameswaran, J. (2005). Managing customer feedback on user documentation. Usability Interface, 11(4), 19–21.
Pipino, L., Lee, Y., & Wang, R. (2002). Data quality assessment. Communications of the ACM, 45(4), 211–218.
Pirsig, R. (1974). Zen and the art of motorcycle maintenance. New York, NY: William Morrow & Co.
Quesenbery, W. (2001). On beyond help: Meeting user needs for useful online information. Technical Communication, 48, 182–188.
Redish, G. (1993). Understanding readers. In C. Barnum & S. Carliner (Eds.), Techniques for technical communicators (pp. 14–41). Needham Heights, MA: Allyn and Bacon.
Redish, G. (2008). Personal communication.
Reeves, C., & Bednar, D. (1994). Defining quality: Alternatives and implications. The Academy of Management Review, 19(3), 419–445.
Revilla, M., Saris, W., & Krosnick, J. (2014). Choosing the number of categories in agree-disagree scales. Sociological Methods & Research, 43(1), 73–97.
Robinson, P., & Etter, R. (2000). Writing and designing manuals (3rd ed.). Boca Raton, FL: CRC Press.
Smart, K., Seawright, K., & DeTienne, K. (1995). Defining quality in technical communication: A holistic approach. Technical Communication, 42, 474–481.
Spyridakis, J. (2000). Guidelines for authoring comprehensible Web pages and evaluating their success. Technical Communication, 47, 359–382.
StatPac Inc. (2014). Qualities of a good question. Retrieved from https://statpac.com/surveys/question-qualities.htm
Strimling, Y. (2018). So you think you know what your readers want? Intercom, 65(6), 4–9.
Strong, D., Lee, Y., & Wang, R. (1997a). Data quality in context. Communications of the ACM, 40(5), 103–110.
Strong, D., Lee, Y., & Wang, R. (1997b). 10 potholes in the road to information quality. Computer, 30(8), 38–46.
Tarutz, J. (1992). Technical editing: The practical guide for editors and writers. Reading, MA: Addison-Wesley.
TechScribe Documentation Consultancy (2004). What is good documentation? Retrieved from www.techscribe.co.uk/techw/good-documentation.htm
Telcordia Technologies Generic Requirements Document – GR-454-CORE (1997). Retrieved from http://telecom-info.telcordia.com/site-cgi/ido/docs.cgi?ID=SEARCH&DOCUMENT=GR-454&
Verduyn, D. (2013). Discovering the Kano Model. Retrieved from http://www.kanomodel.com/
Wang, R. (1998). A product perspective on Total Data Quality Management. Communications of the ACM, 41(2), 58–65.
Wang, R., & Strong, D. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5–34.
Weinstein, N., & Sandman, P. (1993). Some criteria for evaluating risk messages. Risk Analysis, 13(1), 103–114.
Wiley, A. (2006). Customer satisfaction measurement. Intercom, July/August, 53–54.
Wilson, C. (1999). Documentation usability techniques. Retrieved from https://wiki.library.oregonstate.edu/confluence/download/attachments/5308515/documentation%20usability%20techniques.doc?version=1&modificationDate=1242330566000&api=v2
Zacarias, D. (2015). The complete guide to the Kano Model: Prioritizing customer satisfaction and delight. Retrieved from https://foldingburritos.com/kano-model/
About the Author
Yoel Strimling has been an editor for 20 years and currently works as the Senior Technical Editor/Documentation Quality SME for CEVA Inc. in Herzelia Pituach, Israel. Over the course of his career, he has successfully improved the content, writing style, and look and feel of his employers’ most important and most used customer-facing documentation by researching and applying the principles of documentation quality and survey design. Yoel is a member of tekom Israel, a Senior Member of STC, and the editor of Corrigo, the official publication of the STC Technical Editing SIG. He can be contacted at firstname.lastname@example.org.
Manuscript received 2 March 2017; revised 24 October 2017; accepted 14 November 2018.
Appendix A: Reader Questionnaire Sources
The readers who participated in this study were from the following companies:
- Alcatel-Lucent International SAS, France
- Bell Mobility Canada
- Eastman Kodak – Network Administration, Belgium/USA
- Kodak GCG Canada
- Kodak Gesellschaft mit beschraenkter Haftung, Austria
- McKesson Corp., USA
- Oracle, Israel
- Oracle, UK
- Orange France
- Orange Niger
- Orange Polska Spolka Akcyjna
- Pressco Technology, USA
- Rogers Canada
- SAP AG
- Telstra Internet, Australia
- T-Mobile, USA
- Verizon Wireless, USA
Appendix B: Documentation Feedback Survey (Example)
To improve the quality of our documentation, and to better understand your needs, we have created this quick survey to gather your feedback.
Please take a few moments to answer these questions about our documentation. If you do not want to give us feedback now, just close this window.
This survey is anonymous; however, if you do not mind us contacting you for more details, please give us your email address in Question 5.
Your feedback is very important to us, and we appreciate your contribution to the improvement of our documentation.
1. Which document(s) are you using?
2. Can you find the information you need in the documentation?
<comment box> If there are problems with this, please provide more details here.
3. Is the information in the documentation:
a. Accurate?
<checkboxes> Yes|Partially|No|N/A (I cannot find the information I am looking for)
<comment box> If there are problems with this, please provide more details here.
b. Easy to understand?
<checkboxes> Yes|Partially|No|N/A (I cannot find the information I am looking for)
<comment box> If there are problems with this, please provide more details here.
c. Relevant?
<checkboxes> Yes|Partially|No|N/A (I cannot find the information I am looking for)
<comment box> If there are problems with this, please provide more details here.
4. If you have additional comments or suggestions about our documentation, please write them here.
5. May we contact you if we have questions?
<comment box> If Yes, please provide your email address here.