61.3, August 2014

What Measures of Productivity and Effectiveness Do Technical Communication Managers Track and Report?

Saul Carliner, Adnan Qayyum, and Juan Carlos Sanchez-Lozano

Abstract

Purpose: Previous literature focuses on what practitioners should be doing to demonstrate the value of technical communication, rather than what they actually do. This study addresses the gap by asking managers about the extent to which they track two measures of value—productivity and effectiveness—as well as the expectations of sponsors for receiving reports on these issues.

Method: A survey of corporate communication, training, and technical communication groups was conducted. Participants were recruited through local chapters of the STC and the American Medical Writers Association. Ninety technical communication managers responded.

Results: The evidence suggests that activities for tracking productivity by technical communication managers are limited. Technical communication groups rarely solicit feedback and perceptions on individual communication products and employ usability testing on a limited basis. Technical communicators rarely track return on investment (ROI). Technical communication managers feel limited pressure to report productivity and effectiveness. The most significant criterion against which the productivity and effectiveness of technical communication groups is assessed is word of mouth. The evidence only partially supports the entering belief that customer surveys play an important role in assessing general impressions of technical communication products.

Conclusions: These results are consistent with earlier studies and suggest that despite a discussion about means of assessing the productivity and effectiveness of technical communicators that has spanned over a quarter of a century, none of the methods of assessment has reached wide use. The study also suggests that perceptions are the most significant factor in assessing the value of technical communication products and services, and should be given more focus in future research and writing on this topic.

Keywords: assessment, effectiveness, management, metrics, productivity

Practitioner’s Takeaway:

  • When assessing how others in your organization perceive the value of the technical communication services provided by your group, primarily focus on the word-of-mouth flowing through the organization and the quality of the service provided.
  • When looking for quantitative measures of productivity and effectiveness, recognize that none of the methods proposed have achieved wide use and most require extensive resources that might be better used for other purposes.

Background

One of the recurring conversations among technical communicators focuses on demonstrating the value of our services to the people who sponsor them. (Sponsors are the people who authorize funding for technical communication services—and can stop payment on approved projects.) The issue received wide recognition with the publication of a special issue of Technical Communication on “Adding Value as a Professional Technical Communicator” (Redish, 1995), which described an approach for determining the value added by technical communicators based on the goals of a given project and reported on several cases that adapted the approach (such as Blackwell, 1995; Daniel, 1995; Spencer, 1995). Later publications have continued to report cases that replicate this methodology (such as Downing, 2007; Fisher, 1999; Henry, 1998).

The issue has continued to receive attention in both the scholarly and professional literature. For example, at the beginning of the recession of 2008–2009, Intercom, the magazine of STC, published a special issue with advice to individual technical communicators on how to demonstrate their value to their employers and clients. In an effort to actively promote the profession, STC explains “How Technical Writers Add Value to Your Team” on its website (STC, n.d.).

But this body of literature primarily suggests what technical communicators could do to demonstrate their value. It does not assess what technical communicators actually are doing, what they have time to do, or what their internal and external sponsors expect of them.

This study intends to address this gap by asking managers about the extent to which they track two key characteristics of value—the productivity and effectiveness of their staffs—as well as the expectations of their sponsors for receiving reports on these issues. Specifically, this study captured descriptive statistics about behaviors suggested by a related qualitative study (Carliner, in preparation), which we call entering beliefs (a term we devised to represent the conclusions of the qualitative study):

  • Activities for tracking productivity by technical communication managers are limited.
  • Technical communication groups rarely solicit feedback and perceptions on individual communication products.
  • Technical communication groups employ usability testing on a limited basis.
  • Customer surveys play an important role in assessing general impressions of technical communication products.
  • Technical communicators rarely track return on investment (ROI).
  • Technical communication managers feel limited pressure to report productivity and effectiveness.
  • The most significant criterion against which the productivity and effectiveness of technical communication groups is assessed is word of mouth.

The following sections describe this study. The next section situates the study within the literature on demonstrating the value of technical communication products and services. Subsequent sections describe the methodology for conducting the study, present the results, and suggest conclusions that researchers and practicing professionals can draw from the data.

Literature Review

Technical communicators have tried to demonstrate the value of their services throughout the history of the profession. As far back as 1859, the U.S. Congress heard a report that, as a result of the publication of a guide for lighthouse operators, no preventable maritime disaster occurred in the previous year.

While it is not fair to give the written instructions the entire credit for the improvement [technical advancements were introduced]…[These] comments regarding how the lights were kept is positive testimony to the value of the documentation. (Loges, 1998, p. 452)

With time, the conversation on demonstrating value in the literature has grown more sophisticated and specifically addresses three general issues: defining value, prescribing ways to measure value, and finding out how technical communicators actually track value on an ongoing basis. The following sections provide brief summaries of the literature on each of these issues and place this study within the context of this earlier work.

Defining Value

Despite the 1859 mention, most of the conversation about demonstrating the value of technical communication has occurred in the past 30 years. One of the first challenges in this conversation is defining the term value to ensure that all parties have a common understanding of the concept and to operationalize it for further study. Defining value is no easy task because concepts of the term are often personal and situational (Carliner, 2003). A quick-and-dirty help system that permits a small start-up to go to market with its software would easily satisfy its sponsors but would probably be insufficient to support a new release of Microsoft Word, which has extensive usability targets and editorial guidelines to meet before it is ready for release. More fundamentally, the “value” provided by both of these systems would be impossible to compare because neither definition provides a means for measuring the extent to which technical communicators have provided value. Redish (1995) addressed both concerns by defining value in financial terms:

Value means generating greater return on investment than the cost of the initial investment. Return on investment can mean bringing in more money (or increasing users’ satisfaction), or it can mean reducing costs, such as the cost of supporting customers. (p. 26)

Carliner (1998) broadens this definition of value to encompass both financial and non-financial elements. He defines “bringing in …money” as generating revenue, noting that pre-sales content, like marketing brochures, advertising, and white papers, primarily tackle such opportunities. He defines “reducing costs” as containing expenses, noting that, if actual sales increase, total support budgets will increase, but effective documentation might slow the rate of growth. He notes that post-sales and internal content, such as user assistance, references, and project documentation, primarily have responsibility for containing expenses. He adds a third category: complying with regulations. He notes that much content is provided because governments, industry associations, or corporate management require it. Organizations generally do not expect a financial return on such content. Rather, they approach such documentation as an insurance policy against fines (such as those imposed when organizations fail to provide documentation on occupational safety) or failure to qualify for certain organization-wide certifications, like ISO 9000.
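
To make these definitions concrete, here is a minimal worked sketch in Python. The documentation cost, call volume, and cost per call are hypothetical figures chosen for illustration; they do not come from this study or from the sources cited above.

```python
# Hypothetical return-on-investment sketch for a post-sales documentation project.
# All figures are invented for illustration; none come from the study.

documentation_cost = 40_000        # investment in designing and developing the content
support_calls_avoided = 2_500      # estimated calls deflected by the documentation
cost_per_support_call = 20         # average cost of handling one support call

# "Containing expenses": the benefit is expressed as costs the organization avoided.
estimated_savings = support_calls_avoided * cost_per_support_call   # 50,000

# Return on investment in Redish's (1995) sense: benefit relative to the investment.
roi = (estimated_savings - documentation_cost) / documentation_cost  # 0.25, or 25%

print(f"Estimated savings: ${estimated_savings:,}")
print(f"Return on investment: {roi:.0%}")
```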

These definitions are consistent with definitions of value used in marketing communications, which defines value in terms of the sales generated by marketing content (Wright-Isak, Faber, & Horner, 1997), and in training and development, which defines value in terms of the return on investment (Bassi & McMurrer, 2007; Phillips, 2003).

Prescribing Ways to Measure Value

In addition to defining value, the literature presents specific methods for demonstrating the value of technical communication products and services. One set of methods focuses on demonstrating the value added by technical communication products. Because demonstrating value added requires extensive resources and can only be completed months or years after publication, technical communicators also seek proxy measures: alternate measures that suggest that a communication product is likely to be developed productively or perform effectively. The following sections describe both of these prescriptive ways of demonstrating value, and comment on the challenges of collecting these types of data.

Demonstrating “Value-Added.” Following the publication of a special issue of Technical Communication on demonstrating the value added by technical communication products and services in 1995, an implicit consensus emerged that “value” referred to the value added, and could be demonstrated by calculating some tangible, financial benefit that resulted from publishing the technical content.

To show readers how, the special issue included a series of case studies, each showing how to determine the value added by technical communication products in a particular instance, such as user support and organizational communication. But as these and more recent case studies (such as Blackwell, 1995; Daniel, 1995; Fisher, 1999; Henry, 1998; Spencer, 1995) suggest, because organizations develop individual communication products to achieve a unique set of objectives, quantifying the value added requires a unique methodology in each situation, one tailored to the specific value proposition of the communication product. As a result, although a general approach exists, no specific, standard methodology for calculating the value added really exists. When performed, these calculations of value require a significant data collection effort and a similarly complex calculation.

Even when organizations invest this effort, the results are only approximations, as accounting systems that track revenues and expenses can only track transactions that actually occurred. When technical communication products contain expenses, the costs that are saved were never incurred. So the accounting systems have nothing to track. The best that technical communicators can do in such situations is show trends in spending before and after publication of the content to suggest that the cost savings have, indeed, occurred (Carliner, 1998).
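
As a rough sketch of the before-and-after trend comparison described above, the fragment below contrasts average monthly support spending in the six months before and after publication. The figures are hypothetical, and, as the next paragraph notes, any difference they suggest could have explanations other than the documentation.

```python
# Hypothetical before/after comparison of monthly support spending.
# Figures invented for illustration; no accounting system records the "saving" directly.

before = [52_000, 54_000, 53_500, 55_000, 56_000, 57_000]  # six months before publication
after = [51_000, 49_500, 48_000, 47_500, 46_000, 45_500]   # six months after publication

avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)

# The trend only suggests a saving; it cannot prove the documentation caused it.
suggested_monthly_saving = avg_before - avg_after

print(f"Average monthly support spending before publication: ${avg_before:,.0f}")
print(f"Average monthly support spending after publication: ${avg_after:,.0f}")
print(f"Suggested monthly saving: ${suggested_monthly_saving:,.0f}")
```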

Furthermore, organizations often cannot unambiguously attribute the savings to the publication of the technical content. For the oft-cited benefit of a reduction in calls to a help line, Spilka (2000) notes that several alternative explanations could exist, such as end users refusing to call because of previous bad experiences with the help desk or finding a helpful co-worker to replace both the help line and the manual (Kay, 2007). Another problem with demonstrating the value added by individual technical communication products and services is that the data can only be collected long after publication, sometimes as long as 6 months to 2 years.

Collecting Proxy Measures of Value. Because gathering evidence of value added consumes time, and sponsors could question the results (Spilka, 2000), technical communicators have sought proxy measures of value: alternate measures that suggest that a communication product is likely to be developed productively or perform effectively. Proxy measures are generally easier to obtain than value added and can be collected during the development process or shortly thereafter. Most of the proxy measures sought have emerged from the quality movement (Smart, Seawright, & de Tienne, 1996), which identifies both product and process measures and attempts to measure both.

Product Measures focus on the effectiveness of the resulting technical communication product. Some organizations attempt to collect effectiveness measures during development of the technical content. Some are critical assessments, such as assessments from substantive and copy edits (Rook, 1993; Rude & Eaton, 2010), technical reviews, and heuristic reviews of the usability of content (Barnum, 2011). Each has its limitations; as Van Buren and Buehler (2000) note, different levels of editorial reviews yield different types of feedback and assessments. Similarly, Lentz and De Jong (2009) and De Jong and Schellens (2000) raise concerns about the extent of actual usability problems identified by heuristic reviews. Others have attempted to turn these critical assessments into quantitative measures, such as counting comments on documents. Although the schemes continue (Swanwick & Leckenby, 2010), most technical communicators reject them because they tend to equate fewer comments with fewer defects in the content. In reality, the lack of comments might be more reflective of a reviewer who didn’t read the draft. More costly pre-publication usability testing (De Jong & Schellens, 2000) provides empirical insights into the likely effectiveness of technical content, often under conditions that are, at the least, somewhat reflective of the actual context of use.

Carliner (1997) proposed an adaptation of Kirkpatrick’s (1994) four-level model of evaluation for training for technical communication products. He defines effectiveness as a multi-layered construct, which encompasses both users’ and sponsors’ perceptions of communication products and the processes that created them, as well as measures that assess users’ actual abilities to perform the tasks described in the communication products and sponsors’ actual satisfaction with the communication product. Level 3 of this framework, client results, actually incorporates the demonstration of value added that Redish (1995) proposed.

Process Measures: Because of the limitations of pre-publication data on effectiveness and the challenges of collecting data post-publication, other organizations have sought to focus efforts on demonstrating the productivity and effectiveness of the processes that produce the communication products. Such efforts view the design and development of technical content from a systemic perspective (Hackos, 2007, 1994; Hargis, Carey, Hernandez, Hughes, Longo, Rouiller, & Wilde, 2004), noting that asking certain questions and performing certain work earlier in the process can minimize costly rework later in the process. Such efforts emphasize the importance of a well-defined process that can produce consistently high results on each project (Amidon & Blythe, 2008) and track adherence to the process, including the completion of each milestone in that process and adherence to the schedule and budget proposed for the project.

To ensure that the process can produce consistent results with similar amounts of labor, such approaches also emphasize tracking the quantity of work that technical communicators can produce (Swanwick & Leckenby, 2010). This measure of the amount of work that technical communicators produce in a given period of time defines the concept of productivity. To calculate productivity, managers must determine an average number of pages, screens, or similar units that technical communicators can produce in a given amount of time—no easy task given that each project has its own level of complexity of content and team, and that different technical communicators have different strengths (Swanwick & Leckenby, 2010). Editors, in turn, have their own sets of productivity and effectiveness measures (Eaton, Brewer, Portewig, & Davidson, 2008; Swanwick & Leckenby, 2010).

The numbers resulting from these various schemes to track the effectiveness and productivity of technical communication work are called metrics. Many technical communicators use metrics to track their own work (Hamilton, 2009) and compare their rates with others, a process called benchmarking (Hackos, 1994). Benchmarking answers questions like these: If a technical communicator can produce one finished page in 6 hours, is that fast? Slow? Average? If a user’s guide has a general satisfaction rating of 3.95, how does that compare with other user’s guides? In the absence of data clearly demonstrating a return on investment, the stronger a group’s metrics appear in comparison with its own history and the metrics of other groups in the industry, the stronger the case these proxy measures give a manager that his or her group is providing value to the sponsor.
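
As an illustration of the arithmetic such metrics involve, the sketch below computes a group's hours of effort per finished page and compares the result with an assumed benchmark. The project figures are hypothetical, and the 6-hour benchmark simply echoes the question posed above; neither comes from published benchmarking data.

```python
# Hypothetical productivity metric: hours of effort per finished page,
# compared with an assumed benchmark. All values are invented for illustration.

projects = [
    {"name": "User's guide", "finished_pages": 120, "hours_of_effort": 700},
    {"name": "Online help", "finished_pages": 200, "hours_of_effort": 1_150},
    {"name": "Service guide", "finished_pages": 60, "hours_of_effort": 420},
]

total_pages = sum(p["finished_pages"] for p in projects)
total_hours = sum(p["hours_of_effort"] for p in projects)

hours_per_page = total_hours / total_pages   # the group's overall productivity rate
assumed_benchmark = 6.0                      # e.g., "one finished page in 6 hours"

print(f"Group rate: {hours_per_page:.1f} hours per finished page")
if hours_per_page < assumed_benchmark:
    print("Faster than the assumed benchmark")
else:
    print("Slower than (or equal to) the assumed benchmark")
```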

Challenges of Collecting These Measures: Most of the methods described in this sub-section are prescriptive—that is, they suggest what technical communicators should do. Most of these methods have not been validated in practice. Indeed, other than a few isolated case studies, no empirical study has explored the extent to which technical communicators use these methods.

What has been assessed is the appropriateness, feasibility, cost effectiveness, and persuasiveness of several commonly proposed means of measuring the value added by technical communication products and services. As part of the research underlying the special issue of Technical Communication on the value added by technical communication products and services, Ramey (1995) surveyed technical communication managers about six commonly proposed measures and found that managers felt that measures focused on perceptions tended to be easier and more cost effective to collect and reasonably persuasive to their managers. In contrast, performance measures—such as the ability of users to perform tasks on their own and reductions in support costs—were generally perceived as more difficult to collect, although probably more persuasive to management than perception measures.

Finding Out How Technical Communicators Actually Track Value on an Ongoing Basis

So what are technical communicators actually doing to track the value of their products and services? Studies conducted infrequently over the past 25 years provide insights into actual practices by typical technical communication groups.

One of the earliest was a 1990 interview-based study by Barr and Rosenbaum (reprinted 2003), which explored the ways that technical communication managers tracked and reported productivity. They found that most managers used ad-hoc means of doing so, and only on a limited basis.

In an effort to get a sense of the actual workloads of technical communication managers, Carliner (2004) surveyed managers of larger technical communication groups (ones with 20 or more workers) about the work of their departments and the ways that managers tracked it. Like Barr and Rosenbaum, he found that most managers used ad hoc means of tracking both productivity and effectiveness, and did so only on a limited basis, often for less than 10% of projects.

Situating this Study

As suggested by the literature, this study defines the value added by technical communication in terms of two particular characteristics: productivity, the amount of technical content produced with the resources invested, and effectiveness, the extent to which technical communication products achieve the purposes for which they were developed. Productivity and effectiveness represent different measures of value and are not always compatible. Ensuring the effectiveness of content often involves efforts that reduce productivity and vice versa. Or as one author observed, producing “less” text often requires more work.

This study addresses the gaps in the literature on the methods used by technical communication managers to assess productivity and effectiveness, which have not been studied since 2004 and have never been explored across the entire population of technical communication managers. Furthermore, it provides updated insights into the types of measures that technical communication managers report to their managers—and what managers feel is persuasive evidence of productivity and effectiveness.

Methodology

This section describes the methodology followed to conduct the research. It first describes the choice of a research framework, then describes the selection of participants. Next we describe the survey and the processes for collecting and analyzing data. We close the section by describing methods to ensure the reliability and validity of the data.

Choice of a Research Framework

Given research questions that focused on determining the extent to which technical communication managers follow particular practices to track and report the productivity and effectiveness of their staffs, we chose a quantitative methodology because quantitative research methodologies are designed to gather data from large numbers of participants (Creswell, 2008).

We specifically sought a quantitative methodology that would allow us to efficiently gather information from a potentially large population (as we assumed the population of technical communication managers to be); a survey seemed most appropriate (Creswell, 2008).

Participants

Because managers have the primary responsibility for generating work for a group and bear primary accountability for completing the work (Hackos, 1994, 2007), we specifically sought managers of technical communication groups as the participants for this study. Finding participants who met this characteristic posed two challenges to us. The first was clarifying what we meant by the term manager. Because we were concerned with the ways that managers report the productivity and effectiveness of their staffs, they needed to have responsibility for establishing performance plans for their staffs and evaluating performance against those plans. Such tasks are generally considered to be core responsibilities of managers with personnel responsibility (Swanson & Holton, 2009).

The second challenge involved recruiting participants. Because funding was not available to rent mailing lists with the names of known managers of technical communication departments, we leveraged the power of the Web to recruit a convenience sample that met the characteristics we targeted.

We contacted professional organizations that serve, and therefore might be able to reach, the likely participants: American Medical Writers’ Association (notices sent to the organization and the presidents and newsletter editors of individual chapters), International Association of Business Communicators (notices sent to the presidents, newsletter editors, and Webmasters of local chapters), and Society for Technical Communication (notices sent to the presidents and newsletter editors of individual chapters, as well as the managers and newsletter editors of the Special Interest Group on Management).

We asked the people we contacted to publish the Call for Participants in their newsletters and on their Web sites and, if they send regular e-mail messages to their members, to mention the Call for Participants. The Call for Participants included a link to a Web site where prospective participants could learn more about the study. That site also included a link to the survey.

Ninety (90) technical communication managers completed the survey.

To ensure that all participants were informed of the purpose of the survey and the use of the results before participating, they first read and provided informed consent to participate. Because we did not provide participants with an opportunity to provide their names, all participants were anonymous. All procedures were reviewed and approved by the university Research Ethics Committee (similar to an Institutional Review Board (IRB)).

About the Survey

The survey had several parts. The first collected demographic data, including the size of the group managed; the overall size and scope of the employer served (local, regional, national, international); the industry segment in which the employer operates; and the management experience and responsibilities of the participant. This would not only allow us to characterize the participants but also to compare responses across groups.

The next section explored ways managers track and report productivity, including methods for estimating, tracking, and reporting productivity by project, budget, and overall operations.

Subsequent sections explored ways that managers track and report effectiveness. Questions addressed ways that managers track satisfaction with the communication products, the extent to which readers acted on the message in the communication products, and the impact of those actions.

The last section explored ways that managers report effectiveness as well as their perceptions about the importance of doing so.

Procedure for Collecting and Analyzing Data

In addition to using the Web as a recruiting tool, we also used it to conduct the survey. This afforded many advantages (Evans & Mathur, 2005), including (a) recruiting participants and conducting the survey in a short period of time, (b) offering convenience to participants, who would not have to keep track of a paper survey and make an effort to return it by surface mail, and (c) using the data as entered by participants in the analysis, thus eliminating errors resulting from re-entering the data. We used a Web-based survey instrument, Hosted Survey, which let us administer the survey over the Web and collect the data in formats that could be read and used by reporting and analysis software, such as SPSS.

As part of the process of developing the survey, we validated it through two pilots. The first pilot primarily focused on the usability and clarity of the survey and related instructions. We provided participants in the pilot with a link to the draft survey and asked them to complete it. Among the issues that arose in the pilot were questions about who is a manager. We learned at this time that some technical communicators have the job title of manager but did not have responsibility for managing personnel, such as preparing job descriptions, hiring permanent employees, preparing performance plans, conducting performance reviews, and setting salaries and bonuses. Other workers had these responsibilities but used the title supervisor rather than manager. So we clarified our recruiting materials and, just in case prospective participants did not catch the clarifications in the recruiting statements, added questions at the beginning of the survey about specific management responsibilities. Those participants who did not have responsibility for establishing performance plans and evaluating performance were directed out of the survey.

After validating the survey and related instructions, we began recruiting participants and tracking the progress of the study. To ensure a high response rate, we sent follow-up messages to our contacts in the local chapters of professional associations and, for those that have them, special interest groups. The recruiting and surveying process took eight weeks.

After the data was collected, it was downloaded in an Excel format for use with SPSS. We compiled results for each part of the survey.

To see whether certain characteristics of particular technical communication groups made them more or less likely to adopt a given practice, we ran cross-tabulations. Given the small size of the population and the even smaller sizes of these sub-groups, the resulting insights were more suggestive than generalizable.
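
For readers unfamiliar with the technique, the minimal sketch below runs a cross-tabulation on a handful of invented records. The column names, categories, and counts are illustrative assumptions rather than the study's actual variables or data, and the study itself used SPSS rather than the pandas library shown here.

```python
# Minimal cross-tabulation sketch on invented survey records.
import pandas as pd

responses = pd.DataFrame({
    "group_size": ["1-10", "1-10", "11-25", "11-25", "26-50", "26-50"],
    "reports_productivity": ["No", "No", "Yes", "No", "Yes", "Yes"],
})

# Counts of who reports productivity, broken out by the size of the group managed.
table = pd.crosstab(responses["group_size"], responses["reports_productivity"])
print(table)
```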

Results

This section reports the results of the study. First, it reports the demographics of the participants. Next, it reports how technical communication managers assess productivity, then it reports how they assess effectiveness. Last, this section reports how the managers report productivity and effectiveness metrics to their managers and staffs.

Demographics of the Participants

The first set of questions was intended to build a profile of the managers participating in the study. Ultimately, we were hoping to find managers representing a diverse range of industries and a diverse scope of operations.


In terms of industry, the largest number of participants worked in high tech and telecommunications industries (39 participants or 43%). This finding is consistent with other surveys of technical communicators, which suggest that high tech and telecommunications are the key industries employing technical communicators (STC). Of the rest of the participants, 14 (16%) worked in the manufacturing industry, and 9 (10%) worked in the financial services (including insurance) industries.

The rest were scattered among several industries, including 1 (1%) in the education industry, 3 (3%) in the energy industry, 3 (3%) in government, 2 (2%) in professional services, 1 (1%) in the real estate industry, 1 (1%) in the retail industry, 3 (3%) in the transportation industry, and 11 (12%) in other industries. Figure 1 shows where managers participating in this survey worked.

In terms of the size of the department managed, the majority (62%) managed groups of 10 or fewer workers. One conclusion is that technical communication groups tend to be somewhat small. Table 1 shows the size of groups managed.

In terms of the size of the organization supported by the technical communication group, the majority (57%) supported medium-sized organizations with 101 to 2,500 workers; a logical conclusion is that the majority of technical communication groups are in medium-sized enterprises. Table 2 shows the size of organization supported by the technical communication staff.


The majority of participants, 71 (79%), were from the United States. In addition, 7 (8%) were from Canada, 1 (1%) from China, 2 (2%) from the European Union, 7 (8%) from India, and 2 (2%) from other countries. We do not feel that this reflects the demographics of the profession; rather, we think this reflects the membership of the organizations from which we sought participation.

In terms of the scope of operations, most participants, 63 (70%), stated that their operations are global in scope. Of the rest, 2 (2%) have operations that focus on a single metropolitan area, 2 (2%) have operations that focus on the state or province in which they are located, 3 (3%) have operations that focus on the geographic region where they are located (such as the U.S. mid-Atlantic or New England regions), 14 (16%) have operations that are national in scope, and 5 (6%) have operations that are continental in scope.

In terms of management experience, most (57, or 63%) of the managers responding to the survey had more than 5 years’ experience: 2 (2%) had more than 25 years of experience, 5 (6%) had 21 to 25 years, 9 (10%) had 16 to 20 years, 16 (18%) had 11 to 15 years, 16 (18%) had 8 to 10 years, and 9 (10%) had 5 to 7 years. Of those with 5 years of experience or fewer, 9 (10%) had 4 to 5 years, 6 (7%) had 3 to 4 years, 8 (9%) had 2 to 3 years, and 8 (9%) had 1 to 2 years of experience. Only 2 (2%) had less than 1 year of experience.

Although the managers responding to this survey had extensive experience, most had a shorter tenure in their current position. Sixteen (16) (18%) had been in their current management role for less than 1 year, 18 (20%) had 1 to 2 years in their current role, 15 (17%) had 2 to 3 years, 8 (9%) had 3 to 4 years, 8 (9%) had 4 to 5 years, 12 (13%) had 5 to 7 years, 8 (9%) had 8 to 10 years, 3 (3%) had 11 to 15 years, 1 (1%) had been in the current management role for 16 to 20 years, and 1 (1%) had been in the role for more than 20 years.

In other words, sizable groups of the managers participating in the survey represented high tech and telecommunications firms, managed staffs of 10 or fewer workers, and worked in medium-sized organizations that tended to have a global scope of operations. The typical manager responding to this survey had more than 5 years’ experience in management but 5 years or less in their current position, although managers with other levels of experience and scopes of operations were represented.

How Technical Communication Managers Assess Productivity

This section reports how managers assess productivity and offers conclusions regarding the entering belief:

Activities for tracking productivity by technical communication managers are limited.

For the purpose of this study, productivity refers not only to how much work technical communication departments produce, but also related planning and payment activities. Specifically, this part of the survey reports on the methods that managers use to estimate and track projects, broader measures that managers use to track productivity, and a possible relationship of productivity to budgets, which we thought might exist.

The dominant method used for estimating projects is guesstimating based on experience (53 responses, or 59%). Table 3 shows the different means that technical communication managers use to estimate projects.

Just over half of the participants (49 or 55%) use a project tracking system to follow their projects; 41 participants (46%) do not. Of the 49 participants who have a project tracking system, 41 indicated why they track projects:

  • “…because my sponsor requires me to” (19—46%—of the 41 participants).
  • “… to ensure accurate billing for services” (17—42%—of the 41 participants).
  • “…as a basis for reporting the status of projects to sponsors” (12—29%—of the 41 participants).
  • “… to determine whether projects are progressing according to plan” (7—17%—of the 41 participants).

Although one of the purposes of project tracking systems is informing future project management efforts (Lasecke, 1996), the majority of participants (34, or 83%, of the 41 participants who use project tracking systems) do not use the data tracked by these systems.

Just 23 participants (26%) in this survey track their productivity rates. Of those who do:

  • 13 of 23 participants (or 57%) track the number of pages produced per staff member per day
  • 4 of 23 participants (or 17%) track the number of screens produced per staff member per day
  • 10 of 23 participants (or 44%) track the average number of finished hours of instruction produced per staff member

The majority of participants (60 or 67%) are not required to report productivity to their bosses. Those who are required to report productivity report the following measures:

  • Revenue generated by their departments (10 of 30 responses or 33%)
  • A comparison of the output of the staff (in terms of dollars generated) with input (in terms of dollars invested) (14 of 30 or 47%)
  • The number of hours of effort required to produce a finished page, screen or hour of instruction (11 of 30 or 37%)
  • The number of people reached by the team’s products, such as the number of people who have visited Web pages, or the number of people who have read publications (17 of 30 responses or 57%)

Thirty (30) participants, the largest number of people responding to this section of the survey, indicated “My boss does not ask for reports on productivity, so I do not provide them.”

Because few of the managers who participated in the related qualitative study established their department budgets, we assumed that most technical communication managers did not participate in setting their budgets. The results of the survey suggested otherwise; 52 of 90 participants (58%) prepared the initial drafts of budgets for their staff. The extent of involvement was therefore higher than expected and did not support the entering belief.

Similarly, 54 (60%) knew their staffs’ budgets. Of those who knew their budget, the items included in a staff’s budget varied. Budgets included the following items:

  • Salaries for staff (51 of 54 participants or 94%)
  • The salary of the manager completing the survey (42 of 54 participants or 78%)
  • Printing and warehousing costs for training and related materials (23 of 54 participants or 43%)
  • Professional development and training costs for the technical communication staff (46 of 54 participants or 85%)

In other words, line items in the budgets vary widely among technical communication groups.

When asked “if you were to choose a measure to track the productivity of your team,” 52 participants (58%) said that they would know what the measure would be.

  • Twenty seven (27) of the 43 write-in responses related to the amount of work produced per unit of time, such as the number of screens or help topics produced per day.
  • Sixteen (16) of the responses focused on meeting deadlines.
  • The remaining response was “cost per type of attraction.”

Figure 2 presents the write-in responses.


When asked “if your boss were to ask you to track the productivity of your team, what measure do you think your boss would require?” about half of the technical communication managers (46 or 50%) indicated that they did not know; in other words, they would probably be at a loss.

All the same, productivity measures were used to determine staff salaries or bonuses in the organizations of 33 participants (37% of all the participants). Of those 33, 97% use this productivity data to determine salary increases and 82% use the productivity data to determine bonuses.

To determine whether larger departments were more likely to track productivity, we ran cross-tabulations comparing size of department and the requirement to track productivity. Only in organizations with 250 or fewer staff were participants more likely to report productivity. Because the number of participants in those categories was small, however, these results might not be replicated in a broader study. Table 4 presents the cross-tabulations.


We also ran cross-tabulations to determine whether the size of the team that a technical communication manager oversaw or the industry in which the manager worked had an impact on the likelihood of reporting productivity. But in every size category and in every industry, those who were not required to report productivity outnumbered those who were required to do so. Similarly, active involvement in setting the department budget was not related to a requirement to report productivity.

Given that:

  • The dominant method of estimating projects is guesstimating,
  • Just 55% of participants use a project tracking system and less than 25% of those use the project tracking system to report on projects,
  • Only one-third of participants said that they were required to report productivity, and
  • More than half of the participants indicated that they were not aware of a productivity measure that their managers expected them to report.

The data supports the entering belief, “Activities for tracking productivity by technical communication managers are limited.”

How Technical Communication Managers Assess Effectiveness

This section reports how managers assess effectiveness and offers conclusions regarding the entering beliefs:

  • Technical communication groups rarely solicit feedback and perceptions on individual communication products.
  • Technical communication groups employ usability testing on a limited basis.

Specifically, this part of the survey reports on the methods that managers use to solicit different measures of the effectiveness of communication products, including user feedback, usability, general perceptions of technical communication products and services, and the value added by—or return on investment in—technical communication products and services.

Practices for Soliciting Immediate Feedback from Users. Previous research suggests that technical communicators use a couple of methods to solicit immediate feedback from users, and neither is widely used (Carliner, 2004). One is Reader’s Comment Forms (RCFs), which are primarily included with printed materials and solicit feedback regarding the technical accuracy of the content. Another is satisfaction surveys, which are included with both printed and online materials. Sometimes the satisfaction surveys pertain to an entire document (what did you think of the user’s guide?); in other instances, the survey pertains to an individual Web page (did this page help you?).

The evidence supports previous research. Of those responding, only 24 (27%) provide Reader’s Comment Forms with the materials they publish. (Note that not all of the technical communication managers participating in this survey answered this question.)

Of those providing Reader’s Comment Forms, 18 of 24 (21% of the total participants) provide them for user’s guides, 15 of 24 (17%) for references, 18 of 24 (21%) for Help (online user assistance), 14 of 24 (16%) for service guides, and 11 (13%) for other types of technical communication products. Table 5 shows the extent of use of Reader’s Comment Forms for different types of technical communication products.

Participants primarily use Reader’s Comment Forms to track feedback on the accuracy and general usability of the content, and user satisfaction. Consider these uses of Reader’s Comment Forms:

  • Information about technical errors in the content (21 responses, 24%)
  • Feedback on the general usability of the content (apart from technical errors) (21 responses, 24%)
  • Feedback on users’ general levels of satisfaction with the content (21 responses, 24%)

Table 6 summarizes the types of feedback sought by Reader’s Comment Forms.


The extent to which Reader’s Comment Forms are used is mixed, with responses ranging from nearly all material published to less than 10%. Table 7 shows the usage rates for Reader’s Comment Forms.

In general, response rates to Reader’s Comment Forms are extremely low. Eighty (80) (93%) participants have a response rate of less than 10%. Table 8 shows the response rates for Reader’s Comment Forms.

Cross-tabulations suggest that Reader’s Comment Forms were more likely to be used in certain types of organizations, including ones with 26 to 50 workers and those with 2,501 through 25,000 workers. Table 9 shows this cross-tabulation.

Cross-tabulations also suggest that technical communication groups with 26 to 50 staff are more likely to use Reader’s Comment Forms than groups of other sizes, and those working in the energy industry are more likely to use Reader’s Comment Forms than people in other industries.


Given that only 27% of participants use Reader’s Comment Forms and, when they do, the majority have a response rate of 10% or less, the evidence supports the entering belief that technical communication groups rarely solicit feedback and perceptions on individual communication products. Furthermore, the evidence suggests that when technical communicators do solicit this type of feedback, the response is low. All the same, the evidence also suggests that the feedback can provide insights into the accuracy, usability, and perception of these communication products.

Use of Usability Tests. Usability tests assess the extent to which users can actually perform the tasks and procedures described in communication products (Barnum, 2010). Because usability tests provide insights into the effectiveness of the work of technical communicators, the literature on the field identifies these tests as a key tool for all technical communication products (Markel, 2009). Yet previous studies (such as Carliner, 2004, in preparation) suggest the entering belief that technical communication groups conduct usability testing on a limited basis. The next section sought evidence to address this entering belief.

In terms of the extent to which technical communication groups perform usability testing, 49 (54%) of participants said that some or all of the content that their teams produce undergoes usability tests, 37 (41%) said that the content they produce does not, and 4 (4%) participants did not respond.

When usability testing is performed, 24 (27%) use it to test an entire product or service, including the documentation (that is, a broader test than one of the documentation alone). Four (4) (4%) use it only to test the interface, with the documentation considered separate from the interface; 18 (20%) use it to test only the documentation; and 3 (3%) use usability testing for some other purpose.

Thirty eight (38) participants (42%) conduct usability tests while a product and its documentation are still in development; 8 (9%) conduct usability testing after general release of the product and documentation, and 3 (3%) conduct usability tests at some other time. Forty one (41) participants (46%) did not respond.

Twenty three (23) participants (26%) said that someone on their immediate staff has primary responsibility for conducting the usability test while 26 (29%) said that someone from outside of their immediate staff has primary responsibility for conducting the test.

In terms of the extent of usability testing, the majority of those who conduct it test 25% or fewer of their communication products. Table 10 presents the overall extent to which usability testing is performed.


Cross-tabulations suggest that usability testing was more likely to be performed in organizations with 26 to 50 workers, 251 to 500 workers, and 1,001 to 5,000 workers. Table 11 shows the cross-tabulation linking size of organization and likelihood of conducting usability testing.

Cross-tabulations also suggest that technical communication groups with 11 to 50 workers were more likely to conduct usability testing than those in other size ranges. Table 12 presents this cross-tabulation.

In addition, cross-tabulations show that technical communicators working in the high tech and telecommunications, hospitality, manufacturing, and other industry groups were more likely to conduct usability testing than technical communicators working in other industries. (Because only 1 person identified as Education and responded to this question, the sample is too small to reach a conclusion.) Table 13 presents this cross-tabulation.


Last, technical communicators who knew their budget and actively participated in setting it were more likely to perform usability testing than those who did not. Table 14 presents this cross-tabulation.

Given that only just over half of the technical communication groups conduct any usability testing, that the majority of those conducting usability tests address both a product and its documentation, that the majority of those conducting usability testing only do so for 25% or less of their work, and that more than half of the usability testing is conducted by someone outside of the technical communication group, we find that the evidence supports the second entering belief: technical communication groups employ usability testing on a limited basis.

Tracking Perception of Technical Communication Efforts. Previous research suggests that one of the ways that technical communicators assess the long-term impact of their work is through responses to questions posed to customers in semi-annual, annual, and bi-annual opinion surveys (the surveying schedule varied among organizations). This prompted the entering belief that customer surveys play an important role in assessing general impressions of technical communication products. The next section of the survey assessed the nature and extent of practices associated with customer surveys.


Only 36% of participants (32) said that they use opinion surveys to track the perceptions of the technical communication products produced by their teams.

In terms of the way that organizations track perceptions, the largest percentage of participants (23 or 26%) said that their organizations do so through a larger survey about their organization, such as a customer survey, rather than customer perceptions about a particular product. Only 13 participants (14%) said that their organizations conduct a separate survey just about documentation. Table 15 lists the ways that organizations track perceptions of technical communication products.

Cross-tabulations suggest that perception surveys were more likely to be used by technical communication groups in organizations with 1,001 to 2,500 workers. Table 16 shows this cross-tabulation.

Other cross-tabulations suggest that technical communication groups with 26 to 50 workers were more likely to conduct perception studies; Table 17 shows this cross-tabulation.

Additional cross-tabulations suggest that technical communicators in the high tech industry were also more likely to conduct perception studies than technical communicators working in other industries. Table 18 shows the cross-tabulations.

Cross-tabulations did not suggest any effect of the technical communication manager’s knowledge of—or involvement in setting—the budget for the group on the likelihood of conducting perception studies.

Given that just 36% of participants stated that their organizations use surveys to track perceptions, the entering belief that customer surveys play an important role in assessing general impressions of technical communication products is only partially supported. The surveys are used, but not widely. The evidence suggests that the surveys play a more significant role in some industries (like high tech) and organizations of a particular size.

Tracking Return on Investment. Both the peer-reviewed (such as Redish, 1995) and popular literature (such as Rockley, 2004) advise professional technical communicators to demonstrate the positive financial impact of their communication products on organizations by contrasting the investment in designing and developing effective documentation with the resulting benefits, such as a reduction in the volume of calls to a help line (Downing, 2007; Spencer 1995) or reduction in rework (Daniel, 1995). Earlier empirical studies (such as Carliner, 2004; Ramey, 1995) suggest that technical communicators rarely perform these types of evaluations. This prompted the entering belief that technical communicators rarely track ROI, a belief explored in the next section of the survey.

According to the results, just 9% (8 of the participants) determine ROI for some or all of the technical communication products their team produces. Of the few who do determine ROI:

  • Three calculated costs versus revenues (though they did not mention the source of these revenues),
  • Three compared the cost of technical communication products with the cost of support,
  • One compared the cost of developing technical communication products internally and externally, and
  • One used an electronic ROI calculator (though did not identify the source of this calculator).

Figure 3 lists all of the methods participants used to calculate ROI.

Among the few who do calculate ROI, half do so for 25% or fewer of the work they produce. Table 19 shows the extent to which technical communication managers calculate ROI.


Cross-tabulations only suggested that calculations of ROI were more likely to be performed by technical communication groups who knew their budget and who actively participated in setting it. Table 20 shows the cross-tabulation.

Given that fewer than 10% of participants indicated that they calculate the ROI of technical communication products and, of those, half calculate ROI for only 25% or fewer of the technical communication products they produce, the evidence supports the entering belief that technical communicators rarely track ROI.

Reporting Effectiveness and Productivity Measures

The last section of the survey explored how managers report the effectiveness and productivity of their staffs to others in their organizations. These questions specifically assessed two entering beliefs:

  • Technical communication managers feel limited pressure to report productivity and effectiveness.
  • The most significant criterion against which the effectiveness of their staffs is assessed is word of mouth.

Fewer than half of the participants (40 or 44%) were required to report the effectiveness of their staffs.

Cross-tabulations suggest that technical communication groups supporting organizations of 26 to 50, 101 to 250, 5,001 to 10,000 and 25,001 and more workers are most likely to report effectiveness. Table 21 presents the cross-tabulations.

A second characteristic affecting the likelihood of reporting effectiveness is size of the technical communication group itself. Cross-tabulations suggest that technical communication groups with 6 to 10, 26 to 50 or more than 100 workers are more likely to report effectiveness than other sized groups. Table 22 presents the cross-tabulations.

A third characteristic affecting the likelihood of reporting effectiveness is industry. Cross-tabulations suggest that technical communication groups working in the financial services, government, and other industries are more likely to report effectiveness than groups working in other industries. Table 23 presents these cross-tabulations.


The last characteristic affecting the likelihood of reporting effectiveness is whether the technical communication manager participated in drafting the budget for his or her team. Cross-tabulations suggest that managers who do are more likely to report effectiveness than those who do not. Table 24 presents the cross-tabulations.

In terms of specific measures requested of technical communication managers by their sponsors, slightly fewer than one-third of participants (29 or 32%) said that they were required to report results of surveys. Fewer than one-fifth of the participants (17 or 19%) are required to report the results of usability tests. Fewer than one-sixth (12 or 13%) are required to report the results of Reader’s Comment Forms or were requested to report other types of measures of effectiveness. Table 25 presents the measures of effectiveness requested by sponsors of technical communication.

Managers identified other measures used by sponsors to track the effectiveness of technical communication teams. The most commonly indicated one was word-of-mouth (informal positive and negative feedback about the staff) reported by 74 participants (82%). Other measures include:

  • Service quality—how well the staff services the requests that are received, such as turnaround time on requests (55 participants or 61%)
  • Reach—the number of users the technical communication group has reached in a given year (20 participants or 22%)


About one-fifth of the participants (19 or 21%) do not know how their sponsors assess the effectiveness of their groups.

In terms of the most important measure of effectiveness to sponsors, the largest number of participants (33 or 37%) reported word of mouth. Service quality was the second most important measure (25 participants or 28%). Table 26 presents the most important measures to sponsors of the effectiveness of technical communication.

The most common means of reporting the productivity and effectiveness of the technical communication group is a regularly produced report, such as a monthly or quarterly report (48 responses or 53%). The next most common is “when it comes up in conversation” (19 responses or 21%). About 10% (9 participants) do not report results. Table 27 shows the means that managers use to report the results of their technical communication teams.

In terms of whether effectiveness or productivity metrics are more important, nearly two-thirds (59 or 66%) felt the two types of metrics were equally important.

Similarly, in terms of reporting metrics, nearly three-fifths of participants (53 or 59%) felt that reporting both types of metrics is important.

In terms of satisfaction with the measures they use to track and report effectiveness, the majority of participants felt that the measures were not adequate. Twenty-one participants (23%) did not think the measures used were representative at all, and another 37 (41%) felt that the metrics they used were representative but extremely incomplete (where complete means the measures represent all aspects of the group’s work). Table 28 presents managers’ satisfaction with the measures used to reflect the effectiveness of their staffs’ work.

When asked about the relative importance of tracking metrics of productivity and effectiveness among all tasks performed by their staffs, three-quarters of the participants did not feel it was an important task: 23 (26%) felt that this reporting was not important at all, and 45 (50%) said it was only moderately important. Table 29 shows the perceived importance of tracking measures of productivity and effectiveness.

Last, participants were asked whether their organizations use effectiveness measures to determine staff salaries or bonuses. Over half (48 or 53%) indicated that they do not. Among those whose organizations do, 37 participants (41% of all respondents) said the measures are used to determine salaries and 28 (31%) said they are used to determine bonuses.

Given that just 44% of the participants were required to report the effectiveness of their staffs, and that 76% felt that reporting productivity and effectiveness measures was, at best, only moderately important, the data support the entering belief that technical communication managers feel limited pressure to report productivity and effectiveness.
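
As a quick check, and assuming all 90 participants answered the importance item summarized in Table 29, the 76% cited here follows directly from the counts reported above (the 23 who felt the reporting was not important at all plus the 45 who felt it was only moderately important):

\[
\frac{23 + 45}{90} \;=\; \frac{68}{90} \;\approx\; 0.756 \;\approx\; 76\%
\]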

Furthermore, the technical communication managers participating in this survey chose word of mouth and service quality as the two measures most important to their sponsors, selecting these responses 4.7 and 3.6 times more frequently than the third most common response. This supports the entering belief that word of mouth is the most significant criterion against which the productivity and effectiveness of technical communication staffs is assessed. Not only was the use of other measures much lower, but the technical communication managers who participated in this study also reported low satisfaction with the measures available to them.

Discussion and Conclusions

This section considers what these results mean. First, we suggest the implications of these results for practicing professionals. Next, we consider the limitations of these results. We close by suggesting future research.

Implications

This survey-based study provides evidence that supports most of the entering beliefs of this study:

  • Activities for tracking productivity by technical communication managers are limited.
  • Technical communication groups rarely solicit feedback and perceptions on individual communication products.
  • Technical communication groups employ usability testing on a limited basis.
  • Technical communicators rarely track ROI.
  • Technical communication managers feel limited pressure to report productivity and effectiveness.
  • The most significant criterion against which the productivity and effectiveness of technical communication groups is assessed is word of mouth.

The evidence also suggests that service quality (perceptions of the quality and responsiveness of the service provided by technical communicators to the people who hire them, rather than to the users who ultimately benefit from the end products) is another significant criterion against which the productivity and effectiveness of technical communication groups is assessed, although this was not an entering belief. The evidence suggests, too, that customer surveys do not play as strong a role in assessing general impressions of technical communication products and services as we expected at the outset of the study.

In addition, the cross-tabulations hint that pressures to report vary among groups of different sizes, in different industries, and in enterprises of different sizes. That variation only makes finding consensus metrics that are useful in all situations all the more challenging.

To be honest, these results are not surprising. They confirm findings from the earlier empirical studies of practice by technical communication managers, such as Barr and Rosenbaum’s (1990) study of the productivity of technical communication managers, Ramey’s (1995) study of the perceptions of measuring value added, and Carliner’s (2004) study of the management portfolios of technical communication managers.

But they do have serious implications for practicing professionals. Despite a discussion about means of assessing the productivity and effectiveness of technical communicators that has spanned more than a quarter of a century, the evidence suggests that none of the methods of assessment has reached wide use. Methods like Reader’s Comment Forms, perception studies, and return on investment are used by only a minority of technical communication groups. Usability testing is more widely performed, but still performed sparingly, both in terms of the number of organizations conducting these tests and the number of technical communication products tested. Few assessments of the productivity of technical communicators exist and, of those that do, none is in use by even a sizable minority of technical communication groups. Given the limited use of observable and measurable metrics and the more widespread reliance on untracked perceptions (especially word of mouth and service quality), it should not be surprising that technical communication managers report low satisfaction with the means used to assess the productivity and effectiveness of their groups.

Given this void of useful measurements of productivity and effectiveness, what is surprising is that the issue has generated so little interest, much less sustained interest, within the research community.

The evidence from this study suggests that, instead of quantifiable measures, the most common means of assessing the productivity and effectiveness of technical communicators are word of mouth and service quality: that is, perceptions of the quality and responsiveness of the service provided by technical communicators to the people who hire them, not to the users who ultimately benefit from the end products.

That technical communicators would seek valid, generalizable quantitative measures of their productivity and effectiveness is understandable, given that the majority work in engineering- and science-based environments, like high technology and telecommunications firms, defense contractors, and engineering firms, which are known for assessing themselves on quantitative measures. Furthermore, given that the majority of technical communicators work in private corporations that measure their success in financial terms, the search for similar financial measures to assess the contributions of technical communication is also understandable.

But apparently, such measurement is rarely done. And in the few instances when it is, the measures that have been developed are, at best, imprecise. For example, because accounting systems can only measure money that was actually spent, the cost savings offered by technical communication products are, at best, estimates. Furthermore, many of these estimates rely on imprecise and often inaccurate data that is self-reported by participants, further reducing the credibility of such measures. Most significantly, the means for deriving more precise measures are generally cumbersome and time consuming, and time-constrained technical communicators can often spend their time more productively on other tasks. Ramey (1995) identified this practical concern two decades ago; the situation has not changed since.
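
To make that imprecision concrete, the sketch below shows the general shape of the cost-savings calculation the value-added literature describes (for example, savings from avoided support calls, as in Spencer, 1995). Every figure in it is hypothetical; in practice, each input would itself be an estimate or a self-reported number, which is exactly why the resulting ROI inherits that imprecision.

```python
# Simplified sketch of a documentation cost-savings / ROI estimate.
# All figures are hypothetical; real inputs are typically estimated or self-reported.
calls_avoided_per_year = 1_200       # estimated reduction in support calls
cost_per_support_call = 25.00        # assumed average cost per call, in dollars
documentation_cost = 18_000.00       # assumed cost of producing the documentation

estimated_savings = calls_avoided_per_year * cost_per_support_call
roi_percent = (estimated_savings - documentation_cost) / documentation_cost * 100

print(f"Estimated savings: ${estimated_savings:,.2f}")
print(f"Estimated ROI: {roi_percent:.0f}%")   # ~67% here, but only as good as the inputs
```

The arithmetic itself is trivial; the credibility problem lies entirely in the inputs, none of which an accounting system can verify.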

So instead of focusing on ROI, perhaps tracking perceptions of the service provided by technical communicators might prove easier to conduct on a sustained basis and provide more useful data to inform the answer to the crucial question underlying the concern about providing value: what are the sponsor’s long-term intentions regarding this group of technical communicators? Will the sponsor continue using them? Will the sponsor expand use of the service, or curtail it?

Limitations of the Study

Several issues limit the results of this study. One is that we used a convenience sample recruited from members of various professional associations. As a result, a response rate as a percentage of invitations cannot be reported. In addition, the small number of participants (just 90) further limits this study. Although the demographics of the participants bear a strong similarity to those of the Society for Technical Communication (the principal investigator is a past STC officer and regularly received membership statistics while serving on the Board), the sample might not be representative of the larger population of technical communication managers and the results, therefore, might not be generalizable to that population.

That suggests a second limitation of the study. Because of the nature of the sample, we chose not to run inferential statistical tests, like t-tests, on the data.

A third limitation of this study is that the data is self-reported. In other words, the assessments of productivity and effectiveness that participants reported might not reflect their actual practice.

The fourth limitation of the study is the length of the survey. The survey was admittedly long and, because of that, some participants might have experienced survey fatigue. Although we attempted to address this concern through pilot testing of the instrument, the results suggest that fatigue might nonetheless have affected responses to the actual survey.

A fifth limitation is time. Because this survey was conducted before the economic downturn of 2008 and 2009, perceptions about the importance of tracking productivity and effectiveness might have changed since then, as might the tracking activities themselves.

Even with these limitations, however, the results are, as noted in particular sections, consistent with earlier research by others, so they might still have value.

Suggestions for Future Research

Given the limitations of this study, at the least, a project that replicates this study under more controlled conditions and with a larger population could indicate the extent to which these findings hold.

But is such a study really needed? Research spanning 20 years, even though it was performed with convenience or purposeful samples rather than random ones, has produced essentially the same findings. That preponderance of evidence suggests that, for the purpose of deciding how to proceed, an additional study is likely only to confirm what is already known. Although purists in quantitative research techniques might differ with our conclusion, we strongly believe that the only benefit of conducting a similar study later would be to assess whether the patterns continue to hold.

Rather, future studies might act on what earlier research has shown and explore in more depth some of the specific findings of this one. One set of effects pertains to the possible impact of organization size, department size, industry, and role in setting the budget on the extent to which technical communication managers perform specific activities to evaluate productivity and effectiveness. Most likely, separate studies would be needed to explore each of these effects.

Similarly, this study identified that informal perceptions, communicated through word of mouth and external assessments of service quality, play a role in shaping how the bosses of technical communication managers assess the work of these departments. Although previous research and writing about quality has focused on externally verified metrics, perhaps these bosses’ perceptions should be explored more formally. Given the lack of pervasively used metrics, the finding that the majority of technical communication managers identified these informal perceptions as the most important means their bosses use to assess the productivity and effectiveness of their staffs, and the possibility that assessing such perceptions might ultimately prove more logistically practical than gathering other types of metrics, future research might focus on characterizing the nature of word-of-mouth and service-quality assessments, with the goal of devising metrics that are more easily and widely gathered and of greater utility to technical communication managers.

But most significantly, this study found that technical communication managers felt that tracking productivity and effectiveness was, at best, moderately important. Future studies might investigate this perception further, exploring not only why technical communication managers feel this way but also the extent to which their sponsors feel the same.

Acknowledgment

This study was partially supported by a seed funding grant from Concordia University.

References

Amidon, S., & Blythe, S. (2008). Wrestling with proteus: Tales of communication managers in a changing economy. Journal of Business and Technical Communication, 22(1), 5–37.

Barnum, C. M. (2011). Usability testing essentials: Ready, set…test! Burlington, MA: Morgan-Kaufmann.

Barr, J. P., & Rosenbaum, S. (1990, reprinted 2003). Documentation and training productivity benchmarks. Technical Communication, 50(4), 471–484.

Bassi, L., & McMurrer, D. (2007). Maximizing your return on people. Harvard Business Review, (March), 115–123.

Blackwell, C. A. (1995). A good installation guide increases user satisfaction and reduces support costs. Technical Communication, 42(1), 56–60.

Carliner, S. (1997). Demonstrating the effectiveness and value of technical communication products and services: A four-level process. Technical Communication, 44(3), 252–265.

Carliner, S. (1998). Business objectives: A key tool for demonstrating the value of technical communication products. Technical Communication, 45(3), 380–384.

Carliner, S. (2000). Physical, cognitive, and affective: A three-part framework for information design. Technical Communication, 47(4), 561–576.

Carliner, S. (2003). Characteristic-based, task-based, and results-based: The three value systems for assessing professionally produced technical communication products. Technical Communication Quarterly, 12(1), 83–100.

Carliner, S. (2004). What do we manage? A survey of the management portfolios of large technical communication departments. Technical Communication, 51(1), 45–67.

Carliner, S. (2009). Culture conflicts in demonstrating the value of HRD. In C. Hansen & Y. Lee (Eds.), The cultural context of human resource development (pp. 179–196). New York, NY: Palgrave Macmillan.

Creswell, J. (2008). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage.

Daniel, R. (1995). Revising letters to veterans. Technical Communication, 42(1), 69–75.

De Jong, M., & Schellens, P. J. (2000). Toward a document evaluation methodology: What does research tell us about the validity and reliability of evaluation methods? IEEE Transactions on Professional Communication, 43(3), 242–260.

Downing, J. (2007). Using customer contact centers to measure the effectiveness of online help systems. Technical Communication, 54(2), 201–209.

Eaton, A., Brewer, P. E., Portewig, T. C., & Davidson, C. R. (2008). Examining editing in the workplace from the author’s point of view. Technical Communication, 55(2), 111–139.

Evans, J. R., & Mathur, A. (2005). The value of online surveys. Internet Research, 15(2), 195–219.

Fisher, J. (1999). The value of the technical communicator’s role in the development of information systems. IEEE Transactions on Professional Communication, 42(3), 145–155.

Fulkerson, A. (2010). The evolution of user manuals. Forbes, August 9, 2010. Retrieved from http://www.forbes.com/2010/08/07/customer-service-fulkerson-technology-documentation.html.

Galloway, L. (2007). Don’t focus on ROI. Training, November/December 2007.

Hackos, J. T. (1994). Managing your documentation projects. New York, NY: John Wiley.

Hackos, J. T. (2007). Information development: Managing your documentation projects, portfolio, and people. Indianapolis, IN: Wiley.

Hamilton, R. L. (2009). Managing writers. Fort Collins, CO: XML Press.

Hargis, G., Carey, M., Hernandez, A. K., Hughes, P., Longo, D., Rouiller, S., & Wilde, E. (2004). Developing quality technical information: A handbook for writers and editors (2nd ed.). Armonk, NY: IBM Press.

Henry, J. (1998). Documenting contributory expertise: The value added by technical communicators in collaborative writing situations. Technical Communication, 45(2), 207–220.

Kay, R. H. (2007). A formative analysis of resources used to learn software. Canadian Journal of Learning and Technology, 33(1). Retrieved from http://www.cjlt.ca/index.php/cjlt/article/view/20.

Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco, CA: Berrett-Koehler.

Lasecke, J. (1996). Stop guesstimating, start estimating! Intercom, 43(9).

Lentz, L., & De Jong, M. (2009). How do experts assess usability problems? An empirical analysis of cognitive shortcuts. Technical Communication, 56(2), 111–121.

Loges, M. (1998). The value of technical documentation as an aid in training: The case of the U.S. Lighthouse Board. Journal of Business and Technical Communication, 12(4), 437–453.

Markel, M. (2010). Technical communication (9th ed.). Boston, MA: Bedford/St. Martin’s.

Mead, J. (1998). Measuring the value added by technical documentation: A review of research and practice. Technical Communication, 45(3), 353–379.

Phillips, J. J. (2003). Return on investment in training and performance improvement programs (2nd ed.). Burlington, MA: Butterworth-Heinemann.

Pieratti, D. D. (1995). How the process and organization can help or hinder adding value. Technical Communication, 42(1), 61–68.

Ramey, J. (1995). What technical communicators think about measuring value added: Report on a questionnaire. Technical Communication, 42(1), 40–51.

Redish, J. (1995). Adding value as a technical communicator. Technical Communication, 42(1), 26–39.

Rockley, A. (2004). Identifying the components of your ROI. The Rockley Report, 1(1). Retrieved from http://www.rockley.com/TheRockleyReport/V1I1/Gaining%20Management%20Support.htm.

Rook, F. (1993). Remembering the details: Matters of grammar and style. In C. M. Barnum & S. Carliner (Eds.), Techniques for technical communicators (pp. 274–290). New York, NY: Macmillan.

Rude, C. D., & Eaton, A. (2010). Technical editing (5th ed.). Boston, MA: Allyn & Bacon.

Smart, K., Seawright, K. K., & de Tienne, K. B. (1996). Defining quality in technical communication: A holistic approach. Technical Communication, 42(3), 474–481.

Spencer, C. J. (1995). A good user’s guide means fewer support calls and lower costs. Technical Communication, 42(1), 52–55.

Spilka, R. (2000). The issue of quality in professional documentation: How can academia make more of a difference? Technical Communication Quarterly, 9(2), 207–220.

STC. (n.d.). How technical writers add value to your team. Retrieved from: https://www.stc.org/story/value.asp.

Swanson, R. A., & Holton, E. F. (2009). Foundations of human resource development (2nd ed.). San Francisco, CA: Berrett-Koehler.

Swanwick, P., & Leckenby, J. W. (2010). Measuring productivity. Intercom, 57(8). Retrieved from: http://intercom.stc.org/wp-content/uploads/2010/09/Measuring_Productivity.pdf.

Van Buren, R., & Buehler, M. F. (2000). Levels of edit (2nd ed.). Pasadena, CA: Jet Propulsion Laboratory.

Wright-Isak, C., Faber, R. J., & Horner, L. R. (1997). Comprehensive measurement of advertising effectiveness: Notes from the marketplace. In W. W. Wells (Ed.), Measuring advertising effectiveness. Mahwah, NJ: Lawrence Erlbaum.

About the Authors

Saul Carliner is an associate professor, Provost Fellow for Digital Learning, and Director of the Education Doctoral Program at Concordia University in Montreal. He is a Fellow and past international president of STC. Contact: saulcarliner@hotmail.com

Adnan Qayyum is an assistant professor of education at Penn State University. He has worked in educational technology since 1996, as a researcher, instructional designer and project manager. He has been director of a university online education division, and has consulted for universities, governments, businesses and the World Bank on the effective management and design of technology for learning. He holds a PhD in Educational Technology from Concordia University.

Juan Carlos Sanchez-Lozano holds a PhD in Educational Technology from Concordia University in Montreal, as well as an MBA and a BEng in Aerospace Engineering. He taught software applications and programming at Concordia University. He has also presented his work on the application of new technologies and narratives for advancing skills learning at conferences in North America and Europe. Contact: jcsanchezlozano@gmail.com.

Manuscript received 31 July 2011; revised 18 June 2014; accepted 13 July 2014.