61.4, November 2014

The Technical Communication Editing Test: Three Studies on This Assessment Type

Ryan K. Boettger

Abstract

Purpose: In this paper, I present the results of three studies on editing tests used to screen prospective technical communicators and the error types common to these tests. Because few publicly available, authentic examples exist, I first explore the general characteristics of 55 tests and their 71 error types. These error types are then correlated with 176 professionals' perceptions of them.

Method: The sample's characteristics were first identified from the tests and the hiring managers. Three raters then independently classified the error types using coding schemas from previous taxonomies of college-level writing. Finally, a 24-question survey was administered to capture professional communicators' perceptions of error.

Results: Editing tests were typically designed in narrative format and evaluated holistically, but variation in administration and format existed. The sample included 3,568 errors and 71 error types. Errors related to wrong words, spelling, and capitalization dominated, but 13 other errors appeared frequently and were dispersed within at least 50% of the sample. Conversely, professionals were bothered most by apostrophe errors, homonyms, and sentence fragments. No significant correlations were found among the frequencies and dispersions of the editing tests' errors and the professionals' perceptions of those errors.

Conclusions: Editing tests share common characteristics, but organizational context substantially influences their format and contents. The editing test error types were largely consistent with types identified in college-writing taxonomies; however, context again influenced why errors were introduced as well as which types were identified. Finally, hiring managers and professionals hold different perceptions of error. Understanding these differences can produce better assessment tools and better prepare test takers.

Keywords: editing tests, error taxonomies, grammar, technical editing, usage

Practitioner's Takeaway

  • The editing test is a privatized and highly contextualized assessment type. Practitioners need to understand where variation in format and content exists so they can better prepare for or create more effective tests.
  • The weighted index of 71 error types provides the first empirically derived list of errors from a professional text. Practitioners can use this resource to prepare for an editing test or revise their in-house style guide. Academics can use this resource to extend research on editing and error as well as prepare the next generation of technical communicators.
  • Hiring managers and professionals have different perspectives on error. This knowledge might help move the discussion away from personal preferences to understanding why audiences in our field perceive and prioritize error differently.

Introduction

Editing tests assess the basic skills of prospective technical communicators. According to a 2003 article in Intercom, hiring managers typically construct these tests around three primary skill sets: (1) the test takers' ability to recognize obvious grammatical and mechanical errors; (2) their ability to recognize less obvious errors that could relate to organization and logic; and (3) their overall editing process, including how they solve problems and demonstrate good judgment (Hart, 2003). With the exception of this Intercom piece, little has been written about this assessment tool.

In this paper, I report the results of three studies that move beyond assumptions about editing tests and opinions about usage error. In the first study, I analyze the general characteristics of 55 authentic tests that managers use for hiring purposes. Results suggest typical conventions of tests as well as insights into how hiring managers prepare and evaluate these tests.

The second study focuses on the major content of editing tests—the error types related to grammar and mechanics, punctuation, spelling, content, style, and design. Usage error is a popular topic among technical communication practitioners and academics. However, the field offers little empirical evidence of which errors are most predominant and prioritized, or of how these audiences think about error.

The third study reports how 176 technical communication professionals perceive error, specifically the errors that were frequently occurring and strongly dispersed within the editing test sample. Collectively, the results from these studies can benefit the hiring managers who create the tests, the practitioners who take the tests, and the academics who prepare the next generation of technical communicators.

I posed the following research questions to guide these inquiries:

RQ1: What are the general characteristics of editing tests? Specifically, how are the tests administered, what are the typical formats, and how is performance assessed?

RQ2: What are the general contents of editing tests? Specifically, which error types appear most frequently and how dispersed are these errors?

RQ3: How do practicing technical communicators' perceptions of error reflect the errors identified in the sample?

Study 1: Characteristics of Editing Tests

The editing test is a privatized genre, containing sensitive information that is circulated only within organizations or with important stakeholders. Technical communication includes a variety of privatized genres, including the proposal and the report for decision making (Johnson-Sheehan, 2008; Rude, 1995). The editing test's privatization also means few authentic examples are publicly available. Textbooks include copyediting practice, and the Dow Jones News Fund releases an annual test in connection with its internship program, but these examples do not necessarily assess the skills specific to technical communicators (Bragdon, 1995; Rude & Eaton, 2011). Practitioners are then left to speculate about the contents, and new hiring managers have no model for building their company's tests. For the first study, I investigated the general characteristics of 55 authentic editing tests, including their administration, format, and evaluation. The results provide new information on these tests and insights into how managers make hiring decisions.

Methods

I obtained most of the editing tests by posting requests on Listservs (for example, the Society for Technical Communication's Technical Editing Special Interest Group, Copyediting-L). I also emailed the human resources representatives of organizations that posted technical editor or writer positions on job search engines (for example, STC job bank, Monster). I signed a nondisclosure agreement with a majority of the companies to protect the integrity of the test.

I coded the 55 editing tests on a number of content variables related to administration, format, and assessment. The actual tests included most of this information. Almost 80% of the sample's hiring managers also volunteered insights into how they assessed the test as well as their justifications for their test's design and contents.

Results

The editing tests represented companies in 21 states and nine industries; they were all used to evaluate prospective technical communicators. Forty percent of the sample were from either Texas-based (22%) or California-based (18%) companies. This result reflects data reported in STC's most recent salary survey (2013), which indicated companies in these two states employed the most technical writers. Hiring managers from 19 other states contributed to this study's sample, with the next highest number of tests coming from Maryland-based companies (9%).

The industries of the participating companies were broadly classified within the North American Industrial Classification System. Over 60% of the sample were classified as Professional, Scientific, and Technical Services (35%) or Information (26%). The remaining industries included Manufacturing (11%); Educational Service (9%); Transportation and Warehousing; Utilities; Health Care and Social Assistance (5% each); Federal, State, and Local Government; and Arts, Entertainment, and Recreation (2% each).

Administration. Hiring managers indicated no preference in testing site or administration method. Fifty-one percent of tests were administered on-site and 49% off-site. Similarly, 55% of the tests were taken on a computer and 45% on paper. Not surprisingly, 93% of the off-site tests were computer-based, and 89% of the on-site tests were paper-based.

Hiring managers indicated they used computer or paper testing more for their convenience than as a means for assessing a particular skill, such as an applicant's knowledge of copyediting marks. “All the markup in my department is on PDF,” wrote a hiring manager. “I don't care whether applicants use standard markup symbols. I do expect the markup be comprehensible though.” Another hiring manager added that she and her hiring team do not penalize applicants for incorrect copyediting marks: “If they legibly and correctly indicate an error or suggest a revision, we count it as correct.” For the computer-based tests, all applicants were instructed to use the “Track Changes” tool in MS Word.

Twenty-two percent of applicants were given unlimited time to complete their test. Of the remaining tests, applicants had about 46 minutes to complete the paper tests (sd = 31.43, median = 30) and about 4,227 minutes (approximately three days) to complete the electronic tests (sd = 12867.12, median = 180).

Hiring managers cited a variety of reasons for time limits. Two hiring managers said they used limits to observe how applicants work within a designated deadline. One of them gives her applicants only 15 minutes for the test to gauge how quickly they can diagnose the major issues with the document. “You'd be surprised at the results,” she said. “The issues applicants identify often tell me more about their knowledge gaps than strengths.” While neither expected applicants to be content experts, both expected them to query the author about the content in a reasonable period of time. A third hiring manager built 10 additional minutes into her testing time to encourage applicants to ask questions and query the author. She also does not inform applicants when they exceed the test's 30-minute time limit, a data point she records and considers in her hiring decision. Time limits can also hinder the number of edits applicants make, however; several managers provided unlimited time to assess for over-editing, their mark of a poor editor or time manager.

Format. Sixty-seven percent of the tests were written in narrative format. On average, these sections were 1,141 words in length (sd = 1826.97, median = 528). Eleven percent of the tests were in sentence format. On average, these tests contained 18 sentences (sd = 7.12, median = 18). Seven percent of the tests were multiple choice. On average, these tests included 48 sentences/questions (sd = 47.64, median = 20). The remaining 15% of the sample included a combination of narrative, sentence, multiple choice, and true/false formats. Only 34% of tests required applicants to edit a technical table or an instructional graphic.

All of the narrative tests included copy from a live document previously used by the company. The hiring managers reported using a narrative format because it was more open-ended. Indeed, narrative tests are more subjective and therefore give applicants options for correcting an issue. In contrast, the sentence and multiple choice tests were more prescriptive by design (with only one correct edit) and focused on assessing rudimentary knowledge of grammar, punctuation, and spelling.

Finally, 70% of the tests required applicants to demonstrate knowledge of one of five style guides: the Chicago Manual of Style (35%), the American Medical Association Manual of Style (15%), the Microsoft Manual of Style (9%), the Associated Press Stylebook and Briefing on Media Law (3%), or American Psychological Association style (1%). Of the remaining tests, several included style sheets for applicants (7%) or instructed applicants to use the style guide of their choice (7%).

Assessment. The majority of hiring managers assessed their tests holistically (64%) rather than assigning a point for every issue correctly addressed (31%) or using a combination of these approaches (5%).

According to the hiring managers, holistic assessment allowed them to gauge which errors the applicant fixed in relation to the ones they missed. “I am always surprised at the things that are missed,” wrote a hiring manager. “Sometimes applicants focus on rewriting sentences and miss the basics like not spelling out percent.” Holistic assessment also allowed hiring managers to evaluate comprehensive editing skills rather than just copyediting skills. Another hiring manager designed her test to be rewritten rather than simply copyedited: “If an applicant just marks incorrect grammar and catches font changes, he won't be the writer we need.” When evaluating the test, she looks for this awareness and how well applicants demonstrate their understanding of the test's subject matter, saying, “If they don't get the gist of the procedure, then they won't be able to do the work we need here.”

Assessment instruments varied across companies. Ninety-three percent of the hiring managers supplied me with a key to their test (suggesting a more objective assessment) but then acknowledged the key functioned more as a guide and that they already had an idea of the issues they wanted applicants to address. This practice suggests issues that could affect the actual assessment of these tests, including evaluation disparities across multiple raters who might value certain errors over others.

Over 30% of hiring managers assessed their tests objectively, using a point-based approach, and cited standardization in hiring procedures as the reason. One hiring manager used a five-point scale to evaluate “Big Picture” and “Details,” his terminology for macro- and micro-level editing. “Big Picture” skills included how well applicants understood the content and whether they distinguished between steps and explanations, used you-inclusive language, and queried the author on specific issues. “Details” measured whether applicants eliminated jargon or wordiness as well as maintained consistency in the document, such as step numbering, typefaces, and the punctuation of compound words.

Collectively, the hiring managers continue to administer their tests because the tests assess the skills needed to work at their organizations. Many acknowledged that some applicants feel insulted for having to take an editing test but stated their tests evaluate a host of skills and behaviors, some of which applicants never realize are being tested. “Periodically we have arguments on the STC manager's listserv as to whether writing/editing tests are worthwhile,” wrote a hiring manager. “Personally, I think they can show a great deal: how you work under pressure, how you actually write on your own, and the breadth of your skills.” A vital part of succeeding on an editing test, though, is recognizing and fixing a variety of usage errors. The second study focuses on these error types.

Study 2: Error Types in Editing Tests

Usage error remains a popular topic among technical communicators. To prepare her newsletter article on “The Top Ten Errors That Technical Communicators Make,” Wenger (2010, September 13) solicited input through the STCTESIG-L, eliciting a variety of responses, including the correct use and punctuation of restrictive and nonrestrictive clauses and the ungrammatical use of for example. To date, these types of anecdotal discussions remain the best source of information on error for technical communicators.

The best empirical information our field can draw on is the results from two studies on the prominent errors in college writing (Connors & Lunsford, 1988; Lunsford & Lunsford, 2008). Connors and Lunsford's 1988 study interrogated the most common patterns of error in college student writing and the corresponding frequency of error markings by teachers. This study resulted in a published list of over 50 formal and mechanical errors college students made in their writing; misspellings outnumbered the other errors by 300% and were removed from the formal study for independent analysis (Connors & Lunsford, 1992). Connors and Lunsford ranked the remaining errors by frequency, selecting the top 20 for further inquiry. The list began with “Missing comma after an introductory element” (occurring 11.5% of the time) and ended with “Its/it's error” (occurring 1.0% of the time). Twenty years later, Lunsford and Lunsford (2008) followed up on this study. The updated results reflected how a broader use of academic genres and the expansion of technology had changed the error patterns in college writing. Due to an increase in argument papers, the new list included errors related to using sources, quotations, and attributions. Technology also played a role in the rank of specific errors. “Misspellings” now ranked fifth, and “Wrong word” emerged as the top error. Lunsford and Lunsford attributed these shifts to electronic spellcheckers; the technology helps students remedy misspellings, but a reliance on automated spelling suggestions likely correlates with the increase in wrong words.

The Connors and Lunsford (1988) and Lunsford and Lunsford (2008) taxonomies offer the most thorough research on usage error, but these results reflect the error patterns of developing writers rather than the patterns of expert, professional communicators. Additionally, the results do not necessarily reflect the context of the editing test situation, where applicants are under time constraints, or the contexts of professional writing, where error types related to style and content could play a more prominent role than in academic writing. Further, the rank and pattern shifts found in these studies suggest the importance of studying error in relation to context. Haswell (1988) stressed that errors are best understood within their context: “When context is neglected, as in much research into the relation of error and change in writing, conclusions are often difficult to interpret, sometimes even outright misleading” (p. 482). For example, simply reporting the raw frequencies of errors committed in a writing sample motivates little useful discussion if these mistakes are not contextualized. My second study scaffolds from Connors and Lunsford to examine the patterns and perceptions of error in technical communication, a field thus far neglected in error research. I identified the types, frequencies, and dispersions of errors within the sample from the first study. Editing tests are unique because they purposely contain errors; therefore, this study operated under the hypothesis that the more frequent and dispersed the error, the more important its identification is to hiring managers.

Methods

Two raters independently classified the errors in each test. Ninety-three percent of the hiring managers provided keys for their tests, which ensured these errors were classified from the company's perspective. Both raters were pursuing master's degrees in technical communication and had successfully completed graduate-level courses in editing and style. When possible, errors were classified using the types/patterns from Connors and Lunsford (1988) and Lunsford and Lunsford (2008); however, multiple new errors related to style, content, and design were found in this sample. Errors were also classified into one of six broad categories: grammar and mechanics, punctuation, spelling, style, content, and design.

A third rater was consulted in instances where a test had no key. Four of these tests were from medical fields, and the rater had over a decade of medical editing experience. Percent (pairwise) agreement between the raters was 81%, an acceptable level of agreement (Frey, Botan, & Kreps, 2000; Watt & van den Burg, 1995).
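For readers who want to see how this reliability measure works in practice, the following is a minimal sketch of percent (pairwise) agreement between two raters. The error labels and data are hypothetical; only the calculation itself reflects the measure reported above.

```python
# Minimal sketch: percent (pairwise) agreement between two raters.
# The error labels below are hypothetical; the study reports 81% agreement
# across the full sample of classified errors.

def percent_agreement(rater_a, rater_b):
    """Proportion of items both raters assigned the same error type."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = ["wrong word", "misspelling", "capitalization", "wrong word"]
rater_b = ["wrong word", "misspelling", "comma splice", "wrong word"]

print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")  # 75%
```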

Measures. I examined the sample through two measures. Results from the contingency table analysis revealed how evenly distributed the errors were across the six broad categories. Only one previous study broadly grouped errors to measure their distribution (Boettger, 2012); therefore, the null hypothesis assumed that if the errors were evenly distributed, each category would contain 594.67 errors. I determined this number by dividing the total number of errors (n = 3,568) by the total number of categories (n = 6).
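The sketch below illustrates the even-distribution null hypothesis described above. The paper labels this a contingency table analysis and does not specify the test statistic, so a one-way chi-square goodness-of-fit test is used here only as an illustrative stand-in; the per-category counts are placeholders consistent with the text's description (the actual values appear in Table 1).

```python
# Illustrative sketch of the even-distribution null hypothesis.
# Observed counts are placeholders consistent with the text (actual values
# are in Table 1); only the expected-frequency logic comes from the study.
from scipy.stats import chisquare

categories = ["grammar and mechanics", "punctuation", "spelling",
              "style", "content", "design"]
observed = [870, 700, 439, 800, 500, 259]    # hypothetical; sums to 3,568

total_errors = sum(observed)                 # 3,568 errors in the study
expected = total_errors / len(categories)    # 594.67 errors per category

stat, p = chisquare(observed, f_exp=[expected] * len(categories))
print(f"Expected per category: {expected:.2f}")
print(f"Chi-square = {stat:.2f}, p = {p:.4f}")
```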

I also report what I hereafter refer to as an error's weighted index. The weighted index factored the frequency and the dispersion of each error into a single numerical value. While a lone frequency list provides useful information on the frequency (or popularity) of errors, it cannot account for errors that cluster in a small portion of the sample (that is, weakly dispersed errors) compared to errors that appear consistently throughout the sample (that is, highly dispersed errors). The index weighted each error's frequency and dispersion 50/50 because I could not identify an existing model that suggested a different weighting. I provide more explanation of both measures in the results section.

Results

There were 3,568 errors and 71 error types in the sample. Each test contained an average of 66.46 errors (median = 69.5, sd = 37.48) and an average of 23.38 different error types (median = 26.0, sd = 9.90).

Contingency Table Analysis of Errors by Category. The contingency table analysis determined if the errors were evenly distributed across the six broad categories. Table 1 presents these categories by their observed frequencies. Grammatical and mechanical, style, and punctuation errors all appeared in the sample at a higher than expected frequency (that is, more than 594.67 errors), whereas content, spelling, and design errors appeared at a lower than expected frequency. None of the categories contained errors at the expected frequency.

[Table 1]

Weighted Index (Frequency and Dispersion) of Errors by Type. The weighted index factored how frequent and dispersed each error was within the sample. Table 2 lists the sample's top 16 errors, which were dispersed in at least 50% of the tests. These errors made up 57% of the total errors in the sample.

As described in the methods section, the index weighted each error's frequency and dispersion 50/50. This approach reveals subtle but important results that a lone frequency list could not. For example, “Misplaced/dangling modifier” was the 14th most frequent error, found 75 times and comprising 2% of the errors in the sample. This error's frequency index of 0.02 was determined by dividing 75 by 3,568, the total number of errors. However, this error was also the fourth most dispersed error, found in 37 tests. Its dispersion index of 0.67 was determined by dividing 37 by 55, the total number of tests in the sample. Averaging these two scores ((0.02 + 0.67)/2) combined the error's moderate frequency and strong dispersion, yielding a weighted index score of 0.35 and a rank as the fourth most predominant error.
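As a concrete companion to this worked example, here is a minimal sketch of the weighted index calculation using the “Misplaced/dangling modifier” figures reported above (75 occurrences across 37 of the 55 tests). The function name is hypothetical; the 50/50 weighting follows the description in the methods section.

```python
# Minimal sketch of the weighted index described above, using the
# "Misplaced/dangling modifier" figures reported in the text.

TOTAL_ERRORS = 3568   # total errors in the 55-test sample
TOTAL_TESTS = 55      # total editing tests

def weighted_index(error_count, tests_containing_error):
    """Average of an error's frequency index and its dispersion index."""
    frequency = error_count / TOTAL_ERRORS              # 75 / 3,568 ≈ 0.02
    dispersion = tests_containing_error / TOTAL_TESTS   # 37 / 55 ≈ 0.67
    return (frequency + dispersion) / 2                 # 50/50 weighting

print(round(weighted_index(75, 37), 2))  # 0.35, the fourth most predominant error
```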

[Table 2]

Typically, an error's frequency and its dispersion strongly correlated, as visualized in Figure 1. However, errors like “Lack of subject-verb agreement” and “Missing comma in a series” secured higher ranks because of their strong dispersions, an indication that these errors were commonly found in the tests but in comparatively lower frequencies. Figure 1 also illustrates the three errors that dominated this sample: “Wrong Word” (broadly categorized as a content error), “Misspelling” (spelling), and “Unnecessary or missing capitalization” (grammar and mechanics). Collectively, these errors accounted for almost a quarter of the sample's errors and were also strongly dispersed. The following section further analyzes these error types.

[Figure 1]

Wrong word. There were 190 wrong word errors, which accounted for 38% of the sample's content errors (Table 1). These errors included incorrectly used prepositions, words that sound alike, and words with similar shades of meaning. Editors might have difficulty identifying these errors because they are not detectable with electronic spellcheckers, as illustrated in examples [1-2].

[1] The subcontracting was distributed between three firms (Alred, Brusaw, & Oliu, 2012).

[2] I did not realize that half the accounting staff had a severe allegory to peanuts (modified from Lunsford, Matsuda, & Tardy, 2013).

Further analysis indicated incorrect preposition use substantially affected the weighted rank of “Wrong Word.” Raters identified 63 instances of incorrect prepositions, which elevated “Wrong Word” from the fifth most predominant error to the first.

Misspelling. Raters identified 439 errors related to spelling (Table 1). These errors were subdivided into five mutually exclusive categories: general, homonym, proper noun, compound, and British. Over 50% of the misspellings were classified as general, or misspellings that could be detected by an electronic spellchecker ([3], see Table 3).

[3] A former employee was charged with sexual harassment in an embarassing and costly lawsuit.

[Table 3]

British spellings comprised only 5% of the sample but, when grouped with the general category, misspellings that could be detected by a spellchecker accounted for 56.5% of the sample. This insight is interesting considering that hiring managers administered 49% of the tests off-site, where applicants could activate their electronic spellcheckers. The remainder of the “Misspelling” category (43.4%) included spelling errors that required additional knowledge or research to fix [4-5].

[4] During the meeting, she sited data from the latest research study (Gurack & Hocks, 2009).

[5] Since 1976, National Instrument has equipped engineers and scientists with tools that accelerate productivity, innovation, and discovery.

The proper noun errors in the tests reflected some of the hiring managers' observations from the first study about their tests not always assessing what applicants assumed was being assessed. For example, one of the editing tests included a style sheet that spelled four proper nouns in bold print with the instructions that “everything in boldface is correct.” This device assesses consistent editing and the ability to follow instructions. Additionally, over a dozen of the tests included a misspelling of that particular company's name (as recreated in [5]). These hiring managers included this error to assess attention to detail and measure the applicant's familiarity with the organization.

Unnecessary or missing capitalization. There were 226 errors related to capitalization, which accounted for 26% of the sample's grammatical and mechanical errors (Table 1). These errors overwhelmingly related to the capitalization guidelines outlined in style guides, including capitalizing titles of works, organisms and pathogens, viruses, tests, and sociocultural designations as well as de-capitalizing common words derived from proper nouns (for example, parkinsonism).

Fifty-five additional errors were identified in the sample. These errors were all dispersed in 49% or less of the sample and collectively made up 43.5% of the total errors. As shown in Table 4, many of these errors shared identical weighted ranks, a result of their lower frequency and weaker dispersion.

[Table 4]

Despite their lower ranks, the presence of these error types merits reporting. For example, the seven errors broadly categorized as design all ranked low, but their inclusion is significant because design errors were not identified in the previously discussed error taxonomies. This difference may indicate differences between academic and professional writing.

Some errors that ranked prominently in previous studies ranked lower in this sample. “Missing comma after an introductory element” was the 31st most frequent error and only moderately dispersed, which accounts for its lower placement in the weighted index. In taxonomies of college writing, this error ranked first or second (Connors & Lunsford, 1988; Lunsford & Lunsford, 2008). Interestingly, both studies also indicated teachers marked this error on students' papers less frequently than other errors like incorrect uses of its/it's or possessive apostrophes. It could be hypothesized, then, that though college students frequently made this error, teachers did not weigh it as heavily as other, perhaps more glaring, errors. These results indicate that teachers often mark errors in terms of their relationship to a complex context. Since the present study identified its errors primarily from company-provided answer keys, the lower rank of “Missing comma after an introductory element” might also reflect hiring managers' opinions of this error relative to wrong words, misspellings, and capitalization errors.

[Table 4, continued]
Finally, the errors that did not rank in the top 16 list are arguably more subjective and invite discussion regarding their inclusion in editing tests. For example, “Unnecessary passive construction” appeared on this list because the raters identified the pattern from the company-provided answer keys. Humanities-based fields generally discourage passive voice; however, previous language studies have found that scientists and engineers use the passive purposefully because the subjects of their sentences are frequently mechanisms instead of people (Boettger, 2012; Conrad, 1996; Ding, 2001; Wolfe, 2009). Similarly, ending a sentence with a preposition might indicate an awkward sentence, but it is not necessarily an error. The inclusion of these more subjective types is noteworthy because it suggests how individuals (in this case, technical communication hiring managers) define correctness and perceive error. The third study extends this idea by correlating professionals' perceptions of error with the errors in the editing tests.

Study 3: Professionals' Perceptions of Error Types

The majority of error research in technical communication emphasizes professionals' opinions of error rather than the errors found in authentic professional writing (Beason, 2001; Gilsdorf & Leonard, 2001; Hairston, 1987; Leonard & Gilsdorf, 1990). In one of these earlier studies, Hairston (1987) recorded practitioners' botheration levels in response to specific usage errors. The practitioners represented 63 occupations and were considerably bothered by errors classified as status markers: for example, “When Mitchell moved, he brung his secretary with him” (p. 796). The next tier of bothersome errors comprised mechanical mistakes: sentence fragments, fused sentences, and faulty parallelism. Two follow-up studies yielded similar results (Gilsdorf & Leonard, 2001; Leonard & Gilsdorf, 1990): fused sentences, faulty parallel structure, sentence fragments, and danglers ranked as some of the most distracting errors. While these studies generated important findings, the generalizability of the data is somewhat limited by methodological design. All of these studies solicited data via questionnaires that included errors the researchers believed would be most bothersome to practitioners. The results may not accurately reflect the errors practitioners prioritize over others. Similarly, data collected from questionnaires depend on self-reporting, which can motivate participants to respond in ways they think are appropriate to the research (Frey et al., 2000).

For the third study, I also surveyed how technical communication professionals perceived usage error. However, the errors measured here were identified in the editing tests. I then correlated these results with the weighted ranks of the errors in the second study.

Methods

For the survey, 176 participants registered their opinion on 24 different errors on a 7-point Likert scale.

The 24 errors were identified from the results of the second study. Each of the six broad error categories (Table 1) was represented by four questions that included the two highest ranked errors, the median error, and the lowest ranked error. This approach provided a more representative sample of each category and extended previous studies in which researchers selected errors based on how they believed respondents would react.

All 24 questions were in sentence format and included one error. The questions included error examples I took directly or modified from the earlier-cited surveys, technical editing textbooks, or technical writing handbooks. The examples in these texts presumably offered the best illustration of the particular errors.

I designed the survey using Qualtrics software, an approach that addressed some validity threats from similar studies. I selected two different sentences for each error, one of which was randomly shown to participants. For example, participants recorded their reaction to sentence fragments after seeing either [6] or [7]:

[6] The staff wants additional benefits. For example, the use of company cars (Alred et al., 2012).

[7] Two years ago, a similar study was done by members of the accounting department. However, this study was negated. Because it was based on outdated estimates of the costs involved (Beason, 2001).

This approach helped ensure participants were responding to the error in the sentence and not just a poor or confusing sentence. Additionally, all 24 questions were randomized, so no two participants responded to the same questions in the same order.
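The sketch below illustrates the two randomization steps described above: randomly assigning one of two sentence variants per error type and shuffling question order for each participant. The data structure is hypothetical and the item text is abridged or invented; Qualtrics performed this randomization in the actual study.

```python
# Minimal sketch of the two randomization steps described above:
# (1) each participant sees one of two sentence variants per error type,
# (2) question order is shuffled per participant.
# Items are hypothetical stand-ins; Qualtrics handled this in the actual survey.
import random

items = {
    "sentence fragment": [
        "The staff wants additional benefits. For example, the use of company cars.",
        "However, this study was negated. Because it was based on outdated estimates.",
    ],
    "unnecessary or missing apostrophe": [
        "Its important to submit the report on time.",
        "The manager reviewed the teams' report, but it's conclusion was unclear.",
    ],
}

def build_survey(items, seed=None):
    """Return one randomized questionnaire: one variant per error, shuffled order."""
    rng = random.Random(seed)
    questions = [(error, rng.choice(variants)) for error, variants in items.items()]
    rng.shuffle(questions)  # no two participants see the same fixed order
    return questions

for error, sentence in build_survey(items):
    print(f"[{error}] {sentence}")
```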

The requests for participation were emailed to the same Listservs I used to collect the editing tests, including the STC's Technical Editing SIG-L and Copyediting-L. I targeted participants who played a role in hiring technical communicators. Participants were then instructed to respond to each error as if they were evaluating the writing of a prospective applicant for a technical communication position. One hundred seventy-six professionals responded to the survey. The response rate was 10.5%, which was determined by averaging the number of subscribers across the Listservs, though it is likely some participants belonged to both. For at least the last 20 years, the public has become saturated with surveys, yielding low response rates and making it difficult to generalize those results (MacNealy, 1992). Surveys conducted in our own field, such as the STC's Annual Salary Survey, have also typically had low response rates (Eaton, Brewer, Portewig, & Davidson, 2008b). Results from this survey cannot generalize how all technical communication professionals perceive error, but they provide a perspective on this topic from 176 respondents, a greater body of knowledge than was previously available.

Measures. I explored the results via descriptive statistics. Additionally, I correlated the survey results with the frequency and dispersion results from the editing test sample using a Spearman's rho test. Spearman's rho is a distribution-free, non-parametric correlation test; it is an alternative to Pearson's correlation and can be applied to two continuous variables (Baayen, 2008).
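As an illustration only, the following is a minimal sketch of a Spearman's rho correlation on hypothetical paired values; the actual study correlated the editing tests' error frequencies and dispersions with the survey's mean ratings.

```python
# Minimal sketch of a Spearman's rho correlation.
# The paired values below are hypothetical; the study correlated each error
# type's frequency, dispersion, and mean survey (botheration) rating.
from scipy.stats import spearmanr

error_frequency = [190, 439, 226, 75, 63, 18]      # hypothetical counts per error type
survey_rating   = [5.1, 4.2, 4.4, 4.9, 5.7, 3.9]   # hypothetical mean ratings (1-7 scale)

rho, p = spearmanr(error_frequency, survey_rating)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```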

Results

Participants were moderately bothered by content errors, including wrong words, vague or missing language, poorly integrated source material, and logic/sequence errors (M = 5.04, sd = 0.36, see Table 5). In contrast, results from the binomial test indicated content errors appeared in the editing tests at a lower than expected frequency (Table 1). Survey participants were collectively neutral toward or had no opinion of the error types from the five other broad categories. Issues with grammar and mechanics ranked as the most bothersome of these remaining categories (M = 4.74, sd = 0.60), and style errors ranked as the least bothersome (M = 4.30, sd = 0.69).

[Table 5]
When analyzed individually, three tiers of bothersome errors emerged: errors participants were moderately bothered by, neutral toward, or somewhat bothered by (see Table 6). The moderately bothersome tier consisted of nine errors that represented all the broad categories. Participants were typically most bothered by “Unnecessary or missing apostrophe (including its/it's)” (M = 5.77, sd = 1.42).

Survey questions related to misspellings were organized into the same subcategories described in the second study (Table 3). Homonym errors were the only spelling type to rank in the moderately bothersome tier (M = 5.73, sd = 1.48). In contrast, homonym errors made up only 16% of the spelling errors in the editing tests, though “Misspelling” held an overall predominant rank on the weighted index.

[Table 6]
Finally, three errors broadly classified as content appeared in the moderately bothersome tier, including “Logic/sequence error” [8], “Poorly integrated source material” [9], and “Wrong word” [1-2]. “Wrong Word” was one of the most predominant errors in the editing tests; however, the other two were lower frequency, more weakly dispersed errors.

[8] A 1970s study of what makes food appetizing “Once it became apparent that the steak was actually blue and the fries were green,” some people became ill (Lunsford et al., 2013).

[9] We examined three storage methods most frequently used in our industry: (1) Trax, (2) Stacker, (3) Wide-Aisle Racking, and (4) Floor Storage (modified from Beason, 2001).

The neutral/no opinion tier included 10 errors. Errors from all six broad categories were again represented, including three errors related to grammar and mechanics. One of these errors was “Unnecessary or missing capitalization,” which was a predominant error in the editing test sample. Both survey sentences on capitalization [10-11] received comparable means (M = 4.43 and 4.39, respectively), suggesting professionals did not respond to the two examples in drastically different ways.

[10] Visitors Must Register All Cameras with the Attendant at the Entry Station (Rude & Eaton, 2011).

[11] The principle agency involved is the department of agriculture; however, the budget bureau is also peripherally concerned (modified from Rew, 1999).

The somewhat bothersome tier included five errors and represented the broad error categories of spelling, style, and punctuation. In contrast, both “Redundant, expendable, or incomparable language” and “Hyphen, en- or em-dash errors” were predominant errors in the editing tests and also the highest ranked errors in their respective broad category.

Finally, I performed a series of Spearman's rho correlations on the frequency and dispersion data from the editing test sample and the survey data. As illustrated by Figure 2, overall error frequencies and dispersions significantly correlated (S = 47.18, p = 0.00). However, there was no correlation between overall error frequencies and perceptions (S = 1808.70, p = 0.45) or between error dispersions and perceptions (S = 1867.43, p = 0.35). In other words, the weighted ranks of the editing test errors did not correlate with the professionals' perceptions of those same errors. This dichotomy warrants further discussion among technical communication hiring managers, practitioners, and academics.

[Figure 2]

Discussion

I conducted three studies using 55 editing tests and the results from a related survey on error perception. The first study investigated the general characteristics of the editing test, which I classified as a privatized genre. The sample shared some common conventions, but other conventions were tailored to each organization's assessment needs. The second study investigated the specific error types within the sample. A weighted index of frequency and dispersion identified 16 predominant errors that accounted for 57% of the sample's total errors. I used these results to construct a survey for the third study that measured technical communication professionals' perceptions of usage error. Results indicated that the frequency and the dispersion of the errors in the editing tests did not correlate with professionals' perceptions. Collectively, these results offer valuable information on how editing tests are constructed, which types of common errors are included in these tests, and how our perceptions of error might influence how we prepare for or construct these tests.

The most meaningful finding from the first study was the variation among the tests. Hiring managers typically created tests in narrative form (as opposed to multiple choice or sentence format), required applicants to demonstrate style guide knowledge, and evaluated the test with a holistic rather than a point-based approach. However, there were strong variations (indicated by the standard deviations) in other characteristics, including the time allotted to complete the test and the length of the test. These variations could reflect the privatization of editing tests, which leaves hiring managers without access to publicly available models. Variations could also relate to the sample size. Finally, the variations could reflect highly contextualized environments. For example, qualitative data indicated that hiring managers often imposed time limits, or offered no limits, to evaluate how applicants worked under pressure, whether they appropriately queried the author, and whether they displayed tendencies toward over-editing. In other words, hiring managers did not always design their tests to assess what applicants thought was being assessed but instead the qualities these managers sought in a technical communicator and a new colleague. This helps justify the preference for holistic test assessment; however, this approach invites subjectivity and suggests why technical communicators could dismiss the value or necessity of these tests.

The weighted index of the sample's 71 usage errors provides a valuable resource to both practitioners and academics. The results from this second study reaffirmed the importance other researchers have placed on considering error in context (Haswell, 1988; Lunsford & Lunsford, 2008). For example, “Unnecessary or missing capitalization” was a predominant error in the editing tests because the tests assessed applicants' knowledge of a style guide. However, the significant presence of this error in other error studies relates to other factors. Lunsford and Lunsford (2008) attributed the high frequency of capitalization errors in college writing to technology and the development of these student writers; MS Word automatically capitalizes words that follow a period (for example, a period used in an abbreviation), and students often capitalized terms to suggest significance (for example, “High School Diploma”). These observations indicate that though developing and expert writers make similar errors, they make them for different reasons. This knowledge is valuable for self-correction but also for academics who teach error for professional writing purposes.

It is also valuable to explore how the predominant error types in the editing tests reflect the error types that appeared in studies of college-level writing. Five errors were consistent across this study and both the Lunsford and Lunsford (2008) and the Connors and Lunsford (1988) studies: misspelling, wrong word, unnecessary shift in verb tense, missing comma with a nonrestrictive element, and unnecessary or missing apostrophe (see Table 7). Further, almost all of the predominant editing test errors related to grammar and mechanics, punctuation, and spelling were also top errors in at least one of the previous studies. These overlapping errors indicate some universally shared beliefs across writing registers. What is unique about the present study's error list is the increased number of errors related to style and content. These errors included “Redundant, expendable, or incomparable language,” “Incorrect number format,” “Faulty parallel structure,” and “Inconsistent terminology.” In total, the weighted index included 17 style-related and 10 content-related errors. The weighted index also included seven low-ranking design errors, which were not identified in the previous studies. Collectively, these previously unidentified error types emphasize the need for studies specific to technical communication.

[Table 7]
The most significant result from the third study was that technical communicators' perceptions of error did not correlate with the frequency or dispersion of those same errors in the editing tests (Figure 2). In fact, 75% of the errors that professionals placed in the moderately bothersome tier (the highest tier in this study) were errors that were infrequent and weakly dispersed in the editing tests (Table 4). These disparities could indicate the level of interpretation associated with the general concept of error as well as our individual biases toward and knowledge of specific error types. For example, “Hyphen, en- or em-dash errors” ranked sixth in the weighted index but bothered the professional communicators the least. It is possible that this error type shares some of the same features as “Missing comma after an introductory element” in the college writing taxonomies; it appears in writing frequently but is marked infrequently because it usually does not impair comprehension. Another explanation could be that the professionals did not have a complete understanding of the differences among hyphens, em-dashes, and en-dashes, so this error type carried less significance than others.

Limitations

Careful attention was given to the methodological design and the error classifications, but this study is not without limitations. The examination of 55 editing tests provides the best available information on the genre, but it is not a representative sample. The privatization of this genre invites more subjectivity in its design and contents because of the lack of publicly available examples. Consequently, the error results reported in the second study cannot be generalized beyond the parameters of the sample. Results from the survey also cannot be generalized. Participants were asked to register an opinion of a single usage error within an isolated sentence rather than examining that error within the context of several paragraphs and in relation to other errors. Hiring managers placed errors in their tests because of situation and context, whereas survey participants responded to error in a structured environment where they knew an error was likely present. These differences could influence validity.

Future Research

I designed these three studies to move beyond assumptions about editing tests and error and to motivate new research in these areas. This study offered perspective on how and why hiring managers construct these tests, but future studies could also evaluate whether alternative assessments better reflect an applicant's skill set. More research should investigate the role of editing tests as authentic assessment tools.

Results from this study also showed a relationship between error and context, and this relationship merits additional research. For this study, I identified editing tests from companies that represented a variety of industries. Additional studies on specific industries would likely produce valuable information. For example, in an analysis of medical editing tests, “Wrong Word” ranked lower (13th) than it did in this study (Boettger, 2012). Additionally, six of the 20 predominant errors in the medical study related to style, including “Inconsistent terminology,” “Informal or discriminatory language,” and “Unnecessary passive construction.”

Additional follow-up is needed on how technical communication professionals perceive error, particularly compared to how other audiences, such as academics, perceive these errors. Results from these studies would indicate areas of overlap and divergence and suggest ways to train the next generation of technical communicators.

Finally, I designed these three studies to contribute to the small but growing body of empirical technical editing research. Technical editing is arguably the most underdeveloped subfield of technical communication because unverified common knowledge often dictates its best practices. For example, Eaton et al. (2008a, 2008b) identified multiple concepts that editing literature had presented as common knowledge yet had not been examined on a larger scale, including the adversarial relationship between the editor and author and the best way to phrase editorial comments. To date, the best sources of information on editing tests and error patterns specific to our field fall into the category of unverified common knowledge. Therefore, the findings from these current studies should motivate future research and also redirect practitioners and academics from focusing on personal preferences about errors to addressing why certain audiences in our field perceive and prioritize error differently.

Acknowledgments

Thank you to Stefanie Beaubien, Carrie Klein, and Wanda J. Reese for assistance with classifying the sample. I also thank Angela Eaton, Stefanie Wulff, Lindsay A. Moore, Katharine O'Moore-Klopf, and the TC reviewers for their feedback on drafts.

References

Alred, G. J., Brusaw, C. T., & Oliu, W. E. (2012). Handbook of technical writing (10th ed.). Boston, MA: Bedford/St. Martin's.

Baayen, R. H. (2008). Analyzing linguistic data: A practical introduction to statistics using R. Cambridge, UK: Cambridge University Press.

Beason, L. (2001). Ethos and error: How business people react to errors. College Composition and Communication, 53(1), 33-64.

Boettger, R. K. (2012). Types of errors used in medical editing tests. American Medical Writers Association Journal, 27(3), 99-105.

Bragdon, A. D. (1995). Can you pass these tests? The toughest tests you'll never have to take, but always wanted to try. New York, NY: Barnes & Noble Books.

Connors, R. J., & Lunsford, A. A. (1988). Frequency of formal errors in current college writing, or Ma and Pa Kettle do research. College Composition and Communication, 39(4), 395-409.

Connors, R. J., & Lunsford, A. A. (1992). Exorcising demonolatry: Spelling patterns and pedagogies in college writing. Written Communication, 9(3), 404-428.

Conrad, S. M. (1996). Investigating academic texts with corpus-based techniques: An example from biology. Linguistics and Education, 8(3), 299-326.

Ding, D. (2001). How engineering writing embodies objects: A study of four engineering documents. Technical Communication, 48, 297-308.

Dow Jones News Fund. (2014). News media invited to hire summer interns. Retrieved from https://www.newsfund.org/

Eaton, A., Brewer, P. E., Portewig, T. C., & Davidson, C. R. (2008a). Comparing cultural perceptions of editing from the author's point of view. Technical Communication, 55(2), 140-166.

Eaton, A., Brewer, P. E., Portewig, T. C., & Davidson, C. R. (2008b). Examining editing in the workplace from the author's point of view: Results of an online survey. Technical Communication, 55(2), 111-139.

Frey, L. R., Botan, C. H., & Kreps, G. L. (2000). Investigating communication: An introduction to research methods (2nd ed.). Boston, MA: Allyn and Bacon.

Gilsdorf, J. W., & Leonard, D. J. (2001). Big stuff, little stuff: A decennial measurement of executives' and academics' reactions to questionable usage errors. Journal of Business Communication, 38(4), 439-475.

Gurack, L. J., & Hocks, M. E. (2009). The technical communication handbook. New York, NY: Pearson Longman.

Hairston, M. (1987). Not all errors are created equal: Nonacademic readers in the professions respond to lapses in usage error. College English, 43(8), 794-806.

Hart, G. J. (2003). Editing tests for writers. Intercom, 50(4), 12-15.

Haswell, R. A. (1988). Error and change in college student writing. Written Communication, 5(4), 479-499.

Johnson-Sheehan, R. (2008). Writing proposals. New York, NY: Pearson/Longman.

Leonard, D. J., & Gilsdorf, J. W. (1990). Language in change: Academics' and executives' perceptions of usage errors. Journal of Business Communication, 27(2), 137-158.

Lunsford, A. A., & Lunsford, K. J. (2008). “Mistakes are a fact of life”: A national comparative study. College Composition and Communication, 59(4), 781-806.

Lunsford, A. A., Matsuda, P. K., & Tardy, C. M. (2013). The everyday writer. Boston, MA: Bedford/St. Martin's.

MacNealy, M. S. (1992). Research in technical communication: A view of the past and a challenge for the future. Technical Communication, 39(4), 533-551.

Rew, L. J. (1999). Editing for writers. Upper Saddle River, NJ: Prentice Hall.

Rude, C. D. (1995). The report for decision making: Genre and inquiry. Journal of Business and Technical Communication, 9(2), 170-205.

Rude, C. D., & Eaton, A. (2011). Technical editing (5th ed.). Boston, MA: Longman.

Society for Technical Communication. (2013). STC's 2012-2013 Salary Survey. Retrieved from https://www.stc.org/salary-database

Watt, J., & van den Burg, S. (1995). Research methods for communication science. Boston, MA: Allyn and Bacon.

Wenger, A. (2010, September 13). Top ten errors that tech writers make. Retrieved from http://mailman.stc.org/mailman/listinfo/stctesig-l

Wolfe, J. (2009). How technical communication textbooks fail engineering students. Technical Communication Quarterly, 18(4), 351-375.

About The Author

Ryan K. Boettger is an assistant professor in the department of technical communication at the University of North Texas, Denton. His research areas include technical editing, grant writing, STEM education, and English for Specific Purposes. Professionally, he has worked as a journalist, technical editor, and grant manager. Contact: Ryan.Boettger@unt.edu

Manuscript received 28 July 2014; revised 18 October 2014; accepted 21 October 2014.