Features

Measuring Productivity

By Pam Swanwick | Member
& Juliet Wells Leckenby | Member

Every manager struggles to balance writer workload and project capacity. A simple spreadsheet-based system can help you objectively evaluate assigned tasks, task time and complexity, special projects, and even writer experience levels to more accurately assess individual workload and capacity. The result is a simple but useful representational graph. In addition to measuring current team capacity and productivity, this method also provides objective metrics to better estimate future project capacity and to support performance evaluations for individual writers.

Productivity Is Relative

Metrics are a necessary part of a manager’s job. We need to be able to identify high- and low-performing writers, realistically balance workloads, prove our productivity to upper management, and justify requests for additional headcount. As a manager of a team of writers, what metrics can you use to realistically project your team’s capacity? How can you evaluate your team’s productivity rate? How can you assess the productivity of an individual writer compared to the rest of your team?

Research indicates that there are no industry standards for technical writer productivity rates. Some practices, such as page counts, have proven counterproductive in our experience. If a writer is evaluated by number of pages, page counts tend to increase to the detriment of quality; in many projects, reducing page count should be the goal. Page counts also do not account for the varying complexity of different deliverables; realistically, a page of highly technical material takes longer to produce than a page of user help.

Measuring time spent on projects is also not a good practice. Writers might put in long hours, but how do you measure how productive they are? How do you identify a writer who is handling twice the work in half the time?

In reality, all performance evaluations of technical writing are subjective. However, if your team is working on related projects with similar outputs, it is possible to develop standard metrics to evaluate writer productivity relative to a project’s standard deliverables and to other team members.

For this method to succeed, managers must evaluate as carefully as possible the differing values used in this measurement system and customize the inputs and calculations as appropriate for specific groups. This method does not evaluate the quality or usefulness of documentation.

Relative Variables within Our Team

Some teams create such diverse deliverables that no objective measurement is possible. However, our team of 15 writers produces standardized and consistent deliverables that can be reasonably compared.

Our team uses standard templates so that we can compare like to like. Our team’s deliverables are limited and consistent across products (online help in HTML format, technical references in PDF format, quick start guides in Word format, and release notes in PDF format).

We trust the team to provide accurate assessments of deliverable size, complexity, and percentage of new or updated content. In some cases, we independently verify their numbers.

Release | Application | Sub-application | Deliverable | Document title | Total topics or pages | Wild guess? | Complexity of doc (low=1, medium=2, high=3) | Percent new/changed content | Writer
STL 8.7 | STL | 50 | PDF | STL_rpt guide 87.pdf | 212 | N | 2 | 25% | Alice
STL 8.7 | STL | 50 | PDF | STL_user guide_87.pdf | 725 | N | 1 | 25% | Alice
STL 8.7 | STL | 50 | WebHelp | BLYhelp.htm | 540 | Y | 2 | 25% | Alice
STL 8.7 | STL | 50 | PDF | STL BlY upgrade tool.pdf | 8 | N | 1 | 50% | Alice

Figure 1. Key data points in the tracking spreadsheet

Relative Productivity Can Be Measured

Using the methods below, we can reasonably assess productivity in three areas:

  • Current writer workload relative to the team
  • Past performance of a writer
  • Future team capacity

In this article, we discuss evaluating writers’ current workloads relative to the team. However, you can adjust the spreadsheet formulas to measure past individual performance or future team capacity.

To measure productivity, we:

  • Gather data
  • Calculate work units
  • Normalize the data
  • Account for special projects
  • Normalize the data again
  • Account for job grade

The basic formula we use, applied in the order of the steps above, is this:

[(# topics or pages) × (complexity of deliverable) × (% new or changed content) ÷ (productivity factor) + (% time spent on special projects)] × (job grade multiplier)

Let’s break it down.

Gathering Data

The team maintains a tracking spreadsheet for several reasons. The main reason is to track the progress of deliverables against established milestones. Among the inputs to the tracking spreadsheet are the following data points, entered by the writers and verified by the manager, if necessary:

  • Number of topics (for a help project) or pages (for a Word document or PDF). Early in the document lifecycle, this number is an estimate.
  • Complexity of the deliverable. Our team assigns a numeric value from 1 to 3, although you might develop more nuanced values. For example, we might assign release notes a value of 1 and a technical reference manual a value of 3.
  • Percentage of new or substantially revised content. For example, we assign a value of 100% to a document that must be written from scratch; we might estimate a value of 10% for minimal updates to an existing document.
  • Special projects. The writers record the percentage of time they spend on special projects. Special projects are an optional measure. Most of our writers volunteer for special projects in addition to their assigned deliverables (for example, updating standards and style guides).

Figure 1 shows an example of the spreadsheet in which the writers enter the data for their projects.

The final data point is equally important, but not entered by the writers in the spreadsheet:

  • Job grade (for example, entry level, mid-level, senior). Expectations are (and should be) different for each of these levels.

Calculating Work Units

For each deliverable, we multiply these inputs (see Table 1):

(# of topics or pages) × (complexity of deliverable) × (% of new or changed content)

Writer | Number of topics/pages | Document complexity | % change | Work units
Alice | 128 | × 3 | × .10 | = 38.4
Don | 62 | × 1 | × .30 | = 18.6
Kylie | 200 | × 2 | × 1.00 | = 400
Kruz | 21 | × 2 | × .40 | = 16.8
Misty | 220 | × 2 | × .60 | = 264
Lola | 79 | × 1 | × .20 | = 15.8

Table 1. Calculating work units for each deliverable
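The per-deliverable calculation is easy to automate outside the spreadsheet as well. Here is a minimal Python sketch (the function and parameter names are ours, not from the authors' spreadsheet) that reproduces the Table 1 rows:

```python
def work_units(topics_or_pages, complexity, pct_changed):
    """Work units for one deliverable: size x complexity x fraction of new/changed content."""
    return round(topics_or_pages * complexity * pct_changed, 1)

# Reproducing the first two rows of Table 1:
print(work_units(128, 3, 0.10))  # Alice -> 38.4
print(work_units(62, 1, 0.30))   # Don -> 18.6
```

Summing `work_units` over all of a writer's deliverables gives the totals shown in Table 2.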

We call the resulting number a work unit. We total each writer’s work units so that all individual deliverables are included. Now each writer has a number that reflects his or her total workload from all deliverables (see Table 2).

Writer | Total work units
Alice | 3181
Don | 1477
Kylie | 1476
Kruz | 1395
Misty | 2231
Lola | 2520

Table 2. Sum of work units for each writer

Normalizing the Data

The next step is to calibrate the team’s average productivity in terms of total work units. You can derive this number using several methods, such as adding the team’s total work units and dividing by the number of writers; however, we prefer a more subjective approach that takes into consideration the productivity level we want our writers to achieve as a team. For example, if we have 12 writers, we identify three or four writers who consistently meet the average level of productivity we expect from the team, and then average their work units.

Yes, this is subjective, but in this way we can adjust for current working conditions, such as an atypical sprint to meet a tight deadline or a lull in company activity.

Determining the Productivity Factor

We take the total work units for those three or four writers and determine what number we need to divide by to make their numbers close to 100; in other words, the expected productivity is 100%. If Writer X has a total workload number of 1,400, dividing by 14 gets us to 100 (1400 / 14 = 100). Thus 14 becomes the productivity factor by which we divide all writers’ total work units.

Applying the Productivity Factor

The next step is to divide each writer’s total work units by the productivity factor you have established:

(writer’s total work units) / (productivity factor)

The resulting number is each writer’s current initial workload (see Table 3). A competent, mid-level writer’s workload number should be around 100%. If it is not, you should reassess your calibration numbers.
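The calibration and division steps can be sketched in a few lines of Python (function names are ours; the calibration list holds the total work units of the three or four writers you selected as representative):

```python
def productivity_factor(calibration_totals):
    """Divisor that maps the calibration writers' average work units to roughly 100."""
    return sum(calibration_totals) / len(calibration_totals) / 100

def initial_workload(total_work_units, factor):
    """A writer's normalized workload as a percentage of expected productivity."""
    return round(total_work_units / factor)

factor = productivity_factor([1400])   # the article's Writer X example -> 14.0
print(initial_workload(3181, factor))  # Alice -> 227
```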

Writer | Sum of work units | Productivity factor | Initial workload
Alice | 3181 | / 14 | = 227
Don | 1477 | / 14 | = 106
Kylie | 1476 | / 14 | = 105
Kruz | 1395 | / 14 | = 100
Misty | 2231 | / 14 | = 159
Lola | 2520 | / 14 | = 180

Table 3. Each writer's normalized initial workload

Accounting for Special Projects

Next, we add the special projects percentage to the writer’s initial workload percentage (see Table 4):

(initial workload %) + (special projects %)

Writer | Initial workload | Special project % | Adjusted workload
Alice | 227 | + 0 | = 227
Don | 106 | + 30 | = 136
Kylie | 105 | + 10 | = 115
Kruz | 100 | + 40 | = 140
Misty | 159 | + 0 | = 159
Lola | 180 | + 0 | = 180

Table 4. Workloads adjusted for special projects

Normalizing the Data Again

At this point, we usually normalize the numbers again to bring the average back to near 100. In this case, we multiply all numbers by .8 (see Table 5).
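These two steps (adding special-project time, then rescaling everyone by the same factor) can be sketched as follows; the function names are ours, and the .8 factor is simply the value our team settled on for this cycle:

```python
def adjusted_workload(initial_pct, special_pct):
    """Add the percentage of time spent on special projects to the normalized workload."""
    return initial_pct + special_pct

def renormalize(pct, factor=0.8):
    """Rescale a workload percentage to bring the team average back near 100."""
    return round(pct * factor)

print(renormalize(adjusted_workload(100, 40)))  # Kruz -> 112
```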

Writer | Adjusted workload | Normalization factor | Total workload
Alice | 227 | × .8 | = 182
Don | 136 | × .8 | = 108
Kylie | 115 | × .8 | = 92
Kruz | 140 | × .8 | = 112
Misty | 159 | × .8 | = 127
Lola | 180 | × .8 | = 144

Table 5. Workloads normalized again

Accounting for Job Grade

Job grade is the final metric we factor in. We assign a multiplier value to each job grade to quantify the assumption that senior writers are expected to be more productive and maintain a heavier workload than junior writers. For a junior writer, we set the multiplier at 1.0; the mid-level writer multiplier is 0.9, and the senior writer multiplier is 0.8. The final calculation is (see Table 6):

(total workload) × (job grade multiplier)

Writer | Total workload | Job grade | Job grade multiplier | Adjusted workload
Alice | 182 | Mid-level | × .9 | = 164
Don | 108 | Junior | × 1 | = 108
Kylie | 92 | Senior | × .8 | = 74
Kruz | 112 | Junior | × 1 | = 112
Misty | 127 | Senior | × .8 | = 102
Lola | 144 | Mid-level | × .9 | = 130

Table 6. Final workload, adjusted for job grade
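The grade adjustment is a single lookup and multiplication. This sketch (names are ours) uses the multipliers stated above and reproduces two of the Table 6 rows:

```python
# Multipliers from the article: junior 1.0, mid-level 0.9, senior 0.8.
GRADE_MULTIPLIER = {"junior": 1.0, "mid-level": 0.9, "senior": 0.8}

def final_workload(total_pct, grade):
    """Adjust a workload percentage for job grade: seniors are expected to carry more."""
    return round(total_pct * GRADE_MULTIPLIER[grade])

print(final_workload(182, "mid-level"))  # Alice -> 164
print(final_workload(92, "senior"))      # Kylie -> 74
```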

Graphing the Results

We plot the resulting value for each writer on a bar chart (see Table 7). We find an acceptable range to be within 90–110%.

Table 7. Graph of productivity

Balancing Workload

What do you do about writers who are significantly above or below 100%? You can adjust three things to change the percentage:

  • Shift deliverables from an overloaded writer to an underloaded one.
  • Increase or decrease a writer’s participation in special projects.
  • Promote an overloaded junior or mid-level writer.

The spreadsheet is very useful as a simulation tool in this situation. See what happens if you move a project to a different writer, how much the number changes if you decrease someone’s participation in a special project, or how much the number changes if a junior writer is promoted.
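A what-if simulation of the first adjustment (shifting a deliverable) takes only a few lines. In this sketch, the function name, writer totals, and deliverable size are all illustrative, and the productivity factor of 14 comes from the earlier example:

```python
def simulate_move(totals, deliverable_units, frm, to, factor=14):
    """Return new initial-workload percentages after moving a deliverable's
    work units from one writer to another. Does not modify the input dict."""
    new_totals = dict(totals)
    new_totals[frm] -= deliverable_units
    new_totals[to] += deliverable_units
    return {w: round(t / factor) for w, t in new_totals.items()}

before = {"Alice": 3181, "Kruz": 1395}
after = simulate_move(before, 700, frm="Alice", to="Kruz")
print(after)  # {'Alice': 177, 'Kruz': 150}
```

The same pattern works for the other two adjustments: subtract a special-project percentage, or swap in a different job grade multiplier, and compare the before and after numbers.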

How Do Writers React?

We have experienced a range of reactions from writers when presented with their metrics. (We only show a writer his or her percentage as compared to 100% and to a team average, not as compared to other writers.) Some writers are doing well and are pleased or impressed to see their suspicions confirmed! (“I knew I was doing the work of one and a half people!”) Writers with low numbers have occasionally expressed appreciation for the tangible nature of the metrics. Writers who disagree with the numbers or are dissatisfied with the process nevertheless find the quantifiable nature of the productivity metrics difficult to argue with. In several cases, consistently poor performance numbers have prompted writers to leave the company on their own, sparing us the time, expense, and legal issues associated with terminating an underperforming employee.

Caveat Emptor

We have used variations of the above metrics and calculations for the past several years to accurately and consistently estimate (a) past performance of a writer relative to his or her peers, (b) current workload of each writer relative to the team, and (c) future team capacity. However, we cannot overemphasize that what we have described is ultimately a subjective process. It must be tailored by each documentation manager to suit the needs and conditions of the specific team. Perhaps you have other factors to consider: Should different work products be evaluated using different criteria? Should writer location be a factor? Are there other metrics to consider, such as compliance requirements?

We have proven that this system works for our team:

  • We never miss deadlines.
  • Our productivity rate is very high compared to similar documentation teams working on similar products with similar deliverables. This claim is based on the fact that our team has expanded to encompass three existing products and associated writers from within our company. By using this system to periodically evaluate the added writers, we have reasonably objective evidence of each writer’s performance levels and their workloads over time.
  • We have used the resulting metrics to successfully identify and cull low performers, set reasonable workload expectations for all writers, and identify and promote top performers.
  • We usually have adequate staffing because we can provide reasonably objective metrics to management when requesting staffing adjustments.

By objectively measuring what we can and consistently comparing what can be compared but not easily measured, we have made this system to measure productivity work for us. We hope you can make it work for you, too.

Please feel free to contact us if you would like an example of the measurement spreadsheet that we have calibrated for our team.

PAM SWANWICK (Pam.Swanwick@McKesson.com) has worked as a technical writer for more than 20 years, primarily in technology industries. For the past dozen or so years, she has focused on medical software at McKesson. She managed one of McKesson’s product documentation teams for five years.

JULIET WELLS LECKENBY (Juliet.Leckenby@McKesson.com) has worked as a technical writer for almost 20 years, the last five at McKesson. She served as a team lead under Pam and is now the manager of the documentation team.
