Features

Revising the Review-and-Revision Process: A Case Study of Improving the Speed and Accuracy of Technology Transfer

By Geoff Hart | Fellow

Most organizations develop a review-and-revision process designed to improve the quality of the information they produce. Such processes usually follow a careful analysis of the needs of the organization and its authors, and accommodate the organization’s operating context. After a period of trial and error, the process is revised to account for any flaws in that initial analysis, then becomes standard operating procedure. If the process was designed well and followed scrupulously, the quality of the published information remains high. However, an organization’s operating context changes over the years, sometimes gradually and sometimes abruptly, in response to shifts in the business environment, in technology, or in audiences. When the current context no longer matches the one for which the original review-and-revision procedures were designed, those procedures may no longer provide the necessary quality, or may do so inefficiently. In that case, it’s time to review and revise the procedures.

In this article, I’ll describe the results of one such review process, resulting in a large (more than 50%) reduction in the time to publish information without sacrificing the quality of the information (and possibly even improving its quality). Such a dramatic improvement is unlikely to be possible in every context, but the gains can still be impressive.

The Case Study Organization

Because the case study relates to work performed at a former employer, and I was asked not to name the employer in exchange for permission to publish this article, I have necessarily presented a somewhat generalized summary. The organization in question is a research and development team that performs contract operational research for a range of clients in industry and government. The primary output of this research is technology transfer: communicating the research results to clients in such a way that the clients can implement the research, thereby improving the efficiency and reducing the cost of their operations. Of the various means of communication used by the organization, the review exercise focused on the printed publications. At the time of the review, a consolidation exercise had recently simplified the former publication series (around 14 different report types) into four categories:

  • implementation reports, designed to help a wide audience understand how to implement a solution.
  • contract reports, designed to provide information to one client or a few specific clients.
  • internal reports, designed to document studies whose results did not justify wide distribution of the information.
  • newsletters, designed to provide ongoing and ad hoc information that was not yet sufficiently refined to justify the production of a full implementation report.

The division that employed me comprised roughly 30 researchers, plus a variable number of research assistants, including summer students and graduate students working on advanced degrees. A communication group composed of only four communicators (a manager, an editor, a graphic artist, and a desktop publisher) provided the services required to plan and produce these publications.

At the time of the review, the organization had recently celebrated its 25th anniversary, and the review-and-revision process had evolved slowly, with only minor changes, throughout this 25-year period. Each year, the organization redesigned its research program based on inputs from advisory committees composed of members from all client groups, who identified and prioritized key operational problems. Researchers then studied the problems, found potential solutions, and wrote reports to document the results of their studies. Reviewers both inside the organization and selected from among the clients then performed quality control, and authors revised reports—often repeatedly—in response to these reviews. Subsequent reviews by managers led to additional revisions before the report was approved for publication and distribution.

Researchers faced many constraints that impeded this process. In addition to the length of time spent performing the research, most of which was performed over weeks or months “in the field,” far from the organization’s headquarters, researchers had many other demands on their time. They were required to:

  • meet with clients, often with little notice, to discuss various issues that had not been prioritized by the overall research program
  • attend meetings of advisory committees and present previous research results, while also taking the opportunity to better understand client needs, and
  • perform ongoing liaison with clients in response to occasional telephone calls and emailed requests for information.

Because the organization’s success relied heavily on this ongoing and highly personalized and responsive series of interactions with clients, these activities were prioritized over the writing of reports. Because the researchers were engineers first and foremost, writing was not their primary job responsibility, and their writing skill thus varied widely.

The Original Review Process

The original review process had many stages:

  • Before reports entered the formal review process, they went through a pre-review in which the group supervisor, research director and editor worked with the author to produce something deemed acceptable for formal review. Each of these three individuals worked sequentially on the manuscripts.
  • The formal review used at least two reviewers: an internal reviewer with expertise in the area of research and one or more external reviewers representing the client or other experts.
  • In response to these formal review comments, the author revised the manuscript yet again; the editor, research director, and the division’s vice president then each reviewed the manuscript, resulting in further revision by the author.
  • The approved manuscript was then sent for desktop publishing and proofreading, followed by approval by the organization’s head office.

Distribution followed the final approval. Because the primary deliverable of this process was one of the four abovementioned types of printed publication, much of the process was performed on paper, even though suitable computer review tools (Microsoft Word) were available to all participants. In part, this was because researchers traveled so extensively during the year that paper represented a more efficient tool for them, at least until laptop computers became more broadly available. A quick count revealed that a typical manuscript passed through at least 12 phases: 3 for the author, 3 for the editor, and 6 for various levels of management. During each phase, a manuscript might be handled several times. For example, if the research director identified a significant problem, they would return the manuscript to the author, who would then return it to the research director to approve the changes, and if the changes were not satisfactory, the exchange would be repeated until the research director was satisfied.

Clearly, this approach involved far too much repeated handling of the manuscript. In addition, there was little or no formal control of deadlines for each person involved in the process. Although this might at first seem to represent ineffective management, it actually represents a recognition of the heavy demands that were being placed on each researcher’s time and of the fact that reports, although a key deliverable for the organization, were assigned a lower priority than the research itself and the ongoing process of client communication.

As a result, what had formerly seemed logical and effective had increasingly become inefficient: the results were still good (i.e., the quality of the reports remained high), but report production times had become excessive: six months was not unusual for reports that began their life at the start of the busiest time of year for research. Surprisingly, this timeframe was for reports numbering tens of pages (at most), not the hundreds of pages many technical communicators typically produce during a comparable period. Again, it’s important to note that producing reports represented only a small proportion of an engineer’s job description; for most technical communicators, it may be the primary or only responsibility.

Process Improvement

Although the process produced good results (our reports had an excellent reputation among our clients), it was clearly inefficient. To bring the process up to speed, we conducted a review of the current process. First, we examined and clearly described every step. (This was not an ISO 9000 exercise, since a consultant had previously suggested this form of certification would not be useful for our organization. However, ISO 9000 techniques can clearly inform any such review.)

For each step in the process, we collected data on the time elapsed for each phase of the review and the number of times a manuscript was handled during this phase. This data was obtained from tracking sheets attached to each manuscript during the review; each person who handled a manuscript signed the sheet and added a date. Based on this analysis, we produced metrics (completion times for each stage) that allowed us to reveal and quantify problems. Based on discussions of how long all stakeholders believed a given phase should ideally take (accounting for the constraints that each stakeholder faced), we then brainstormed solutions designed to reach those targets; an additional goal was to eliminate repeated re-handling of manuscripts, so we asked the hard question of whether a given phase was truly necessary. Finally, we implemented and tested the proposed solutions, retaining the ones that worked and discarding or modifying those that didn’t.
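As an illustration only, per-stage elapsed times and handling counts can be derived from such sign-off records once they are transcribed from the paper tracking sheets. The record format, stage names, and dates below are hypothetical, not the organization’s actual data:

```python
from datetime import date
from collections import defaultdict

# Hypothetical records transcribed from a manuscript's tracking sheet:
# (manuscript_id, stage, handler, date_signed).
sign_offs = [
    ("MS-101", "pre-review", "author", date(2003, 3, 1)),
    ("MS-101", "pre-review", "editor", date(2003, 3, 12)),
    ("MS-101", "formal review", "internal reviewer", date(2003, 4, 2)),
    ("MS-101", "formal review", "external reviewer", date(2003, 4, 20)),
    ("MS-101", "management review", "research director", date(2003, 5, 5)),
]

def stage_metrics(records):
    """Return, for each stage, the number of handlings and the days
    elapsed between the first and last sign-off in that stage."""
    by_stage = defaultdict(list)
    for _, stage, _, signed in records:
        by_stage[stage].append(signed)
    metrics = {}
    for stage, dates in by_stage.items():
        dates.sort()
        metrics[stage] = {
            "handlings": len(dates),
            "elapsed_days": (dates[-1] - dates[0]).days,
        }
    return metrics
```

Aggregating such figures across manuscripts is what turns anecdotes ("reviews feel slow") into metrics ("formal review averages N days and M handlings"), which is the basis for the target-setting discussion described above.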

There are many structured methods for performing such reviews. We chose the Kaizen approach (http://en.wikipedia.org/wiki/Kaizen), a Japanese form of continuous quality improvement with several important requirements:

  • A management champion must be appointed to provide the authority to implement changes.
  • An outsider must be found who is willing to point out and challenge assumptions. This person must come from outside the organization or from outside the group of stakeholders that is conducting the review so that they are not bound by the assumptions and blind spots of the stakeholders. In our case, we hired a Kaizen consultant to guide us through the process, but the key attribute of this individual is their unfamiliarity with the process in question.
  • Representatives of all stakeholders must participate so that no source of knowledge is neglected and no stakeholders are forced to adopt a solution that they had no voice in designing.
  • Consensus must be reached on all proposed changes. This does not mean that everyone agrees the best solution has been reached; rather, it means that everyone must be willing to implement and test the solution. If a solution fails that test, new solutions are proposed until consensus is again achieved, and the solution is again tested.
  • Data collection and analysis is fundamental to the process: decisions must be based, as much as possible, on fact, not opinion. Discussion of the collected data then helps participants to identify problems and potential solutions.

The result of this process is a series of meetings in which participants decide what works and should be retained, what works but inefficiently enough to require improvement, and what doesn’t work and should therefore be discarded or replaced with something else.

Results of the Kaizen Exercise

The Kaizen exercise revealed a large number of problems, of which the most important are summarized in Table 1, along with their solutions. To implement these solutions, we performed the following steps.

Problem: The initial planning process was deficient. For example, only the best writers produced effective outlines before they began writing.
Proposed solution: Create effective outlines and obtain approval before writing begins.

Problem: Because the review process was conducted almost exclusively on paper, there was no way to automatically track manuscripts and route them to the next step in the process.
Proposed solution: Automate the routing and tracking of manuscripts by moving the manuscripts and the reviews off paper and onto the computer.

Problem: There was no way to set deadlines, monitor progress towards meeting the deadlines, or apply pressure when deadlines slipped.
Proposed solution: Set and enforce deadlines, and make performance towards meeting deadlines part of each employee’s annual performance appraisal.

Problem: Because the review process was performed primarily on paper, it could not be automated.
Proposed solution: Adopt onscreen editing and support authors during the initial phases of this implementation.

Problem: Both writers and reviewers were often unresponsive and tardy—not because they were irresponsible, but because they received vague deadlines that were not enforced.
Proposed solution: “Manage” our writers and reviewers to ensure that they received deadlines they had participated in setting and understood the importance of meeting those deadlines.

Problem: There were too many review steps and no responsibility for doing the job right the first time.
Proposed solution: Eliminate unnecessary review steps (“work smarter, not harder”).

Table 1. The most important findings of the Kaizen review process with proposed solutions.

To resolve the problem of poorly planned manuscripts that took many revisions to “get right,” we emphasized the importance of creating effective outlines before beginning to write. Editorial assistance was provided, on request, to help less-skilled authors produce these outlines. Once an outline was available, we then held an initial planning meeting so that all stakeholders who might potentially reject a manuscript, in whole or in part, later in the review process had a chance to propose a solution that would eliminate the need for rejection. Thus, the author, their supervisor, the research director, and a representative of the communications team met to rigorously critique the outline. Only once everyone was satisfied that the flow of information would be effective and the content was complete and defensible did the author begin writing. This eliminated a major cause of revisions that formerly occurred late in the review process, when a stakeholder identified problems that were obvious to them (but perhaps to nobody else) at the end of the process rather than the beginning. Resolving those problems during the planning stages meant that they did not have to be solved later, after one or more rounds of revision were complete and any changes would require time-consuming backtracking.

To permit tracking of where manuscripts were within the system, we developed a tracking system based on the task-management system provided by the combination of Microsoft Outlook and Microsoft Exchange Server. This system allowed us to pass reports from one review phase to the next, automatically assigning a task and deadline to the person responsible for that phase. The system therefore identified at a glance where a given report was within the overall process, monitored progress toward deadlines, and sent reminders when necessary. To accompany this part of the process, we obtained a commitment from managers at all levels (group supervisor, research director, and senior management) to enforce deadlines and make the ability to meet deadlines part of each researcher’s annual performance appraisal.
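The routing logic can be sketched as a simple state machine: completing one phase assigns a task and deadline for the next. The sketch below is illustrative only (the actual system was built on Microsoft Outlook and Exchange tasks, not custom code), and the phase names and allotted durations are assumptions:

```python
from datetime import date, timedelta

# Hypothetical review phases and the days allotted to each;
# the real phases and durations were negotiated with stakeholders.
PHASES = [
    ("planning meeting", 5),
    ("writing", 20),
    ("editing", 10),
    ("formal review", 15),
    ("layout and proofreading", 10),
]

class ReportTracker:
    """Tracks which phase a report is in and the current deadline."""

    def __init__(self, report_id, start):
        self.report_id = report_id
        self.phase_index = 0
        self.deadline = start + timedelta(days=PHASES[0][1])

    @property
    def current_phase(self):
        return PHASES[self.phase_index][0]

    def advance(self, today):
        """Mark the current phase complete; assign the next phase's
        task and deadline, or report the manuscript as finished."""
        self.phase_index += 1
        if self.phase_index < len(PHASES):
            self.deadline = today + timedelta(days=PHASES[self.phase_index][1])
            return f"Task assigned: {self.current_phase}, due {self.deadline}"
        return "Report ready for distribution"

    def is_overdue(self, today):
        return self.phase_index < len(PHASES) and today > self.deadline
```

A nightly sweep over such trackers is enough to answer "where is every report right now?" at a glance and to trigger reminders as deadlines approach, which is what the Outlook/Exchange tasks provided for us.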

Since working on paper was an important obstacle to efficiency, we moved the process online: reports were written in Microsoft Word, then were reviewed and revised exclusively using the onscreen editing tools provided by Word. To ensure that the implementation of this process would go smoothly, I provided the necessary instruction and “handholding” for all authors; as editor, I already had strong relationships with all the authors, and I further strengthened these relationships by working directly with authors and reviewers to help them become comfortable with the technology. In addition, we demonstrated flexibility by helping these individuals to develop approaches that worked best for them, including reviewing manuscripts on paper (if so desired) followed by transferring the changes into the word processor file.

We “managed” our reviewers in several key ways. First, we decided who our reviewers would be during the initial planning meeting. Rather than assigning these reviewers to a manuscript and assuming that they were both qualified and available, we informed them of the expected time when the manuscript would be ready for review and asked them whether they could commit to perform their review at that time. If not, we chose another reviewer and repeated this process. Well in advance of the review, we reconfirmed each reviewer’s availability (since life sometimes throws us unexpected changes to our schedules). We discussed how best to meet each reviewer’s specific needs; for example, not all reviewers had reliable email access (particularly those in remote areas of the country) or used the same software we were using, so for them, we might offer to send manuscripts on a CD or fax them a printout. Once a review was underway, we gently reminded each reviewer of the review deadline a few days before the review was due to be returned.

Last but not least, we streamlined the review process by insisting that each participant do the job right the first time, instead of handling a manuscript several times until they did the job right. We were able to convince all stakeholders that taking a little extra time would be worthwhile by reminding them that it took less time to do the job right the first time than to redo it several times later, as was often the case in the old system. One important part of this change involved discussion between reviewers and authors: rather than simply returning a marked-up manuscript and washing their hands of it, reviewers were required to reach consensus with the authors on any contentious changes. Thus, problems that formerly involved exchanging a manuscript multiple times could often be resolved by a short phone call or office visit. This led to a drastic reduction in rehandling of a manuscript at each stage of the review. In addition, we reduced the number of management reviews. In particular, because a management representative participated in the initial planning meeting and approved the overall content and approach for each document, and because managers were confident in the communication group’s competence, they were willing to “let go” and not perform multiple reviews.

In implementing these changes, we explicitly included a trial-and-exploration stage that involved several volunteers who were willing to participate in the testing and provide detailed feedback. Our goal was to confirm that our theoretically superior process worked as well as we hoped, and to detect and solve any problems before extending the new process to the entire organization. By limiting the test to a few people who fully expected to encounter and solve problems, we avoided the common error of implementing a solution before it was ready. No matter how good a new process looks on paper, implementing it without tests raises a serious risk of losing staff support for the new solution when inevitable problems arise. (After all, the staff has work to do and can’t afford to have their work delayed while the implementers desperately look for solutions.) By testing carefully before implementation, we were able to present the rest of the staff with a proven solution developed by their peers that had undergone a rigorous reality check by those peers. As a result, the new process was quickly and broadly accepted.

We also went to great lengths to help all stakeholders ensure that their concerns were met and to help them meet their responsibilities within the process.

  • We helped the authors by developing effective outlines, obtaining consensus on their goals and those of the organization during a planning meeting, providing templates that contained links to useful documents (such as online style guides and the results of the planning meeting), setting and helping them to manage deadlines, and providing personalized support to learn onscreen editing.
  • We helped the reviewers by confirming they were willing and able to perform a review at the specified time, learning about and accommodating their preferences and needs during the review, reconfirming their availability before the actual review, confirming that they actually received the document (spam filters, email failures, and many other problems occasionally prevented this), and reminding them of their deadlines.
  • We helped the managers by providing a simple tracking system that quickly and easily displayed the status of all manuscripts by their authors, including embedded help in the system’s interface (including a graphical overview of the full sequence of the new review cycle), retaining each version of the manuscript so they could review previous comments and decisions, and retaining a “history” of who did what and when so that problems could be prevented in the future.

The New Process

The new process began with a rigorous, pre-review planning meeting to ensure that all stakeholders had a chance to define the nature and content of the deliverables. Internal and external reviews were then performed, followed by layout, publication, and distribution of the reports. The net result was slightly fewer phases, but more importantly, dramatically less rehandling of documents during each phase. The philosophy of “do it right the first time” was generally followed, and the philosophy of discussing rather than imposing changes was enthusiastically embraced.

Several of the proposed ideas worked very well. The use of initial planning meetings produced effective outlines that let authors write an effective document in a single step, instead of producing it as a result of multiple structural revisions, and minimized subsequent changes by managers. Editing before the formal review phase made these reviews more efficient: reviewers received a polished document that let them focus on substantive issues instead of structural and grammatical issues. Onscreen editing was universally adopted. Because most authors wrote relatively few reports each year, they did not develop the same proficiency with the editing tools as more prolific authors, and required my assistance. On the plus side, this reinforced a sense of cooperation and teamwork between us. Even when authors disliked the intensity of my edits, they appreciated my help and knew they could always rely on me for assistance.

Because of the tracking system, we always knew where a report was at any time, and no reports slipped through the cracks or were forgotten. Because we defined reasonable deadlines for each phase of the process in cooperation with the stakeholders during our review of the old process, the deadlines were attainable, but we were also flexible; if an author told us they would be away from the office doing field research around the time of a deadline, we extended the deadline. Because we enforced deadlines once they were negotiated, deadlines were generally met. A particularly nice surprise was that reviewers became more responsive and routinely met their deadlines, even when reviewers were external to our organization and we had no authority over them. The decreased number of review phases and the greatly decreased rehandling of manuscripts during each phase resulted in a corresponding decrease in time requirements; by tracking times at every stage, we proved that doing the job right the first time, even if it took longer, took less time than repeating the job several times.

Though we did not quantify the time savings for each participant, the overall results were impressive: total time from the completion of the research to publication decreased from a range of 6 to 12 months with the old process to a consistent maximum of no more than 3 months in the new process! Although we did not quantify the quality of the reports, informal discussions with our clients suggested that we at least maintained and potentially improved the quality. In short, we decreased production times, decreased the drain on staff, and maintained or improved the quality of the information we produced. Best of all, the process was still working with only minor modifications more than five years after its implementation. That’s a particularly important result because many process improvement exercises fail because they don’t reflect how people actually work; if the process doesn’t fit the actual needs of those who use it, people tend to quickly revert to old ways of doing things.

Of course, some of the ideas that were proposed did not work out. However, by testing the process with a small group before expanding it to the entire organization, we provided evidence of our successes and an opportunity to correct any failures. For example, the “do it right the first time” philosophy suggested to some participants in the Kaizen exercise that we could eliminate proofreading of reports after layout. I strenuously objected to this, and although other participants did not agree, they were willing to let me separately track and quantify the kinds of errors that were still present in reports after layout. My data revealed too many errors for eliminating proofreading, so we restored a formal proofreading stage to our process. The homegrown task-management system developed to manage the review-and-revision process worked, but because it was clunky and had many problems, it was subsequently replaced with a more standardized system based on Microsoft SharePoint.

Lessons Learned

It’s productive to ask why this process worked and how it might be generalized to other organizations, other review exercises, and forms of continuous improvement other than the Kaizen method. Based on my readings in the performance improvement literature and my own experience with this particular Kaizen exercise, I share the following key points:

  • The use of extensive data collection and the development of metrics that quantified these data transformed the exercise from a subjective assessment (opinions) to a more objective assessment (facts). Metrics help stakeholders (and especially managers) believe that a problem exists and that the proposed solution actually solves the problem.
  • It’s difficult to persuade staff to change a process that they believe is working. In the present case, frustration with the former process, and particularly with the lengthy delays in report production and endless rehandling of manuscripts, provided a motivation to change. The use of metrics clarified what aspects of the process most needed to be changed.
  • A rigorous initial planning meeting provided the “measure twice, cut once” control over what would be produced. Because everyone agreed on the final deliverable, very few significant changes were required later in the review cycle, when such changes cost the most time and effort to fix.
  • Management support provided the authority to make the necessary changes, particularly with respect to the setting and enforcement of deadlines and the willingness to trust the review process, thereby minimizing the number of management reviews required. In particular, the involvement of a management representative during the initial planning meeting, and rigorous review during that meeting, gave managers confidence they would have less work to do subsequently.
  • Because the review was performed by stakeholders from all groups, and because these people participated in the testing and subsequent modification of the proposed solutions, the staff was willing (and sometimes even eager) to try the new process: they felt that their voices and concerns had been heard during the design of the new process, that the process would not cause any new problems, and that the result would make their work life easier.
  • We recognized that not all suggested improvements would work, and we performed careful testing and monitoring to identify any problems before implementing the new process throughout the organization.

Geoff Hart (www.geoff-hart.com) has been working as an editor for nearly 25 years, both in government and the private sector, and is currently a freelance editor specializing in scientist clients for whom English is a second language. He’s also a columnist at the Techwhirl site (www.techwhirl.com).