
Requirements Specifications and Anticipating User Needs: Methods and Warnings on Writing Development Narratives for New Software

Brian D. Ballentine

Abstract

Purpose: This article examines the benefits for technical communicators of using narrative to compose and edit software requirements specifications. Specifically, it analyzes requirements specifications written for a Web-based radiology application serving the medical industry.

Method: The study adheres to the usability principle that successful design accommodates complex problem solving. Requirements specifications, the application, and the application’s code are examined as part of the study.

Results: The first determination is that composing detailed narratives within the requirements specifications can ensure flexible spaces for users, in this case doctors, to view, study, and manipulate data as they see fit. The article also acknowledges and accounts for the reality of low-level or code-level procedural programming required for creating such flexible spaces. The second determination is that employing narratological structures within requirements specifications also leads to technical inventions at the code level. Practitioners will have a better understanding of how their work facilitates the development of a software application’s functionality, design, and even code.

Conclusion: Ultimately, narrative is the suggested method for developing the flexible affordances desired by usability specialists, and it simultaneously helps negotiate low-level code.

Keywords: Requirements specifications; Usability; Narrative; Interface design; Software development

Practitioner’s Takeaway

  • Practitioners will gain insight into how technical communication that employs narratological structures facilitates technical inventions at the interface and code levels of software development.
  • Readers can examine excerpts from requirements specifications that use narrative in order to witness how technical communication facilitates the development of a software application’s functionality, design, and code.
  • Technical communicators will learn the importance of striking a balance between development at the code level and developing an application interface that accommodates complex problem solving.
  • Technical communicators will be able to identify how narratives included within requirements specifications facilitate knowledge transfer among a development team.

What really happens in most programming shops is that there is no one on staff who has a clue about designing for end users. However, these same clueless people are far from clueless about program design, and they have strong opinions about what they like, personally. So they do what they do, designing the interaction for themselves, subject to what is easiest and most enjoyable to code, and imagine that they are actually designing for users.

Alan Cooper, The Inmates Are Running the Asylum (p. 22)

The inherent solipsism in software engineering described above by Alan Cooper remains a perennial risk. And when personal preference trumps useful information design, both companies and clients suffer. According to Michael Albers (2003), information design “must be considered the practice of enabling a reader to obtain knowledge” (p. 7). He goes on to declare, “Unless that information is properly designed, displayed, and can be manipulated for interpretation, the information (and consequently, the system) are a failure, period” (p. 8). Albers is essentially describing the usefulness of a design, and successful design is, of course, informed by usability research and the work of scholars like Albers, Mirel, Spinuzzi, and Quesenbery. Mirel (2003), for example, advances a concept of “usefulness” by insisting that knowledge, or a user’s pursuit of it, is often enmeshed in a series of user-determined, intricate tasks and that we must therefore design to facilitate “complex problem solving” (p. xviii). She advocates for approaching design with what she calls a “structural framework” where there is an “emphasis on the structure of the situated work and how it sets and constrains possibilities for action” (2002, p. 178). Users must have at their disposal flexible options for problem solving.

Mirel (2002) sets her structural framework in contrast to a “procedural framework” that “includes features and user interface interactions for moving from one program state or mode to another and knowing its allowable interactions” (p. 175). The real challenge is that at some point in the development process, procedural and action-driven events must be incorporated intelligently into the design. Mirel (2002) writes:

Admittedly, a low-level, unit orientation is needed once product development moves to the stages of detailed specifications and programming. Object-oriented programming does require attributing elemental properties and events to low-level objects, be they things or acts. Yet the constraints of object-oriented programming and design do not force usability specialists or designers down a slippery slope of designing for discrete low-level actions and operations (p. 173).

Mirel’s assessment of object-oriented programming and design is sound, as is what amounts to her summary advice to “analyze tasks and design at a higher than unit-task level, focusing on the integrated sets of relations and actions” (2002, p. 183). But, the question then becomes what methods and strategies might we employ that will result in the affordances described by Mirel while simultaneously managing the very real and necessary low-level or code-level actions? In other words, if usability specialists or designers want to proceed with Mirel’s structural framework, they will still need a method of negotiating at the code level.

In this study, I investigate how technical communication prowess, specifically the ability to narrativize work flow by documenting an application’s use scenarios, can both negotiate development at the code level and satisfy Mirel’s requirement to accommodate complex problem solving by increasing usefulness, effectiveness, efficiency, and learnability. These four attributes are key components to producing a “truly usable” application or, as defined by Jeffrey Rubin and Dana Chisnell (2008), an application with which “the user can do what he or she wants to do the way he or she expects to be able to do it, without hindrance, hesitation, or questions” (p. 4). Second, practitioners will have the opportunity to witness how technical communication that employs narratological structures also leads to technical inventions at the code level. That is, practitioners will have a better understanding of how their work facilitates the development of a software application’s functionality, design, and even code. Evidence is presented in the form of both technical communication and code; each is required to validate this claim. With the use of a large, central document from the development of the last commercial software application on which I was a senior engineer, I demonstrate that the contents of software requirements specifications become embedded in the code of the final product. The written communication and its narratives, therefore, serve a valuable epistemic function as the application moves through its iterative design process toward completion.

The application in question, IntelliView Web (IVW), was developed for the medical industry during my employment with Marconi Medical Systems (now Philips Medical Systems). While I am not claiming that all software engineering projects develop in the same way as the project described below, I do wish to demonstrate the critical roles of technical communication and narrative, specifically within software engineering, from a perspective not yet explored. Certainly scholars such as Susan Regli (1999) have made compelling cases for appreciating the contributions technical writers make to the invention process. Regardless of who is doing that writing, however, I want to emphasize the inventive role technical communication plays and offer unique evidence that a “writer” holding any number of job titles plays a crucial part in engineering development well beyond that of mere “scribe” (Regli, 1999, p. 31).

Indeed, the relationship between writing and engineering continues as a rich area of study (Baker, 1994; Geisler & Lewis, 2007; Selzer, 1983; Winsor, 1990, 2003). The narratives written by our team were created, shared, and edited within the software requirements specifications (SRS) for IVW. Through the SRS, we are able to witness both the pitfalls of solipsistic engineering as well as the successful features fueled by way of absent or present narratives. My procedure is as follows: (1) Elaborate briefly on how this study defines narrative and what constitutes a successful narrative. (2) Describe the generic conventions of an SRS. (3) Provide context for the application IVW and a brief history of the medical industry’s transition to the digital age. (4) Analyze the development of two specific features in IVW in order to demonstrate narrative’s ability during the development process both to provide the flexibility required for complex problem solving and to manage development at the code level. (5) Suggest that we continue to study narrative by carefully investigating how entrenched it is within software functionality.

Defining Narrative

This article does not set out to enter the fray of competing definitions of what constitutes a narrative (Abbott, 2002; Bal, 1997; Fludernik, 2009; Mitchell, 1980; Prince, 1982). As S. Louisa Wei and Huaxin Wei (2006) discovered in their experiments teaching narrative forms to students studying digital arts, theories on narrative are often “loaded with wordplays, divisions, subdivisions, and lengthy explanations of synonyms with subtle nuances, which do not say much to the artist/designer” (p. 481). Instead, this article proceeds by subscribing to a broad and more malleable outlook on narrative offered by H. Porter Abbott (2002) that emphasizes actions or events. According to Abbott:

Simply put, narrative is the representation of an event or a series of events . . . “My dog has fleas” is a description of my dog, but it is not a narrative because nothing happens. “My dog was bitten by a flea” is a narrative. It tells of an event. The event is a very small one—the bite of a flea—but that is enough to make it a narrative (p. 12).

Including even the bite of a flea as constitutive of narrative allows the coming analysis of the software requirements to reveal the smallest uses of narrative in the authors’ attempts to accommodate user needs as well as provide space for flexible problem solving. For example, “The user rotates an image” is a small narrative that signals the beginning of an event-driven scenario within the software application whose outcome will depend on a variety of existing conditions within the application. Mirel’s structural framework approach requires that our narrative respects that “Courses of action are dynamic and emergent, exploratory and opportunistic” (2002, p. 173). So, a successful narrative developed to capture image rotation specifications might note that the user should be able to rotate the image a full 360 degrees by using the mouse or by entering numeric values between 0 and 360 into text fields. The narrative may also specify the types of file formats the application can manipulate as well as the many options users have at their disposal once the image rotation is complete. Meeting Mirel’s conditions will not mean that the narrative captures exactly what users will do next but instead is concerned with being able to accommodate whatever their next decision may be. With that, the narrative might note that other image manipulation tools should be available to the user after or even during the rotation. The ability to zoom in on the image, for example, might be necessary for the user’s problem solving. In short, the narrative should be used to ensure that the user will retain options as “[p]roblem spaces are not well bounded” (Mirel, 2002, p. 173).
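
To make the event-driven character of such a narrative concrete, the following is a minimal, hypothetical JavaScript sketch of an image-rotation handler of the kind the narrative describes. None of these names come from IVW’s code; they are purely illustrative.

var gRotationAngle = 0; // current rotation of the displayed image, in degrees

// Placeholder for the actual rendering call; in a real viewer this would
// repaint the patient scan at the requested angle.
function redrawImage(angle) {
}

// Handles the "user rotates an image" event, whether the value comes from a
// mouse drag or from a numeric text field, per the narrative above.
function rotateImage(angle) {
    // Respect the 0-360 range called for in the specification.
    if (isNaN(angle) || angle < 0 || angle > 360) {
        return; // ignore invalid input; the user keeps working uninterrupted
    }
    gRotationAngle = angle;
    // Other manipulation tools (zoom, pan, and so on) remain available; the
    // rotation does not lock the user into a fixed sequence of steps.
    redrawImage(gRotationAngle);
}

The same handler serves both entry points, mouse-driven rotation and numeric entry, which is precisely the kind of flexibility the narrative is meant to preserve.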

However, creating an open-ended venue for users to explore options to their problem cannot happen without accounting for the low-level events that make, for example, image rotation possible. At the code level, the value of narrative is that it enables the developers to identify and share information on how to address all the possible outcomes stemming from the user rotating an image. The ability to capture events with technical communication does put some “boundaries on human [user] experience so that it is segmented into temporarily meaningful chunks of information, grounding and giving particular shape to knowledge that may otherwise exist in multiply understood ways” (Kim, 2005, p. 123). As Hayden White (1980) remarks, “Far from being a problem, then, narrative might well be considered a solution to a problem of general human concern, namely, the problem of how to translate knowing into telling . . .” (p. 1). Low-level code must still be written, and narrative can strike a balance between that coding and accommodating complex problem solving. Ultimately, the narratives our engineering team composed and included in the requirements specifications not only facilitated knowledge transfer that helped maintain a user focus and avoid solipsistic practices, but also generated code.

Software Requirements Specifications (SRS)

The purpose of the SRS is to state in as precise language as possible the functions, features, and capabilities a software application must provide, as well as detail any required constraints by which the system must abide. The document is traditionally drafted collaboratively by a team of engineers, all of whom will work closely with the application’s development. According to the “IEEE Recommended Practice for Software Requirements Specifications” (IEEE, 1998, p. 3), the SRS must address these basic issues:

  1. Functionality. What is the software supposed to do?
  2. External interfaces. How does the software interact with people, the system’s hardware, other hardware, and other software?
  3. Performance. What is the speed, availability, response time, recovery time of various software functions, etc.?
  4. Attributes. What are the portability, correctness, maintainability, security, etc., considerations?
  5. Design constraints imposed on an implementation. Are there any required standards in effect, implementation language, policies for database integrity, resource limits, operating environment(s), etc.?

The SRS is also a company’s written and documented understanding of a customer’s or potential client’s requirements and dependencies at a particular point in time prior to any actual coding. The “point in time” remark is important because, as the introduction to our SRS in support of IVW (Marconi Medical Systems, 2001) explicitly states, the specifications will change as the project develops:

This version of the document captures the requirements for the system to be developed. It is expected that changes will be made during the course of product development. These changes, as well as final screen captures will be included in this document as necessary. Additionally there may be reviews of the document, after the preliminary version is complete, that produce specific action items or decisions about feature sets (p. 1).

It was understood, then, that we would continue to modify and edit the supporting narratives for the application over time. In order for IVW to begin development—that is, for our team to begin writing code and designing interfaces—the SRS needed to contain a cohesive narrative demonstrating the feasibility of the application. The document serves as proof that there is a developmental blueprint or road map for the project.

Introducing IVW and Picture Archiving and Communication Systems

Digital technologies and networked environments are allowing hospitals and the health care industry in general to move away from viewing and storing patient scans only as film or in a hard-copy form. Patient scans, usually captured via magnetic resonance imaging (MRI), positron emission tomography (PET), or computed tomography (CT), can now be stored on a central computer server for doctors and radiologists to archive and access remotely. This system, where computers and networks are dedicated to the storage, search, retrieval, distribution, and presentation of images for radiology, is formally known as a “Picture Archiving and Communication System,” or PACS, in the medical industry. Computers or PACS workstations connected to a hospital’s network offer a means of adjusting the patient scans with special software that enables doctors to crop, rotate, zoom, and otherwise manipulate the patient data. Hospitals can also use film scanners to digitize their existing hard-copy film, and store the data in their new PACS. IVW, Marconi Medical Systems’ software application, was designed to be a novel, remote radiology or “teleradiology” application to expand PACS via the Internet. Simply put, teleradiology is the process of sending radiology images through the Internet to a secure location. With a teleradiology application such as IVW, images can be retrieved anywhere inside or outside the hospital as long as the doctor has access to the Internet and a Web browser.

Imagine, for instance, that someone is injured in a car accident late at night. After the ambulance brings the patient to the hospital, a series of scans needs to be performed quickly in order for physicians to provide effective treatment. Their first concerns: Is there internal bleeding or other damage? Where is it? Are the injuries life threatening? The scans are performed and saved right to the hospital’s PACS server. The hospital pages the radiologist on call, but he or she is at home. With teleradiology, the solution is a simple one. The on-call radiologist turns on a personal computer and launches an application such as IVW. After securely logging in to the system, all the doctor has to do is go to his or her inbox where the scans have been saved and double-click the appropriate icons to view them. No film needs to be generated or archived. After the doctor completes the diagnosis, a brief report needs to be filed. The software allows the doctor to compose the report in the same window as the patient scans, attach the report when it is complete, and then save and send the information back to the central server for physicians to consult.

The Medical Industry Transitions into the Digital Information Age

Beginning in the late 1990s, the medical industry was adjusting to and adopting breakthroughs in information technology. Teleradiology was among such innovations. Adapting to networked and digital environments was and continues to be a challenge for any hospital wishing to implement a PACS. Part of the struggle is that the functionality offered by PACS is in competition with, or, more accurately, endeavors to remediate, traditional radiology (Bolter, 2001). The migration from film to filmless radiology made possible by PACS has not been a smooth, seamless transition. Radiologists, the end users of PACS software like IVW, with years of experience reading and reporting on traditional film, may be reluctant to accept and transition to PACS. Consequently, there is also tension built into the development process of a new software application such as IVW in that the existing practices, one could even say the existing narratives, governing radiology as a whole resist change. To clarify, IVW’s novel abilities can only depart so far so fast from the old ways of hanging film on a light board. The software and its functionality must successfully negotiate a place for itself within a radiologist’s day-to-day activities. The difficulty for a company such as Marconi is that the application needs to tout its distinctive, even breakthrough, features, which will save time and money along with improving patient care. Conversely, the application cannot depart so far from the established film technology as to make its functionality, and therefore its existing narrative, unrecognizable. In the next section, I will begin to show how the SRS is an integral part of developing an application that successfully manages radiology’s remediation by examining a specific feature from IVW. I will demonstrate not only how IVW is shaped by the document’s narratives but also how the SRS serves as a way of communicating or telling knowledge, aiding the software engineering team as the code develops. Practitioners involved at a similar stage of development should note that hanging film on a light board is an integral part of a doctor’s complex problem-solving process when preparing patient treatment plans. The narratives for IVW needed to respect and if possible preserve that problem-solving space while still providing guidance at the code level. This section relies on a substantive engagement with engineering that requires examining the code that makes IVW possible. By doing so, we will be able to witness how the narratives of the SRS are realized in a final, material product.

From Communication to Cutting-Edge Feature: IVW and “Images-Only”

As we struggled to avoid solipsistic engineering practices and create narratives within the SRS that, as White (1980) suggests, translated our knowledge into a tell-able tale, we needed to keep in mind the older technology: film. Again, film on a light board, not a digitized patient scan displayed on a monitor, has been playing an important role in a doctor’s complex problem-solving process. This section examines the functionality in IVW titled “images-only” that attempts deliberately to simulate the experience a radiologist has while reviewing film on a light board. One of the most frequent complaints radiologists have with Web-based applications is that the user interface is bulky and monopolizes too much screen space or “real estate” (Hart, 2003). When radiologists are reviewing film on a light board, nothing obstructs the patient data; we replicated that experience and this problem-solving space by devising the images-only function for the application. This function, which is controlled by the images-only icon at the top of the application, automatically minimizes the “tree” and the image manipulation area or “toolbox” to their smallest size by revealing only their tabs. In Figure 1, all the standard controls for IVW are visible. Once the user single-clicks on the images-only icon, the patient scan dominates the screen, as seen in Figure 2.

Figure 1. IntelliView Web with Tree and Toolbox Showing

Figure 2. IntelliView Web with “Images-Only” Selected

The original entry in the SRS (Marconi Medical Systems, 2001) regarding the images-only functionality is relatively brief. It reads:

Images Only—A button whose purpose is to maximize the image viewing area in the application. This should reduce the tree and image manipulation tools to their smallest (e.g. tabs only visible) so the images can consume a vast majority of the browser window (p. 88).

The language in this description is bordering on noncommittal. The engineers used the word should because there is the possibility that this proposed solution may not be the final solution. During usability testing, radiologists, for example, may find images-only deficient in some unforeseen way. Initially, however, this idea was enough to persuade our engineering team that the problem of available real estate had been solved or remediated because radiologists could quickly and easily favor patient data. The images-only icon was essentially a toggle switch for hiding or revealing the tree and the toolbox areas.

The impetus to revisit functionality comes from many sources, and it was not long before there was new debate about the possibility of the toggle-switch functionality’s being too simple for the goals of images-only. As IVW developed, management at Marconi allowed us to demonstrate the application at large trade shows in order to obtain user feedback. In addition, radiologists at local hospitals were invited to “test-drive” emerging technologies such as IVW at Marconi Medical’s headquarters. Finally, we were permitted to conduct site visits to hospitals where we observed radiologists at work. The feedback from usability testing was not only invaluable to our engineering team but also often generated edits to existing narratives and a reevaluation of decisions written down in the SRS. Ultimately, images-only needed more advanced capabilities to persuade radiologists of its usefulness.

After a group of doctors visited Marconi to test IVW, they praised the core idea of maximizing the viewing area, but images-only left this important audience wanting more. Specifically, radiologists wanted more advanced control over the tree and toolbox areas of IVW. Radiologists need access to the tree and toolbox area in order to perform basic operations such as opening additional images, manipulating images, creating reports, and saving files. We had not considered what other controls could and should be given to the radiologists when it came to adjusting the available real estate on screen. In effect, we had inadvertently restricted their problem-solving space. What if, for example, the radiologists just wanted to move the image toolbox area out of the way or just hide the tree? Should the tree and the toolbox have functionality so these areas can be controlled independently of one another? The radiologists suggested that there would be situations where a doctor wants to do administrative work, such as organizing patient files and scans, and he or she may want to see more of the tree and not less. Our team did not want the radiologists to view the tree, image toolbox, or images-only as cumbersome or even necessary evils but rather as features or assets. Consequently, we were forced to revisit the SRS and discuss how best to accommodate the needs of the audience.

After taking into account all these advanced needs for screen real estate, we rewrote the narrative describing the possible uses of and the criteria for images-only. The rewritten functionality goes well beyond the original toggle switch of just hiding the tree and image toolbox area. But, since the original, basic toggle-switch functionality received positive reviews, that functionality remained. If the user clicks the images-only icon, the tree and the image toolbox are hidden with only their respective tabs exposed. If the user clicks the images-only icon again, the tree and the image toolbox area return to their prior positions. In addition, based on the usability tests, our team decided that the user should be able to control the size of the tree independently of the rest of the application. By placing the cursor on the vertical bar that divides the tree and the image viewing area, the user may manually resize the tree by clicking and holding down the left mouse button and then dragging the tree frame in either direction. If a user wishes, he or she may drag the tree all the way to the right-hand side of the screen so it completely dominates the real estate on the page. However, in the event that the user does manually resize the tree and then clicks the images-only icon, IVW must be “smart” enough to remember the tree’s position so the application can restore that position in the event that the images-only icon is clicked again. In addition to the tree, the image toolbox area can be controlled independently as well but with a few differences. The functionality built into the toolbox, such as the ability to zoom in on or rotate an image, does not benefit from having the ability to take up more real estate on screen. That is, the ability to drag the toolbox area to the top of the screen would not, according to doctor feedback, add any value to IVW’s functionality, as the controls occupy a set amount of space. Consequently, the area is a binary, either open for use or closed with just the tabs showing. Users may also close the area independently of the tree. In this scenario, users would not click the images-only icon, but instead double-click on any of the image toolbox’s tabs to hide it. The image toolbox area can be restored to its default size by clicking once on any of the tabs.

This additional functionality raised the question of whether or not the same abilities should be available for the tree and its tabs. In an effort to maintain symmetry within IVW, we decided to include tab-clicking functionality for the tree. As the narrative became more complex, documenting all the functionality of images-only in the SRS became increasingly important to a collaborative engineering environment. The design document served as an external memory device that stored design parameters for our team. Otherwise, solipsistic engineering practices would have dominated the development of and functionality for images-only. Indeed, a seemingly simple bit of functionality had suddenly become significantly more intricate.

SRS Narrative to Final Product: Images-Only at the Code Level

The flexible problem-solving spaces detailed above and called for by Mirel (2003) cannot come to fruition without attention to the role narrative plays at the code level. Within the SRS and subsequently within IVW’s code, our engineering team divided the layout of the application into separate parts (system menus, tree, image display, and toolbox) in order for areas such as the tree and the toolbox to be adjusted by the user. That is, we adopted what most user interface designers would call a “Center Stage” pattern for our design where the image display was the dominant focus (Tidwell, 2006, pp. 103–106). Basic conventions for this design pattern hold that “content should be at least twice as wide as whatever is in its side margins, and twice as tall as its top and bottom margins” (Tidwell, 2006, p. 103). A basic outline of the application can be seen in Figure 3.

Figure 3. IntelliView Web Frame Layout

In order to implement the Center Stage design, we created IVW as a series of different HTML frames held together by a frameset (for an introduction to framesets, see Jon Duckett’s Beginning Web Programming With HTML, XHTML, and CSS [2004]). The code that our team developed works to control these individual frames based on the narrative found in IVW’s SRS. However, there were rules to understand as we worked our way through the code. IVW is a Web-based application, and in this case it was designed to run in a specific version of Internet Explorer (IE). Web browsers such as IE are governed by a set of standards put forth by the World Wide Web Consortium (W3C) known as the Document Object Model, or DOM. The DOM is designed not to favor a particular platform and allows engineers to use programs and scripts to access dynamically the content and structure of documents. One of the key functions that the DOM was designed to enable is the control and manipulation of user actions or “events” within the browser. These events can be mapped directly onto the narratives detailed in the SRS, controlling everything from the outcome following the images-only icon’s being clicked to what happens when tree tabs are double-clicked. Table 1 shows the last version of the SRS supporting images-only, which maps out all the possible combinations of user actions, IVW situations, and IVW outcomes so that our engineering team could consult it and make sense of the increasingly complex nature of the design space. Note that the table is read from left to right and is therefore driven by the user action or event occurring within the application.

Table 1. Updated Images-Only Functionality in IVW SRS

User Action or “Event” | Situation in IVW | Outcome in IVW
Images-only icon is clicked | Both tree and toolbox are showing | Tree and toolbox are hidden; tree position is stored
 | Both tree and toolbox are hidden | Tree and toolbox are shown; tree returned to last position
 | Tree is showing and the toolbox is hidden | Tree is hidden; tree position is stored
 | Toolbox is showing and the tree is hidden | Toolbox is hidden
Tree tab is double-clicked | Tree is showing | Tree is hidden; tree position is stored
 | Tree is hidden | Tree is shown; tree returned to last position
Tree tab is single-clicked | Tree is showing | No change
 | Tree is hidden | Tree is shown; tree returned to last position
Toolbox tab is double-clicked | Toolbox is showing | Toolbox is hidden
 | Toolbox is hidden | Toolbox is shown
Toolbox tab is single-clicked | Toolbox is showing | No change
 | Toolbox is hidden | Toolbox is shown

In order for practitioners to witness how our team used the event-driven narrative found in the SRS to write code, we must examine and understand another technical item about the predominant object-oriented language used to code IVW. In the case of the images-only functionality, the code that controls the frames and consequently IVW’s real estate was written in JavaScript. JavaScript can be written directly inside an HTML page, or it can be developed in a separate file and saved with a “.js” extension that is then loaded into the HTML page. The code discussed here was in a separate file titled frameResize.js. Inside this file was a series of programmed functions. Each function contains code responsible for a particular task. For instance, clicking on the images-only icon is an event that triggers a function. The naming conventions for the functions in IVW come right from the SRS. The function names include treeTabDoubleClick, treeTabSingleClick, imageToolboxTabDblClick, imageToolboxSingleClick, imagesOnlyOnClick, collapseFrames, and expandFrames. Based on the above table from the SRS detailing the events necessary to manage screen real estate in IVW, the duty of each one of these functions is intuitive because their names describe the functions they perform. When engineers compose design documents such as an SRS, they have the opportunity to name and describe the components that will make up their project. Programmers can name a function anything they desire, but random names would be a disservice to other members on the team who may need to make use of their code and scripts. Because the function names derive directly from the narrative of the SRS, the text serves as a form of registry. Our engineering team, therefore, reduced the opportunity for a communication breakdown and produced more intuitive code. This is just one example of the SRS capturing design specifications, holding those specifications in a collective, long-term memory, and finally serving as a way of telling or sharing knowledge.
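
Although the article does not show how these functions were attached to the interface, a minimal, hypothetical sketch of the wiring might look like the following. The element IDs are placeholders; only the function names come from frameResize.js as described above.

// Hypothetical wiring of the Table 1 events to the functions named in
// frameResize.js. The element IDs are illustrative assumptions.
window.onload = function () {
    document.getElementById("imagesOnlyIcon").onclick = imagesOnlyOnClick;
    document.getElementById("treeTab").ondblclick = treeTabDoubleClick;
    document.getElementById("treeTab").onclick = treeTabSingleClick;
    document.getElementById("imageToolboxTab").ondblclick = imageToolboxTabDblClick;
    document.getElementById("imageToolboxTab").onclick = imageToolboxSingleClick;
};

However the attachment is done, the point remains the same: each user action in Table 1 maps to a function whose name was established in the SRS.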

All the functions named above rely on information from three of many “global variables” within the application. Global variables are set up by programmers so the data they store can be easily accessed from anywhere in the application. These variables generally contain data that will be needed by other programmers; otherwise, the engineer would simply create a “local variable” that was specific to a particular function. The first two global variables are gTreeFrameExp and gImageToolboxFrameExp. Our team used a lowercase “g” in front of the variable to identify it as global and, again, the naming convention for the variable is intuitive to the other members of the programming team. So, whether or not the tree frame or the toolbox frame is expanded can be determined from the global variables gTreeFrameExp and gImageToolboxFrameExp, respectively. Both these variables are Boolean and are set to either “true” or “false.” By default, when IVW first loads, both the tree and the toolbox are showing or expanded, so both of these variables are set to “true.” The third variable is gTreeWidth, which stores or “remembers” the position of the tree in order for it to be returned to that exact position if the tree is collapsed and then expanded. If a user clicks on the images-only icon, that event triggers the function imagesOnlyOnClick. Following the scenarios mapped out in Table 1 above, the first step in this narrative is for the function to check whether either the tree or the toolbox is open or expanded by examining the two global variables. The entire function is simply:

function imagesOnlyOnClick(){
    // If either the tree or the toolbox is expanded, collapse both;
    // otherwise, restore both frames to their stored sizes.
    if (top.gTreeFrameExp==true || top.gImageToolboxFrameExp==true){
        collapseFrames();
    }
    else{
        expandFrames();
    }
}

The “||” operator signifies “or” in JavaScript, and “==” signifies “equals.” That is, both are comparison operators and not assignment operators. Again, according to the SRS, “if” either of those two areas is expanded, then the job of the images-only icon is to collapse the frames so the radiologist can see as much of the screen as possible. Thus, if either global variable generates a “true” response, then a separate function collapseFrames is fired. If neither is true, or to use the programming logic “else,” then both areas must be in a collapsed state and need to be restored to their original sizes. The result is that the function expandFrames is fired.

Expanding both frames is slightly more complicated than collapsing them. Returning the toolbox area to its original size is, however, a simple task in that, as discussed above, its size is always the same when it is open. It is the tree that requires the use of the third global variable mentioned above, gTreeWidth. One of the responsibilities of the function collapseFrames is to capture and store the width of the tree before it is collapsed. Using the DOM, the code provides a “pathway” to the width value of the tree and captures it:

top.gTreeWidth=top.document.frames["container"].document.frames["fTree"].frameElement.width

While the mission of this article is not to provide an advanced understanding of the DOM, it is important to note that the code that follows the assignment operator or “=” symbol is a path to an element in the application. In this case, that element is the frame containing the tree, and we titled it “fTree.” Here, the width of the tree is set as the value of or “equal to” the global variable gTreeWidth so it can be “remembered” and used in another function. After that value has been captured, the rest of the code in the function executes and collapses the tree, resulting in just the tabs showing. In turn, when the function expandFrames is triggered, the value associated with gTreeWidth will be needed to reset the position of the tree. This is also accomplished with the use of the DOM and a path that sets this width for the tree.
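
The article does not reproduce expandFrames itself, but a minimal sketch, assembled from the description above rather than copied from IVW, might restore the tree along these lines. The DOM path mirrors the one shown for collapseFrames; only gTreeWidth, gTreeFrameExp, gImageToolboxFrameExp, and the frame name “fTree” come from the article.

// A sketch of how expandFrames might use the stored global value to restore
// the tree to the width captured by collapseFrames.
function expandFrames(){
    // Reset the tree frame to its remembered width using the same DOM path.
    top.document.frames["container"].document.frames["fTree"].frameElement.width = top.gTreeWidth;
    top.gTreeFrameExp = true;

    // The toolbox always reopens to its fixed default size (the article does
    // not give that frame's name or size, so that step is omitted here).
    top.gImageToolboxFrameExp = true;
}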

To demonstrate how narrative can lead to engineering invention, computer code can be mapped onto Table 1 from the SRS. The first event in Table 1, excerpted below, deals with possible scenarios if the images-only icon is clicked.

Table 1 Excerpt

User Action or “Event” | Situation in IVW | Outcome in IVW
Images-only icon is clicked | Both tree and toolbox are showing | Tree and toolbox are hidden; tree position is stored
 | Both tree and toolbox are hidden | Tree and toolbox are shown; tree returned to last position
 | Tree is showing and toolbox is hidden | Tree is hidden; tree position is stored
 | Toolbox is showing and tree is hidden | Toolbox is hidden

The naming conventions and the procedures found in the code described above can be shown to have been generated by the SRS by replacing the language in the table with the appropriate code (N.B.: In JavaScript “!=” is a comparison operator signifying “not equal.”) See Table 2.

Table 2. Updated Images-Only Functionality with Code in Place of the Original Text

User Action or “Event” | Situation in IVW | Outcome in IVW
onMouseClick=imagesOnlyOnClick | top.gTreeFrameExp==true; top.gImageToolboxFrameExp==true | collapseFrames(); gTreeWidth=n
 | top.gTreeFrameExp!=true; top.gImageToolboxFrameExp!=true | expandFrames(); gTreeWidth=n
onMouseClick=imagesOnlyOnClick | top.gTreeFrameExp==true; top.gImageToolboxFrameExp!=true | collapseFrames(); gTreeWidth=n
 | top.gTreeFrameExp!=true; top.gImageToolboxFrameExp==true | collapseFrames()

The code generated in order to make the images-only functionality a reality was derived directly from the SRS. The narratives within the document ensured that this array of additional features could perform with the flexibility required to accommodate a doctor’s often open-ended and complex problem-solving processes. The SRS was ultimately imbricated in the design process and the final material product. That is, as the narratives told in the SRS resurface in the functionality of the application, practitioners should have a better understanding of how their work can facilitate development. Engineering students (or anyone who will be responsible for technical communication) should not understand the task of developing such texts as something to be started once the engineering work is finished. Too often, the engineering project is completed first and the communications follow. The goal has been to demonstrate how beginning the engineering process with narrative-driven requirements specifications can help meet user needs and expectations. Images-only began as a form of technical communication or textual production, and it evolved into code. Eventually, everyone from the engineers to the end users was pleased with the functionality of images-only and its solution to the problem of screen real estate.

System Menus and the Danger of Incomplete Narratives

In the images-only section, our team made many adjustments to IVW’s SRS and functionality based on user feedback and usability testing. The end result was that images-only became a distinguishing and successful feature in IVW. However, the first time a working prototype of IVW was demonstrated publicly at the annual Radiological Society of North America convention, we received unexpectedly negative assessments of some key functionality features in IVW. The “tree,” for example, while appearing to be quite similar in design to a standard Microsoft Windows tree table, did not behave with the same expected Windows-like functionality. As with Windows-based directory structures, IVW’s tree is a mechanism for a doctor to manage his or her patient data, query files, and organize and open scans. Tree tables are typically an effective means to “show the hierarchy of items, plus a matrix of additional data or item attributes, in one unified structure” (Tidwell, 2006, p. 197). However, doctors using the prototype tree attempted to context-click or “right-click” on menu items for additional editing and properties information just as they would in a Windows operating system. That functionality, unfortunately, was built in elsewhere in the application and was not intuitive. Right-clicking produced no results in the demonstration version of IVW. Doctors also tried to “drag and drop” files to reorganize the tree. This feature was also not programmed into the application. The feedback from the trade show illustrates the danger of not beginning by writing content for the SRS. In these instances we were not guided by event-driven narratives and instead let our own personal preferences dictate the development. In our rush to develop, we failed to identify, document, and replicate established expectations. Simply put, there were (and still are) de facto functionality standards governing user expectations that our team had not considered in the SRS. Research from the field of usability studies offers guidelines and even heuristics that can help prevent similar oversights. In her chapter “The Five Dimensions of Usability,” Whitney Quesenbery (2003) suggests a series of prompts to safeguard against this design flaw, including, “Are users making mistakes because they expect the design to follow a standard?” (p. 101). For an interface to be useful, it must be consistent with standards. “A consistent interface ensures that terminology does not change, that design elements and controls are placed in familiar locations, and that similar functions behave similarly” (Quesenbery, p. 89). It became clear that if IVW were released with functionality that went against these standards, the application would not be persuasive and ultimately would not be adopted by the market. Such standards are propagated by the functionality found in the dominant operating systems and software on the market, mainly Microsoft products. Our engineering team left the trade show needing to reevaluate and rewrite the SRS to include a then-absent narrative structure.

In addition to some of the functionality in the tree, the tabbed systems menus at the top of the screen performed much differently than users expected. Once again, we had disregarded functionality found in dominant software. This portion of the study examines how the omission of documentation on the expected performance of the menus from the original SRS led to poorly developed functionality and unpersuasive or less useful software. The tabs and the menus associated with them represent some of a doctor’s fundamental controls. Needless to say, offering an inconsistent interface, especially where such fundamental behaviors are concerned, could easily sabotage the ability of these menus and their functionality to play any role in the problem-solving process. From left to right at the top of IVW’s screen, the three tabs are File, User, and Help. When doctors testing the application originally placed the cursor over a menu tab, the menu and its contents automatically appeared or “dropped down,” as in Figure 4.

Figure 4. IVW’s Tabs with Menus Showing

Behaviorally, this is counterintuitive to the functionality built into applications such as Microsoft Word or Internet Explorer, where a user places the cursor on a tab and that tab’s menu simply changes color, as in Figure 5. In order to see the contents of the menu in Microsoft applications, the user must single-click on the tab, as in Figure 6.

Figure 5. Screen Capture from Microsoft Word

Figure 6. Screen Capture from Microsoft Word Where a Menu Appears After a Tab Is Clicked

This difference, although slight, was enough to provoke strong, negative comments from users such as “I didn’t expect to see the menu,” “It’s distracting,” and “I didn’t mean to open that.” Again, our team had failed to explicitly describe what happens when a radiologist interacts with the menus and how the menus perform. Instead, we had focused only on the functionality of the contents inside each menu. For example, the original SRS Section 3.12.3.6.1 titled “File Menu” addresses only what happens if a user selects E-mail, Print, or Exit from the menu options. The SRS (Marconi Medical Systems, 2001) reads:

  • E-mail—Will manually e-mail a link to the currently selected information from the current user to any e-mail address. The link will take the e-mail recipient into the application in e-mail mode with the selected information displayed. See Section 3.12.1.1 for more details on e-mail mode. The entry will only be enabled when an object being displayed can be e-mailed (e.g., a study or a result).
  • Print—Will print the currently selected objects using RAP/WAP. This menu option will only be enabled if a printable object is being displayed (e.g., a study or a result).
  • Exit—Will close the user’s session in the application and will attempt to close the browser window in which the application is running. This exit option performs the same function as the Exit button described in Section 3.12.3.5 (p. 89).

Nowhere in the SRS is there documentation in support of how the actual tabs and menus function. The end result was solipsistic engineering and a strongly disliked portion of the application generating negative user feedback.

The software engineers charged with fixing the functionality of the tabs and menus in IVW needed to return to the SRS and write a narrative for the menu behaviors. In the images-only example above, the engineers used the SRS and existing film technology to advance and build the rest of the events for managing the Center Stage design pattern and a space to accommodate complex problem solving. In this example, the engineers determined the established or expected series of events for tabs and menus found in dominant software applications when they returned to edit the SRS. The SRS now included language dictating that if a user places the cursor on top of any of the tabs, the tab will only change color. As soon as the mouse is removed from the tab, the tab changes back to its original color. If a user places the cursor on a tab and clicks the left mouse button, then the menu associated with that tab will appear below it. The menu will remain open until the user completes one of three actions. First, he or she can click with the mouse anywhere else on the application screen and the menu will disappear. Second, selecting any of the active options from the menu—for example, clicking on Print under the File tab—will execute the print function and close the menu. Finally, just as with standard Windows applications, if a user moves the cursor over a different tab, the menu originally opened will disappear and the new menu from the appropriate tab will appear automatically. As with the images-only functionality, the engineers added to the SRS by mapping out this functionality as a series of event-driven actions in a table (see Table 3).

Table 3. Updated Tabs and Menu Functionality in IVW SRS

User Action or “Event” | Situation in IVW | Outcome in IVW
User puts cursor on tab | No other menus are open | Menu tab changes color
 | One menu is already open | Menu tab changes color; new menu displays; old menu closes
User puts cursor on tab and clicks | No other menus are open | Menu tab changes color; new menu displays
User removes cursor from tab | No menus are open | Menu tab changes back to default color
 | A menu is open | Menu remains open
User removes cursor from tab and clicks | No menus are open | Menu tab changes back to default color; menu closes
 | A menu is open | Menu tab changes back to default color; menu closes
User clicks on a menu item | A menu is open | Appropriate function fires; menu closes

The functionality, at first glance, appears straightforward. However, the menu items’ appearance and functionality require more complex coding. The menus in IVW rely heavily on DHTML, JavaScript, and the DOM. Our team leveraged the DOM and its ability to accommodate user events in a Web-based application. Microsoft’s Developer Network (2009) contains explanations and sample code for “event handling” with the DOM. It explains,

Clicking a button, moving the mouse pointer over part of the Web page, selecting some text on the page—these actions all fire events, and a DHTML author can write code to run in response to the event. This particular piece of code is generally known as an event handler, because that’s what it does, it handles events.

While event handling appears to be a straightforward concept, the power to create events becomes at best unwieldy and at worst detrimental to a project without a guiding narrative. The updated SRS, which now documented the events, situations, and outcomes for the three menus, directed the development of the code for the menus. There are numerous established and standardized events that a browser like IE can recognize, such as onMouseOver, onMouseOut, onClick, onDblClick, and onKeyPress. The menus in IVW make use of three event handlers, onMouseOver, onMouseOut, and onClick. In practice, when a radiologist moves the mouse over a menu tab, the onMouseOver event is used to fire a function that handles this event with the line of code: onMouseOver="menuChangeOver();". Simply put, the engineers have created a separate JavaScript function called menuChangeOver that is here activated by the user event. In this case, the event is the user “mousing-over” a menu tab. As with images-only, this is another instance where narrative guided us as we endeavored to manage all the different scenarios for the menus and develop the code. Without the SRS as a way of knowing how the menus should perform, the team was left, as Cooper (1999) fears, “designing the interaction for themselves” with no focus on end users or audience.
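
The article shows the handlers attached inline in the HTML; an equivalent, purely illustrative script-based sketch of the same wiring for a single tab might look like this. The element ID "fileTab" is a hypothetical placeholder, while menuChangeOver, changeOut, and dropFileMenu are the functions described in this section.

// Hypothetical script-based attachment of the three menu event handlers.
var fileTab = document.getElementById("fileTab"); // placeholder element ID
fileTab.onmouseover = function () { menuChangeOver(); }; // highlight the tab; drop the menu if another is open
fileTab.onmouseout = function () { changeOut(); };       // restore the tab to its default color
fileTab.onclick = function () { dropFileMenu(); };       // display this tab's menu (arguments elided in the article)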

When the function menuChangeOver is triggered by the onMouseOver event, the series of events mapped out in the new SRS states that if another menu is already open on the screen, then the next menu tab should automatically open when the mouse moves over it. Without dissecting all the complexities of the DOM, two lines of code verify whether or not another menu is currently open, and, if so, the function responsible for displaying or “dropping” the next menu is fired. The code is:

// If another menu is already open, display this tab's menu immediately.
if (top.frames["container"].frames["fSystem"].mMenu.isOpen==true){
    dropFileMenu(w,x,y,z,pMenu);
}

For instance, if mMenu.isOpen is found to be “true,” then the dropFileMenu function fires. However, regardless of whether or not another menu is open, the narrative for the menu events in the SRS tells the engineers that the function still must change the color of the menu tab. Our team, therefore, included code in the last line of the menuChangeOver function that executes the color change. This function is titled changeOver, and its sole purpose is to change the color of the tab to its highlighted state.
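
Putting these pieces together, a minimal sketch of menuChangeOver, assembled from the description above rather than copied from IVW, might read as follows. The arguments to dropFileMenu (w, x, y, z, pMenu in the excerpt above) are elided in the article and omitted here.

// A sketch of menuChangeOver: drop the menu if another menu is already open,
// and in every case highlight the tab.
function menuChangeOver(){
    if (top.frames["container"].frames["fSystem"].mMenu.isOpen==true){
        dropFileMenu(); // actual arguments elided in the article
    }
    // The last line of the function executes the color change.
    changeOver();
}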

The onMouseOut and onClick events are also essential in engineering and preserving the scenarios documented in the SRS. The code attached to the onMouseOut event—that is, when a user removes the cursor from a tab—serves to fire a function called changeOut. The sole purpose of this function is to change the color of the tab back to its original state. Finally, the onClick event is responsible for displaying the menu for a tab if the user does click. Once the user has the cursor on the tab, then the onMouseOver event has already been triggered and the tab has changed color. If the user then clicks the tab, the onClick event fires the function dropFileMenu, which is responsible for displaying each tab’s menu and its appropriate content. All three of these event handlers, onClick, onMouseOver, and onMouseOut, are used to maintain the expected series of events contained in the SRS dictating how menus display and how users interact with them in IVW.

Table 3 Excerpt

User Action or “Event” | Situation in IVW | Outcome in IVW
User puts cursor on tab | No other menus are open | Menu tab changes color
 | One menu is already open | Menu tab changes color; new menu displays; old menu closes
User puts cursor on tab and clicks | No other menus are open | Menu tab changes color; new menu displays
User removes cursor from tab | No menus are open | Menu tab changes back to default color
 | A menu is open | Menu remains open

If these processes seem challenging, it is because they are. What I wish to make clear is that without taking time to compose a narrative, our engineering team overlooked a standard tenet of usability studies requiring design to adhere to established conventions. A doctor’s problem-solving process cannot be impeded by the design of the very tool he or she is using to problem solve or, in this case, treat patients. However, using the first three rows from the revised SRS in Table 3 above, we can see how our failures were addressed by once again mapping the code directly onto the language from the SRS. My claim is that it is the SRS that enabled our team to repair its solipsistic engineering and successfully develop IVW’s system menus after the first failed attempt.

The naming conventions and the events found in the code described above can be shown to have been generated by the SRS by replacing the language in the table with the appropriate code. Table 4 below demonstrates how the computer code mirrors the language in the SRS. Event-driven narrative structures within the SRS capture each adjustment engineers make to the application, allowing them to more accurately simulate, describe, or tell the story of IVW’s use as design proceeds.

Table 4. Updated Tabs and Menu Functionality in IVW SRS With Code in Place of the Original Text

User Action or “Event” | Situation in IVW | Outcome in IVW
onMouseOver | mMenu.isOpen!=true | changeOver
 | mMenu.isOpen==true | changeOver; menuChangeOver; dropFileMenu
onMouseOver; onClick | mMenu.isOpen!=true | changeOver; dropFileMenu
onMouseOut | mMenu.isOpen!=true | changeOut
 | mMenu.isOpen==true | No change

Just as with the images-only functionality, the code engineered for the tabs and menus appears here as a direct result of the language in the SRS. The tabs and menus, as with images-only, were treated not as fixed and final but as functionality that could be returned to and rewritten via the SRS. This is a critical point, as the SRS in this example focused us on designing for others and not remaining entrenched in our own comfortable preferences. Our team had never asked, “What will or will not make these menus usable?” Instead, we started engineering guided only by our own preferences. The SRS was needed not only to guide the development at the code level but also to keep the needs and expectations of the end user in focus.

Conclusion

At a 2007 conference, John Carroll (2007) advocated “narration” as an “outside-in alternative to specification.” He suggested that using narrative could help designers maintain a focus on a project’s “big picture.” Part of that big picture must be useful information design that offers the “leeway” required for complex problem solving (Mirel, 2002, p. 175). In the first example above, narrative included in the SRS ensured the outcome of a flexible space for doctors to view, study, and manipulate patient scans. Through this example, images-only, practitioners could also see how entrenched writing is within the development process and low levels of code. Conversely, the second example detailing missing narration for menu functionality served as an important demonstration of how an absent narrative may contribute to an application’s failure to meet standard usability conventions. Applications like IVW are useless to end users if their development is guided only by the whims of a cloistered team of engineers and technical writers. Narrative mediates the development of the application, and the users reap the benefits when the narratives resurface in the functionality of the application. I suggest that researchers and practitioners study further how to more efficiently incorporate narrative in the design process and continue to search for means of teaching or at least sharing their methods and strategies for employing narrative.

References

Abbott, H. P. (2002). The Cambridge introduction to narrative. Cambridge, UK: Cambridge University Press.

Albers, M. (2003). Introduction. In M. Albers & B. Mazur (Eds.), Content and complexity: Information design in technical communication (pp. 1–13). Mahwah, NJ: Lawrence Erlbaum.

Baker, F. R. (1994). Toward establishing a rhetoric of engineering: Broadening the theoretical framework for technical writing pedagogy. Issues in Writing, 7, 23–48.

Bal, M. (1997). Narratology: Introduction to the theory of narrative (2nd ed.). Toronto: University of Toronto Press.

Bolter, J. D. (2001). Writing space: Computers, hypertext, and the remediation of print (2nd ed.). Mahwah, NJ: Lawrence Erlbaum.

Carroll, J. (2007, July 8). Narrating the future. Penn State Conference on Rhetoric and Technologies, State College, PA.

Cooper, A. (1999). The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity. Indianapolis, IN: Sams.

Duckett, J. (2004). Beginning Web programming with HTML, XHTML, and CSS. New York: Wrox.

Fludernik, M. (2009). An introduction to narratology. New York, NY: Routledge.

Geisler, C., & Lewis, B. (2007). Remaking the world through talk and text: What we should learn from how engineers use language to design. In R. Horowitz (Ed.), Talking texts: How speech and writing interact in school learning (pp. 217–334). Mahwah, NJ: Lawrence Erlbaum.

Hart, G. (2003). Redesigning to make better use of screen real estate. In M. Albers & B. Mazur (Eds.), Content and complexity: Information design in technical communication (pp. 337–350). Mahwah, NJ: Lawrence Erlbaum.

Institute of Electrical and Electronics Engineers (IEEE) Computer Society. (1998). IEEE recommended practice for software requirements specifications. Retrieved from http://ieeexplore.ieee.org/iel4/5841/15571/00720574.pdf?arnumber=720574

Kim, L. (2005). Tracing visual narratives: User-testing methodology for developing a multimedia museum show. Technical Communication, 52, 121–137.

Marconi Medical Systems. (2001). IntelliView Web software requirements specifications. Marconi Medical Systems, Inc.

Microsoft Developer Network. (2009). About the DHTML object model. Microsoft Corporation. Retrieved from http://msdn.microsoft.com/en-us/library/ms533022.aspx

Mirel, B. (2002). Advancing a vision of usability. In B. Mirel & R. Spilka (Eds.), Reshaping technical communication: New directions and challenges for the 21st century (pp. 165–187). Mahwah, NJ: Lawrence Erlbaum.

Mirel, B. (2003). Interaction design for complex problem solving: Developing useful and usable software. San Francisco, CA: Morgan Kaufmann.

Mitchell, W. J. T. (Ed.). (1980). On narrative. Chicago: University of Chicago Press.

Prince, G. (1982). Narratology: The form and functioning of narrative. Berlin: Mouton.

Quesenbery, W. (2003). The five dimensions of usability. In M. Albers & B. Mazur (Eds.), Content and complexity: Information design in technical communication (pp. 81–102). Mahwah, NJ: Lawrence Erlbaum.

Regli, S. (1999). Whose ideas? The technical writer’s expertise in inventio. Journal of Technical Writing and Communication, 29, 31–40.

Rubin, J., & Chisnell, D. (2008). Handbook of usability testing: How to plan, design, and conduct effective tests (2nd ed.). Indianapolis, IN: Wiley.

Selzer, J. (1983). The composing process of an engineer. College Composition and Communication, 34, 178–187.

Spinuzzi, C. (2003). Tracing genres through organizations: A sociocultural approach to information design. Cambridge, MA: MIT Press.

Tidwell, J. (2006). Designing interfaces. Sebastopol, CA: O’Reilly.

Wei, S. L., & Wei, H. (2006). Uncovering hidden maps: Illustrative narratology for digital artists/designers. Computers and Composition, 23, 480–502.

White, H. (1980). The value of narrativity in the representation of reality. In W. J. T. Mitchell (Ed.), On narrative (pp. 1–23). Chicago: University of Chicago Press.

Winsor, D. (1990). Engineer writing/Writing engineering. College Composition and Communication, 41, 58–70.

Winsor, D. (2003). Writing power: Communication in an engineering center. Albany: SUNY Press.

About the Author

Prior to completing his PhD at Case Western Reserve University, Brian D. Ballentine was a senior software engineer for Marconi Medical and then Philips Medical Systems, designing user interfaces for Web-based radiology applications and specializing in human-computer interaction. This past work experience ties to his current research interests, which include open-source software, technical communication, and digital literacy, as well as intellectual property and authorship. Ballentine is currently an assistant professor and coordinator for the Professional Writing and Editing program at West Virginia University. In his spare time he still enjoys programming and design. Contact: Brian.Ballentine@mail.wvu.edu.