
Heuristics for Broader Assessment of Effectiveness and Usability in Technology-Mediated Technical Communication

Roger A. Grice, Audrey G. Bennett, Janice W. Fernheimer, Cheryl Geisler, Robert Krull, Raymond A. Lutzky, Matthew G.J. Rolph, Patricia Search, and James P. Zappen

Abstract

Purpose: To offer additional tools for the assessment of effectiveness and usability in technology-mediated communication based in established heuristics.

Method: An interdisciplinary group of researchers at Rensselaer Polytechnic Institute selected five disparate examples of technology-mediated communication, formally evaluated each using contemporary heuristics, and then engaged in an iterative design process to arrive at an expanded toolkit for in-depth analyses.

Results: A set of heuristics and operationalized metrics for the deeper analysis of a broader scope of contemporary technology-mediated communication.

Conclusions: The continual evolution of communication, including the emergence of new, interactive media, provides a challenging opportunity to identify effective approaches and techniques. There are benefits to a renewed focus on relationships between people and between people and information, and we offer additional criteria and metrics to supplement established means of heuristic analysis.

Keywords: technology-mediated communication, design heuristics, usability metrics, usability toolkit, assessing usability

Practitioner's Takeaway

  • The definition of technology-mediated communication is rapidly expanding. Many examples blur the line between author and audience.
  • Established heuristic analyses offer attractive simplifications, but may overlook elements key to the success of contemporary technology-mediated communication and useful to any consideration of communication usability and effectiveness.
  • This toolkit of expanded heuristics, agreed upon by an interdisciplinary group and based in established metrics, aims to support deeper consideration of relationships and broader assessment of diverse examples of technology-mediated communication.

Introduction

The Tech-Mediated Communication (TMC) Toolkit is the result of several years of development, interdisciplinary research using formal and informal testing methodologies, established heuristics, and the experience of a diverse research group, all focused on the shifting challenges posed by technology-mediated communication. The five exemplars evaluated include an indigenous culture Web site, an image designed to promote HIV/AIDS awareness, distance learning classes, a wiki in higher education, and an information gallery for use by children, teens, parents, and their community. Our researchers, faculty and graduate students in multiple disciplines with diverse professional backgrounds and varying levels of experience with professional technical communication, were drawn together by a shared fascination with the promises and perils of contemporary communication technology. We set out to observe and document tech-mediated communication both broadly and precisely, to measure it against established standards, and then to re-examine those standards in light of our findings, hoping to identify principles that support successful communication across multiple media, platforms, and a wide range of technical means.

The Changing Face of Technical Communication: Expanded Perspectives

A new cell phone, the purchaser's fifth, built by a prominent Asian manufacturer and sold to him in a box store by a leading U.S. mobile phone service provider, displays an error message. Though he plans to use the phone primarily for audio and text conversations with other people for his small business and thinks in those terms, he is aware of many additional features, including a QWERTY keyboard, mobile Internet browser, flash memory card slot (for a fingernail-sized card with 3,000 times the capacity of his first computer hard drive), Bluetooth and USB connections, still and video camera, wireless headset, and software for games, calendar, calculator, and other functions. The full-color quick-start guide that came with the phone helped him begin, but it contains no information about the error. He emails technical support and receives an automated message including a support-line phone number, which he calls from his home line. He is connected to a service representative within two minutes and then to another after a few minutes more. Each walks him through a series of steps, the first seeming to test his basic ability to turn on the phone and the second taking him through receiving a software update for it via the network. Neither is successful in solving his problem, which may be due, he is told, to “a bug” in the new phone. Not convinced that a replacement phone is the only solution, he types the error message into a search engine in his Internet browser and reviews the results, many of which seem to lead to a social networking site containing a variety of videos, including, to his surprise, some produced by and featuring a pre-teen expert on this phone and addressing this particular error. By following the steps in the video, he solves the problem, and the phone begins to work as expected.

This sample case, the actual experience of one of the contributing authors, demonstrates why a broader consideration of technology-mediated communication is warranted and illustrates the increasing complexity of real-world technology-mediated communication as it relates to a given product's design and a user's experience with it. Usability assessments focused solely on human interaction with the device described above or with the quick-start guide alone suggest avenues for improvement but necessarily oversimplify to the point that they fail to wholly describe this case. Likewise, assessments of user experience focused primarily on the unsuccessful support service or on the successful online search for an answer would not adequately describe what actually occurred. If the focus of the analysis is narrowed too far, a researcher might erroneously conclude that the phone is entirely unusable, that the start-up guide should contain information on every possible error message, or that support is entirely unnecessary because all answers to all possible questions are available online for free. The lessons of the case are, in fact, more subtle, reflecting not only the increased complexity of technology-mediated communication itself but also the difficulties inherent in realistic usability analyses. The case illustrates that products and users are part of larger systems that are influenced by product life span and that influence user experience and product usability:

  • Products are not used in isolation but rather become part of larger systems (in this case including the cellular network, another telephone network, the computer network and Internet, the social network, the search engine, the netbook, and other hardware, software, and documentation).
  • Technology has a relatively short life. Short product lives may offer increased profits but also drive up associated design costs, accelerate design and production cycles, and increase learning demands on users and on support professionals to the point that proficiency with a given device and the positive experience presumed to accompany it are increasingly rare.
  • Usability and experience-related problems occur even with multiple systems in place to keep them functioning as intended. The phone itself functioned as designed in offering a precise error message, the support service provided by email and phone was promptly available, and the service provider's system to download updates to the phone was in place. Even so, the problem was not solved or even moderated by these means.
  • People are also part of increasingly larger systems. For every technology, there may be an extended product-related community that includes uncompensated users acting as support personnel. This type of community has, perhaps, always existed for every technology in sufficiently widespread use. Today, however, access to that community (via social networking and communication technology) has vastly expanded, and with it the range of information and services such a community may—in this case did—provide.

The Changing Face of Media: Assessment-Related Implications

Too narrow a focus may, however fine the evaluation, yield an unreliable, unrealistic result. Though rapid technological advancement often accompanies a proliferation of competing models, professionals whose earnings depend on a particular business model are slower to shift away from once-profitable assumptions. Traditional models of technology-mediated communication, therefore, focus on professional services and relative costs and lead to narrow avenues of assessment, as shown in Figure 1.

 

 

 

Figure 1. A Simplified Traditional View of Tech-Mediated Communication Centers on Professional Services and Associated Costs in Older and Newer Mediums

 

 

 

This model assumes that communication develops from corporate or contracted professional sources. It is, to an extent, accurate, correctly reflecting, for example, the lower costs associated with updating electronic versions of pages, but note that it entirely neglects peer-to-peer relationships and anything outside of a centralized network, both key characteristics of contemporary tech-mediated communication, as shown in Figure 2.

Social networks have no doubt existed alongside every new technology. But where once usability and user-experience-related research could disregard the likelihood of third-party involvement with few consequences, it is now far more likely that any given user, including the one in the case described, will encounter unofficial sources and might find it difficult to avoid them entirely. These additional sources may directly influence usability and user experience, and as a result a broader scope of inquiry is recommended.

Defining Tech-Mediated Communication

As Figure 1 suggests, traditional technical communication is a transfer of information from content producers to content users, perhaps including a negotiation between them. Tech-mediated communication today is a negotiation between producers and users of information mediated by new, emerging, and continuously changing communication technologies, including many now familiar features of the Web: blogs, wikis, social-networking sites and technologies, and all of the audio, visual, and interactive elements embedded within them (Bolter & Gromala, 2003; Bruns, 2008; Lessig, 2008; Norman, 2004; Shedroff, 2001; Tapscott & Williams, 2006).

 

 

 

Figure 2. A Broader View of Tech-Mediated Communication Features a Multi-Centered Information System in Which Professionally Prepared or Sponsored Information Appears Alongside Large Quantities of Shared and User-Created Information

 

 

 

This continuing emergence of new communication technologies as active and dynamic components of technical-communication processes changes the nature of the negotiation between producers and users, with the result that these technologies can no longer be viewed simply as transparent channels or conduits of information between them (Brinck, Gergle, & Wood, 2002; Nielsen, 1993, 2000). Rather they must be acknowledged to be active elements that influence the quality of the total user experience with a technology, with information producers, and also with other users (Bruns, 2008; Lessig, 2008; Tapscott & Williams, 2006).

Bolter and Gromala (2003) accurately challenge the traditional metaphor of the transparent window, suggesting a metaphor of a reflective mirror instead: a “compelling experience” that invites users to look “at” rather than “through” a user interface (p. 67). The interface and everything connected to it are an undeniable part of the user's experience, as “successful digital artifacts are designed to be experienced, not simply used” (p. 22). Norman (2004) extends this line of reasoning from user interface design to design in general, which, he argues, encompasses the functional and also the visceral and the reflective, that is, the effectiveness of use, the appearance, and users' personal engagement and satisfaction. The quality of the total user experience begins with that positive encounter with a mediating technology and continues onward to play a role in the user's relationships with information producers and with other users. Bruns (2008) specifically notes that these technologies enable and encourage users to become producers themselves—“produsers”—rather than merely passive recipients of information (see Figure 2), creating a stark contrast between the new information economy, which seeks to produce consumer engagement and satisfaction, and the old industrial economy, which focused on maximizing production and worker efficiency and is succinctly captured in Henry Ford's axiomatic promise to his customers: “you can have any color you like, as long as it's black” (p. 10). The new information economy does not have this luxury, and so it must instead cultivate “patterns and protocols of interaction and collaboration,” as illustrated in a range of examples from open-source software development to blogs and wikis to creative photo- and video-sharing applications and much more (p. 16). Tapscott and Williams (2006) refer to these new producer-users as “prosumers” and claim that they are active consumers who “increasingly satisfy their desire for choice, convenience, customization, and control by designing, producing, and distributing products themselves” (p. 52). Lessig (2008) describes this culture of participation and sharing as “remix” culture, noting that users mix text, sound, and images to produce new creative works or “remixes.” Even setting aside the hype surrounding “produser” or “prosumer” culture, it is clear that there has been a fundamental change in the relationship between producers and users that blurs the boundaries between them.

Mediating technologies have also changed relationships between users and other users. Tapscott and Williams (2006) herald the new Web as the dawn of “a new era of collaboration and participation”—tagged “wikinomics”—and celebrate “the rise of a global, ubiquitous platform for computation and collaboration that is reshaping nearly every aspect of human affairs” and opening the floodgates “to a worldwide explosion of participation” (pp. 18-19, 64). Anderson (2006) similarly describes a new “architecture of participation” wrought by communication technologies that democratize the tools of production and distribution of information (pp. 82-84). These tools promote collaborative activity by both individual users and communities of users and so alter the relationship between users and other users, as illustrated by the dramatic successes of Amazon, eBay, Flickr, Google, Wikipedia, YouTube, and other commercial and social sites of information exchange and sharing (Anderson, 2006; Lessig, 2008; Tapscott & Williams, 2006).

Therefore, understanding and assessing the quality of the total user experience with mediating technologies necessarily requires more than evaluation of user performance in the execution of specified tasks. In explaining this new orientation toward the user, Jordan (2000) deplores what he describes as an overemphasis within the human-factors community on “the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in particular environments,” insisting on a more holistic understanding encompassing “the wider role that products play in people's lives” (pp. 7-8). Similarly, McCarthy and Wright (2004) note the dual emphasis on functionality and experience evident in IBM's twofold commitment to its users: “User Experience Design fully encompasses traditional Human-Computer Interaction (HCI) design and extends it by addressing all aspects of a product or service as perceived by users” (p. 10).

Incorporating Standing Rules and Measures of Usability

In many instances, old rules still apply and serve people well. In transactional systems, information-retrieval systems, and other systems that support task-oriented activities, people's goals are still to be quick, accurate, and efficient in the completion of a task, and in these instances people are not necessarily looking for engagement or long-term commitment. For this reason, and as the earlier models in Figures 1 and 2 suggest, our group sought to review, consider, and, wherever possible, include industry-standard checklists and protocols (Hargis et al., 1998; Nielsen, 1994, 2006; De Jong & Van der Geest, 2000; Van der Geest & Spyridakis, 2000). At the same time, we sought to widen the scope of our evaluation to include dynamics evident on social media sites such as Facebook, where the desire for efficient completion of an operation in a few clicks is actually at odds with the site's apparent objectives and typical uses, which invite users to linger and spend increasing amounts of time engaged with the site's offerings. Similarly, many educational sites encourage extended and repeat visits rather than seeking to optimize content delivery in single sessions.

TMC Toolkit Development Methodology

Our methodology began with familiar heuristics, which were expanded upon via an iterative process until they more fully described the usability and user experience in the social-media environment associated with each of our five disparate exemplars.

Initial Heuristics

Nielsen's (1994) ten heuristics address (1) visibility of system status, (2) match between system and real world, (3) user control and freedom, (4) consistency and standards, (5) error prevention, (6) recognition rather than recall, (7) flexibility and efficiency of use, (8) aesthetic and minimalist design, (9) help allowing users to recognize, diagnose, and recover from errors, and (10) help and documentation. We also referenced Hargis et al.'s (1998) checklist system, developed at IBM Corporation's Santa Teresa Laboratory and based on the proposition that quality technical information is:

  • Easy to use (task orientation, accuracy, and completeness)
  • Easy to understand (clarity, correctness, and style)
  • Easy to find (organization, retrievability, and visual effectiveness)

Each exemplar was evaluated in two rounds. The first, including researchers and Rensselaer graduate students Elia Nelson, Mohamad Hizar Khuzaimah, Jessica Woods, Dale Bass, and Noah Schaffer, reviewed each exemplar in terms of these established metrics and was also used to tentatively identify experience and usability issues or qualities that the established heuristics did not seem to describe adequately. The second review was conducted by faculty and students on campus and at a distance using an expanded set of criteria, the result of general agreement among the research group on appropriate additions to the set of heuristics.

Additional Criteria

The following criteria were the result of suggestions arising from the first round of evaluations:

  • Readiness / pre-use
    • Style appropriately suggests author authority / professionalism
    • Apparent value of communication / motivation to engage
    • Technological requirements for access are minimized
    • Communication (appears to be) crafted with audience in mind, for a known context
    • Required background knowledge is available (unless intentionally excluded)
  • Navigation
    • Readability (for example, text large enough to read)
    • Similarity / compatibility with familiar tools
    • Clarity of control mechanisms and interactive objects
    • Flexibility and comfort with communication modes
    • Clear, efficient, and effective communication protocols
    • Meaningful categorizations
    • Meaningful hierarchy of media and text
    • Consistency of visual cues
    • Minimal syntactical complexity
  • Experience
    • Emotionally gripping / involving the affective domain
    • Incorporating rich communication modes matching user accessibility needs
    • Evoking confidence in the technology
    • Incorporating an appropriate degree of personalization
    • Displaying appropriate chunking of information
    • Visually supporting an immersive experience
  • Action / post-use
    • Call to action / next steps or additional information available

Evaluation Scenarios

The second round of evaluations was conducted by the researchers and volunteer faculty, students on campus, and students in Rensselaer's distance education program, presenting logistical challenges in line with those realistically associated with tech-mediated communication. Three synchronous approaches were used:

1. Large group in a single location: Although the piece of communication being evaluated and observed was mediated by technology, the actual evaluation itself was not.

2. Local test team / remote testers: Since we had the ability to share screens with remote participants, an on-campus evaluation team could “observe” an evaluator who was not on campus, documenting the evaluator's interaction with the screen and hearing his or her spoken comments via a telephone or voice-over-Internet protocol (VOIP). While this approach may not always provide the same richness of observation possible when evaluator and observers are in the same room, it does provide a useful data set.

3. Remote testers / remote observers: In this scenario, all participants and observers connect from remote locations. This scenario is, in effect, very similar to the previous scenario, though with the increased number of systems and connections comes an additional potential for technical problems.

The TMC Toolkit: Heuristics and Associated Metrics

The TMC Toolkit consists of two directly related parts: (I) a set of heuristics that can be used to guide design or assess usability and user experience, and (II) a set of operationalized metrics that can be used to examine more deeply how well a design meets the criteria outlined in the heuristics (Table 1). Each set of metrics includes a defining semantic differential ranging from unmet to fully met, criteria for assessing the product itself, behavioral indicators of usability and user experience, and survey guidelines.

Table 1. Overview of Heuristics and Associated Metrics

For each heuristic and its sub-items (I), the associated operationalized metrics (II) are listed below as a semantic differential ranging from unmet to fully met, a product metric, a behavioral metric, and a survey metric.

1. Design for diverse users

  a. Recognize that nothing is intuitive to everyone
    Semantic differential: User is confused <> User understands everything
    Product Metric: Use is logical and straightforward.
    Behavioral Metric: User understands the interface without assistance, does not get confused.
    Survey Metric: User describes experience as logical or intuitive.

  b. Design for the inevitability of diverse audiences
    Semantic differential: Greater confusion for some groups of users <> Diverse users understand
    Product Metric: Experience is consistent across user types. Design elements have the same meaning for all users.
    Behavioral Metric: User (type) not stumped by the design.
    Survey Metric: User describes experience as easy to follow.

  c. Provide users with options for differential experience using different views or levels
    Semantic differential: User is limited by design <> User has options
    Product Metric: Experience customizable for different users; customization does not hinder design use.
    Behavioral Metric: User is able to customize with ease / finds and enjoys a suitable view.
    Survey Metric: User rates customization highly.

2. Design for usability

  a. Follow standard usability guidelines
    Semantic differential: Confusing non-traditional design <> User recognizes standard elements
    Product Metric: Design follows usability guidelines.
    Behavioral Metric: User understands the design based on other experiences.
    Survey Metric: User describes experience as a familiar one.

  b. Enforce readability (font large enough to read; break up blocks of text)
    Semantic differential: User disoriented or led astray <> User easily perceives site content
    Product Metric: Design is well organized and easy to navigate.
    Behavioral Metric: User finds what he or she is looking for in a timely manner.
    Survey Metric: User describes experience as efficient.

  c. Use professional quality design components
    Semantic differential: Design perceived to be standard <> Design perceived to be enhanced
    Product Metric: Appearance and content suggest professionalism to user.
    Behavioral Metric: User prefers design vs. other designs.
    Survey Metric: User describes experience as professional.

  d. Follow general conventions where available
    Semantic differential: Highly unfamiliar <> User experiences familiarity where expected
    Product Metric: The design is organized and consistently familiar.
    Behavioral Metric: User is more comfortable with the design vs. others.
    Survey Metric: User describes the experience as familiar and enhanced.

  e. Offer simple ways to do what users want to do
    Semantic differential: Many navigation complications <> Quick, free user motion throughout
    Product Metric: Components are in correct locations. Links work.
    Behavioral Metric: User efficiently navigates through site/design.
    Survey Metric: User describes experience as uncomplicated.

3. Test the technical requirements "backbone"

  a. Specify the technical requirements or technological backbone needed by users
    Semantic differential: User uncertain about requirements <> User understands what is needed
    Product Metric: Requirements for access and use are clearly specified (particularly if unmet).
    Behavioral Metric: User is not confused about requirements.
    Survey Metric: User rates the requirements as clear, highly visible when needed, and easy to understand.

  b. Ensure the necessary technical requirements or technological backbone needed by the system is in place
    Semantic differential: User uncertainty about system status <> User aware system is working
    Product Metric: System status is clearly visible (particularly if unavailable).
    Behavioral Metric: User shows no confusion about system status.
    Survey Metric: User rates system as reliably functional and easy to access. User does not doubt the system is working as intended; if there is a problem, user reports a clear understanding of system status.

4. Make users feel welcome

  a. Make users feel welcome
    Semantic differential: User feels 'put off' or unwelcome <> Users feel welcome
    Product Metric: Design and experience feel welcoming and friendly.
    Behavioral Metric: User lingers/spends more time in initial, welcoming screens or areas.
    Survey Metric: User describes experience as welcoming or inviting.

  b. Use visuals to draw users in
    Semantic differential: User is annoyed by visuals <> User is intrigued by visuals
    Product Metric: User is engaged by visuals, not distracted by them.
    Survey Metric: User describes visuals as enhancing the experience or as highly useful and helpful.

  c. Use sound to enhance experience
    Semantic differential: User is distracted or annoyed by sounds <> User is engaged by sounds
    Product Metric: Sounds are used constructively.
    Behavioral Metric: User stays focused, finds sounds useful or engaging, is not distracted or put off by sounds.
    Survey Metric: User describes sounds as helpful, useful, or enhancing the experience / understanding of the content.

  d. Engage the affective domain with visual language (color, icons, symbols)
    Semantic differential: User unresponsive to design <> Appropriate user emotions are triggered
    Product Metric: Visual elements stimulate user emotional engagement.
    Behavioral Metric: User responds to visual language, is drawn in.
    Survey Metric: User describes visual language used as engaging, enhancing the experience, or in terms of appropriate emotional response.

5. Set the context

  a. Design activities that allow users to become prepared for the experience
    Semantic differential: User feels unready or unprepared <> Users feel prepared
    Product Metric: Experience has appropriate precursor activities that allow for familiarization.
    Behavioral Metric: User encounters an appropriate introductory experience that supports what follows.
    Survey Metric: User rates preparation as useful or helpful.

  b. Provide users introductory context
    Semantic differential: User lacks context to perform <> User has sufficient background
    Product Metric: Background information needed is provided.
    Behavioral Metric: User is not puzzled at any stage.
    Survey Metric: User rates their contextual readiness as high.

  c. Motivate users to move through any necessary initiation
    Semantic differential: User has no drive to continue <> User moves smoothly through
    Product Metric: Experience motivates users to familiarize themselves with the interface, moves them smoothly through as they are ready.
    Behavioral Metric: User responds to incentives, increases familiarity or demonstrates proficiency, and moves through the experience.
    Survey Metric: User finds the introduction worthwhile, is not frustrated or unprepared at any stage, or describes initiation as enhancing.

  d. Limit setup time to a small portion of the total experience
    Semantic differential: User spends a long time on setup <> User passes through setup quickly
    Product Metric: Setup is quickly completed by any user.
    Behavioral Metric: User is not confused at any stage of setup.
    Survey Metric: User perceives setup as taking a reasonable or minimal amount of time.

6. Make a connection

  a. Engage people in what is going on; create connectedness
    Semantic differential: User feels detached <> Users feel drawn in
    Product Metric: Users can relate to elements of the experience.
    Behavioral Metric: User is focused on the product. User takes less time to learn. User is immersed in the experience.
    Survey Metric: User rates the "connectedness" of the experience highly, or describes it as immersive.

  b. Understand potential barriers and offer users identifiable ways to overcome them
    Semantic differential: Users get stuck <> Users overcome barriers quickly and easily
    Product Metric: Barriers are minimal; universally identifiable and easily grasped 'hooks' offer routes through any necessary barriers.
    Behavioral Metric: User does not encounter design barriers, or easily overcomes obstacles.
    Survey Metric: User perceives experience to be barrier-free. Users describe hooks they encounter as easily understood.

  c. Use well-crafted storytelling to immerse users in the encounter
    Semantic differential: User uninvolved, rejects premise <> User is drawn into story/encounter
    Product Metric: Story is worked into experience seamlessly.
    Behavioral Metric: User is invested in story and encounter, does not want to leave experience.
    Survey Metric: User rates storytelling highly, describes encounter as immersive.

7. Share control

  a. Follow standard usability guidelines
    Semantic differential: User feels isolated and powerless <> User feels in charge
    Product Metric: Experience flows, contains elements to which user can relate and over which he or she feels a sense of control.
    Behavioral Metric: User is focused on the product, takes less time to learn, finds the experience immersive.
    Survey Metric: User rates experience "connectedness" highly and describes experience as immersive.

  b. Provide users with resources to construct something
    Semantic differential: User lacks resources <> User has ample resources for creating content
    Product Metric: Experience includes sufficient resources to create things; participation yields new content.
    Behavioral Metric: User finds resources with ease, encounters no difficulty constructing things.
    Survey Metric: User rates availability of resources highly.

  c. Provide a selection of professional-quality components for users
    Semantic differential: No access to quality components <> High quality components available
    Product Metric: Experience includes access to high quality elements.
    Behavioral Metric: User locates desirable components, is able to use them. User-created content reflects inclusion of quality components.
    Survey Metric: User is happy with component selection.

  d. Make the process of interpretation participatory
    Semantic differential: User is left out of interpretation <> User is involved in analysis
    Product Metric: Experience offers opportunities to interpret encounter.
    Behavioral Metric: User sees chances to be a part of the process, participates in interpretation.
    Survey Metric: User is happy with their involvement in the process.

  e. Ensure user actions will not have bad or irreversible consequences
    Semantic differential: User is locked in to actions <> User can reverse undesirable actions
    Product Metric: Actions, including errors, can be easily undone.
    Behavioral Metric: Users are confident in their actions and unafraid to act.
    Survey Metric: User reports comfort with error, understands mistakes are not final.

8. Support interactions among users

  a. Create opportunities for users to interact
    Semantic differential: User feels isolated from other users <> Users interact
    Product Metric: Experience contains easily accessible interaction opportunities.
    Behavioral Metric: User encounters chances to interact with others. User interacts with others.
    Survey Metric: User rates the experience as very interactive. User is happy with the quality of interactions present.

  b. Allow users to share what they create
    Semantic differential: Users cannot share creations with others <> Users share their creations
    Product Metric: The experience includes easy ways to distribute user work.
    Behavioral Metric: User utilizes the sharing options.
    Survey Metric: User rates sharing options highly or reports sharing to be a key part of the experience.

  c. Provide clear protocols for interaction with others
    Semantic differential: User is confused re: interaction <> User understands sharing procedure
    Product Metric: The experience embeds obvious protocols for interaction with others.
    Behavioral Metric: User recognizes and makes use of interaction procedures easily and without errors.
    Survey Metric: User rates the interaction procedures as obvious.

9. Create a sense of place

  a. Give users a sense of place, cues about where they are
    Semantic differential: User has no clue regarding location <> User has a sense of place
    Product Metric: The interface features clear, easily visible, and easy to understand indicators of user position.
    Behavioral Metric: Users recognize location indicators, understand where they are and where they are about to go.
    Survey Metric: User rates the location cues as very clear.

  b. Provide consistency in look and feel to foster a sense of place
    Semantic differential: User is confused by different styles <> User has a feeling of unity
    Product Metric: Experience has a unified theme.
    Behavioral Metric: User welcomes the consistent look and does not get confused.
    Survey Metric: User rates the look and feel as cohesive.

  c. Allow for efficient search as well as exploration
    Semantic differential: User lacks tools for exploration <> User can search and explore
    Product Metric: Design includes search and allows for exploration.
    Behavioral Metric: User finds what they are looking for quickly. User both searches and explores.
    Survey Metric: User rates search and exploration features as effective.

  d. Use natural relationships (categories, hierarchies, similarity, temporal order)
    Semantic differential: Seemingly arbitrary connections <> Natural, easily grasped relationships
    Product Metric: Natural connections support progress through the experience and interface.
    Behavioral Metric: User quickly and easily navigates, understands relationships.
    Survey Metric: User rates movement through the site as natural.

10. Plan to continue the engagement

  a. Design for the next engagement
    Semantic differential: User is stuck in the past <> User is ready to continue
    Product Metric: Relationship with product is ongoing, can persist beyond a single experience or task.
    Behavioral Metric: User is drawn into/stays with experience, is willing to return to it.
    Survey Metric: User rates continuity of engagement highly, spends more time with the experience.

  b. Make calls to action clear
    Semantic differential: User is stuck deciding <> User understands what to do next
    Product Metric: The experience includes beneficial guidance for the user and clear action options.
    Behavioral Metric: User makes easy progress from action to action.
    Survey Metric: User rates calls to action as clear and easily understood.

  c. Invite users to continue connections past the current encounter
    Semantic differential: User has no interest in continuing <> User pursues deeper connection(s)
    Product Metric: Progress beyond any given point is available to the user.
    Behavioral Metric: User moves deeper into the experience.
    Survey Metric: User rates access to further experiences highly.
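
To suggest one way the metrics in Table 1 might be recorded during an evaluation, the sketch below (in Python) represents a single heuristic sub-item with its semantic differential and collects evaluator ratings against it. The data structure, the 1-to-7 rating scale, and all names are illustrative assumptions for this article, not part of the published toolkit.

    from dataclasses import dataclass, field
    from statistics import mean
    from typing import List, Tuple

    @dataclass
    class MetricSet:
        """Operationalized metrics for one heuristic sub-item (see Table 1)."""
        differential: Tuple[str, str]  # semantic differential: (unmet end, fully met end)
        product: str                   # product metric
        behavioral: str                # behavioral metric
        survey: str                    # survey metric
        ratings: List[int] = field(default_factory=list)  # assumed scale: 1 (unmet) to 7 (fully met)

        def add_rating(self, score: int) -> None:
            if not 1 <= score <= 7:
                raise ValueError("rating must fall on the assumed 1-7 differential scale")
            self.ratings.append(score)

        def summary(self) -> float:
            return mean(self.ratings) if self.ratings else float("nan")

    # Heuristic 1a from Table 1, scored by three hypothetical evaluators.
    item_1a = MetricSet(
        differential=("User is confused", "User understands everything"),
        product="Use is logical and straightforward.",
        behavioral="User understands the interface without assistance, does not get confused.",
        survey="User describes experience as logical or intuitive.",
    )
    for score in (5, 6, 4):
        item_1a.add_rating(score)
    print(f"1a. Recognize that nothing is intuitive to everyone: mean rating {item_1a.summary():.1f} of 7")

An evaluation team could extend the same record to all ten heuristics and keep product, behavioral, and survey observations alongside the numeric ratings.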

Case Studies: Application of Heuristics and Metrics to Exemplars

Case One: “Tshinanu, All of Us”, A Culturally Based Web Site

The Tshinanu, All of Us Web site (www.tshinanu.tv) was designed as a companion to the Tshinanu (Us Together) television series (2006), and seeks to provide a multisensory experience depicting social, economic, and cultural aspects of life in and around “First Nations” communities in the Canadian province of Quebec. Available in HTML and Flash and in English and French, the site incorporates warm, inviting colors (yellow, red, and brown) with traditional and contemporary images from First Nations cultures (Figure 3). The site design announces to visitors that they are entering a new experience, sets the scene for content they will encounter, and leads them to increased awareness of a unique cultural consciousness.

The Multisensory Experience. Multisensory experiences like this one draw on audio and visual elements to provide content, offering images, colors, forms, and sounds to which users may respond on subconscious and emotional levels. Ideally, the emotional connection creates affective domains that connect users to information and experiences. Users who are engaged and comfortable on an emotional level are likely to be receptive to the cognitive information that follows (Gazda & Flemister, 1999), resulting in an intuitive experience that envelops participants in a learning or cultural space (Search, 2007).

 

 

 

Figure 3. “Tshinanu, All of Us”, www.tshinanu.tv, Features Warm Colors, Traditional Symbols, and Contemporary Faces Inviting Visitors to Enter a New Cultural Experience

 

 

 

Digital Storytelling. Storytelling is a powerful design element, offering an opportunity to connect participants through shared experiences, information, and other content. Narratives feature prominently in marketing and in a wide variety of applications in education, business, and e-commerce (Search, 2007). Narratives on Web sites often appear in the form of testimonials, product reviews, wikis, blogs, discussion boards, and social messaging, and are a key element of social networking sites such as Facebook, Twitter, and MySpace. Narratives create a sense of community and give participants a sense of identity. Stories help users relate general principles to specific contexts and personal experiences (Edelson, 1993). As a result, designs including stories can help users understand diverse cultural perspectives by mapping new traditions to their own personal experiences (Search, 2002). Such stories are particularly engaging when they create new learning opportunities or communicate human experiences reflecting familiar emotions and cross-cultural themes (including humor, success, failure, and death). Visitors to Tshinanu, All of Us have an opportunity to 'meet' people through numerous videos, learning about their cultural traditions through the stories they share.

Proposals to Enhance the Experience. The experience might be enhanced with additional content, such as maps showing the locations of these featured communities, audio demonstrations of the pronunciation of native words (including Tshinanu), and additional background information about the featured content and the multiple languages included on the site. The following guidelines might create a more lasting and unified experience for a broader audience:

  • Inclusion of universal themes, emotional contextualization, or experiences similar to those of the audience to create a sense of community.
  • Provision of additional background information to moderate the differences between audience cultural experiences and those of the featured subjects.
  • Reduction of communication barriers using contextual help such as descriptions of featured languages, with links to resources for additional study.
  • Addition of visual landmarks such as geographic maps and timelines to orient the user and situate the content in a broader, real-world and historical context.
  • Provision of a means for the audience to engage in discussion or dialog with others, or to obtain additional information (such as by asking questions).

Case Two: The Interactive Image

The HIV/AIDS awareness and prevention campaign exemplar is a Web-based, interactive image designed to facilitate interaction between users and communication designers in the design process through the use of interactive cultural esthetics—that is, a predetermined set of visual elements that the user can customize to suit their cultural preferences prior to production of the final form (Bennett, 2012). It is based on a printed image designed for and with Kenyans through a tech-mediated, participatory workshop process facilitated remotely by expert communication designers in the United States and locally by a graduate student situated in Kenya.

 

 

 

Figure 4. Initial Image Design

 

 

 

Initial Evaluation. The first heuristic evaluation of the printed image shown in Figure 4 was conducted with a culturally diverse group of participants through an Internet-based survey on SurveyMonkey.com. The researcher asked local and remote evaluators to complete a dozen tasks based on Nielsen's (1994) ten usability principles and representing typical interactions between a user and an image. For example, she asked:

  • If you saw this image on a wall, would you go over to it to read it?
  • What does the red ribbon mean to you? What does the image of Kenya mean to you? What does the image of the woman mean to you?
  • What emotions do you feel as you look at the image? Which parts of the image make you feel that way?
  • If you were working in a health office and this image was given to you, what would you do? Who would you tell about it?
  • Could this image influence your behavior? Could this image influence the behavior of others?

Most evaluators had a weak emotional response to the image and concluded they would not engage with it beyond a first glance. The image of Kenya was seldom recognized by, and meant very little to, participants living in the United States. Similarly, the red ribbon, though used in the US to represent HIV/AIDS awareness, was not universally recognized, and one Nigerian evaluator thought it represented Kenya as a gift. Even evaluators recognizing the message that Kenya has an HIV/AIDS problem did not understand how to “Act Now” as the image advocates, and felt that the call to action required clarification. One African-American evaluator stated that she would share the message of the image with family members. Most said the image would not influence their sexual behavior, though they believed it might influence the behavior of others.

Redesign and Second Evaluation. The new set of heuristics guided a transformative redesign of the printed image into the Web-based, interactive one shown in Figure 5. This version has multiple pages. The first welcomes the viewer and offers background information. The second provides instructions on how to use the interactive image. The third displays the interactive image with a given set of modifiable visual elements. For instance, the user can change the identity of the featured person by clicking on the woman and selecting another image, and, by the same means, alter the featured country, font, typestyle, point size, text color, and message (within a set character limit). By clicking on the margin, the user can print the image to a PDF for email distribution or to a local or networked printer. The user can also click on the image's background to view a sub-menu containing links to additional social-networking-based sharing options and information, continuing the engagement beyond the image.
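
As a rough illustration of the customization model just described (and not the actual Web implementation, which is not documented here at the code level), the following Python sketch uses hypothetical names, default values, and an assumed character limit to show how the modifiable elements and the print action could be represented:

    from dataclasses import dataclass

    MESSAGE_LIMIT = 60  # assumed character limit; the published design enforces its own

    @dataclass
    class InteractiveImageState:
        """Hypothetical model of the modifiable visual elements described above."""
        person_image: str = "woman.jpg"      # featured person, selectable by the user
        country_outline: str = "kenya.svg"   # featured country, selectable by the user
        font_family: str = "serif"
        point_size: int = 24
        text_color: str = "#B22222"
        message: str = "Act Now"

        def set_message(self, text: str) -> None:
            # Enforce the character limit placed on the user-edited message.
            if len(text) > MESSAGE_LIMIT:
                raise ValueError(f"message exceeds {MESSAGE_LIMIT} characters")
            self.message = text

        def export_summary(self, path: str) -> None:
            # Stand-in for the margin click that prints the image to a PDF;
            # here the chosen settings are simply written to a text file.
            with open(path, "w") as handle:
                handle.write(
                    f"{self.message} | {self.person_image} | {self.country_outline} | "
                    f"{self.font_family} {self.point_size}pt {self.text_color}\n"
                )

    state = InteractiveImageState()
    state.set_message("Know your status. Act now.")
    state.export_summary("awareness_image_settings.txt")

The point of the sketch is only that each customization is a bounded choice over a predetermined set of elements, which is what makes the interactive cultural esthetics approach tractable to evaluate.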

The second evaluation reviewed both the original printed design and the interactive image. Participants were of both genders (1:1) and a variety of ethnicities, ranged in age from 12 to 55, and possessed a minimum of six years' experience using the Internet. These evaluators found the interactive design more engaging and enjoyable, and reported both that they felt they belonged in its targeted group and that they easily understood what to do next throughout the experience. There was an increase in reported interest in the subject of HIV/AIDS awareness. Note, however, that evaluators still did not feel compelled to recommend the interactive image to others or to make a deep connection to its content or to a community.

Case Three: Collaborative Wikis in Higher Education

To investigate the collaborative nature of wikis and the potential value of this tool for higher education courses that require multi-authored writing projects, another team embarked on a three-year project involving a wiki prototype. Over four design and testing cycles, the wiki transformed and developed across three wikiware platforms (MediaWiki, TWiki, and ultimately the commercially available Clearspace). In the first two rounds of prototyping and testing, the team created an initial exemplar to better understand the issues involved in asking students to write on a public platform. The team tried to determine the relative influence of users' perceptions of privacy, intellectual property, and general Web usability on their motivation to contribute. The questions that guided this initial research included: What would motivate participants to contribute? What helps them understand the tool as an aid for collaborative knowledge production? What helps them feel safe enough to contribute content?

 

 

 

Figure 5. Interactive Image Screens, Showing Interactive Visual Elements

 

 

 

After two rounds of design and testing without explicit incorporation into a specific class context, the team realized they needed to design collaborative wiki writing assignments in conjunction with the other contextual classroom elements that influence a user's overall learning experience. In the second half of the project, they worked directly with Rensselaer Polytechnic Institute faculty in Engineering and in Product Design and Innovation to create a writing assignment that would use wikis and also fit with the professors' overall course goals. After creating and implementing a pilot assignment in an Engineering course, “Introduction to Air Quality,” in Fall 2007, the team worked with two professors who agreed to use the assignment in their communication-intensive courses in Spring 2008. Professor Lupita Montoya used the revised wiki-based group writing assignment in Introduction to Engineering Design, a required, first-year, writing-intensive engineering course, and Professor Dean Nieusma used it in Product Design and Innovation Studio 6, the sixth in a series of design courses in which junior students worked together to generate a product concept. By incorporating the assignment in two very different courses whose culminating final assignments included collaborative writing, the team generated a broader range of contextualized responses and feedback.

Although the exemplar was designed and tested for an educational context, students' experiences with the wiki led the team to develop some design principles that can be applied more generally to the problems of engagement users confront when asked to perform familiar tasks in unfamiliar ways. The wiki specifically helped to facilitate four desirable conditions in educational, professional, and other writing contexts: (1) collaborative knowledge production, (2) better understanding of writing as iterative, recursive, and collaborative, (3) iterative and recursive content development, and (4) more polished writing.

The group also found that a successful wiki facilitates users' collaborative writing, participation, and contribution while reinforcing Web writing conventions in a more direct way than other online media dependent on user-generated content, such as blogs or social networking sites. Wikis highlight the value of traditional usability prescriptions (Nielsen, 1994) for Web writing. Text-heavy pages or a lack of chunked, bulleted, or visually highlighted information can lead to slower content development, miscommunication, and failures to meet project objectives. Wikis benefit from clear, easy-to-follow navigation and from following other Web conventions, such as the use of underlining and color to denote hyperlinks. Additionally, wiki writing spaces that resemble or evoke established desktop publishing interfaces familiar from Microsoft Word or OpenOffice often have a gentler learning curve, leading to increased participation and content production. Users expect formatting options such as italic, bold, and underlined text, and may also use font color and size options. They also expect the ability to upload, embed, or link to media such as presentations (in this case PowerPoint, Keynote, and other slide-based formats), video, sound, and images. While other Web-based writing platforms may also invoke or create strong community elements, wikis are the only medium whose community focus begins with and expands out from users' desire to write together with others, across both space and time.

Case Four: Distance Learning

A wide variety of technologies have been used by corporations and educational institutions to deliver professional training and academic education, including computer-based drills, multi-user role-playing simulations, text-based asynchronous chat systems, and Internet-based synchronous meeting and classroom systems. Despite the apparent advantages of distance systems, which, in theory, offer convenience and reductions in travel time and related expenses, Driscoll (2008) and Shank (2008) point out that many such systems fail due to problems including high development costs and the small percentage of targeted professionals who actually make use of the content.

Rensselaer Polytechnic Institute has provided professional training and education in technical communication for sixty years, and by electronic means for about ten years. A research group consisting of faculty members Robert Krull and Roger Grice and graduate students David Lumerman, Michael Madaio, and Dustin Kirk assessed the effectiveness of the electronic delivery systems in use for distance education via questionnaires, observation of learner performance during classes, and simulations of class workshops. The quantitative and qualitative data collected documented learner performance, perceptions, and preferences. RPI's distance learning program uses real-time video instruction, so our data are not unreservedly applicable to the full range of electronic and non-electronic systems used in teaching and training (listed by Driscoll, 2008), which includes courses based entirely on asynchronous, text-based delivery systems like those described by Rubens and Southard (2000, 2005).

Factors in the Success or Failure of Distance Learning. The research team concluded that the success or failure of distance learning depends both on the technology and on the ways it is used. Overall, distance learning can provide a valuable experience for all participants if several factors are considered.

Participants need to cope with a learning platform's technical requirements and technological components. As the number of technologies used increases, the time necessary for users to launch the system or recover from technical problems increases. The time spent coping with the technology must be added to the time spent directly with instructional content. Participants reported that they needed 20-30 minutes to get all requisite hardware and software running before each class began, and many needed to reboot and relaunch during a class session when any component of the system ceased to function. When the complexity of the learning platform pushes technological boundaries (as our allowing increasing numbers of students to access audio and video simultaneously often did), the delivery system may become fragile. Instructors and learners must make the most of opportunities to discover which stresses on a delivery system lead to failure, and plan to avoid them.

A more elaborate delivery platform may have educational advantages, provided there is enough support to keep it running. Two online delivery methods employed by the Society for Technical Communication (recorded conference presentations with slides and audio, and webinar conference systems that include text chat) work reliably within the capabilities of technology in general use and are less prone to failure; both work best when information moves downstream from presenters to an audience. RPI's system, which incorporates more upstream and downstream channels, was correspondingly more fragile. Technological delivery systems thus tend to reinforce notions of an either/or trade-off between stability and broader participation and interactivity, even though the instructional design literature shows that instructors and students benefit from going beyond a purely downstream-oriented master-teacher system (Danchak & Huguet, 2004). The instructional design literature on classroom instruction (Gagné, Briggs, & Wager, 1992), computer-based instruction (Alessi & Trollip, 2000), and blended face-to-face (F2F) and electronic instruction (Horton, 2000) stresses the importance of learner engagement with subject matter beyond passive reception of lectures. However, though active participation and collaboration are generally important, they do not operate in the same way in all learning environments (Benbunan-Fich & Hiltz, 2003).

Particularly when learners are professionals with considerable work experience, they both want and are able to collaborate in peer-to-peer instruction. One study of distance learning courses at RPI showed that peer-to-peer collaboration networks yielded individuals who were regarded as particularly knowledgeable and as leaders of their learning communities (Sundararajan, 2009).

Learners are inventive, even with simple tools like text chat. Our learners got clarification from each other regarding administrative issues, such as what course content would be on examinations, and on the course content itself, such as the meaning of terms and the ramifications of theoretical concepts. This type of learner communication was found to be a key benefit of distance education. Our data suggest that it is important for teachers to nurture a collaborative learning community in which learners engage with each other through the upstream components of a distance learning platform. Upstream technical components include text chat and audio through telephone or VOIP networks; the content of upstream information includes the knowledge generated by learners in their peer-to-peer interactions. Delivering instruction over a purely downstream system is less effective, and workarounds may be worthwhile. For example, downstream lectures could be supplemented by asynchronous learner interactions whose products are integrated into subsequent lectures.

A blended learning platform that delivers instruction to face-to-face (F2F) classrooms through live discussion and consigns isolated distance learners to collaborating electronically can produce two separate learning communities, one in the classroom and one in the ether.

F2F participants can interact with each other without an interposed electronic medium, but they are at a disadvantage in that they must take turns speaking or risk interrupting other learners or the instructor. Distance students must interact through electronic media, but those media can both constrain and enhance communication.

At RPI distance learners could interact with each other in real-time via text chat, but for them to draw the attention of participants in the F2F classroom, they needed to have someone in that classroom paying attention to the chat window speak up on their behalf. If instructors directed their attention primarily to the F2F classroom, they might not notice new items in the chat window. To compound the problem, video from the F2F classroom reached distance students after delays of up to one minute. By the time distance learners saw the video, digested its content, and typed in a reaction, the F2F class had moved on. In that sense, the media constrained distance learners from interacting naturally with F2F learners. RPI tried to ameliorate this problem by assigning a teaching assistant to monitor the chat window, to compile related comments, to address some topics without involving the instructor, and to bring important issues to the instructor's attention. This procedure helped, but F2F and distance students still felt themselves to be part of separate communities. An additional way to ameliorate the problem was to establish an etiquette that indicated it was acceptable for distance learners to signal instructors, even when the signal appeared late. Since everyone recognized that the delivery platform entailed unavoidable delays, they could accept that some adjustments to normal “conversation” needed to be made for the electronic media.

The electronic media also expanded communication possibilities: when reliable Wi-Fi signals became available in the F2F classroom, students began to use laptop computers to join distance students in the chat space. Because there was almost no time delay in the chat space, F2F and distance students were able to share comments synchronously. One unanticipated benefit of text chat was that learners could share information without interrupting speakers in the F2F classroom. That benefit made it possible for all learners to comment in text on what was said aloud in the classroom, thereby elaborating on the instructor-centric interaction.

These findings can be extrapolated to other educational situations. For example, the Society for Technical Communication's archive of conference presentations offers downstream information, predominantly in a lecture format. Some other organizations have offered live video or audio feeds from conference sessions to electronically connected participants who attend the sessions live, but at a distance. Either the STC format or the live-feed format might be augmented by chat spaces or bulletin boards in which participants can exchange information electronically. Though the STC's webinars do have a text-chat component, participants may need additional encouragement to use it for discussion while presenters are speaking. Participants who respect the presentation context tend to be reluctant to engage in side conversations, and this sensibility shapes their initial use of text chat despite the less intrusive nature of the medium. At RPI, we found that it took a while, sometimes multiple sessions, to establish an etiquette allowing for that kind of discussion, but that once such an etiquette is in place, participants are likely to try it out. The STC could explore ways of communicating to participants that their peer-to-peer interactions are welcome and could be educationally helpful. One method might be to assign a moderator to monitor these interactions as they happen and to bring some of the points made in them to the attention of presenters during question-and-answer periods or even during the presentation itself.

Despite the problem of face-to-face and distance students perceiving themselves to be part of separate, parallel universes, all students felt that mixing face-to-face with distance instruction was a valuable part of learning. Students with very different backgrounds and goals could gain from sharing information with each other.

William Wetmore and Louis Ruggerio, the two graduate students who developed the first version of the questionnaire we used (in several versions) over three years, suggested that learners might prefer to watch recorded lectures on their own time and reserve class time for open discussion. Questionnaire responses showed instead that students also valued the opportunity to ask questions and make comments during live lectures.

Observations of Learners during Classes and Virtual Laboratory Tests. The questionnaire data provided useful information about learner preferences. We supplemented it by observing F2F and distance learners during classes. We also conducted virtual laboratory tests that involved observing users collaborating via the whiteboard of the distance platform and then discussing their collaboration. Finally, we held debriefing interviews after the conclusion of each session. Findings included the following:

  • Learners needed about half an hour in advance of the class to get hardware and software running, and this time period did not diminish with experience.
  • The distance technology needed regular attention during the class. Learners regularly lost connection with the live classroom and needed to re-launch the learning platform. Learners missed some instructional material as a result, though not enough to feel the instructional model was threatened.
  • In the two- to three-person groups in the virtual laboratory tests, learners overwhelmingly preferred having an audio connection for discussion to being linked by text chat. For larger groups, audio connections were likely to produce unwelcome echoes or feedback when learners tried to speak simultaneously. The unreliability of multi-source audio led to compensatory behavior, such as participants repeatedly asking whether others could hear them. The etiquette regarding use of the audio channel in class expanded at RPI to include, when possible, the use of microphone-enabled headsets to cut down feedback and the muting of participants' microphones unless they were actually speaking.
  • Similarly, a whiteboard feature of the distance learning platform that allowed multiple cursors to appear, one for each user with the user's name attached, was initially appealing but led learners in full classes to report that they felt they were “being attacked by swarms of cursor bees.” Learners found that they could work effectively only when one person was assigned to control the content on the whiteboard, assuring that only one person's named cursor would appear, and this revised procedure, a simple form of floor control (see the sketch following this list), was incorporated into their etiquette.
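
The single-controller procedure the learners adopted amounts to what interface designers sometimes call floor control: only one participant at a time may modify the shared surface. The minimal Python sketch below illustrates that idea in general terms; the class and method names (WhiteboardFloor, request_control, and so on) are illustrative assumptions, not features of the platform RPI actually used.

```python
# Minimal sketch of single-writer "floor control" for a shared whiteboard,
# mirroring the one-person-controls-the-content etiquette described above.
# Names are hypothetical and not drawn from any specific platform.
from typing import Optional


class WhiteboardFloor:
    """Grants drawing control to one participant at a time."""

    def __init__(self) -> None:
        self._holder: Optional[str] = None

    def request_control(self, participant: str) -> bool:
        # Control is granted only if no one else currently holds it.
        if self._holder is None:
            self._holder = participant
            return True
        return False

    def release_control(self, participant: str) -> None:
        # Only the current holder can give up control.
        if self._holder == participant:
            self._holder = None

    def can_draw(self, participant: str) -> bool:
        # Everyone else can see the board but cannot add cursors or content.
        return self._holder == participant


if __name__ == "__main__":
    floor = WhiteboardFloor()
    print(floor.request_control("Alice"))  # True: Alice controls the board
    print(floor.request_control("Bob"))    # False: Bob must wait his turn
    floor.release_control("Alice")
    print(floor.request_control("Bob"))    # True: control passes to Bob
```

At RPI this control was established by etiquette rather than necessarily enforced in software, but the effect was the same: one named cursor at a time.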

The distance learning research team found that incorporating this research into their classes took them beyond the feedback typically obtained from end-of-semester evaluations, giving them useful information they would not otherwise have had. This type of research might help professional associations, corporate trainers, and other distance educators fine-tune their own electronic systems for delivering information to members.

Case Five: Connected Kids Information Gallery

The Connected Kids information system is a youth services resource for Troy and Rensselaer County, New York, featuring information for teens and adults and a gallery of images (http://connectedkids.rpi.edu/, retrieved March 4, 2012). The gallery is an experiment in the development of user-generated visual, audio, and textual information. This type of design has been celebrated as collaborative and participatory, and it has also been condemned as anti-social, dehumanizing, and potentially threatening (Bruns, 2008; Keen, 2007; Lanier, 2010; Lessig, 2006; Tapscott & Williams, 2006). The gallery illustrates these tensions, exploring both the opportunities and the problems that accompany the creating and sharing of private or proprietary information resources in an open and public medium. Our tests of the gallery documented some of the special problems (not unique to resources of this kind) related to information sharing among teens. Given the extraordinary popularity of resources featuring user-generated content, organizations of all types (commercial, civic, and social) are challenged to share user content and comments openly and transparently even as they take steps to protect their own proprietary information and interests through selective linking, active filtering, community moderators, and other means.

A fundamental belief in collaborative effort, in the power of “collective intelligence,” and in the “wisdom of crowds” drives the hope (and hype) surrounding this stage in the development of the World Wide Web (Bruns, 2008; Raymond, 2001; Surowiecki, 2005; Tapscott & Williams, 2006); it also drives predictions of enhanced collaboration, increased participation, and productive social collectives. Tapscott and Williams describe this “new Web” as a model of collective intelligence, “wikinomics” (pp. 18-19), and Axel Bruns posits “collaborative produsage” as the driving force behind collective intelligence, envisioning a new era of “information, knowledge, and creative work, collaboratively developed, compiled, and shared under a produsage model” that represents “a fundamental reconfiguration of our cultural and intellectual life, and thus of society and democracy itself” (pp. 16, 34).

Others question this unbridled optimism, pointing to the limitations and even dangers inherent in Web-based information resources (Keen, 2007; Lanier, 2010; Lessig, 2006). Andrew Keen views “the wisdom of the crowd” as illusory—the product not of user-generated content but rather of “user-generated corruption”—and claims that “the cult of the amateur” is responsible for a decline in the quality and reliability of information and the “distorting” and “corrupting” of “our national civic conversation” (pp. 27, 93-94). Jaron Lanier is less pessimistic about the promises of collective intelligence, but warns of the dehumanizing potential of Web-based information systems, which impose technical constraints and thereby reduce human potentials to predefined categories. Whereas Bruns notes that photo, music, and video sharing creates an audience of millions of potential viewers, Lessig cautions that powerful search capacities ensure ready access to these resources for both innocent and not-so-innocent users.

Hope for collaboration and participation on a global scale is thus offset by a need to guard against potential abuses: to balance opportunities for rich and diverse user-generated content against the need to safeguard information quality and to protect information that is private or proprietary. As a resource designed for teen users, the Connected Kids information system and gallery faces special problems of privacy and protection. The system offers self-serve data entry for local youth-services organizations, with simple copy-and-paste functionality for ease of use. It also includes separate interfaces for parents and young children, for teens and adults, and for children or teens of middle-school age. The gallery seeks to collect information about youth services and activities in visual and audio formats rather than solely as text. It is built on the open-source Gallery software (http://gallery.menalto.com/, retrieved March 4, 2012), with sophisticated search and comment functions, user-owned albums with thumbnail images, slideshows, show-and-hide customization, and flexible administrative and oversight options. The content in the gallery includes visual, audio, and textual components representing teen school, after-school, and summer-camp activities, such as school science projects, skating images with coach and skater interviews, and summer-camp educational content on issues related to local ecology and basic wilderness survival. This content is largely user-generated, posted by teachers or camp counselors who own and manage their own “albums,” sometimes with our assistance.

Special Measures: Privacy and Protection. To meet some of the special challenges associated with the protection and privacy of information shared by teens, we require signed permissions for all photos of teens posted to the gallery, and we permit only school officials, teachers, and youth-services personnel to post content of any kind. We also prohibit users from posting comments without moderator oversight. Because the Gallery software's comment function initially lacked moderator oversight, we developed a moderation function ourselves and shared it with the larger open-source community. Moderation, however, does delay response times, which deters use of the comment function by teens, who, even more than adults, expect their actions to generate immediate results.
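
The workflow described above can be summarized as a pre-moderation queue: comments are held out of public view until a moderator approves them. The short Python sketch below illustrates the idea in general terms; the class and method names (ModerationQueue, submit, review) are hypothetical and do not represent the code of the actual module we contributed to the Gallery project.

```python
# Minimal sketch of a pre-moderation queue for user comments, assuming a simple
# in-memory store; names and structure are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Comment:
    author: str
    text: str
    submitted: datetime = field(default_factory=datetime.utcnow)
    approved: bool = False


class ModerationQueue:
    """Holds comments until a moderator approves them for public display."""

    def __init__(self) -> None:
        self._pending: List[Comment] = []
        self._published: List[Comment] = []

    def submit(self, author: str, text: str) -> Comment:
        # Comments are never shown immediately; they wait for review.
        comment = Comment(author, text)
        self._pending.append(comment)
        return comment

    def review(self, comment: Comment, approve: bool) -> None:
        # A moderator either publishes the comment or discards it.
        self._pending.remove(comment)
        if approve:
            comment.approved = True
            self._published.append(comment)

    def visible_comments(self) -> List[Comment]:
        # Only approved comments are returned to the public gallery page.
        return list(self._published)


if __name__ == "__main__":
    queue = ModerationQueue()
    c = queue.submit("teen_user", "Great photos from science camp!")
    print(len(queue.visible_comments()))  # 0: still awaiting review
    queue.review(c, approve=True)
    print(len(queue.visible_comments()))  # 1: published after approval
```

It is this approval step, whatever its implementation, that introduces the response delay noted above.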

Assessing the Gallery's Features. To assess the gallery's features, we conducted user tests on site at a local high school, including open-ended small-group discussions. Our tests showed a mixed response to the gallery, with general appreciation for its sophisticated features but reservations about its limitations as an information-sharing resource. From a usability perspective, students encountered little difficulty with the gallery, though they seemed to prefer browsing to searching. From an experiential perspective, however, students noted an apparent lack of clarity of purpose and limited opportunities for image and information sharing. They expected more information in the form of locations, directions, maps, hours of operation, and the like. They also expected more activities or games and more color and visual appeal generally. Most strongly, they felt that a gallery directed to teens should include more of their own content and opportunities for them to add their own descriptions and captions for their work. As one of them observed, “the people who made this know more about it than anyone else.” In the high-school and summer-camp albums, especially, they wanted to see more comments by people their own age. In follow-up interviews with teachers, we learned that the students had regular experience with Google searches but especially enjoyed browsing in Photobucket (http://photobucket.com/, retrieved March 4, 2012). We suspect that this prior experience influenced their preference for browsing and also their expectation of ease of access to, and use of, photo-sharing and comment functions.

These challenges are not unique to teen users but reflect broader trends toward heightened expectations for information sharing coupled with the need to protect private or proprietary information. Our test results suggest that these challenges are substantial and will likely increase as teen users become adults. To address them, we suggest a compromise that permits but delimits user-generated content through selective linking and/or strict moderator oversight of discussion groups, blogs, or forums. Web-savvy users regularly access product information from resources such as CNET (http://reviews.cnet.com/, retrieved March 4, 2012) and Newegg (http://www.newegg.com/Feedback/Reviews.aspx, retrieved March 4, 2012) and from discussion groups, blogs, and forums, which provide ready answers to troubleshooting questions about products and services. Game forums, moreover, provide information not only about basic product functions and features but also about how to use products strategically to get positive results (see, for example, http://forums.worldofwarcraft.com/, retrieved March 4, 2012).

To paraphrase our teen user, “the people who use this product know more about it than anyone else.” Commercial, civic, and social organizations are challenged by the next generation of users to deploy user-generated information resources to best advantage: to promote their products and services and, at the same time, through selective linking and active moderator oversight, to protect against inaccurate or negative information and to safeguard their proprietary information and interests.

Conclusion

The emergence and continual evolution of new, interactive communication media and techniques provide many opportunities for technical communicators to communicate more effectively with their audiences; they also provide challenges. A major challenge is to determine which techniques and approaches to usability are effective and which are not. Many usability principles and metrics developed in the past enable us to assess certain aspects of technical communication in a tech-mediated world, but new technologies and new communication forms challenge us to identify additional assessment tools and metrics suited to the new communication environments in which we live and work.

In this study, we have examined aspects of technical communication that could benefit from a renewed focus on the relationship between people and the information that they use. We have developed a TMC Toolkit that consists of a set of ten TMC heuristics and a set of metrics that technical communicators can use to assess how well individual pieces of tech-mediated communication meet the goal of satisfying those heuristics.

This project demonstrated to our research group that tech-mediated communication in a variety of contexts moves users from control, through identity, and toward community, by processes distinct from those evident in traditional document-centered technical communication. It also documented a few of the many ways that the proliferation of technologies and of information influences issues of usability and user experience. Traditional metrics for the evaluation of document usability (efficiency, accuracy, and satisfaction), though still highly relevant, are no longer adequate, by themselves, for use in designing or evaluating tech-mediated communications. The TMC Toolkit can be used to evaluate a wide range of tech-mediated communications and is, itself, an end product here; it also represents one stage in a process of progressive re-evaluation that will continue, and must continue, as communication and technology continue to change. Broadening the scope of the evaluation of communication need not mean abandoning tried-and-true ideas, nor should it involve a relaxation of rigor. It simply involves surveying more connections, considering the blurring line between author and audience, and carefully, cautiously measuring what we find.

Acknowledgments

The researchers thank the Society for Technical Communication and Rensselaer Polytechnic Institute for their support. We also acknowledge the contributions of our undergraduate and graduate student participants, including Elia Nelson, Mohamad Hizar Khuzaimah, Jessica Woods, Dale Bass, Noah Schaffer, William Wetmore, Louis Ruggerio, Yeggor Pavlov, Jon Buckley, Dave Wallack, Tim Mansfield, and Jeff Linehan.

References

Alessi, S. M., & Trollip, S. R. (2000). Multimedia for learning: Methods and development. New York, NY: Allyn & Bacon.

Anderson, C. (2006). The long tail: Why the future of business is selling less of more. New York, NY: Hyperion.

Benbunan-Fich, R., & Hiltz, S. R. (2003). Mediators of the effectiveness of online courses. IEEE Transactions on Professional Communication, 46, 298-311.

Bennett, A. (2012). Engendering interaction with images. Bristol, UK: Intellect.

Bolter, J. D., & Gromala, D. (2003). Windows and mirrors: Interaction design, digital art, and the myth of transparency. Leonardo. Cambridge, MA: MIT Press.

Brinck, T., Gergle, D., & Wood, S. D. (2002). Usability for the web: Designing web sites that work. San Francisco, CA: Academic Press, Morgan Kaufmann.

Bruns, A. (2008). Blogs, Wikipedia, Second Life, and beyond: From production to produsage. Digital Formations 45. New York, NY: Peter Lang.

Danchak, M. M., & Huguet, M. P. (2004). Designing for the changing role of the instructor in blended learning. IEEE Transactions on Professional Communication, 47, 200-210.

De Jong, M., & Van der Geest, T. (2000). Characterizing web heuristics. Technical Communication, 47, 311-325.

Driscoll, M. (2008). Hype versus reality in the boardroom. In S. Carliner & P. Shank (Eds.), The e-learning handbook: Past promises, present challenges (29-54). San Francisco, CA: Pfeiffer.

Edelson, D. S. (1993). Aesops, and the computer: Questioning and storytelling with multimedia. Journal of Educational Multimedia and Hypermedia, 2, 393-404.

Gagné, R. M., Briggs, L. J., & Wager, W. W. (1992). Principles of instructional design. 4th ed. Orlando, FL: Harcourt Brace Jovanovich.

Gazda, R., & Flemister, M. (1999). Design and production of video for instructional multimedia: Psychological implications and proposed guidelines. Journal of Visual Literacy, 19(1), 85-98.

Hargis, G., Carey, M., Hernandez, A. K., Hughes, P., Longo, D., Rouiller, S., & Wilde, E. (1998). Developing quality technical information. Upper Saddle River, NJ: Prentice Hall.

Jordan, P. W. (2000). Designing pleasurable products: An introduction to the new human factors. London, UK: Taylor and Francis.

Keen, A. (2007). The cult of the amateur: How today's internet is killing our culture. New York, NY: Doubleday, Currency Books.

Lanier, J. (2010). You are not a gadget: A manifesto. New York, NY: Alfred A. Knopf.

Lessig, L. (2006). Code: Version 2.0. New York, NY: Perseus Books Group, Basic Books.

Lessig, L. (2008). Remix: Making art and commerce thrive in the hybrid economy. New York, NY: Penguin Press.

Lévy, P. (1997). Collective intelligence: Mankind's emerging world in cyberspace. (R. Bononno, Trans.). Cambridge, MA: Perseus Books Group, Helix Books.

McCarthy, J., & Wright, P. (2004). Technology as experience. Cambridge, MA: MIT Press.

Molich, R., & Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM, 33, 338-348.

Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press, Morgan Kaufmann.

Nielsen, J. (1994). Enhancing the explanatory power of usability heuristics. In C. Plaisant (Ed.), Proceedings of the ACM CHI 94 Human Factors in Computing Systems Conference (152-158). Boston, MA: Association for Computing Machinery.

Nielsen, J. (2000). Designing web usability: The practice of simplicity. Indianapolis, IN: New Riders.

Nielsen, J., & Mack, R. (1994). Usability inspection methods. New York, NY: John Wiley.

Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. In J. Carrasco & J. Whiteside (Eds.), Proceedings of the ACM CHI 90 Human Factors in Computing Systems Conference (249-256). Seattle, WA: ACM.

Norman, D. A. (2004). Emotional design: Why we love (or hate) everyday things. New York, NY: Perseus Books Group, Basic Books.

Raymond, E. S. (2001). The cathedral and the bazaar: Musings on Linux and open source by an accidental revolutionary. Rev. ed. Sebastopol, CA: O'Reilly Media.

Rubens, P., & Southard, S. (2000). Using technologies for communication and learning. In Proceedings of the 2000 Joint IEEE International and 18th Annual Conference on Computer Documentation (IPCC/SIGDOC 2000) (185-189). Cambridge, MA: IEEE.

Rubens, P., & Southard, S. (2005). Students' technological difficulties in using web-based learning environments. In K. C. Cook & K. Grant-Davie (Eds.), Online education: Global questions, local answers (193-205). Amityville, NY: Baywood.

Search, P. (2002). HyperGlyphs: New multiliteracy models for interactive computing. In R. Griffin, J. Lee, & V. Williams (Eds.), Visual literacy in message design (171-177). Loretto, PA: International Visual Literacy Association.

Search, P. (2007). Digital storytelling for cross-cultural communication in global networking. In R. Griffin, M. Avgerinou, & J. Giesen (Eds.), History, community, and culture: Celebrating tradition and transforming our future (1-6). Loretto, PA: International Visual Literacy Association.

Shank, P. (2008). Thinking critically to move e-learning forward. In S. Carliner & P. Shank (Eds.), The e-learning handbook: Past promises, present challenges (15-26). San Francisco, CA: Pfeiffer.

Shedroff, N. (2001). Experience design 1. Indianapolis, IN: New Riders.

Sundararajan, B. (2009). Impact of communication patterns, network positions and social dynamics factors on learning among students in a CSCL environment. Electronic Journal of e-Learning, 7(1), 71-84.

Surowiecki, J. (2005). The wisdom of crowds. New York, NY: Random House, Anchor Books.

Tapscott, D., & Williams, A. D. (2006). Wikinomics: How mass collaboration changes everything. New York, NY: Penguin Group, Portfolio.

Van der Geest, T., & Spyridakis, J. H. (2000). Developing heuristics for web communication: An introduction. Technical Communication, 47, 301-310.

About the Authors

Roger A. Grice is a professor of practice in technical communication and human-computer interaction in the Department of Communication and Media at Rensselaer. He was elected a Fellow of the Society for Technical Communication and received the society's Jay R. Gould Award for Excellence in Teaching Technical Communication and the IEEE Professional Communication Society's Alfred N. Goldsmith Award for Contributions to Engineering Communication. He is retired from IBM and now conducts HCI research as a member of the Rensselaer faculty, as well as teaching courses on human-computer interaction, communication design for the World Wide Web, information usability, and technical communication. Contact: gricer@rpi.edu.

Audrey G. Bennett is an associate professor in the Department of Communication and Media at Rensselaer. She teaches courses in communication design theory and research and conducts research on collaborative and participatory design as methods for cross-cultural communication. She is editor of Design studies: Theory and research in graphic design (Princeton Architectural Press), which chronicles historical and contemporary efforts of designers to broaden the scope of the profession of graphic design to include user research. Her research includes the development of a theory of interactive esthetics that democratizes the design process and places designers in virtual collaboration with lay users. Contact: bennett@rpi.edu.

Janice W. Fernheimer is an assistant professor of Writing, Rhetoric, and Digital Media at the University of Kentucky, where she teaches courses in rhetoric, technology, and pedagogy; digital writing; and Jewish rhetorical studies. Her research focuses on questions of identity, invention, and cross-audience communication. Previously, she was an assistant professor in the Department of Language, Literature, and Communication at Rensselaer. Contact: jfernheimer@uky.edu.

Cheryl Geisler is Dean of the Faculty of Communication, Art, and Technology at Simon Fraser University, as well as a professor in the School of Interactive Arts and Technology. Prior to joining SFU, she spent more than 20 years at Rensselaer in multiple leadership roles including two terms as head of the Department of Language, Literature, and Communication. Recently, she was the principal investigator of RAMP-UP, a National Science Foundation-funded project for institutional transformation. Contact: cheryl_geisler@sfu.ca.

Robert Krull is an independent researcher and recently retired as professor in the Department of Communication and Media at Rensselaer Polytechnic Institute. He has been involved with multiple projects for the Society for Technical Communication, and received the Jay R. Gould Award for Excellence in Teaching Technical Communication. His research areas include instructional television, media effects, computer documentation and interfaces, and acquisition of physical skills. Contact: rkrull@nycap.rr.com.

Raymond A. Lutzky is a doctoral candidate in the Department of Communication and Media at Rensselaer. His research includes usability, graphic design, and visual rhetoric. He supported public relations strategy for the 2007 STC Annual Conference and previously served as assistant to the STC President for student outreach. Contact: lutzkr3@rpi.edu.

Matthew G. J. Rolph is a doctoral candidate in Communication and Rhetoric in the Department of Communication and Media at Rensselaer. Previously, he served as adjunct faculty in the departments of English and Interdisciplinary Studies at Plymouth State University from 2001 to 2004 and then as coordinator for Plymouth State's College of University Studies from 2004 to 2008. Contact: rolphm@rpi.edu.

Patricia Search is a multimedia artist and professor at Rensselaer Polytechnic Institute. She has exhibited her artwork in 31 solo exhibitions and numerous international shows, and her art has been featured in over 35 publications, including three documentaries. She received best research paper awards from the World Conference on Educational Multimedia and Hypermedia and the International Visual Literacy Association (IVLA). She was awarded a Fellowship in Computer Arts from the New York Foundation for the Arts, a Fulbright Senior Specialists Grant for research in Australia, and the Creative Achievement Award from IVLA. She was President of IVLA from 2009 to 2010. Contact: searcp@rpi.edu.

James P. Zappen is a professor in the Department of Communication and Media at Rensselaer. He is author of The rebirth of dialogue, published by the State University of New York Press, and has also published in Journal of Technical Writing and Communication, Technical Communication Quarterly, Philosophy and Rhetoric, Rhetoric Review, Rhetoric Society Quarterly, and other journals. He is former President of the Council for Programs in Technical and Scientific Communication and has served as a consultant to the Dow Corning Corporation, the Michigan Judicial Institute, the New York State Department of Labor, and other organizations. Contact: zappenj@rpi.edu.

Manuscript received 21 January 2011; revised 13 April 2012; accepted 5 August 2012.