By Morgan Dennis
This article describes how a team of writers at IBM developed a model for writing scenarios to be published with the first release of a new product, InfoSphere BigInsights, in May 2011. To determine how effectively the scenarios reached the target audience and satisfied its goals, we conducted usability testing with representative participants to understand their expectations of the scenario genre, gather their impressions of our scenarios, and implement audience-driven changes to the scenarios’ structure and content.
This article explains the impetus for the testing and some of the challenges we encountered, all the while emphasizing the importance of usability testing for writers and the insights that testing with target populations can offer. It also outlines practical approaches to conducting testing and offers recommendations and considerations for adapting usability research to different team situations.
Background
In preparation for the InfoSphere BigInsights V1.1 release in May 2011, we produced a new type of documentation: the scenario, also called an extended example or use case (the terms are used interchangeably here). The function of our scenarios is to identify a common business problem faced by Big Data users and to describe how InfoSphere BigInsights solves that problem, underscoring the product’s value to potential customers.
At the time these scenarios were written, the Information Development (ID) team was composed of technical writers, two of whom developed the scenarios; a team lead, who directed the project’s schedule and resources; an information architect, who collaborated with the writers to develop the scenario template; a technical editor, who reviewed the template and drafts of the scenario; and a manager, who oversaw the project and provided insight and direction.
When it was published, the scenario consisted of five topics:
- An overview describing the business problem and explaining, at a high level, how InfoSphere BigInsights could solve that problem
- A description of the business and technical challenges, which identifies common challenges that users face as they deal with their business problem
- An overview of the solution, written for an audience of decision-makers, which summarizes the high-level process of implementing InfoSphere BigInsights
- A detailed description of the solution, targeted to a technical audience of database administrators, which provides details about architecture and implementation
- A thorough explanation of the results and benefits of the InfoSphere BigInsights solution to potential customers
Testing Objectives
While developing these scenarios, we researched the ways in which other IBM teams had written similar information. Unable to find a model that presented the content we wanted to convey in a way suitable for our purposes and audiences, we developed our own template based on a rhetorical analysis of the needs and expectations of our target audience, covering only those areas we thought our audiences expected from this genre.
Unsure, however, whether the depth and coverage of our content met our users’ needs, whether the subject matter identified a problem to which they could relate, and whether users understood the value of InfoSphere BigInsights to their enterprises, we wanted to conduct usability tests to:
- Understand our target users’ expectations of the scenario genre
- Determine user satisfaction with the scenario structure and content
- Identify recommendations for improving existing and future scenarios
Challenges
Once we decided to test our scenarios with users, we faced several challenges, specifically relating to:
- Identifying the target audience of the scenarios
- Recruiting representative test participants
- Designing test questions that addressed the testing goals
- Analyzing test data
- Validating changes to the scenario template and content
Identifying Target Audience
After eliciting input from the development and solution enablement teams and considering the functions of the information, we concluded that the primary readership of the scenarios consisted of two main groups:
- Business analysts, who evaluate new products to determine whether those products have value to their organizations
- Technical users, who evaluate new products to determine whether those products can be integrated into their organizations’ infrastructures
Given the requirements of these two audiences, the scenarios had to satisfy the needs and goals of each group quickly and effectively. Doing so posed another challenge: accommodating each group’s requirements for the presentation of the information, the role of tone and voice, the level of detail, and the structure of the information, all within the same information unit.
Recruiting Representative Participants
Our team worked closely with the user experience coordinator at IBM, who helped us define our user profiles and recruit test participants with experience in one or both of these roles. She developed a recruitment screener in which participants described the type, nature, and duration of their industry experience. We recruited a total of nine participants: five business analysts and four technical administrators; some participants noted that they had experience in both roles.
Designing Meaningful Test Questions
Our primary research questions concerned how users felt about the scenarios: namely, whether the structure and content of the scenarios helped them understand the value of InfoSphere BigInsights to their organizations. To this end, we developed pre-test questions that asked about participants’ experiences with and expectations of the scenario genre:
- When have you written or read scenarios in order to make purchasing decisions?
- When you are considering a new product, what types of information do you access to help you narrow down options?
- When has scenario documentation met or not met your needs?
- How likely are you to read multiple scenarios for a single product?
- What kinds of information must scenarios include to enable you to assess whether a product will satisfy your business need?
Following the pre-test interview, we asked participants to read two separate scenarios, noting aloud any observations or reactions as they read. After each scenario, we asked whether the business problem that each scenario identified was relevant to their jobs.
Finally, we wanted to understand participants’ overall impressions of our scenarios. To this end, we developed questions that would explore participants’ reactions to the scenarios’ structure and content as well as their impressions of the product. We asked the following questions after participants had read both scenarios:
- How well do the scenarios help you understand whether InfoSphere BigInsights has value for your enterprise?
- Do the scenarios contain enough detail for you to understand how InfoSphere BigInsights would help solve a business problem, and then to implement that scenario in your organization?
- Are the scenarios general enough that you can understand how they can apply to your enterprise?
- What types of information are missing or underdeveloped in the scenarios that would help you better understand or implement InfoSphere BigInsights?
- On a scale of 1 to 5 (with 1 being very dissatisfied and 5 being very satisfied), how satisfied are you with the structure (form) of the scenarios?
- How satisfied are you with the completeness of the scenarios?
- How satisfied are you with the organization of the scenarios?
- How would you rate your overall satisfaction with the scenarios?
- If you would be interested in this product, what would your next steps be?
Analyzing Test Data
Following completion of all nine tests, we collated our session notes and analyzed participants’ responses to each question, noting trends and patterns in their sentiment. We identified quotes that captured key ideas, noted nonverbal expressions, and reviewed other qualitative feedback. Additionally, we tracked the positive and negative responses, suggestions, questions, and comments, and we tallied the quantitative scores for each question.
Based on this analysis, we drew conclusions about the stronger and weaker portions of the scenario, and we drafted, reviewed, and implemented a prioritized set of recommended changes to the scenario template. Furthermore, with deeper insight into users’ habits and preferences in reviewing documentation and their expectations of the scenario genre, we were able to draw broader conclusions about how users want to access and consume information, which enabled us to implement changes to other information deliverables as well.
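To illustrate the kind of tallying described above, the following Python sketch aggregates Likert-style ratings and coded comments per question. It is a minimal, hypothetical example: the record structure, field names, and sample values stand in for whatever note-taking format a team actually uses, and are not the study’s data.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session notes: one record per participant per question.
# "rating" is the 1-5 Likert score; "comments" are coded observations.
responses = [
    {"participant": "P1", "question": "structure", "rating": 4,
     "comments": [("positive", "liked the overview topic")]},
    {"participant": "P2", "question": "structure", "rating": 3,
     "comments": [("negative", "wanted more implementation detail")]},
    {"participant": "P1", "question": "completeness", "rating": 5, "comments": []},
]

ratings = defaultdict(list)
sentiment = defaultdict(lambda: {"positive": 0, "negative": 0})

for response in responses:
    ratings[response["question"]].append(response["rating"])
    for polarity, _note in response["comments"]:
        sentiment[response["question"]][polarity] += 1

# Summarize each question: mean Likert score plus counts of coded comments.
for question, scores in ratings.items():
    pos = sentiment[question]["positive"]
    neg = sentiment[question]["negative"]
    print(f"{question}: mean rating {mean(scores):.1f} "
          f"(n={len(scores)}, +{pos} / -{neg} comments)")
```

A simple roll-up like this makes it easier to spot which parts of a deliverable draw the weakest ratings and the most negative comments, which is where revision effort is best spent first.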
Validating Recommendations and Changes
After incorporating the revisions to the scenario model, we wanted to confirm that our changes improved users’ overall perceptions of the scenarios’ quality. We recruited six participants for a second round of testing: three new participants and three who had tested during the first round. Our goal for this round, in addition to confirming that participants perceived the revised scenarios as better than the first versions, was to determine whether repeat participants would rate the quality of the scenarios higher or lower than participants who were seeing the content for the first time.
New participants followed the same test protocol as the participants in the first round. Repeat participants were asked to review the revised scenarios and then answer post-test questions that asked them to compare the first and second drafts of the scenarios:
- Did you notice changes to the content or structure of either scenario? If so, what changes? Was the impact of those changes positive or negative?
- Is there anything that is missing or that is underdeveloped?
- Do the scenarios contain enough detail for you to understand how InfoSphere BigInsights would help solve a business problem, and then to implement that scenario in your organization?
- Should the "Results and Benefits" topic appear in the navigation list?
- Since your first review of the scenarios, we added a "Learn More" topic to the scenario template. Is it in the right place? Do you want to see it in the table of contents? Would you like it to include anything else?
- On a scale of 1 to 5 (with 1 being very dissatisfied and 5 being very satisfied), how satisfied are you with the structure (form) of the scenarios?
- How satisfied are you with the completeness of the scenarios?
- How satisfied are you with the organization of the scenarios?
- How would you rate your overall satisfaction with the scenarios?
During the data analysis of the second round, we were able to:
- Confirm that participants held higher opinions of the revised scenarios than they did of the original scenarios (a comparison of the kind sketched after this list).
- Validate the impact of the revisions.
- Explore unanswered questions that arose during the first round.
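As a minimal sketch of the round-over-round comparison mentioned in the first point above, the following Python snippet compares mean overall-satisfaction ratings between rounds and between new and repeat participants. All ratings and participant labels here are hypothetical placeholders, not the study’s actual results.

```python
from statistics import mean

# Hypothetical overall-satisfaction ratings on the 1-5 scale.
# Participant labels and scores are illustrative placeholders only.
round_one = {"P1": 3, "P2": 4, "P3": 3, "P4": 4, "P5": 3,
             "P6": 4, "P7": 3, "P8": 4, "P9": 3}
round_two = {"P1": 4, "P4": 5, "P7": 4,      # repeat participants
             "P10": 4, "P11": 5, "P12": 4}   # new participants

repeat_ids = set(round_one) & set(round_two)
new_ids = set(round_two) - set(round_one)

print(f"Round 1 mean: {mean(round_one.values()):.2f}")
print(f"Round 2 mean: {mean(round_two.values()):.2f}")
print(f"Round 2, repeat participants: {mean(round_two[p] for p in repeat_ids):.2f}")
print(f"Round 2, new participants:    {mean(round_two[p] for p in new_ids):.2f}")

# Per-participant change for repeaters shows whether the revisions moved ratings upward.
for p in sorted(repeat_ids):
    change = round_two[p] - round_one[p]
    print(f"{p}: round 1 = {round_one[p]}, round 2 = {round_two[p]} ({change:+d})")
```

Separating repeat participants from new ones in this way helps distinguish genuine improvement from familiarity effects, which was the question the second round was designed to answer.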
Adapting and Applying Our Testing Methodology
While our methods were developed according to the specific needs of our situation, including our schedule, team, research questions, and testing objectives, our methodology can be generalized and applied by other teams that want to conduct similar research. For example, before designing any user research session, we recommend that writers or designers explore questions such as:
- Who is really reading this topic? This section? This information?
- What do they expect from it?
- What do they want to get out of it?
- What do they think about the structure, depth, coverage, and presentation of the information?
- What is at stake?
- How can this topic, section, or information be improved?
- What else do I not know about my users, their needs, or their expectations about this information?
After an analysis of the rhetorical situations surrounding potential testing opportunities, research teams can develop test plans with meaningful questions that deliver actionable insight to improve deliverables according to audience expectations.
Discussion and Recommendations
When analyzing the rhetorical situation around a potential usability test, consider the team’s schedules and resources:
- What are the constraints of the release schedule and when can the test be run?
- Does the team schedule allow for a subsequent round or rounds of testing?
- Given the content or subject of the test and the maturity of the design or information, how iterative should the testing be?
- Does the test require internal or external participants?
- Does the team collaborate with a usability team or representative who could review the test plan, act as a pilot participant, take notes during test sessions, help facilitate the test, and assist with test data analysis?
Consider, also, the context of the issue or problem that is being explored:
- What question or problem is the team trying to answer or solve?
- What data or feedback will solve the problem?
- What is the scope or range of the research question?
- What type of test is best suited to returning the necessary information or data?
- Who is the target user of the product or information?
Once our team explored these questions, we were able to develop a test plan that reflected our questions and objectives, recruit representative participants, conduct test sessions, and obtain valuable data that helped us improve our information deliverable.
Morgan Dennis joined IBM as an information developer in 2008 and is currently working on the Big Data portfolio. She completed her PhD at Purdue University in rhetoric and composition, where she was a part of a research team that conducted usability testing on the Purdue OWL. She completed her MA at West Virginia University in professional writing and editing and her BA at Assumption College in English. Her research interests include issues and developments in the fields of professional writing and usability, especially around documentation and information design.