62.1, February 2015

Quality in Product Reviews: What Technical Communicators Should Know

Jo Mackiewicz

Abstract

Purpose: Measuring the quality of product reviews via helpfulness votes is problematic for several reasons. I delineate the components of product review quality in order to assist technical communicators who manage their organizations’ user-generated content in identifying quality content and in helping reviewers produce quality content.

Method/Corpus: I analyze results from secondary research on product reviews and discuss six important components of review quality. I focus most attention on five components of review quality that technical communicators can assess—informativeness, valence, credibility, conformity, and readability—and briefly describe a sixth component—user characteristics. I also exemplify these components, drawing from a corpus of 8,973 product reviews gathered in 2013 from a variety of retail and review websites.

Results: Based on this analysis, I recommend strategies that technical communicators can use (1) to identify these components of review quality, (2) to develop a rich data set from which they can glean consumer wants and needs as well as trends related to their organizations’ products, and (3) to help reviewers write better reviews.

Conclusions: As the amount of user-generated content grows, the need to learn from it and the need to improve it grow. By using their knowledge and skills in new ways, technical communicators who manage and develop product reviews can stay relevant and necessary as organizations rely more and more heavily on user-generated content.

Keywords: credibility, informativeness, product reviews, quality, readability, user-generated content

Practitioner’s Takeaway

  • Using helpfulness votes to ascertain the quality of product reviews is problematic for several reasons; for example, new reviews often have few or no votes.
  • With a research-based heuristic, technical communicators can identify six components of quality reviews and then mine product reviews for product trends and other information.
  • Being able to assess review quality enables technical communicators charged with developing and managing their organizations’ product reviews and other user-generated content to improve the content that users contribute.

Introduction

Quality in technical and professional communication eludes hard and fast definitions, but some technical communication scholars have attempted the task of defining quality. Shelby (1998), for example, asserts that a quality document bridges individual and collective tastes, conforms to expectations, and is fit to use (p. 392). To most effectively analyze quality, a technical communicator considers a document’s context: an audience’s needs and expectations at a given time, a writer’s communicative goals, and the texts with which that document intersects and corresponds.

However, as technical communicators look for ways to grapple with an influx of user-generated content (UGC)—product reviews in particular—they can benefit from a general heuristic for assessing quality across contexts. This article delineates the components of product review quality in order to assist technical communicators who manage their organizations’ UGC in identifying quality content and in helping reviewers produce quality content.

Analyzing and Managing UGC

In multiple forms of UGC, people share their opinions on nearly every imaginable topic. In the immense UGC category of online product reviews, consumers evaluate everything from theme parks to yoga mats. Not surprisingly, then, review quality hinges on multiple variables. By delineating those components of quality—all the while keeping in mind the variety of rhetorical situations that encapsulate reviews—technical communicators can better use them to assess and promote content quality.

Take as an example this excerpt from a review of La Mer facial cream. The reviewer offers a recommendation targeted toward a specific group of potential consumers:

I recommend this product but this is definitely not for everyone. It took some time before I adjusted to the "weird" feeling of not feeling traces of my moisturiser after application. If you feel that the regular creme is too heavy for your skin or that the gel creme does not provide you with enough of a hydrated feeling, this soft creme format may be your perfect medium.

As I discuss in more detail below, research shows that explicit recommendations such as the one in this review (“I recommend this product but this is definitely not for everyone”), particularly recommendations that point to specific types of consumers who might want (or might want to avoid) the product, contribute to review quality. Technical communicators who can identify and foster such recommendations can better meet the needs of their organizations and of review users.

Sophistication in analyzing product reviews and other UGC grows more important as the amount of UGC continues to increase. According to Ian Tenenbaum of Crowdtap, a start-up company that analyzes social media for companies such as Ikea and American Express, users generated 80% of 2013’s online content—up 35% in five years (2013). Technical communicators are more and more often called upon to marshal and make sense of the vast amount of data that companies and organizations need to understand and, eventually, to use in choosing and improving products—a change that many see as positive:

This particular trend [ubiquity of and reliance on social media] presents fantastic opportunities for technical communicators to engage in conversation with end users, be more responsive to their issues, and tap into their knowledge to create even better, more meaningful content. (Adobe Systems, Inc., 2011)

That technical communicators increasingly engage in such conversations is readily apparent in the findings of Frith’s (2014) study of 23 moderators of online help forums. He found that the roles forum moderators took on “closely resembled the roles many technical communicators play in the workplace” (p. 180). Specifically, he found that forum moderators, like technical communicators, act as (1) knowledgeable nonexperts on the subject matter, (2) quality control experts, (3) translators of complicated, technical material, (4) information architects (for example, as FAQ and SOP developers), and (5) tone setters who establish “what is and is not appropriate” behavior (pp. 177–180). Based on his analysis, Frith (2014) concludes that technical communicators can “make a persuasive case that they have the technical and rhetorical skills to manage large communities” of content-generating users (p. 182). Technical communicators already are playing a substantial role in the development and management of UGC, and it appears that need for their expertise will only continue to grow.

Aside from the need for technical communicators to analyze and manage a vast amount of UGC, the need for technical communicators to play a role in improving the quality of UGC appears to be imminent. As O’Mahony and Smyth (2010) point out about UGC quality, “Anybody motivated to create content is virtually free to do so, and there is little or no quality assurance applied a priori to such content” (p. 164). With little oversight, the quality of UGC ranges widely; indeed, research supports the perception that much UGC lacks quality (for example, Rello & Baeza-Yates, 2012). Given the continued growth of UGC and users’ (unsurprising) preference for quality content (for example, Ghose & Ipeirotis, 2011), technical communicators charged with mining and managing the content that users contribute have an exciting opportunity to put their knowledge and skills to work in new ways.

One way that retailer, brand, and review websites have addressed the problem of assessing quality in product reviews is to allow users to vote on or rate the helpfulness of reviews. These votes and ratings serve as one means to determine review quality and have become a common way for marketing, natural language processing, technical and professional communication, and other researchers to operationalize review quality. Some sites, including the retailer Amazon.com, use a yes/no question ("Was this helpful to you?") to gauge a review's helpfulness. Other sites use comparable methods of assessing helpfulness. Allrecipes.com, for example, uses thumbs-up or thumbs-down votes. Such votes matter because sites often have a mechanism for sorting reviews by helpfulness votes, so reviews with votes and with the best ratio of helpful votes to total votes can get more exposure. Such systems for assessing quality via helpfulness votes, however, possess substantial validity problems and are therefore insufficient for technical communicators who monitor and manage UGC for their organizations.

Relying on helpfulness votes to determine review quality is problematic for several reasons. First, as Cao, Duan, and Gan (2011) point out, many reviews—even ones that have been posted for some time—have no helpfulness votes at all (p. 512). In their data, 3,500 reviews from CNET Download, 51% of the reviews had no helpfulness votes (p. 513). Liu et al. (2008) neatly delineate several other problems with using helpfulness votes to operationalize review quality:

  1. New reviews often have few or no votes. Liu et al. (2007) call this “early-bird bias” (p. 334). However, as Otterbacher (2011) points out, the extent to which review prominence relies on recency differs among sites containing reviews. On some sites, new reviews are more prominent than older reviews and thus are more likely to obtain helpfulness votes (p. 433).
  2. Some reviews fall victim to spam voting.
  3. Presentation according to helpfulness rankings causes a “rich-get-richer” scenario. Users see only the highest-ranked reviews, “leaving no opportunities for the newly published yet unvoted reviews to show up on users’ radar” (Liu et al., 2008, p. 443). Liu et al. (2007) call this phenomenon “winner circle bias” (p. 336).

When quality reviews fall through the cracks, users miss out on useful content that could help them make purchasing decisions. As important, when quality reviews get lost in the UGC shuffle, hard-working reviewers may lose motivation and thus be less likely to contribute to a site again. Technical communicators charged with analyzing reviews to understand consumers' preferences and needs and to identify product trends, as well as those charged with helping reviewers generate more useful reviews, will need to move beyond looking solely to helpfulness votes for indications of quality.

My goal here is to help technical communicators move beyond helpfulness votes as a measure of review quality. To do so, I discuss six important components of product review quality, focusing most attention on five components that technical communicators can assess—informativeness, valence, credibility, conformity, and readability—and briefly describing a sixth component—user characteristics (see Figure 1). I focus on the first five components because prior research indicates their important contribution to review quality (Mackiewicz & Yeats, 2014; Yeats & Mackiewicz, 2014) and because, on a pragmatic level, technical communicators can readily influence these five components. Assessing user characteristics, such as users' purpose in reading reviews, their tolerance for risk, or their intent to buy, requires methods beyond analysis of review text. In addition to describing these quality components, I exemplify them, drawing from a corpus of 8,973 product reviews gathered in 2013 from a variety of retail and review websites. By identifying these components of review quality, technical communicators can develop a rich data set from which to glean consumer wants and needs as well as trends related to their organizations' products, and they can help reviewers write better reviews.

Figure 1. The Components of Review Quality. Technical Communicators Can Influence Five of the Six Components: Review Informativeness, Valence, Credibility, Conformity, and Readability.

Review Corpus

As mentioned above, the examples here come from a corpus of 8,973 reviews with helpfulness votes—either positive or negative or a combination of both—randomly scraped from a wide range of brand and retailer sites in 2013. The reviews covered products such as these: Rayovac LED Tactical Flashlight, Marcella Wing Collar Evening Shirt, Jif Irresistible Peanut Butter Cookies, Williams Sonoma Breville Crispy Crust Pizza Maker, PepBoys Peak Performance 900 Peak Amp Jump Starter, Pampers Cruisers Diapers, Avon Foot Works Healthy Rough Skin Remover, Valspar Duramax Exterior Paint, Fidelity Rollover IRA, and TurboTax Online Federal Free Edition 2012. I chose the examples in this article from this corpus based on their ability to illustrate the components that, as I discuss below, research has shown to play an important role in review quality.

An important, related topic—one beyond the scope of my purpose here—is the challenge of detecting fake reviews. See Ong, Mannino, and Gregg (2014) and Ott, Choi, Cardie, and Hancock (2011) for two important studies related to fake (or “shill”) reviews.

Components of Review Quality

In the sections that follow, I discuss and exemplify six components of product review quality, paying particular attention to the five components that technical communicators can readily influence: informativeness, valence, credibility, conformity, and readability.

Informativeness

Perhaps the most important component of quality—certainly the one that is most obviously necessary—is the extent to which a review informs users so that they can make good purchasing decisions. Review research operationalizes informativeness in a variety of ways:

  • review length (word count)
  • a balance of subjective (evaluative) and objective (descriptive) statements
  • explicit statements of recommendation and of expectations met.

Here I discuss these characteristics of review informativeness.

Review Length. Critically important to quality is the extent to which a review contains evidence or explanation in support of the reviewer's claims. Perhaps the easiest way to operationalize review informativeness is through review length—the number of words that the review contains. Mudambi and Schuff (2010) and Pan and Zhang (2011) tested the relationship between a review's length and the number of helpful votes it received and found the two were associated. Similarly, Schindler and Bickart (2012) found a relationship between review length and users' ratings of review valuableness. These results suggest that technical communicators who are trying to identify quality reviews would do well to start, at a minimum, with reviews that are longer than average. In my corpus of 8,973 reviews, the average review contained roughly 124 words. I calculated this average by dividing the average character count per review by 5, the average number of characters in an English word (for example, WolframAlpha, 2014).
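To make these length measures concrete, the following minimal sketch computes average review length two ways: with a straightforward word count and with the character-count-divided-by-five approximation described above. It assumes the reviews are available simply as a list of text strings; the sample reviews and the five-characters-per-word constant are illustrative, not values from the corpus.

```python
# Minimal sketch: estimating average review length for a small corpus.
# Assumes `reviews` is a list of review texts (strings); both estimates
# below are rough heuristics, not the article's exact procedure.

def average_length_in_words(reviews):
    """Average length via a simple whitespace word count."""
    word_counts = [len(text.split()) for text in reviews]
    return sum(word_counts) / len(word_counts)

def average_length_from_characters(reviews, chars_per_word=5):
    """Average length estimated from character counts: average characters
    per review divided by roughly 5 characters per English word."""
    char_counts = [len(text) for text in reviews]
    return (sum(char_counts) / len(char_counts)) / chars_per_word

reviews = [
    "Sturdy flashlight. Bright beam, survived a drop onto concrete.",
    "The shirt collar frayed after two washes. Disappointed.",
]
print(round(average_length_in_words(reviews), 1))
print(round(average_length_from_characters(reviews), 1))
```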

Related to such research on review length is Cao, Duan, and Gan's (2011) study of the impact of the length of the review title on users' perceptions of review quality. In contrast to the findings of studies of review length, they found that the more words a review had in its title, the fewer helpfulness votes it received (p. 518). Cao, Duan, and Gan (2011) write about this finding, "Too much information contained in the title may discourage people from reading the entire review before voting on it" (p. 518). Although more research would help solidify the advice to limit the length of review titles, this finding—create succinct yet meaningful titles—is one that technical communicators can readily put to use when they interact with reviewers about ways to improve the quality of their reviews. One simple way to improve review quality, it seems, is to create review titles that briefly sum up the main point.

A Balance of Subjective and Objective Statements. Researchers have also operationalized informativeness by the degree to which a review balances objective (descriptive) and subjective (evaluative) content. Studying reviews of DVDs, audio and video equipment, and digital cameras, Ghose and Ipeirotis (2011) found that reviews containing a mix of objective and subjective content, especially “extreme,” or strong, subjective content, received high helpfulness ratings. Schindler and Bickart (2012) presented their participants with online purchasing scenarios and asked them to evaluate review valuableness. They divided the review content into two categories: (1) product evaluative (positive or negative) and (2) descriptive (reviewer descriptive or product descriptive). First, they found an association between product-descriptive statements and valuableness. In fact, participants appeared to tolerate “a large proportion of statements” that provided product description without any evaluation even more than they did “a large proportion” of positive evaluative statements (p. 238). Too many positive statements, they postulate, might “lead the reader to question the reviewer’s motives” whereas product-descriptive statements “may simply provide more useful information” and thus help users make purchasing decisions (p. 240). Balancing subjective statements with objective statements might increase a review’s value because it indicates a reviewer’s care in supporting his or her opinions.

The following excerpt, taken from a review of a Shoei GT-Air helmet, illustrates the review’s blend of evaluative and descriptive content. The review begins with a descriptive statement about the reviewer’s familiarity with the brand (a statement that builds credibility, a quality component discussed later) and then moves on to product evaluation:

…This will be the 3rd Shoei helmet I’ve owned, the other two being the Hornet dual sport and RF-1100. I’ve tried on the Qwest and Neotec several times, so I can speak to the fit comparison to them as well. For this review, I was able to go on a solid 30 minute ride in this helmet at sustained speeds up to 70mph. Right out of the box, the finish quality is everything we’ve come to expect from Shoei.

After this introduction, the reviewer moves on to product description:

My helmet is solid white and the paint and clear coat are flawless. I didn’t put the GT Air on a scale but I’d guess it’s about the same weight as the RF-1100. The breath deflector and pinlock lens are included but separate. …

After this description of the helmet, the reviewer switches back to product evaluation—a blend of positive and mitigated negative evaluative statements about the helmet’s lining, vents, and face shield:

The liner is plush and padded on the sides but a little rough on top (I'm bald, FWIW), almost like a soft scouring pad. It's not uncomfortable, just noticeable. It's the same fabric used on the crown of the liner in my Hornet, which softened up with use. The vents and face shield operate with strong, positive response, though I wish the lowest detent of the shield was a bit lower. …

From this positive commentary, qualified with negative phrases such as “a little rough,” the reviewer moves on to stronger (more extreme) positive evaluation:

The star attraction to this helmet is the internal sun visor. I can testify that it lives up to the hype. The slide mechanism functions very smoothly and positively, is easy to find (even with thick, winter gloves), distortion-free, dark, and drops down further than other internal shades, fully shielding the eyes. …

While this review's length, 1,036 words, certainly contributes to its quality, so does the reviewer's ability to blend product description with product assessment, particularly positive evaluation, using words and phrases such as "lives up," "easy," and "fully." Technical communicators who want to move beyond review length to assess informativeness can analyze a review's blend of product description—objective statements—and positive and negative evaluation—subjective statements—to gain greater insight into a review's quality. In addition, technical communicators who work with reviewers to improve the quality of their content can do more than advise reviewers to "expand" their reviews or "add detail" to them; they can explicitly state the kind of content—descriptive or evaluative—that rounds out a review and helps users make purchasing decisions.

Explicit Statements of Recommendation and of Expectations Met. Certain types of explicit subjective statements affect perceptions of review quality. Mackiewicz, Yeats, and Thornton (in review) tested the effect of two kinds of explicit subjective statements. First, they tested explicit statements of recommendation: (1) recommendations aimed at any potential purchasers (“I would recommend him and his staff to anyone”) and (2) recommendations aimed at a more limited set of consumers (“I would recommend this product for anyone who likes lizards, but doesn’t want to buy a big lizard that can bite”). Consumer research suggests that, in general, people take the easiest path to a solution, particularly when they are in a goal-oriented mode, such as making a purchasing decision (Van Schaik & Ling, 2009). As “cognitive misers” (Fiske & Taylor, 1991), consumers tend to rely on information that is easy to evaluate more than they do detailed information. Thus, Mackiewicz, Yeats, and Thornton (in review) hypothesized that participants would rate reviews with explicit recommendations as higher in quality, and their results supported this hypothesis. In addition, Mackiewicz, Yeats, and Thornton (in review) found that a statement about how well the product met the reviewer’s expectations also contributed to quality (“We love our new Tuscany windows as they exceeded our expectations in all respects”). Sparks and Browning (2011) note that the “impetus for writing a review is most likely to be due to a deviation from the norm resulting in disconfirmation of expectations” (p. 1312), so users might then particularly appreciate reviews in which reviewers explicitly point out that a product did indeed meet expectations. Users may perceive such statements of direct experience related to the gap between the reality of the product and their expectations for it as useful and thus as a contributor to quality.

Section Conclusion. To sum up this section, in relation to informativeness, technical communicators can identify and improve review quality by looking for (1) reviews that are longer than average; (2) reviews that balance subjective and objective statements; (3) reviews that contain explicit recommendations; and (4) reviews that explicitly state how well the product met expectations.

Valence

A second important component of review quality is valence—the degree of positivity or negativity of a word, a statement, or an entire text. Using sentiment analysis, also called opinion mining, researchers differentiate among positive, negative, and neutral words, sentences (or statements), and documents (see Pang & Lee, 2008, for an overview). For example, in the following review of a hardwood floor, the adjectives “rewarding,” “outstanding,” and “easy” contribute positive sentiment:

As a professional fitter of 14 years I can say that this is a very rewarding floor. The finished result is outstanding. The locking system is very easy to work with, as the name goes (Easy-fit).

In contrast, “disappointed” contributes negative sentiment:

As a purchaser for many years of the 840 line, I am so disappointed in this newest version.

With sentiment analysis techniques, researchers can assign a sentiment rating to words that commonly convey positivity and negativity to gauge sentiment at a local or global level. But technical communicators looking to locate quality reviews in order to mine them for trends and insights into product users' wants and needs do not necessarily have to learn such sophisticated techniques. For example, a simpler, albeit cruder, indication of a review's valence is its product rating, usually measured on a 1-to-5 star scale that accompanies the text. Valence measured through star rating is just one way, however, to determine a review's positivity or negativity. Technical communicators who understand some of the strong tendencies at play in relation to review valence, namely, positivity and negativity bias, can better identify quality reviews.
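For readers who want to see how a basic lexicon-based approach works, the sketch below assigns a crude sentiment score by counting positive and negative words. The word lists are illustrative placeholders, not a validated sentiment lexicon; production sentiment analysis tools are considerably more sophisticated.

```python
# Minimal sketch of lexicon-based sentiment scoring for a review.
# The positive/negative word lists are illustrative placeholders only;
# real sentiment analysis relies on much larger, validated lexicons.

import re

POSITIVE = {"rewarding", "outstanding", "easy", "love", "excellent", "convenient"}
NEGATIVE = {"disappointed", "bad", "rough", "horribly", "frustrated", "broken"}

def sentiment_score(text):
    """Return (positive count, negative count, net score) for a review text."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(1 for w in words if w in POSITIVE)
    neg = sum(1 for w in words if w in NEGATIVE)
    return pos, neg, pos - neg

review = ("As a professional fitter of 14 years I can say that this is a very "
          "rewarding floor. The finished result is outstanding. The locking "
          "system is very easy to work with.")
print(sentiment_score(review))  # (3, 0, 3) -> overall positive valence
```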

Positivity Bias. Studies of valence reveal two important tendencies—both called "positivity bias"—in relation to reviews. The first type of positivity bias refers to the tendency of reviewers to write positive reviews more often than they write negative reviews. McGlohon, Glance, and Reiter (2010), for example, gathered a data set of 8 million ratings of 560,000 products reviewed by 3.8 million reviewers. They found an "overwhelming majority" to be positive (p. 116). In the 8,973 reviews collected for this article, 6,334 (70.5%) were 4- or 5-star reviews. (In contrast, 2,066 [23%] were 1- or 2-star reviews.) Hu, Pavlou, and Zhang (2009) deftly explain the reasons for a so-called J-shaped distribution of review ratings—the tendency toward rating extremes, and positive extremes in particular. They say that people with positive opinions of a product will be more likely to purchase the product (purchasing bias) and subsequently write a review about it. Also, people with extreme opinions—whether positive or negative—are more likely to articulate their opinions in a review (reporting bias).
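A simple way to check a corpus for this positivity skew is to tabulate the share of reviews at each star rating, as in the following sketch. The ratings list is a made-up sample for illustration, not the corpus described in this article.

```python
# Minimal sketch: summarizing a corpus's star-rating distribution to check
# for the J-shaped (positivity-skewed) pattern described above.

from collections import Counter

def rating_distribution(ratings):
    """Return each star rating's share of the total, as percentages."""
    counts = Counter(ratings)
    total = len(ratings)
    return {star: round(100 * counts.get(star, 0) / total, 1) for star in range(1, 6)}

ratings = [5, 5, 4, 5, 1, 3, 5, 4, 2, 5]   # illustrative sample only
dist = rating_distribution(ratings)
print(dist)
print("4- and 5-star share:", dist[4] + dist[5], "%")  # dominant share suggests positivity bias
print("1- and 2-star share:", dist[1] + dist[2], "%")
```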

The second type of positivity bias refers to the finding that, "all else being equal, positive reviews have a greater probability of being rated as helpful than negative ones" (Pan & Zhang, 2011, p. 604). Users will rate a 5-star review as helpful more often than they will a 1-star review. The review of a carpet cleaner below illustrates positivity:

When I got my new carpet cleaner after my old Bissell quit working, I was so excited I had to use it right away! It was easy to put together, and easy to use. I did not have to stop and refill the water tank even once, which was nice, as I would have to stop two or three times with my old cleaner. I have two dogs and two cats, so we have a lot of pet hair around and lot of accidents. That said, we clean the carpets pretty often, and with the cleaner coming completely apart and being very easy to clean and put back together, it is incredibly convenient. I would highly recommend this carpet cleaner to my friends and family.

The reviewer assigned the product, a Bissell Deep Clean Premier, 5 stars—the highest star rating. Prior research indicates that valence plays a substantial role in the extent to which users will perceive a review as credible (Eisend, 2006; Schlosser, 2005, 2011), but exactly how valence affects credibility depends in part on whether the product is a search product or an experience product. Search products are those for which consumers can obtain information before they make a purchase, thus reducing uncertainty about making the purchase. Carpet cleaners like the Bissell, as well as products such as bed frames and lawn mowers, are search products in that their utility stems from tangible, objective criteria such as dimensions, materials, and performance. The relative ease with which users can evaluate and compare search products makes them more likely to "feel rather comfortable relying on other consumers' evaluations" (Sen & Lerman, 2007, p. 79), as "claims about tangible attributes are more easily substantiated" (Mudambi & Schuff, 2010, p. 189). An extremely positive review such as this one for the Bissell Deep Clean Premier, then, jibes with findings from prior research in that it is both highly positive and highly helpful.

In contrast, experience products such as books, movies, music, and food are those for which attributes "cannot be assessed without direct experience" (Bae & Lee, 2011, p. 256; Hu, Liu, & Zhang, 2008). Thus, as Nakayama, Sutcliffe, and Wan (2010) point out, the quality of an experience product is more salient after purchase and use. Zhao et al. (2013) write that users look to reviews of experiential products in particular because "unlike other products, these are consumed solely for the pleasure and experience they provide" (p. 154). For experience goods, moderate reviews, as opposed to reviews with extremely high or low star ratings, are positively associated with higher levels of helpfulness (Mudambi & Schuff, 2010, p. 194). Moderate star ratings suggest that the reviewer has taken a temperate approach to his or her experience and has avoided extreme (unreasonable) opinions. In the following 3-star review, the reviewer narrates a resort stay that started out badly but was eventually rectified when the timeshare company intervened:

We arrived at Fort Lauderdale Beach Resort on Friday February 10, late that evening and I had called ahead to request if available an ocean view. Although I had called with my request more than once the attendant said there was no note of my request. He assigned us to unit 406 which turned out to be a lockout unit. This unit was so bad you could hear every word clearly that the people in the adjoining were saying even them making love. The only separation for this unit is a very thin wooden door that didn’t even block the light…. We got up and as we stared to clean up and shower we discovered there was still not only no hot water but there was no water at all.

The water came back on. This was about 1:00 PM. Saturday February 11, 2011. They had wasted a day and a half of our vacation depriving us of basic needs issues such as cleanness, noise and no water. No one ever explained we were getting a lockout unit nor no one called to tell us our water was off in our unit.

RCI had evidently called the resort to verify our complaints because things then happened for the best.

After all the above, they offered us a clean fresh unit with an ocean view that was not a lockout, So we moved…

This review of an experience product—a rented timeshare condo—delineates the problems the reviewer encountered upon arrival. The reviewer balances the list, however, by recognizing the customer service of the management company: “RCI had evidently called the resort to verify our complaints because things then happened for the best.” This review of an experience product shows quality in that it takes a moderate approach.

Negativity Bias. In contrast, some studies of valence's effect on quality point to a so-called negativity effect on users' perceptions of a review (for example, Roggeveen & Johar, 2002; Sen & Lerman, 2007). Some prior research, for example, shows that negative reviews have more influence than positive reviews on readers' perceptions of review credibility and on their purchasing decisions (Chevalier & Mayzlin, 2006; Gupta & Harris, 2010). This negativity effect (Baumeister et al., 2001; Rozin & Royzman, 2001) holds that consumers place greater emphasis on negative information because they encounter it less frequently (because of reviewers' tendency to write positive reviews). People see negativity as counternormative (Feldman, 1966; Kanouse & Hanson, 1972; Zajonc, 1968) and, therefore, it is more "'alerting,' possibly triggering a 'be cautious' attitude in potential consumers" (Fiske, 1993, p. 318). Cao, Duan, and Gan (2011) found that the greater the number of words in a review's "con" section, the more helpfulness votes that review received: "More words in [the] 'cons' part of the review may encourage more people to read it and then vote on it" (p. 518). It makes sense, then, that Metzger, Flanagin, and Medders (2010) and Sparks and Browning (2011) found that people relied more heavily on negative reviews in making purchasing decisions. To the extent that their participants perceived the negative reviews as helpful, they would be more likely to use them to decide what to buy. Cao, Duan, and Gan's (2011) results are also consistent with Yang and Mai's (2010) findings, along with the findings of Papathanassis and Knolle's (2011) grounded-theory study, which showed a tendency for negative reviews to have more impact than positive reviews.

The following review illustrates negativity. The reviewer asserts familiarity with the brand, establishing credibility, and then details the many problems she has experienced with the Lululemon Wunder Under Crop:

As a long-time Lulu customer who has spent many(!) of her precious dollars on Lulu products over the years (and who has frequently urged friends/family to join in the Lulu love), I absolutely echo all of the complaints about plummeting product quality, ridiculously wrong re-designs of previously well-loved and highly rated products, and skyrocketing prices to accompany all of the wrongness!

In terms of design for this specific product — please please please fix the gussett issue and declare that you’ve done so such that I don’t have to trial-and-error my way to a decent pair of WUs! In my opinion, it may not just be the triangle/diamond swap issue and, unfortunately, it may also stem from poor construction (mass production perhaps caused compromises in the quality, no?). I notice in my recent pairs with the diamond, its awkward back placement (which is different than older pairs) also creates issues. Whatever the cause, this much I know to be true: WUs now fit horribly and they used to do just the opposite. In terms of fabric quality, I again reiterate others’ concerns.

Overall, I am incredibly sad and frustrated that I have to work so hard and spend so much money, time and effort to get my hands on products that I used to cherish and thoroughly enjoy.

This negative, 1-star review showcases the kind of product information that technical communicators can mine from UGC for product improvement. The reviewer not only delineates the product's flaws, she also issues a call to action—a pleading request that Lululemon "please please please fix the gussett issue." This review also exemplifies Sen and Lerman's (2007) finding that negativity bias more strongly affects users' perceptions of search products like the Lululemon Wunder Under Crop.

Section Conclusion. To sum up this section, technical communicators can use valence to identify information that can benefit their organizations if they look for (1) reviews of search products that are positive, (2) reviews of experience products that are moderate, and (3) reviews that contain some negative evaluation.

Credibility

A third important component of product review quality is the credibility of the reviewer. Dividing the credibility construct into two component parts, essentially viewing credibility through the lens of traditional, Aristotelian rhetoric, helps reveal characteristics that influence review quality. Traditional rhetoric discusses credibility as ethos. Invented ethos arises out of a single rhetorical situation, from the text at hand, such as a product review. Situated ethos, a reviewer's "good reputation in the community" (Crowley & Hawhee, 2008, p. 198), develops over time. Separating the two helps explain situations in which a reviewer with a good reputation contributes a review that fails to demonstrate (to invent) credibility. For example, users might perceive a review containing spelling and grammatical errors to be carelessly and hurriedly written, and they might then reconfigure their perception of the reviewer's situated credibility based on this (poor) invented credibility. Alternatively, a reviewer who lacks a reputation within a discourse community could begin the process of building one by inventing credibility in his or her first review.

Situated Credibility. As mentioned above, situated credibility refers to reputation, a history of good practice in the community. Reviewers build situated credibility by contributing to the site in helpful ways. Over time, others in the community develop trust in the reviewer's sincerity and goodwill. As Hu et al. (2008), citing Chiles and McMackin (1996), pointed out, "Trust reflects all of the historical trustworthy behaviors exerted by the entity and is a strong signal of reliability to third parties, no matter whether they have or have not conducted transactions with the entity before" (p. 205). A reviewer's situated credibility can manifest itself in a variety of ways. On the review site Epinions.com, for example, reviewers developed a "web of trust"—a set of users who categorized a reviewer as trusted.

In a study of the effects of reviewer profile characteristics on credibility, Xu (2014) manipulated reviewer reputation by manipulating the number of members who indicated trust in that reviewer and found that a large number of trusted members "led to more perceived review credibility than [a] small number of trust members" (p. 141). In addition, Xu found a relationship between the number of members who trusted a reviewer and review valence. In the case of positive reviews, the number of members who trusted the reviewer did not matter to users, but in the case of negative reviews, that number did matter. Users considered a negative review more credible when a larger, rather than smaller, number of members trusted the reviewer (p. 141). Situated credibility, then, though perceived at a single point in time, develops over time as reviewers build a profile for themselves and add useful content. Users' perceptions of credibility and, therefore, quality stem from a reviewer's longitudinal commitment to generating content with sincerity and goodwill.

Researchers have also examined the effects of situated credibility on review helpfulness in terms of the helpfulness of a reviewer's previous reviews. O'Mahony and Smyth (2010) found that the helpfulness of a reviewer's previous reviews was a strong predictor of review helpfulness (p. 165). Hu et al. (2008) examined the effect of situated credibility by taking the total number of useful votes a reviewer received on prior reviews and dividing that number by the reviewer's total number of reviews (p. 208). They found that the quality of a reviewer, as measured by this ratio, matters to a review's impact on sales: "Consumers react to favorable and unfavorable news differently when the review is written by a higher quality reviewer"; however, they did not find the same effect on sales for lower quality reviewers. In the case of lower quality reviewers, participants were "indifferent" to the reviews (p. 209). Technical communicators looking to gauge review quality should look to a reviewer's history—his or her track record of producing content that users perceive to be useful. In the case of product reviews, past behavior predicts users' perceptions of a reviewer's current performance.
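The sketch below illustrates one reading of this kind of reviewer-history metric: total helpful votes earned on prior reviews divided by the number of prior reviews. The data structure and field names are assumptions made for the sake of illustration, not any site's actual data format.

```python
# Minimal sketch of a reviewer-history metric along the lines Hu et al. (2008)
# describe: helpful votes earned on prior reviews divided by the number of
# prior reviews. Field names here are illustrative assumptions.

def reviewer_quality(prior_reviews):
    """prior_reviews: list of dicts, each with a 'helpful_votes' count."""
    if not prior_reviews:
        return 0.0
    total_helpful = sum(r["helpful_votes"] for r in prior_reviews)
    return total_helpful / len(prior_reviews)

history = [
    {"product": "GT-Air helmet", "helpful_votes": 12},
    {"product": "RF-1100 helmet", "helpful_votes": 7},
    {"product": "Pinlock lens", "helpful_votes": 0},
]
print(round(reviewer_quality(history), 2))  # 6.33 helpful votes per prior review
```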

Studying expertise—another component of credibility (Hovland, Janis, & Kelley, 1953; Hu, Liu, & Zhang, 2008)—as opposed to trustworthiness, Lim and Van Der Heide (2014) examined Yelp, looking at the effects of a reviewer's number of friends and number of reviews. They found that Yelp users perceived reviewers with more friends and more reviews as having greater expertise/competence. In addition to such recognition that reviewers earn, on some sites—most notably Amazon.com—reviewers can build situated credibility by attaching their real names to their reviews and by disclosing other identity-descriptive information on their profile pages (one click away from their reviews). Forman, Ghose, and Wiesenfeld (2008) found that such disclosure of identity information was positively and significantly associated with users' perceptions of review helpfulness and with sales of the product under review (p. 308). With ready access to reviewers' reputations as trustworthy experts, users are more likely to perceive quality in reviewers' content.

Invented Credibility. In relation to invented credibility, Mackiewicz and Yeats (2014) tested the extent to which reviewers’ statements, or assertions, of their expertise about the product or matters related to the product (such as familiarity with the brand) affected perceptions of review credibility. The study built on Mackiewicz’s (2010) description and analysis of assertions of expertise in product reviews and on Connors, Mudambi, and Schuff’s (2011) study of statements of expertise. Connors, Mudambi, and Schuff (2011) found that expertise statements had an effect; participants perceived a review with the expert statements as a greater aid in making a purchasing decision, as providing greater insight into the product, and as more helpful than a review without them (p. 5). They write, “Consumers may pay more attention to a self-described expert just on the basis of that declaration [of expertise]” (p. 7). This review of Adams A12 OS Hybrid golf clubs illustrates such a declaration:

I have tried may os iron from Taylormade, Callaway, Mizuno, nothing really helped. You can only work on your swing so much, it pretty much is what is is. I decided to try these clubs, and what a difference. I am 56 years old with back and shoulder probs. so I bought graphite shafts. The ball gets airborne so easy with good impact, distance is acceptable, and the 4-6 hybrids are also easy to hit. I wish I would have tried Adams before.

This reviewer asserts expertise stemming from experience using similar products from several other brands. Mackiewicz and Yeats (2014) found that a statement about the reviewer's prior experience with a similar product had a positive effect on participants' perceptions of review credibility. They also found that a statement about expertise gained from conducting research (for example, online research on the product) had a positive effect on perceptions of credibility as well. Technical communicators can typically look to the first or second statements of a review to determine whether the reviewer has attempted to invent credibility through assertions of expertise.

Finally, while not investigating invented credibility per se, Pan and Zhang (2011) investigated the role of reviewer "innovativeness," specifically, the relationship between reviewers' innovativeness and review helpfulness. They operationalized innovativeness with 21 attributes closely associated with innovators, for example, education, comfort with abstraction, attitude toward change, ability to cope with uncertainty, and knowledge of innovations or new products (pp. 610–611). They found an inverted U-shaped relationship between innovativeness and helpfulness; that is, reviews by moderately innovative reviewers were the most helpful. These findings suggest that reviewer innovativeness—as expressed in a review—makes a difference to users' perceptions of quality. More research will show which of these 21 characteristics of innovativeness technical communicators should look for as they assess review quality.

Section Conclusion. In sum, technical communicators can locate review quality by looking for (1) reviews by reviewers with good reputations and (2) reviews in which reviewers assert expertise, especially by asserting prior experience with similar products and by asserting that they have conducted research on the product.

Conformity

Two types of review conformity influence quality: (1) a review’s external conformity, the extent to which its rating corresponds to the rating consensus of surrounding reviews and (2) a review’s internal conformity, the extent to which a review’s text corresponds to its star (or other) rating.

External Conformity. External conformity is the extent to which a product review's evaluation diverges from or converges with the average evaluation of other reviews of the same product. This component of review quality reflects findings supporting what researchers call "the conformity hypothesis"—the idea that quality does not reside solely within a review but instead arises in part from how that review accords with other reviews. As Korfiatis, García-Bariocanal, and Sánchez-Alonso (2012) write, "Reviews closer to consensus may be considered more helpful by potential consumers than those exhibiting extremes of opinion" (p. 206). Adhering to the norm appears to generate perceptions of quality. However, Danescu-Niculescu-Mizil et al.'s (2009) study complicates this broad statement somewhat. They found a slightly modified version of the conformity hypothesis to hold in their study of over one million reviews on Amazon.com with at least 10 helpfulness votes. They found that slightly negative reviews that deviated from the average product rating were less helpful than slightly positive reviews that deviated from the average (p. 143). Technical communicators looking to identify quality reviews might determine the average star rating for a given product and then look to reviews with that rating.
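One simple way to operationalize external conformity is as the signed deviation of a review's star rating from the average rating of the other reviews of the same product, as in the minimal sketch below. The variable names and sample ratings are illustrative assumptions.

```python
# Minimal sketch: measuring external conformity as the signed deviation of a
# review's star rating from the average rating of the other reviews of the
# same product. Sample ratings are invented for illustration.

def conformity_deviation(review_rating, other_ratings):
    """Positive result = more favorable than the product's consensus;
    negative = less favorable; near zero = conforms to the consensus."""
    consensus = sum(other_ratings) / len(other_ratings)
    return review_rating - consensus

other_ratings = [5, 4, 4, 5, 3, 4]  # ratings from the product's other reviews
print(round(conformity_deviation(4, other_ratings), 2))  # -0.17: close to consensus
print(round(conformity_deviation(2, other_ratings), 2))  # -2.17: deviates sharply below
```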

Internal Conformity. While a number of researchers have examined the role of external conformity in determining perceptions of review quality, Schlosser (2011) examined a review's internal consistency—the consistency between the review's star rating for the product and the review text. She found that participants perceived reviews with two-sided arguments (that is, reviews that showed balance) as more helpful when the star rating was moderately favorable. If the review rating was extremely favorable, users did not perceive a two-sided argument as helpful (Schlosser, 2011, pp. 230–231; see also Schlosser, 2005). This 3-star (moderate) review of a 20-piece flatware set shows internal consistency in that it examines pros and cons of the product—a two-sided argument:

Buying things online is always a risk, and upon receiving these I found some good and bad. I actually really like this set: the pieces have a smooth, modern, yet industrial style that I love. Most of the pieces (see below) feel good in the hand and have a nice, solid weight to them. However I took off a couple of points for the following reasons

1. the online listing doesn’t tell you what a little piece of paper that comes with them instructs: they need to be “hand dried” to keep them looking this way. …

2. the salad forks are TINY! Teeny tiny, to be exact. I set them aside to be used as cocktail forks if I ever have need for such a thing. The smaller spoons are bordering on too small, but I think they will be fine.

3. the back of each piece (on the silver part not the handles) states “Stainless Steel 18/0 China” in black, obvious letters. It would have been nice if they could have printed this info (it’s probably required) somewhere more discreet.

This reviewer leads off with positive evaluation of the flatware ("I actually really like this set…"), but then moves on to delineate three problems. The balance of pros and cons, though, jibes with the 3-star rating that the reviewer assigned to the product. The text and the rating create internal conformity.

Section Conclusion. To sum up, technical communicators can look for conformity—both external and internal—to identify quality reviews. They can look for (1) reviews with star ratings comparable to the average star rating and (2) reviews with text that backs up their star ratings—whether positive, negative, or moderate.

Readability

A fifth component of review quality is readability. As pointed out by Riley and Mackiewicz (2011), the term “readability” has (at least) two meanings. First, the word refers to the extent to which a reader can comprehend, or cognitively process, a text. Readability formulae such as the Flesch-Kincaid Grade Level and Flesch Reading Ease (which come bundled with Microsoft Word) provide one measure (albeit a disputed one) of text comprehensibility. Second, “readability” also refers to the extent to which users perceive a document as comfortable to read, a characteristic stemming from a document’s visual design. For example, most people find it uncomfortable to read long stretches of small type, especially when that stretch of text suffers from insufficient leading as well. Product reviewers control the first type of readability; they can write and edit their reviews so that users can easily understand them. However, product reviewers have little control over the visual design of their reviews—how their reviews will appear on the screen. They don’t choose the typeface or leading of their review text. They also don’t decide where or how their reviews will display on the webpage, for example, whether an entire review displays at once or whether readers have to click on a “more” or similar link to see the complete review. They can, however, usually control whether to use bold or italics, whether to use headings, and whether to insert white space between lines.

To determine whether better readability in the first sense, the sense of comprehensibility, relates to perceptions of review helpfulness, researchers have used a variety of readability formulae to analyze review text. Korfiatis, García-Bariocanal, and Sánchez-Alonso (2012) applied four readability formulae and found a positive relationship between readability and helpfulness. Ghose and Ipeirotis (2011) used six readability formulae to analyze review text and, similar to Korfiatis, García-Bariocanal, and Sánchez-Alonso (2012), found that greater readability “has a positive and statistical impact on review helpfulness” (p. 1510). These findings suggest that improving readability will improve helpfulness.
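For reference, the two Flesch formulas mentioned above can be computed directly from word, sentence, and syllable counts. The sketch below uses a naive vowel-group syllable estimate, so its scores will differ somewhat from those produced by Microsoft Word or dedicated readability tools; it is an approximation for illustration only.

```python
# Minimal sketch of the Flesch Reading Ease and Flesch-Kincaid Grade Level
# formulas, with a very rough syllable estimate (groups of consecutive vowels).

import re

def count_syllables(word):
    """Crude syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences          # words per sentence
    spw = syllables / len(words)          # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return round(reading_ease, 1), round(grade_level, 1)

sample = ("Fast CPU makes easy work of complicated documents. "
          "The dock is a full laptop sized keyboard with a touch pad.")
print(flesch_scores(sample))  # (reading ease, grade level) for the sample text
```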

However, testing a possible relationship between product sales and readability scores, Korfiatis, García-Bariocanal, and Sánchez-Alonso (2012) found a relationship between higher readability scores (suggesting less-readable texts) and higher product sales. They explain their finding this way: “This [negative relationship between readability and product sales] is likely to happen if such reviews are written in more authoritative and sophisticated language” (p. 1504). Less readable texts—if they convey expertise and certainty—might more readily persuade users to purchase the product. O’Mahony and Smyth (2010) got a similar result in their four-formulae analysis of the helpfulness of Amazon and TripAdvisor reviews: “Helpful review texts required a higher degree of reading ability on the part of the reader to understand” (p. 166). Such findings indicate a complex relationship between readability and helpfulness score: reviewers’ use of specialized language might in some cases motivate users to make a purchase more readily than reviews containing fewer instances of specialized language.

The following review exemplifies readability in online reviews. It contains 821 words (an excerpt appears below), with an average of 14.7 words per sentence (average sentence length). The Flesch-Kincaid Grade Level is 7.1, and the Flesch Reading Ease score is 69.7, which means most 13-year-olds could understand it.

The good:
Fast CPU makes easy work of complicated word, excel, and power point documents. The fifth low power companion core runs most tasks so the four main cores seldom get used which really lowers the power usage. GPU renders 3d games with playable frame rates and details. Easily displays Blu-Ray movies in 1920×1080 at full Blu-ray data rates so the quality you see is the same as what you see on your TV….

The bad:
No separate USB port. Manufacturers, when are you going to realize we need separate USB ports! Like Apple, there’s a big connector on the bottom. Tablet comes with a 40 pin to USB adapter for connecting to a computer or USB host. You have to use the USB host dongle or buy the dock to get USB host functionality. …

The strange:
Dock is a full laptop sized keyboard with a touch pad and either one or two USB host ports. It also has a large battery that can run both it and the tablet for a reported run time of 17 hours! That’s great but it basically converts it into a small laptop, which can be bought cheaper, has more storage, and runs PC applications. I guess it depends on your applications on what you need….

Even though this review contains some fairly technical terminology (for example, GPU, 40 pin to USB adapter), it mainly employs common words. It also uses active voice, another facilitator of text comprehension. Besides being fairly comprehensible, this review is also comfortable to read. The reviewer chunked content with white space and organized with headings. These formatting choices are easy to implement and help enhance the user’s reading experience.

Section Conclusion. Technical communicators who want to identify quality reviews can assess readability—both varieties. They can look for reviews that employ white space and headings and are thus more comfortable to read. They can also look for reviews that users can readily understand (as measured through readability metrics). Even reviews that employ some specialized terminology (and thus signal expertise) should facilitate users’ comprehension.

User Characteristics

As I mentioned earlier, another variable has an impact on the extent to which users will perceive quality in a review: user characteristics. However, unlike the five quality components described above, this component is one that technical communicators cannot readily influence. Even so, research suggests the importance of users' goals and traits to their perceptions of a particular review. Zhu and Zhang (2010) studied the effect of users' Internet experience; they found that product reviews strongly influenced the purchasing decisions of consumers with relatively greater Internet experience. Ibrahim, Suki, and Harun (2014) studied the interaction between product reviews and consumers' perceived risk of shopping online. They broke the construct of perceived risk into five types: financial risk, performance risk, time-loss risk, psychological risk, and source risk. They found that product reviews significantly moderated the positive relationship between perceived risk and unwillingness to make an online purchase. Zhang, Craciun, and Shin (2010) studied the role of users' goals for a product in a review's persuasiveness. In their study, consumers showed negativity bias for prevention products—products that help people avoid negative outcomes—as opposed to goal-promoting products—products that move consumers toward positive outcomes (p. 1337). They write, "The consumption goals that consumers associate with the reviewed product trigger consumers' regulatory foci, which, in turn, bias consumers' evaluations of positively and negatively valenced product reviews" (p. 1340). Although user characteristics play a role in perceptions of review quality, technical communicators have no control over user characteristics such as familiarity with the Internet or purpose in investigating and, potentially, purchasing a product. Further research might investigate users' perceptions of review quality as they encounter reviews based on their browsing and purchasing behaviors, as well as the effects of messages aimed at allaying users' perceptions of risk.

Conclusions and Implications

Document quality hinges on context—no technical communicator would argue with that. However, certain characteristics of product review quality that appear to apply across contexts emerge from the extant research. These characteristics together build upon Shelby's (1998) definition of a quality technical document: one that bridges individual and collective tastes, conforms to expectations, and is fit to use (p. 392). Although technical communicators cannot readily ascertain or influence the characteristics of review users, they can move beyond use of reviews' helpfulness votes to identify other components of quality reviews and then mine those reviews for information.

In addition, being able to assess review quality beyond the problematic measure of helpfulness votes enables technical communicators charged with developing and managing their organizations’ UGC to improve the content that users contribute. And as Frith (2014) shows, in taking on responsibility for their organizations’ UGC, technical communicators have an opportunity to put knowledge and skills that they already possess to work in new contexts. Technical communicators who work with content contributors—reviewers—can help them improve their reviews in a variety of ways. They can help reviewers improve review informativeness, for example, by encouraging them to state their recommendations explicitly and by encouraging them to discuss the extent to which products met their expectations. They can help reviewers calibrate the valence of their reviews, for example, the strength with which they convey positivity toward search versus experience products. They can help reviewers improve their credibility by ensuring that they discuss their relevant expertise, particularly research on the product that they have conducted and their prior experience with similar products. They can readily influence a review’s internal conformity by working with reviewers to ensure that their review text corresponds to the assigned star rating. And they can also help reviewers revise and edit to improve readability in both senses of the term.

Seizing such opportunities to engage with product reviewers and other users who contribute content will become more important as the amount of UGC grows. As that mass of content grows, so will the need to learn from it (for example, to identify product trends and consumers’ ideas for products) and the need to improve it (for example, to make it more informative and more readable). In this article, I have delineated components of quality in product reviews and described how technical communicators can locate those components to mine reviews and to work with reviewers to improve the quality of the content they provide. By using their knowledge and skills in new ways, technical communicators can stay relevant and necessary as organizations rely more and more heavily on UGC.

Acknowledgment

I am grateful to Bazaarvoice, Inc., and particularly Alex Barrera, product manager in data and analytics, for giving me access to this corpus of reviews. I am also most grateful to my friend and coauthor Dave Yeats for our ongoing research on product review quality.

References

Adobe Systems Inc. (2011). Multi-channel, rich, and social: Exploring the illustrative edge of Adobe Technical Communications 2.0. ISTC Communicator. Retrieved from https://www.adobe.com/go/tcstechillustration.

Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370.

Cao, Q., Duan, W., & Gan, Q. (2011). Exploring determinants of voting for the “helpfulness” of online user reviews: A text mining approach. Decision Support Systems, 50(2), 511–521.

Chevalier, J., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews. Journal of Marketing Research, 43(3), 345–354.

Connors, L., Mudambi, S. M., & Schuff, D. (2011). Is it the review or the reviewer? A multi-method approach to determine the antecedents of online review helpfulness. In 2011 44th Hawaii International Conference on System Sciences (pp. 1–10). Piscataway, NJ: IEEE.

Crowley, S., & Hawhee, D. (2008). Ancient rhetorics for contemporary students (4th ed.). New York, NY: Longman.

Danescu-Niculescu-Mizil, C., Kossinets, G., Kleinberg, J., & Lee, L. (2009). How opinions are received by online communities: A case study on Amazon.com helpfulness votes. In WWW 2009: Proceedings of the 18th International Conference on World Wide Web (pp. 141–150). New York, NY: ACM.

Feldman, S. (1966). Motivational aspects of attitudinal elements and their place in cognitive interaction. In S. Feldman (Ed.), Cognitive consistency: Motivational antecedents and behavioral consequents (pp. 75–108). New York, NY: Academic Press.

Fiske, S. (1993). Social cognition and social perception. Annual Review of Psychology, 44(1), 155–194.

Fiske, S. T., & Taylor, S. E. (1991). Social cognition (2nd ed.). New York, NY: McGraw-Hill.

Forman, C., Ghose, A., & Wiesenfeld, B. (2008). Examining the relationship between reviews and sales: The role of reviewer identity disclosure in electronic markets. Information Systems Research, 19(3), 291–313.

Frith, J. (2014). Forum moderation as technical communication: The social web and employment opportunities for technical communicators. Technical Communication, 61(3), 173–184.

Ghose, A., & Ipeirotis, P. G. (2011). Estimating the helpfulness and economic impact of product reviews: Mining text and reviewer characteristics. IEEE Transactions on Knowledge and Data Engineering, 23(10), 1498–1512.

Gupta, P., & Harris, J. (2010). How e-WOM recommendations influence product consideration and quality of choice: A motivation to process information perspective. Journal of Business Research, 63(9–10), 1041–1049.

Hovland, C., Janis, I., & Kelley, H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.

Hu, N., Liu, L., & Zhang, J. (2008). Do online reviews affect product sales? The role of reviewer characteristics and temporal effects. Information Technology and Management, 9(3), 201–214.

Hu, N., Zhang, J., & Pavlou, P. A. (2009). Overcoming the J-shaped distribution of product reviews. Communications of the ACM, 52(10), 144–147.

Ibrahim, S., Suki, N. M., & Harun, A. (2014). Structural relationships between perceived risk and consumers’ unwillingness to buy home appliances online with moderation of online consumer reviews. Asian Academy of Management Journal, 19(1), 73–92.

Kanouse, D. E., & Hanson, L. R. (1972). Negativity in evaluations. In E. E. Jones, D. E. Kanouse, H. H. Kelley, R. E. Nisbett, S. Valins, & B. Weiner (Eds.), Attribution: Perceiving the causes of behavior (pp. 47–62). Morristown, NJ: General Learning Press.

Korfiatis, N., García-Bariocanal, E., & Sánchez-Alonso, S. (2012). Evaluating content quality and helpfulness of online product reviews: The interplay of review helpfulness vs. review content. Electronic Commerce Research and Applications, 11, 205–217.

Liu, H., Lim, E.-P., Lauw, H. W., Le, M.-T., Sun, A., Srivastava, J., & Kim, Y. A. (2008). Predicting trusts among users of online communities: An Epinions case study. In Proceedings of the 9th ACM Conference on Electronic Commerce (pp. 310–319). New York, NY: ACM.

Liu, J., Cao, Y., Lin, C.-Y., Huang, Y., & Zhou, M. (2007). Low-quality product review detection in opinion summarization. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (pp. 334–342). Stroudsburg, PA: Association for Computational Linguistics.

Mackiewicz, J. (2010). Assertions of expertise in online reviews. Journal of Business and Technical Communication, 24(1), 3–28.

Mackiewicz, J., & Yeats, D. (2014). Product review users’ perceptions of review quality: The role of credibility, informativeness, and readability. IEEE Transactions on Professional Communication, 57(4), 309–324.

Mackiewicz, J., Yeats, D., & Thornton, T. (in review). The impact of review environment on review credibility. IEEE Transactions on Professional Communication.

McGlohon, M., Glance, N., & Reiter, Z. (2010). Star quality: Aggregating reviews to rank products and merchants. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media (pp. 114–121). Palo Alto, CA: AAAI Press.

Metzger, M. J., Flanagin, A. J., & Medders, R. M. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439.

Mudambi, S. M., & Schuff, D. (2010). What makes a helpful online review? A study of customer reviews on Amazon.com. Management Information Systems Quarterly, 34(1), 185–200.

O’Mahony, M. P., & Smyth, B. (2010). A classification-based review recommender. Knowledge-Based Systems, 23(4), 323–329.

Ong, T., Mannino, M., & Gregg, D. (2014). Linguistic characteristics of shill reviews. Electronic Commerce Research and Applications, 13(2), 69–78.

Ott, M., Choi, Y., Cardie, C., & Hancock, J. T. (2011). Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1 (pp. 309–319). Stroudsburg, PA: Association for Computational Linguistics.

Otterbacher, J. (2011). Being heard in review communities: Communication tactics and review prominence. Journal of Computer-Mediated Communication, 16(3), 424–444.

Pan, Y., & Zhang, J. Q. (2011). Born unequal: A study of the helpfulness of user-generated product reviews. Journal of Retailing, 87(4), 598–612.

Papathanassis, A., & Knolle, F. (2011). Exploring the adoption and processing of online holiday reviews: A grounded theory approach. Tourism Management, 32(2), 215–224.

Rello, L., & Baeza-Yates, R. (2012). Social media is NOT that bad! The lexical quality of social media. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media (pp. 559–562). Palo Alto, CA: AAAI Press.

Riley, K., & Mackiewicz, J. (2010). Visual composing: Document design for print and digital media. Upper Saddle River, NJ: Prentice Hall.

Roggeveen, A. L., & Johar, G. V. (2002). Perceived source variability versus familiarity: Testing competing explanations for the truth effect. Journal of Consumer Psychology, 12(2), 81–91.

Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296–320.

Schindler, R. M., & Bickart, B. (2012). Perceived helpfulness of online consumer reviews: The role of message content and style. Journal of Consumer Behaviour, 11(3), 234–243.

Schlosser, A. E. (2005). Posting versus lurking: Communicating in a multiple audience context. Journal of Consumer Research, 32(2), 260–265.

Schlosser, A. E. (2011). Can including pros and cons increase the helpfulness and persuasiveness of online reviews? The interactive effects of ratings and arguments. Journal of Consumer Psychology, 21(3), 226–239.

Sen, S., & Lerman, D. (2007). Why are you telling me this? An examination into negative consumer reviews on the web. Journal of Interactive Marketing, 21(4), 76–94.

Shelby, A. N. (1998). Communication quality revisited: Exploring the link with persuasive effects. Journal of Business Communication, 35(3), 387–404.

Sparks, B. A., & Browning, V. (2011). The impact of online reviews on hotel booking intentions and perception of trust. Tourism Management, 32(6), 1310–1323.

Tenenbaum, I. (2013, July 9). Brands that dominate with user-generated content. (Web log comment). Retrieved from http://www.imediaconnection.com/content/34507.asp#singleview.

van Schaik, P., & Ling, J. (2009). The role of context in perceptions of the aesthetics of web pages over time. International Journal of Human-Computer Studies, 67(1), 79–89.

WolframAlpha Computational Knowledge Engine. (2014). Average English word length. Retrieved from http://www.wolframalpha.com.

Yang, J., & Mai, E. S. (2010). Experiential goods with network externalities effects: An empirical study of online rating system. Journal of Business Research, 63(9), 1050–1057.

Yeats, D., & Mackiewicz, J. (2014, May). Perceptions of product review quality: Testing credibility, informativeness, and readability. Panel: Trust and credibility on the Internet, conducted at the International Communication Association conference, Seattle, WA.

Zajonc, R. B. (1968). Attitudinal effects of mere exposure. Journal of Personality and Social Psychology, 9(2), 1–27.

Zhang, J. Q., Craciun, G., & Shin, D. (2010). When does electronic word-of-mouth matter? A study of consumer product reviews. Journal of Business Research, 63(12), 1336–1341.

Zhu, F., & Zhang, X. (2010). Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics. Journal of Marketing, 74(2), 133–148.

About the Author

Jo Mackiewicz is an associate professor of rhetoric and professional communication at Iowa State University. Recently, with Isabelle Thompson, she published Talk about Writing: The Tutoring Strategies of Experienced Writing Center Tutors. She is the editor of the ATTW Book Series in Technical and Professional Communication. Contact: jomack@iastate.edu.

Manuscript received 17 October 2014; revised 5 February 2015; accepted 13 February 2015.