By David Wright
Purpose: As digital media continue to proliferate into every aspect of our lives, sound plays an ever-increasing role in conveying information. Because digital messages via the Internet, cell phones, smart devices, and personal devices are typically shorter and more direct than traditional text-based communication, digital communication increasingly relies on other, less textual cues. Visually, we already see this in digital cues such as emojis and graphical displays (e.g., GPS charts). But digital communication will, in the future, gain much from the expanded use of sound. As yet, no analytical framework for classifying sound in technical communication has been established.
Method: The author revisits some historical uses of sound in technical communication before proposing a model for analyzing current and future sounds.
Results: Tools that encompass signaling, linguistics, paralinguistics, extralinguistics, and rhetoric can be used to analyze complex sonic combinations and to generate new sounds for technical communication.
Conclusions: A first model is proposed along with recommendations for the future.
Keywords: technical communication, sound, linguistics, historical sound, sonic rhetoric
- Presents a discussion of sound as technical communication in historical context.
- Discusses linguistic properties and rhetoric as they pertain to sound in technical communication.
- Proposes a first tool for use in analyzing and creating sounds for technical communication.
Sounds of the Past
Historical study allows us to see where we come from, but it can also illuminate the present and point the way to the future. As Malone (2007) writes,
The earliest justifications of such studies were that they help to legitimize the field by showing that it has a history and that they validate current practices. There will always be a place for these kinds of studies, especially as we venture into new areas, such as the history of previously unexplored forms of nonverbal technical communication. (p. 343)
There is a long history of nonverbal technical communication, although most technical communication research on the subject has been limited in scope. While the forms and functions of sound in technical communication have varied over time, the fact that sounds have long been used to convey technical information is undeniable. Ancient construction projects were often directed and controlled through sound, allowing engineers and building supervisors to control and coordinate efforts over large project areas. As Sawyer (2015) notes, even ancient fortifications in China, often built by hand and backbreaking labor, were controlled by sound, including horns and drums. For example, the work of building the Chou dynasty capital was “not only supervised, but also controlled by the beat of a drum” (p. 55). Chinese states developed complex signaling systems to warn of the impending arrival of enemy troops, among other things. Similarly, Chanta-Martin (2015) discusses African “talking drums” in terms of their ability to disseminate coded messages to recipients that often included complex instructions.
Later, in medieval times, as Corbin (1998) and Arnold and Goodson (2012) show, bells had a profound effect on the lives of ordinary European townspeople by marking the hours of the day, calling people to worship, ordering assembly, or calling the militia to arms. In effect, controlling the loudest sound within a community allowed one to largely control the community. Arnold and Goodson (2012) refer to bell ringing as a form of “shorthand” (p. 112) that directed Christian life by the twelfth century; the authors note that bells were so intertwined with Christian tradition that they were collected as trophies by Muslim armies, and that a new industry devoted to bell design and manufacturing arose.
Other musical instruments have been widely used as a means of communication as well. Historically, the bugle is perhaps the most prolific musical instrument used for communication, especially in the military. Although the bugle is a relatively simple instrument, it has far more communicative range than might be expected. As an 1885 article in the St. Louis Globe-Democrat states,
The language of the instrument is not at all limited. A language with only five words may be thought easy to learn, and yet the different arrangements of these ‘words’ (‘sentences,’ as I may call them) are endless. (Bugle Calls of the English Army, 1885, p. 7)
As Powels (2002) shows, by the time of the Civil War, the bugle was well entrenched in military life and maneuvers:
During the Civil War, as in conflicts through the centuries, military orders were often communicated through the use of various musical instruments. The most common instruments were the drum and the bugle, whose distinctive sounds could be clearly heard on a battlefield. At the outbreak of the Civil War, the U.S. Army had dozens of bugle calls to direct tactics and regulate the lives of the soldiers. (p. 10)
When armies of the Civil War were preparing for movement, separate bugle calls announced that the infantry should prepare to move, that companies should form a line in their camp streets, and that individual companies should move onto the highway. When in camp, the bugle called for breakfast, drill, and every other important activity of the day. Artillery and cavalry brigades were no different. Each responded to bugle calls of their own, and their horses soon knew the calls of the bugle as well as their riders.
The Milwaukee Sentinel (1895) reports that Lieutenant Brewer’s troop of the 7th Cavalry had mastered the “Cossack Drill,” using bugle calls to signal horses to lie down with their riders and rise again, to form breastworks and allow their riders to fire over the top of them, and to carry their riders standing up in the saddle. So trained were they to the bugle calls and their routine that several of the horses and riders were “granted leaves of absence to perform in Buffalo Bill’s Wild West Show at the World’s Fair” (p. 3). Similarly, in discussing the U.S. Cavalry’s Troop F of the 3rd Cavalry in 1897, the Boston Daily Advertiser reports that commander Captain Dodd was able to lead his 58-member cavalry through a 35-minute “music drill,” consisting of complex maneuvers, dancing horses, and sabre touching by mounted troops, without a single verbal command. Accompanied by the army band, the cavalry members and their mounts were signaled only by a series of bugle calls designed to indicate changes in their elaborate routine.
Sounds of the Present
Modern military units use PA systems, radio, satellites, cell phones, and all manner of technological “gadgetry” to control troop movements both in training and on the battlefield, but, even now, the United States Coast Guard maintains extensive maneuvering and warning signals that mimic bugle calls through shipping horns. There are even separate codes for international and inland shipping, and some signals are specifically for vessels in sight of each other, such as:
- one short blast to mean “I am altering my course to starboard”;
- two short blasts to mean “I am altering my course to port”;
- three short blasts to mean “I am operating astern propulsion”.
Other signals are specific to narrow channels, such as:
- a vessel intending to overtake another shall in compliance with Rule 9 (e)(i) indicate her intention by the following signals on her whistle:
  - two prolonged blasts followed by one short blast to mean “I intend to overtake you on your starboard side”
  - two prolonged blasts followed by two short blasts to mean “I intend to overtake you on your port side”
- the vessel about to be overtaken when acting in accordance with 9(e)(i) shall indicate her agreement by the following signal on her whistle: one prolonged, one short, one prolonged and one short blast, in that order. (U.S. Coast Guard Rule 34: Maneuvering and Warning Signals, para. 1–5)
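Signal systems like these are, in effect, small lookup protocols: a fixed vocabulary of blast patterns mapped to fixed meanings. As a hedged sketch (the “S”/“P” shorthand for short and prolonged blasts is my own invention, not Coast Guard notation), the signals above could be modeled as:

```python
# Hypothetical decoder for the whistle signals quoted above.
# "S" = short blast, "P" = prolonged blast -- an invented shorthand,
# not official Coast Guard notation.
RULE_34_SIGNALS = {
    "S": "I am altering my course to starboard",
    "SS": "I am altering my course to port",
    "SSS": "I am operating astern propulsion",
    "PPS": "I intend to overtake you on your starboard side",
    "PPSS": "I intend to overtake you on your port side",
    "PSPS": "Agreement: you may overtake me",
}

def decode(blasts: str) -> str:
    """Return the meaning of a blast sequence, or flag it as unrecognized."""
    return RULE_34_SIGNALS.get(blasts, "Unrecognized signal")

print(decode("SS"))  # -> I am altering my course to port
```

The point of the sketch is that, exactly as with bugle calls, the meaning lives entirely in the shared table: any sound sequence works as a signal so long as sender and listener hold the same mapping.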
In civilian lives, numerous sounds alert us to danger or provide us with information. From a young age, children learn to associate sounds with information. Older individuals may remember toys such as the classic See ’n Say that taught us to associate sounds with animals or, for more adventurous types, games like “Operation” that teach children motor skills with “negative” sounds representing improper movements. Today, sounds accompany video games, offering reinforcement for success and often signaling impending danger.
As adults, everyday activities, like crossing the street, are often accompanied by sounds such as beeps that let us know we can cross, and we become accustomed to processing auditory information as technical information. Forklifts make certain sounds when they back up, alarms sound when an item leaves a store without scanning, car horns and sirens alert us to danger and emergencies, and more annoying sounds, like cell phone ringtones and email alerts, tell us that our attention is required. In fact, ringtones, in particular, have become so ubiquitous that one can hardly visit a restaurant or grocery store without hearing their constant call. The list is virtually endless. Commercials, sirens, text message alerts, and the like constantly vie for our attention and send us sonic information. Similar examples can be found all over the world. Historically, Native Americans used drums extensively in religious ceremonies and continue to use them to enhance social gatherings to this day. And in other parts of the world, such as Turkey, drummers have moved through the streets before dawn during Ramadan for centuries, both to wake people and to signal the morning meal before fasting begins at sunrise.
While the full measure of historical and current sounds used to communicate technical information is beyond the scope of this article, a few examples like these show that technical communication has been, and continues to be, delivered through sound. However, in order to properly analyze sonic messages, which often incorporate a variety of tonal, musical, and linguistic properties, we must consider them from multiple perspectives.
Sound as Plain Language
Britton’s (1965) reference to the bugle call as a metaphor for technical communication with one unambiguous meaning was an early attempt to define the discipline. But Britton’s article is over 50 years old, and technical communication scholarship has largely moved on from his somewhat simplistic description. Nevertheless, sound is capable of conveying complex messages quickly, plainly, and effectively, as shown by its use in historical construction and military affairs. Britton believes that scientific and technical communication can be recognized and judged on the basis of that effectiveness, saying that, “scientific analyses and descriptions, instructions, and accounts of investigations quickly reveal any communication faults by the inability of the reader to comprehend and carry on” (p. 116). But, in using the bugle as a metaphor for communication with clear meaning, Britton overlooks the fact that the signaling function of the bugle, rather than the use of a musical instrument, made the bugle so successful. Virtually any sound can be used as a signal, provided the signaler and the listener possess a shared understanding of its meaning.
Britton’s (1965) publication was a precursor to the plain language versus rhetorical humanism debate among technical communicators. Typically, this argument centers on the nature of technical communication and what place rhetoric and humanism have in such communication. While that argument is beyond the scope of this article, numerous sources can be located on the subject. Some authors (Katzoff, 1964; Rathjens, 1985; Petelin, 2010; Stewart, 2010) have argued for the merits of plain, unambiguous language, while others (Miller, 1979; Dobrin, 1985; Sanders, 1988; Rutter, 1991; Tebeaux, 1991) have argued in favor of a more rhetorical approach to communicating technical information.
Regardless, Britton’s (1965) failure to identify its signaling function as the source of the bugle’s clarity shaped that debate into one over message content, rather than one focusing on the medium of delivery, sound, which has clearly been used throughout history to convey precise technical information. Sound’s ability to do so, while simultaneously incorporating rhetorical properties, is precisely why sound has such potential for technical communication.
In fact, even Britton notes that there are complexities and aesthetics to sounds made by the bugle that impart a range of emotions. I suggest that most authors, whether they openly profess to be on one side of that debate or are simply perceived to be on one side, would admit that some degree of plain language, rhetoric, and humanism are all key ingredients for good technical communication. As with all things communication, the amount of each ingredient is dependent upon the circumstances.
Sound as Rhetoric
Rickert (2013) discusses ambience as a rhetorical factor. He compares ambience to the fermenting of wine, with many factors influencing its flavor (soil, sun, etc.). In addition, the flavor is further affected by the company and setting in which it is consumed. Rhetoric, then, according to Rickert, should be considered among all of its influences if we are to better understand it and become better rhetors. Sound and music are among these influences and are, in turn, influenced, much like the wine mentioned previously, by their surroundings.
In the same way, words are not just markings on a page to be interpreted solely as factual information but are both an influencing factor and simultaneously influenced by their surroundings, or the ambience of their environment. As Goodale (2011) says, “Even when we study speeches or the lyrics to popular songs, we rarely study the sounds of voices and music. Rather, we convert sounds into words on a page . . . And yet, we learn from taste, touch, smell, and sound as well as from sight” (p. ix). Goodale cites F.D.R.’s famous inaugural address (all we have to fear . . . ) as evidence of sound’s impact on information but laments the fact that the text of the speech is now most often separated from the sounds that accompanied it, in essence, from the voice inflections, the speech’s surroundings, and the ambience that Rickert identifies. Indeed, silently reading the speech does nothing to capture its true effect upon listeners.
This sonic rhetoric becomes even more pronounced when words are replaced by sounds, as has become more common with our increasingly digital existence. Technology companies recognized this fact years ago. As Rickert (2013) points out, no obvious need warrants the Windows operating system startup music; it serves no function other than to signal that Windows has started. Rickert notes that this apparent lack of sonic function is often used to dismiss rhetoric as “persuasive, or seductive but in the end unimportant” (p. 131). This argument, which dates back to Plato, casts rhetoric as an impediment to transparency. In response, Rickert counters that something must be amiss when music and sound are “described simultaneously as powerful, indeterminate, and inessential” (p. 132). Further, he argues that a true understanding of rhetorical appeals must account for their transmission through nonverbal and ambient means.
Technology companies, at least, seem to agree. Why else would Microsoft have hired the musician and producer Brian Eno to create the Windows startup music, investing millions of dollars? It is tempting to answer that question flippantly by pointing to the corporation’s wealth. But Rickert (2013) shows that Microsoft “wanted a piece of music not just to evoke an experience of using its operating system but to tailor it in specific ways, in essence, that is, to situate a user’s emotional frame of reference according to certain parameters” (p. 134). This is clearly a rhetorical strategy based on sound.
Cell phone sounds have taken on similar form. Far from the old bells of the medieval church and landline telephones, ring tones, text message alerts, and the like have evolved into personalized statements that tell us about people, their loyalty to subcultures (de Vries and van Elferen, 2010), who is calling, and whether we want to answer before the phone is ever out of our pocket. In fact, some researchers even equate ringtones with self-identity formation and projection (Schneider, 2009). When a friend’s phone recently rang, I immediately recognized the tone as the Empire’s theme from Star Wars. When I asked about that particular tune, he said, in a disgusted tone and without ever looking at the phone, that it signaled his office was calling.
Even traditional sounds, such as African drum beats, are not purely informational but evoke an emotional response that “becomes the impetus for motion that compels action to get things done in the rhetorical situation—solving problems or celebrating an occasion or event” (Bokor, 2014, p. 184). These emotions, combined with the implicit information provided by the beat of the drum, become a surrogate for speech and body language by providing “the junction between human speech (serving as a surrogate form) and body motions (resulting from its impact on the audience)” (p. 175). The beats are chosen for a specific purpose and for a specific situation. Thus, the questions surrounding communication via sound should concern not only the signaling function of sounds but also which types of sounds are being paired with specific circumstances and, perhaps more importantly, who is choosing those sounds and for what purpose. When cell phone users choose a specific ring tone for a specific caller, for example, they control those sounds and, at least to some degree, the rhetorical effect of those sounds. When others choose, however, users may be unaware of their impact or, at the very least, subject to unwanted influence.
For example, slot machines have greatly expanded their use of sound since their invention in the early 1900s. Rivlin (2004) shows that, until the early 1990s, the original sounds (ringing bells to signal winning) had changed very little, but that since that time, the average number of sounds on a slot machine has increased to over four hundred. Dixon et al. (2014) found that those sounds affect slot machine players both psychologically and physiologically, making them prefer machines with upbeat, “winning” sounds and believe that they are winning when they are not. In later research, Dixon et al. (2015) showed that sounds were directly responsible for reinforcing losses to the point that players thought they were winning when they were actually losing.
Therefore, sonic analysis for technical communication must incorporate not only types of sounds but also their rhetorical purpose and the source of their design. It is tempting to dismiss the rhetorical dimensions of sound as something apart from technical communication, but, much like textual information, it is nearly impossible to separate the rhetorical effects of sound from their signaling functions. This is especially true if the sound in question progresses beyond a simple beep. Similarly, it is tempting to dismiss the linguistic properties of sonic communication, but the very meaning and persuasive appeal of sound depend upon paralinguistic functions associated with those sounds. Thus, to be understood as a facet of technical communication, sound must be viewed from an interdisciplinary perspective. Linguistic, paralinguistic, and extralinguistic communication, along with rhetorical appeals, are critical to understanding sonic technical communication.
As a form of audio communication, linguistic communication probably requires the least explanation. We are all accustomed to communicating verbally. While there are tomes of linguistic research concerning verbal communication, for the purposes of a rhetoric of auditory technical communication, there are two main sources: machine voices and the recorded human voice. Much of today’s communication is simply digital audio. Phone answering machines still announce the number of messages awaiting playback, for example. Smart home products are also increasingly vocal in their presentation of information. Amazon’s Echo is a good example of a digital machine that communicates linguistically, and there will be many more in the near future as our homes and offices become smarter and interact with us concerning a range of environmental controls. The second source is the recorded human voice. Commercials, presentations, and training materials are some examples of recorded voices in action. Regardless, both sources are designed to deliver scripted information, and even computer-generated linguistics are designed to mimic the human voice.
Of more interest as a tool for sonic rhetoric in technical communication is paralinguistic communication, which can best be described as a manner of speaking to convey particular meanings. In conversation we routinely process voice inflection, non-linguistic noises, facial expressions, body language, hand gestures, and a host of other signals to more accurately identify meaning. People depend on these cues to understand both the meaning of speech and the speaker’s emotional state (Siegman, 1978). But most conversational cues depend upon interpersonal proximity: to distinguish them, we must be close to the speaker. When proximity is removed from our communications, we must do without those signals, which is why sarcasm is so poorly reflected in emails and why the written phrase “that’s impossible” may mean many different things. The meaning of the phrase depends upon the context of its delivery, vocal expression, and which words are emphasized.
In Reading Sounds: Closed-Captioned Media and Popular Culture (2015), Zdenek refers to paralinguistic speech sounds as either “Paralanguage”—sounds made by speakers that either can’t or shouldn’t be described as distinct speech—or “Manner of speaking identifiers,” which describe a speaker’s distinct way of pronouncing words (p. 39). As he shows, these two types of non-speech identifiers are used in closed captioning to directly offset the lack of paralinguistic cues (such as grunting noises or sarcastic speech) created by silence. Zdenek also shows that extracting those cues from other (linguistic) captioning reveals patterns that can be easily missed (p. 47), and that “captioning is the difference between understanding and misunderstanding” (p. 70). We are dependent upon paralinguistic cues and their delivery patterns for accurate communication. Without them, we are missing part of the message. Complete compensation for the missing cues, even by other means such as closed captioning, is difficult.
Fortunately, both writing and sound can be delivered without proximity, which is precisely why both have been so historically valued. Sound, in addition, can embody rhetorical elements that articulate emotion and stimulate action via auditory cues that move beyond textual word choice. Furthermore, music and some musical instruments, in particular, have an extended sound range and the same ability to impart emotion as the human voice. Sounds can effectively mimic many emotional properties of the human voice and were, in fact, designed to do so, as the human voice is the original musical instrument.
For example, anxiety and stress tend to accelerate speech, while depression tends to slow speech and results in more pauses. Part of the study of prosody concerns describing vocal variations that accompany speech and help to convey meaning. Bhatara, Laukka, and Levitin (2014) state that “In social interactions, we must gauge the emotional state of others in order to behave appropriately. We rely heavily on auditory cues, specifically speech prosody, to do this” (p. 1). Leathers (1997) identifies nine different parts of vocal sound that can be consciously controlled: loudness, pitch, rate, duration, quality, regularity, articulation, pronunciation, and silence (p. 13). These, in turn, are manipulated by the speaker to manage impressions, manage emotions, and regulate communication.
But we do not automatically recognize these cues. As Knapp, Hall, and Horgan (2014) state, “Most of our ability to send and receive nonverbal signals is derived from ‘on-the-job-training,’ the job being the process of daily living” (p. 61). We are trained over time to distinguish sounds and their meanings, both through direct training from others and by watching what others do in response to those sounds. Even when sounds originate from a different culture, we are typically able to learn to decipher them relatively quickly (Collett, 1971).
We do the same thing with non-linguistic sound and, in fact, impart the inflective cues that we have learned from speech onto sonic stimuli. We associate slow, tonally low sounds with sluggishness and depression, for example. Paquette, Peretz, and Belin (2013) show that listeners of a prerecorded set of musical “bursts” were able to identify the correct emotional state associated with the musical piece (happiness, fear, sadness, neutrality) at a rate of 80.4%. Finally, Scherer (2001) played vocal portrayals designed to convey specific emotions for listeners from nine different countries and found similar emotional inference rules across cultures, even though the native languages of the listeners varied.
The rhetorical advantage of such uniformity across cultures is that music and paralinguistic influence are predictable. The disadvantage, if there is one, is that the same sophistical applications of paralinguistic cues have been available since Aristotle’s time and have not always been used for ethical purposes. As our digital environment continues to evolve, auditory cues will become increasingly important, as will sound-producing devices and paralinguistic cues.
In addition to paralinguistic communication, extralinguistic communication affects our perception. Extralinguistic communication is best defined as sound that affects communication apart from language, or, as Zdenek (2015) calls them, “sound effects” that do not emanate from vocal cords (p. 39). Background music is undoubtedly one of the most prolific examples of extralinguistic communication. Advertising is replete with sound as a means of conveying mood and rhetorical intent, as are movies and other types of digital recordings. Pharmaceutical ads offer a quick glimpse into the type of extralinguistic communication that dominates much of our daily media.
A commercial for a depression medication might, for example, start with a discussion of life before the drug, complete with somber music, while the second half of the commercial often features a revitalized person and a much livelier soundtrack. Product branding also routinely involves both music and narration to create a feeling about a particular product or service. For example, ASPCA advertisements feature slow, sad piano music coupled with emotionally charged narration and voice inflection. The purpose of this combination, of course, is to use pathos to draw us into the suffering of the animals. On the other end of the spectrum, a recent advertisement for a psoriasis medication features quicker, more energetic narration coupled with Fleetwood Mac’s “Go Your Own Way” as background music to impart a feeling of excitement and freedom. We are all aware of these manipulations when we stop to think about them, but we generally accept them as part of communication without much thought.
However, imagine the Windows startup sound being The Price is Right’s losing horn (http://www.orangefreesounds.com/price-right-losing-horn/). We know immediately, as Paquette, Peretz, and Belin (2013) show, that this is not an appropriate sound and that it is meant to signify a negative outcome. Our “on-the-job-training” sees to that. But much like paralinguistic sound, many of the rhetorical qualities of extralinguistic sound are subtler, and their effect depends upon why they are being put to use and by whom.
Silence, music’s original alternative, functions much like white space in a document. White space is a break from the narrative, a signal that we are moving on to a new topic or section, hence the silence between chapters in audio books. Rhetorically, however, silence can mean a range of things. Silence while playing a slot machine, for example, indicates that nothing is happening. It is a removal of the audio reward system designed to keep players upbeat and convinced of their monetary progress. Only by playing additional money can the sounds be recovered. But silence can also be used for emphasis, and it is often most effective as a break after a salient point has been made.
Toward a Model for Analyzing Technical Communication Sounds
By combining the rhetorical and linguistic properties of sound with its function as a signaling device, we can begin to analyze more complex combinations of sound. Many of the sonic messages we hear today are but part of an overall message that may also include textual and visual elements. Nevertheless, sound is part of the message and is often ignored. Whereas a single musical instrument or sound can be used to transmit instructions or alert us to a new condition, our digital lives encompass increasingly complex combinations of sounds, including the human voice, machine voices, recorded sounds, designed sounds, and music. Tools must be developed to analyze and classify those combinations and to aid in developing new sounds.
Some work in classifying elements of sound does exist. For example, Ephratt (2009) presents the following table concerning auditory communication as part of the five senses that serve overall communication (Figure 1). In doing so, he includes linguistic, extralinguistic, and paralinguistic communication.
Figure 1. Ephratt’s table of the five senses in auditory communication
| | Symbol (pure sign) | Index (pure + function) |
|---|---|---|
| Human Body Exclusively | Language “verbal” (including prosody) | Sounds, qualities, voices, and sounds other than words (paralinguistic); sound reflexes: cry, yawn, digestion, and other body sounds |
| Beyond the Human Body | Telephone ring tones | Phonokinesics: sounds of shoe tapping; water dripping |
These categories are useful as a part of technical communication auditory analysis. However, as a technical communication tool, any such model would need to include a more interdisciplinary approach. One possible approach is shown below (Figure 2).
Figure 2. A beginning table of sound source for technical communicators
| Sound source | Linguistic | Paralinguistic | Extralinguistic and silence |
|---|---|---|---|
| Human voice | Speech, Narration, Song | Prosody, including pitch, rate, loudness, etc. | Gasps, pauses, vocal noises, etc. |
| Machine/digital sound | Computerized voice, tonal warnings, etc. | Prosody, including pitch, rate, loudness, etc. | Musical Instrument, Digital Sound, Background Music, Animal Sounds, etc. |

| Rhetorical element | Examples |
|---|---|
| Rhetorical appeal | Logos, Pathos, Ethos |
| Rhetorical type or branch | Deliberative, Forensic, Epideictic |
| Topoi | Commonplace or generative |
| Audience | Consumer, corporate, educational, political, etc. |

Linguistic + Auditory + Rhetorical elements + Audience = Effect
It may seem unnatural, at first, to connect linguistic, paralinguistic, and extralinguistic qualities with sounds other than the human voice. Those qualities can, however, be used to describe the affective properties of sound, even when the sound is not that of a computerized voice. In the same way we conversationally manipulate phrasing, voice tone, and volume levels, sounds can be shaped to make auditory stimuli mean different things. For example, most readers are probably familiar with the “charge” call of the bugle, if only from old western movies. Although that call uses the same notes used to play “taps” (the bugle plays only five different notes), the two calls are dramatically different because of the player’s ability to shape the sound with facial muscles and through delivery (pitch, tempo, etc.). Thus, while one song is associated with loss and sadness, the other is designed to stimulate troops and convey a sense of urgency, and there can be no mistaking the difference between the two meanings, even for someone who has never heard them before.
Many familiar sounds can be classified in the same way using the table above. An ambulance siren, for example, does not create a narrative argument, but it is a recognizable argument nonetheless. The siren makes a rational argument, an appeal to logos that persuades us to pull over and let the ambulance pass: someone seriously injured or ill is in the ambulance and must be transported to the hospital as quickly as possible. An ambulance siren can thus be classified as a commonplace, a ready-made argument. We are also asked to empathize with the situation of the passenger; hearing the siren, we may be further influenced by pathos related to our own fear of dying. Finally, the law requires us to yield, and we know that without being told each time in words. Therefore, an ambulance siren also makes a character-based argument, an appeal to ethos.
Of course, there are many possible derivative combinations based on the categories above, and any resultant categorization would still require at least some explanation. But classifications offer a means for analysis that is recognizable and proven. For example, the pharmaceutical advertisement mentioned previously for a depression medication might utilize several different combinations of audio for several different purposes. As such, application of this model might require segmenting the advertisement for analysis, much as storyboarders do when creating those advertisements. Such an analysis might separate the commercial into three sections: an introduction of crisis, a solution, and an aftermath. This is a common formula for pharmaceutical ads, but they often incorporate multiple sound types to create multiple appeals or arguments. As such, analysis of the “crisis” phase might take the following form (Figure 3).
Figure 3. Sample section analysis of the “crisis” phase

| Source | Paralinguistic | Extralinguistic and silence |
| Human voice | Descending pitch, slowed speech, quiet tones | Silence to create emphasis |
| Non-human | Slow tempo, sad tones | Background music, pauses for emphasis |
| Rhetorical type or branch | Commonplace acknowledgement of common symptoms |
| Analysis | Designed to instill a mood of sadness, despair, and a feeling of mounting crisis. Somber music paired with moving narration. Deliberative in that it sets the stage for supplying an answer to the crisis and supplies motivation for action through empathy. Establishes commonplace knowledge of depression effects with depression sufferers. |
Consider the crisis section of this Cymbalta advertisement for comparison: https://www.youtube.com/watch?v=InYASbxhQ3M.
The first section of the advertisement may be said to present an argument based on sadness and/or fear by using linguistic, paralinguistic, and extralinguistic appeals via human voice narration, background music, and other sounds, each of which may be coded individually. By using a commonplace symptom description, for example, the ad’s narration seeks to establish empathy with depression sufferers. The narration can be considered “text” in that it emanates from a script, but the fact that it is recorded and delivered digitally makes it subject to both paralinguistic and extralinguistic influence. We might even question whether, because it is recorded, the narration should be classified as human or non-human. At best, it is human once removed, because a digital recording is still a computer replication.
In contrast, the next section of an advertisement (usually offering a solution to the crisis) may play on different emotions such as hope through increased narrative rate, higher voice pitch, less somber music, and appeals to ethos, such as the ubiquitous commonplace, “My doctor said” or, “Ask your doctor” phrases often heard in pharmaceutical ads. Consider the example from a Symbicort commercial: https://www.youtube.com/watch?v=oG9MxLwnapE.
The analysis of this “solution” section would likely be quite different from the table above. In addition, this table would likely require further description of the advertisement itself, including the script and an analysis of visual elements for a complete picture of the rhetorical elements in play. It may also require an accompanying discussion of the intended audience. But, we already have tools for those facets of communication. What is lacking in technical communication is a sonic analysis tool. Hopefully, this table offers a starting point (albeit incomplete, no doubt) for further discussion about such a tool for technical communicators. If nothing else, it offers a first step in explaining those sounds systematically.
It would be interesting to see what other analyses might look like. Analysis of different audio types, such as ringtones or personal device alerts, for example, might look quite different, especially if they are customizable by users. Computers have now replaced many of the musical instruments of old, but the sounds that we depend upon for meaning remain (can you hear the sound of an incoming email as soon as I mention it?). In the future, interpersonal communication, medicine, financial transactions, and all sorts of everyday activities will depend even more on machines and sound to alert us to new information and to direct our activities. Personalization of those sounds will play a vital role in future technical communication.
Because personal communication devices are so prolific, and because sound is such an integral part of those devices, designing sounds will also become an integral part of technical communication. Wearable technologies, for example, have only begun to tap their potential. While some ideas, such as Google Glass, have met with mixed reviews, technology companies are actively designing replacements for the cell phones we use today. Watches capable of controlling our communications are already popular, and we may expect to see ever less cumbersome and more capable devices in the near future.
We are moving into a time when technology allows for more than simple signaling: something beyond plain language while still incorporating plain language. But sound is underutilized because we still think in terms of textual cues for digital communication. What is to prevent a smart washing machine, for example, from sending a simple sound (perhaps the sound of an active agitator) to a wearable device to inform us that the wash cycle has completed? Or an integrated pharmacy from sending an auditory message that a prescription has been filled? Or that our blood pressure is too high? Or that a child has left a defined perimeter? If horses were capable of learning bugle calls, surely we are capable of reacting to auditory messages without textual cues.
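The signaling scenarios above amount to a simple mapping from household events to non-textual sound cues. The following sketch illustrates the idea in Python; the event names, sound file names, and function are all hypothetical, not a real device API.

```python
# A hypothetical event-to-sound signaling table for smart devices,
# in the spirit of the washing-machine example above.
EVENT_SOUNDS = {
    "wash_cycle_complete": "agitator_loop.wav",   # the sound of an active agitator
    "prescription_filled": "pharmacy_chime.wav",  # an integrated pharmacy's cue
    "blood_pressure_high": "urgent_pulse.wav",    # fast tempo conveys urgency
    "child_left_perimeter": "alert_rising.wav",   # rising pitch conveys alarm
}

def sound_for(event: str) -> str:
    """Return the auditory cue for an event; no textual cue is required."""
    return EVENT_SOUNDS.get(event, "default_notify.wav")
```

The design choice worth noting is that each cue borrows a recognizable or paralinguistically meaningful sound (an agitator, a rising pitch) rather than an arbitrary beep, so the listener can decode the message without reading anything.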
These are the simple, signaling sounds referred to previously, of course. But what of a set of instructions? It seems perfectly logical for a set of auditory instructions to be delivered via wearable technology. But instructions may require warnings, for example, and would also depend upon paralinguistic cues to be maximally effective. As sounds continue to proliferate as a digital means of conveying technical information, we will be forced to design new sounds and to digitize existing sounds. Therein lies the additional value of this type of tool. Technical sound designers will need to be aware of their impact, both practically and ethically, lest they design sounds that are misunderstood or manipulative.
More sounds are designed to convey specific messages every day. Personalized communication and technical information through sound will continue to proliferate in the future. Healthcare, especially personal health monitoring, is one avenue that is certain to grow rapidly, as is smart home technology. With that growth will come the opportunity for technology users and designers to personalize the sounds that alert them to new information.
The digital change that we have seen since the 1960s and 1970s is this: technical communication has transformed from being figuratively like a bugle call to being a bugle call, complete with the sonic qualities that come with audio messages. This type of communication has been only marginally viewed as a proper concern of technical communicators, but it should be incorporated into our discipline. Our field requires systematic studies of such sounds, studies that will lead to a paradigm for properly transmitting technical information through sound.
Arnold, J. H., & Goodson, C. (2012). Resounding community: The history and meaning of medieval church bells. Viator, 43(1), 99–130.
Bhatara, A., Laukka, P., & Levitin, D. J. (2014). Expression of emotion in music and vocal communication: Introduction to the research topic. Frontiers in Psychology, 5, 1–2.
Bokor, M. J. K. (2014). When the drum speaks: The rhetoric of motion, emotion, and action in African societies. Rhetorica: A Journal of the History of Rhetoric, 32(2), 165–194.
Britton, W. E. (1965). What is technical writing? College Composition and Communication, 16(2), 113–116.
Bugle Calls in the English Army. (1885, March 28). St. Louis Globe-Democrat, p. 7.
Fortier, G. (2004). Bugle calls. Esprit de Corps, 11(3), 14.
Chanta-Martin, N. (2015). Dance perspectives on drum language: A Yoruba example. Acta Ethnographica Hungarica, 60(1), 10–17.
Collett, P. (1971). Training Englishmen in the non-verbal behaviour of Arabs. International Journal of Psychology, 6(3), 209–215.
Corbin, A. (1998). Village bells: Sound and meaning in the 19th-century French countryside. New York, NY: Columbia University Press.
de Vries, I., & van Elferen, I. (2010). The musical madeleine: Communication, performance, and identity in musical ringtones. Popular Music and Society, 33(1), 61–74.
Dixon, M. J., Harrigan, K. A., Santesso, D. L., Graydon, C., Fugelsang, J. A., & Collins, K. (2014). The impact of sound in modern multiline video slot machine play. Journal of Gambling Studies, 30, 913–929.
Dixon, M. J., Harrigan, K. A., Santesso, D. L., Graydon, C., Fugelsang, J. A., & Collins, K. (2015). Using sound to unmask losses disguised as wins in multiline slot machines. Journal of Gambling Studies, 31, 183–196.
Dobrin, D. N. (1985). Is technical writing particularly objective? College English, 47, 237–251.
Goodale, G. (2011). Sonic persuasion: Reading sound in the recorded age (Studies in Sensory History). Champaign, IL: University of Illinois Press.
Katzoff, S. (1964). Clarity in technical reporting. NASA.
Knapp, M. L., Hall, J. A., & Horgan, T. G. (2014). Nonverbal communication in human interaction. Boston, MA: Wadsworth.
Leathers, D. G. (1997). Successful nonverbal communication: Principles and applications. Boston, MA: Allyn & Bacon.
Lynn’s Carnival: Capt. Dodd’s Cavalrymen Are the Best Drilled in the World. (1897, October 6). Boston Daily Advertiser, p. 8.
Malone, E. A. (2007). Historical studies of technical communication in the United States and England: A fifteen-year retrospection and guide to resources. IEEE Transactions on Professional Communication, 50(4), 333–349.
Miller, C. R. (1979). A humanistic rationale for technical writing. College English, 40(6), 610–617.
Paquette, S., Peretz, I., & Belin, P. (2013). The “Musical Emotional Bursts”: A validated set of musical affect bursts to investigate auditory affective processing. Frontiers in Psychology, 4, 1–7.
Rathjens, D. (1985). The seven components of clarity in technical writing. IEEE Transactions on Professional Communication, 28(4), 42–46.
Rickert, T. J. (2013). Ambient rhetoric: The attunements of rhetorical being. Pittsburgh, PA: University of Pittsburgh Press.
Rivlin, G. (2004, May 9). The tug of the newfangled slot machines. New York Times. Retrieved from http://www.owlfoundation.net/web-pix/pdf-files/slots-casino-addiction-gambling.pdf
Rutter, R. (1991). History, rhetoric, and humanism: Toward a more comprehensive definition of technical communication. Journal of Technical Writing and Communication, 21, 133–153.
Sanders, S. P. (1988). How can technical writing be persuasive? In L. Beene & P. White (Eds.), Solving problems in technical writing (pp. 55–78). New York, NY: Oxford University Press.
Sawyer, R. D. (2011). Ancient Chinese warfare. New York, NY: Basic Books.
Scherer, K. R., Banse, R., & Wallbott, H. G. (2001). Emotion inferences from vocal expression correlate across languages and cultures. Journal of Cross-Cultural Psychology, 32, 76–92.
Schneider, C. J. (2009). The music ringtone as an identity management device: A research note. Studies in Symbolic Interaction, 33, 35–45.
Siegman, A. W. (1978). The telltale voice: Nonverbal messages of verbal communication. In A. W. Siegman & J. M. Feldstein (Eds.), Nonverbal behavior and communication (pp. 183–243). Mahwah, NJ: Lawrence Erlbaum Associates.
Stewart, J. (2010). Plain language: From movement to profession. Australian Journal of Communication, 37(2), 51–72.
Tebeaux, E. (1991). Technical communication, literary theory, and English studies: Stasis, change, and the problem of meaning. The Technical Writing Teacher, 18(1), 1–27.
U.S. Coast Guard. (n.d.). Rule 34: Maneuvering and warning signals. Retrieved from http://www.navcen.uscg.gov/?pageName=Rule34
Zdenek, S. (2015). Reading sounds: Closed-captioned media and popular culture. Chicago, IL: University of Chicago Press.
About the Author
David Wright is Associate Professor of English and Technical Communication at Missouri University of Science and Technology. Prior to his university appointment, Dr. Wright worked for the NASA Aerospace Education Center, as an instructional designer for the State of Oklahoma, and in the software industry. His research interests include technology diffusion, technical marketing, and strategic communication. He is available at email@example.com.
Manuscript received 19 February 2018, revised 7 May 2018; accepted 22 July 2018.