66.4, November 2019

Communication Strategies for Diagnosing Technical Problems at a Help Desk

By Vincent D. Robles

Abstract

Purpose: This study intended to help technical support providers understand different communication strategies for diagnosing users’ technical problems and how users may communicate those problems. Also, the study intended to help future researchers to better understand users’ experience in seeking technical help.

Method: To contribute to this research area, I examined 18 help desk visits between 11 users and 6 technical support providers at an in-person help desk at a four-year university in the Midwest United States. I identified the stage of each conversation devoted to diagnosing technical problems, closely analyzed the dialogue of both speakers during this stage, and explored associations between the communication strategies they used and user satisfaction as measured by a customer-support satisfaction survey.

Results: The statistical tests suggested that more dialogue about the users’ technical problem seems to associate with user satisfaction. The tests did not reveal a strong association between specific communication strategies and user satisfaction. The analysis of the visits showed how users shared their experiences through narratives and minimal responses, and how technical support providers used inquiries to understand user needs and experiences, providing a framework for understanding what the strategies look like.

Conclusions: This research provides a reliable way of identifying and categorizing the ways two speakers communicate to diagnose a technical problem, which provides a framework for new technical support providers to communicate during this part of the discussion.

Keywords: technical support, help communication, problem diagnosing, one-to-one conversations

Practitioner’s Takeaways:

  • Phrase inquiries according to specific user information: to understand users’ needs, to understand users’ experiences, to understand users’ previous actions, and to understand where users experienced problems.
  • Listen for associated answers to types of inquiries, despite users’ propensities to give various types of information in response to these inquiries.
  • Use open-ended questions to avoid miscommunication and to avoid using time to repair misunderstanding.
  • Promote more discussion, especially from users, when diagnosing a technical problem, which appears to strongly associate with user satisfaction.
  • Conduct small-scale analyses of technical support conversations using similar research procedures to understand how well technical support providers are communicating with your users.

Introduction

Users commonly need help with a technology, and technical communicators use written and visual communication to mitigate the strains this need places on users. To better understand and improve this communication, technical communication researchers usually explore relatively asynchronous types of technical communication (e.g., documentation, forums, wikis), but because technical support providers (sometimes called customer support providers) communicate similar technical information in one-to-one synchronous interactions, technical communicators should find interest in learning how this communication works. This study explores such interactions.

During these interactions, technical support providers communicate with users to understand and resolve users’ problems (Clark, Murfett, Rogers, & Ang, 2012; Xu, Wang, Forey, & Li, 2010). The existing research on these visits outlines the benefits they give to both users and organizations, but current research has not fully explored what the communication looks like in these visits. In identifying this lack of research, Lam and Hannah (2016), who explored technical support on Twitter, argued that “the relative lack of recent, specific technical communication scholarship about help desk interactions” should prompt technical communication researchers to “consider more fully how technical communicators can and ought to design for and deliver customer service as part of the technical support work they do” (p. 39). A study on technical support interactions contributes to our understanding of how users receive the help that technical support providers give.

As customers, users value person-to-person help desks because the information they receive addresses their specific goals and concerns (Steehouder, 2003; van Velsen, Steehouder, & de Jong, 2007). Person-to-person help desks provide “the total user support package” because they complement existing technical communication infrastructure such as documentation or support forums (van Velsen, Steehouder, & de Jong, 2007, p. 228). Also, users value help desks because they assume that technical support providers will resolve their technical problems quickly and that technical support providers will express concern and investment for the specific problems the users face (Callaghan & Thompson, 2002). Documentation may not fulfill these expectations for quick and empathetic help, especially when users may feel it easier and quicker to ask for help rather than to read documentation.

Because such communication helps users and their individual needs, it maintains user loyalty and technology acceptance. This user loyalty and acceptance bring value to organizations. When organizations provide technical support for the technologies they produce for customers or for the technologies they require their employees to use, organizations maintain trust with these customers or employees, and they also enable these users to develop more confidence and trust in the technologies themselves (Bell, Auh, & Smalley, 2005; Hall, Verghis, Stockton, & Goh, 2014; Lee, Hsieh, & Ma, 2011; Nguyen, Groth, Walsh, & Hennig-Thurau, 2014). Thus, promoting user satisfaction with these visits serves not only the goals and concerns of users but also the goals and concerns of organizations.

Other fields such as organizational behavior (Barley, 1996; Das, 2002; Pentland, 2002), management studies and management information science (Armistead, Kiely, Hole, & Prescott, 2002; Burgers, de Ruyter, Keen, & Streukens, 2000; Callaghan & Thompson, 2002), and marketing (Bell, Auh, & Smalley, 2005; Hall, Verghis, Stockton, & Goh, 2014; Nguyen, Groth, Walsh, & Hennig-Thurau, 2014) have explored the important role these conversations play within businesses, but they do not analyze the conversation itself, despite agreeing that service quality links to many business metrics such as customer loyalty, word-of-mouth referral, price insensitivity, sales growth, and market share (Bell, Auh, & Smalley, 2005, p. 169). Research on the communication between users and technical support providers can give technical support providers different strategies for communicating with users and can provide insight into what they can expect when users communicate about their technical problems.

This article describes the communication of technical support providers in 18 face-to-face technical support visits at a help desk. Many descriptive studies of conversations have contributed to research using smaller samples of this type, especially when the goal was to provide a framework for understanding features of those conversations and for setting up hypotheses for future research (Beldad & Steehouder, 2015; Walker & Elias, 1987; Mackiewicz & Thompson, 2014; 2015). These smaller sample studies mirror workplace research in which small samples can help practitioners to improve internal practices without the time-consuming and, at times, costly requirements of larger studies. As organizations turn to text analysis to understand their customers, a study like this one can model text analysis research that organizations can adapt for their own contexts.

In this article, I focus on the communication technical support providers used for diagnosing problems because this stage is part of the two-part process of technical support (first diagnosis and then resolution). I sought to answer three research questions. I could answer the first two questions using a qualitative discourse analysis: (1) In help desk visits, how do technical support providers communicate to diagnose problems? (2) In help desk visits, how do users communicate to diagnose problems? Also, to explore these conversations further and to help develop hypotheses for future research, I explored the answer to a final research question with a chi-square test of independence: (3) What communication strategies for diagnosing technical problems are associated with user satisfaction?

Literature Review

While technical and customer support research shows that quality customer service does depend on technical support providers’ meeting efficiency metrics (Armistead, Kiely, Hole, & Prescott, 2002; Burgers, de Ruyter, Keen, & Streukens, 2000), this study focuses on the “interpersonal performance” described by Armistead et al. (2002), specifically, the communication.

Meta-analyses of hiring surveys used by human resources to identify quality candidates found that technical or customer support providers must have personality characteristics consistent with a service orientation (Bettencourt & Brown, 1997; Burgers, de Ruyter, Keen, & Streukens, 2000; Frei & McDaniel, 1998; Mount, Barrick, & Stewart, 1998). However, the research in management, customer service, and marketing does not provide specific ways that newly hired workers can deliver that service through their communication.

Technical communication research has done considerable work to explore these conversations. First, studies have been quite unified in understanding how the conversations are structured. Baker, Emmison, and Firth (2005) identified seven main phases of technical support conversations: Opening, Problem Analysis, Diagnosis, Solution, Instruction, Evaluation, and Closing. Steehouder and Hartman (2003) and Steehouder (2007) used this phase structure in their analyses of technical support transcripts. Clark, Murfett, Rogers, and Ang (2012) identified six “phases”: Greeting, Identifying, Defining, Negotiating, Resolving, and Closing (p. 128). Similarly, Xu, Wang, Forey, and Li (2010) found five “moves”: Greeting, Purpose, Information, Service, and Farewell (pp. 458–459). Underlying these structures is what Agar (1985) calls institutional discourse, which he theorizes is composed of three main stages: Diagnosis, Directives, and Reporting.

Ultimately, these studies agree that the conversations involve an opening component and a closing component. Within these two bookends, they agree that the conversation is devoted first to diagnosing the problem and then devoted to resolving the problem. This well-established structure provides researchers and practitioners a global understanding of the conversations. Research on the phrases and sentences within these larger components, however, could provide a fuller grasp of the conversations, yet few studies have done this examination.

The research that looks more closely at the conversation in this way uses some variation of conversation analysis, which focuses on speaker-to-speaker turn-taking, to explore the way the turn-taking reveals features of the conversation, such as how technical competence is expressed (Baker, Emmison, & Firth, 2005) or what moments of miscommunication look like (Beldad & Steehouder, 2015; Kelly, 2014). This approach lends helpful insight into the nature of these conversations or specific exchange moments within them, but it does not discuss the way individual speakers use language to accomplish goals or intentions. In other words, this approach does not show speakers’ strategies.

Two studies, however, focus on communication strategies. Clark, Murfett, Rogers, and Ang (2012) described empathetic communication strategies in customer support calls, and, for each communication strategy, they also described what they called “inhibitors,” which worked against the intentions of the empathetic communication strategy. Their study provides a rich groundwork on which to build future study of strategies associated with satisfactory support conversations. While empathy is a key personality marker of successful support providers (Bordoloi, 2004; Burgers, de Ruyter, Keen, & Streukens, 2000; D’Cruz & Noronha, 2008; Dormann & Zijlstra, 2003; Pontes & Kelly, 2000), successful support providers should also be efficacious (execute the work process) (Bearden, Malhotra, & Uscategui, 1998).

Steehouder (2007) provides such a focus on the way the speakers communicate to execute the work process. He provides an important, and thus far the only, examination of communication when two people work together in “formulating the problem” (p. 2). For the technical support providers, he found the following strategies (pp. 3, 7): acknowledgments about the users’ speculations, pauses to prime users to elaborate, inquiries, and simulations. For users, he found the following strategies (p. 2): assertions about the urgency of the problem, expressions about their lack of expertise, reports about previous attempts to solve the problem, descriptions of events or technology, and speculations about the cause of the problem.

Steehouder (2007) notes that many users employ “historical reports” of the events that occurred, using past tense verbs (p. 3). He also notes that some users with low technical expertise may be unsure “what might be relevant and what is not” for explaining their problem (p. 7). Steehouder’s paper provides an excellent framework. He notes, however, that research could explore such strategies in more detail (p. 8). Specifically, the close analysis Steehouder presented from an example transcript does not account for how the analysis was conducted or how it might apply to other visits beyond the one presented. This article builds on Steehouder’s research by demonstrating how to identify the strategies within the conversations, which can help practitioners to understand the technical support communication within their own organizations. This study also identifies more examples and categories of strategies, which gives technical support providers a range of strategies for communicating with users, and it gives future researchers hypotheses for large-scale research projects on this topic. Lastly, this study explores the relationship between strategies of the conversation and user satisfaction, a variable that Steehouder’s (2007) work did not explore but which could engage practitioners interested not only in keeping these conversations efficient but also in keeping them effective and satisfying to the users seeking help.

Methods

The following sections outline details about the research site and participants, the data collection, and the data reduction and analysis.

Research Site and Participants

The setting for this study was the technical support team for an English department at a large, Midwestern university in the United States. I discuss later how this university setting still provides implications for industry practices. The support team’s supervisor, affiliated with the larger university online learning initiatives, agreed to the study, and, after receiving approval for this study from the Institutional Review Board (IRB) at the university, I recruited technical support providers and users. The IRB stipulated that I not reveal identifying characteristics, such as the name of the university and, especially, the names of participants. In total, this study explored the communication from six providers. These providers were graduate students from various disciplines—English, Technical and Professional Communication, and Linguistics—who demonstrated proficiency in the technologies they supported and in explaining those technologies to others, and who were hired by the support team supervisor. Users of these technical systems were English course instructors. Enrollment in the study entailed participants’ consent to the data collection procedures described later in this section.

The department used an open-source learning management system (Moodle) to administer course content for many of its courses. Users could visit the help desk for this system to address issues related to course and instructional design, and system features and procedures. The department also used an open-source electronic portfolio system (ePortfolio). The system allowed students to build online portfolios of their work. During the study, the department also developed a secondary use for the ePortfolio, called “eProfiles,” which allowed instructors and students to create professional profile websites for professional development purposes. Because instructors had to help students use these systems, instructors could seek help in effectively employing them in their courses or for their own professional development. Lastly, the department administered a teaching activities repository to which instructors could submit and retrieve teaching activities. Users could seek help in either retrieving or submitting activities.

Although 40 users enrolled, only 11 visited the support office throughout the seven-month data collection process, and some of them visited more than once. I told users that they could use their own computers or the support team’s computers to facilitate their support sessions and that they should only visit the support office for genuine technical problems they faced. Because these problems had to be genuine, they occurred incidentally and intermittently. Given time constraints, the data collection process had to end, though the data sample does mirror those of similar qualitative studies, as I discuss later.

Data Collection Procedures

Once they had confirmed that visiting users were part of the study, the technical support providers began recording the conversation. Audio recording captured the conversation for the analysis, while the screen recording and facial recordings captured what Brown and Yule (1983) call “paralinguistic cues” (p. 4), which speakers can use to reinforce the meaning of their communication. Paralinguistic cues, such as leaning forward, laughing, smiling, breathing, screen activity, and others, provided cues for interpreting the speakers’ meaning. After the recording began, the technical support visits continued as they usually would without a study going on. Organizations could feasibly replicate these procedures if they wanted to examine their technical support visits or to provide models of particularly effective visits, which they could use to train new practitioners. Further, they could use these recordings for performance evaluations.

After the session was over, the users and the technical support providers each completed a post-session satisfaction survey designed for each of their roles and based on a customer-service survey designed by Burgers, de Ruyter, Keen, and Streukens (2000) (see Appendix A for this survey). For this analysis, only the users’ scores are reported and discussed, since their perceptions of the service, rather than the providers’, serve as the most realistic business case for gauging how well a technical support visit went. Organizations could use their own surveys or the one offered in this study. Taken together with the recordings, organizations could examine highly rated visits using the analysis procedures I outline later.

I later interviewed technical support providers by walking each of them through selected excerpts from two visits they facilitated, except for one provider who only had one visit in the entire study. I played back the video and audio recordings of these moments for the providers and asked them what they were trying to accomplish with their communication. Their answers helped me to interpret the data and lent credibility to my interpretations. Organizations need not follow this procedure for their own research, but they could do so if they wanted to gain more insight into the practices of their own providers by asking them to consider specific visits with users.

As mentioned, data were collected for seven months and reached 20 visits total, given the time constraints associated with waiting for enrolled users to visit the help desk. This number provides enough data to develop a way to classify the language and follows similar sample sizes of other qualitative studies of this nature. For example, Beldad and Steehouder (2015) analyzed 25 technical support calls, Walker and Elias (1987) analyzed 10 tutoring visits, and Mackiewicz and Thompson (2014; 2015) also analyzed 10 tutoring visits. Only when the research moved beyond the qualitative phase to a more quantitative focus did the sample size grow. For example, Mackiewicz (2017) built on her study of 10 visits by examining 47 visits for a larger scale corpus analysis. Indeed, Charney (1996) argued that while results from qualitative studies with small sample sizes and local contexts, such as this study, should not be touted as definitive results on a given research area, they are still the beginning of a research trajectory from which hypotheses can be developed (p. 591); such hypotheses are what I contribute in the conclusion to this article. Organizations could easily (and more quickly) collect a sample like this one to gain insight about their own visits, especially if these samples are collected based on the user satisfaction scores. Echoing Nielsen and Landauer’s (1993) well-known argument, later popularized on Nielsen’s blog (2000), organizations could quickly gain the insight they need with smaller samples.

Data Reduction

Two undergraduate students and I transcribed the visits. For each visit, I checked the transcription three times before data coding, lending integrity to the data. Organizations could feasibly use transcription services for the visits they are interested in exploring, provided they give the appropriate legal considerations. Or they could skip transcription because they could simply consider the recordings themselves and take notes about moments within the recording.

I divided the transcript from each visit to mark the beginning and end points of where I saw changes in the topic episodes. As Mackiewicz and Thompson (2015) state, topic episodes are identified by “monologic or dialogic strings of conversation that coherently address one subject” (p. 16). I found strings of the conversation that cohered around one subject and also looked for language markers that signaled a change (“Now” and “I also had another question”). At the phrase level, I segmented the speakers’ speech by dividing it whenever the purpose of the speech changed, which is a conventional approach to dividing streams of language so they can be analyzed (Geisler, 2018; Rourke, Anderson, Garrison, & Archer, 2000). Organizations could feasibly find these episodes were they to consider such transitional moments, as the sketch below illustrates.
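
To illustrate what looking for such transitional moments might involve at scale, the following sketch (in Python, and not part of the study’s own procedure) flags candidate topic-episode boundaries by scanning a transcript for transition markers such as “Now” or “I also had another question.” The marker list, transcript format, and function name are illustrative assumptions; a human coder would still confirm each boundary.

```python
# A minimal, hypothetical sketch for flagging candidate topic-episode boundaries.
# It only surfaces candidates; it does not replace human coding of episodes.

TRANSITION_MARKERS = ("now", "i also had another question", "one more thing")

def flag_candidate_boundaries(turns):
    """Return indices of turns that begin with a known transition marker.

    `turns` is a list of (speaker, utterance) tuples in conversation order.
    """
    candidates = []
    for i, (speaker, utterance) in enumerate(turns):
        text = utterance.strip().lower()
        if any(text.startswith(marker) for marker in TRANSITION_MARKERS):
            candidates.append(i)
    return candidates

# Example with a made-up transcript excerpt:
transcript = [
    ("U", "So the quiz keeps showing questions from the next topic."),
    ("TS", "Okay, let me look at the question bank."),
    ("U", "I also had another question about the gradebook."),
]
print(flag_candidate_boundaries(transcript))  # -> [2]
```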

Data Coding Scheme

I employed a discourse analysis of the visits between these users and technical support providers. Discourse analysis studies the language in use when participants accomplish communication goals (Paltridge, 2012, p. 1). Specifically, I analyzed communication that signaled actions that participants accomplished with their speech, called Speech Acts (Brown and Yule, 1983; Searle, 1976).

To partition conversations into stages, I developed a coding scheme based on the studies outlined previously in the literature review. Specifically, I used my synthesis of the different coding schemes to create four larger categories that captured what each of the previous researchers had found: Identifying, Defining, Resolving, and Closing. During the code development process, an independent coder and I determined one additional code would better describe the data. We found that a code about attempts to solve the problem could describe stages of the visit when the problem was not successfully resolved and either the visit had to continue another time or the problem was seemingly unresolvable. An independent coder and I coded 10% of the data, and, after three rounds of improving the codes, we arrived at acceptable reliability measures between us using Cohen’s Kappa (k=.90). Table 1 describes the coding scheme for stages. Organizations could feasibly employ this coding scheme to partition visits they want to examine.
Table 1. Stages coding scheme

Stage Definition
Identifying Identifying the user as part of the technical system such as Moodle, including obtaining the user’s name and any other pertinent identifying information about the user, such as course section.
Defining Outlining, summarizing, and/or indicating that there is a problem or question. Often prompted by the user but could also be prompted by the technical support provider.
Attempting Working through possible solutions to the problem or possible answers to the question. The problem does not get resolved fully or the question answered fully in that session. The technical support provider or user may not be satisfied with a proposed resolution or answer, or the user and/or technical support provider move on to a new problem without a resolution or answer.
Resolving Providing information, instruction, and/or solutions for a problem and confirming a specific problem is resolved. The technical support provider and user are satisfied with the resolutions or answers. The problem has to be resolved or the question answered in that session. Making plans to solve the problem at another time (e.g., following-up through e-mail or another meeting, or trying something later at home) does not mean the problem or question was resolved or answered.
Closing Confirming that the user is satisfied and has no more problems to talk about; also saying goodbye and/or setting up a follow-up meeting or email conversation. Includes taking the post-session survey if recorded.

Developing a scheme for categorizing communication strategies was more challenging because no systematic coding scheme existed for analyzing these visits, though the results from Steehouder’s (2007) study provided a beginning point. Using coding schemes developed for one-to-one tutoring visits (Mackiewicz & Thompson, 2015), I adapted the codes so that they were more relevant to a technical support visit. An independent coder and I coded 10% of the data, and, after several rounds of improving the codes, we arrived at acceptable reliability measures using Cohen’s Kappa (k=.79). I then coded the remainder of the data set using the reliable coding schemes, which organizations can use or modify for their own visits. Further, while this code development process was time-consuming and necessary for academic research of this type that uses discourse analysis (Geisler, 2018), organizations need not spend as much effort on reaching reliability measurements because they can consider (and modify) the coding scheme in Table 2 for their own workplace sites.
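
As a rough illustration of the reliability checks described above, the following Python sketch computes Cohen’s Kappa for two coders’ labels over the same sample of segments. The labels are hypothetical, and scikit-learn’s cohen_kappa_score function is assumed to be available; the study does not specify which software it used.

```python
# A minimal sketch of an intercoder-reliability check using Cohen's Kappa.
# The two coders' labels for the same segments are stored as parallel lists.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["Defining", "Defining", "Resolving", "Closing", "Identifying", "Attempting"]
coder_2 = ["Defining", "Defining", "Resolving", "Closing", "Defining", "Attempting"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's Kappa: {kappa:.2f}")  # the study reports k=.90 (stages) and k=.79 (strategies)
```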

Instrumentation

To determine a satisfactory or unsatisfactory outcome from a visit, I used a post-session survey (Appendix A). I designed one post-session survey for users based on Burgers, de Ruyter, Keen, and Streukens’ (2000) instrument, which was developed and validated to determine customers’ expectations for support conversations. Two other researchers reviewed the survey, providing feedback on it for clarity and ease of use. This process checked face validity, which is the informed examination of an instrument to determine if the instrument appears to measure what the researcher intends (Wrench, Thomas-Maddox, Richmond, & McCroskey, 2013, p. 234). I could not adapt the wording of the instrument much further than replacing key words (such as “technical support provider” instead of “call center representative”) to avoid tampering with an instrument that was already validated. Also, because this analysis focused on the process of diagnosing the technical problem, I concentrated on three items from this survey to determine satisfactory conversations. These three constructs all related to the process of diagnosing the problem: (1) the technical support person helped define specifically the problem(s); (2) the technical support person was able to imagine what I was going through with my problem(s); and (3) the technical support person treated my problem(s) as important.

To determine whether or not a visit was satisfactory, I recoded the six-point scale so that “Strongly Agree” counted as 6, “Agree” counted as 5, “Agree Somewhat” counted as 4, “Disagree Somewhat” counted as 3, “Disagree” counted as 2, and “Strongly Disagree” counted as 1.

Table 2. Communication strategy coding scheme

Communication Strategy Description for Technical Support Provider and User
Inquiring to understand needs or background information Inquiring to understand or confirm listener’s needs or background information
Inquiring to learn about the technology Inquiring to learn about the technology, its settings or features, and/or how to use them
Inquiring to check comprehension Inquiring to check if listener comprehends what speaker said, did, or saw/sees
Inquiring to gain permission Inquiring to gain permission to do something at that moment during the conversation
Stating needs Stating needs for the technology’s settings/features or for the session’s procedures
Giving background information Giving background information about the problem or question
Confirming or denying Confirming or denying what listener or speaker said, did, or asked with a yes- or no-type answer, an I-don’t-know-type answer, or a noncommittal answer
Declaring the problem or problems as solved Declaring a problem as solved or a question answered
Judging the technology Judging the technology and/or its features through criticism or frustration
Observing Describing what speaker sees, hears, or notices while using or observing the technology at that moment during the session
Speculating Speculating about a problem or question
Signaling Signaling what speaker is doing at that moment or what speaker will do next
Planning Planning what to do either within or after the session
Showing how the technology works or how to do something with it Showing listener how the technology works or how to do something with it by using the technology itself
Explaining how the technology works or how to do something with it Explaining to listener how the technology works or how to do something with it without using the technology itself
Telling Telling listener what to do at that moment in the session

I then summed the scores from the three items for each user participant in one visit and determined the mean for that participant (sum of scores/number of questions). The outcome for each visit was thus the mean of the user’s responses to the three questions. I then compared the mean for each visit to the mean across all 20 visits. For example, I compared U11’s mean for all the questions in visit 1 to the mean user response for all the visits. The mean for the users across all 20 visits was 5.77 (SD=0.44); therefore, U11’s mean for visit 1 was just below average (M=5.67, SD=0.38).

However, all participants tended to evaluate their experience highly, probably because they were trying to be polite to the technical support providers they may have known (though they submitted their responses to a private ballot box for which only I had the key). To account for the narrow range of satisfaction scores (5.33–6.00), I categorized visits that fell within one-half standard deviation of the mean as average. Specifically, 5.77 was the mean for all conversations and 0.44 was the SD; thus, one-half standard deviation was 0.22 (0.44/2=0.22). Therefore, above average conversations rated 5.99 or higher (5.77+0.22=5.99), and below average conversations rated 5.55 or lower (5.77-0.22=5.55). These cutoffs ensured that only cases at the ends of the scale contributed to the analysis. Table 3 describes the cases and which category (above average, average, or below average) described each.
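
The following Python sketch illustrates the scoring and grouping procedure just described, assuming the three survey items for each visit have already been recoded to the 1–6 scale. The item scores below are hypothetical; the grouping follows the half-standard-deviation rule reported above.

```python
# A minimal sketch of per-visit scoring and half-standard-deviation grouping.
from statistics import mean, pstdev

# Hypothetical item scores (1-6) for the three defining-related survey items, per visit.
visit_items = {
    "visit_1": [6, 5, 6],
    "visit_2": [6, 6, 6],
    "visit_3": [5, 5, 6],
}

# Mean of the three items per visit, then the grand mean and half SD across visits.
visit_means = {visit: mean(items) for visit, items in visit_items.items()}
grand_mean = mean(visit_means.values())
half_sd = pstdev(visit_means.values()) / 2  # one-half standard deviation

def categorize(score):
    """Group a visit as above average, below average, or average."""
    if score >= grand_mean + half_sd:
        return "Above Average"
    if score <= grand_mean - half_sd:
        return "Below Average"
    return "Average"

for visit, score in visit_means.items():
    print(visit, round(score, 2), categorize(score))
```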

Table 3. Average, above average, and below average technical support conversations

Conversation User Technical Support Provider Length Mean Score Group
1 U11 TS2 6:11 5.67 Average
2 U11 TS2 10:16 6.00 Above Average
3 U2 TS4 1:56 6.00 Above Average
4 U20 TS4 31:02 6.00 Above Average
5 U14 TS2 8:55 5.33 Below Average
6 U2 TS7 11:47 5.33 Below Average
7 U19 TS7 7:33 6.00 Above Average
8 U2 TS7 8:58 5.33 Below Average
9 U2 TS8 6:27 6.00 Above Average
10 U23 TS2 10:31 5.33 Below Average
11 U2 TS6 11:12 6.00 Above Average
12 U5 TS2 1:57 5.33 Below Average
13 U40 TS2 17:36 6.00 Above Average
14 U2 TS8 7:39 6.00 Above Average
15 U2 TS3 1:49 6.00 Above Average
16 U40 TS7 33:12 6.00 Above Average
17 U35 TS3 12:52 5.67 Average
18 U32 TS2 5:47 5.33 Below Average
19 U41 TS7 27:42 6.00 Above Average
20 U40 TS2 15:24 6.00 Above Average

Ultimately, the data had more above average than below average visits (12 and 6), with only two average visits. This partition of the above average and below average visits allowed me to compare the communication strategies for each speaker in order to determine the strategies that appeared to be associated with more satisfactory visits. I only explored proportional frequencies to normalize the data and control for the length of the visits. Further, chi-square tests of independence allowed me to determine significant differences between both groups while accounting for length differences between cases (the visits) and different sample sizes (12 and 6).

Results

As indicated, this article focuses only on the communication strategies found in the defining stage, which serves the global purpose of diagnosing the technical problems. For this analysis, I only report on results from the above average and below average visits, excluding average visits. I give the quantitative analysis before the qualitative analysis. This organizational approach follows the presentation of results that Walker and Elias (1987) employed in their similar qualitative study.

Most Frequent Communication Strategies

Table 4 describes the raw frequencies of communication strategies in the 18 visits. The total number of segments devoted to the Defining Stage was 510 (of the 1,795 segments in the 18 visits, or 28% of all segments). The most frequent strategy from both speakers in the defining stage was “giving background information” (133 times), followed by “inquiring to understand needs or background information” (86 times) and “confirming or denying” (79 times).

To explore the way strategies’ frequencies associated with each group (below average and above average), I conducted a chi-square test of independence for each of the strategies except for those with expected frequencies below 5 (Bewick, Cheek, & Ball, 2004). This test revealed that no individual strategies associated with the above average group in a statistically significant way (see Table 5). However, the test revealed that more talk, overall, associated with the above average group in a statistically significant way (p=0.00). To test the strength of this association, I conducted a Phi and Cramer’s V test, and I found the association strong (V=0.424). I discuss a potential conclusion and implication of this result in a later section.
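
As an illustration of this part of the analysis, the following Python sketch runs a chi-square test of independence and computes Cramer’s V for a single strategy, assuming the test compares that strategy’s above- and below-average frequencies against those of all remaining strategies; this layout reproduces the expected frequencies reported in Table 5 (for example, 94.7 and 38.3 for “giving background information”). SciPy is assumed to be available; the study does not specify its statistical software.

```python
# A minimal sketch of a chi-square test of independence plus Cramer's V for one strategy.
import math
from scipy.stats import chi2_contingency

above_total, below_total = 363, 147          # all defining-stage segments in each group (Table 5)
strategy_above, strategy_below = 92, 41      # "giving background information" (Table 5)

# 2x2 table: the strategy vs. all other strategies, by group.
table = [
    [strategy_above, strategy_below],
    [above_total - strategy_above, below_total - strategy_below],
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = above_total + below_total
cramers_v = math.sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))

print(f"chi2={chi2:.3f}, df={dof}, p={p:.3f}, V={cramers_v:.3f}")
# chi2 is approximately 0.35 and p approximately 0.55, matching the first row of Table 5.
```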

Table 4. Frequencies for each communication strategy

Communication Strategy Total
Giving Background Information 133
Inquiring to Understand Needs and Background Information 86
Confirming and Denying 79
Inquiring to Understand the Technology 40
Telling 37
Signaling 36
Stating Needs 36
Observing 16
Explaining How the Technology Works or How to Do Something with It 12
Inquiring to Check Comprehension 9
Speculating 7
Inquiring to Gain Permission 7
Judging the Technology 5
Showing How the Technology Works or How to Do Something with It 4
Planning 3
Total 510

Communication Strategies Across and Within Subjects

To explore how speaker type associated with the strategies, I found the frequencies of each strategy for each speaker. First, I wanted to explore the frequencies across subjects (i.e., across columns). Table 6 shows that of the 133 times both speakers employed “giving background information,” the users provided it 93.2% of the time. Table 6 also demonstrates that of the 86 times speakers employed “inquiring to understand needs or background information,” the technical support providers provided it 89.5% of the time. Also, Table 6 demonstrates that of the 79 times speakers employed “confirming or denying,” users provided it 63.3% of the time. Finally, of the 36 times “stating needs” appeared, users said it 97.2% of the time.

Analyzing the communication within subjects revealed further insight. Table 7 gives the same raw frequencies of each strategy, but it provides the percentage of each speaker’s own speech devoted to each strategy (within columns).
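
To make the difference between the two views concrete, the following Python sketch contrasts across-subjects (row) percentages with within-subjects (column) percentages using a few of the raw frequencies reported in Table 6; pandas is assumed to be available. Because only three strategies are included, the column percentages will not match Table 7 exactly.

```python
# A minimal sketch contrasting across-subjects and within-subjects percentages.
import pandas as pd

# Raw frequencies for three strategies, by speaker (from Table 6).
counts = pd.DataFrame(
    {"TS": [9, 77, 29], "U": [124, 9, 50]},
    index=[
        "Giving Background Information",
        "Inquiring to Understand Needs or Background Information",
        "Confirming or Denying",
    ],
)

# Across subjects: what share of each strategy came from each speaker (as in Table 6)?
across_subjects = counts.div(counts.sum(axis=1), axis=0) * 100

# Within subjects: what share of each speaker's own speech was each strategy (as in Table 7)?
within_subjects = counts.div(counts.sum(axis=0), axis=1) * 100

print(across_subjects.round(1))
print(within_subjects.round(1))
```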

Table 5. Frequency, percentages, expected frequencies, and chi-square test results for communication strategies between above and below average groups

Communication Strategy Above Avg. (%) EF Below Avg. (%) EF df χ² P Value Total
Giving Background Information 92 (69.2) 94.7 41 (30.8) 38.3 1 0.352 0.553 133
Inquiring to Understand Needs or Background Information 66 (76.7) 61.2 20 (23.3) 24.8 1 1.563 0.211 86
Confirming or Denying 55 (69.6) 56.2 24 (30.4) 22.8 1 0.110 0.740 79
Inquiring to Understand Technology 30 (75.0) 28.5 10 (25.0) 11.5 1 0.309 0.578 40
Telling 22 (59.5) 26.3 15 (40.5) 10.7 1 2.670 0.102 37
Signaling 22 (61.1) 25.6 14 (38.9) 10.4 1 1.913 0.167 36
Stating Needs 29 (80.6) 26.0 7 (19.4) 10.0 1 1.358 0.244 36
Observing 8 (50.0) 8 (50.0) 16
Explaining How the Technology Works or How to Do Something with It 11 (91.7) 1 (8.3) 12
Inquiring to Check Comprehension 6 (66.7) 3 (33.3) 9
Speculating 6 (85.7) 1 (14.3) 7
Inquiring to Gain Permission 7 (100.0) 0 (0.0) 7
Judging the Technology 4 (80.0) 1 (20.0) 5
Showing How the Technology Works or How to Do Something with It 3 (75.0) 1 (25.0) 4
Planning 2 (66.7) 1 (33.3) 3
Total 363 (71.2) 255.0 147 (28.8) 255.0 1 182.965 0.000 510

Table 6. Across-subjects analysis for each communication strategy and speaker

Communication Strategy TS (%) U (%) Total
Giving Background Information 9 (6.8) 124 (93.2) 133
Inquiring to Understand Needs and Background Information 77 (89.5) 9 (10.5) 86
Confirming and Denying 29 (36.7) 50 (63.3) 79
Inquiring to Understand the Technology 4 (10.0) 36 (90.0) 40
Telling 5 (13.5) 32 (86.5) 37
Signaling 29 (80.6) 7 (19.4) 36
Stating Needs 1 (2.8) 35 (97.2) 36
Observing 8 (50.0) 8 (50.0) 16
Explaining How the Technology Works or How to Do Something with It 9 (75.0) 3 (25.0) 12
Inquiring to Check Comprehension 2 (22.2) 7 (77.8) 9
Speculating 1 (14.3) 6 (85.7) 7
Inquiring to Gain Permission 1 (14.3) 6 (85.7) 7
Judging the Technology 1 (20.0) 4 (80.0) 5
Showing How the Technology Works or How to Do Something with It 3 (75.0) 1 (25.0) 4
Planning 1 (33.3) 2 (66.7) 3
Total 510

Table 7 shows that technical support providers employed “inquiring to understand needs or background information” in 42.8% of their own speech. Also, users employed “giving background information” in 37.6% of their own speech. Users devoted only 10.9% of their own speech to “inquiring to learn about the technology.”

Thus, both analyses suggest that the primary role of technical support providers during this stage may be to ask questions while users’ role may be to provide answers. Users gave more information than technical support providers, and technical support providers asked more questions than users, with the exception of “inquiring to understand the technology.” That users rarely asked questions, particularly about the technology, seems to indicate that they were more interested in sharing experiences about their technical problems than they were in learning about the technology or how to do something with it. I discuss this result and implications from it in a later section.

Table 7. Within-subjects analysis for each communication strategy and speaker

Communication Strategy TS (%) U (%)
Giving Background Information 9 (5.0) 124 (37.6)
Inquiring to Understand Needs and Background Information 77 (42.8) 9 (2.7)
Confirming and Denying 29 (16.1) 50 (15.2)
Inquiring to Understand the Technology 4 (2.2) 36 (10.9)
Telling 5 (2.8) 32 (9.7)
Signaling 29 (16.1) 7 (2.1)
Stating Needs 1 (0.6) 35 (10.6)
Observing 8 (4.4) 8 (2.4)
Explaining How the Technology Works or How to Do Something with It 9 (5.0) 3 (0.9)
Inquiring to Check Comprehension 2 (1.1) 7 (2.1)
Speculating 1 (0.6) 6 (1.8)
Inquiring to Gain Permission 1 (0.6) 6 (1.8)
Judging the Technology 1 (0.6) 4 (1.2)
Showing How the Technology Works or How to Do Something with It 3 (1.7) 1 (0.3)
Planning 1 (0.6) 2 (0.6)
Total 180 330

To determine the role of each speaker type in facilitating above and below average visits, I conducted a chi-square test of independence between the above and below average groups for each speaker. However, no significant differences resulted. When noting the percentages in each group (see Table 8), I saw that the percentages for each speaker for each strategy were relatively close. For example, users gave background information and technical support providers inquired about information at roughly the same percentage in both groups. The chi-square test results (not provided in Table 8) confirmed this observation. This exploratory analysis thus indicated that no specific strategy associated with satisfactory outcomes, a finding I discuss in a later section.

Table 8. Comparison of above average and below average groups between speakers

Communication Strategy Above Average Below Average
TS (%) U (%) Total TS (%) U (%) Total
Giving Background Information 6 (6.5) 86 (93.5) 92 3 (7.3) 38 (92.7) 41
Inquiring to Understand Needs or Background Information 57 (86.4) 9 (13.6) 66 20 (100.0) 0 (0.0) 20
Confirming or Denying 20 (36.4) 35 (63.6) 55 9 (37.5) 15 (62.5) 24
Inquiring to Understand the Technology 3 (10.0) 27 (90.0) 30 1 (10.0) 9 (90.0) 10
Stating Needs 1 (3.4) 28 (96.6) 29 0 (0.0) 7 (100.0) 7
Signaling 16 (72.7) 6 (27.3) 22 13 (92.9) 1 (7.1) 14
Telling 4 (18.2) 18 (81.8) 22 1 (6.7) 14 (93.3) 15
Explaining How the Technology Works or How to Do Something 8 (72.7) 3 (27.3) 11 1 (100.0) 0 (0.0) 1
Observing 4 (50.0) 4 (50.0) 8 4 (50.0) 4 (50.0) 8
Inquiring to Gain Permission 1 (14.3) 6 (85.7) 7 0 (0.0) 0 (0.0) 0
Speculating 1 (16.7) 5 (83.3) 6 0 (0.0) 1 (100.0) 1
Inquiring to Check Comprehension 1 (16.7) 5 (83.3) 6 1 (33.3) 2 (66.7) 3
Judging the Technology 1 (25.0) 3 (75.0) 4 0 (0.0) 1 (100.0) 1
Showing How the Technology Works or How to Do Something 3 (100.0) 0 (0.0) 3 0 (0.0) 1 (100.0) 1
Planning 0 (0.0) 2 (100.0) 2 1 (100.0) 0 (0.0) 1
Total 126 (34.7) 237 (65.3) 363 54 (36.7) 93 (63.3) 147

Example Communication Strategies

To better understand the communication strategies employed during the diagnosing process, the following results provide a qualitative picture of the exchanges between the two participants. I portray the exchanges that appeared most frequently, especially technical support provider (TS) inquiries and user (U) responses. This presentation describes how the technical support providers served as the listeners and the users served as the reporters.

Technical support provider inquiries

The most common strategy for technical support providers was inquiring to understand needs or background information (77 times). Technical support providers inquired to gain contextual information about the users’ needs so that they could adequately define the problem. Implied in the name of the strategy is that speakers sought two types of information: needs or background information.

To gain the background information, they inquired about where the problem might be located so that they could have access to the problem. In one example, TS4 asked U20, “Could you let me know the [name of the] student who dropped your course?” TS4 did this while searching a roster of students, clearly looking to find that student so that she could suspend that student from the course website. In another instance, TS7 asked U2, “‘Grades’?” while navigating to the place where U2 stated she was having a problem marking course attendance as extra credit. TS7 appeared to ask if that was the place where she should check.

Technical support providers also inquired about users’ experiences with the technology. For example, TS4 asked U20 to confirm her experience with the textbox in the Moodle grading system: “So you said that this entire editor box gets bigger?” This information likely helped TS4 to ensure that she and U20 had similar frames of understanding about the user’s experience. Likewise, TS8 confirmed that U2’s message never went to her students’ email inboxes when she used the announcement module in Moodle: “It didn’t go out at all? At all?” These kinds of inquiries helped the technical support providers to ensure that they understood the problems by giving the problems more definition and by providing users the opportunity to help define them further through their responses to the inquiries about experiences.

Technical support providers also inquired about users’ previous actions. This information, TS2 told me, helped him to walk through potential causes of the problem, whether users or the system caused the problem. For example, TS2 asked U23 how she created a quiz that was not behaving the way she wanted: “And did you duplicate the quiz when you created the new one or did you just create a new one from scratch?” This inquiry seemed to help TS2 to determine if U23’s previous actions caused the problem or not. In another case, TS4 wanted to ensure that a student had dropped a course before removing that student from the course website: “So they dropped your course- classes? They dropped it?” This information helped TS4, she told me, to ensure that she followed protocol when handling a common request from instructors (to remove students from the course website when they dropped the course).

As mentioned previously, the strategy of inquiring to understand needs or background information implies two types of information that technical support providers sought: background information and needs. Technical support providers inquired about the specifications of users’ needs. In one example, TS2 helped U11 create a lesson module for U11’s course website. As TS2 set it up for U11, TS2 inquired about the specifications U11 wanted for the lesson’s behavior. For example, “So you want them to keep doing it until they get it right?” Here TS2 asked if U11 wanted her students to have the ability to retry tests until the students got the correct answers. Also, TS2 asked, “So you want it to display the question- again- but have it be check or-?” These examples demonstrate technical support providers’ careful tendency to understand what users wanted the technology to look like or do for them.

In other cases, technical support providers inquired to understand needs in general. As may be expected, a conventional question was “what can I help with” or a variation on it such as, “So what is your question?”

User reports

The most frequent strategy that users employed was giving background information (124 times). I discuss two basic categories related to this strategy: (1) describing previous actions taken and (2) describing experience with the problem.

When describing previous actions taken, users often shared stories or narratives. For example, U32 began the defining stage with a long narrative about previous actions she and others had taken:

U32: when I had, um, someone from [team name], it was [TS8]. He came to my classroom and showed my students how to set up their e-portfolios, and he used mine as an example. It was the one that I had set up during a training session for [course name] students. So I had like some of my own things, and then [TS8] was like “you don’t want this visible to your students, you need to have your e-portfolio for [course name] separate from like whatever it was that I had it.” And he said he could help me if I came here, which I then completely ignored until now because I’ve actually got an assignment due, of course this week. So of course now I’m coming in to ask for help.

The information served to springboard the conversation into the defining stage. In other instances, users shared previous attempts they had taken in solving the problem. When trying to explain more about a problem with an assignment module’s cut-off settings, U2 stated what actions she had taken to solve the problem:

U2: So there, the “allow submission from” and “due date” and “cut-off date.” So I just disabled those because they were causing problems.

She also accounted for her attempt at solving it, specifically, what settings she was using (“due date” and “cut-off date”). These examples demonstrate how the users would describe the actions they took in relationship to the problem to give more context to the problem.

When describing experiences with the problem, for example, U20 shared her experience typing feedback to students:

U20: I’ll be typing and all of a sudden it will get bigger. And I don’t know how to make it- like this part [pointing to U20’s personal computer] this whole thing will get bigger-

U40 also shared her students’ experiences, in this case, when they uploaded images to the ePortfolio system:

U40: So, my students are creating a new page. And they want to add an image. The problem that we’ve had is that message “too large won’t accept” comes up.

These descriptions of the problem provided appropriate information about the users’ experiences with the technology. In some cases, they occurred when the problem was first introduced, or in other cases, they occurred as follow-up descriptions in response to questions from the technical support providers.

User confirmations or denials

When asked about needs or background information, users also responded minimally with short confirmations or denials (yes-no responses) to technical support providers. Users employed minimal responses in reply (1) to inquiries about specific needs, (2) to inquiries about previous actions, (3) to inquiries about experiences with the problem, and (4) to inquiries about the location of the problem.

First, users gave minimal responses to inquiries about specific needs. For example, TS7 opened the defining stage, presumably right after the recording began, with an inquiry about a specific need:

TS7: O.K., so you want to look at the gradebook [1 second] and if we can make something extra credit, right?

U20: Yep.

In other exchanges, users gave minimal responses to inquiries about previous actions. In many of these instances, the assumption underlying the inquiry was about an action that the users may have done in setting up their courses. For example, U2 wanted to make an assignment to be extra credit like another assignment she already put together. To this, TS4 asked:

TS2: Oh, this is extra credit?

U2: Yeah.

In this exchange, TS2 inquired about an action that U2 had completed (making the other assignment extra credit). The underlying assumption behind the question appears to be, “You made this extra credit?” positioning the question as an inquiry about previous actions.

In other instances, users provided minimal responses to inquiries about experiences with the problem. For example, TS2 wanted to confirm what U14 shared about students’ not seeing an assignment on the course website:

TS2: They’re not showing up clearly?

U14: Yeah.

Based on these examples, it seems that minimal responses from users appeared after technical support providers inquired to confirm something that the users already shared about their experiences. Indeed, to phrase these inquiries, technical support providers needed prior knowledge about that experience to make the question closed-ended and thus amenable to a yes-no response, a result I discuss in the next section.

Lastly, users employed minimal responses to inquiries to understand where the problem was located. For example, U2 wanted to designate one assignment extra credit the way she had done with another assignment. TS4 wanted to confirm which assignment U2 wanted to make extra credit:

TS4: This one for participation?

U2: Yeah.

Similarly, when helping U20 to remove a student from her course website, TS4 scrolled through the roster to find the student that U20 wanted to remove. After U20 gave the name, TS4 asked:

TS4: This one?

U20: Mm-hmm.

Ultimately, these examples demonstrate how users employed these minimal responses to inform technical support providers when users were asked follow-up questions.

User elaborations

To better understand the nature of users’ responses to technical support providers’ inquiries, I looked for connections between users’ response types. One such connection was “confirming or denying” followed by “giving background information” from the same user, suggesting that the minimal response was not always the user’s final move in a given exchange.

When users expanded on their own confirmations or denials, they went beyond the closed-ended inquiry posed to them. For example, TS7 posed a closed-ended question seeking background information to which U41 responded with the appropriate closed-ended response. U41 then expanded on that response:

TS7: Uh, we’re working on your portfolio? Is that right?

U41: Yeah. Because I got an email- that said that my student profile is ready to be set up.

TS8 and U23 shared a similar exchange, as U23 described her problem with a duplicated quiz:

TS8: So, all the questions in here?

U23: Yeah, so these are questions from the next topic.

TS8: O.K.

U23: All I did was create the new quiz.

These exchange patterns indicate how users first confirmed or denied technical support providers’ inquiries about background information, but they often moved beyond their initial responses to provide additional information, a finding I discuss in the next section.

Discussion

The results confirm current research and anecdotal intuition about the strategies at work in technical support diagnosing processes, but these results are specific and systematic. They show how technical support providers assumed a role as listeners and users assumed a role as reporters, findings that can be explored further.

When comparing the frequencies of the strategies between the above and below average groups, I found no significant difference between them when analyzing strategies individually. However, I found that a greater overall amount of strategic communication was strongly associated with the above average group. Given that previous research, especially in management, found that the quality of communication deserves exploration, this finding signals that the quantity of communication, especially the strategic communication explored in this study, may also associate with satisfactory conversations. Perhaps users felt they were given the appropriate opportunity to express their experiences and did not feel rushed through the diagnosis component. Or perhaps they felt they understood more completely what the problem was because they had more time to consider it. This result implies that both speakers’ freedom, and especially the users’ freedom, to communicate about the problem can possibly yield more satisfactory outcomes. Thus, two initial hypotheses for future research are the following:

H1: Greater quantities of strategic communication in the defining stage of a technical support conversation can yield higher user satisfaction.

H2: Greater quantities of strategic communication contributed by users in the defining stage of a technical support conversation can yield higher user satisfaction.

Second, the within- and across-subjects analyses indicated how technical support providers inquired more than users did and how users gave information more than technical support providers did. These results illustrate Agar’s (1985) theory that the process of diagnosing the problem involves the clients’ and institutional representatives’ aligning their understanding. This study thus provides a framework for classifying strategies that technical support providers can employ.

Furthermore, users communicated this contextual information, as Steehouder (2007) points out, through narratives or sequences of events, or what he calls “historical reports” (p. 3). This study illustrates how users share this information, and it provides a framework for considering users’ communication as they respond to inquiries. Notably, it also finds that users did not make inquiries as much as they gave information. They appeared to focus on sharing their experiences. Perhaps users generally come for help with undesirable experiences they want resolved rather than with a desire to learn something new. Three hypotheses for future research may be the following:

H3: More instances of technical support providers inquiring of users in the defining stage can lead to higher user satisfaction.

H4: More instances of users giving information in the defining stage can lead to higher user satisfaction.

H5: Users more often seek help to resolve undesirable technical experiences than to learn something about the technology.

Further, hypotheses related to each type of strategy in this study may be tested for their relationships to user satisfaction, especially in a larger data sample.
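To illustrate how such hypothesis testing might proceed, the following sketch (in Python, using hypothetical counts rather than this study’s data) shows one way a researcher could test whether the volume of strategic communication in the defining stage is associated with above- or below-average satisfaction, using the kind of exact test of association reviewed by Bewick, Cheek, and Ball (2004).

```python
# A minimal sketch with hypothetical counts -- not this study's actual data.
# Rows: visits with above-average vs. below-average satisfaction.
# Columns: visits with many vs. few strategy instances in the defining stage
# (split, for example, at the median strategy count).
from scipy.stats import fisher_exact

contingency = [
    [7, 2],  # above-average satisfaction: many strategies, few strategies
    [3, 6],  # below-average satisfaction: many strategies, few strategies
]

odds_ratio, p_value = fisher_exact(contingency, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# A small p-value would support H1; with samples as small as those typical of
# help desk studies, an exact test is preferable to a chi-square approximation.
```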

Conclusion

This study provided insight into the communication strategies each speaker used in technical support conversations, but more data would reveal more about these conversations and yield more representative results to explain how the communication strategies promote satisfaction. Also, the coding scheme should be refined further and potentially split to form more precise codes. For example, inquiring to understand needs and background information implies a logical split between two sorts of inquiries that could be finessed into two reliable codes.

The study took place in a university setting, which likely influenced the findings. However, the results provide implications for industry practice. Help desk visits in both higher education and industry settings require the same customer service quality features described by foundational customer service studies, such as those conducted by Parasuraman, Zeithaml, and Berry (1988) and, more recently, Burgers, de Ruyter, Keen, and Streukens (2000). When help desks are not available (and perhaps only documentation is), user satisfaction decreases. Islam (2014) found, for instance, that instructors in a university expressed dissatisfaction with an educational technology system when they felt a lack of the technical support that would otherwise help them do their jobs (p. 255). Nawaz and Kahn (2012) stated that investing in a help facility strengthens higher education goals because such investments serve the main users of the systems: teachers and students (p. 42). Therefore, administrators of businesses concerned for their employees’ needs would agree with university administrators that devoting attention to the support employees receive in using a key organizational technology furthers the goals of their organizations. Thus, this study’s findings, while certainly influenced by the research setting, apply reasonably to other workplaces because they serve the business goal of providing useful help communication to key stakeholders (i.e., employees).

Implications for Practice

The study has implications for technical support practice.

First, practitioners should phrase inquiries according to the specific user information they need: (1) to understand users’ needs, (2) to understand users’ experiences, (3) to understand users’ previous actions, and (4) to understand where users experienced the problem. I presented examples of such inquiries, and practitioners could draw from them to formulate training materials with inquiry types that can yield the information they may need from users. The conscious effort to draw from categories of inquiries could also yield more efficient conversations, especially because efficiency remains one of the metrics managerial studies use to evaluate technical support conversations.

Second, practitioners should listen for the answers associated with each type of inquiry, conscious of which type they employed. Users seem prone to give various types of information in response to these inquiries, information that may not match the inquiry presented to them. Practitioners should follow up accordingly to align users’ reports, minimal responses, and elaborations with the information they require to diagnose the problem. This careful communication may be all the more important in contexts in which the two speakers cannot see one another to respond to nonverbal cues, as in voice-to-voice technical support.

Third, practitioners should use open-ended questions to avoid miscommunication and to avoid spending time repairing misunderstanding. In this study’s excerpts, users elaborated on their initial minimal responses, but they may not always have done so, requiring the technical support provider to pause or seek additional information. Technical support providers run the risk of miscommunication if they continually use yes-or-no questions to understand users’ experiences, or they must spend time repairing this miscommunication by asking more follow-up questions. The need to follow up so frequently may frustrate users in person, and perhaps even more so in spaces such as chat systems, which are also a contemporary mode for technical support. Thus, open-ended questioning could mitigate frustrations in these newer media spaces.

Fourth, practitioners should consider examining the visits within their own organizations using procedures similar to those outlined in this article. Although user satisfaction scores give insight about more effective technical support providers, organizations would benefit from knowing which communication strategies can help technical support providers reach that outcome. For example, insight from a few highly rated visits compared with a few poorly rated visits could benefit the help desk center by showcasing the role communication played in a visit’s success.
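As a rough illustration only, and not a procedure this study prescribes, a help desk team with coded transcripts could tally strategy instances per visit and compare a few highly rated visits with a few poorly rated ones. The visit records and code labels below are hypothetical.

```python
from collections import Counter

# Hypothetical coded visits: each visit lists the strategy codes observed in
# its defining stage plus the user's satisfaction rating (1 = most satisfied).
visits = [
    {"id": "V01", "rating": 1, "codes": ["inquire_need", "report", "inquire_action", "report"]},
    {"id": "V02", "rating": 2, "codes": ["inquire_need", "report"]},
    {"id": "V07", "rating": 5, "codes": ["inquire_need", "minimal_response"]},
    {"id": "V09", "rating": 6, "codes": ["minimal_response"]},
]

def summarize(group):
    """Return the total number of strategy instances and per-code counts."""
    counts = Counter(code for visit in group for code in visit["codes"])
    return sum(counts.values()), counts

high = [v for v in visits if v["rating"] <= 2]  # highly rated visits
low = [v for v in visits if v["rating"] >= 5]   # poorly rated visits

for label, group in (("highly rated", high), ("poorly rated", low)):
    total, counts = summarize(group)
    print(f"{label}: {total} strategy instances, breakdown = {dict(counts)}")
```

Even a small comparison like this can show a team where diagnostic dialogue is thin in its least successful visits.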

Lastly, practitioners should promote more conversation overall, and especially more user speech, when diagnosing the technical problem because users may then feel they are receiving an appropriate opportunity to describe their experiences. Further, practitioners may find it necessary to ensure that users, not only the practitioners themselves, understand the problem.

Implications for Research

These inquiries and their corresponding responses have been systematically and reliably identified for other researchers to employ, should they wish, in their own studies of communication in these conversations and to help develop and test hypotheses about the relationship between particular strategies and satisfactory outcomes.

For example, future research should build on this coding scheme, which has gone through reliability testing, in other contexts. A study of other technical support centers may reveal other strategies and, perhaps more importantly, a wider variety of satisfaction levels. Researchers might also answer context-related questions, such as how the strategies differ in international settings where technical support has been offshored.
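Because the value of the coding scheme to other researchers rests on reliability testing, the following minimal sketch shows one common way to check intercoder agreement on strategy codes, Cohen’s kappa, computed over hypothetical labels from two coders; the reliability procedure used in this study may differ.

```python
from collections import Counter

# Hypothetical parallel codings of the same utterances by two coders.
coder_a = ["inquire_need", "report", "report", "minimal_response", "inquire_action", "report"]
coder_b = ["inquire_need", "report", "minimal_response", "minimal_response", "inquire_action", "report"]

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    freq_a, freq_b = Counter(a), Counter(b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")
# Values near 1 indicate strong agreement; content-analysis methodology
# (e.g., Rourke, Anderson, Garrison, & Archer, 2000) discusses acceptable thresholds.
```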

Future research could also explore and confirm the usefulness of this coding scheme for technical support call centers with voice-to-voice conversations. Although these findings may apply to synchronous chat spaces, what differences might exist in the strategies used in such electronic spaces? And though asynchronous support spaces such as Twitter and discussion forums have received some attention in technical communication research (Lam & Hannah, 2016; Swarts, 2014), future research could further explore the strategies in these venues or in similar venues such as email communication.

Ultimately, this study provided a picture of what it looks like when technical support providers and users diagnose technical problems, providing insight for technical support practice and for understanding the needs and motivations of users when they seek technical help. Although the results presented here would seem intuitive, applied research helps confirm or challenge experiences that may be based merely on anecdote or culture. This study explored and presented these experiences systematically, and it invites new ways to explore them across contexts, an endeavor important in technical communication research, which aims to improve communication with users, customers, businesses, and employees.

References

Agar, M. (1985). Institutional discourse. Text & Talk: Interdisciplinary Journal of Language, Discourse, & Communication Studies, 5(3), 147–168. doi:10.1515/text.1.1985.5.3.147

Armistead, C., Kiely, J., Hole, L., & Prescott, J. (2002). An exploration of managerial issues in call centres. Managing Service Quality: An International Journal, 12(4), 246–256. doi:10.1108/09604520210434857

Baker, C., Emmison, M., & Firth, A. (2000). Calibrating for competence in calls to technical support. In C. Baker, M. Emmison, & A. Firth (Eds.), Calling for help: Language and social interaction in telephone helplines (pp. 39–62). Amsterdam, NL: John Benjamins Publishing Company.

Barley, S. R. (1996). Technicians in the workplace: Ethnographic evidence for bringing work into organizational studies. Administrative Science Quarterly, 41(3), 404–441. doi:10.2307/2393937

Bearden, W. O., Malhotra, M. K., & Uscategui, K. H. (1998). Customer contact and the evaluation of service experiences: Propositions and implications for the design of research. Psychology & Marketing, 15(8), 793–809. doi:10.1002/(SICI)1520-6793(199812)15:83.0.CO;2-0

Beldad, A., & Steehouder, M. (2015). Not that button but the other: Misunderstanding and non-understanding in helpdesk encounters involving nonnative English speakers. Technical Communication, 59, 179–194.

Bell, S. J., Auh, S., & Smalley, K. (2005). Customer relationship dynamics: Service quality and customer loyalty in the context of varying levels of customer expertise and switching costs. Journal of the Academy of Marketing Science, 33(2), 169–183. doi:10.1177/0092070304269111

Bettencourt, L. A., & Brown, S. W. (1997). Contact employees: Relationships among workplace fairness, job satisfaction and prosocial service behaviors. Journal of Retailing, 73(1), 39–61. doi:10.1016/S0022-4359(97)90014-2

Bewick, V., Cheek, L., & Ball, J. (2004). Statistics review 8: Qualitative data—tests of association. Crit Care, 8(1), 46–53. doi:10.1186/cc2428

Bordoloi, S. K. (2004). Agent recruitment planning in knowledge-intensive call centers. Journal of Service Research, 6(4), 309–323.

Brown, G., & Yule, G. (1983). Discourse analysis. Cambridge, UK: Cambridge University Press.

Burgers, A., de Ruyter, K., Keen, C., & Streukens, S. (2000). Customer expectation dimensions of voice-to-voice service encounters: A scale-development study. International Journal of Service Industry Management, 11(2), 142–161. doi:10.1108/09564230010323642

Callaghan, G., & Thompson, P. (2002). ‘We recruit attitude:’ The selection and shaping of routine call centre labour. Journal of Management Studies, 39(2), 233–254. doi:10.1111/1467-6486.00290

Charney, D. (1996). Empiricism is not a four-letter word. College Composition and Communication, 47(4), 567–593.

Clark, C. M., Murfett, U. M., Rogers, P. S., & Ang, S. (2012). Is empathy effective for customer service? Evidence from call center interactions. Journal of Business and Technical Communication, 27, 123–153. doi:10.1177/1050651912468887

D’Cruz, P., & Noronha, E. (2008). Doing emotional labour: The experiences of Indian call centre agents. Global Business Review, 9(1), 131–147. doi: 10.1177/097215090700900109

Dormann, C., & Zijlstra, F. R. H. (2003). Call centres: High on technology—high on emotions. European Journal of Work and Organizational Psychology, 12(4), 305–392. doi:10.1080/13594320344000219

Frei, R. L., & McDaniel, M. A. (1998). Validity of customer-service measures in personnel selection: A review of criterion and construct evidence. Human Performance 11(1), 1–27. doi:10.1207/s15327043hup1101_1

Geisler, C. (2018). Coding for complexity: The interplay among methodological commitments, tools, and workflow in writing research. Written Communication, 35(2), 215–249. doi:10.1177/0741088317748590

Hall, J. A., Verghis, P., Stockton, W., & Goh, J. X. (2014). It takes just 120 seconds: Predicting satisfaction in technical support calls. Psychology & Marketing, 31(7), 501–508. doi:10.1002/mar.20711

Islam, A. K. M. N. (2014). Sources of satisfaction and dissatisfaction with a learning management system in a post-adoption stage: A critical incident technique approach. Computers in Human Behavior, 30, 249–261. doi:10.1016/j.chb.2013.09.010

Kelly, K. (2014). Coding miscommunication: A method for capturing the vagaries of language. IEEE Professional Communication Conference Proceedings 2014. doi:10.1109/IPCC.2014.7020377

Lam, C., & Hannah, M. A. (2016). The social help desk: Examining how twitter is used as a technical support tool. Communication Design Quarterly, 4(2), 37–51. doi:10.1145/3068698.3068702

Lee, Y., Hsieh, Y., & Ma, C. (2011). A model of organizational employees’ e-learning systems acceptance. Knowledge-Based Systems, 24(3), 355–366. doi:10.1016/j.knosys.2010.09.005

Mackiewicz, J. (2017). The aboutness of writing center talk: A corpus-driven and discourse analysis. New York, NY: Routledge.

Mackiewicz, J., & Thompson, I. K. (2015). Talk about writing: The tutoring strategies of experienced writing center tutors. New York, NY: Routledge.

Mackiewicz, J., & Thompson, I. K. (2014). Instruction, cognitive scaffolding, and motivational scaffolding in writing center tutoring. Composition Studies, 42(1), 54–78. Retrieved from https://www.uc.edu/journals/composition-studies/issues/archives/spring2014-42-1.html

Mount, M. K., Barrick, M. R., & Stewart, G. L. (1998). Five-factor model of personality and performance in jobs involving interpersonal interactions. Human Performance, 11(2/3), 145–165. doi:10.1080/08959285.1998.9668029

Nguyen, M. G., Walsh, G., & Hennig-Thurau, T. (2014). The impact of service scripts on customer citizenship behavior and the moderating role of employee customer orientation. Psychology and Marketing, 31(12), 1196–1209. doi:10.1002/mar.20756

Nielsen, J. (2000, March 19). Why you only need to test 5 users [Blog post]. Nielsen Norman Group. Retrieved from https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of the INTERACT ’93 and CHI ’93 Conference on Human Factors in Computing Systems, 206–213. doi:10.1145/169059.169166

Paltridge, B. (2012). Discourse analysis: An introduction. 2nd edition. New York, NY: Bloomsbury.

Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1988). SERVQUAL: A multi-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12–40. Retrieved from http://psycnet.apa.org/psycinfo/1989-10632-001

Pentland, B. T. (1992). Organizing moves in software support hot lines. Administrative Science Quarterly, 37(4), 527–548. doi:10.2307/2393471

Pontes, M. C. F., & Kelly, C. O. (2000). The identification of inbound call center agents’ competencies that are related to callers’ repurchase intentions. Journal of Interactive Marketing, 14(3), 41–49. doi:10.1002/1520-6653(200022)14:3<41::AID-DIR3>3.0.CO;2-M

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2000). Methodological issues in the content analysis of computer conference transcripts. International Journal of Artificial Intelligence in Education, 11, 1–16. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.489.3301

Searle, J. (1976). The classification of illocutionary acts. Language in Society, 5(1), 1–24. doi: 10.1017/S0047404500006837

Steehouder, M., & Hartman, D. (2003). How can I help you? User instruction in telephone calls. IEEE Professional Communication Conference Proceedings 2003, 158–165. doi:10.1109/IPCC.2003.1245484

Steehouder, M. (2007). How helpdesk agents help clients. IEEE Professional Communication Conference Proceedings 2007, 1–9. doi:10.1109/IPCC.2007.4464071

Swarts, J. (2014). Help is in the helping: An evaluation of help documentation in a networked age. Technical Communication Quarterly, 24, 164–187. doi:10.1080/10572252.2015.1001298

van Velsen, L. S., Steehouder, M. F., & de Jong, M. D. T. (2007). Evaluation of user support: Factors that affect user satisfaction with helpdesks and helplines. IEEE Transactions on Professional Communication, 50, 219–231. doi:10.1109/TPC.2007.902660

Walker, C. P., & Elias, D. (1987). Writing conference talk: Factors associated with high and low-rated writing conferences. Research in the Teaching of English, 21(3), 266–285.

Xu, X., Wang, Y., Forey, G., & Li, L. (2010). Analyzing the genre structure of Chinese call-center communication. Journal of Business and Technical Communication, 24, 445–475. doi: 10.1177/1050651910371198

About the Author

Vincent D. Robles is an assistant professor of technical communication in the Department of Technical Communication, University of North Texas, Denton, TX, USA, where he teaches courses in technical editing, grant and proposal writing, and visual communication. His research interests include content and information development, technical marketing communication, and technical and professional communication pedagogy. He has articles forthcoming in Technical Communication Quarterly, IEEE Transactions on Professional Communication, and Business and Professional Communication Quarterly.

Manuscript received 28 February 2018; revised 30 April 2018; accepted 11 August 2018.

Appendix A: Post Session Survey

User Post-Session Survey

Name:_____________________________________

Date: _____________________________________

Please indicate how much you agree with the following statements about the support session you just had.

  1. The technical support person answered different question(s) or complaint(s) I had with little difficulty.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  2. The technical support person adapted to every situation that occurred during the session.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  3. The technical support person took my knowledge into account when helping solve the problem(s).
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  4. The technical support person remained calm and friendly no matter how I was feeling.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  5. The technical support person helped define specifically the problem(s).
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  6. The technical support person was able to help with each and every problem in a timely way.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  7. The technical support person clearly and thoroughly explained each and every step he or she took when solving the problem(s).
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  8. The technical support person clearly and thoroughly explained solutions or recommendations.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  9. The technical support person was able to imagine what I was going through with my problem(s).
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  10. The technical support person treated me uniquely from other users.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  11. The technical support person treated my problem(s) as important.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  12. The technical support person had the necessary authority to solve my problem(s).
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree
  13. The technical support person will have to follow up with me to help me with the problem(s) because he or she needs to seek permission or help.
  1. Strongly Agree
  2. Agree
  3. Agree Somewhat
  4. Disagree Somewhat
  5. Disagree
  6. Strongly Disagree