Technical Communication, 61(2), May 2014

Developing and Testing a Video Tutorial for Software Training

Hans van der Meij

Abstract

Purpose: Video tutorials for software training are rapidly becoming popular. A set of dedicated guidelines for the construction of such tutorials was recently advanced in Technical Communication (Van der Meij & Van der Meij, 2013). The present study set out to assess the cognitive and motivational effects of a video tutorial based on these guidelines.

Method: Participants were 65 students (mean age 12.0 years) from elementary school. The procedure was as follows. First, students completed a pre-test. Next, they viewed videos and completed practice tasks. Finally, students completed a post-test and retention-test.

Results: The pre-test revealed low scores on task relevance, and low initial task performance. During training, students reported positive mood states, high flow, and significantly higher task relevance than in pre-testing. Task performance rose significantly during training, and was also substantially higher on the post-test and retention-test than in pre-testing. Only cognitive factors significantly predicted task performance.

Conclusion: The effectiveness of the video tutorial attests to the quality of the design guidelines on which it was based. The critical contribution of specific guidelines is a potential area for further research.

Keywords: video instructions; job-aid; learning; motivation; cognition; tutorial

Practitioner’s Takeaway

  • This study assessed the cognitive and motivational effects of a video tutorial for software training based on the set of design guidelines from Van der Meij and Van der Meij (2013).
  • The tutorial significantly increased user motivation.
  • Task performance with video access (serving as job aid) was excellent (86% correct).
  • Task performance without video access (learning) was satisfactory (68% correct immediately after training; 66% correct one week later).
  • Only cognitive factors accounted for task performance.

Introduction

With the video tutorial quickly becoming a prevalent format for software training, it is important to understand how people process videos and how the design of a video tutorial can best accommodate these processes.

Fundamental insights about video processing can be gleaned from Mayer’s (2005a) Cognitive Theory of Multimedia (CTM). This model includes four assumptions about multimedia information processing. One, there are different sensory channels that are relatively independent of one another. For video productions that often feature a combination of words and pictures, the processing mainly involves the user’s auditory and visual channels. Two, working memory is limited in capacity and duration. Only a few pieces of information can be attended to at any one time. In contrast, long-term memory has virtually unlimited capacity. Three, learning can be enhanced with dual coding. Presenting information in both words and pictures can strengthen its impact in each modality and facilitate comprehension. Four, the user should actively process the information. The user should engage in processes such as information selection, organization, and integration with existing knowledge to achieve meaningful learning.

CTM has been criticized for its lack of attention to individual differences, metacognition, and motivation. A popular model that has extended CTM with these factors is Moreno’s (2006, 2009) Cognitive-Affective Theory of Learning with Media (CATLM). In CATLM, Moreno argues that individual differences such as prior knowledge and cognitive style can affect how people process and what they learn from multimedia. CATLM further includes the idea that learning is mediated by metacognition. Metacognition refers to an awareness and analysis of a person’s own learning or thinking. Metacognitive processes such as planning and monitoring are assumed to play a regulatory role in learning (compare Hagemans, Van der Meij, & De Jong, 2013). Finally, an assumption in CATLM is that initial motivation and motivational mediators influence learning. Pertinent initial motivational constructs are perceptions of task relevance and self-efficacy beliefs (Eccles & Wigfield, 2002; Pintrich & Schunk, 2002). Important motivational mediators are mood states and flow (Vollmeyer & Rheinberg, 2006).

The insights from CTM and CATLM have formed the foundation for a large set of abstract principles for the construction of multimedia. For instance, CTM has led to the multimedia principle, which holds that words and pictures are more conducive to learning than words or pictures alone. Similarly, CATLM has given rise to the personalization principle, according to which people learn better when a message is delivered in a conversational style where written or spoken explanations are given in the first or second person.

Agrawala, Li, and Berthouzoz (2011) have argued that practitioners sometimes need more concrete principles to guide their designs. In addition, they indicate that the domain and intended outcome may require a specific instantiation of a principle or perhaps even the introduction of a new one. In reaction to this ‘call,’ Van der Meij and Van der Meij (2013) recently summarized the literature and proposed a set of eight guidelines for the construction of a video tutorial for software training. Like CTM and CATLM, these guidelines address both cognitive and motivational design issues. In addition, the guidelines incorporate domain-specific insights on software training, and are assumed to be concrete enough for practitioners to follow.

The present study examines the effectiveness of a video tutorial that was designed according to these guidelines. It measures user cognition and motivation before, during, and after training (compare Leutner, 2014; Park, Plass, & Brünken, 2014). More specifically, it reports on the changes in the absolute level of the user’s task performance and motivation (that is, task relevance, self-efficacy, mood states, and flow). In addition, the study explores the relationships among the various cognitive and motivational measures. Before discussing the empirical study, we briefly describe the eight guidelines.

Guidelines for the Construction of a Video Tutorial for Software Training

The primary function of many instructional videos on software programs is to enable task performance. Such videos serve solely as a job aid; they should assist the user in achieving a software task. In general, this means that the video gives instructions for completing regular tasks. Occasionally, there may be instructions for problem-solving, akin to the FAQ section on a Web site.

The construction of a video tutorial presents an additional challenge. Not only must the video facilitate task performance, it should also support task learning. The user should come to know the trained tasks well enough to be able to handle a broad range of related software tasks (compare Brunyé, Taylor, Rapp, & Spiro, 2006). In other words, for a video tutorial, being a job aid is necessary but not sufficient. The video also needs to enhance learning.

The guidelines proposed by Van der Meij and Van der Meij (2013) address both serving as a job aid and enhancing learning. In short, they should support designers in constructing a video tutorial. Figure 1 summarizes the guidelines. The authors discuss the guidelines’ theoretical and empirical foundations in considerable detail in their paper. The discussion below concentrates on the domain-specific character of the guidelines.

[Figure 1. Overview of the eight guidelines]

Guideline 1: Provide easy access. The first criterion for video tutorials to satisfy is that they should be accessible (Novick & Ward, 2006; Roshier, Foster, & Jones, 2011). Facilities such as YouTube enable user access through indexed keyword searches. After a video is selected, the Web site also presents an array of related videos that may be of interest to the user. Software companies that offer instructional video for their products sometimes do likewise (for example, Microsoft), but they may also present a table of contents as the main port of entry (for example, TechSmith). The titles in such a table of contents should succinctly describe the task that is demonstrated and should be understandable by the novice to increase accessibility (see Farkas, 1999; Van der Meij & Gellevij, 2004).

Guideline 2: Use animation with narration. This guideline resonates with the multimedia principle (Mayer, 2005a). It also fits with the modality principle, according to which learning is enhanced when words are presented as narration rather than as on-screen text (Mayer, 2001, 2005b). For a video tutorial for software training, the specific recommendation is to use a recorded demonstration (Plaisant & Shneiderman, 2005).

Guideline 3: Enable functional interactivity. The ongoing stream of information in a video constantly challenges the user to decide which information to encode, process, and store (Lang, 2000). Because this can be very taxing, special attention must be paid to the communicative properties of the video and to the affordances for user control (compare Moreno, 2006). The designer can contribute to functional interactivity by carefully considering the system-based pacing of the video. The fleetingness of video, and the attendant risk that information goes unperceived or uncomprehended, can also be partly overcome by giving the user control over the playing of the video. This type of affordance is also an important means for enhancing user motivation (Keller, 2010).

Guideline 4: Preview the task. It is not uncommon for video instructions to be preceded by a preview in the form of a tour of the main screen components (Plaisant & Shneiderman, 2005). The preview should introduce the critical vocabulary for concepts and objects, and orient the user to the main goal of the task. Research on event cognition offers support for this guideline (Zacks & Tversky, 2003). An important sub-goal of the preview is motivational. The preview should tell the user what a procedure can achieve. Promoting the goal can contribute to the perception of task relevance and hence the user’s willingness for task engagement (Farkas, 1999).

Guideline 5: Provide procedural rather than conceptual information. The foremost reason why people turn to a video tutorial for software training is that they want to know how to accomplish software tasks. The core issue is therefore to create video that is task- and action-oriented (Carroll, 1990; Van der Meij & Carroll, 1998). Plaisant and Shneiderman (2005) likewise indicate that video should concentrate on conveying procedural information. In the paper we will henceforth refer to instructions about task execution as ‘procedures.’

When a procedure must be learned, in addition to carrying out the actions, the user should be stimulated to reflect (Van der Meij, Karreman, & Steehouder, 2009). A design feature that is sometimes used in video to achieve such reflection is the inclusion of a deliberate pause of 2 to 5 seconds immediately after task completion. Ertelt’s research (2007) showed that such built-in reflection moments increase learning from video.

Guideline 6: Make tasks clear and simple. This guideline reflects the apprehension principle from Tversky, Bauer-Morrison, and Bétrancourt (2002), which states that animations should be readily and accurately perceived and comprehended. Minimalism suggests that software tutorials should present only the most basic or insightful method to the user (Van der Meij & Carroll, 1998). An additional requirement is that each task or sub-task should require no more than three to five actions to complete (Doumont, 2002; Spanjers, Van Gog, & Van Merriënboer, 2010; Van der Meij & Gellevij, 2004; Zacks, Speer, Swallow, Braver, & Reynolds, 2007). Furthermore, the instructions should follow the user’s mental plan in describing an action sequence. The actions and corresponding narrative should follow the path that the user’s thoughts and actions take during task execution (see Dixon, 1982; Farkas, 1999; Zacks & Tversky, 2003).

Guideline 7: Keep videos short. Plaisant and Shneiderman (2005) recommend a video length of 15 to 60 seconds to keep the user engaged and to minimize the amount of information that needs to be remembered. For tasks that are too long to display in a single demonstration, meaningful segmentation presents an important design challenge (see Spanjers et al., 2010; Zacks et al., 2007).

Guideline 8: Strengthen demonstration with practice. The coupling of instruction and practice that is commonly used in education is recommended for software training as well (see Rieber, 1991). Practice serves to consolidate and enhance learning. Ertelt’s (2007) study found that the opportunity for practice after video instructions significantly improved user performance compared to a non-practice control condition.

Experimental Design and Research Questions

The empirical study measures student cognition and motivation before, during, and after training with a video tutorial on Word’s formatting options. Task performance is the measure of student cognition. There are four measures to gauge student motivation (that is, task relevance, self-efficacy, mood states and flow). The specific research questions are stated below.

Question 1: Is motivation affected by the video tutorial, and what predicts any changes?

According to expectancy-value theory (Eccles & Wigfield, 2002), task relevance and self-efficacy are important motivational constructs. Task relevance refers to the perceived present and future utility of an activity. It indicates the importance of a task to a person’s goals or concerns (Van der Meij, 2007). A higher perception of task relevance stimulates someone to invest more effort. In other words, when a video tutorial enhances task relevance, there is a greater chance that it will motivate users to try out certain tasks. Self-efficacy refers to a person’s expectancy for success in challenging tasks (Bandura, 1997). When self-efficacy is enhanced, people are more likely to attempt new tasks and to persist when obstacles occur. For initial motivation, the study assesses task relevance and self-efficacy.

According to the cognitive-motivational model of Vollmeyer and Rheinberg (2006), mood state is an important mediator for the influence of initial motivation on task performance. A mood state indicates the feelings that students experience during training. In this study, we measure the valence (that is, positive, neutral, negative) of these mood states (compare Plass, Heidig, Hayward, Homer, & Um, 2014). We examine whether mood state is a motivational mediator separate from initial motivation by looking at its unique contribution to task performance.

Research from Vollmeyer and Rheinberg (1999, 2006) further indicates that flow mediates the role of initial self-efficacy in task performance. Flow is a sign of concentrated effort. When a person is in a state of flow there is a good balance between that person’s capacities and the task demands (Csikszentmihalyi, 1991). We examine whether flow is a motivational mediator separate from initial self-efficacy by looking at its unique contribution to task performance.

Question 2: Is cognition affected by the video tutorial, and what predicts any changes?

Task performance is the measure of cognition. Pre-testing assesses how well the students can already perform the formatting tasks in the video tutorial. The pre-test gives baseline performance. During training, task performance on the practice tasks is measured. Because students have access to the video during training, this outcome signals the quality of the tutorial as a job aid. After training, a post-test and a retention-test assess whether students can complete the formatting tasks on their own. Thus, these tests assess learning. As with motivation, the study explores the relationships among the various motivational and cognitive measures to discover which factors predict task performance.

Method

Participants

The participants were 23 male and 42 female students from the fifth and sixth grades (mean age 12 years, range 10.6 – 13.0 years). The students came from three classrooms in three elementary schools in the Netherlands.

The Video Tutorial

As stated earlier, the construction of the video tutorial was based on the guidelines from Van der Meij and Van der Meij (2013). In the discussion of the design of the video tutorial below, these guidelines are simply referred to, or their application in the designed tutorial is briefly mentioned.

The video tutorial discusses several Word formatting tasks that are important for school reports. Earlier studies have indicated that students from this age group generally do not yet know the best method, if any, for accomplishing these tasks (Van der Meij, 2012, 2013). The tasks are organized into three ‘chapters.’ The first deals with adjusting the left and right margins for a whole document. The second concentrates on formatting paragraphs, citations and lists. The third chapter deals with automatically generating a table of contents. The tutorial includes instructions in the form of previews or procedures. The visual demonstrations (animations) in the videos are always accompanied by a spoken voice (guideline 2).

The previews (guideline 4) define the concept (for example, paragraph or citation), distinguish the key screen object(s) needed for task completion, and display the starting and ending (before-after) screens of a task. The latter feature is a motivational strategy known as the late-point-of-attack sequence (Goodwin, 1991). It is included in the previews to raise the students’ appraisals of task relevance. The average length of the previews is 1.15 minutes (range 1.00 – 1.33). The preview always precedes the corresponding procedure.

The procedures demonstrate an unfolding scenario of task completion. Procedures describe, and sometimes explain, all of the user actions and software reactions in accomplishing a formatting task. Each student action on an input device is described in the narrative. The visible result on the interface is also shown in the video. Highlighting is used to draw the students’ attention to pertinent information on the screen (guideline 6). Only the most insightful method is taught. There are no discussions about alternative procedures for achieving the same task. Conceptual explanations are also virtually absent (guideline 5).

The narrative is spoken by a female voice that directs the student’s attention to the effects of actions with standard phrases such as ‘You now see …’ Occasionally, the video zooms in on screen sections, and screen objects or areas may be highlighted. The procedure ends by inviting the user to open a practice file and engage in hands-on practice (guideline 8). The average length of the procedures is 1.13 minutes (range 0.47 – 1.42). The length of both types of video (previews and procedures) thus approximates the recommended duration (guideline 7).

To facilitate access (guideline 1), the video tutorial is presented on a Web site where the screen is divided into two areas (see Figure 2). The left-hand area, with the table of contents, is permanently visible. Chapter titles refer to previews. These titles are displayed on a dark blue background and end with a special icon that signals they are for viewing only. Paragraph titles refer to procedures. They are presented on a lighter blue background and end with an icon that signals that viewing is followed by doing.

After the student clicks on a title, its background color changes to orange (as displayed for section 2.2 in Figure 2) and the corresponding video appears on the right-hand side, along with a transparent control toolbar on the bottom. The student can set the video into motion by pressing the start icon. The student can also pause and resume, return to the starting point, and increase or decrease the sound level (guideline 3). A ruler shows how far the video track has progressed.

For pre-training, a scaled-down version of the Web site was created for students to explore Web site navigation, to acquaint them with the difference between a preview and a procedure, and to practice switching between video and practice tasks.

[Figure 2. The video tutorial Web site, with the table of contents on the left and the playing area on the right]

Instruments

An Initial Experience and Motivation Questionnaire (IEMQ) measured the students’ initial motivation. For each training task, the student first received a Before-After screenshot plus explanation, and was then asked three questions: (a) ‘Do you ever have to do this task?’ (Experience), (b) ‘How often do you need to complete this task?’ (Task relevance), and (c) ‘How well do you think you can complete this task?’ (Self-efficacy). The student answered these questions by circling a number on a 7-point Likert scale where the end points were given as never – always, or very poorly – very well. Good reliability scores (Cronbach alpha) were found for Task relevance (0.85) and Self-efficacy (0.81).
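
Cronbach’s alpha for such a scale can be computed directly from an item-by-respondent score matrix. The following sketch is a minimal Python illustration with fabricated responses; the function name and sample data are our own assumptions, not materials from the study.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated answers of five students to a three-item 7-point Likert scale.
responses = [
    [6, 5, 6],
    [2, 3, 2],
    [5, 5, 4],
    [3, 2, 3],
    [7, 6, 7],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```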

A pre-test asked the student to demonstrate initial task performance. During this test the student was asked to modify the format of test files for the same tasks that would be trained. A score of 0 points was awarded for each task the student could not solve correctly. A good solution yielded a score of 1. The video tutorial presented 6 tasks to the students. Two tasks were each split into two sub-tasks in data analyses. Thus, the maximum pre-test score was 8, and scores were converted to a percentage of possible points.
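
A minimal sketch of this scoring rule (assuming, as described above, one point per correctly solved task or sub-task and a maximum of 8 points; the function and the example student are illustrative only):

```python
def percentage_score(points_awarded, max_points=8):
    """Convert per-(sub-)task scores (0 or 1 each) to a percentage of possible points."""
    return 100 * sum(points_awarded) / max_points

# A hypothetical student who solves five of the eight (sub-)tasks correctly:
print(percentage_score([1, 1, 0, 1, 0, 1, 0, 1]))  # 62.5
```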

An adapted version of Kolb’s Learning Style Inventory (LSI) assessed the students’ learning style (see Ten Hove, 2013). According to Kolb, people have different learning preferences (Kolb & Kolb, 2005). These can be represented in a two-dimensional figure in which the vertical line represents a ‘grasping’ mode (ranging from concrete experience to abstract conceptualization), and the horizontal line represents a ‘transforming’ mode (ranging from active experimentation to reflective observation). The adapted LSI consisted of twelve statements about learning with four possible answers. For instance, item 4 was formulated as ‘I learn by ….’ and the four answers were ‘feeling,’ ‘doing,’ ‘watching,’ and ‘thinking.’ These answers represented respectively concrete experience, active experimentation, reflective observation, and abstract conceptualization. Just as in the original LSI, when joined together the appraisals on these items classify a person as having a predominantly diverging, assimilating, converging or accommodating learning style. Unfortunately, the reliability outcomes for these styles were too low (all Cronbach alpha scores were below 0.60). Therefore, these data are not reported in this paper.

The Mood States and Motivation Questionnaire (MSMQ) was presented in a booklet that asked students to state their mood, their perception of task relevance, and their flow experience after task completion (for five tasks, so five times in total). Mood states were measured with a set of five pictograms plus descriptor, from which the student was asked to select the one that best fitted his or her current motivational state (Read, 2008). Pictograms (smileys) and text represented the following moods: happy, certain, neutral, uncertain, and sad. The analyses of mood states concentrated on their valence (that is, positive, neutral, or negative). Happy and certain were scored as signals of a positive mood; uncertain and sad were signals of a negative mood. Scores are given as a percentage. Thus, a score of, say, 80% for positive mood states indicates that the student selected the happy or certain smiley at four of the five measurement points for mood states.
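
The valence coding described above amounts to a simple mapping from pictograms to valence categories. The following Python sketch, with assumed names and an invented example student, reproduces the 80% calculation from the preceding paragraph.

```python
# Valence coding of the five mood pictograms, as described above.
VALENCE = {
    "happy": "positive", "certain": "positive",
    "neutral": "neutral",
    "uncertain": "negative", "sad": "negative",
}

def mood_percentages(selections):
    """Percentage of measurement points per valence (five points per student)."""
    counts = {"positive": 0, "neutral": 0, "negative": 0}
    for mood in selections:
        counts[VALENCE[mood]] += 1
    return {valence: 100 * n / len(selections) for valence, n in counts.items()}

# A hypothetical student who picked a positive smiley at four of five points:
print(mood_percentages(["happy", "certain", "neutral", "happy", "certain"]))
# {'positive': 80.0, 'neutral': 20.0, 'negative': 0.0}
```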

The MSMQ also presented five questions about task relevance (for example, ‘I found this task interesting,’ and ‘This task seems useful’), and four questions on flow (for example, ‘I had the feeling that I had everything under control,’ and ‘I was completely lost in thought’). The student answered these questions by circling a number on a seven-point Likert scale where the end points were given as true – untrue. All five measurement points yielded good reliability scores (Cronbach alpha) for relevance (range 0.85 – 0.95) and flow (range 0.84 – 0.96).

Students were prompted to practice the tasks for which they had received instructions (that is, whole text margins, paragraph indents, citation and list formatting, and automatic creation of a table of contents). For this hands-on experience they used practice files. In addition to facilitating task execution, these files standardized practice, making task completion efforts comparable across students. Task performance on the practice files was computed in the same way as for the pre-test.

A post-test and a retention-test asked the students to apply their recently acquired procedural knowledge in changing a single, poorly formatted Word file into a well-formatted exemplar. The tasks that the students were asked to perform with this file were similar to those discussed in the video. Scoring was identical to that of the pre-test.

Procedure

The study was conducted in three sessions. In the first, students were told (5 minutes) that they would engage in software training on Word to assist them in improving the formatting of their school reports. Next, they were instructed to complete the IEMQ and pre-test (20 minutes). The students also completed the adapted LSI (10 minutes).

Training followed a day later. This session started with a ten-minute introduction for the whole class. Then the students went to the computer room(s) where they were instructed to work independently for 50 minutes and to call for assistance only when stuck. Students received the audio input from the video via headphones. They could consult the video at any time. During training the MSMQ was administered five times, always after completion of a practice task.

After training was completed there was a five-minute break followed by the post-test, which the students had 20 minutes to complete. One week later the students took the retention-test (maximum 20 minutes). Students were not allowed to consult the video during testing.

Analysis

Repeated measures ANOVAs were computed to determine whether significant changes in motivation or cognition had taken place. Multiple regression analyses (stepwise) were used to identify significant predictors for cognitive and motivational measures. Only statistically significant outcomes are reported in detail. Due to missing data, the degrees of freedom occasionally vary slightly. All tests were two-tailed with alpha set at 0.05. Cohen’s (1988) d statistic was used to report effect sizes, which tend to be qualified as small for d ≈ 0.2, medium for d ≈ 0.5, and large for d ≈ 0.8.
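
For a design with one within-subjects factor at two levels, the repeated measures F equals the square of the paired t, and Cohen’s d can be derived from the two score distributions. The sketch below, using SciPy and fabricated data, illustrates this type of analysis; it is not the study’s actual code or data, and the pooled-SD variant of d shown here is only one common choice.

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation (one common variant)."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    return (y.mean() - x.mean()) / pooled_sd

rng = np.random.default_rng(1)
pre = rng.normal(21, 15, 65).clip(0, 100)   # fabricated pre-test percentages
post = rng.normal(68, 20, 65).clip(0, 100)  # fabricated post-test percentages

# Paired comparison; with two levels, F(1, n-1) equals t squared.
t, p = stats.ttest_rel(post, pre)
print(f"t({len(pre) - 1}) = {t:.2f}, p = {p:.4f}, d = {cohens_d(pre, post):.2f}")
```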

Results

Motivational outcomes and predictors

The IEMQ showed initial self-efficacy beliefs that were well above the scale mid-point of 3.5, indicating that the students started out with a fair degree of confidence in their capacities to complete the Word formatting tasks. In contrast, initial appraisals of task relevance were well below the scale mid-point. Students thus did not start out with considerable interest in the training tasks, supporting the design decision to address task relevance in the previews.

The mean score for task relevance during training was considerably above the scale midpoint (see Table 1). A repeated measures ANOVA revealed a statistically significant and substantial increase compared to the students’ initial task relevance, F(1,64) = 124.12, p < 0.001, d = 1.65. For flow the mean score was considerably above the scale midpoint (see Table 1).

Table 2 shows the outcomes for the mood states. According to the data, students predominantly reported positive mood states during training. Negative moods rarely occurred.

There were statistically significant relationships between the motivational measures during training. Task relevance correlated positively and significantly with flow (r = 0.48, p < 0.01). Table 3 shows that there was considerable convergence between the three motivational measures during training. That is, high appraisals of task relevance went hand-in-hand with more frequent reports of positive moods, while a negative correlation was found for neutral mood states. Similarly, strong flow experiences correlated positively with more frequent reports of positive moods while negative correlations were found with neutral or negative mood states.

Regression analyses with initial motivation and cognition as predictors for motivation during training yielded the following results. Task relevance before training accounted for a significant proportion of the variance in task relevance during training, R² = 0.08, F(1,64) = 5.45, p < 0.05. Flow was predicted by initial self-efficacy, R² = 0.08, F(1,64) = 5.65, p < 0.05, matching the prediction from Vollmeyer and Rheinberg’s (2006) cognitive-motivational process model. In both cases the relationship was positive: higher initial task relevance predicted higher task relevance during training, and higher initial self-efficacy predicted stronger flow during training. Mood states were not predicted by any initial measure of motivation. Initial (pre-test) task performance also did not predict motivation during training.


Cognitive outcomes and predictors

Table 4 shows that students started out with low initial task performance, indicating that there was an objective need for training. During training, the students achieved a very high level of task performance. On average, they successfully completed over eighty percent of the practice tasks. The difference with the pre-test was both statistically significant and substantial, F(1,62) = 397.78, p < 0.001, d = 3.29. In short, the video tutorial was very successful as a job aid. A regression analysis with initial motivation and cognition as predictors for task performance during training yielded no significant outcomes.

Table 4 further shows that students achieved a mean score of over sixty-five percent success in follow-up testing. From pre-test to post-test there was a significant and considerable increase in task performance, F(1,60) = 172.33, p < 0.001, d = 1.96. This was likewise the case from pre-test to retention-test, F(1,62) = 138.90, p < 0.001, d = 1.75. From post-test to retention-test the students’ task performance remained relatively stable, F(1,58) < 1.

The video tutorial appeared to have satisfactory success in achieving learning. Substantial progress was recorded from pre-test to post-test and retention-test. The findings on the retention-test further indicated that the students’ procedural knowledge development was lasting. It was not a temporary spike.

The difference between the effectiveness of the video tutorial as a job aid and for learning emerged in the decline in task performance after the practice tasks. From training to post-test, there was a significant and considerable decrease in task performance, F(1,60) = 37.15, p < 0.001, d = 0.82, just as there was for the comparison between training and retention-test, F(1,60) = 39.10, p < 0.001, d = 0.86.

A regression analysis with all measures before and during training as predictors for the post-test scores yielded the following results. Performance success on the practice tasks alone accounted for 18.7% of the variance, F(1,60) = 13.54, p < 0.001, and together with the pre-test score, explained 25.3% of the variance, F(2,60) = 9.80, p < 0.001. In short, only cognitive factors predicted the outcomes on the post-test.
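
As an illustration of how such incremental R² values can be obtained, the following sketch fits two nested regression models with statsmodels on fabricated data (plain OLS with a fixed predictor order here, rather than the stepwise procedure used in the study).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 61
training = rng.normal(86, 10, n)   # fabricated practice-task percentages
pretest = rng.normal(21, 15, n)    # fabricated pre-test percentages
posttest = 0.5 * training + 0.4 * pretest + rng.normal(0, 12, n)

model1 = sm.OLS(posttest, sm.add_constant(training)).fit()
model2 = sm.OLS(posttest, sm.add_constant(np.column_stack([training, pretest]))).fit()
print(f"R2, training only:       {model1.rsquared:.3f}")
print(f"R2, training + pre-test: {model2.rsquared:.3f}")
```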

A regression analysis with all measures before and during training, plus the post-test scores, as predictors for the retention-test scores yielded the following results. Post-test alone accounted for 34.4% of the variance, F(1,58) = 29.92, p < 0.001. Post-test together with training explained 44.3% of the variance, F(2,58) = 22.26, p < 0.001. In short, only cognitive factors predicted the outcomes on the retention test.

Discussion and Conclusion

All measures of motivation indicated that the video tutorial was well received. The students predominantly reported having been in a positive mood state during training. In addition, measures of flow and task relevance during training showed that students felt sufficiently challenged, as well as supported, by the video tutorial.

The finding that task relevance during training was predicted by task relevance prior to training, and flow by initial self-efficacy, signaled that the video tutorial was well attuned to the students’ expectancies. More generally, the findings for motivation suggested that the eight design guidelines on which the video was grounded sufficiently address user affect. Clearly, the present study could not test this claim, nor could it identify the most critical guideline(s). For that, controlled experimental studies are needed.

The correlational measures indicated that all measures of motivation during training were significantly related to each other. In addition, the nature of the relationships made sense. That is, when students were more appreciative of task relevance and of their flow experiences during training, they also more often reported having been in a positive mood state. The correlations were moderate, as one probably should expect. Very low correlations would be surprising and counter-intuitive. Very high correlations would suggest that the motivational measures were indistinguishable.

In short, the outcomes for motivation during training bode well for the acceptance of similarly designed video tutorials. That is, once students have gained access to such a tutorial, they are likely to process the videos that it contains right through to the very end. Although there is no proof, the guideline to keep videos short, preferably within a 1-minute range, could be critical in this respect.

The findings indicated that the video tutorial served as an effective job aid. After having seen the videos, the students’ task performance success rose from an initial low 21% score to a high of 86%. The video presumably provided most students with their first exposure to the proper method of performing a particular formatting task in Word. However, the video may also have occasionally just jogged the student’s memory for a (partially) forgotten procedure (compare Merkt, Weigand, Heier, & Schwan, 2011). For performance during training it does not matter whether the video served as an introduction, or as a memory aid. What does matter is whether it effectively supported task achievement. The findings clearly indicated that it did. Furthermore, the regression analysis showed that success during training was not affected by the students’ initial cognition or motivation. Presumably, it was the video that made the difference.

To further investigate the usage of the video tutorial and to inform design practices, it would have been useful to log the students’ actions with screen recording software (for example, Camtasia). These data could have yielded valuable insights about the frequency of the use of video controls for speeding up or slowing down the instructions. In addition, student logs could have revealed how often the videos were consulted during practice tasks, and how the students searched to find the relevant video fragment for information about individual task actions. We did not gather student logs because of practical constraints. Much of our empirical work on software training is done in regular schools, most of which have relatively older computers that do not mix well with recording software. In short, we did not log student actions to prevent computer hiccups and crashes.

The findings further showed that the video tutorial also effectively served its role as a support for learning, with post-test and retention-test scores of 68% and 66%, respectively. Clearly, these findings leave room for improvement. But set against the initial score of 21%, the change was both statistically significant and substantial. There are probably two reasons why the scores for acting as a job aid were higher than those for learning.

One explanation could relate to a difference in context. In testing, method selection was not cued by a video immediately preceding task performance. It was only during testing that the student needed to know which method went with which problem. During training this issue simply did not arise because there was a perfect alignment between the task instructions in the video and the ensuing practice task performance. Both invariably involved the same formatting task and method. That is, when the video instructed the student how to indent paragraphs, the training task also asked the student to indent paragraphs, and so on. In testing, the context was different. All formatting tasks were presented at once, so the appropriate solution method was not as neatly aligned and pre-signaled as it was in training. Testing thus required the student to select the proper method from among the array of methods that had been taught. That is, only during testing did the student have to know that the right method for indenting paragraphs involves usage of the ‘First Line Indent,’ rather than the ‘Hanging Indent,’ or the ‘Left Indent.’ In other words, testing required students to make the proper choice of method for each formatting task. In training this was almost a dead giveaway. One might consider it an omission not to teach the student about selection rules in the video, or not to include ‘trick’ formatting tasks in the training files. However, since this might have entailed the risk of confusing the student, we chose not to do so.

Another explanation is that it is harder to achieve learning than to act as a job aid because of the way these two are measured. The proper way to measure acting as a job aid involves an examination of task performance with the instructions present. In contrast, testing for a learning effect requires an absence of outside help, because it should assess acquired knowledge stored in the user’s long-term memory. Thus, the lower scores for learning may have to do with the user’s memory. There can be many reasons why a user has less than perfect recollection of solution methods. Among these is that the user may have failed to encode, organize and/or integrate some information from the video during training. After training it may also be too difficult for the user to recall or reconstruct the right solution from memory.

Surprisingly, not a single motivational measure was found to predict task performance. For a long time, work on multimedia learning has concentrated on cognitive processes. The introduction of CATLM (Moreno, 2006, 2009) illustrates an attempt to complement this cognitive view on learning with motivational factors, among others. An important new facet in this more recent multimedia research concerns the motivational mediation assumption, which holds that ‘motivational factors mediate learning by increasing or decreasing cognitive engagement’ (Plass et al., 2014). This assumption was examined in the present study. The finding that only cognitive factors predicted these outcomes is perhaps a signal that research still has a long way to go to find ways to integrate and measure motivational and cognitive processes in multimedia learning. In other words, as suggested by Magner, Schwonke, Aleven, Popescu, and Renkl (2014), the influence of motivational mediators on learning may require better process data and more complex data analysis methods than those used in the present study.

We probably need to examine more closely how users’ motivation influences their decision to engage in task execution and to persevere when there are obstacles, and how this contributes to learning. The positive reports of motivation found for the video tutorial provide a good basis for taking up that challenge.

Acknowledgments

The author wishes to thank Petra ten Hove for her help in conducting this study.

References

Agrawala, M., Li, W., & Berthouzoz, F. (2011). Design principles for visual communication. Communications of the ACM, 54(4), 60-69. doi: 10.1145/1924421.1924439

Bandura, A. (1997). Self-efficacy. The exercise of control. New York, NY: Freeman and Company.

Brunyé, T. T., Taylor, H. A., Rapp, D. N., & Spiro, A. B. (2006). Learning procedures: The role of working memory in multimedia learning experiences. Applied Cognitive Psychology, 20, 917-940. doi: 10.1002/acp.1236

Carroll, J. M. (1990). The Nurnberg Funnel. Designing minimalist instruction for practical computer skill. Cambridge, MA: MIT Press.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Csikszentmihalyi, M. (1991). Flow: The psychology of optimal experience. New York, NY: Harper Perennial.

Dixon, P. (1982). Plans and written directions for complex tasks. Journal of Verbal Learning and Verbal Behavior, 21, 70-84. doi: 10.1016/S0022-5371(82)90456-X

Doumont, J.-L. (2002). Magical numbers: The seven-plus-or-minus-two myth. IEEE Transactions on Professional Communication, 45(2), 123-127. doi: 10.1109/TPC.2002.1003695

Eccles, J. S., & Wigfield, A. (2002). Motivation beliefs, values, and goals. Annual Review of Psychology, 53, 109-132. doi: 10.1146/annurev.psych.53.100901.135153

Ertelt, A. (2007). On-screen videos as an effective learning tool. The effect of instructional design variants and practice on learning achievements, retention, transfer, and motivation. (Doctoral dissertation), Albert-Ludwigs Universität Freiburg, Germany.

Farkas, D. K. (1999). The logical and rhetorical construction of procedural discourse. Technical Communication, 46, 42-54.

Goodwin, D. (1991). Emplotting the reader: Motivation and technical documentation. Journal of Technical Writing and Communication, 21(2), 99-115. doi: 10.2190/1TLD-2JBL-DD7X-PXK3

Hagemans, M. G., Van der Meij, H., & De Jong, T. (2013). The effects of a concept map-based support tool in simulation-based inquiry learning. Journal of Educational Psychology, 105(1), 1-24. doi: 10.1037/a0029433

Keller, J. M. (2010). Motivational design for learning and performance. The ARCS-Model approach. New York, NY: Springer.

Kolb, A. Y., & Kolb, D. A. (2005). The Kolb Learning Style Inventory, Version 3.1: Technical specifications. Experience Based Learning Systems, Inc.

Lang, A. (2000). The limited capacity model of mediated message processing. Journal of Communication, 50, 46-70. doi: 10.1111/j.1460-2466.2000.tb02833.x

Leutner, D. (2014). Motivation and emotion as mediators in multimedia learning. Learning and Instruction, 29, 174-175. doi: 10.1016/j.learninstruc.2013.05.004

Magner, U. I. E., Schwonke, R., Aleven, V., Popescu, O., & Renkl, A. (2014). Triggering situational interest by decorative illustrations both fosters and hinders learning in computer-based learning environments. Learning and Instruction, 29, 141-152. doi: 10.1016/j.learninstruc.2012.07.002

Mayer, R. E. (2001). Multimedia learning. New York, NY: Cambridge University Press.

Mayer, R. E. (2005a). The Cambridge handbook of multimedia learning. New York, NY: Cambridge University Press.

Mayer, R. E. (2005b). Principles for managing essential processing in multimedia learning: Segmenting, pretraining, and modality principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 169-182). New York, NY: Cambridge University Press.

Merkt, M., Weigand, S., Heier, A., & Schwan, S. (2011). Learning with videos vs. learning with print: The role of interactive features. Learning and Instruction, 21, 687-704. doi: 10.1016/j.learninstruc.2011.03.004

Moreno, R. (2006). Does the modality principle hold for different media? A test of the method-affects-learning hypothesis. Journal of Computer Assisted Learning, 22, 149-158. doi: 10.1111/j.1365-2729.2006.00170.x

Moreno, R. (2009). Learning from animated classroom exemplars: The case for guiding student teachers’ observations with metacognitive prompts. Educational Research and Evaluation, 15(5), 487-501. doi: 10.1080/13803610903444592

Novick, D. G., & Ward, K. (2006). What users say they want in documentation. El Paso: University of Texas.

Park, B., Plass, J. L., & Brünken, R. (2014). Cognitive and affective processes in multimedia learning. Learning and Instruction, 29, 125-127. doi: 10.1016/j.learninstruc.2013.05.005

Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education. Theory, research, and applications (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.

Plaisant, C., & Shneiderman, B. (2005). Show me! Guidelines for recorded demonstration. Paper presented at the 2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC’05), Dallas, Texas. http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=2005-02

Plass, J. L., Heidig, S., Hayward, E. O., Homer, B. D., & Um, E. (2014). Emotional design in multimedia learning: Effects of shape and color on affect and learning. Learning and Instruction, 29, 128-140. doi: 10.1016/j.learninstruc.2013.02.006

Read, J. C. (2008). Validating the fun toolkit: An instrument for measuring children’s opinions of technology. Cognition, Technology & Work, 10, 119-128. doi: 10.1007/s10111-007-0069-9

Rieber, L. P. (1991). Animation, incidental learning, and continuing motivation. Journal of Educational Psychology, 83(3), 318-328. doi: 10.1037/0022-0663.83.3.318

Roshier, A. L., Foster, N., & Jones, M. A. (2011). Veterinary students’ usage and perception of video teaching resources. BMC Medical Education, 11(1). doi: 10.1186/1472-6920-11-1

Spanjers, I. A. E., Van Gog, T., & Van Merriënboer, J. J. G. (2010). A theoretical analysis of how segmentation of dynamic visualizations optimizes students’ learning. Educational Psychology Review, 22, 411-423. doi: 10.1007/s10648-010-9135-6

Ten Hove, P. (2013). Measuring children’s learning styles. An application of Kolb’s Learning Style Inventory for elementary school children. Final project, Pre-Master Educational Science and Technology. University of Twente, Enschede, the Netherlands.

Tversky, B., Bauer-Morrison, J., & Bétrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57, 247-262. doi: 10.1006/ijhc.2002.1017

Van der Meij, H. (2007). Goal-orientation, goal-setting and goal-driven behavior in (minimalist) user instructions. IEEE Transactions on Professional Communication, 50(4), 295-305.

Van der Meij, H. (2012). Supporting children in improving their presentation of school reports. In M. Torrance, D. Alamargot, M. Castelló, R. Llull, F. Ganier, O. Kruse, A. Mangen, L. Tolchinsky & L. van Waes (Eds.), Learning to write effectively. Current trends in European Research (pp. 169-172). Bingley, UK: Emerald Group Publishing.

Van der Meij, H. (2013). Motivating agents in software tutorials. Computers in Human Behavior, 29, 845-857.

Van der Meij, H., & Carroll, J. M. (1998). Principles and heuristics for designing minimalist instruction. In J. M. Carroll (Ed.), Minimalism beyond the Nurnberg funnel. Cambridge, MA: MIT Press.

Van der Meij, H., & Gellevij, M. R. M. (2004). The four components of a procedure. IEEE Transactions on Professional Communication, 47(1), 5-14. doi: 10.1109/TPC.2004.824292

Van der Meij, H., Karreman, J., & Steehouder, M. (2009). Three decades of research and professional practice on software tutorials for novices. Technical Communication, 56(3), 265-292.

Van der Meij, H., & Van der Meij, J. (2013). Eight guidelines for the design of instructional videos for software training. Technical Communication, 60(3), 205-228.

Vollmeyer, R., & Rheinberg, F. (1999). Motivation and metacognition when learning a complex system. European Journal of Psychology of Education, 14(4), 541-554. doi: 10.1007/bf03172978

Vollmeyer, R., & Rheinberg, F. (2006). Motivational effects on self-regulated learning with different tasks. Educational Psychology Review, 18, 239-253. doi: 10.1007/s10648-006-9017-0

Zacks, J. M., Speer, N. K., Swallow, K. M., Braver, T. S., & Reynolds, J. R. (2007). Event perception: A mind-brain perspective. Psychological Bulletin, 133(2), 273-293. doi: 10.1037/0033-2909.133.2.273

Zacks, J. M., & Tversky, B. (2003). Structuring information interfaces for procedural learning. Journal of Experimental Psychology: Applied, 9(2), 88-100. doi: 10.1037/1076-898X.9.2.88

About the Author

Hans van der Meij is senior researcher and lecturer in Instructional Technology at the University of Twente (The Netherlands). His research interests are questioning, technical documentation (for example, instructional design, minimalism, the development of self-study materials), and the functional integration of ICT in education. He has received several awards for his articles, including a ‘Landmark Paper’ award from IEEE for a publication on minimalism (with John Carroll), and a ‘Distinguished Paper’ award from STC for his publication on design guidelines for instructional video (with Jan van der Meij). Contact: h.vandermeij@utwente.nl.

Manuscript received 27 March 2014; revised 21 April 2014; accepted 23 April 2014.