By Hans van der Meij and Constanze Hopfner
Purpose: Video is a popular medium for instructing people how to use software. In 2013, van der Meij and van der Meij proposed eight guidelines for the design of instructional videos for software training. Since then, production techniques and video features have evolved, and new insights about the characteristics of effective video instruction have emerged.
Methods: Based on recent study outcomes and our reflections on instructional video designs, the original set of eight guidelines was restructured, updated, and extended.
Results: A new framework with 11 guidelines was constructed. For each guideline, the article provides scientifically based advice for the design of instructional videos for software training.
Conclusion: The new framework and the illustrations of how the guidelines were applied in videos should provide useful insights for further practice and research on instructional video design.
KEYWORDS: instructional video, design characteristics, software training, procedural knowledge development
- The 11 guidelines presented in this paper extend earlier work that offered eight guidelines for designing instructional videos for software training.
- Novel guidelines about how to design the procedural discourse, video reviews, and background music are discussed. In addition, design advice on narrator presence on-screen and its effect on learning is provided.
With COVID-19, educators around the world have been challenged to provide students with ample opportunities for online arrangements to support learning. The kinds of options that are chosen partly depend on the goals that must be achieved. For procedural knowledge development, an instructional video is a good candidate because it can present a model of task performance (Grossman et al., 2013). This article concentrates on the design of this type of video. More specifically, the focus is on presenting a set of guidelines for the design of instructional videos for software training.
In 2013, van der Meij and van der Meij proposed a framework for the support of procedural knowledge development in software training that consisted of eight design guidelines. The guidelines were based on multimedia theory and demonstration-based training. The framework has since served as the foundation for constructing instructional videos in several recent studies on software training (e.g., Garrett, 2021; Kelly, 2017; Kokoç, Ilgaz, & Altun, 2020; Randhave et al., 2019). For instance, Randhave et al. used it to create an instructional video for an electronic medical record training intervention, and Garrett used the eight guidelines for constructing an instructional video on Excel’s conditional formatting features. More generally, since its conception, the framework has frequently been cited for including one or more of its guidelines in instructional video designs (e.g., Cudmore & Slattery, 2019; Espino, Artal, & Betancor, 2021; Käfer, Kulesz, & Wagner, 2017).
The present study restructured, complemented, and reformulated the eight-guideline framework based on recent research findings and our careful reflection on new insights for instructional video designs. Among other changes, a guideline was added to provide designers with information on the kind of content they should present in procedural discourse. According to this new guideline, such discourse should invariably give information on two components (i.e., goals, and actions and reactions) and may include information on two optional ones (i.e., prerequisites and unwanted states) depending on circumstances. In addition, some guidelines were reformulated to make them clearer. For instance, the guideline to “Provide easy access” was changed to “Support users in finding a suitable video,” to which two detailed guidelines were added for more concrete support. Figure 1 presents the new and more comprehensive framework of the 11 guidelines.
Just as in the original framework, the guidelines were formulated broadly enough to cover a large audience and diverse contexts. The following sections describe the reason(s) for proposing each guideline, state their theoretical rationale, and present a summary of available empirical support (if any). The article concludes with a brief discussion on the possibilities and limitations of the new framework.
Guideline 1: Keep Instructional Videos Short
An important consideration in instructional video design is to prevent early dropout. After all, users can only learn from video segments they have actually viewed. Shorter instructional videos are better at keeping the user on board than longer ones. Figure 2 shows the guidelines for optimizing user attention to an instructional video.
There are different recommendations for what constitutes a good length for an instructional video. A questionnaire among SAP users indicated that they prefer a length of at most 90 seconds (Huxhold & Luther, 2013). Wistia, a video hosting platform that uses data mining techniques to discover the optimal length for a video, found two minutes to be a sweet spot after which a rapid loss of audience sets in (Fishman, 2016). Finally, Guo, Kim, and Rubin (2014), who used data mining to suggest the proper length of EdX videos, recommended segmenting such videos into chunks of less than six minutes. In our own experience, 90–120 seconds is cutting it a bit short, even for instructional videos that explain only one software task. We have also found a six-minute length to be sufficient for nearly all task explanations. In addition, a recent survey revealed that users of YouTube videos prefer a length of at most six minutes (Dascălu et al., 2020). Therefore, this is our recommended length.
Guideline 2: Support Users in Finding a Suitable Instructional Video
Users must overcome at least two hurdles when searching for a suitable instructional video. The first obstacle is finding good candidates; this means locating potentially relevant instructional videos. The second barrier is judging suitability. Users should be able to quickly appraise whether the instructional video is likely to offer a solution to their problem (see Brand-Gruwel, Wopereis, & Vermetten, 2005). Figure 3 shows the guidelines for providing easy access to a pertinent instructional video.
The heading or title plays a vital role in the user’s information search processes. In his streamlined step model, Farkas (1999) argued that a title is a mandatory component in (nearly) all procedural discourse. The model dictates that the title should be crafted carefully so that it clearly conveys the task that is demonstrated in the instructional video; the title should give the user a succinct goal description. Also, the title should be concrete rather than abstract. That is, research has shown that texts with concrete titles are deemed easier to comprehend, more interesting, and more motivating to study than texts with abstract titles (Lippmann et al., 2019). In short, the title should describe the general action in concrete terms and represent the purpose of the procedure.
To add to the concreteness of the video title, we also recommend including the software name and version. The presence of this information in the title may speed up the video selection process. It alerts users to the fact that software options and procedures can vary across versions and that they should look for the matching one, and it facilitates the search for users who already know that they need instructions for a specific software version.
Guideline 3: Preview the Task
A preview provides a structural overview of what is covered in an instructional video. It can be presented to support the decision for further study and facilitate the user’s information processing. That is, the information can motivate the user to engage with the video, activate prior knowledge, and provide an ideational scaffold that makes it easier for the user to understand the instructions. Figure 4 shows the guidelines for preparing the users for a task demonstration in an instructional video.
Empirical research with paper-based instructional materials has repeatedly shown that advance organizers are conducive to learning (e.g., Gurlitt et al., 2012; Mayer, 1979; Roohani, Jafarpour, & Zarei, 2015; Teng, 2020). A preview is the multimedia alternative. A preview is a short, animated presentation that informs the user about the goal and the procedure to achieve it.
The goal information in the preview is important for making relevance judgments. It provides the basis for the user’s decision whether watching the video is worth the time (Almeida, Leite, & Torres, 2013). Motivational theories such as goal-setting (Locke & Latham, 2002) and expectancy-value theory (Eccles & Wigfield, 2002) invariably emphasize that goal information contributes to the users’ task engagement. Preferably the user is presented with goals that are specific and clear. One particularly effective means for presenting such information comes from showing both the start and end state. Figure 5 shows an example of an illustrated start and end state from an image editing video.
A preview should also provide the user with a succinct view on the main action steps needed to achieve the goal. The procedural information in the preview shows the user the main trajectory in task execution. This procedural overview can serve as an anchor point for the demonstration that follows.
There is very little empirical research on previews in software training, and to our knowledge, only two studies have been reported in the literature (van der Meij, 2014, 2019). These studies reveal that a preview that elucidates the goal and animates the procedure enhances the user’s task performance during and after training.
Guideline 4: Use a Screencast with Narration
Instructional videos for software training usually come in the form of recorded demonstrations (Plaisant & Shneiderman, 2005). The demonstration animates the actions on the software, providing the user with a dynamic image of task progression on the interface. The narration accompanies the unfolding scenario. It gives the user a rationale for task execution and describes the distinct action steps therein. Figure 6 shows the guidelines for presenting the words and pictures in an instructional video.
Instructional videos offer the designer the opportunity to use both pictures and words for instructing the user. The combination of the two modalities should be carefully considered. Multimedia learning theory has advanced several design principles for this coupling that have repeatedly been validated in empirical research. According to the multimedia principle, people learn more from a meaningful combination of words and pictures than from words alone (Butcher, 2014). Multimedia research has further shown that it is better to use spoken words rather than written words (e.g., on-screen text). This is captured in the modality principle which holds that written words compete with pictures for user attention which reduces learning (Mayer & Pilegard, 2014).
Furthermore, it has been found that learning is hampered when the spoken and written words in a multimedia presentation are identical. This finding has led to the redundancy principle which holds that duplication of spoken and written words unnecessarily taxes the user and hampers learning (Mayer & Fiorella, 2014). Regarding the joint presence of spoken and written words, it has, however, also been found that a careful combination of the two can actually benefit the user. Such a situation occurs when summarizing labels are included (Koumi, 2013), or when one or two key terms from the narration are presented in written form next to the relevant part of an image (Adesope & Nesbit, 2012). Such designs create a desirable difficulty that stimulates the user to actively process both kinds of information, which has been found to enhance learning (Yue, Bjork, & Bjork, 2013). Figure 7 presents an example of an instructional video that uses written (on-screen) labels to convey key terms described in the narrative.
According to the voice principle in multimedia learning theory, it is better to use a human voice over a computer-generated one (Mayer, 2014b). That is, research has found that a human voice is more conducive to learning than a computer-generated one because the latter is more difficult to understand. Empirical support for the voice principle rests on a limited set of relatively older studies, however. Because text-to-speech software is widely available nowadays and also has improved considerably in the last decade, the question can be raised whether the voice principle still holds. Recent studies comparing a modern computer-generated voice to a human voice have found no learning advantage of either type of voice (e.g., Castro-Alonso et al., 2021; Craig & Schroeder, 2019; Davis, Vincent, & Park, 2019). In other words, provided that modern software is used in production, a computer-generated voice and a human voice can be equally effective for learning.
Guideline 5: Support an Action-Oriented Approach
A majority of users are likely to consult an instructional video for assistance in task achievement; users’ primary interest lies in receiving procedural rather than conceptual information. Prioritizing showing over explaining fits an action-oriented approach to software documentation (Carroll, 1998). In an instructional video, this approach generally results in a design that revolves around a demonstration of a task performance (Brar & van der Meij, 2017; van der Meij, 2017). In that demonstration, the user should be given conceptual information only when and where such information is needed. Figure 8 shows the guidelines for making actions easy to follow and mimic in an instructional video.
People who turn to an instructional video on software are primarily interested in achieving tasks. The main body of information in the instructional video should, therefore, be action-oriented and hence consist of procedural information. It should be a priority in design to give information that enables or guides the user’s task completion (Brar & van der Meij, 2017; Kim, Nguyen, et al., 2014).
At the same time, users also often need information with which to plan and evaluate their actions. Instructional videos should, therefore, also give conceptual information to explain the rationale for a procedure and to reveal underlying principles (see Clark & Mayer, 2016). But this information should be limited. An abundance of explanatory information can make an instructional video too wordy and reduce its efficiency (Shoufan, 2019), or worse yet, the user may be put off and quit viewing early.
Guideline 6: Consider the Key Components of a Well-Designed Procedure
An instructional video should inform the user of all the information needed to accomplish a task. The Four Components model (van der Meij, Blijleven, & Jansen, 2003; van der Meij & Gellevij, 2004) can serve as a framework for designing procedural discourse. According to this model, such a discourse generally consists of two mandatory and two optional elements. The information types that should always be present are the goal, and action and reaction components. Goal information gives direction and purpose to the user’s actions. Information about the user’s actions on the software, and the software’s responses (reactions), captures the process of human-computer interaction that is at the core of every procedural discourse involving software. During task performances, the user occasionally benefits from receiving information about prerequisites and unwanted states. Information about prerequisites alerts the user to necessary system states and pertinent prior knowledge and skill. Information about unwanted states helps the user avoid certain mistakes and assists in error management. Figure 9 shows the guidelines for presenting the content of a procedure in an instructional video.
A goal is an objective that can be achieved with a procedure. Goal information enables the user to assess whether there is a match with the desired state (Farkas, 1999). In addition, it gives the user information about the direction of the actions that are to be performed. In instructional videos, the main goal is often codified in the title, but there should also be information about intermediate and end states in the procedural discourse itself. That is, one should expect to find goal statements right before the start of a procedure and as part of an action statement that describes an interim state. Such goal statements may precede the action information (e.g., in a sound editing video: “Now we’re gonna look for a section that only has the noise.”), or follow afterwards (e.g., in an image editing video: “We’re going to open this picture in a new document so we can increase the resolution.”).
Prerequisites are conditions that must be satisfied before the user can engage in a task procedure. There are two kinds of prerequisites: system states and user knowledge and skill (van der Meij et al., 2003). System states describe the start position or necessary material for the procedure. The start position is presumed to be present, or the user is expected to know how to get there. That is, the prerequisite state is simply depicted or described (e.g., “You should see the home page.”; “The Frequencies menu should be visible.”). For necessary materials, the user is simply told to access a pertinent file. Prerequisites also concern the foundational knowledge and skills needed to successfully engage in a task procedure. An instructional video may mention these prerequisites to help the user decide whether to continue with the video or to address the prior knowledge or skills first. For instance, an instructional video may alert the user to a necessary fact or concept (e.g., “You should already know styles.”; “Recall that a style is a set of formatting characteristics.”), or it may state what the user already needs to be able to do (e.g., “You should already know how to select styles.”).
The action and reaction component forms the heart of any procedural discourse. Addressing only the user actions would unduly ignore the effects on the system; it is too easily forgotten that user actions evoke software reactions that enable new user actions. To do justice to this intricate relationship, the Four Component model considers actions and reactions in tandem (van der Meij et al., 2003; van der Meij & Gellevij, 2004). While it may not be necessary to connect each action command with a software reaction, the instructional video should provide enough system state information so that users can easily monitor task progress. A typical example of an action statement and the feedback that may follow is: “Double-click the margin at the top or bottom of your document. This will ‘unlock’ the header or footer area.”
Errors or unwanted states are situations that the user should avoid getting into. A prominent advocate of addressing error in user documentation is the minimalist approach (van der Meij & Carroll, 1998). Several empirical studies conducted from this perspective have supported claims that the presence of information with which the user can detect, diagnose, and correct mistakes contributes to motivation and learning (e.g., Lazonder, 1994; Lazonder & van der Meij, 1995). A recent literature review on error-inclusive approaches in software documentation and training (van der Meij & Flacke, 2020) substantiated these findings. It reported that error-inclusive approaches significantly enhanced task performances after training and helped develop error-management skills (including the capacity to deal with the frustration that usually comes with error).
Guideline 7: Make the Task Demonstration Easy to Follow and Mimic
An instructional video should consist of easy-to-understand, concise, prototypical descriptions of how to achieve a task. One way of doing so is to employ a conversational style in the narration. Also, when tasks and actions are presented in a simple-to-complex sequence, users can gradually build up their knowledge, and the effort of learning new procedures is reduced. Furthermore, it is easier for the user to process action instructions that involve menu-based choices rather than (arbitrary) keyboard shortcuts, because a menu-based approach offers the user semantically meaningful information about what needs to be done. Finally, to the uninitiated user, an interface may pose a challenging context for finding the information that they need. Users, therefore, often benefit from highlighting to guide their attention to key objects or locations on the interface. Figure 10 shows the guidelines for making task demonstrations clear and simple in an instructional video.
The narrative in an instructional video can be given in a formal or conversational style. A formal style is characterized by the frequent use of the passive voice and a narrator staying in the background. In contrast, a conversational style addresses the user as “you” and the designer is foregrounded through the use of “I.” Also, an effort is often made to create a shared perspective by using words such as “we” or “us” for intentions or goals. Empirical research on multimedia presentations shows that a conversational style is easier to understand and yields more learning than a formal style (Ginns, Martin, & Marsh, 2013).
The guideline to follow the user’s mental plan in describing an action sequence refers to the desirability of presenting tasks in a simple-to-complex sequence so that the user can keep up with increasing levels of task complexity (van Merriënboer & Kester, 2014). When easier tasks are presented before more difficult ones, an optimal balance can be obtained between what the user already knows and the new knowledge that must be acquired. On a more detailed level, this guideline suggests that an action instruction is easier to understand and follow when it mentions successive events in the correct order (Clark & Clark, 1968). For action instructions, this means that they should begin with the antecedent condition before presenting the action information. Thus, it is better to state “On the Insert menu, click Pictures” than to use “Click Pictures, on the Insert menu.”
Another way of making tasks clear and simple lies in following a menu-based approach for triggering an action instead of telling users what keyboard shortcut they should press. A menu-based approach is semantically meaningful and, therefore, easier to learn than a keyboard shortcut approach, which has the advantage of being more efficient. Occasionally, the two approaches are combined to facilitate the transition from one to the other type of user action (e.g., Cockburn, Gutwin, & Scarr, 2014; Cui et al., 2019).
A prevalent way of facilitating the processing of task information in instructional videos comes from the use of visual highlighting. Signals can guide the user’s attention to essential screen elements, helping the user perceive key points of information without adding content. Figure 11 illustrates a case of visual highlighting in a sound editing video. The red circle guides the user’s attention to the location of the cursor which could otherwise be missed. In addition, signals can support the user in organizing and integrating the information (van Gog, 2014). Highlighting has been extensively studied in multimedia research. The results of these studies are conveniently summarized in two recent meta-studies. Richter, Scheiter, and Eitel (2016) found that highlighting enhanced learning. In addition, the authors noted that this effect was especially strong for users with low prior knowledge. The meta-study from Schneider, Beege, Nebel, and Ray (2018) also found a learning effect as well as positive effects on motivation and learning time. However, this study found no moderating effect of prior knowledge.
Guideline 8: Support Users in Handling the Transitory Nature of Video
One of the complicating factors in processing instructional videos is their transitory nature. The speed of an instructional video plays an important role in how well users keep motivated and can handle the ongoing stream of information (compare Cohn & Foulsham, 2020; Huff, Meitz, & Papenmeier, 2014). Setting the right pace for the instructional video is a matter of handling the complex interplay of auditory and visual information. The visual and auditory information in an instructional video should move at a pace that enables the user to perceive the task demonstration accurately and comprehend its content (Tversky, Bauer-Morrison, & Bétrancourt, 2002). There is always a risk of creating an instructional video that is too slow for some viewers and too fast for others. If the instructional video runs too slow, users may lose patience (Johnstone & Scherer, 2000). If it runs too fast, users may not be able to comprehend the information because it overtaxes their cognitive resources (Lang, 2000; Lowe & Boucheix, 2016). In both cases, users may stop viewing. As users can only learn from the video segments they have watched, it is paramount that they are motivated to watch the entire video. Figure 12 shows the guidelines for supporting the users’ handling of the transitory nature of instructional videos.
One way to manipulate the native pace of a video is to vary the speech rate. The speech rate is a words per minute (wpm) count. Although this rate is a rather crude indicator of a video’s speed (Park & Bailey, 2018), it can be helpful for setting the pace by comparing the speech rate to a benchmark number. By and large, a rate of less than 110 wpm is considered slow, a rate of 120–150 wpm moderate, and a rate of more than 160 wpm fast (Dugdale, 2010). An average speech rate should enable the audience to keep track during most of the video.
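As a rough production check, this benchmark can be applied to a narration script before recording. A minimal sketch in Python follows; the cut-offs use the figures cited above, and assigning the unlabeled 110–120 and 150–160 wpm gaps to the nearer band is our own assumption:

```python
def speech_rate(word_count: int, duration_seconds: float) -> float:
    """Words per minute (wpm) of a narration."""
    return word_count / (duration_seconds / 60)

def classify_rate(wpm: float) -> str:
    """Label a speech rate as slow, moderate, or fast.

    Cut-offs follow Dugdale (2010): < 110 wpm slow, 120-150 wpm
    moderate, > 160 wpm fast. The unlabeled gaps (110-120 and
    150-160 wpm) are assigned to the nearer band -- an assumption.
    """
    if wpm < 115:
        return "slow"
    if wpm <= 155:
        return "moderate"
    return "fast"

# A 4-minute video narrated from a 540-word script:
rate = speech_rate(540, 240)   # 135 wpm
label = classify_rate(rate)    # "moderate"
```

A designer could run such a check per segment rather than for the whole video, since the guideline below recommends varying the pace with segment complexity.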
Users can also deal with the fleetingness of an instructional video by manipulating the video’s native playback speed. To give an example, YouTube enables users to play a video at 25%–200% of the original speed. When users can set the speed to match their needs, this may certainly help them process the video content properly. Nevertheless, we recommend that the video pace be somewhat varied, as more complex sections and less experienced users benefit from a slower-than-average presentation, whereas easy sections and more experienced users benefit from an above-average pace.
The user’s processing of an instructional video can also be facilitated by creating clearly demarcated segments. Splitting an instructional video into sections should be done in a meaningful manner. When the segments reflect the structural components or main events in an instructional video, they support the user’s mental model development (Lowe & Boucheix, 2016). Empirical research has shown that segmentation reduces cognitive load and raises learning (Rey et al., 2019).
In experimental research, segmentation is often user-paced which means that the instructional video comes to a full stop after the segment and then needs to be set into motion again by the user. A less disruptive, and in our view a better alternative to built-in stops, is to include a deliberate pause after each segment. A deliberate pause is a brief 2–5 second interruption of the flow of information. During such a pause, no new visual or auditory information is presented. After the pause, the instructional video automatically resumes its course. Empirical research has shown that deliberate pauses facilitate learning through two main effects: first, they buy processing time which allows the learner to reflect, and second, they signal event boundaries to users by demarcating segments (Spanjers et al., 2012).
The designer may also want to signal the segments with timeline markers to facilitate within-video navigation and information search. These markers are section dividers on the timeline that reflect the organizational structure of an instructional video. Therefore, they provide an easy way to browse to and from key points. Empirical research shows that timeline markers can significantly facilitate navigation in an instructional video (Cojean & Jamet, 2017, 2018, 2021; Kim, Guo, et al., 2014). Figure 13 shows an example of timeline markers on a YouTube video called “chapters” on this platform. Video owners can segment their video’s timeline and give segments informative names. Segments are visually separated through small gaps in the timeline. When users hover over a segment, its name and a preview image are shown. Such timeline markers make a search for information within a video easy and convenient for users.
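To illustrate, YouTube builds these chapter markers from a plain list of timestamps in the video description; the list must begin at 0:00, and each entry names the segment that starts at that time. The segment titles below are hypothetical, for a sound editing video:

```
0:00 Introduction
0:40 Selecting a noise sample
2:05 Applying noise reduction
3:30 Checking the result
```

Writing the entries as informative goal statements, rather than generic labels such as “Part 2,” follows the earlier advice on concrete titles.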
The added presence of a visible narrator, usually in the form of a talking head, is becoming a more common feature in instructional videos (Pi et al., 2020). Figure 14 (next page) shows an example of a visible narrator in a sound editing video. The task demonstration and the narrator (bottom-right corner) are shown simultaneously. The primary reason for the inclusion of the narrator is to socialize the user’s experience. Recent studies show that the presence of a narrator during an instructional video can enhance the users’ feelings of social presence and enjoyment (Wang, Antonenko, & Dawson, 2020), and may draw the users’ attention to key interface aspects and thereby benefit learning (Pi et al., 2020; Wang et al., 2020). However, there is also research that found the visible presence of a narrator distracting and without learning benefits (van Wermeskerken, Ravensbergen, & van Gog, 2018). Yet another study reported that while students favored and enjoyed instructional videos with a visible narrator, learning was higher when no narrator was presented (Wilson et al., 2018). In view of these mixed findings, we favor the advice of Guo, Kim, and Rubin (2014), who suggest showing the narrator at opportune times in an instructional video. During task demonstrations, a visible narrator may distract the user’s attention from key changes in the software; these moments, therefore, seem unsuitable. More opportune moments for the visible presence of a narrator are the explanations given before or after a task demonstration. Just as in segmentation, the narrator then alerts the user to the arrival of a new event.
Guideline 9: Review the Task
A review is a brief outline of a task demonstration. As a recap of the main events involved in task performance, a review provides the user with a summary of how task completion is achieved. Thus, it can serve as a frame of reference and as a check for understanding. In addition, a review can boost recall as key information is presented in condensed form. Figure 15 shows the guidelines for facilitating recall of an instructional video.
Effects of reviews in instructional videos for software training have been investigated only recently. In virtually all studies that compared a task demonstration without review (control condition) with a task demonstration that ends with a review (experimental condition), positive effects on learning have been reported. These effects appear to be constrained by task complexity. For simple procedural training tasks involving text processing software, reviews have been found to raise learning significantly (van der Meij, 2017; van der Meij & van der Meij, 2016a, 2016b). Smaller, non-significant effects have been reported for reviews in statistics software training, where learning depends on a combination of acquiring knowledge of procedures, concepts, theories, and formulas (Brar & van der Meij, 2017; van der Meij & Dunkel, 2020). Although the strength of the effect varied, the presence of a review consistently aided learning and is therefore recommended in software training.
Guideline 10: Strengthen Demonstration with Practice
Observing a model performing a software task can induce passive processing and give the user a false impression of their capacity to perform the task. Practice can mitigate this risk. When the user engages in task practice, it may prompt the user to revisit the instructional video or stimulate more active processing of the instructional videos that follow. In addition, practice can consolidate a procedure when it reinforces what the user remembers. In Bandura’s (1986) social learning theory, practice enhances the production process. The main guideline for supporting this process is to facilitate the user’s hands-on experience in completing tasks that resemble the demonstrated performance. Figure 16 shows the guidelines for enhancing the learning of task accomplishment from an instructional video.
In instructor-led contexts, the trainer can organize moments of practice and give users feedback on their performances. No such structured support is possible for instructional videos that users can study anytime and anywhere. Designers have, therefore, looked at other ways of supporting task practice. One of the means for doing so is to give users access to a practice file. This file is sometimes the same as the one used in the instructional video, enabling the user to replicate the task completion shown there. However, a practice file can also be slightly different and designed to optimize the user’s learning from the task experience. In these instances, the practice file tends to be brief so that the user does not need to navigate a long file, and it contains a prototypical instance of what the user needs to change. In this way, the user does not face the additional complexity of having to deal with a slightly variant case.
The important role of practice in learning is widely acknowledged and proven in educational research (e.g., Dunlosky et al., 2013; National Academies of Sciences, 2018). In contrast, software training studies on practice arrangements are few and far between, and unequivocal support for an effect of practice on learning in this domain has proven surprisingly difficult to obtain. Empirical studies have found only moderate effects, and these have also varied with the kind of learning outcomes that were measured (e.g., van der Meij, 2018; van der Meij & Dunkel, 2020; van der Meij, Rensink, & van der Meij, 2018). For experimental reasons, these studies did not allow users to look back at the instructional video once they engaged in task practice, which may have severely reduced the effectiveness of the built-in moments of practice. Even with this important limitation, practice was found to support learning.
The designer can invite users to engage in practice immediately after an instructional video on a task or after showing several instructional videos. The first variant is known as a blocked practice schedule, in which all tasks revolve around the same task type (Broadbent et al., 2017). The second variant is known as random or interleaved practice. The task practice in that schedule involves different task types (Rau, Aleven, & Rummel, 2013). Empirical research on motor learning has revealed that a blocked practice schedule is more advantageous for success during training, while interleaved practice is more beneficial for learning after training (Dunlosky et al., 2013). This phenomenon is explained by the contextual interference effect, whereby the added complexity of having to distinguish task types and their solutions makes practice more difficult but enhances learning afterwards (Rohrer et al., 2020). Recently, a few empirical studies on the two practice schedules have been conducted for software training (Nuketayeva, 2021; Ragazou & Karasavvidis, 2021; van der Meij & Maseland, 2021). These studies showed that users performed better during practice with a blocked than with an interleaved schedule, while no differences between the schedules were found after training. Based on these findings, our recommendation for practice in software training is to enable immediate practice after each task video.
Guideline 11: Occasionally Include Background Music
Two inventory studies on YouTube videos have found that the presence of background music contributed to their popularity (Dascȁlu et al., 2020; ten Hove & van der Meij, 2015). Background music can have a positive effect on the user’s mood, but it can also interfere with learning. Therefore, the music that is selected must fit both the intended emotions and the content of the video. Figure 17 shows the guidelines for enhancing mood states in an instructional video.
Surely, filmmakers would not spend millions of dollars if they did not believe in the impact of background music on the movie experience of the audience. One has only to compare the shower scene from Hitchcock’s Psycho played with or without background music, as demonstrated in the documentary “Score” (Schrader, 2016), to understand the emotional effect of background music on the viewer. Thus, the movie industry may have inspired video designers to include and capitalize on the effect of background music in their instructional videos (Liu & Chen, 2018).
Baddeley’s (2007) model of working memory gives a theoretical account of how the brain processes music. Music automatically gains access to working memory via the phonological loop which holds speech-based and acoustic information. According to Baddeley, music can put people in the right mood; it can energize them. However, Baddeley also warns that the inclusion of music may entail risks. Because music and language are both processed in the phonological loop and thus compete for the same space in working memory, music can supersede the content of a spoken narration and obstruct learning.
It is important to make a distinction between music and background music. There is a considerable body of research on the effects of music before or during task execution. These studies, for example, investigate the effects of music on mood inducement (Putkinen, Makkonen, & Eerola, 2017), exercising (Moss, Enright, & Cushman, 2018), and surgical performance (Fu et al., 2021). In these studies, music plays a primary role in inducing emotions. In contrast, in instructional videos, music plays a secondary role; it supports the content (or should do so). This kind of music is, therefore, called background music (Ansani et al., 2020).
To our knowledge, research on the role of background music in videos is limited and inconclusive. Peters (2021) recently investigated the effects of background music on mood states and learning in a short documentary video on global warming. The participants in the two experimental conditions were exposed to the video with either ominous or uplifting background music while the control group viewed the video without background music. The study expected congruence between the type of background music and the evoked mood states. Only a significant effect of ominous background music on the viewers’ negative mood states was found, however. There was no effect of background music on positive mood states or learning. A recent review study likewise concluded that there is inconclusive evidence of a beneficial effect of background music on learning (de la Mora Velasco & Hirumi, 2020). In addition, the study pointed out that there is a dearth of studies on background music in multimedia learning designs.
In view of the above, we concur with the practical advice from Koumi (2016), who proposes to include only background music that fits the content and to present that music sparingly (e.g., at the beginning and end of a video). Koumi carefully documents his advice, stating clearly that background music should never compete with the spoken narration, and therefore, can best be included in scenes where there is no narration at all.
CONCLUSION & DISCUSSION
The present article has advanced a revised and updated version of the eight guidelines for the design of instructional videos proposed by van der Meij and van der Meij (2013). The new framework consists of 11 main guidelines, plus numerous detailed underlying guidelines for their construction. Each guideline description starts with an argument for its existence. In addition, its theoretical basis is characterized and evidence from empirical research is mentioned.
It should be noted, however, that we have been unable to find empirical studies for some of the guidelines in the framework. This is the case for the main guidelines to support users in finding a suitable instructional video, to use a screencast with narration, and for including four components in presenting procedural discourse. We believe there are good arguments why these guidelines should be adopted in designing instructional videos for software training. These do not constitute empirical proof, however. While it may be hard to obtain such evidence, empirical research on these guidelines can contribute to a better understanding and may reveal whether restricting conditions apply.
Our review of the literature further revealed that sometimes the empirical research did not involve software training but consisted solely of multimedia studies. The most prominent research in the latter field is from Mayer (2014a) who has proposed 12 multimedia principles that have been investigated in numerous empirical studies. While the general findings of these studies are relevant for research on software training, there are also important differences in medium and aims. That is, multimedia research usually does not involve narrated screencasts, and the aim of the instruction is often problem-solving or conceptual knowledge development rather than procedural knowledge development. In short, while the evidence-based multimedia principles may hold for software training, it remains to be established whether they also support software training with instructional videos.
Finally, there are also differences in the strength of the empirical evidence for the guidelines. Some guidelines have received more support than others. For instance, there is substantial proof that the task instructions in a video should be made easy to follow and mimic, and that there should be support for handling the transitory nature of video. In contrast, empirical research is more equivocal for the recommendation to include background music.
The framework presented in this article concentrates on the delivery of information in instructional videos for software training. The guidelines that were presented constitute only part of what is required in design, however. From start to finish, many more development steps and actions are involved. Design-based research distinguishes between four main phases in design that each require a unique set of activities. The first phase should involve an extensive analysis of the content that is to be presented, along with an inventory of the primary and secondary audience characteristics, and an assessment of the context for learning. Next, there should be prototyping and pilot testing followed by the stages of full-fledged development and evaluation (McKenney & Reeves, 2012). For the specific demands involved in the stepwise development of an instructional video, a recent article by Mogull (2021) provides an excellent overview. The article gives extensive information on how to create a project plan and a storyboard, and for constructing a script that precedes the actual creation of an instructional video. The guidelines from the framework presented in the present article provide complementary information that should prove helpful in creating effective instructional videos.
In summary, the 11 guidelines in the framework offer standard solutions to recurrent design issues. They generally provide sound advice, but this advice should not be adopted blindly. Depending on particular audience, domain, or context characteristics, it may be beneficial to deviate from the guidelines, and, in such cases, we encourage designers to do so. As one of our reviewers aptly stated: the guidelines should be considered heuristics rather than standards, allowing for and acknowledging some situational variance.
Adesope, O. O., & Nesbit, J. C. (2012). Verbal redundancy in multimedia learning environments: A meta-analysis. Journal of Educational Psychology, 104(1), 250–263. doi:10.1037/a0026147
Almeida, J., Leite, N. J., & Torres, R. d. S. (2013). Online video summarization on compressed domain. Journal of Visual Communication and Image Representation, 24, 729–738. doi:10.1016/j.jvcir.2012.01.009
Ansani, A., Marin, M., D’Errico, F., & Poggi, I. (2020). How soundtracks shape what we see: Analyzing the influence of music on visual scenes through self-assessment, eye tracking, and pupillometry. Frontiers in Psychology, 11, Article 2242. doi:10.3389/fpsyg.2020.02242
Baddeley, A. D. (2007). Working memory: Thought and action. Oxford, UK: Oxford University Press.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice Hall.
Brand-Gruwel, S., Wopereis, I., & Vermetten, Y. (2005). Information problem solving by experts and novices: Analysis of a complex cognitive skill. Computers in Human Behavior, 21, 487–508. doi:10.1016/j.chb.2004.10.005
Brar, J., & van der Meij, H. (2017). Complex software training: Harnessing and optimizing video instructions. Computers in Human Behavior, 70, 1–11. doi:10.1016/j.chb.2017.01.014
Broadbent, D. P., Causer, J., Williams, A. M., & Ford, P. R. (2017). The role of error processing in the contextual interference effect during the training of perceptual-cognitive skills. Journal of Experimental Psychology: Human Perception and Performance, 43(7), 1329–1342. doi:10.1037/xhp0000375
Butcher, K. R. (2014). The multimedia principle. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 174–205). New York, NY: Cambridge University Press.
Carroll, J. M. (Ed.) (1998). Minimalism beyond the Nurnberg Funnel. Cambridge, MA: MIT Press.
Castro-Alonso, J. C., Wong, R. M., Adesope, O. O., & Paas, F. (2021). Effectiveness of multimedia pedagogical agents predicted by diverse theories: A meta-analysis. Educational Psychology Review, 33(3), 989–1015. doi:10.1007/s10648-020-09587-1
Clark, H. H., & Clark, E. V. (1968). Semantic distinctions and memory for complex sentences. Quarterly Journal of Experimental Psychology, 20, 129–138.
Clark, R. C., & Mayer, R. E. (2016). e-Learning and the science of instruction (4th ed.). Hoboken, NJ: Wiley.
Cockburn, A., Gutwin, C., & Scarr, J. (2014). Supporting novice to expert transition in user interfaces. ACM Computing Surveys, 47(2), Article 31, 1–36. doi:10.1145/2659796
Cohn, N., & Foulsham, T. (2020). Zooming in on the cognitive neuroscience of visual narrative. Brain and Cognition, 146. doi:10.1016/j.bandc.2020.105634
Cojean, S., & Jamet, E. (2017). Facilitating information-seeking activity in instructional videos: The combined effects of micro- and macroscaffolding. Computers in Human Behavior, 74, 294–302. doi:10.1016/j.chb.2017.04.052
Cojean, S., & Jamet, E. (2018). The role of scaffolding in improving information seeking in videos. Journal of Computer Assisted Learning, 34(6), 960–969. doi:10.1111/jcal.12303
Cojean, S., & Jamet, E. (2021). Does an interactive table of contents promote learning from videos? A study of consultation strategies and learning outcomes. British Journal of Educational Technology, 53(2), 269–285. doi:10.1111/bjet.13164
Craig, S. D., & Schroeder, N. L. (2019). Text-to-speech software and learning: Investigating the relevancy of the voice effect. Journal of Educational Computing Research, 57(6), 1534–1548. doi:10.1177/0735633118802877
Cudmore, A., & Slattery, D. M. (2019). An analysis of physical and rhetorical characteristics of videos used to promote technology projects, on the Kickstarter crowdfunding platform. Technical Communication, 66(4), 319–346.
Cui, W., Zheng, J., Lewis, B., Vogel, D., & Bi, X. (2019, May 4–9). HotStrokes: Word-gesture shortcuts on a trackpad [Paper presentation]. The CHI 2019: Proceedings of the 2019 Conference on Human Factors in Computing Systems, Glasgow, UK.
Dascȁlu, C. G., Anthone, M. E., Moscalu, M., & Purcȁrea, V. L. (2020). Study about the YouTube didactic movies features preferred by students in dental medicine [Paper presentation]. The 16th International Scientific Conference on eLearning and Software for Education, Bucharest, Romania.
Davis, R. O., Vincent, J., & Park, T. (2019). Reconsidering the voice principle with non-native language speakers. Computers & Education, 140, 103605. doi:10.1016/j.compedu.2019.103605
de la Mora Velasco, E., & Hirumi, A. (2020). The effects of background music on learning: A systematic review of literature to guide future research and practice. Educational Technology Research and Development, 68(6), 2817–2837. doi:10.1007/s11423-020-09783-4
Dugdale, S. (2010). What’s your speech rate? Developing a flexible speaking rate. Retrieved January 10, 2022 from https://www.write-out-loud.com/speech-rate.html
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58. doi:10.1177/1529100612453266
Eccles, J. S., & Wigfield, A. (2002). Motivation beliefs, values, and goals. Annual Review of Psychology, 53(1), 109–132. doi:10.1146/annurev.psych.53.100901.135153
Espino, J. M. S., Artal, C. G., & Betancor, S. M. G. (2021). Video lectures: An analysis of their useful life span and sustainable production. International Review of Research in Open and Distributed Learning, 22(3), 99–118.
Farkas, D. K. (1999). The logical and rhetorical construction of procedural discourse. Technical Communication, 46, 42–54.
Fishman, E. (2016). How long should your next video be? Retrieved January 10, 2022 from https://wistia.com/learn/marketing/optimal-video-length
FL Studio Guru. (2010, January 14). Noise reduction with edison [Video]. YouTube. https://www.youtube.com/watch?v=Z1CyFNoAWZc
Fu, V. X., Oomens, P., Kleinrensink, V. E. E., Sleurink, K. J., Borst, W. M., Wessels, P. E., & Jeekel, J. (2021). The effect of preferred music on mental workload and laparoscopic surgical performance in a simulated setting (OPTIMISE): A randomized controlled crossover study. Surgical Endoscopy, 35, 5051–5061. doi:10.1007/s00464-020-07987-6
Garrett, N. (2021). Segmentation’s failure to improve software video tutorials. British Journal of Educational Technology, 52(1), 318–336. doi:10.1111/bjet.13000
Ginns, P., Martin, A. J., & Marsh, H. W. (2013). Designing instructional text in a conversational style: A meta-analysis. Educational Psychology Review, 25(4), 445–472. doi:10.1007/s10648-013-9228-0
Grossman, R., Salas, E., Pavlas, D., & Rosen, M. A. (2013). Using instructional features to enhance demonstration-based training in management education. Academy of Management Learning & Education, 12(2), 219–243. doi:10.5465/amle.2011.0527
Guo, P. J., Kim, J., & Rubin, R. (2014, March). How video production affects student engagement: An empirical study of MOOC videos [Paper presentation]. The L@S ‘14: Proceedings of the first ACM conference on Learning @ scale conference, Atlanta, GA.
Gurlitt, J., Dummel, S., Schuster, S., & Nückles, M. (2012). Differently structured advance organizers lead to different initial schemata and learning outcomes. Instructional Science, 40(2), 351–369. doi:10.1007/s11251-011-9180-7
Huff, M., Meitz, T. G. K., & Papenmeier, F. (2014). Changes in situation models modulate processes of event perception in audiovisual narratives. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(5), 1377–1388. doi:10.1037/a0036780
Huxhold, M., & Luther, A. (2013). Produkt- und Lernvideos als ideale Ergänzung zur klassischen Dokumentation. Paper presented at the Tekom, Wiesbaden, Germany.
Johnstone, T., & Scherer, K. R. (2000). Vocal communication of emotion. In M. Lewis & J. Haviland (Eds.), The handbook of emotion (pp. 220–235). New York: Guilford.
Jordan, L. (2013, February 7). Adobe audition cs6 – how to remove noise from a clip [Video]. YouTube. https://www.youtube.com/watch?v=Y-_JGy6fWeY
Käfer, V., Kulesz, D., & Wagner, S. (2017). What is the best way for developers to learn software tools? The Art, Science, and Engineering of Programming, 1(2), Article 17, 11–41.
Kelly, S. L. (2017). First-year students’ research challenges: Does watching videos on common struggles affect students’ self-efficacy? Evidence-Based Library and Information Practice, 12(4), 158–172.
Kim, J., Guo, P. J., Seaton, D. T., Mitros, P., Gajos, K. Z., & Miller, R. C. (2014, March 4–5). Understanding in-video dropouts and interaction peaks in online lecture videos [Paper presentation]. The L@S ‘14: Proceedings of the first ACM conference on Learning @ scale conference, Atlanta, GA.
Kim, J., Nguyen, P., Weir, S., Guo, H. J., Miller, R. C., & Gajos, K. Z. (2014, April 26–May 1). Crowdsourcing step-by-step information extraction to enhance existing how-to videos. [Paper presentation]. CHI ‘14: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada.
Kokoç, M., Ilgaz, H., & Altun, A. (2020). Effects of sustained attention and video lecture types on learning performances. Educational Technology Research & Development, 68(6), 3015–3039. doi:10.1007/s11423-020-09829-7
Koumi, J. (2013). Pedagogic design guidelines for multimedia materials: A call for collaboration between practitioners and researchers. Journal of Visual Literacy, 32(2), 85–114. doi:10.1080/23796529.2013.11674711
Koumi, J. (2016). Tutorial – Pedagogic video design principles. Retrieved January 10, 2022 from https://www.academia.edu/40074782/Pedagogic_Video_Design_Principles
Lang, A. (2000). The limited capacity model of mediated message processing. Journal of Communication, 50(1), 46–70. doi:10.1111/j.1460-2466.2000.tb02833.x
Lazonder, A. W. (1994). Minimalist computer documentation. A study on constructive and corrective skills development [Doctoral thesis]. University of Twente, Enschede, the Netherlands.
Lazonder, A. W., & van der Meij, H. (1995). Error-information in tutorial documentation: Supporting users’ errors to facilitate initial skill learning. International Journal of Human Computer Studies, 42, 185–206.
Lippmann, M., Schwartz, N. H., Jacobson, N. G., & Narciss, S. (2019). The concreteness of title affects metacognition and study motivation. Instructional Science, 47(3), 257–277. doi:10.1007/s11251-018-9478-9
Liu, C.-L., & Chen, Y.-C. (2018). Background music recommendation based on latent factors and moods. Knowledge-Based Systems, 159, 158–170. doi:10.1016/j.knosys.2018.07.001
Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal and task motivation. American Psychologist, 57(9), 705–717. doi:10.1037//0003-066X.57.9.705
Lowe, R. K., & Boucheix, J. (2016). Principled animation design improves comprehension of complex dynamics. Learning and Instruction, 45, 72–84. doi:10.1016/j.learninstruc.2016.06.005
MagicalFruitTuts. (2013, May 21). How to improve a low resolution photo [Video]. YouTube. https://www.youtube.com/watch?v=CzFDKV9FDJg
Mayer, R. E. (1979). Can advance organizers influence meaningful learning? Review of Educational Research, 49(2), 371–383.
Mayer, R. E. (2014a). The Cambridge handbook of multimedia learning (2nd ed.). New York, NY: Cambridge University Press.
Mayer, R. E. (2014b). Principles based on social cues in multimedia learning: Personalization, voice, image, and embodiment principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 345–368). New York, NY: Cambridge University Press.
Mayer, R. E., & Fiorella, L. (2014). Principles for reducing extraneous processing in multimedia learning: Coherence, signaling, redundancy, spatial contiguity, and temporal contiguity principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 279–315). New York, NY: Cambridge University Press.
Mayer, R. E., & Pilegard, C. (2014). Principles for managing essential processing in multimedia learning: Segmenting, pre-training, and modality principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 316–344). New York, NY: Cambridge University Press.
McKenney, S. E., & Reeves, T. C. (2012). Conducting educational design research. London: Routledge.
Mogull, S. A. (2021). Developing technical videos: Genres (or “templates”) for video planning, storyboarding, scriptwriting, and production. Technical Communication, 68(3), 56–75.
Moss, S. L., Enright, K., & Cushman, S. (2018). The influence of music genre on explosive power, repetitions to failure and mood responses during resistance exercise. Psychology of Sport & Exercise, 37, 128–138. doi:10.1016/j.psychsport.2018.05.002
National Academies of Sciences, Engineering, and Medicine. (2018). How people learn II: Learners, contexts, and cultures. Washington, DC: The National Academies Press.
Nuketayeva, K. (2021). Schedule of practice matters. Does it matter for video-based software training? [Master’s thesis]. University of Twente, Enschede, the Netherlands.
Park, B., & Bailey, R. L. (2018). Application of information introduced to dynamic message processing and enjoyment. Journal of Media Psychology, 30(4), 196–206. doi:10.1027/1864-1105/a000195
Peters, M. (2021). The effect of background music in documentaries on viewers’ mood states, risk perception and retention of content [Master’s thesis]. University of Twente, Enschede, the Netherlands.
Pi, Z., Xu, K., Liu, C., & Yang, J. (2020). Instructor presence in video lectures: Eye gaze matters, but not body orientation. Computers & Education, 144. doi:10.1016/j.compedu.2019.103713
Plaisant, C., & Shneiderman, B. (2005). Show me! Guidelines for recorded demonstration [Paper presentation]. The 2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC’05), Dallas, Texas. http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=2005-02
Putkinen, V., Makkonen, T., & Eerola, T. (2017). Music-induced positive mood broadens the scope of auditory attention. Social Cognitive and Affective Neuroscience, 12(7), 1159–1168. doi:10.1093/scan/nsx038
Ragazou, V., & Karasavvidis, I. (2021). The effects of blocked and massed practice opportunities on learning software applications with video tutorials. Journal of Computers in Education, 9(2), 173–193. doi:10.1007/s40692-021-00198-5
Randhave, G. K., Schachak, A., Courtney, K. L., & Kushniruk, A. (2019). Evaluating a post-implementation electronic medical record training intervention for diabetes management in primary care. BMJ Health and Care Informatics, 26(e100086). doi:10.1136/bmjhci-2019-100086
Rau, M. A., Aleven, V., & Rummel, N. (2013). Interleaved practice in multi-dimensional learning tasks: Which dimension should we interleave? Learning and Instruction, 23, 98–114. doi:10.1016/j.learninstruc.2012.07.003
Rey, G. D., Beege, M., Nebel, S., Wirzberger, M., Schmitt, T. H., & Schneider, S. (2019). A meta-analysis of the segmenting effect. Educational Psychology Review, 31, 389–419. doi:10.1007/s10648-018-9456-4
Richter, J., Scheiter, K., & Eitel, A. (2016). Signaling text-picture relations in multimedia learning: A comprehensive meta-analysis. Educational Research Review, 17, 19–36. doi:10.1016/j.edurev.2015.12.003
Rohrer, D., Dedrick, R. F., Hartwig, M. K., & Cheung, C.-H. (2020). A randomized controlled trial of interleaved mathematics practice. Journal of Educational Psychology, 112(1), 40–52. doi:10.1037/edu0000367
Roohani, A., Jafarpour, A., & Zarei, S. (2015). Effects of visualisation and advance organisers in reading multimedia-based texts. The Southeast Asia Journal of English Language Studies, 21(2), 47–62.
Schneider, S., Beege, M., Nebel, S., & Rey, G. D. (2018). A meta-analysis of how signaling affects learning with media. Educational Research Review, 23, 1–24. doi:10.1016/j.edurev.2017.11.001
Schrader, M. (2016). Score: A film music documentary [Documentary].
Shoufan, A. (2019). What motivates university students to like or dislike an educational online video? A sentimental framework. Computers & Education, 134, 132–144. doi:10.1016/j.compedu.2019.02.008
Spanjers, I. A. E., van Gog, T., Wouters, P., & van Merriënboer, J. J. G. (2012). Explaining the segmentation effect in learning from animations: The role of pausing and temporal cueing. Computers & Education, 59(2), 274–280. doi:10.1016/j.compedu.2011.12.024
Stratvert, K. (n.d.). How to add chapters to a YouTube video [Video]. YouTube. https://www.youtube.com/watch?v=8OeETNVoO94
ten Hove, P., & van der Meij, H. (2015). Like it or not. What characterizes YouTube’s more popular videos? Technical Communication, 62(1), 48–62.
Teng, M. F. (2020). Vocabulary learning through videos: Captions, advance-organizer strategy, and their combination. Computer Assisted Language Learning, 1–33. doi:10.1080/09588221.2020.1720253
Tversky, B., Bauer-Morrison, J., & Bétrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57(4), 247–262. doi:10.1006/ijhc.2002.1017
van der Meij, H. (2014). Developing and testing a video tutorial for software learning. Technical Communication, 61(2), 110–122.
van der Meij, H. (2017). Reviews in instructional video. Computers & Education, 114, 164–174. doi:10.1016/j.compedu.2017.07.002
van der Meij, H. (2018). Cognitive and motivational effects of practice with videos for software training. Technical Communication, 65(3), 265–279.
van der Meij, H. (2019). Advance organizers in videos for software training of Chinese students. British Journal of Educational Technology, 50(3), 1368–1380. doi:10.1111/bjet.12619
van der Meij, H., Blijleven, P., & Jansen, L. (2003). What makes up a procedure? In M. J. Albers & B. Mazur (Eds.), Content & complexity. Information design in technical communication (pp. 129–186). Mahwah, NJ: Erlbaum.
van der Meij, H., & Carroll, J. M. (1998). Principles and heuristics for designing minimalist instruction. In J. M. Carroll (Ed.), Minimalism beyond the Nurnberg Funnel (pp. 19–53). Cambridge, MA: MIT Press.
van der Meij, H., & Dunkel, P. (2020). Effects of a review video and practice in video-based statistics training. Computers & Education, 143. doi:10.1016/j.compedu.2019.103665
van der Meij, H., & Flacke, M. L. (2020). A review on error-inclusive approaches to software documentation and training. Technical Communication, 67(1), 83–95.
van der Meij, H., & Gellevij, M. R. M. (2004). The four components of a procedure. IEEE Transactions on Professional Communication, 47(1), 5–14. doi:10.1109/TPC.2004.824292
van der Meij, H., & Maseland, J. (2021). Practice schedules in a video-based software training arrangement. Social Sciences & Humanities Open, 3(1), Article 100133. doi:10.1016/j.ssaho.2021.100133
van der Meij, H., Rensink, I., & van der Meij, J. (2018). Effects of practice with videos for software training. Computers in Human Behavior, 89, 439–445. doi:10.1016/j.chb.2017.11.029
van der Meij, H., & van der Meij, J. (2013). Eight guidelines for the design of instructional videos for software training. Technical Communication, 60(3), 205–228.
van der Meij, H., & van der Meij, J. (2016a). Demonstration-Based Training (DBT) for the design of a video tutorial for software instructions. Instructional Science, 44, 527–542. doi:10.1007/s11251-016-9394-9
van der Meij, H., & van der Meij, J. (2016b). The effects of reviews in video tutorials. Journal of Computer Assisted Learning, 32(4), 332–344. doi:10.1111/jcal.12136
van Gog, T. (2014). The signaling (or cueing) principle in multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 263–278). New York, NY: Cambridge University Press.
van Merriënboer, J. J. G., & Kester, L. (2014). The four-component instructional design model: Multimedia principles in environments for complex learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (2nd ed., pp. 104–148). New York, NY: Cambridge University Press.
van Wermeskerken, M., Ravensbergen, S., & van Gog, T. (2018). Effects of instructor presence in video modeling examples on attention and learning. Computers in Human Behavior, 89, 430–438. doi:10.1016/j.chb.2017.11.038
Wang, J., Antonenko, P. D., & Dawson, K. (2020). Does visual attention to the instructor in online video affect learning and learner perceptions? An eye-tracking analysis. Computers & Education, 146. doi:10.1016/j.compedu.2019.103779
Wang, J., Antonenko, P. D., Keil, A., & Dawson, K. (2020). Converging subjective and psychophysiological measures of cognitive load to study the effects of instructor-present video. Mind, Brain, and Education, 14(3), 279–291.
Wilson, K. E., Martinez, M., Mills, C., D’Mello, S., Smilek, D., & Riso, E. F. (2018). Instructor presence effect: Liking does not always lead to learning. Computers & Education, 122, 205–220. doi:10.1016/j.compedu.2018.03.011
Wondershare Filmora Video Editor. (2016, September 16). How to remove background noise from audio/video in filmora [Video]. YouTube.
Yue, C. L., Bjork, E. L., & Bjork, R. A. (2013). Reducing verbal redundancy in multimedia learning: An undesired desirable difficulty? Journal of Educational Psychology, 105(2), 266–277. doi:10.1037/a0031971
ABOUT THE AUTHORS
Hans van der Meij is a senior researcher and lecturer in instructional design & technology at the University of Twente in the Netherlands. His research interests are technical documentation (e.g., minimalism, instructional video) and instructional technology. He has received several awards for his articles, including an IEEE “Landmark Paper” award for a publication on minimalism (with John Carroll). He is available at: h.vanderMeij@utwente.nl.
Constanze Hopfner graduated from the University of Twente in 2020 with a master’s degree in educational science and technology. She currently works as a content specialist for a learning management system at Instructure. Her research interests include instructional video and technology, and public speaking skills. She is available at email@example.com and https://www.linkedin.com/in/constanzehopfner/.