Khan-style video


Planning. Pre-production of the drawn example video was not limited by time constraints: the narrator had time to focus on content creation, and the resulting script was followed diligently. The script was validated by experts in the field of microeconomics. The students were guided comprehensively through a complicated formula without making any assumptions about their prior knowledge. Several takes were recorded and reflected upon; the output and information were evaluated by the author and the lecturer of a market demand class with the goal of creating a compact and informative package.
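The formula itself is not reproduced here. Purely as a hedged illustration of the kind of expression such a market demand walkthrough typically builds up step by step (the actual formula used in the video may differ), market demand can be written as the horizontal summation of individual demands:

\[ Q_D(p) = \sum_{i=1}^{n} q_i(p), \qquad \text{e.g. } q_i(p) = a_i - b_i p \;\Rightarrow\; Q_D(p) = \sum_{i=1}^{n} a_i - p \sum_{i=1}^{n} b_i, \]

where \(q_i(p)\) is the quantity demanded by consumer \(i\) at price \(p\); the linear form is assumed only for the sake of the example.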

Method. The “Khan-style” method was used, as it is considered suitable for step-by-step walkthroughs and problem solving (Guo et al. 2014). The problem-solving scenario complemented the topic discussion video, acting as a funnel-type transition from the broad description given there to the specifics of the formula.

Control. As with the topic discussion video, control was provided by the micro-level activities (Merkt et al. 2011) of the YouTube multimedia player.

Segmentation. The video was separate from the topic discussion video and thus provided viewers with a distinct gap between the two (Clark and Mayer 2008). The duration of the video was five minutes and thirty seconds, thus conforming to the duration guideline suggested by Guo et al. (2014).

Visual elements. The use of graphics was largely determined by the “Khan-style” method. To remain true to the method, it was necessary to avoid complicated graphical elements, although augmentations to the method, such as picture-in-picture, might have been beneficial in increasing student engagement. Relational graphics (Mayer and Moreno 2003) were nevertheless used to depict relations between quantitative elements, in the form of an empty graph with price on the vertical axis and quantity on the horizontal axis. This was done because no extra value was perceived to be gained from drawing the graph free-hand; the problems caused by the messiness of free-hand writing are discussed extensively by Cross et al. (2013). Highlighting (Paik and Schraw 2013; Mayer 2005; Atkinson 2002; Craig et al. 2002; Jeung et al. 1997) was used in the form of a pointer: for example, when the narrator was talking about a specific part of the drawings, the pointer hovered over that portion, reducing the scanning required.

Audio. The narration was provided in a conversational style (Clark and Mayer 2008; Beck et al. 1996), paced to the speed of the drawing (Guo et al. 2014; Calandra et al. 2008). The tone of the narration (Calandra et al. 2008) was designed to be polite (Wang et al. 2008) and enthusiastic (Guo et al. 2014). Continuous writing motion and tight cuts between segments within the video aimed at a high words-per-minute pace to keep the viewers engaged (Guo et al. 2014). Audio quality (Reeves and Nass 1996) was ensured by professional audio recording equipment. However, the narrator was not a professional speaker and does not consider himself a spokesperson in general. Furthermore, the narrator was not a native speaker, contrary to what is suggested by Atkinson et al. (2005) and Mayer et al. (2003). Thus, it can be assumed that the level of the verbal output affected agreeableness negatively.

Hardware and software details. The technical execution of the drawn video was done using a Wacom Bamboo pen tablet. The drawing was done on the tablet and recorded on a computer, with Adobe Photoshop used as the drawing board and Adobe Captivate used to record the screen. The audio was recorded using a Zoom H5 audio recorder with a Zoom SGH-6 Shotgun Microphone Capsule. Post-processing was done with Adobe Premiere, which was used to cut and combine the video and audio.

 

References:

Atkinson, R. 2002. Optimizing Learning From Examples Using Animated Pedagogical Agents. Journal of Educational Psychology, 94, 416–427.

Beck, I., McKeown, M.G., Sandora, C., Kucan, L. and Worthy, J. 1996. Questioning the author: A year-long classroom implementation to engage students in text. Elementary School Journal, 96, 385–414.

Calandra, B., Barron, A. and Thompson-Sellers, I. 2008. Audio Use in E-Learning: What, Why, When, and How? International Journal on E-Learning, 7(4), 589-601.

Clark, R. C., and Mayer, R. E. 2008. E-Learning and the science of instruction. San Francisco: Pfeiffer.

Craig, S. D., Gholson, B. and Driscoll, D. M. 2002. Animated Pedagogical Agents in Multimedia Educational Environments: Effects of Agent Properties, Picture Features, and Redundancy. Journal of Educational Psychology, 94, 428-434.

Cross, A., Bayyapunedi, M., Cutrell, E., Agarwal, A., and Thies, W. 2013. TypeRighting: Combining the Benefits of Handwriting and Typeface in Online Educational Videos. CHI ’13, ACM (New York, NY, USA, 2013).

Guo, P., Kim, J. and Rubin, R. 2014. How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos. http://pgbovine.net/publications/edX-MOOC-video-production-and-engagement_LAS-2014.pdf (accessed 29.4.2015).

Jeung, H., Chandler, P. and Sweller, J. 1997. The role of visual indicators in dual sensory mode instruction. Educational Psychology, 17, 329-343.

Mayer, R. 2005. The Cambridge handbook of multimedia learning. Cambridge, U.K.; New York: Cambridge University Press.

Mayer, R. and Moreno, R. 2003. Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43-52.

Merkt, M., Weigand, S., Heier, A. and Schwan, S. 2011. Learning with videos vs. learning with print: The role of interactive features. Learning and Instruction, 21(6), 687-704.

Paik, E. and Schraw, G. 2013. Learning with Animation and the Illusion of Understanding. Journal of Educational Psychology, 105, 278-289.

Wang, N., Johnson, W., Mayer, R., Rizzo, P., Shaw, E. and Collins, H. 2008. The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66(2), 98-112.