Developing a program that produces lasting learning can require some incredibly nuanced decision-making. From planning to delivery to assessment, each step is a minefield of complexly interwoven (yet often contradictory) philosophies and research. So it’s no surprise that, when a technique does get close to something like universal acceptance, we tend to latch on to it a little too tightly. And, perhaps, breathe a small sigh of relief.

Streaming video has been just such a tool. The literature on the topic, which has historically been based on student self-reporting and outcome achievement comparisons, was comfortably consistent. Students told us that they wanted video, they responded favorably to having access to video (Chen, Lambert, and Guidry, 2010), and they generally seemed to do better when video was there (Demetriadis and Pombortsis, 2007).

But this was always an incomplete picture. Self-reporting is a woefully imperfect way to gather behavioral information, for one thing. How we act, how we perceive our actions, and how we report them to others are three independent variables that do not really enforce any consistency or accuracy on one another. Similarly, outcome improvements associated with video remain inconsistent, and quiz score comparisons can never really tell us how or why one video succeeded where another failed.

We've certainly had reason for faith where video is concerned. It remains one of the most provocative communication tools available to trainers and designers, and among the most appreciated by students. But we have also lacked key insights about what actually happens once the video is released, and we have had to fill many of those insight gaps with assumptions.

Diagnosis in progress

Enter the era of analytics.

Our ability to track learner behaviors may still be in its relative infancy, but like an infant it has grown significantly in a very short time. As it has grown, it has begun to uncover natural behavior patterns and study habits among online learners that will eventually help us understand what the student experience truly is, and how to best use our instructional tools.
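To make “tracking learner behaviors” a little more concrete, here is a minimal sketch, in Python, of the kind of simple aggregation an analytics pipeline might perform on playback logs. The event format, field names, and numbers are hypothetical illustrations, not data or code from any platform or study cited in this article.

from collections import defaultdict

# Hypothetical playback events: (student_id, video_id, seconds_watched).
# Real platforms log far richer detail (seeks, pauses, playback speed, and so on).
events = [
    ("s01", "lecture_01", 412),
    ("s01", "lecture_02", 35),
    ("s02", "lecture_01", 8),
    ("s03", "lecture_03", 600),
]

videos_available = {"lecture_01", "lecture_02", "lecture_03"}

# How many of the available videos did each student open at all?
videos_opened = defaultdict(set)
for student, video, seconds_watched in events:
    videos_opened[student].add(video)

for student, opened in sorted(videos_opened.items()):
    share = len(opened) / len(videos_available)
    print(f"{student} opened {share:.0%} of the available videos")

Even a toy aggregation like this surfaces the kind of adoption question self-reporting cannot answer: not whether students say they use video, but how much of it they actually open.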

In the case of video, we’re beginning to see a much more complex pattern of learner adoption than we previously (reasonably) assumed. And far from forming a certain, direct connection between the instruction we develop and the students’ need to learn, video appears to come with just as large a set of “if” and “when” caveats as any other instructional tool.

Consider the example findings below, which often clash significantly with self-reporting studies of video adoption.

Sample case studies

1. A three-year study of streaming video adoption completed at an Illinois medical school (McNulty et al., 2011) tracked three student cohorts across five different courses. The figures they reported were surprising: 64 percent of students viewed less than 10 percent of the lectures available to them. In fact, the study credited fewer than one student in 20 with viewing a “large number” of videos per course.

2. A study comparing system data logging to the self-reported use of streaming videos by more than 5,000 students at two separate universities in the Netherlands (Gorissen, Van Bruggen, and Jochems, 2012) found marked discrepancies between the amount of video students reported viewing and their actual usage. In fact, most students in the study never watched a full recorded lecture and, in one course that featured 34 videos, the most that any one user viewed was 20.

3. An evaluation of how instructor-developed video impacts student satisfaction levels unintentionally uncovered an interesting trend: The students who accessed the videos were markedly less likely than their peers to download the other instructional materials in the course, even though the other files contained unique content that was not available in the videos (Draus, Curran, and Trempus, 2014).

4. In 2014, researchers tracking learner view patterns and interaction peaks (Kim et al., 2014) found that more than 53 percent of student viewers exited a five-minute instructional video early, with most of those dropouts occurring in the first half of the video. The percentage of dropouts rose as video length increased, climbing as high as 71 percent for videos over twenty minutes long. Re-watchers showed even higher dropout rates, as did learners accessing step-by-step tutorials as opposed to lectures. This may support the idea that students go in pursuit of the information they think they need (such as a visual example of how to complete the third step of a four-step process), when they think they need it, rather than perceiving a complete instructional video as essential viewing.

5. A massive review of edX courses (Guo, Kim, and Rubin, 2014) involving data from more than 127,000 students found that even the shortest videos (zero to three minutes in length) lost a fourth of all viewers within the first 75 percent of the video’s runtime, meaning a large number of learners would have missed a substantial portion of the instruction even when the attention request was minuscule. Once again, the numbers were even more pronounced for longer videos. These figures are particularly telling, as the authors excluded any session lasting under five seconds to prevent their data from being skewed by accidental plays. (A rough sketch of this type of calculation appears after this list.)

6. Another MOOC-based review of a large MITx course (Seaton et al., 2014) found that nearly a fourth of all successful certificate earners accessed less than 20 percent of course videos.
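For readers who want to see the arithmetic behind findings like numbers 4 and 5, here is a minimal sketch, in Python, of an early-dropout calculation over watch-session logs. The session data and variable names are invented, and while the two rules it applies (discarding sessions under five seconds, and treating an exit before 75 percent of the runtime as “early”) come from the descriptions above, the code itself is an illustrative assumption, not the published analysis from either study.

# Hypothetical watch sessions: (video_length_s, seconds_watched).
# All numbers are made up for illustration only.
sessions = [
    (180, 3),    # under 5 seconds: treated as an accidental play
    (180, 40),
    (180, 170),
    (180, 95),
    (300, 4),    # accidental play
    (300, 290),
    (300, 120),
]

MIN_SESSION_S = 5           # discard likely accidental plays
EARLY_EXIT_FRACTION = 0.75  # "early" = leaving before 75% of the runtime

valid = [(length, watched) for length, watched in sessions
         if watched >= MIN_SESSION_S]
early_exits = [1 for length, watched in valid
               if watched < EARLY_EXIT_FRACTION * length]

dropout_rate = len(early_exits) / len(valid)
print(f"{len(valid)} valid sessions, early-dropout rate: {dropout_rate:.0%}")

Real analyses go on to segment rates like this by video length, viewer type (first-time versus re-watcher), and content format, which is where the patterns described above begin to emerge.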

Lessons learned

Again, any discussion of eLearning built on user data must rest on the understanding that this is only the beginning. It is still too early for developers to draw broad conclusions. Even so, the growing collection of research does begin to make video look a lot like every other tool in our toolkit: there are conditions under which it is effective, and conditions under which it is definitely not. Fully detailing and understanding those conditions will take time, and will eventually help us see how variables like program type, subject area, and learner background influence video adoption rates.

Until then, because video can be expensive relative to other eLearning development work, we can’t help wanting to glean some early insights from what we’ve read. At the very least, by following the principles below, we can avoid committing costly resources to circumstances that are unlikely to provide an acceptable educational return.

Principles for the future

Do… expect students to respond favorably to the inclusion of streaming video.
Don’t… treat favorable learner response as proof that students are actually watching the videos.

Do… leverage assessments and assignments as tools to help you encourage video adoption.
Don’t… ask students to view a video without an immediate follow-up activity or purpose.

Do… make videos accessible to students during review periods, and tag them so that content is easy to locate outside of its original instructional context.
Don’t… discount other development tools when planning instruction. The educational ROI on video may not always justify its use as a front-end instructional tool. Consider your content and objectives.

Do… keep content videos short (three to five minutes) whenever it is appropriate to do so.
Don’t… assume that a short video length will be enough to guarantee full student adoption.

Do… make it clear to students what content is covered in each of the available educational resources.
Don’t… assume that students will recognize that other files or media in a lesson will contain unique, essential information not found in the video.

Works cited

Chen, P.-S. D., Amber D. Lambert, and Kevin R. Guidry. “Engaging Online Learners: The Impact of Web-based Learning Technology on College Student Engagement.” Computers & Education 54.4. 2010.

Demetriadis, Stavros, and Andreas Pombortsis. “e-Lectures for Flexible Learning: A Study on their Learning Efficiency.” Educational Technology & Society 10. 2007.

Draus, Peter J., Michael J. Curran, and Melinda S. Trempus. “The Influence of Instructor-generated Video Content on Student Satisfaction With and Engagement in Asynchronous Online Classes.” Journal of Online Learning and Teaching 10.2. 2014.

Gorissen, Pierre, Jan Van Bruggen, and Wim Jochems. “Usage Reporting on Recorded Lectures Using Educational Data Mining.” International Journal of Learning Technology 7.1. 2012.

Guo, Philip J., Juho Kim, and Rob Rubin. “How Video Production Affects Student Engagement: An Empirical Study of MOOC Videos.” Proceedings of the First ACM Conference on Learning at Scale. ACM, 2014.

Kim, Juho, et al. “Understanding In-Video Dropouts and Interaction Peaks in Online Lecture Videos.” Proceedings of the First ACM Conference on Learning at Scale. ACM, 2014.

McNulty, John A., et al. “A Three-year Study of Lecture Multimedia Utilization in the Medical Curriculum: Associations with Performances in the Basic Sciences.” Medical Science Educator 21.1. 2011.

Seaton, Daniel T., et al. “Who Does What in a Massive Open Online Course?” Communications of the ACM 57.4. 2014.