Take any e-Lesson — show it to five people and ask them what they think. My bet is you will get five different opinions about the quality of the courseware. But, wait! What if the five reviewers are educational “experts” — specialists with advanced degrees in training and education? Now you might expect a greater consensus. Based on my experience over the past three years reviewing courses with experts, I predict a little more agreement; but it’s not likely to be anything close to a consensus.
Unlike classroom training, e-Learning is very visible. While much of the classroom experience is packaged in the instructor, and in fact varies from class to class, you can easily see and hear all elements of e-Learning. Everything from screen color to content accuracy to the types of practices is readily available for scrutiny. I believe that this high visibility will prove to be a good thing. With this much more accessible instructional environment, we will be able to more readily identify effective and ineffective training. But to do so, we have to move beyond a reliance on end-user (or even expert) opinions. After a year of work on a commission tasked to identify the qualities of effective e-Learning, and hearing a great deal of (often contradictory) views, I decided I needed fewer opinions and more data.
Decisions about e-Learning courseware must begin with an understanding of how the mind works during learning and of what research data tell us about the factors that lead to learning. Naturally, factors other than psychological effectiveness come into play in your multimedia learning decisions. For example, instructional strategies will be shaped by parameters of the technology, such as bandwidth and hardware, and by environmental factors such as budget, time, and organizational culture.
What is e-Learning?
Since the term e-Learning is used inconsistently, let’s start with a basic definition. For the purposes of this discussion, e-Learning is content and instructional methods delivered on a computer (whether on CD-ROM, the Internet, or an intranet), and designed to build knowledge and skills related to individual or organizational goals. This definition addresses:
The what: training delivered in digital form,
The how: content and instructional methods to help learn the content, and
The why: to improve organizational performance by building job-relevant knowledge and skills in workers.
In this article, the main focus is business self-study courseware that may include synchronous or asynchronous communication options, and the examples are drawn from that domain. For example, the screen in Figure 1 is part of a Web-delivered course designed to teach the use of software called Dreamweaver to create Web pages. The main content is the steps needed to perform this particular task with Dreamweaver. The instructional methods include a demonstration of how to perform the steps along with an opportunity to practice and get feedback on your accuracy.
Figure 1 Practice exercise from an e-Lesson on Dreamweaver. With permission from Element K.
In spite of optimistic projections of the positive impact of technology on learning, the reality has not lived up to expectations. From film to the Internet, each new wave of technology has stimulated prospects of revolutions in learning. But research comparing learning from one medium, such as the classroom, with another, such as the Internet, generally fails to demonstrate significant advantages for any particular technology. These repeated failures lead us to abandon a technology-centered approach to learning in favor of a learner-centered approach. Having participated in many poor training sessions both in the classroom and on the computer, we recognize that it is not the medium that causes learning. Rather, it is the design of the lesson itself and the best use of instructional methods that make the difference. A learner-centered approach suggests that we design lessons that accommodate human learning processes regardless of the media involved. To apply that approach, it helps to distinguish three important elements of an e-Lesson: the instructional methods, the instructional media, and the media elements.
Instructional methods are the techniques used to help learners process new information in ways that lead to learning; they include examples, practice exercises, simulations, and analogies.
Instructional media are the delivery agents that contain the content and the instructional methods including computers, workbooks, and even instructors. Not all media can carry all instructional methods with equal effectiveness. For each new technology that appears on the scene, we typically start by treating it like older media with which we are familiar. For example, much early Web-based training looked a lot like books — mostly using text on a screen to communicate content. As the technology behind a given medium matures, we get better at exploiting the features unique to that medium for learning.
A third component of multimedia learning is the media elements. The media elements are the text, graphics, and audio used to present content and instructional methods. For example, in the Dreamweaver screen shown in Figure 1, the content is the steps needed to perform the particular task that is the focus of the lesson. The instructional methods include a demonstration and simulation practice with feedback. The media elements include a graphic of the screen and, during the demonstration, audio narration that explains the steps seen in the animation.
For the past ten years, Richard Mayer and his colleagues at the University of California at Santa Barbara have conducted a series of controlled experiments on how best to use audio, text, and graphics to optimize learning in multimedia. Six media element principles can be defined based on Mayer’s work. What follows is a summary of these principles along with supporting examples, psychological rationale, and research. Use this information as guidelines on the benefits of graphics, the placement of text and graphics on the screen, and the best way to present words that describe graphics, among other issues.
The multimedia principle: adding graphics to words can improve learning.
By graphics we refer to a variety of illustrations, including still graphics such as line drawings, charts, and photographs, and motion graphics such as animation and video. Research has shown that graphics can improve learning. The trick is to use illustrations that are congruent with the instructional message. Images added for entertainment or dramatic value not only fail to improve learning but can actually depress it (see the coherence principle below).
Mayer compared learning about various mechanical and scientific processes, including how a bicycle pump works and how lightning forms, from lessons that used words alone and lessons that used words and pictures (including still graphics and animations). In most cases he found much improved understanding when pictures were included. In fact, he found an average gain of 89% on transfer tests from learners who studied lessons with text and graphics compared to learners whose lessons were limited to text alone. We therefore have empirical evidence that should discourage the use of screen after screen of text as a learning environment. However, not all pictures are equally effective, and we will need more principles to see how best to use visuals to promote learning.
Learning occurs through the encoding of new information into permanent memory, called long-term memory. According to a theory called Dual Encoding, content communicated with both text and graphics sends two codes: a verbal code and a visual code. Having two opportunities for encoding into long-term memory increases learning.
While graphics can boost learning, it is important to select the kind of graphic that is congruent with the text and with the learning goal. As I’ll discuss below, graphics that are irrelevant or gratuitous actually depress learning. Consider selecting your graphics based on the type of content you are teaching. Table 1 summarizes some graphics that work well to illustrate five key content types: facts, concepts, processes, procedures, and principles. Processes, for example, are effectively illustrated by animations or by still graphics that show change through arrows. Figure 2 shows an effective illustration of a process in e-Learning.
Figure 2 e-Learning illustrating a biological process.
The contiguity principle: placing text near graphics improves learning.
Contiguity refers to the alignment of graphics and text on the screen. Often in e-Learning, when a scrolling screen is used, the words are placed at the top and the illustration under them, so that when you can see the text you cannot see the graphic, and vice versa. This is a common violation of the contiguity principle, which states that graphics and the text related to them should be placed close to each other on the screen.
Mayer compared learning about the science topics described above in versions where text was placed separate from the visuals and versions where text was integrated on the screen near the visuals. The visuals and text were identical in both versions. He found the integrated versions more effective: in five out of five studies, screens that integrated words near the visuals yielded an average improvement of 68%.
Learning occurs in humans by way of working memory, the active part of our memory system. You have probably heard of “seven plus or minus two.” This refers to the severe limits on working memory, which can hold only about seven (plus or minus two) facts or items at a time.
Since working memory capacity is needed for learning to occur, when working memory becomes overloaded, learning is depressed. If words and the visuals they describe are separate from each other, the learner needs to expend extra cognitive resources to integrate them. In contrast, in materials in which the words and graphics are placed contiguously, the integration is done for the learner. Therefore the learner is free to spend those scarce cognitive resources on learning.
As mentioned above, scrolling screens sometimes violate the contiguity principle by separating text and related visuals. But it is not the scrolling screen itself that is to blame. One way to use scrolling screens effectively is to embed smaller graphics on the screen with related text close by. For example, a screen from my online design course is shown in Figure 3. You can see that the visual has been reduced in size and placed near the text.
Figure 3 An example of application of the contiguity principle.
The modality principle: explaining graphics with audio improves learning.
If you have the technical capability to deliver audio, using it to explain graphics can substantially improve learning. This is especially true when narrating an animation or a complex visual on a topic that is relatively complex and unfamiliar to the learner.
Mayer compared learning from two e-Learning versions that explained graphics with exactly the same words; only the modality was changed. One version explained animations with words in text, the other with words in audio. In all comparisons, the narrated versions yielded better learning, with an average improvement of 80%.
As described under the contiguity principle, working memory is a limited resource that must be preserved for learning. Cognitive psychologists have learned that working memory has two sub-storage areas: one for visual information and one for phonetic information. One way to stretch the capacity of working memory is to use both of these storage areas. Figure 4 illustrates how graphics, which enter visual memory, and audio, which enters phonetic memory, together maximize working memory capacity.
Figure 4 Visual and supporting auditory information maximize working memory resources.
Audio should be used in situations where overload is likely. For example, if you are watching an animated demonstration of five or six steps in a software application, you need to focus your visual resources on the animation. If you must read text while watching the animation, overload is more likely than when the animation is narrated.
This does not mean that text should never be used. For example, some information in e-Learning, such as directions to an exercise, needs to be available to the learner over a longer period of time. Any words that are needed as reference should be presented in text. Also, when using audio to explain an animation, a replay option should be available for learners to hear the explanation again.