The other day a friend sent me a video-based eLearning program he’d found somewhere online. It was “how to cook a hamburger,” produced by a well-known fast-food chain. It was a case study in everything we seem to think eLearning should not be: garish colors, cheesy special effects, and information delivered by a bad rapper.
eLearning snobs—and heaven knows I can be one—would have criticized everything about it.
I did learn
But here’s the thing: At the end, I could cook a burger, on time, the first time, according to performance specifications. Bigger picture? If I’d been an actual employee, there would be less waste, many more happy customers, and steady workflow (or whatever other metric the company used to determine the value of this performance).
Why did I learn?
Why did it work? Well, it did a number of things right. For one, it avoided the wall-of-words, page-turning disease so common to many programs.
It adhered to Mayer’s principles of multimedia learning, especially as they relate to low-knowledge learners: information was represented in multiple ways, explained in both words and pictures (meaningful ones), presented contiguously. (I’m a low-knowledge learner here, by the way. I’ve cooked plenty of burgers, but not according to this company’s specs, which made sense, and I’ll use them in future home-cooking efforts.)
The rap-plus-demo approach was also consistent with Mayer’s redundancy principle: narrated words paired with a demonstration, with no redundant on-screen text competing for the learner’s attention.
And there was no extraneous content: the module was tightly focused on how to cook a hamburger, not on adding condiments, cleaning the grill, or cooking fries. And it didn’t begin with the history of hamburgers or a review of the company’s burger philosophy.
Others admitted they learned, too
I showed the video while waiting to start a conference presentation last week, and the attendees agreed: Yes, the cosmetics were dreadful. But yes, grudgingly, they agreed the program worked. They felt confident that they, too, could perform according to spec. And the measure of success wasn’t a drag-and-drop interaction of putting steps in order, or a multiple-choice quiz asking about the temperature of the grill.
Now, to be fair: This one video worked well for me once. The novelty would wear off very quickly. I don’t want to see 17 videos exactly like this covering every discrete job task. Perhaps that’s an argument for novelty rather than the usual uniformity and consistency across programs or modules?
Here’s the thing: What I saw was an example of someone taking a very specific performance goal and designing a very specific instructional solution for it. I won’t argue that it wasn’t cheesy, or that it was to everyone’s taste. I won’t argue that the colors weren’t garish, or that the rap was to my liking. (Also: I’ve seen worse. At least it wasn’t boring.) But the production values weren’t so overdone as to be distracting. And … it worked.
Forget the snobbery: Can the learner perform?
So my thinking? Snobbery about cosmetics aside, over the years I’ve seen a lot of lists of criteria for buying eLearning, for developing a product, and for choosing a vendor or developer. I agree we have to go in with some idea of what “good” is, at least enough to keep us away from all-text programs, or bedtime-reading narration of that text, or seductive but irrelevant elements.

And I want to be clear that I’m not trying to argue that video-based demonstration delivery is somehow “better,” although YouTube has certainly proven how effective it can be. (Oh, and did I mention that I accessed the video on my phone, which I could hold while I cooked the hamburger?)

But I am musing, having spent hours I can’t get back in discussions over the color of an avatar’s shirt, that we maybe sometimes overthink aesthetics when the experience and outcomes should be of greater concern. And I’m thinking that the experience (not just the module or course) is “good” if, at the end, the learner can perform. The trick? Finding an explicit performance need, getting clear on assessments first (listing the steps, or cooking a hamburger?), and sticking to a plan that helps the learner learn.