E-Learning 2.0 interventions are coming to a workplace near you. Maybe they surround you already. Whether you’re a seasoned expert or a complete newcomer, it’s time to think about evaluation.
Whether you think that e-Learning 2.0 is a force for good or evil, sometime in the near future you will likely have a responsibility to determine the effectiveness of e-Learning 2.0 interventions. (See Sidebar 1 for a definition of “e-Learning 2.0.”)
By evaluating our results, we can refine and improve what we’re doing, or discard what’s not working. We can also give a coherent answer when someone in management (or our clients) asks us to prove that this e-Learning 2.0 stuff works. E-Learning 2.0 technology offers great promise, but only those who are getting the quickest, most robust feedback will be able to maximize that promise. It takes good evaluation design to produce that sort of feedback.
In last year’s Guild report, Measuring Success (Wexler, Schlenker, and others, 2007), I outlined 18 reasons (see Sidebar 2) that we might measure learning. These included giving learners grades or credentials, helping learners learn, comparing one learning intervention to another, and so on. (Please see the References at the end of this article for all citations.)
For the purpose of this article, I’m only going to focus on enabling you to:
- Determine what level of benefit (or harm) your e-Learning 2.0 interventions produce.
- Use evaluation results to improve your e-Learning 2.0 interventions.
- Compare the effectiveness of your e-Learning 2.0 intervention to some other learning intervention.
This article will NOT cover how to decide whether to implement e-Learning 2.0 strategies in your organization. Rather, this article should help you think through the many issues and complexities involved in evaluating e-Learning 2.0 interventions. At the end, I outline a short list of the most critical things we should be doing as we evaluate e-Learning 2.0. As you will see, getting started with a few simple imperatives may be the best strategy.
Beware of seduction
In thinking about how to evaluate e-Learning 2.0, the first thing to remember is that EVERY new learning technology brings with it hordes of booming evangelists rhapsodizing utopian visions. These visions may or may not be true or realistic, yet they may seduce us beyond all rationality or evidence. Programmed instruction, 16mm movies, and filmstrips were the first such seductive technologies. (Many Learning Solutions readers are not old enough to remember them, of course.) Later, radio, television, and computer-based training were magical technologies that many believed would completely transform the learning landscape.
E-Learning 2.0 is no different. Some tout it as the key to unlocking the unlimited promise of informal learning. Some supporters present it as a way to democratize organizations. Others promote it as a way to empower employees to help each other rise to their fullest potential. Because these visions can be so enticing, we have to make an extra effort to be objective. We have to shield ourselves from temptation by investing in evaluation — and in doing evaluation right.
Grassroots content development
E-Learning 2.0 differs from most traditional learning methodologies in allowing — even encouraging — everybody to contribute in creating learning messages. The term “learning messages” refers to the learning points that a learning event conveys. I prefer this term to “content” or “learning materials.” The reality is that learning only occurs when the learning materials convey learning messages, and learners attend to and receive those messages. Too many of us design instruction as if the creation of learning materials guarantees that learning will take place.
Traditionally, a central authority created learning messages. Experts vetted the messages before learners saw them. In the workplace, the training department typically created learning messages, and management, legal, and subject-matter experts vetted them. Only then were they ready for presentation to employees. In education, writers compiled learning messages from textbooks and journal articles, and from individual experts, including professors, teachers, and curriculum specialists. Whether in training or education, everyone assumed that someone had vetted the learning messages to validate them for learners.
E-Learning 2.0 offers a different model, enabling “grassroots” creation of learning messages. Experts and people closest to the issue may create such messages. However, an authoritative editorial function does not necessarily vet the messages. In e-Learning 2.0, individuals at the grassroots level can create information and vet it prior to release, or others at the grassroots level may vet it after the fact. Finally, institutional agents monitoring the material may check the information, instead of, or in addition to, grassroots verification. Recent data from Guild Research (August 2008) illustrates the various ways companies deal with user-generated content (see Figure 1):
There are two sets of employees involved in e-Learning 2.0. There are those who learn from the content (“learners”) and those who create the content (“creators”). Because of this, we need to evaluate e-Learning 2.0’s effects on both groups of people. Of course, one person can play both roles, depending on the issue that’s in play.
Because e-Learning 2.0 produces learning messages that arise from non-vetted sources, one aspect of evaluation that may appear to differ from traditional evaluation involves assessing the truth or completeness of the learning messages. Of course, far too many of us assume that traditional training and education courses provide good information. For example, many of us in the United States learned of our first President’s legendary honesty in a story that told of him chopping down a cherry tree, and then telling the truth about it. The story is almost certainly a fabrication, because cherry trees did not grow in the area near his family’s farm (Wilford, 2008). The bottom line is that content matters for both e-Learning 1.0 and e-Learning 2.0.
It is easy to verify some information, and it is difficult to verify other information. For example, if I learn from a Microsoft PowerPoint users group how to do something in PowerPoint, I can test out the solution rather quickly. I can verify for myself how well that information solved my problem. I may not be able to tell whether a better approach exists. I may not be able to say whether the author could have conveyed the same approach in a better manner. But, I can, at least, verify that the information is generally good.
On the other hand, suppose I go to a blog to read about leadership techniques. One blog entry tells me that as a leader I should encourage my team to push for innovation and change. Over a month or two I try several recommended techniques, and my team appears to be coming up with more ideas. At the same time, my team uses a lot of time deciding which ideas are best, my boss doesn’t like a lot of the ideas, and my team’s morale seems to be plummeting. It is hard for me to verify the benefits of implementing the blog-post ideas, because they seem to affect so many factors at once. Also, I’ve long forgotten which blog I got the idea from, so I have no way of providing feedback.
To complicate things more, our focus tends to be on intentional learning. Verifying learning is even harder when we’re learning without intention or conscious effort. For example, a blog post might say something like,
“I read about this new technique on Stephanie’s blog. We ought to incorporate her idea starting at the senior management level. Here’s the idea…blah, blah….If only we used this, I think people would start getting fired up again.”
The main learning point of the blog post is about the new technique (i.e., in the “blah, blah” above), but we might also learn some other things from this blog post. They include: (a) Stephanie’s blog is a trusted go-to source, (b) our senior management isn’t performing well enough, (c) we are a company with a morale or productivity problem, and (d) we are a company in trouble. Because readers process these learnings with little conscious effort, they are even less likely to read them with a critical eye than they would be with consciously considered content. In other words, learners won’t even know that they might want to verify these nuggets. They’ll just accept them.
As far as I can tell, most e-Learning 2.0 technologies present information to learners with only the thinnest facilitating learning support, if any. Learners do not receive support in the form of intentional repetitions, worked examples, retrieval practice, tests for understanding, intentional spacing, or augmenting visuals. There is, though, one key difference between e-Learning 1.0 and e-Learning 2.0 content creation today.
E-Learning 1.0 content tends to come from people who have at least some expertise in learning design and presentation, and a lot of learner-empathy. E-Learning 2.0 content creation may have an advantage in being created by peers. However, it may not provide all the learning supports that would help learners (a) understand the content, (b) remember the content, and (c) apply the content to their jobs.
Given the current state of e-Learning 2.0 technologies, Table 1 summarizes my view of the best fit for e-Learning 1.0 and e-Learning 2.0 technologies. As you can see, where learners need extra supports (e.g., to spur long-term remembering and/or implementation), e-Learning 2.0, as it is currently deployed, may not provide the best fit.
Let me offer two caveats to this depiction. First, experts in a domain may not need learning supports for remembering as much as novices do. Experts are likely to have a rich web of knowledge structures in place that enables them to integrate and remember information better than novices. Novices have no such knowledge structures (or inadequate structures) in which to integrate the new information. Second, if people use an e-Learning 2.0 system extensively on a particular topic, the spaced repetitions and retrieval practice (when generating content) can be so powerful that the effect will mimic the benefits of a well-designed e-Learning 1.0 intervention.
When remembering or implementation is critical, e-Learning 1.0 (if well designed) seems a better choice. Most current e-Learning 2.0 interactions don’t support remembering or implementation. Also, given that e-Learning 2.0 technologies are not typically set up to consider sequencing of learning material, e-Learning 1.0 methods seem best when conveying lots of information or complicated topics.
In the areas in which e-Learning 1.0 can provide better learning support, it might not be fair to compare our e-Learning 2.0 interventions to well-designed e-Learning 1.0 interventions. On the other hand, if we are using e-Learning 2.0 technologies to replace e-Learning 1.0 technologies, comparing results seems desirable.
Designers can also use e-Learning 2.0 on its own — not as a replacement for e-Learning 1.0, but as a separate tool to improve learning and performance. In these cases, we don’t evaluate it against e-Learning 1.0 technology; we compare it to the default situation without the e-Learning 2.0 technology.
Of course, the distinctions I’ve drawn are too pure. We can certainly use an e-Learning 2.0 intervention to support an e-Learning 1.0 effort (a blended approach). For example, a trainer might add blogging as a requirement for a course on merchandising techniques. When blending e-Learning 2.0 into an e-Learning 1.0 intervention, it makes sense to determine whether adding the e-Learning 2.0 methodology supports the goals of the course. In other words, when e-Learning 2.0 augments e-Learning 1.0, our highest priority must be to verify the intended e-Learning 1.0 outcomes.
We can’t focus solely on these e-Learning 1.0 outcomes, however. We also have to analyze e-Learning 2.0 methodologies separately to determine their effects, both positive and negative. On the positive side, an e-Learning 2.0 technology such as a wiki may enable our learners to do a better job of learning on their own about merchandising after the course is over. If we only followed traditional measurement practices, we might never think to measure our learners’ ability to learn on the job after the formal training is over. On the negative side, we need to evaluate e-Learning 2.0 separately to determine whether it has hurt learning or consumed too many valuable resources.
First do no harm
Because doctors work in situations of uncertainty, they take an oath to “First Do No Harm.” We ought to do the same, especially when it comes to new learning technologies. The first question we should ask in evaluating e-Learning 2.0 is whether it is in fact doing any harm.
“Harm?” you might ask incredulously. How can learning be harmful? Learning can be harmful in a number of ways. Here is a short list:
- Learners can learn bad information.
- Learners can spend time learning low-priority information.
- Learners can learn the right information but learn it inadequately.
- Learners can learn the right information but learn it inefficiently.
- Learners can learn at the wrong time, hurting their on-the-job performance.
- Learners can learn good information that interferes with other good information.
- Learners can waste productive work time in learning.
- Learners can learn something, but forget it before it is useful.
- Previous inappropriate learning can harm learners’ on-the-job learning.
- Content creators may divert productive work time to creating learning messages.
- Content creators may reinforce their own incorrect understandings.
- And so on.
Wow. “That’s a long list,” you might be thinking. Being practical about evaluation, we probably don’t want to separately examine each of these potential repercussions. Fortunately, we can boil the list down to two essential points. We need to recognize that people may (a) develop inadequate knowledge and skills because of our e-Learning 2.0 interventions, and (b) waste time as learners or creators in the e-Learning 2.0 enterprise. We ought to evaluate both of these possibilities wherever feasible.