
Evaluating E-Learning 2.0: Getting Our Heads Around the Complexity

by Will Thalheimer

August 18, 2008


E-Learning 2.0 technology offers great promise, but only those who are getting the quickest, most robust feedback will be able to maximize that promise. It takes good evaluation design to produce that sort of feedback.

E-Learning 2.0 interventions are coming to a workplace near you. Maybe they surround you already. Whether you’re an expert or a complete newcomer, it’s time to think about evaluation.

Whether you think that e-Learning 2.0 is a force for good or evil, sometime in the near future you will likely have a responsibility to determine the effectiveness of e-Learning 2.0 interventions. (See Sidebar 1 for a definition of “e-Learning 2.0.”)


Sidebar 1 Some handy definitions

E-Learning 2.0: The idea of learning through digital connections and peer collaboration, enhanced by the technologies driving Web 2.0. Users/learners are empowered to search, create, and collaborate in order to fulfill intrinsic needs to learn new information.

Vetting: An investigative process of examination, fact-checking, and evaluation.

Web 2.0: The stage of the World Wide Web in which the Internet has become a platform for users to create, upload, and share content with others, rather than simply downloading content.

By evaluating our results, we can refine and improve what we’re doing, or discard what’s not working. We can also give a coherent answer when someone in management (or our clients) asks us to prove that this e-Learning 2.0 stuff works.

In last year’s Guild report, Measuring Success (Wexler, Schlenker, and others, 2007), I outlined 18 reasons (see Sidebar 2) that we might measure learning. These included giving learners grades or credentials, helping learners learn, comparing one learning intervention to another, and so on. (Please see the References at the end of this article for all citations.)

For the purpose of this article, I’m only going to focus on enabling you to:

  1. Determine what level of benefit (or harm) your e-Learning 2.0 interventions produce.
  2. Use evaluation results to improve your e-Learning 2.0 interventions.
  3. Compare the effectiveness of your e-Learning 2.0 intervention to some other learning intervention.

Sidebar 2 Why Do We Measure Learning? Eighteen Reasons

(From The eLearning Guild Measuring Success report, 2007, pp. 118-119.)
To support the learners in learning and performance
1. To encourage learners to study.
2. To give learners feedback on their learning progress.
3. To help learners better understand the concepts being taught, by giving them tests of understanding and follow-up feedback.
4. To provide learners with additional retrieval practice (to support long-term retrieval).
5. To give successful assessment-takers a sense of accomplishment, a sense of being special, and/or a feeling of being in a privileged group.
6. To increase the likelihood that the learning is implemented later.
To support certification, credentialing, or compliance
7. To assign grades to learners, or to give them a passing score.
8. To enable learners to earn credentials.
9. To document legal or regulatory compliance.
To provide learning professionals (i.e., instructors/developers) with information
10. To provide instructors with feedback on learning.
11. To provide instructional designers/developers with feedback.
12. To diagnose future learning needs.
To provide additional information
13. To provide learners’ managers with feedback and information.
14. To provide other organizational stakeholders with information.
15. To examine the organizational impacts of learning.
16. To compare one learning intervention to an alternative one.
17. To calculate return-on-investment of the learning program.
18. To collect data to sell or market the learning program.


This article will NOT cover how to decide whether to implement e-Learning 2.0 strategies in your organization. Rather, this article should help you think through the many issues and complexities involved in evaluating e-Learning 2.0 interventions. At the end, I outline a short list of the most critical things we should be doing as we evaluate e-Learning 2.0. As you will see, getting started with a few simple imperatives may be the best strategy.

Beware of seduction

In thinking about how to evaluate e-Learning 2.0, the first thing to remember is that EVERY new learning technology brings with it hordes of booming evangelists rhapsodizing utopian visions. These visions may or may not be true or realistic, yet they may seduce us beyond all rationality or evidence. Programmed instruction, 16mm movies, and filmstrips were the first such seductive technologies. (Many Learning Solutions readers are not old enough to remember them, of course.) Later, radio, television, and computer-based training were magical technologies that many believed would completely transform the learning landscape.

E-Learning 2.0 is no different. Some tout it as the key to unlocking the unlimited promise of informal learning. Some supporters present it as a way to democratize organizations. Others promote it as a way to empower employees to help each other rise to their fullest potential. Because these visions can be so enticing, we have to make an extra effort to be objective. We have to shield ourselves from temptation by investing in evaluation — and in doing evaluation right.

Grassroots content development

E-Learning 2.0 differs from most traditional learning methodologies in allowing — even encouraging — everybody to contribute to creating learning messages. The term “learning messages” refers to the learning points that a learning event conveys. I prefer this term to “content” or “learning materials.” The reality is that learning only occurs when the learning materials convey learning messages, and learners attend to and receive those messages. Too many of us design instruction as if the creation of learning materials guarantees that learning will take place.

Traditionally, a central authority created learning messages. Experts vetted the messages before learners saw them. In the workplace, the training department typically created learning messages, and management, legal, and subject-matter experts vetted them. Only then were they ready for presentation to employees. In education, writers compiled learning messages from textbooks and journal articles, and from individual experts, including professors, teachers, and curriculum specialists. Whether in training or education, everyone assumed that someone had vetted the learning messages to validate them for learners.

E-Learning 2.0 offers a different model, enabling grassroots creation of learning messages. Experts and the people closest to the issue may create such messages. However, an authoritative editorial function does not necessarily vet the messages. In e-Learning 2.0, individuals at the grassroots level can create information and vet it prior to release, or others at the grassroots level may vet the information after the fact. Finally, institutional agents monitoring the material may check the information, instead of, or in addition to, grassroots verification. Recent data from Guild Research (August 2008) illustrates the various ways companies deal with user-generated content (see Figure 1):


Figure 1 Policies for dealing with user-generated content vary widely across organizations of all sizes.


There are two sets of employees involved in e-Learning 2.0. There are those who learn from the content (“learners”) and those who create the content (“creators”). Because of this, we need to evaluate e-Learning 2.0’s effects on both groups of people. Of course, one person can play both roles, depending on the issue that’s in play.

Because e-Learning 2.0 produces learning messages that arise from non-vetted sources, one aspect of evaluation that may appear to differ from traditional evaluation involves assessing the truth or completeness of the learning messages. Of course, far too many of us assume that traditional training and education courses provide good information. For example, many of us in the United States learned of our first President’s legendary honesty in a story that told of him chopping down a cherry tree, and then telling the truth about it. The story is almost certainly a fabrication, because cherry trees did not grow in the area near his family’s farm (Wilford, 2008). The bottom line is that content matters for both e-Learning 1.0 and e-Learning 2.0.

It is easy to verify some information, and it is difficult to verify other information. For example, if I learn from a Microsoft PowerPoint users group how to do something in PowerPoint, I can test out the solution rather quickly. I can verify for myself how well that information solved my problem. I may not be able to tell whether a better approach exists. I may not be able to say whether the author could have conveyed the same approach in a better manner. But, I can, at least, verify that the information is generally good.

On the other hand, suppose I go to a blog to read about leadership techniques. One blog entry tells me that as a leader I should encourage my team to push for innovation and change. Over a month or two I try several recommended techniques, and my team appears to be coming up with more ideas. At the same time, my team spends a lot of time deciding which ideas are best, my boss doesn’t like a lot of the ideas, and my team’s morale seems to be plummeting. It is hard for me to verify the benefits of implementing the blog-post ideas, because they seem to affect so many factors at once. Also, I’ve long forgotten which blog I got the idea from, so I have no way of providing feedback.

To complicate things more, our focus tends to be on intentional learning. Verifying learning is even harder when we’re learning without intention or conscious effort. For example, a blog post might say something like,

“I read about this new technique on Stephanie’s blog. We ought to incorporate her idea starting at the senior management level. Here’s the idea…blah, blah….If only we used this, I think people would start getting fired up again.”

The main learning point of the blog post is the new technique (i.e., the “blah, blah” above), but we might also learn some other things from it, including: (a) Stephanie’s blog is a trusted go-to source, (b) our senior management isn’t performing well enough, (c) we are a company with a morale or productivity problem, and (d) we are a company in trouble. Because readers process these learnings with little conscious effort, they are even less likely to read them with a critical eye than they would be with consciously considered content. In other words, learners won’t even know that they might want to verify these nuggets. They’ll just accept them.

Learning supports

As far as I can tell, most e-Learning 2.0 technologies present information to learners with only the thinnest facilitating learning support, if any. Learners do not receive support in the form of intentional repetitions, worked examples, retrieval practice, tests for understanding, intentional spacing, or augmenting visuals. There is, though, one key difference between e-Learning 1.0 and e-Learning 2.0 content creation today.

E-Learning 1.0 content tends to come from people who have at least some expertise in learning design and presentation, and a good deal of learner empathy. E-Learning 2.0 content may have the advantage of being created by peers. However, it may not provide all the learning supports that would help learners (a) understand the content, (b) remember the content, and (c) apply the content to their jobs.

Given the current state of e-Learning 2.0 technologies, Table 1 summarizes my view of the best fit for e-Learning 1.0 and e-Learning 2.0 technologies. As you can see, where learners need extra supports (e.g., to spur long-term remembering and/or implementation), e-Learning 2.0, as it is currently deployed, may not provide the best fit.

Table 1 What are the “best fits” for e-Learning 1.0 and for e-Learning 2.0?

                                       Support for Remembering and Implementation
Information that Needs to be Learned   Support Not Critical     Support Critical
Small Chunks of Information            e-Learning 2.0           e-Learning 1.0
Complex System of Information          e-Learning 1.0           e-Learning 1.0

Let me offer two caveats to this depiction. First, experts in a domain may not need learning supports for remembering as much as novices do. Experts are likely to have a rich web of knowledge structures in place that enables them to integrate and remember information better than novices. Novices have no such knowledge structures (or inadequate structures) in which to integrate the new information. Second, if people use an e-Learning 2.0 system extensively on a particular topic, the spaced repetitions and retrieval practice (when generating content) can be so powerful that the effect will mimic the benefits of a well-designed e-Learning 1.0 intervention.

When remembering or implementation is critical, e-Learning 1.0 (if well designed) seems a better choice. Most current e-Learning 2.0 interactions don’t support remembering or implementation. Also, given that e-Learning 2.0 technologies are not typically set up to consider sequencing of learning material, e-Learning 1.0 methods seem best when conveying lots of information or complicated topics.
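To make the Table 1 fit concrete, here is a minimal sketch of that decision rule expressed as code. It is purely illustrative (Python, with hypothetical function and parameter names), not a formal algorithm from any tool or from the Guild research.

    def best_fit(complex_system, support_critical):
        """Illustrative restatement of Table 1: which approach tends to fit best.

        complex_system   -- True if a complex system of information must be learned,
                            False for small chunks of information.
        support_critical -- True if support for remembering/implementation is critical.
        """
        if not complex_system and not support_critical:
            return "e-Learning 2.0"
        # Complex systems of information, or cases where support for remembering
        # and implementation is critical, favor well-designed e-Learning 1.0.
        return "e-Learning 1.0"

    # Example: small chunks of information, support not critical -> "e-Learning 2.0"
    print(best_fit(complex_system=False, support_critical=False))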

In the areas in which e-Learning 1.0 can provide better learning support, it might not be fair to compare our e-Learning 2.0 interventions to well-designed e-Learning 1.0 interventions. On the other hand, if we are using e-Learning 2.0 technologies to replace e-Learning 1.0 technologies, comparing results seems desirable.

Designers can also use e-Learning 2.0 on its own — not as a replacement for e-Learning 1.0, but as a separate tool to improve learning and performance. In these cases, we don’t evaluate it against an e-Learning 1.0 technology; we compare it to the default situation without the e-Learning 2.0 technology.

Of course, the distinctions I’ve drawn are too pure. We can certainly use an e-Learning 2.0 intervention to support an e-Learning 1.0 effort (a blended approach). For example, a trainer might add blogging as a requirement for a course on merchandising techniques. When blending e-Learning 2.0 into an e-Learning 1.0 intervention, it makes sense to determine whether adding the e-Learning 2.0 methodology supports the goals of the course. In other words, when e-Learning 2.0 augments e-Learning 1.0, our highest priority must be to verify the intended e-Learning 1.0 outcomes.

We can’t focus solely on these e-Learning 1.0 outcomes, however. We also have to analyze e-Learning 2.0 methodologies separately to determine their effects, both positive and negative. On the positive side, an e-Learning 2.0 technology such as a wiki may enable our learners to do a better job of learning on their own about merchandising after the course is over. If we only followed traditional measurement practices, we might never think to measure our learners’ ability to learn on the job after the formal training is over. On the negative side, we need to evaluate e-Learning 2.0 separately to determine whether it has hurt learning or consumed too many valuable resources.
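One practical way to run that separate comparison is to compare outcome scores for a group using the e-Learning 2.0 support against a baseline group that takes the course alone. The sketch below is a hypothetical example only (invented scores, standard-library Python); it computes the mean difference and a pooled-standard-deviation effect size, and it is not a method prescribed in this article.

    from statistics import mean, stdev

    # Hypothetical post-course assessment scores (percent correct).
    baseline_scores = [62, 70, 58, 75, 66, 71, 64, 69]      # course only
    elearning20_scores = [68, 74, 63, 80, 72, 77, 70, 73]   # course plus wiki/blog support

    diff = mean(elearning20_scores) - mean(baseline_scores)

    # Pooled standard deviation (equal-sized groups) and a Cohen's d-style effect size.
    pooled_sd = ((stdev(baseline_scores) ** 2 + stdev(elearning20_scores) ** 2) / 2) ** 0.5
    effect_size = diff / pooled_sd

    print(f"Mean difference: {diff:.1f} points; effect size: {effect_size:.2f}")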

First do no harm

Because doctors work in situations of uncertainty, they take an oath to “First Do No Harm.” We ought to do the same, especially when it comes to new learning technologies. The first question we should ask in evaluating e-Learning 2.0 is whether it is in fact doing any harm.

“Harm?” you might ask incredulously. How can learning be harmful? Learning can be harmful in a number of ways. Here is a short list:

  1. Learners can learn bad information.
  2. Learners can spend time learning low-priority information.
  3. Learners can learn the right information but learn it inadequately.
  4. Learners can learn the right information but learn it inefficiently.
  5. Learners can learn at the wrong time, hurting their on-the-job performance.
  6. Learners can learn good information that interferes with other good information.
  7. Learners can consume productive work time in learning; that is, learners can waste time learning.
  8. Learners can learn something, but forget it before it is useful.
  9. Previous inappropriate learning can harm learners’ on-the-job learning.
  10. Content creators may spend productive work time creating learning messages.
  11. Content creators may reinforce their own incorrect understandings.
  12. And so on.

Wow. “That’s a long list,” you might be thinking. Being practical about evaluation, we probably don’t want to separately examine each of these potential repercussions. Fortunately, we can boil the list down to two essential points. We need to recognize that people may (a) develop inadequate knowledge and skills because of our e-Learning 2.0 interventions, and (b) waste time as a learner or creator in the e-Learning 2.0 enterprise. We ought to evaluate both possibilities wherever we can.
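The second essential point, time spent by learners and creators, lends itself to a rough opportunity-cost estimate. The figures below are entirely hypothetical and exist only to show the kind of arithmetic an evaluator might do; substitute your own organization’s numbers.

    # Hypothetical inputs; replace with your own organization's data.
    learners = 200                      # employees reading and using the content
    hours_per_learner_per_month = 1.5
    creators = 20                       # employees writing and vetting content
    hours_per_creator_per_month = 4.0
    loaded_hourly_cost = 60.0           # fully loaded cost per employee hour

    monthly_hours = (learners * hours_per_learner_per_month
                     + creators * hours_per_creator_per_month)
    monthly_cost = monthly_hours * loaded_hourly_cost

    # 200 * 1.5 + 20 * 4.0 = 380 hours; 380 * 60 = 22,800 per month in this example.
    print(f"Time invested: {monthly_hours:.0f} hours/month, "
          f"costing about {monthly_cost:,.0f} per month")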


