
Evaluating E-Learning 2.0: Getting Our Heads Around the Complexity

Other issues to consider: What the Kirkpatrick Model leaves out

Measurement affects behavior, so we have to be aware that measuring e-Learning 2.0 might affect its success. First, there is the tradeoff between precision and workability. The more extensive our measurement instruments, the more precise they can be, but the more likely they are to adversely affect both the learning and the measurement processes. Novices in the measurement game often hurt their cause by creating instruments that take too much time to complete. As time requirements increase, fewer and fewer people engage with the instruments with attention and care. And because some types of people tend to drop out earlier than others, increasing time requirements also increases the bias in the sample you actually end up with.

Other concerns are also in play. Measuring or monitoring people may actually change their behavior. When an e-Learning 2.0 system feels unmonitored, people are likely to feel free to be themselves. There is a certain power and motivation that come from feeling that one is acting of one’s own volition. As the sense of monitoring, oversight, or “doing it to look good” rises (Dweck, 1986, 2006), some people may disengage completely. Others may engage with little enthusiasm, or in a manner so personally protective that it loses value or minimizes the opportunity for relationship-building.

To minimize these issues, design e-Learning 2.0 measurement to balance precision and workability as much as possible, taking care to limit the perceived time required to complete evaluation instruments. Alternatively, where possible, design assessment tools to feel like part of the interaction rather than an add-on that requires extra effort. If possible (and if saying so is true), assessments should be framed as beneficial to users, designed to improve their experience and to let them focus their time on what is most valuable to them.

Measuring the effect on future learning

Learning interventions don’t just help people perform in the future (for example, on the job); they can also help people learn more in the future (again, on the job) (see Bransford and Schwartz, 1999; Schwartz and Martin, 2004). When people learn basic techniques in a software program, they may be better able to learn advanced techniques while using the program (depending on the design of the original training). When people begin to learn some of the fine taste distinctions in wine, they are better able to learn even finer distinctions as they continue to sample wines (unless of course they are sampling too much wine all at once). When people in a leadership-development class learn that managers can hurt productivity by telling people what to do, they may not only learn that fact; they may also begin to see other ways that managers hurt productivity. For example, having learned in the training class that “telling is unmotivating,” Sam may be more likely to notice that Sally stays more motivated when he first asks her to evaluate her own performance before he steps in to provide feedback. He doesn’t learn this in the training class, but the training class helps him notice it later on the job.

For most traditional training programs, we almost never directly measure whether they bolster future learning, and most training interventions are not designed specifically to aid future learning. E-Learning 2.0 interventions, because they prompt users to generate content, may be especially helpful in supporting future learning, both for the creators of the learning messages and for the learners. Of course, there are some counter-arguments as well.

Here’s my thinking on this: Writing instructional messages forces creators to think deeply about their topic. It also forces them to consider how the topic looks to a novice, and it prompts them to reflect on the context in which learners will use the learning. All these processes are likely to help creators deepen and reinforce what they have learned, enabling future learning. Of course, some creators may become so effective at crafting learning messages that they become too narrow in their own thinking, not opening up to new ideas from others.

For learners, e-Learning 2.0 may support future learning by creating a rich network of resources they can rely on to learn more and different concepts in the future. Of course, some learners may forgo their own learning when these resources are available, stunting their ability to learn further on their own.

How do we measure the ability of our learning interventions to improve future learning? This is obviously a very tricky business. We could just measure on-the-job performance, and ignore the causal pathway through future learning. Alternatively, to measure future learning we could provide people with problems to solve or cases to analyze, and see how fast they learn from working on those problems or cases. We could track people’s promotions or job responsibilities, assuming that on-the-job learning is required for advancement. We could measure people’s learning through self-assessment or multi-rater feedback from colleagues. We could also decide that future learning is just too difficult to measure, given the current state of expertise about how to go about it.
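
To make the “problems or cases” option a bit more concrete, here is a minimal sketch, not taken from the article, of one way to quantify it: give each person a short series of practice cases and estimate how quickly their scores improve across the series. The learner IDs, the 0-100 scoring scale, and the data are all hypothetical.

```python
# A minimal sketch (hypothetical data) of one way to quantify "future learning":
# estimate how quickly each person's scores improve across a series of cases.

from statistics import mean

def learning_slope(scores):
    """Least-squares slope of score vs. case index; higher = faster learning."""
    n = len(scores)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(scores)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
    den = sum((x - x_bar) ** 2 for x in xs)
    return num / den if den else 0.0

# Scores (0-100) on four successive practice cases, per learner (hypothetical).
case_scores = {
    "learner_01": [55, 62, 70, 78],   # steady improvement
    "learner_02": [60, 61, 59, 63],   # little change
}

for learner, scores in case_scores.items():
    print(learner, round(learning_slope(scores), 2), "points gained per case")
```

A steeper slope suggests faster learning from the cases; comparing slopes across groups, or before and after an intervention, is one rough proxy for preparation for future learning.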

Summary

Measuring e-Learning 2.0 is fraught with complexities, but we absolutely have to figure out a way to do it, and do it well. In this article, I’ve tried to give you some things to think about as you begin to plan how you might measure e-Learning 2.0 interventions.

Certainly, there are no easy recipes to follow.

In the Guild’s latest survey, respondents identified evaluation as one of the biggest areas of need for e-Learning 2.0, and they felt strongly that evaluating it was important. Look at the second item in Figure 3.

 

Figure 3. Not knowing how to measure interactions was the second most-often cited barrier to adoption of e-Learning 2.0.


Most respondents seemed ready to rely on user reactions, an inadequate strategy if used alone. In Figure 4, you can see the responses to the survey question on evaluating e-Learning 2.0. Over 75% of respondents were heading down the path of measuring learner reactions, with far fewer using other metrics. Respondents could choose more than one item, so the hope is that they will combine learner reactions with other corroborating evidence.

 

Figure 4. Most respondents favored measuring learner reactions to e-Learning 2.0 as the basis for evaluating success.


Recommendations

Despite the complexities of measuring e-Learning 2.0, I can offer the following recommendations:

  1. Because e-Learning 2.0 is already on the fad upswing, we ought to be especially careful about assuming its benefits. In other words, we ought to measure it early and often, at least until our implementations prove to be beneficial investments.
  2. Because there are two sets of employees involved in e-Learning 2.0, those who learn from the content (“learners”) and those who create the content (“creators”), we need to evaluate the effects of e-Learning 2.0 on both groups of people.
  3. We have to determine whether the created content is valid. The content may not need to be 100% perfect, but we do need to know whether it is valid enough for its intended purposes.
  4. Measuring only the most obvious “learning content” may miss important aspects of the information that e-Learning 2.0 messages communicate.
  5. For situations in which e-Learning 1.0 is better positioned to provide necessary learning supports than e-Learning 2.0 (e.g., when long-term remembering is required), it might not be fair to compare our e-Learning 2.0 interventions to well-designed e-Learning 1.0 interventions. On the other hand, if we are using e-Learning 2.0 technologies to replace e-Learning 1.0 technologies, comparing results seems desirable.
  6. When we blend e-Learning 2.0 to support an e-Learning 1.0 intervention, we must focus first on whether the e-Learning 2.0 methodology supports the e-Learning 1.0 intended outcomes. We must also look at whether the e-Learning 2.0 methodology creates separate benefits or damage.
  7. Because e-Learning 2.0 can create harm, part of our measurement mission ought to be to determine whether people are developing inadequate knowledge or skills and/or wasting time as learners and creators.
  8. Asking people for their reactions to learning can provide some valuable knowledge, but is often fraught with bias. Therefore, we cannot consider asking for reactions to our e-Learning 2.0 interventions a sufficient measurement design.
  9. In thinking about measuring our e-Learning 2.0 interventions, we first have to decide what we designed the intervention to support: (a) Understanding, (b) Long-term Retrieval, (c) Future On-the-job Learning, (d) On-the-job Performance, (e) Organizational Results. Then we ought to devise a measurement regime for the outcomes we hope for, as well as for the factors on the causal pathway to those outcomes (see the sketch after this list).
    1. On-the-job performance is a necessary component of organizational results; therefore, we must measure it and its prerequisites.
    2. Understanding and remembering are necessary components for learning-based performance improvement. Therefore, it is critical that we track them to help diagnose the cause of on-the-job performance failures.
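
To illustrate recommendation 9 (as a sketch under stated assumptions, not a prescription from this article), the snippet below maps each intended outcome to a few plausible measures and pulls in the prerequisites on the causal pathway noted in items 9.1 and 9.2. The outcome names, the example measures, and the prerequisite links are illustrative assumptions.

```python
# A hypothetical sketch of recommendation 9: pick the outcomes the intervention
# was designed to support, then assemble measures for those outcomes and for the
# prerequisites on the causal pathway. All mappings below are illustrative only.

MEASURES = {
    "understanding":       ["scenario-based questions", "explanation tasks"],
    "long-term retrieval": ["delayed assessment (weeks after learning)"],
    "future learning":     ["new problems or cases, tracking speed of learning"],
    "job performance":     ["manager or multi-rater ratings", "work output metrics"],
    "org results":         ["business metrics tied to the performance change"],
}

# Prerequisites on the causal pathway (per items 9.1 and 9.2).
PREREQUISITES = {
    "job performance": ["understanding", "long-term retrieval"],
    "org results":     ["job performance"],
}

def measurement_plan(goals):
    """Return measures for the chosen goals plus their causal prerequisites."""
    needed, queue = [], list(goals)
    while queue:
        goal = queue.pop(0)
        if goal not in needed:
            needed.append(goal)
            queue.extend(PREREQUISITES.get(goal, []))
    return {goal: MEASURES[goal] for goal in needed}

print(measurement_plan(["org results"]))
```

Under these assumptions, asking for a plan for organizational results also returns measures for on-the-job performance, understanding, and long-term retrieval, because they sit on the causal pathway.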

Making it simple

While the in-depth thinking represented in this article may be helpful in providing you with rich mental models of how to think about measuring e-Learning 2.0 (and that was my intent), some of you will probably just want a simple heuristic about what to do. In lieu of a detailed conversation, here goes:

Don’t:

  1. Don’t just ask users for their feedback or rely on usage data.
  2. Don’t look only at benefits — consider potential harm too.
  3. Don’t look only at the learners — consider the creators too.

Do:

  1. Pilot test your e-Learning 2.0 intervention in a small way before full deployment. This will make it practical to invest in gathering the requisite data.
  2. Compare your users with people who are not using the e-Learning 2.0 intervention (after randomly assigning people to groups), or compare results over time, or both (see the sketch after this list).
  3. Use multiple measurement methods to gather corroborating evidence.
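
Here is a minimal sketch of “Do” item 2, under the assumption that you can randomly assign people to an e-Learning 2.0 group and a comparison group and collect a single post-pilot outcome score for each person. The employee IDs, the scores, and the score scale are hypothetical.

```python
# A minimal sketch (hypothetical names and data): randomly assign people to an
# e-Learning 2.0 group and a comparison group, then compare a post-pilot outcome.

import random
from statistics import mean, stdev

random.seed(42)  # reproducible assignment for the example

employees = [f"emp_{i:02d}" for i in range(1, 21)]
random.shuffle(employees)
treatment, control = employees[:10], employees[10:]  # random assignment

# Outcome scores after the pilot period (hypothetical assessment results).
outcome = {e: random.gauss(75 if e in treatment else 70, 8) for e in employees}

t_scores = [outcome[e] for e in treatment]
c_scores = [outcome[e] for e in control]

print(f"e-Learning 2.0 group: mean={mean(t_scores):.1f}, sd={stdev(t_scores):.1f}")
print(f"Comparison group:     mean={mean(c_scores):.1f}, sd={stdev(c_scores):.1f}")
print(f"Difference in means:  {mean(t_scores) - mean(c_scores):.1f}")
```

Pure standard-library Python keeps the sketch self-contained; a real evaluation would use a larger sample, a statistics package for confidence intervals or significance tests, and the corroborating measures called for in item 3.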

References

Alliger, G. M., Tannenbaum, S. I., Bennett, W. Jr., Traver, H., and Shotland, A. (1997). A meta-analysis of the relations among training criteria. Personnel Psychology, 50, 341-358.


Bransford, J. D., and Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24, 61-100.


Dweck, C. S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048.


Dweck, C. S. (2006). Mindset: The new psychology of success. New York, NY: Random House.


Schwartz, D. L., and Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.


Shrock, S., and Coscarelli, W. (2008). Criterion-Referenced Test Development, Third Edition. San Francisco: Pfeiffer. 


Thalheimer, W. (2007, April). Measuring learning results: Creating fair and valid assessments by considering findings from fundamental learning research. Retrieved August 7, 2008, from http://www.work-learning.com/catalog/


Wick, C., Pollock, R., Jefferson, A., and Flanagan, R. (2006). The Six Disciplines of Breakthrough Learning: How to Turn Training and Development into Business Results. San Francisco: Pfeiffer.


Wexler, S., Schlenker, B., Coscarelli, B., Martinez, M., Ong, J., Pollock, R., Rossett, A., Shrock, S., and Thalheimer, W. (2007, October). Measuring success: Aligning learning success with business success. Retrieved from www.e-Learningguild.com/showfile.cfm?id=2513.


Wilford, J. N. (2008). Washington’s Boyhood Home Is Found. Retrieved from www.nytimes.com/2008/07/03/science/03george.html


(Author’s Note) I’d like to thank the many Guild members in attendance at my espresso café roundtable at the Guild’s most recent Learning-Management Colloquium, who helped me see the unsettling complexities that are involved in evaluating e-Learning 2.0 interventions. I would also like to thank Steve Wexler, Bill Brandon, Jane Hart, and Mark Oehlert who provided trenchant commentary on a first draft — enabling significant improvement in this article — just like I might hope from a well-designed e-Learning 2.0 community.

