It may be a cliché, but it’s one that L&D teams—and their managers—need to heed: You get what you measure. According to Will Thalheimer’s recent Guild report Evaluating Learning: Insights from Learning Professionals, most organizations measure the wrong things when evaluating eLearning. Here’s a look at what L&D should measure in order to meet business leaders’ goals and avoid a data gap.

Thalheimer’s research found that L&D professionals believe senior executives want them to demonstrate that training is effective. In addition, nearly 61 percent said that their organizations’ senior leaders want them to evaluate eLearning so they can improve it—a goal that depends on measuring whether eLearning is effective.

L&D professionals and their senior leaders have cleared a significant hurdle. They’ve identified clear goals for their eLearning data: measuring, validating, and improving the effectiveness of eLearning.

There’s only one problem.

What they are measuring won’t tell them what they want to know.

What are L&D teams measuring?

Thalheimer asked survey respondents what L&D measures when evaluating eLearning. Unsurprisingly, nearly 83 percent measure learner attendance at training, and about the same number measure program completion. These data points might allow a manager or company to tell regulators or the legal team that employees have been trained on a particular topic. But they reveal nothing about whether the training was effective.

Many respondents, about 72 percent, ask for learners’ perceptions of the training. Some L&D teams (and managers) confuse learners’ perceptions with training impact. They believe that if you ask learners whether they think the training will help them do their jobs better, and the learners overwhelmingly say it will, then you’ve “proven” that the training is effective. You haven’t. All you’ve done is show that learners liked the training or thought it was relevant.

Knowing who showed up and who liked the eLearning might matter for some business goals, but it does nothing to help L&D teams or managers gauge or improve the effectiveness of training.

What should L&D measure?

Instead of (or in addition to) measuring attendance, completion, and satisfaction, those hoping to shed light on the effectiveness of training should measure:

  • Performance in scenario-based eLearning: Providing learners with opportunities to complete tasks or make decisions—in training scenarios that are realistic stand-ins for situations they face on the job—and measuring their performance can offer insight into how they’ll do in real life. Scenario-based learning and role-play simulations allow learners to practice and get comfortable with skills, conversations, and tasks they need to do on the job. But only about a third of respondents—32 percent—measure learners’ ability to do realistic tasks in training, and less than a quarter—24 percent—measure their ability to make realistic decisions in training. That’s unfortunate. “Decision-making is a sweet spot for eLearning (for example, using scenario-based questions), and the 24 percent rate seems like an underutilization—and thus a big opportunity,” Thalheimer wrote. (A minimal sketch of how a scenario decision might be captured appears after this list.)
  • Performance on the job: Here’s a crazy idea: Measure learners’ performance on the job before and after training. What better way to know whether completing training had an impact on the learner’s job-related skills? Yet only 20 percent of respondents measure learners’ job performance. Thalheimer suggests creating—and measuring—evaluation objectives that relate directly to performance within the training or on the job. (A simple before-and-after calculation is sketched after this list.)
  • Business performance or outcomes: Organizations that send large numbers of employees to training, as well as those that conduct focused or in-depth training of all employees or specific groups, might want to know whether it helped. Did the firm become more productive? Did market share grow? Did the team that got intensive training do better overall? Sometimes the correlation between training and business outcomes can be hazy, but comparing performance before and after significant training initiatives can offer important information. About 16 percent of respondents measure this, and only a small number report measuring broader impact on the community or the environment.
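
To make the scenario-decision point concrete, the sketch below shows one way a course could record each decision a learner makes, using the Experience API (xAPI), a common standard for tracking learning events. This is a minimal illustration, not something prescribed in Thalheimer’s report: the LRS endpoint, credentials, verb choice, and ScenarioDecision shape are all assumptions made for the example.

```typescript
// Sketch: recording a learner's scenario decision as an xAPI statement.
// The endpoint, credentials, and activity IDs are placeholders, not a real LRS.

interface ScenarioDecision {
  learnerEmail: string;
  scenarioId: string; // activity URI for the branching scenario (placeholder)
  choiceId: string;   // which option the learner picked
  correct: boolean;   // did the choice match the desired on-the-job behavior?
}

async function recordDecision(d: ScenarioDecision): Promise<void> {
  // A bare-bones xAPI statement: who did what, to which activity, with what result.
  const statement = {
    actor: { mbox: `mailto:${d.learnerEmail}` },
    verb: {
      id: "http://adlnet.gov/expapi/verbs/responded",
      display: { "en-US": "responded" },
    },
    object: {
      id: d.scenarioId,
      definition: { type: "http://adlnet.gov/expapi/activities/simulation" },
    },
    result: { success: d.correct, response: d.choiceId },
  };

  // POST the statement to the Learning Record Store (placeholder URL).
  const res = await fetch("https://lrs.example.com/xapi/statements", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Experience-API-Version": "1.0.3",
      Authorization: "Basic " + btoa("lrsUser:lrsPassword"), // placeholder creds
    },
    body: JSON.stringify(statement),
  });
  if (!res.ok) throw new Error(`LRS rejected statement: ${res.status}`);
}
```

Because each statement carries the learner’s actual choice and whether it was correct, L&D can later query the record store for decision accuracy across learners rather than settling for completion counts.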
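
For the before-and-after measurement described under “Performance on the job,” the core arithmetic can be as simple as averaging per-learner change on one job metric. The sketch below is hypothetical: the metric, data shape, and numbers are invented, and a real analysis would want a control group or a significance test before crediting training with the change.

```typescript
// Sketch: comparing a job metric captured before and after training.
// Assumes one paired observation per learner; all data here is illustrative.

interface PrePost {
  learnerId: string;
  before: number; // e.g., calls resolved per hour, measured pre-training
  after: number;  // the same metric, measured post-training
}

function meanImprovement(records: PrePost[]): number {
  if (records.length === 0) return 0;
  // Average the per-learner change so each learner counts once.
  const totalChange = records.reduce((sum, r) => sum + (r.after - r.before), 0);
  return totalChange / records.length;
}

// Invented sample data for illustration only.
const sample: PrePost[] = [
  { learnerId: "a", before: 6.1, after: 7.4 },
  { learnerId: "b", before: 5.8, after: 6.9 },
  { learnerId: "c", before: 7.0, after: 7.2 },
];

console.log(meanImprovement(sample).toFixed(2)); // prints "0.87"
```

The same before-and-after framing scales up to the business-outcome measures in the last bullet; only the metric changes.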

Closing the gap

Improving eLearning is a challenge under the best of circumstances; it’s impossible if you don’t have a baseline against which to measure improvement. Thalheimer has a few suggestions for closing the gap between what’s usually measured and what L&D should measure:

  • Build evaluation into eLearning from the very beginning of the design stage.
  • Create an evaluation strategy.
  • Create better learner surveys—Thalheimer has written extensively on the use of performance-focused smile sheets.
  • Offer learners the opportunity to practice decision-making and skills during training—and measure their performance.
  • Step back and see the bigger picture: Figure out how to measure the effect of training on the entire organization’s performance.

Download the full report to learn more.