Turning old ideas about learner feedback upside down could lead to improved training, higher learner engagement, and fewer knowledge gaps company-wide.

According to Evaluating Learning: Insights from Learning Professionals, an eLearning Guild research report by Will Thalheimer, most organizations collect information about learner attendance and course completion; more than 70 percent also solicit feedback from users on their perception of the learning. But what if L&D also asked learners to provide feedback on the content of eLearning?

If there’s an error in a test question, especially on a topic many learners know well, they’ll notice. “If everybody gets it wrong, they’ll talk about the fact that everybody got it wrong,” said Susan Hurrell, the director of inbound marketing at Neovation. “That diminishes the trust in the training.”

On the other hand, “If you give the learner the opportunity to say, ‘I got this wrong; here’s my understanding—are you sure your answer is right?’ then they own the training. They have a vested interest in making the training better so they as an individual can understand.”

Neovation has built this ability into its newest training product, OttoLearn®. There’s a cost, of course: “By inviting the trainee to say, ‘I didn’t get it. Can you help me understand why?’—yes, there’s going to be overhead in terms of communication back and forth, but the training will be better,” Hurrell said.

“If you have 100 people on the team that are trying to make the training better for themselves, they are making that training better for everyone on the team,” she said. “The learners will improve more quickly. And they will care. People love to have a voice; it gives them ownership.”

Annual updates vs. continual maintenance

Much training is delivered on an annual basis. This means that errors might be found only after a complex—and expensive—eLearning course has been delivered, and the next update isn’t due to take place for a year. “If there’s an error in the material, then you’ve trained an entire year’s worth of people incorrectly,” Hurrell pointed out.

Her company’s approach—making it easy to fix errors in “almost real time,” as well as to add and update content—means corrections can be deployed quickly. That creates “an immediacy that makes training much more conversational,” she said. “As opposed to being passive consumers of canned training, that opportunity really allows [learners] to take ownership and participate in training that benefits the cohort.”

Even though the feedback is delivered through online forms rather than an actual conversation, providing feedback and then seeing results right away can galvanize learners. Their engagement improves because they see that the quality of the training matters to the organization—not just the ability to say that everyone “completed” the training.

Each company must, of course, have “a trainer- or organization-directed conversation on how they want to approach the feedback that they are going to get,” Hurrell said; not all will have the ability to maintain training on a continuous basis. “But the opportunity—in a broad brush kind of a way—for that feedback, I believe, truly makes training better.”

Learn how your peers evaluate training

Compare your organization’s approach to evaluating eLearning with what your peers are doing, and learn how to collect more effective data and feedback. Download your copy of Evaluating Learning: Insights from Learning Professionals today.