Marketers and journalists employ A/B testing to measure the relative effectiveness of content, headlines, images, and more. Lynne McNamee, president of Lone Armadillo Marketing Agency and a speaker at The eLearning Guild’s recent Learning Personalization Summit, suggests using A/B testing in eLearning to measure—and improve—effectiveness.

A/B testing refers to trying out two variations of a design, content item, title, or other element and measuring the response. In marketing, the desired response might be click-throughs or completed purchases; for journalists, the goal might be shares, click-throughs, or longer time spent on the page. What’s in it for eLearning designers and developers?

Information. Using A/B testing is an opportunity to gather data. “One of the big ones I can suggest is the time it takes people to get the information that’s being conveyed,” McNamee said.

It’s also a way to increase choices for learners, which can make eLearning more engaging and more effective.

Test only one variable at a time

For an A/B test to guide design, the change must be limited to a single element of the design or content. Within that constraint, A/B tests can compare a wide range of variables, such as:

  • Response to different color schemes or typefaces and sizes
  • Ease of use of two navigational paradigms
  • Response to different storylines
  • Comprehension of text-based content vs. an infographic
  • Response to text presentation vs. video
  • Comprehension of video vs. video with transcript and/or closed captioning
  • Engagement with text headed with two different titles
  • Application of knowledge after playing a game to learn a skill vs. reading instructions
  • Performance after using a simulation vs. completing an asynchronous eLearning module and answering questions at the end

Multivariate testing can be done, McNamee said, but it doesn’t tell you which variable is causing the difference in response or results, so it’s not all that helpful in guiding design. If you want to test three options—say, the effectiveness of using text, video, and an infographic to present the same content—she advises doing two separate A/B tests. In each one, she said, the text group serves as the control. You’d test the text vs. the infographic, then test (with different learners) the text vs. the video.

She advises starting with a simple test: “Have two different versions—one that has the same information presented as text and one that has, say, an infographic. Not in the assessment; in the actual course delivery. And then see how long it takes someone to go through that section.”
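
The mechanics of that first test are easy to sketch. The Python snippet below is a minimal illustration only, not McNamee's method or any particular platform's API: it assigns each learner to the text or infographic variant of a section and compares how long, on average, each group took to get through it. The variant names, learner split, and timing figures are all hypothetical.

```python
import hashlib
from statistics import mean

# Two versions of the same course section: identical information,
# different presentation. "text" is the control.
VARIANTS = ["text", "infographic"]

def assign_variant(learner_id: str) -> str:
    """Deterministically split learners 50/50 across the two variants."""
    digest = hashlib.md5(learner_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Illustrative timing data: seconds each learner needed to get through the
# section. In practice these figures would come from the LMS or LRS.
seconds_per_learner = {
    "text":        [312, 287, 355, 301, 298],
    "infographic": [244, 260, 231, 275, 252],
}

for variant in VARIANTS:
    times = seconds_per_learner[variant]
    print(f"{variant}: mean time {mean(times):.0f} s (n={len(times)})")

# To bring a third modality (e.g., video) into the comparison, run a second
# A/B test with a different group of learners, again with "text" as the control.
```

With real data and a reasonable sample size, a significance test on the two groups' times would indicate whether the gap is more than noise before the result is used to guide design.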

Offering choices to learners

Using A/B testing simply to identify the stronger option and keep only that one—a common practice in marketing and journalism—could be a mistake in eLearning. McNamee emphasizes increasing learner choice. “Have the employee self-select,” she said. “If you have the choice between having some information as text or having an infographic, which would you prefer?”

McNamee continued, “You look at the general educational system and teachers are trying different approaches. That sort of variation and individual preference has been reinforced [with K–12 and college students] over 12 to 16 years, so to then abandon it and ‘everyone has to learn things the same way’ just seems counterintuitive to me.” She points out that younger employees, people now in their 20s and 30s, have been learning on tablets and personalizing their digital environments practically “since birth.”

On the other hand, people might self-select a modality that is not ideal for them. McNamee offers a solution to that conundrum: “They choose, and for that section of the module you show that version the first time through. If they don’t pass that piece of the assessment, when you redeliver the course, you actually show the alternate version,” she said. This is more effective than repeatedly showing the same content, the same way, when the learner is clearly not mastering it. Having options therefore benefits both the learners, who have more control over their learning, and the organization, which can immediately present an alternative to a learner who is struggling. This echoes ideas inherent in universal and user-centered design approaches, such as “plus-one thinking.”
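
That redelivery rule is simple enough to state as code. The sketch below is a hypothetical illustration of the branching logic, not a feature of any particular LMS: the learner sees their self-selected version on the first pass, and the alternate version is substituted only when the matching piece of the assessment is failed.

```python
# Hypothetical mapping from each modality to its alternate version
# for one section of a module.
ALTERNATES = {
    "text": "infographic",
    "infographic": "text",
}

def next_modality(chosen: str, passed_assessment: bool) -> str:
    """Decide which version of the section to show on the next delivery.

    Learners see the modality they self-selected the first time through.
    If they fail that piece of the assessment, the course is redelivered
    with the alternate version instead of repeating the same content
    the same way.
    """
    if passed_assessment:
        return chosen                       # no change needed
    return ALTERNATES.get(chosen, chosen)   # swap to the alternate version

# A learner who chose text but failed that section's assessment
# gets the infographic on the retake.
print(next_modality("text", passed_assessment=False))   # -> infographic
print(next_modality("infographic", passed_assessment=True))  # -> infographic
```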

A/B testing requires clear goals

Whether an L&D team uses A/B testing to narrow down the options they will offer to learners or to expand the options available, the testing is useful for gathering data. “All of this is capturing data and then using the data to deliver a better experience and a more personalized experience,” McNamee said.

But it’s essential to know what the longer-term goal is. “What are we trying to improve? What are we trying to learn? It almost seems like it’s a blank slate. So many organizations haven’t been using an LRS; they haven’t been capturing much beyond the registrations and the completions,” she said. Most LMSs do not capture data about which content learners engage with and for how long, but an LRS-based, xAPI-compatible system can capture that—and more.
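
Concretely, “which content and for how long” is the sort of thing an xAPI statement sent to a learning record store (LRS) can express. The sketch below is only an illustration: the field names follow the xAPI specification, but the learner, activity ID, duration, and LRS endpoint are all invented for the example.

```python
import json
# import requests  # uncomment to actually send the statement to an LRS

# One xAPI statement: this learner experienced the infographic variant of
# a section and spent 4 minutes 12 seconds on it. Structure follows the
# xAPI spec; the specific values are illustrative.
statement = {
    "actor": {"name": "Example Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/experienced",
        "display": {"en-US": "experienced"},
    },
    "object": {
        "id": "https://example.com/course/section-3/infographic",
        "definition": {"name": {"en-US": "Section 3 (infographic variant)"}},
    },
    "result": {"duration": "PT4M12S"},  # ISO 8601 duration
}

print(json.dumps(statement, indent=2))

# Recording it is a POST to the LRS statements endpoint (endpoint and
# credentials here are hypothetical):
# requests.post(
#     "https://lrs.example.com/xapi/statements",
#     json=statement,
#     headers={"X-Experience-API-Version": "1.0.3"},
#     auth=("lrs_user", "lrs_password"),
# )
```

Aggregating statements like this one is what makes comparisons such as the time-per-variant sketch above possible without hand-collected data.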

Knowing how learners engage with content is a starting point, not an end goal, though. Tying learning goals to business goals is a theme for McNamee. She pushes back against the idea that encouraging learners to spend more time with an eLearning module is itself a desirable goal, emphasizing instead the business goal underlying much corporate training: improved productivity. “The time people spend learning is time they’re not doing what the boss hired them to do,” she said. And “increasing learner engagement” is a nebulous target. “What’s engaging and motivating for one person is not the same as for someone else,” McNamee said.

Rather than trying to get people to engage with learning, L&D managers might ask: “Can they pass the assessment the first time and in less time?” If so, then that’s an indication of effective eLearning. “These are really smart, passionate professionals. They know what they want to do. If we start with that level 5 of Kirkpatrick, of those business measures, that’s what I think one of the biggest hurdles is.” (The Kirkpatrick Model considers four levels of evaluation for training: participant reaction, learning, behavior, and results; Dr. Jack Phillips added a fifth level, measuring ROI.)

Earn a ‘seat at the table’

Gathering A/B testing data helped marketing professionals show their value. “Once we started to get the data, that’s how we started to get more resources,” McNamee said. “We kept seeing actual results. We were seeing the effectiveness. We were seeing that we could close a deal much more closely, we could get repeat behavior, we could develop loyalty, we could spread the word. We were getting people to achieve what we were after them achieving.”

McNamee emphasizes that marketing professionals know what L&D can learn from marketing because they’ve been in the same position. “Everyone keeps talking about having a seat at the table. Well, this is how marketers really did it,” she said. With data.

“The learning department should be integrated throughout the organization, I think, and have that valuable voice,” she added. “They’re bringing a real value and service to the organization, but their objectives need to be aligned with the business objectives.”

Start small

A small L&D team might quail at the idea of creating multiple eLearning products with the same content, but it’s possible to start small, gather some data, and make the case for additional resources. “You’re changing the mindset of your organization; you’re showing this proof to the management that this is how they are going to see those results,” McNamee said.

The biggest mistakes people make are trying to do too much too fast and not having a way to measure what they are testing, McNamee said. Before going all-in on a big initiative, test; find someone in the organization with a specific problem, and offer to help solve it. To apply A/B testing in eLearning, start with a simple A/B test, presenting the same information using two different modalities. Figure out which one works best, then use that as the control to test another option, she said.

“Know what is the business problem that you’re trying to solve—not what learning problem you’re trying to solve,” she said. “Let the business metrics clearly define the path—and make sure you have some way to measure it.”