An interesting anthropological phenomenon occurred as a consequence of World War II. Pacific Islanders who had encountered Western civilization for the first time created a set of beliefs about how they could obtain more of the unusual goods these newcomers had introduced. Rituals were created to bring about the return of the providers, or at least of their mana, the gifts that rained down from above. These were called cargo cults.

We seem to have the same situation in the eLearning industry. We act as if, hearing of wondrous rewards others have experienced, we too shall have fantastic outcomes by investing in the same approaches. Yet this frequently leads to disappointment rather than the expected boon. What’s going on here?

Shiny Object Syndrome and eLearning

There are a variety of reasons why people keep placing hope in new technologies—or, at least, feel that they need to pay attention to them. New tools (I almost wrote “toys”) appear regularly, and many have legitimate upsides. And, of course, none of them are going to be a panacea. Yet we like to chase the newest shiny object.

It’s fun to experiment with the latest technology. It’s new, it’s cool, and it’s unexplored. Maybe we can do something meaningful with it! Microlearning, AI, AR, and VR: these are currently the shiny objects of interest. Each has some meaningful affordances, and some real barriers. And yet none of these are going to address the full suite of needs. Even microlearning, an umbrella term that encompasses (and doesn’t sufficiently differentiate between) spaced learning and performance support, doesn’t address the whole performance ecosystem.

It may be important to be seen chasing the new technologies. Organizations want to be up-to-date, and it's valid to experiment with new technologies to understand their capabilities and apply their core affordances to meet organizational needs. However, doing so without some caution is cause for worry.

Be wary

There are reasons to be wary. Some of the new approaches, the latest shiny objects, are snake oil: the touted benefits aren't real. The claims may be based on misapprehension of data, over-extension of results, or unsound foundations. The point is that many products are sold on promises that are empty, or bought to meet requirements other than impact. Fast and cheap are OK, but only after you've ensured you're getting an appropriate impact first.

A common problem is the anecdote. People who've tried a new system will tout the benefits they believe they've obtained. There are several problems with that. For one, they have a vested interest in believing their expenditure was worthwhile; otherwise they've wasted money. So you have to take their results with a grain of salt. Second, the results might be a placebo or Hawthorne effect: the mere attention may have improved some measure, or the gain came from some other aligned initiative. And even if their results were real, will they directly transfer to your situation?

You also have to question the data: was it truly independent? Has it been published in refereed journals? Be wary of "proprietary" research. You aren't expected to be an expert in research methodology, so what do other independent voices say? Who can you trust?

The biggest concern, of course, is that this may be distracting from getting the core right first. A gold-plated bad egg is still a bad egg. Money spent chasing fads may well be detracting from going back and ensuring that the core learning design is right.

How to get back to the core

The best way to ensure that the impacts of your initiatives are optimal is to ensure that you’re following what research has determined is the best approach. There’s a reason four individuals in our field (caveat: I was one) banded together to put a stake in the ground about what good learning should be by publishing the eLearning Manifesto. And we’re seeing an increasing suite of research-driven guides to quality learning, from authors including Ruth Clark, Julie Dirksen, and Patti Shank.

Experimenting on top of a quality foundation is smart. Experimenting instead of a quality foundation is eLearning malpractice. If we invest first in making sure our learning and performance initiatives are sound, then we can and should explore new approaches. But get the foundations right. Otherwise, you’re exhibiting a cargo cult mentality.