In this era of accelerating change, it’s a given that we’re facing more novel and/or ambiguous situations. The issue then is: how do we cope? Our first line of defense is (or should be) science: specific results, or inferences from theory. However, these may fall short. What do we do then? The short answer is experimentation, and experimentation in L&D should be part of the corporate culture.
We prefer to use good principles to determine our actions. There are well-established principles that can help us adapt: learning theory, behavioral economics, and factors for innovation are all relevant. These can guide us to good answers for how to design, how to solve, how to answer.
(As an aside, my personal take is that I still see many gaps between what we know and what is practiced. Open plan seating, yearly reviews, learning styles, millennials, and more are all still extant despite being soundly debunked. We aren’t professional enough yet in our practices!)
Even when we do follow science, however, there are times when it’s not obvious what to do. The situation is new enough that there aren’t results yet, or it’s complex enough that it’s not clear what frameworks hold. What do we do then? Experiment!
The Cynefin framework
Dave Snowden’s Cynefin framework offers a useful way to characterize these situations. His model has five domains: the simple, the complicated, the complex, the chaotic, and disorder. For the first two there are either rote solutions or experts who know what to do; these are known situations. The next two require different approaches.
For what Snowden characterizes as complex domains—where the right course of action can’t be determined but you have time to react—he recommends experimentation. When you get to the extreme of chaotic domains, he recommends just trying something and seeing what results.
The complex domain is really where you’re innovating. You make some hypotheses, create a test, evaluate the results, and iterate until you determine the best course. Should we revise our interface this way? Should we use AR here? Let’s prototype and test. We can run an A/B study, or compare new results to a baseline. In many cases, the internet makes this easy with the ability to create different web pages with relative ease, for instance.
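As a concrete illustration of the A/B comparison mentioned above, here is a minimal sketch of how you might test whether a revised design outperforms a baseline. The completion numbers are hypothetical, and the helper name `two_proportion_z` is my own; it implements a standard two-proportion z-test, just one of several reasonable ways to compare two variants.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test: does variant B's rate differ from A's?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    # Pool the rates under the null hypothesis of "no difference"
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 120/1000 completions with the current interface,
# 150/1000 with the revised one
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The point isn’t the statistics per se; it’s that a cheap, explicit test like this turns “should we revise the interface?” from an opinion into data you can act on.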
An important outcome is that we have a basis upon which to move. However, there are a number of requirements for running experiments successfully and getting useful answers, and it’s worth reviewing them.
The first criterion is to make sure you are asking the right question! For instance, there’s the all-too-familiar “we need a course on this” request, without knowing the real problem. In this case, we have to be clear what the problem is that we need an answer to. To put it another way, we need to know what the data will tell us. On principle, you shouldn’t collect data you don’t know what to do with. Ensure there’s a match between the question and the data you’ll receive from the experiment.
Then a second criterion is creating a design where the data will be relevant. Are you testing the right range of values, and with sufficient spacing? For instance, testing learning one week after a class, when the performance opportunities come on average every four weeks, won’t tell you if your learning is working. Are you testing the right people? Are they representative? There are a lot of methodological issues that matter. It’s not difficult, but it is important!
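One of those methodological issues, "are you testing enough people?", can be estimated up front. Below is a minimal sketch of a standard sample-size approximation for detecting a change between two rates; the numbers and the function name `sample_size_per_group` are hypothetical, and the constants assume the conventional 5% significance level and 80% power.

```python
import math

def sample_size_per_group(p1, p2):
    """Approximate participants needed per group to detect a change
    from baseline rate p1 to target rate p2 (alpha=0.05, power=0.80)."""
    z_alpha = 1.96  # two-sided test at the 5% significance level
    z_beta = 0.84   # corresponds to 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical: detecting a lift from a 12% to a 15% completion rate
print(sample_size_per_group(0.12, 0.15))
```

Even a rough calculation like this is valuable: it tells you before you run the experiment whether your pilot group is large enough for the data to mean anything.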
And, you need to build some slack into your schedules and budgets to allow for this sort of decision making. If you’re expected to just execute, with no margin for evaluating different courses, you’ll either cut corners or be off the mark.
Building an innovation culture
There are other entailments, as well. For one, it has to be safe to experiment. Amy Edmondson points this out in her book Teaming: How Organizations Learn, Innovate, and Compete in the Knowledge Economy. If it’s not safe to fail, you won’t experiment (or, at least, won’t share the learning). Safety in this sense is a critical component of an innovation culture. And such a culture is the only sustainable way to cope with increasing change.
A second entailment is the benefit of sharing. What we learn can stay with us, but that doesn’t benefit the organization. One of the recurrent problems is disparate teams making the same mistake because no one shared. You don’t want to reward the failure, but you do want to celebrate the lesson learned. This is where a “show your work” mentality is so valuable.
We prefer to act on defensible bases for the success of our teams and our organizations. Established science is the first choice: don’t reinvent the wheel when you can avoid it. When science doesn’t yet have an answer, experiments are next best; empirical testing takes time, but it’s accurate if you’ve used an appropriate methodology. Random or “guess” decisions may be necessary in the worst situations, but that’s not the way to bet.
Making experimentation part of the L&D corporate culture, with all the entailments, is a step on the path to a learning organization. And that’s the success path in the long term. Getting there with alacrity is a defensible approach. It’s time to experiment with experimenting!
Clark Quinn will be participating in The eLearning Guild’s second annual Executive Forum on October 23, 2018. This special one-day experience, which takes place prior to the Guild’s DevLearn 2018 Conference & Expo, is designed for senior learning and development leaders who want to collaborate with peers and industry experts on cutting-edge strategies that address the key challenges of the modern learning organization.