Good learning research helps you improve your practice and make decisions. For example, let’s say you want to know how long you should make the videos in a current eLearning project. The client says 15 minutes is fine. You know that’s way too long, but you need research to back you up. But what research should you use to convince her? Well, good research, of course.

This month I’ll begin to help you determine what good research is. A few months ago, I added the role of Research Director at The eLearning Guild to my plate. A large part of that role is bringing you research you can use to improve your practice and make decisions. But I also want to help you become a better user of learning research in general.

So this month, I’ll start by reviewing some of the important things to consider when looking at learning research.

Sample size

When you read learning (or any other) research, one thing to look at is the number of people who were sampled (surveyed, interviewed, or observed). The sample, or the “n,” is the number of participants or respondents. In general, the larger the sample size, the better. The problem with smaller samples is that results are more likely to occur just by chance than they are with larger samples.

For example, if 16 out of 20 people watching a video tuned out after an average of six minutes, you might not be sure you have enough data to make inferences about their attention span. On the other hand, if a much larger sample, say 1,600 out of 2,000 people, tuned out after an average of six minutes, you would feel much more confident in making inferences about their attention span, all other things being equal.
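If you want to see why the bigger sample inspires more confidence, here is a rough back-of-the-envelope sketch in Python (my own illustration, not part of any actual study). It uses a simple normal-approximation confidence interval for the proportion of viewers who tuned out; the exact statistics matter less than how much the plausible range narrows as the sample grows.

    # Back-of-the-envelope illustration using the hypothetical numbers above:
    # an approximate 95% confidence interval for the proportion who tuned out.
    import math

    def tune_out_interval(tuned_out, total, z=1.96):
        """Observed proportion plus an approximate 95% interval (normal approximation)."""
        p = tuned_out / total
        margin = z * math.sqrt(p * (1 - p) / total)
        return p, max(0.0, p - margin), min(1.0, p + margin)

    for tuned_out, total in [(16, 20), (1600, 2000)]:
        p, low, high = tune_out_interval(tuned_out, total)
        print(f"{tuned_out}/{total}: {p:.0%} tuned out; plausible range roughly {low:.0%} to {high:.0%}")

With 16 out of 20, the plausible range runs from roughly 62 percent to 98 percent; with 1,600 out of 2,000, it narrows to roughly 78 to 82 percent. Same observed proportion, far less room for chance.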

When reviewing research, it’s important to pay attention to the sample sizes and to understand that smaller samples support weaker inferences than larger samples do. (There are certainly more things to consider, but let’s not make this short column into a graduate research seminar.)

Sample type

One problem with much of the research you read in our field is that most studies are based on “convenience samples.” A convenience sample means just what it sounds like: the participants were convenient to sample. In other words, the participants or respondents chose to take part or were easy to reach. This is different from a random sample, in which participants are chosen so that they truly represent the larger population of potential participants or respondents. Do people who choose to take a survey or answer an interviewer’s questions differ from everyone who could have responded? The truth is that we don’t know, because we don’t have information about those who didn’t respond.

From what I have seen, the main players who do research in this area, surveying practitioners and managers of training, eLearning, human resources, and related disciplines, all use convenience samples, not random samples. Random sampling would be very expensive and difficult to do. So we often have to work with convenience samples; just be aware that they may not be fully representative. That’s even more reason to keep an eye on who is being sampled.

And so, who is being sampled? Are they like the people you want to apply the research to? Let’s say you find some great research on video attention span, based on surveys of 4,000 people, but those people were all middle-school children. Does this research apply to adults in workplace settings, too? I would think that the attention spans of middle-schoolers and adults would differ, but perhaps not. If the people the research was done with aren’t the same as, or similar to, the people you are applying the research to, the results may not transfer to your situation, so you should certainly take that into consideration.

Outcomes

Researchers typically base the conclusions of a study on its methodology, so it’s worth asking whether that methodology actually measures what it claims to. For example, in the hypothetical video attention span study, I said that the researchers measured when participants “tuned out.” In real life, we could do this by observing the participants. But what if, instead of observing participants, we asked them to write down how long it took before they got bored? Which method would give us a better measure of attention span? We regularly ask people to “self-report” things that we know they do not accurately self-report (such as how much they ate or how often they lie).

So another thing to consider when reviewing research is what was measured and how it was measured. Is the measure likely to accurately represent the thing being measured? Direct measurement is always better than self-reporting. And when you do have to ask people questions rather than measure directly, make sure to ask questions they can and will answer. (We too often ask people to answer things they cannot or will not answer.)

These are some of the big things to look at when reviewing research. Much research is poorly done, and using poor research won’t help you improve your practice or make better decisions. Next time I’ll show you some examples of good and not-so-good research.