
Nuts and Bolts: Design Assessments First

by Jane Bozarth

April 2, 2013

Column

“Work backwards. Write the performance goals, decide how you will assess those, and then design the program. The content and activities you create should support eventual achievement of those goals.”

Here’s the problem (and you’ve been there): You move through an online course and reach the final screen, the one meant to prove completion, and it turns out to be 25 badly written multiple-choice questions asking about things like fine points of a policy, seemingly random definitions, or rarely occurring product failures.

Why? Because the designer got the course finished, realized she needed something to prove completion, and went back scrambling for 25 multiple-choice questions. She pulled things that are easy to test, like “define” or “describe” or “match.”

The matter of assessment is one of the most consistent problems I see with instructional design. My beef isn’t even so much with badly written items as with what they are assessing, usually little more than knowledge and recall of often-irrelevant information. And it’s no wonder: When I was researching this column I pulled from my shelf one of the gold-standard instructional design texts from grad school to find exactly two pages on the subject of assessment. The disconnect between workplace performance, course performance objectives, assessment, and content is a huge contributor to learner failure.

Why does it happen?

For starters, assessment is too often an afterthought. Some blame may rest here with the ADDIE project-planning model so popular among instructional designers: its evaluation phase comes last, and many equate assessing learner performance with evaluating the course. Apart from how assessments are created, there is the matter of timing: We depend far too much on summative end-of-course assessment when formative knowledge checks along the way would serve the learner far better.

Another big reason: poorly written or academic learning objectives drive assessment items. Objectives that use verbs such as “list,” “define,” and “describe” practically force the designer to load bulleted content onto slides, followed up with a multiple-choice test or Jeopardy!-style quiz. The content is easy to write, and bullet, and test … but the items aren’t testing performance. Really: When’s the last time your boss asked you to list something? Do you want a washing machine repairman who can name parts or one who can fix the machine? How about the surgeon who will be taking out your appendix?

How to fix it?

Short answer? Create assessments before developing content. Once you’ve established the workplace performance and set the performance objectives for learners, create the plan for how you’ll assess them. Course content and activities should support eventual achievement of those goals. My geekier readers will recognize a bit of similarity here to test-driven software development, a practice that grew out of the extreme programming movement. Those less geeky, or from other fields, might remember Stephen Covey’s mantra, “Begin with the end in mind.” Is the desired work performance that the washing machine repairman will replace the agitator? Assessing that may mean an interactive online simulation on replacing the washing machine agitator or an onsite supervised final exam in which the learner replaces the agitator. In that case, design the course so the learner can do exactly that. In our work the end should be successful work performance, not scoring 98 on a knowledge quiz.
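For the geekier readers, here is a rough sketch of that test-driven rhythm in Python. It is only an analogy, and the washer and test names are invented for this illustration; the point is simply the order of work: the test (the assessment) is written first and defines success, and the implementation (the content) is written second, only to satisfy it.

    import unittest

    class Washer:
        """Hypothetical washer model used only for this illustration."""
        def __init__(self):
            self.agitator = "worn"

        def replace_agitator(self):
            # The "content": written second, only to make the test below pass.
            self.agitator = "new"

    class TestWasherRepair(unittest.TestCase):
        # The "assessment": written first, it defines what success looks like.
        def test_agitator_gets_replaced(self):
            washer = Washer()
            washer.replace_agitator()
            self.assertEqual(washer.agitator, "new")

    if __name__ == "__main__":
        unittest.main()

Swap “test” for “assessment” and “code” for “course content” and you have the same discipline: decide what passing looks like before you build anything.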

So: Work backwards. Write the performance goals, decide how you will assess those, and then design the program. The content and activities you create should support eventual achievement of those goals. Otherwise other aspects of the design process will affect the assessment, and not in useful ways: Once you have the content, it’s just too easy to start pulling assessment items from it.

Want more?

Hale, J. (2002). Performance-Based Evaluation. San Francisco: Jossey-Bass/Pfeiffer.

Comments


Great article, Jane, and I agree that the true measure of learning is being able to perform in context. I would point out, however, that many of our courses don’t support learning that goes all the way to the point of performance. In order for our washer repairman to be able to fix a washer (the real performance), we would have to build learning strategies (like simulations) that actually provide that practice. We can argue about whether all of our coursework should go that far, but the fact of the matter is that a lot of our courseware doesn’t go that far – for reasons of time constraints, budget constraints, or – hopefully – a course’s position in a blended learning strategy that starts with knowing the parts of the machine and ends with being able to competently and efficiently fix the washer. It may sometimes be helpful to pause and test whether a learner has obtained that enabling knowledge (list, describe, etc.), but it is even more important that we continue to stretch the concept of “course” so that the test for completion of the course is indeed performance. That’s the level of learning that we are shooting for in the end, and quite a bit of the learning that we design stops well short of that goal. We get a lot of test items in the knowledge and comprehension areas of Bloom’s Taxonomy because that’s the kind of learning that the “course” achieves – and that is perhaps the thing that we need to change.
Agreed. If stakeholders accept (which is another way of saying they refuse to allow for better budget/time/development talent) that the outcome of a "course" is only for the repairman to name the parts of the machine, then we have a different problem. On another note -- maybe another column -- I have a very, very hard time getting those same stakeholders to articulate exactly what desired performance IS. It's one thing for the washer repair training, but for "leadership" and "communications" skills courses? Not so much. Which would bother me less if it weren't for the fact that employees are at some point evaluated on their performance of those things.
Jane
Over here we follow the ELO (Enabling Learning Objective) mantra of at least one assessment question per ELO. It definitely helps with assessment development.
Jane,
What can I say? I'm disappointed.

Here's how it should go, using your example.

Course Goal: Repair a Maytag 200

Sample Objective: Using a continuity tester, the student will check the five major circuits of a Maytag 200 and determine if a short exists.

Test Item: You give the student a continuity tester and a Maytag 200 and have the student test the circuits.

The course goal drives real-world, performance-based objectives. The assessment, for all intents and purposes, is an extension of the objective. The assessment is embedded in the course objective.
Hi Jane,

I couldn't agree more. What you describe is exactly the design process we support in easygenerator: first define your learning objectives, then figure out how you want to assess them, and finally add the content. We find that this improves the quality of a lot of eLearning courses and makes them smaller as well.
From my perspective, it is a vicious circle in that SMEs are accustomed to the type of tests you describe and expect them. If you present them with anything more difficult they send you back to the drawing board to dumb down the questions. It's sad, but true.
Absolutely great article! It's striking how easy it is to test learners during a course in an engaging way. In our article about scenario-based learning (http://www.learningsolutionsmag.com/articles/1108/how-to-engage-learners-with-scenario-based-learning-) my colleague and I note that we get more and more requests for scenario-based learning (SBL). An SBL is in fact an assessment. In my opinion, creating an SBL forces you to design with learning goals in mind. The beauty of an SBL is that, with reference to Bloom's, you can quite easily move the learner beyond merely acquiring knowledge toward application, evaluation, and creation of knowledge. It must be said, however, that even with an SBL the learner needs basic knowledge of the topics it covers. Another great thing about SBLs: using modern development tools like easygenerator, it is fairly easy to create an SBL at relatively low cost.
I would highly recommend books, articles, and webinars by Michael Allen and Allen Interactions for more information on this topic. Hopefully more IDs will begin to realize that their "PowerPoint" designed courses don't cut it.
All of our designers here would agree whole-heartedly with your article! However, 90% of what we are trying to teach in our organization has no "do" at the end. Most are courses that impart policies and legislation; the rest teach "people" skills that are hard to pin down. My biggest pet peeve, though, is that stakeholders always try to TEACH in their questions! They want every answer to be "all of the above" and the feedback is a giant paragraph of info, with some of it being new to the learner.
@ctalford: Have you ever heard Peter Block speak on getting your expertise used? I recommend, as reading for everyone in our business, his book _Flawless Consulting: A Guide to Getting Your Expertise Used_. Get the third edition that came out in 2011. It's updated and extremely relevant to our work. Begin with Chapters 18 and 19, the last two in the book, which are all about learning and what readers of this magazine do. Then read the rest of the book to find out how to do what Block is talking about. It isn't a cut-and-dried kind of recipe, but it's worth the effort to learn to do.
I like designing the assessment STRATEGY and the objectives at the same time. Put them in two columns of a table next to each other and see if they "match." If not, is it the strategy that needs to shift, or does the objective need to be rewritten to better point to the training goal we're trying to assess? If the budget doesn't allow for full-on simulations, can we use hotspots to mimic a simulation? Can we use decision-based multiple choice questions to provide "real-life" scenarios?
Also agreed that it's worth pointing out to our stakeholders that our online course on the parts and systems involved in washer repair is not the same as a repair course. Could be part of a blended approach, as stated!
Can't agree more... this reminds me of one approach I always follow with my storyboards: I write the topic slides and summary first, and only at the end do I frame the Intro. The Intro is a bigger picture of what the topic entails, and how can I effectively write it without first writing about the topic itself? So, after the topic slides and summary, I have that bigger picture, and by that time I also know what to emphasize in the Intro. Thanks for the post - will do the same for assessments as well!