
A.D.D.I.E. Meets the Kirkpatrick Four: A 3-Act Play

by Tita Beal

March 26, 2007


Evaluation is a key part of Instructional Design: it is the only way to know whether an e-Learning application is producing actual learning. Unfortunately, evaluation is also one of the parts of ID most frequently left out or short-changed. While there is some disagreement about whether the Kirkpatrick Four Level model is totally correct, it remains the most widely accepted evaluation method. This article will give you a good picture of what the four levels are and what they aim to achieve.

SYNOPSIS: Our story, rated X for Xcellence, begins with ADDIE ... a woman who knows instructional design so well that her name has become synonymous with the steps in the process: Analysis, Design, Development, Implementation, Evaluation.


While ADDIE is busy working with her team to design new online and onsite instructional programs, her evaluation specialists usually wait. ADDIE often does not call them until after her team has finished the design. Then the evaluation team puts on its judges’ wigs and robes in order to design a customized evaluation system, and to develop evaluative instruments.

But on the day of this story, while ADDIE and her instructional design team were creating a new e-Learning program, all was not well. As the curtain comes up on our play, the highly qualified members of ADDIE’s design and development team have just started the Analysis phase of their I.D. (Instructional Design) process.

Cast

NARRATOR

THE KIRKPATRICK FOUR

ADDIE

PATRICIA

JACK

Production Notes: The Narrator reads the script that follows as the characters in the story act out their parts, saying their lines when the NARRATOR mentions them.

Act I

[A windowless pantry with coffee machines, tea bags and powdered milk, somewhere in the middle of an office building down five long halls, past several acres of cubicles.]

NARRATOR

The design team was working well together. Before they dealt with evaluation, the designers wanted to analyze the need for training, set learning objectives, and clarify the content — the information, concepts, skills, procedures, and attitudes — they would include in the course. The design team also wanted to plan methods and media as well as sequence the content before calling in the evaluation specialists. ADDIE felt this was “good practice” — she had to know what the program was teaching before discussing how the evaluation team would determine whether participants mastered the content.

She had no idea that trouble was brewing among the four evaluation specialists waiting around in the break room. Actually, only three were waiting around; the fourth was repairing the coffee machine’s broken cup holder.

One of the group, an elegant executive in a dark blue suit, suddenly blocked access to the coffee machine and challenged the others: “We’re wasting valuable organizational time and money sitting around here waiting. Why should four experts trained personally by Donald Kirkpatrick be stuck in a coffee klatch?! Come on! We’re the Kirkpatrick Four. Let’s go take our rightful place in the I.D. process!!” This self-appointed rebel leader tried to guide a laid-back, pleasant, easy-going woman to the door. “You go first — you’re Kirk One, the specialist in satisfaction surveys. The rest of us will come in after you present your reaction feedback sheets.”

Instead of budging, Kirk One patted the leader’s shoulder: “Chill, Kirk Four — they’ll hand out my reaction forms and probably never have time for the rest of you anyway.” Kirk One pushed the leader away from the coffee machine, adding, “Meanwhile, I want to try the vanilla cappuccino.”

“That’s exactly the problem!” snapped the rebel leader. “We wait around until the end. And then there’s never time for any evaluation except your ‘smile sheets.’ Come on, Kirk One — you’ve got to make sure they at least set objectives that participants consider valuable and useful.”

Kirk Two, a worried, professorial evaluator holding a stack of exam books, watched Kirk One go to the door to head up the line. “It’s not procedurally correct to barge in on the instructional designers. After all, Evaluation comes at the end of the process.”

But Kirk Two’s complaint didn’t get far — the leader was determined. “Kirk Two, we’re not barging in — we belong there! The instructional designers know that an evaluation system should be part of the initial design. I know you can create rigorous test items for any content. But you must take your place right after Kirk One. You’ve got to make sure the content they teach aligns with the original learning objectives. You’ve got to test if people learned what they really needed to learn, not just their ability to recall whatever content someone dumped into the course. Come on — you line up second, Kirk Two, so you can march right in armed with your Tests.”

As Kirk Two gathered up the test forms and got in line behind Kirk One, the third evaluation specialist (who had been repairing the coffee machine) muttered, “I hope this great take-over won’t just be a lot of hot air. We can march in and demand that the instructional design team include our evaluation team from the start ... but if all we do is argue about instructional design theories, what’s the point? I’d rather fix this coffee cup holder. At least I’ll be doing something useful with my time.”

“That’s exactly why we need you!” replied the leader. “You’ll keep us focused on performance. Kirk One can survey reactions to find out how the participants, facilitators, and even line managers feel about the value and usefulness of the program. Kirk Two can test participants’ ability to understand new concepts, define new terms or carry out new skills and procedures. But if graduates of the program can’t perform better on the job, the program was a waste of time and money. When ADDIE asks who you are, you tell her you’re Kirk Three, in charge of on-the-job performance change.”

Kirk Three stared at Kirk Four with a puzzled look and said, “But isn’t that what you do — check for results?”

Kirk Four, feigning humility, replied, “Well actually, you make sure that the learning program results in performance change on the job. I don’t get involved in the little details of who does what in their e-Learning courses or on the job. My job is to make sure the organization gets a Return on Investment of time and money — to show that the learning program produced needed business results. You know — increased revenue, reduced costs, retention of customers, expanded market share, leveraging previous investments rather than making new purchases, or whatever would make this program valuable to senior management.”

Kirk One looked back at the other three evaluators lining up behind her and laughed. “Hey, you guys! Last year the e-Learning Guild surveyed evaluation strategies and they found that e-Learning specialists rarely have time to look at my reaction sheets. Not many courses even bother to test the participants’ ability to remember key information, define new terms, match new concepts with correct examples, or recall the elements of a skill or procedure. You’re dreaming if you think ADDIE’s team will ever get around to assessing on-the-job performance results or calculating the Return on Investment in the program. But here we come anyway — the Kirkpatrick Four!”

And with that as their battle cry, the insurgents charged into the “war room” — a conference room with lists of training needs on flip charts taped all over the walls — to join ADDIE and her team at the start of the instructional design process rather than sit around waiting for them to tack evaluation on as a last thought.

Act II

[A training war room with tables and walls covered with scribbled-on flip charts and several laptops, their screens lit with worksheets, diagrams, flowcharts, text, or brightly colored Webpages.]

ADDIE tried to hide her shock that the evaluators would interrupt her Analysis of learning needs.

She blocked the door. “Excuse me, but you can’t design an evaluation system before my design team knows what we’re teaching!”

 


Figure 1 Kirk One — Reaction, Ratings, and Raves

 

Kirk Four laid out the evaluation team’s grievances: “ADDIE, we’re tired of being tacked on at the end of your instructional design efforts. Once we go into action, we can create evaluation instruments to identify who did what and how well, what works, what needs improvement. Kirk One finds out how participants feel. Kirk Two makes sure they learned whatever content they had to learn. Kirk Three checks for improvements in participants’ on-the-job performance. And I, Kirk Four, find out if senior management has achieved any overall organizational business and financial goals. I focus on whatever represents a return on the investment in training to the organization’s bean counters — increased revenue, reduced costs, expanded market share.”

ADDIE tried to reassure the intruders: “Don’t worry. We’ll call you in to design a solid evaluation program as soon as we’re ready.”

But the Kirks held their ground. “There’s a big problem when you tack us on to the end,” insisted Kirk Four. “Exactly,” chimed in Kirk One, “we get stuck testing whatever content your instructional designers cram into your course.” Then Kirk Three blasted away, saying, “When you cram as many points as you can fit onto the screens of a presentation, we end up testing recall of points on slides instead of testing performance power.” Even Kirk Two, usually library-quiet, joined the fray with a loud, “You must embed us in every phase of your instructional battle plan!”

Being a good corporate strategist, ADDIE didn’t want to win a battle for procedure only to lose the war for on-the-job performance quality. So she decided to welcome the evaluators into the entire instructional design process from A to E.

Analysis

ADDIE and her team gave in. She asked Kirk One to start by explaining how to build participant preferences into the Analysis of training needs. But Kirk Four objected: “When you don’t think about me until last, you end up leaving me out. You blame me for being impossible. But we should be talking about needed results, not needed topics. If you start at the end, with me, you can ‘backward chain’ from needed results to the concepts and skills people need to master before they can achieve those results.”

ADDIE interrupted, “Why don’t you stop complaining and tell us how you and your R.O.I. strategies fit into the needs analysis?!” So Kirk Four offered to help structure this Analysis by asking a few basic questions about needed organizational results and how the investment in training might return business value:

“What are your organization’s overall business goals that make the investment in this new training program a way to build organizational success? For example, do you need to increase revenue and decrease costs? Do you need to retain customers despite competitive pressures and expand market share? Do you need to leverage past investments in order to avoid the need to purchase new equipment or systems?”

Before she heard Kirk Four’s questions, ADDIE thought she had analyzed the training need thoroughly for all of her courses. For example, her research showed that most sales reps needed to “improve their closing skills by asking for referrals whether or not the prospect agreed to make a purchase.” But now she realized she had focused too narrowly on specific job skills without relating them to the larger picture of the organization’s need to succeed in a competitive industry. Thanks to Kirk Four, ADDIE was able to explain to participants — and to senior management — why the e-Learning program was crucial to the survival of their organization ... and therefore to their jobs.

For the course on closing skills, ADDIE’s team of instructional designers revised the needs statement to read: “Expand market share by asking for referrals after closing a sale, whether or not the prospect agreed to make a purchase.” A slight change, but one that helped senior management start measuring the return on their investment in this and future e-Learning programs. In fact, ADDIE earned points just by taking Kirk Four’s questions to senior management. Line managers couldn’t believe that a staff training person who was not responsible for revenue would even mention R.O.I.
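For readers who want to see the arithmetic behind Kirk Four’s return-on-investment language, here is a minimal worked sketch in Python. The figures are invented purely for illustration (they are not drawn from ADDIE’s organization), and the simple ratio below is just one common way to express R.O.I.

    # Hypothetical figures for the referral-skills course (illustration only).
    program_costs = 40_000           # design, development, and delivery
    attributable_benefits = 100_000  # added margin traced to new referrals

    net_benefit = attributable_benefits - program_costs
    roi_percent = net_benefit / program_costs * 100

    print(f"Net benefit: ${net_benefit:,}")  # Net benefit: $60,000
    print(f"R.O.I.: {roi_percent:.0f}%")     # R.O.I.: 150%

In practice, the hard part is not the arithmetic but isolating how much of the benefit can fairly be attributed to the training rather than to other factors.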

 

Figure 2 Kirk Two — Competency Tests

 

Kirk Three stepped up to the plate next: “ADDIE, instead of just listing topics and skills for the course, now it makes sense to clarify needed on-the-job performance results.” Kirk Three’s questions:

“What performance is needed to achieve these business goals? What strategies do top performers use as opposed to whatever the so-so employees do? What challenges and obstacles do you anticipate will make it difficult to achieve those business goals? And if participants in your course forget everything after a few days — or hours — what kinds of checklists, flowcharts, skill guides, diagrams, or other job aids can they use as on-the-job reminders?”

ADDIE realized her design team had started listing competency areas related to the course they were designing, and then analyzed the key concepts, skills and attitudes needed to achieve those competencies. They had even observed top sales reps asking for referrals to make sure the skills the course taught were based on successes in the field — for example, the reps who got the most referrals had figured out ways to “sell” their prospects on the benefits of making a referral.

But now, thanks to Kirk Three, she realized she and her team needed to link training needs to business goals. That way, when they moved into the Design phase, they would be able to state very clear performance objectives that related to overall business needs. Since increased market share was the organization’s main goal, ADDIE saw that just training reps to ask for referrals was only part of the objective — they also needed to ask for referrals to prospects in new types of market segments in order to significantly expand market share.

At this point Kirk Two told ADDIE, “I don’t just want to make up quiz items to test whatever content your team puts into the course. I want to make sure the content is clearly aligned to specific performance objectives and overall organizational goals.” Kirk Two then worked with ADDIE and her team to answer this needs analysis question:

“What specific knowledge (information, concepts, terminology, etc.), skills, and attitudes do participants need to master in order to perform in ways that support organizational goals?”

ADDIE often did a task analysis as part of her needs analysis. But she usually just focused on training topics, not on the relationship between training content and overall business goals as well as the performance needed to achieve those goals.

She tweaked her needs statement again, this time adding, “In order to ask for a referral by discussing the benefits to the customer of making the referral, participants need to: 1) understand the reasons why prospects may want to refer a sales rep to a colleague even if the customer/prospect has not yet decided to make the purchase, 2) identify clues that indicate which reasons are relevant to different types of customers, and 3) state the benefits of making a referral in ways that fit the customer’s personal style.”

Things were getting too serious for Kirk One, who had been waiting patiently but now warned, “You can have the most effective objectives in the world, but if participants don’t like your program, they won’t learn anything. I know you laugh at my work. Call my feedback questionnaires ‘smile sheets’ if you will, but you can’t just focus on your objectives.” Kirk One then asked the key questions about participants’ reactions:

“How can you make sure participants understand why all this is useful and valuable? You could drive them up the wall with a hundred skill points. What do they need to make this easy to master, fast and fun?!”

As ADDIE thought about Kirk One’s questions, she realized the needs analysis should not just be a list of 500 bits of knowledge and skill elements needed to improve job performance and support organizational goals. Participants would spend hours trying to remember enough to complete exercises and tests, then not be able to remember details when they went back to work.

So ADDIE and her instructional design team decided to organize the needs she had identified into job aids that would guide the course design — a flowchart, a diagram that showed a conceptual framework, a checklist or skill guide, a worksheet with questions that guide a procedure, or whatever fits the content. They also looked for engaging, interesting ways to present the information.
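To recap the shift the Kirks brought to the Analysis phase, the sketch below (an illustrative structure, not part of the original play) collects the question each evaluator contributed, listed from Kirk Four down to Kirk One to echo the “backward chaining” from business results to content.

    # Paraphrased from the dialogue above; the data structure itself is illustrative.
    kirkpatrick_questions = {
        4: "Results: what business outcomes (revenue, costs, market share) justify the investment?",
        3: "Performance: what on-the-job behavior is needed to reach those business goals?",
        2: "Learning: what knowledge, skills, and attitudes must participants master, aligned to the objectives?",
        1: "Reaction: do participants find the program valuable, useful, and engaging?",
    }

    for level, question in sorted(kirkpatrick_questions.items(), reverse=True):
        print(f"Kirk {level} asks: {question}")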

 

